What are the best indicators that a faculty development initiative is successful? Faculty developers at teaching and learning centers across the nation show faculty how to evaluate student learning effectively, among many other topics, and instructors can use changes in student performance to indicate that their instruction was successful. Yet faculty developers do not “practice what they preach” when it comes to evaluating how well their own learners (faculty) improve their performance in response to faculty development initiatives. For many years, I have witnessed this disconnect between “preach” and “practice.” After reading Meyer and Murrell’s 2014 article, “A National Survey of Faculty Development Evaluation Outcome Measures and Procedures,” and following the ongoing emphasis on volume and variety as markers of “successful” faculty development in professional conversations in online forums, I felt compelled to tackle this issue head-on.
I will not bore you with a theoretical discussion or literature review on faculty development. Let’s jump right into the practical advice. We have a problem to solve: we need to know whether our faculty development initiatives actually result in improved faculty performance. Meyer and Murrell (2014) state that “the survey results found that over 90% of institutions used measures of the faculty person’s assessment of satisfaction and usefulness of the training itself, rather than student outcomes or changes in teaching methodology” (p. 1). In other words, our current methods are not measuring what we really need to know about our effectiveness.
What should we measure to know our true impact on faculty practices? Because many variables influence student outcomes, we may not be able to determine the success of faculty development from student outcomes alone; student outcomes are only an indirect measure. On the other hand, immediate, observable changes in faculty practices directly indicate successful faculty development. Logically, then, we need to measure the change in faculty practices. Unfortunately, I continually observe institutions focusing on the wrong measures. Institutions generate data on the number of workshops, workshop attendance, and satisfaction with the workshop experience. These are not the essential measures of success. If we are giving faculty an enjoyable experience but not impacting their performance, we are doing little more than providing very expensive mid-day entertainment breaks for employees.
How do we shift our faculty development evaluations to measure our true impact on faculty practices?
- Close your eyes and imagine an ideal world with no logistical, political, financial, cultural, or other contextual barriers.
- Ask yourself, “What would I see faculty do differently that would tell me they learned from the faculty development events?” Write those behaviors down.
- For each behavior you listed, answer the question, “How can I collect data that clearly demonstrates the desired behavior occurred?”
- Look at your list. How do these measures differ from what your institution currently measures?
In my experience, effective faculty development results in clearly observable changes in faculty behaviors. Let’s say I must design a faculty development experience to accomplish the following learning objective: “Learners will be able to develop specific, measurable, attainable, relevant, and time-bound course objectives in their courses.” To evaluate the learning experience, I ask participants to complete a satisfaction survey, and I review the revised syllabi from participants’ actual courses upon completion of the training. Suppose the survey shows that most participants enjoyed the training. The satisfaction survey helps me design experiences that encourage voluntary participation, but it does little to inform me about the learning outcome. From the syllabi, however, I can see whether course objectives are written better after the training; if they are, I know we made progress on the learning outcome. What if I had merely asked faculty, “Will this educational activity result in a change in your performance?” Do you think I could actually determine whether we met our learning objectives? Self-reporting bias is well documented in the literature. Direct observation-based measurement provides superior information about training success.
Any faculty development initiative should use direct observation-based measures as the primary assessment rather than relying solely on a satisfaction survey or faculty self-report. I propose that faculty training programs be flipped and personalized to maximize learning with limited personnel and financial resources. With proper planning, even large universities can manage this approach. I provided a brief example of a flipped and personalized faculty development approach in one of my recent articles (see Ozdemir & Stebbins, 2015).
In my “flipped and personalized faculty development” approach, faculty access the standard learning materials in an online, asynchronous, self-paced course and work with a trainer to accomplish the learning objectives in a real-life setting. Together, the trainer and the faculty member identify which materials the faculty member will submit to demonstrate progress. These materials might include course syllabi, course designs, teaching performance, manuscripts for publication, and so on. The trainer analyzes the submitted materials and provides personalized feedback for improvement. Success is measured by directly observable changes in faculty practices, not by participation metrics such as workshop frequency and attendance. Faculty satisfaction and attendance provide secondary data for improving the faculty development experience. Instead of counting the number of faculty development opportunities available, let’s move toward counting the number of faculty success stories.
If you like this article, or if you provide similar impact-focused faculty development initiatives, please share your comments below. I look forward to hearing from you.
Meyer, K. A., & Murrell, V. S. (2014). A national survey of faculty development evaluation outcome measures and procedures. Online Learning, 18(3). Retrieved from https://olj.onlinelearningconsortium.org/index.php/olj/article/view/450
Ozdemir, D., & Stebbins, C. (2015). Curriculum mapping for the utilization of an online learning analytics system in a competency-based Master of Health Care Administration program: A case study. Journal of Health Administration Education, 32(4), 543–562.