ADDIE is a popular framework among instructional designers and developers. It stands for Analysis, Design, Development, Implementation, and Evaluation. Many instructional design models and theories have derived from this framework, and many critiques of its applicability also exist. In this article, I will focus on the E of ADDIE: Evaluation. Based on my experience in the field and continuing conversations with other instructional designers, I continue to see that the E does not receive the attention it deserves from instructional designers. Perhaps instructional designers stay out of those conversations due to internal policies or their current job descriptions. I believe evaluation is the most critical stage for instructional designers.
In short, instructional designers follow a streamlined course development process. The process starts with one-on-one conversations with faculty, in which the instructional designer attempts to understand the context, timeline, and resources needed for the course. This can be considered the analysis stage, which may include additional substages.

After analysis, the instructional designer and faculty start to develop a course blueprint following backward design. The design process begins with identifying the end goals (course goals, objectives, program learning outcomes, and competencies). The instructional designer and faculty then design the assessment strategies to assess those end goals, along with the teaching and learning activities that support each student's learning and help them accomplish the goals. During the development stage, the faculty and the instructional designer continue collaborating to build the actual course according to the design blueprint. There may be instances in which they need to return to the design and make modifications.

After the faculty and the instructional designer develop the course, it goes through a course design review. Some institutions use well-known rubrics like QualityMatters; others may use a homegrown version. After the course passes the review, the instructional designer continues to assist the instructor during the implementation (teaching) stage, developing scaffolding activities for students who may fall behind. Most of the time, the instructional designer moves on to the next course and instructor after the implementation phase. If the instructional designer works with the same instructor again, they may go over what went well and what did not. However, I am not sure this conversation can genuinely be considered a rigorous evaluation stage.
If you are an instructional designer who is more thoroughly involved in the evaluation stage, please do share how you evaluate the courses you have designed and developed with faculty. I am very interested in learning from you. Below are my recommendations based on my experiences.
- Create and use a course evaluation form that triangulates data from instructor, peer, and student reviews along with course statistics and student performance data. QualityMatters and other course design rubrics can only provide a limited formative evaluation of the course. Without triangulated data, a formative evaluation alone cannot meaningfully reflect the quality of the course. Instructional designers should continue to monitor course data to evaluate the components of the course, take notes, and present them to the instructors when necessary. Just as the instructional designer collaborates in the previous stages of ADDIE, the collaboration should continue in the evaluation stage, both during the course implementation and immediately after the course ends. Access to student data (both student course evaluations and course data such as content access, discussion forum statistics, and student grades) may raise questions under existing FERPA regulations. However, I am not sure what constitutes a more legitimate educational interest than the instructional designer's interest in improving the course and the students' learning experience by evaluating the course based on student data.
- Take advantage of existing technologies. Along with the learning analytics embedded in modern learning management systems, use tools such as Microsoft Power BI, Microsoft Power Apps, and perhaps Google Workspace. Many electronic platforms are available within your institution to collect data about your courses and conduct effective course evaluations.
- Emphasize the evaluation stage to your faculty clients. Rather than approaching ADDIE as a linear framework, approach it as a cyclical framework. Your evaluation results should contribute to the next course design and development cycle. Make sure that you close the loop in the process.
- Communicate clearly to the faculty that this is an evaluation of the course (the product that you and the faculty developed together), not an evaluation of the faculty. Therefore, ensure that the evaluation stage includes equal contributions from both parties.
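To make the triangulation idea above concrete, here is a minimal sketch of how the three data sources (student reviews, peer reviews, and student performance) might be combined into a single course-evaluation summary. The function name, field names, 0–100 scales, passing threshold, and equal weighting are all illustrative assumptions, not a standard instrument; a real form would follow your institution's policies.

```python
# Hypothetical sketch: triangulating three data sources into one
# course-evaluation summary. All names, scales, and weights are
# illustrative assumptions.
from statistics import mean

def evaluate_course(student_ratings, peer_review_scores, final_grades,
                    passing_grade=70):
    """Combine student, peer, and performance data (all on 0-100 scales)."""
    student_avg = mean(student_ratings)
    peer_avg = mean(peer_review_scores)
    # Share of students at or above the passing threshold, as a percentage.
    pass_rate = 100 * sum(g >= passing_grade for g in final_grades) / len(final_grades)
    # Equal weighting is a placeholder; adjust per institutional policy.
    composite = mean([student_avg, peer_avg, pass_rate])
    return {
        "student_avg": round(student_avg, 1),
        "peer_avg": round(peer_avg, 1),
        "pass_rate": round(pass_rate, 1),
        "composite": round(composite, 1),
    }

summary = evaluate_course(
    student_ratings=[80, 90, 70],
    peer_review_scores=[85, 95],
    final_grades=[72, 65, 88, 91],
)
print(summary)
# → {'student_avg': 80.0, 'peer_avg': 90.0, 'pass_rate': 75.0, 'composite': 81.7}
```

The point of the sketch is the structure, not the numbers: each data source stays visible in the output, so the instructional designer and faculty can see where a low composite score is coming from before the next design cycle.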
Without a thorough evaluation, we, the instructional designers, cannot ensure that our practices are evidence-based or that we provide quality courses. Every course is different, which means every course deserves its own evaluation at the end of the ADDIE cycle. If you have already implemented cyclical course design and development processes with an emphasis on the evaluation stage, please share them in the comments section of this article.
I look forward to hearing from you!