LEARNING TASK
Instructional design models emerged in the 1960s and have since proliferated across instructional technology journals and educational literature (Branch & Dousay, 2015). These models function as conceptual frameworks and organizational guides, enabling instructional designers (IDs) to analyze, construct, and assess structured learning systems. They offer a diverse set of functions that support clarity in planning, coordination, and evaluation, whether applied to expansive educational settings or specific training tasks.
Why use models? They allow us to conceptualize complex realities through simplified representations. A model reduces intricate processes, structures, or abstract concepts into more manageable forms. Since reality often presents overwhelming variability and situation-specific elements, models focus on what remains consistent across settings. Seel (1997) classifies instructional design models into three categories: theoretical or conceptual, organizational, and planning-prognosis. The models discussed here fall under organizational models, designed to function as broad, adaptable guides for instructional planning (Branch & Dousay, 2015).
INSTRUCTIONAL DESIGN MODELS BY PHASE
ID Models | Analysis | Design | Development | Implementation | Evaluation |
---|---|---|---|---|---|
Gerlach & Ely (1980) | Assess entry behavior; Identify content & objectives | Determine strategy, organize groups, allocate time/space, select resources | Prepare instructional materials | Implement strategies and learning groups | Evaluate performance; analyze feedback; revise as needed |
ASSURE | Analyze learners: entry traits, learning styles, competencies | State objectives in ABCD format (Audience, Behavior, Condition, Degree); plan strategy, media selection, and activities | Select/adapt/create materials | Utilize materials; learner engagement and practice | Evaluate outcomes and revise instruction |
Morrison, Ross, Kalman, Kemp | Identify learner characteristics, problems, and task requirements | Set objectives; sequence content; choose strategies | Develop materials; organize project tasks | Deliver structured instruction; adapt as needed | Conduct evaluations; revise with feedback |
PIE Model | Gather data on learning and media use | Create lesson plan with tech tools | Build instructional content/tools | Execute media strategy and instruction | Evaluate learner achievement and revise |
4C/ID | Analyze learning tasks and performance goals | Design whole tasks, procedures, and practices | Develop materials and simulations | Progressively deliver tasks with support | Not explicitly included in the model |
Wiggins & McTighe (UbD) | Identify desired results (Stage 1) | Plan assessments (Stage 2) | Sequence lessons (Stage 3) | Flexible unit delivery | Evaluate and align instruction |
Agile Model | Task and instructional analysis | Strategy and system design | Develop materials and supports | Implement, train, and disseminate | Ongoing and final evaluation |
IPISD | Analyze roles, tasks, and current offerings | Structure outcomes and course sequence | Develop content and instruction packages | Train instructors; deliver materials | Conduct internal/external evaluations |
Gentry | Prioritize goals with needs analysis | Define objectives, strategies, and media | Develop prototypes and finalize content | Install and manage delivery systems | Gather data and improve instruction |
Dorsey, Goodrum, and Schwen | Establish user needs and vision collaboratively through early user involvement and feedback loops. | Create conceptual prototypes with user input; iterate based on user-centered feedback to refine design. | Build and evolve low- to high-fidelity prototypes in rapid cycles; adjust design in response to real-time testing. | Implement functional prototypes gradually as they stabilize; integrate user insights continuously. | Conduct ongoing user tests (micro, midi, macro levels) across cycles; evaluate based on usability and functionality. |
Diamond | Assess needs, define goals, examine feasibility, align with institutional priorities, and select projects. | Outline goals, choose instructional formats, plan content development, and prepare for implementation. | Produce materials, select and adapt existing resources, and prepare for field-testing. | Coordinate logistics, implement the course, and ensure faculty involvement. | Evaluate during design and implementation phases; revise materials based on feedback and measured effectiveness. |
Smith and Ragan | Analyze learning context, learner characteristics, and learning tasks. | Generate instructional strategies including organizational, delivery, and management strategies. | Produce instruction based on strategies and specifications. | Translate specifications into trainer guides and instructional materials. | Conduct formative and summative evaluations; revise instruction based on results. |
Dick, Carey, and Carey | Assess needs, identify instructional goals, conduct instructional analysis, and analyze learners and contexts. | Write performance objectives, develop assessment instruments, and create instructional strategies. | Develop and select instructional materials aligned with objectives and strategies. | Deliver instruction after formative evaluation cycles and material refinement. | Conduct iterative formative evaluations (three cycles), and summative evaluation to assess goal achievement. |
Merrill (Pebble in the Pond) | Identify instructional goals by determining the skill to be mastered; analyze tasks involved and categorize generalizable skills. | Structure task-centered strategies with demonstration, application, and integration components; plan expanding instructional activities. | Use the ‘Pebble in the Pond’ strategy: design expanding content layers around authentic tasks; prepare learning activities for demonstration and application. | Place learners in real-world, task-centered learning contexts to promote direct application of newly acquired knowledge. | Gather learner feedback continuously, assess skill integration, and revise based on reflection, peer critiques, and performance in authentic contexts. |
Table 1: Instructional design models by phase
Classroom-oriented instructional design models often align with how educators operate in real-world teaching environments. Teachers—whether in elementary schools, high school classrooms, vocational settings, or colleges—typically work with fixed schedules, set class durations, and clear instructional responsibilities. These models also find occasional use in corporate training programs with similar structures. Given the limits on time and resources, most teachers lean toward modifying existing materials rather than building new ones from scratch. Since many subjects are taught just once a year, there’s usually little demand for ongoing evaluation or continuous revision. Instead, these models offer broad steps that help instructors organize and deliver content. While they aren’t widely recognized or systematically applied, some models are practical and approachable for teachers. Developers supporting classroom instruction should therefore apply these models with care, especially since some educators might view instructional design as rigid or overly procedural. Still, models like Gerlach and Ely (1980), Heinich, Molenda, Russell, and Smaldino (1999), Newby, Stepich, Lehman, and Russell (2000), and Morrison, Ross, and Kemp (2001) have proven workable within typical classroom constraints.
Product-oriented instructional design models focus on building standalone materials meant for learners who work independently, often without direct instructor support. These models assume the need for original content, repeated testing, and user-friendly design, especially in contexts where learners rely on facilitators or systems, not teachers. Rapid prototyping sometimes invites early user feedback, but in many cases, interaction is minimal. These models suit corporate training or tech roll-outs, where clear product requirements already exist. As digital and distance learning expand, so does the demand for polished, scalable instructional tools. Examples include models from Bergman and Moore (1990), De Hoog et al. (1994), Bates (1995), Nieveen (1997), and Seels and Glasgow (1998).
System-oriented instructional design models support the development of large-scale instruction, typically full courses or programs, and rely on specialized teams with access to advanced tools. These models prioritize original development, extensive front-end analysis, and structured revisions. They begin by clearly defining the problem, often using formats informed by Gilbert (1978) or Mager and Pipe (1984), who emphasized checking for non-instructional barriers first. Unlike product models, system models align closely with institutional goals and emphasize scalability. Examples include Branson (1975), Gentry (1994), Dorsey, Goodrum, and Schwen (1997), Diamond (2008), Smith and Ragan (1999), and Dick, Carey, and Carey (2009).
Classroom-, product-, and system-oriented instructional design models all offer structured pathways for supporting learning, yet they differ in scale, complexity, and intended use. Classroom models suit teachers operating within fixed schedules and limited resources. These models offer pliable steps and prioritize practicality over exhaustive development. Product models focus on independent learning experiences, emphasizing original material, user-friendliness, and iterative refinement, especially in corporate or tech-driven settings. System models, meanwhile, tackle broader instructional efforts like full curricula and require specialized teams, detailed analysis, and alignment with organizational goals. Despite these differences, all three categories rely on structured design thinking and thoughtful planning.
Instructional design models provide varied frameworks for planning and delivering instruction, each reflecting distinct assumptions about context, scale, and learner needs. Some emphasize linear structure and comprehensive evaluation, while others prioritize iteration, collaboration, or task-specific outcomes. These models serve different settings such as formal classrooms and higher education, corporate training, and digital learning environments. Their strengths and constraints depend on how instructional designers apply them, considering available resources and the demands of instruction.
The following table outlines a comparison of selected instructional design models, summarizing where they work best, what they offer, and what limitations to anticipate.
INSTRUCTIONAL DESIGN MODELS COMPARISON
ID Model | Strengths | Weaknesses | When to Best Apply |
---|---|---|---|
ADDIE | Simple, flexible, widely recognized; clear phase structure | Linear model; may not reflect real-time iteration | Ideal for structured training development and beginner instructional designers |
Dick, Carey, and Carey | Systematic and detailed; includes strong evaluation focus | Time-consuming; can be rigid for smaller or agile projects | Best for complex instructional development in business, military, and education sectors |
Smith and Ragan | Emphasizes instructional strategies; rooted in cognitive theory | May appear highly linear; implementation guidance is brief | Strong fit for projects needing strategy development and cognitive task alignment |
Merrill (Pebble in the Pond) | Task-centered and based on first principles of instruction; highly learner-focused | Conceptual; less operational detail; needs integration with other models | Best for real-world, authentic tasks requiring skill demonstration and integration |
IPISD | Highly detailed; designed for performance-based training; clear control mechanisms | Rigid and complex; military-specific origin limits flexibility | Best for military training or large-scale projects requiring strict control and documentation |
Gentry | Balanced development and support components; great for large-scale systems | May seem mechanistic; relies on detailed implementation tools | Useful in higher education or managing broad instructional systems |
Agile Model | Empirical, iterative, and user-participatory; fast prototyping | Lacks detailed guidance on design stages; assumes team collaboration | Effective in fast-paced, collaborative environments like software-based or corporate eLearning |
Seels and Glasgow | Integrates project management and diffusion; iterative feedback loop | Requires strong project management knowledge | Well-suited for long-term projects requiring scalability and stakeholder buy-in |
Morrison, Ross, Kalman, Kemp | Non-linear and flexible; comprehensive with support resource planning | Can overwhelm novice designers due to its breadth | Best for instructional designers managing both design and delivery in dynamic educational settings |
ASSURE | Emphasizes media use and learner characteristics | Less guidance for complex course development | Best for lesson-level planning and media-rich environments |
PIE Model | Focuses on practical instructional delivery with technology integration | Limited detail in design and development phases | Ideal for classroom settings using tech and structured media planning |
Dorsey, Goodrum, Schwen | Strong user involvement; rapid cycles improve usability | Conceptual and lacks detailed implementation process | Great for rapid prototyping in digital product or course development |
Diamond | Curriculum-level planning with institutional alignment | Focused on higher ed; less detail for day-to-day teaching | Best for program-wide instructional design in universities or colleges |
PDPIE Framework | Clear instructional design flow tailored to online course development | Emphasizes ID role only; limited collaboration guidance | Best for instructional designers building asynchronous or blended online courses |
RID (Rapid Instructional Design) | Accelerated, learner-centered; emphasizes reflection, practice, and job application | Lacks depth in analysis, development, and evaluation; not standalone | Ideal as a plug-in to ISD processes for fast-paced training needing strong engagement and practical focus |
HPT (Human Performance Technology) | Grounded in systems thinking; focuses on actual vs. desired performance; flexible interventions | Not exclusively instructional; may be complex to implement across organizations | Best for solving performance issues with a mix of instructional and non-instructional interventions |
Table 2: Comparison of ID models
Assessment functions as an indispensable mechanism for diagnosing readiness, guiding instruction, certifying achievement, and encouraging learner autonomy. Each type responds to a specific instructional need; some prioritize pre-instruction insights, while others focus on feedback loops, summative judgments, or student self-monitoring. The design and application of each depend on context, available resources, and instructional goals. Though the tools and techniques may differ, the intent remains consistent: to support purposeful, responsive learning.
The table below outlines a comparative view of assessment types, including their distinct features, intended purpose, advantages, limitations, and appropriate moments for use.
COMPARISON OF ASSESSMENT TYPES
Type of Assessment | Key Features | Purpose | Strengths | Weaknesses | When to Best Apply | Examples |
---|---|---|---|---|---|---|
Diagnostic Assessment | Pre-assessment to gauge prior knowledge, skills, misconceptions | Identify strengths and weaknesses before instruction | Helps tailor instruction; reveals misconceptions | Not used for grading; time-consuming | Beginning of unit or course | Pre-tests, self-assessments, interviews |
Formative Assessment | Ongoing feedback during learning; not graded | Support learning and guide instructional adjustments | Encourages active learning; low-stakes feedback | Difficult to track across time; may lack rigor | Throughout instruction; during tasks | Journals, quizzes, peer feedback |
Summative Assessment | Culminating assessment; evaluates what students learned | Measure achievement at the end of instruction | Clear accountability; easy to grade and communicate | Stresses memorization; limited feedback | End of a unit or semester | Final exams, projects, performance tasks |
Authentic Assessment | Real-world tasks; evaluated with rubrics; active learning | Demonstrate application of skills in real-world scenarios | Engages students; aligns with real-world tasks | Hard to standardize; grading may be subjective | Project-based or task-oriented instruction | Case studies, design prototypes, simulations |
Assessment of Learning | Used to rank and grade; test-based; typically summative | Certify achievement and communicate progress | Widely recognized; structured data collection | Often emphasizes competition; neglects improvement | Final assessments and report cards | Standardized tests, final grades, transcripts |
Assessment for Learning | Teacher modifies teaching based on feedback; focus on progress | Guide learning decisions and next steps | Personalized feedback; improves engagement | Demands teacher adaptability; complex to implement | During instructional units to guide teaching | Checklists, feedback forms, annotated drafts |
Assessment as Learning | Student-driven; self-assessment; reflection and self-monitoring | Foster metacognition and self-regulated learning | Empowers student ownership of learning | Requires maturity; not all students self-regulate | During learning to build self-assessment skills | Reflection logs, learning goals, peer evaluations |
Digital Assessment | Technology-enabled; multiple attempts; real-time feedback (see the sketch after this table) | Enhance learning, access, and efficiency using tech | Immediate feedback; flexibility; customizable pathways | Tech issues, equity gaps, limited tactile/human connection | Online or blended courses; scalable settings | Online quizzes, LMS rubrics, video responses |
Renewable Assessment | Creates lasting value; used beyond classroom context | Encourage critical thinking and relevance beyond school | Authentic, valuable artifacts; boosts motivation | Time-intensive; demands strong design and mentorship | Capstone projects, service learning, or internships | Client presentations, public wikis, outreach materials |
Table 3: Comparison of assessment types
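The Digital Assessment row above highlights multiple attempts and real-time feedback. The snippet below is a minimal sketch of those mechanics, assuming a single auto-graded item; the class and field names are my own illustration, not features of any particular LMS.

```python
# A minimal sketch of the "multiple attempts, real-time feedback" mechanics
# described in the Digital Assessment row. Names are illustrative assumptions,
# not drawn from any specific learning management system.

from dataclasses import dataclass


@dataclass
class QuizItem:
    prompt: str
    correct: str
    feedback_on_error: str   # shown immediately after a wrong attempt
    max_attempts: int = 3
    attempts_used: int = 0

    def submit(self, answer: str) -> str:
        """Grade one attempt and return immediate feedback."""
        if self.attempts_used >= self.max_attempts:
            return "No attempts remaining."
        self.attempts_used += 1
        if answer.strip().lower() == self.correct.lower():
            return "Correct."
        remaining = self.max_attempts - self.attempts_used
        return f"{self.feedback_on_error} ({remaining} attempt(s) left.)"


if __name__ == "__main__":
    item = QuizItem(
        prompt="Which assessment type mainly guides teaching during instruction?",
        correct="formative",
        feedback_on_error="Revisit the difference between formative and summative assessment.",
    )
    print(item.submit("summative"))   # immediate corrective feedback
    print(item.submit("formative"))   # confirmation on the second attempt
```

The point of the sketch is the loop itself: the learner gets corrective feedback the moment an attempt is graded, which is the feature that distinguishes digital assessment from the delayed feedback typical of paper-based summative tools.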
REFLECTION
It now makes sense why instructional design models must work alongside learning and instructional theories. These theories serve as the rationale behind every decision we make when designing instruction. Without them, an instructional design model stands on shaky ground. The same logic applies when choosing how to assess learners.
In the private school where I taught, norm-referenced grading still dictated academic recognition. Yet when gauging whether students met specific learning goals, we relied on standards-based checks aligned with the Most Essential Learning Competencies (MELCs). Years ago, academic rank meant everything. I remember how we waited for results, wondering who would clinch the top spot. If three students landed a 90, teachers would recheck scores, sometimes awarding multiple first honors if a tie persisted. The system centered on score accumulation across all subjects. But scoring high did not always reflect a student’s actual ability.
Today, the K–12 system prioritizes criterion-referenced evaluation. Recognition now rewards mastery, not competition. If Students 1, 2, and 3 consistently meet the same standards, they all qualify for “With Highest Honors.” This shift promotes fairness. Still, concerns persist. PISA results show Filipino learners trailing in reading comprehension, math, and science, prompting people to question whether standards-based assessment weakens competence. But is the problem really with the students?
Back in college, before deployment for pre-service teaching, my instructor addressed an issue that lingered long before K–12 reforms—promoting learners unprepared for the next level. She admonished us: if a child ends up in your class without the necessary foundation, take ownership. Don’t pin it on the previous teacher, the administrator, or the parent. That student becomes your responsibility. “It takes a village to raise a child,” she reminded us—a proverb as relevant in classrooms as in life. Addressing this issue requires collective accountability.
As someone preparing for a role in instructional design, I intend to focus on diagnostic strategies that prevent such gaps from snowballing. I won’t cling to derivative routines that prioritize ranking over relevance. I’ll work toward systems that promote mastery, use data judiciously, and help teachers design interventions that empower rather than penalize.
One assessment type that resonates with me is digital. I’ve always valued how technology simplifies feedback loops—digital rubrics, quick submissions, and structured comment systems all support fast, visible communication between teacher and learner. But convenience isn’t absolute. Academic dishonesty, spotty connectivity, and difficulties evaluating hands-on performance remain persistent concerns, especially in contexts where access is far from equitable. That disconnect prompted my research proposal for one of our courses.
Initially, I explored blockchain as a system to secure transparency in digital assessments. Its tamper-resistant structure offered a promising safeguard for fair grading and verifiable submissions. Yet, as I examined real conditions in many Philippine schools, where devices are shared, connections are unreliable, and digital fluency remains uneven, I saw the impracticality of pursuing assessment solutions that ignore infrastructural gaps.
Instead, I shifted toward instructional design. The proposal, Integrating Blockchain as a Learning Tool for Simplifying Decentralized Technologies in Philippine K–12 Classrooms, investigates how blockchain might be introduced not as a backend grading system but as a topic itself—unpacked through accessible teaching strategies. It focuses on identifying core blockchain concepts appropriate for junior high learners, testing delivery strategies that align with the national digital literacy framework, and recognizing institutional constraints that influence classroom-level decisions.
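To ground what "core blockchain concepts" might mean at the classroom level, here is a minimal sketch of the hash-linking and tamper-evidence ideas; it is an illustration I am assuming for this reflection, not material taken from the proposal itself.

```python
# A minimal sketch of hash linking, the core blockchain idea a junior high
# lesson might unpack. Illustrative only; not content from the proposal.

import hashlib
import json


def block_hash(block: dict) -> str:
    """Hash a block's contents, including the previous block's hash."""
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()


def add_block(chain: list, data: str) -> None:
    """Append a block that records data and points to the previous block's hash."""
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"index": len(chain), "data": data, "prev_hash": prev})


def is_valid(chain: list) -> bool:
    """Editing any earlier block breaks every later link: the tamper-evidence idea."""
    return all(
        chain[i]["prev_hash"] == block_hash(chain[i - 1])
        for i in range(1, len(chain))
    )


if __name__ == "__main__":
    ledger: list = []
    add_block(ledger, "Student A submitted Project 1")
    add_block(ledger, "Student B submitted Project 1")
    print(is_valid(ledger))          # True
    ledger[0]["data"] = "tampered"   # altering an earlier record...
    print(is_valid(ledger))          # ...False: the chain no longer links
```

Something this small runs on a shared classroom laptop without any network connection, which is exactly the kind of infrastructural constraint the proposal has to respect.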
Even so, I hope the shift in direction remains coherent with the point I’m trying to make.
As a future instructional designer, I aim to center clarity and accessibility when introducing complex or emerging technologies. While instructional innovations such as blockchain offer possibilities for reinforcing transparency, their integration must align with the learner’s cognitive readiness, institutional context, and available infrastructure. Instructional models will serve as the primary scaffolds for planning, whether that entails sequencing content through Reigeluth’s Elaboration Theory or applying Keller’s ARCS Model to sustain engagement. Assessment types will support these frameworks: whether criterion-referenced tools or digital evaluations are used, their value depends on whether they measure the intended outcomes without imposing technical barriers. The goal remains the same: to construct instructional environments where both the delivery and the assessment are equitable and coherent.
References
21st Century Education Systems, Inc. (n.d.). The ASSURE model. https://21stcenturyeducationinc.weebly.com/the-assure-model.html
Branch, R. M., & Dousay, T. A. (2015). Survey of instructional design models (5th ed.). Association for Educational Communications and Technology.
Center for Educational Technology & Instruction. (2015). Instructional design principles lecture (SAM) [Video]. YouTube. https://www.youtube.com/watch?v=1h2iREzXAxU
Chappell, M. (2018, September 26). Instructional design using the ADDIE model. eLearning Industry. https://elearningindustry.com/addie-model-instructional-design-using
Clark, D. R. (2014). Rapid instructional design (RID). http://www.nwlink.com/~donclark/hrd/learning/id/RID.html
D’Angelo, T., Bunch, J. C., & Thoron, A. (2018). Instructional design using the Dick and Carey systems approach. https://edis.ifas.ufl.edu/publication/WC294
Dessinger, J. C., Moseley, J. L., & Van Tiem, D. M. (2012). Performance improvement/HPT model: Guiding the process. Performance Improvement, 51(3). https://deepblue.lib.umich.edu/bitstream/handle/2027.42/90576/20251_ftp.pdf
Earl, L. (2003). Assessment as learning: Using classroom assessment to maximise student learning. Corwin Press. https://web.uvic.ca/~thopper/iweb09/GillPaul/Site/Assessment_files/Assessment.pdf
Graham, C. R., Slaugh, B., McMurry, A. I., Sorensen, S. D., & Ventura, B. (2024). Assessment plan (Chap. 4). In Blended teaching in higher education: A guide for instructors and course designers. EdTech Books. https://edtechbooks.org/he_blended/chapter_4_assessment_plan
Gregory, R. (2016). The MRK model of instructional design [Video]. YouTube. https://www.youtube.com/watch?v=_LwUQWmWlF0&ab_channel=RhondaGregory
Masters, G. N. (2014). Assessment: Getting to the essence. Centre for Strategic Education. https://research.acer.edu.au/ar_misc/18
McGriff, S. J. (2000). Instructional system design (ISD): Using the ADDIE model. https://www.lib.purdue.edu/sites/default/files/directory/butler38/ADDIE.pdf
Merrill, M. D. (2002). A pebble-in-the-pond model for instructional design. Performance Improvement, 41(7), 39–44. http://www.clarktraining.com/content/articles/PebbleInThePond.pdf
Merrill, M. D. (2002). First principles of instruction. ETR&D, 50(3), 43–59. https://ocw.metu.edu.tr/pluginfile.php/9336/mod_resource/content/1/firstprinciplesbymerrill.pdf
Merrill, M. D. (2008, August 12). Merrill on instructional design [Video]. YouTube. https://www.youtube.com/watch?v=i_TKaO2-jXA
Northern Illinois University Center for Innovative Teaching and Learning. (2012). Formative and summative assessment. In Instructional guide for university faculty and teaching assistants. https://www.niu.edu/citl/resources/guides/instructional-guide
Pappas, C. (2014, December 6). Instructional design models and theories: Elaboration theory. https://elearningindustry.com/instructional-design-models-elaboration-theory
Sharif, A. (2015). PDPIE framework: Online course development quality cycle. In Expanding learning scenarios: Opening out the educational landscape, Proceedings of the European Distance and E-Learning Network Annual Conference 2015. https://www.researchgate.net/publication/276290503_PDPIE_Framework_Online_Course_Development_Quality_Cycle
Subhash, P. D., & Ram, S. (n.d.). Types of assessment and evaluation. https://egyankosh.ac.in/bitstream/123456789/80503/1/Unit-13.pdf
Treser, M. (2015, August–September). Getting to know ADDIE. eLearning Industry.
University of Connecticut. (n.d.). Alternative authentic assessment methods. https://cetl.uconn.edu/resources/teaching-and-learning-assessment/teaching-and-learning-assessment-overview/assessment-design/alternative-authentic-assessment-methods
van Merriënboer, J. J. G. (n.d.). Van Merriënboer’s 4C/ID model. https://drive.google.com/file/d/1PzcSSeOAZnYRjeBvWNp8lnwtUlsoh2R9/view
Watson, E. (n.d.). Defining assessment. https://www.ualberta.ca/centre-for-teaching-and-learning/media-library/resources/assessment/defining-assessment.pdf
Wiggins, G., & McTighe, J. (1998). What is backward design? (Chap. 1). In G. Wiggins & J. McTighe, Understanding by design. Association for Supervision and Curriculum Development. https://educationaltechnology.net/wp-content/uploads/2016/01/backward-design.pdf
Yates, N. (2017, May 27). Kemp’s model of instructional design [Video Playlist]. YouTube. https://www.youtube.com/playlist?list=PL-_oM9qDZ39FcQ1ylVkgIviauYjNHHsss