Instructional design (ID) commonly addresses five iterative phases: analysis, design, development, implementation, and evaluation. Instructional analysis and learner analysis are processes in the systematic approach to designing a learning event or product. They occur simultaneously in the analysis phase, along with a context analysis, because all three are intrinsically tied to the performance objectives, which are the outcome of the analysis phase. Other important activities in the analysis phase are the needs assessment (NA) and the performance analysis, both of which precede the instructional analysis and learner analysis.
The NA identifies the gap between the learners' optimal status and their actual status. The performance analysis is conducted to determine whether the problem can be addressed with instruction. If so, a goal statement is produced based on the findings of the performance analysis. The instructional analysis breaks the goal statement down into supraordinate, subordinate, and entry-level skills by identifying what must be taught to reach the goal. The learner analysis identifies the learners' current knowledge, skills, and attitudes, as well as other pertinent information, such as preferences or cultural constraints, that may affect learning. Overall, the goal of ID is to design effective, efficient, and innovative learning experiences.
In the instructional analysis, the instructional designer determines what the learners will actually be doing to reach the goal and maps the instructional pathway. During the goal analysis, the instructional designer graphically displays the specific steps needed. The diagram can include alternative actions, breaks in the process, and the type of learning. Types of learning outcomes include verbal information, intellectual skills, cognitive strategies, psychomotor skills, and attitudes. Each type of learning outcome calls for a different type of analysis. For example, verbal information can be clustered according to a particular schema, whereas intellectual and psychomotor skills call for a hierarchical approach because a subordinate skill must be achieved before a supraordinate one.
The outcomes of the goal analysis become the supraordinate skills. During the subordinate skill analysis of a complex skill, the supraordinate steps are broken down into main rules, concepts, and discriminations; the corresponding verbal information and attitudinal skills are attached horizontally. Once the substeps have been fleshed out, the instructional designer determines the entry-level skills: what the learner should already know how to do in order to achieve the new learning goal. For example, the instruction will generally require a certain reading level, language ability, and topic-specific knowledge.
As mentioned above, the learner analysis is done simultaneously with the instructional analysis because the two inform one another. The learner analysis serves to identify the wide array of variables that affect the learner: entry skills, educational level, prior topic knowledge, attitudes toward the content, the delivery system, and the organization, learning preferences, group characteristics, and motivation. The instructional designer collects information on the learners by conducting structured interviews with those familiar with the current performance. Additionally, the instructional designer conducts site visits to observe the learners in the performance and instructional contexts, and can collect further data via pretests, self-reports, or one-on-one informal discussions.
The output of the learner analysis is a report on all of the previously mentioned variables potentially affecting the learner. The context analysis is interrelated with the learner analysis, as it collects information on another category of variables affecting the learner: administrative support, the physical site, the social aspects of the site, and the relevance of the skill (goal) to the workplace or school.
All three analyses (instructional, learner, and context) are critical to the appropriate design and development of instruction. If any of the skills (supraordinate, subordinate, or entry-level) are overlooked or learning-context variables go unaddressed, the effectiveness of the instruction will suffer. For example, if the target audience is English language learners, the designer will need to collect data on their language skills, reading levels, and cultural norms; otherwise, the instruction will not meet the learners' needs and will therefore be a waste of time, money, and effort.
Beresford and Stolovitch (2012) defined human performance improvement (HPI) in terms of three perspectives: vision, concept, and end. The vision is for individuals to succeed in areas that are valued by their organization's stakeholders. The concept uses the vision to accomplish the organization's goals through successful interactions not only with the organization's stakeholders, but also with customers, regulatory agencies, and society. The end refers to terminal behaviors, products, and other outcomes that provide a return on investment (ROI). I will use Beresford and Stolovitch's perspectives on HPI in my toolbox to address the needs of an organization.
Gilbert (2007) provided HPI with a formula for worthy performance (Pw): Pw = Av/Bc, where Av refers to valued accomplishments and Bc refers to costly behaviors. The term "costly" can have positive and negative connotations; it refers to the costs involved in each performance (e.g., salaries, resources, and training). Gilbert's formula is a powerful tool for determining which performances are worthy.
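To make the ratio concrete, the following sketch compares two hypothetical performances using Gilbert's formula. The dollar figures and the helper function are my own illustrative assumptions, not values from Gilbert:

```python
def worthy_performance(valued_accomplishments: float, costly_behaviors: float) -> float:
    """Return Gilbert's worth ratio Pw = Av / Bc."""
    if costly_behaviors <= 0:
        raise ValueError("Cost of behavior (Bc) must be positive.")
    return valued_accomplishments / costly_behaviors

# Hypothetical example: performer A produces more value in absolute terms,
# but performer B produces value at a far lower cost, so B's performance
# is worthier under Gilbert's formula.
pw_a = worthy_performance(100_000, 80_000)  # 1.25
pw_b = worthy_performance(60_000, 30_000)   # 2.0
```

The point of the ratio is that raw output alone does not define worth; an accomplishment is only as worthy as its value relative to the cost of the behavior that produced it.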
The first step in improving a particular performance is to conduct a needs assessment (NA) to better understand the current performance in relation to desired outcomes, such as industry standards (benchmarking), coupled with the vision of the organization. An NA helps organizations identify the gap (need) between their actual and optimal performance levels. I would rely on Altschuld and Kumar's (2010) three-phase NA model (preassessment, NA, postassessment) as a guide for interacting with an NA team and an NA committee of stakeholders. In the preassessment, my team would gather data on the topic from key informants, the literature, and extant resources.
The NA team would follow up on emergent themes describing the perceived need and gather specific information, via interviews, questionnaires, and focus groups, on what the respondents value as possible solutions. The postassessment identifies the problem succinctly: Is the gap due to a lack of incentives, knowledge, skills, or institutional support? Training is not always the answer; interactions and behaviors can be improved via instructional and/or noninstructional interventions. For instance, HPI can be as simple as buying a better writing instrument (e.g., a Dr. Grip pen) to expedite note-taking on the job; this would be a noninstructional intervention.
I would use the various job aids provided in Altschuld's series of books to identify and address the problem in light of the organization's concepts. For example, I favor Ishikawa's fishbone diagram, with the bones representing the various issues within labeled categories of performance. Moreover, I would collect solutions from stakeholders and conduct a Sork feasibility study to determine the appropriate ones. Given the complexity of an NA, the Altschuld series would serve as another item in my HPI toolbox.
I created a manual of problem analysis (PA) methods for novice instructional designers that can be used on a daily basis when a full NA is impossible. I studied Jonassen's typology of problems to determine the problem types and the actions they require. If the problem is well structured, a solution can usually be found quickly; if it is ill structured, I should conduct a PA to get to the root of the problem. For the PA, I would use Harless's (1973) list of 13 questions, the first of which is crucial: Is there a problem? Once a problem is identified, I would use Toyoda's Why Tree for root cause analysis; this technique keeps asking why for each response given until the root(s) is identified. Then I would use Sanders and Thiagarajan's (2002) six-box model to see which areas of an organization are affected by the performance problems: knowledge, information, motives, process, resources, and wellness. I also learned from Jonassen's (2004) work that we should collect our problems in a fault database. This is something I have been doing to improve our turnaround in resolving learning management system (LMS) issues at my workplace and to increase our ROI for cost, labor, and learning outcomes.
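The fault-database idea above can be sketched with Python's built-in sqlite3 module. The schema, the sample LMS issue, and the menu path in its resolution are illustrative assumptions on my part; Jonassen (2004) does not prescribe a particular structure:

```python
import sqlite3

# In-memory database for illustration; use a file path for persistence.
conn = sqlite3.connect(":memory:")
conn.execute(
    """CREATE TABLE faults (
           id INTEGER PRIMARY KEY,
           symptom TEXT NOT NULL,  -- what the user reported
           root_cause TEXT,        -- result of the why analysis
           resolution TEXT         -- reusable fix for next time
       )"""
)

# Record a resolved issue so the fix is reusable (hypothetical example).
conn.execute(
    "INSERT INTO faults (symptom, root_cause, resolution) VALUES (?, ?, ?)",
    ("Quiz grades not visible to students",
     "Grade column hidden in gradebook settings",
     "Unhide the column under Grades > Setup"),
)

# Search past fixes before troubleshooting a new report from scratch.
rows = conn.execute(
    "SELECT resolution FROM faults WHERE symptom LIKE ?", ("%grades%",)
).fetchall()
```

Even a minimal record of symptom, root cause, and resolution turns each resolved ticket into a reusable asset, which is where the turnaround and ROI gains come from.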
For interventions at my workplace, I use job aids, embedded performance systems, and the aforementioned fault database. I purchased Rossett and Gautier-Downes's (1991) HPI resource book, A Handbook of Job Aids, which provides matrices (Frames Type II) to help the user discern which job aid should be used with which type of task. I also create job aids for the workplace to facilitate teaching and learning. For example, I create how-to guides for instructional technology software (e.g., Camtasia Studio) for instructors who are unable to attend training and must learn on their own. Job aids are useful HPI tools for infrequent tasks, like the occasional instructional video one might need to create for class. I have also been focusing on providing performance support mechanisms for the right-time needs of students and instructors. I noticed an overreliance on the instructional designer to answer all LMS-related questions, so to provide an embedded support system, I added a webpage on our LMS to answer frequently asked questions. This has greatly reduced my queue of email requests, all the while improving the performance of those affected. In closing, for my general HPI framework, I rely on Beresford and Stolovitch's HPI perspectives of vision, concept, and end. To put my framework into action, I rely on the works of Gilbert, Altschuld, Jonassen, Harless, Ishikawa, Sanders, Thiagarajan, and Toyoda.
Altschuld, J. W., & Kumar, D. D. (2010). Needs assessment. Thousand Oaks, CA: SAGE Publications.
Beresford, B., & Stolovitch, H. D. (2012). The development and evolution of human performance improvement. In R. A. Reiser & J. V. Dempsey (Eds.), Trends and issues in instructional design and technology (3rd ed., pp. 135-146). Boston, MA: Allyn & Bacon Pearson Education.
Harless, J. H. (1973). An analysis of front-end analysis. Improving Human Performance, 2(4), 229-244.
Jonassen, D. H. (2004). Learning to solve problems: An instructional design guide. San Francisco, CA: Pfeiffer.
Rossett, A., & Gautier-Downes, J. (1991). A handbook of job aids. San Francisco, CA: Pfeiffer & Company.