5 Pitfalls of Online Teaching


I took my first series of online courses for professional development in 2009. The courses were highly interactive and well designed because they were taught by experts in the field of computer-assisted language learning. A shout-out to my professors in the Teaching English to Speakers of Other Languages (TESOL) certificate program, Principles and Practices of Online Teaching! (See my blog post on this topic.) Ever since then, I’ve compared every online course to those.

As a working instructional designer and current PhD student enrolled in online courses, I bring a well-rounded perspective to the topic of distance education. I’ve researched and written about how to develop an online community of inquiry. It has become my personal agenda to ensure that students taking online courses don’t get frustrated by the course design or a lack of teacher presence.

Here’s a list of what I consider the top five pitfalls that will surely decrease student learning outcomes and student satisfaction:

  1. Lack of pattern in weekly assignments causes confusion, especially in a hybrid (blended) course. As you plan threaded discussions, quizzes, and assignments, make sure they follow a consistent pattern; otherwise, indicate on your syllabus any gaps in the established pattern.
  2. Numerous clicks to find content lead to frustration. To increase findability, use clear navigation practices that reduce time lost on task and frustration levels (Simunich, Robins, & Kelly, 2012).
  3. Lack of synchronous sessions to connect with a human leads to reduced achievement. To increase student achievement, include synchronous sessions (Bernard et al., 2009); Arbaugh and Hornik (2006) suggested video conferencing, voice messaging, or other types of multimedia.
  4. Instructors not responding to students’ discussions in a timely manner. Several theories of human learning address delivering targeted instruction at the right time, such as Vygotsky’s (1978) zone of proximal development, Ebbinghaus’s serial position effect (primacy and recency), and the presence or absence of retrieval cues in Cormier’s information processing theory. Students need prompt feedback that targets their instructional needs (Arbaugh, 2001). See my blog post on instructor feedback for online courses.
  5. Lack of student-student interaction (Bernard et al., 2009). Make sure students can talk to one another and share their finished projects.

Do you agree with my top 5?

References

Arbaugh, J. B. (2001). How instructor immediacy behaviors affect student satisfaction and learning in web-based courses. Business Communication Quarterly, 64(4), 42-54.

Arbaugh, J. B., & Hornik, S. (2006). Do Chickering and Gamson’s seven principles also apply to online MBAs? The Journal of Educators Online, 3(2), 1-18.

Bernard, R. M., Abrami, P. C., Borokhovski, E., Wade, C. A., Tamim, R., Surkes, M. A., & Bethel, E. C. (2009). A meta-analysis of three types of interaction treatments in distance education. Review of Educational Research, 79, 1243-1288.

Simunich, B., Robins, D., & Kelly, V. (2012). Does findability matter? Findability, student motivation, and self-efficacy in online courses.  Quality Matters (QM) Research Grant, Kent State University.

Vygotsky, L. S. (1978). Mind in society: The development of higher psychological processes. Cambridge, MA: Harvard University Press.

Join me at MSERA 2016 in Mobile, Alabama!

[Photo of Sandra Annette Rogers: Say hello if you see me.]

Join me in Mobile, AL this November 2nd-4th for the Mid-South Educational Research Association’s (MSERA) 2016 annual meeting. Click this link to see the full conference schedule. The conference takes place at the Renaissance Mobile Riverview Plaza Hotel on Water Street downtown. For more information on MSERA, visit their website. The great thing about #MSERA is that they are friendly and welcome newcomers, and they remember your name the next time they see you!

I’ll be making two brief paper presentations and chairing these same sessions. Here’s my schedule:

  • 2:00 eLearning Session in Grand Bay Room I/II: November 3 (Thursday)
    Rubric to Evaluate Online Course Syllabi Plans for Engendering a Community of Inquiry
    Sandra A. Rogers & James Van Haneghan, University of South Alabama

  • 10:00 Instructional Design Session in Windjammer Room: November 4 (Friday)
    Magis Instructional Design Model for Ignatian-based Distance Education
    Sandra A. Rogers, Spring Hill College

Goodbye eCollege, Hello Schoology!

[Image: Venn diagram of the tools and features of eCollege compared to those of Schoology LMS]

Here’s a link to the PDF of this image. Pearson is closing its doors on eCollege and eCompanion, so we adopted a new learning management system (LMS). By comparison, Schoology has many more features for our learners.


Instructional and Learner Analysis in Instructional Design

[Image: ADDIE acronym (Analysis, Design, Development, Implementation, Evaluation)]

Instructional design (ID) is commonly segmented into five iterative phases: analysis, design, development, implementation, and evaluation. Instructional analysis and learner analysis are processes in the systematic approach to designing a learning event or product. They occur simultaneously in the analysis phase, along with a context analysis, because they’re intrinsically tied to the performance objectives, which are the outcome of the analysis phase. Other important activities in the analysis phase are the needs assessment (NA) and the performance analysis, both of which precede the instructional and learner analyses.

The NA identifies the gap between the learners’ optimal status and actual status. The performance analysis is conducted to determine whether the problem can be addressed with instruction. If so, a goal statement is produced based on the findings of the performance analysis. The instructional analysis breaks the goal statement down into supraordinate, subordinate, and entry-level skills by identifying the aspects that will need to be taught to reach the goal. The learner analysis identifies the learners’ current knowledge, skills, and attitudes, as well as other pertinent information, such as preferences or cultural constraints, that may impact learning. Overall, the goal of ID is to design effective, efficient, and innovative learning experiences.

In the instructional analysis, the instructional designer determines what the learners will actually be doing to reach the goal and maps the instructional pathway. During the goal analysis, the instructional designer graphically displays the specific steps needed; the diagram can include alternative actions, breaks in the process, and the type of learning. Types of learning outcomes include verbal, intellectual, cognitive strategy, psychomotor, and attitudinal. Each type of learning requires a different type of analysis. For example, verbal information can be clustered according to a particular schema, whereas for intellectual or psychomotor skills, instructional designers use a hierarchical approach because a subordinate skill must be achieved before a supraordinate one.

The outcome of the goal analysis becomes the supraordinate skills. During the subordinate skill analysis of a complex skill, the supraordinate steps are broken down into main rules, concepts, and discriminations; the corresponding verbal information and attitudinal skills are attached horizontally. Once the substeps have been fleshed out, the instructional designer determines the entry-level skills: what the learner should already know how to do in order to successfully achieve the new learning goal. For example, the instruction will generally require a certain reading level, language ability, and topic-specific knowledge.
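To make the hierarchy concrete, here is a minimal sketch in Python of how a goal might be decomposed into supraordinate, subordinate, and entry-level skills. The goal and skill names are hypothetical, and this is only one of many ways to model the structure.

```python
from dataclasses import dataclass, field

@dataclass
class Skill:
    """A node in the instructional analysis hierarchy."""
    name: str
    kind: str  # "supraordinate", "subordinate", or "entry"
    children: list["Skill"] = field(default_factory=list)

# Hypothetical decomposition of a writing-instruction goal.
goal = Skill("Write a literature review", "supraordinate", [
    Skill("Summarize a source", "subordinate", [
        Skill("Identify a main idea", "entry"),
    ]),
    Skill("Synthesize across sources", "subordinate", [
        Skill("Compare two arguments", "entry"),
    ]),
])

def print_hierarchy(skill: Skill, depth: int = 0) -> None:
    """Walk the tree top-down, mirroring the order of the analysis."""
    print("  " * depth + f"[{skill.kind}] {skill.name}")
    for child in skill.children:
        print_hierarchy(child, depth + 1)

print_hierarchy(goal)
```

Walking the tree from the top down mirrors the order of the analysis itself: goal steps first, then the subskills, then the entry-level skills learners must bring with them.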

As mentioned above, the learner analysis is done simultaneously with the instructional analysis because they inform one another. The learner analysis captures the wide array of variables that affect the learner: entry skills, educational level, prior topic knowledge, attitudes toward the content, attitudes about the delivery system, attitudes toward the organization, learning preferences, group characteristics, and motivation. The instructional designer collects information on the learners by conducting structured interviews with those familiar with their current performance. Additionally, the instructional designer conducts site visits to observe the learners in the performance and instructional contexts, and can collect data on the learners via pretests, self-reports, or one-on-one informal discussions.

The output of the learner analysis is a report on all of the previously mentioned variables potentially affecting the learner. The context analysis is interrelated with the learner analysis in that it collects information on another category of variables affecting the learner: administrative support, the physical site, social aspects of the site, and the relevance of the skill (goal) to the workplace or school.

All three analyses (instructional, learner, and context) are critical to the appropriate design and development of instruction. If any of the skills (supraordinate, subordinate, or entry-level) are overlooked, or learning context variables are not addressed, the effectiveness of the instruction will be diminished. For example, if your target audience is English language learners, you’ll need to collect data on their language skills, reading levels, and cultural norms; otherwise, the instruction will not meet the learners’ needs and will therefore be a waste of time, money, and effort.

Quality Matters for Online Instruction


What is it?

Quality Matters™ (QM) is a peer-review process for providing feedback and guidance on online course design. According to the QM website, it originated from the MarylandOnline Consortium project in 2003, which received a grant from the US Department of Education to create a rubric and review process based on research and best practices. In 2014, QM became its own nonprofit organization. Through a subscription service, the organization now provides training, resources, conference events, and research collaborations. It currently has 5,000 QM-certified reviewers to assist subscribers with the peer review of their online courses.

Who uses it?

QM provides specific rubrics and guidelines for the quality assurance review process for K-12, higher education, publishers, and continuing education programs that offer distance education. QM also has a new program to bring the rubric and process to students. The QM process is specifically for hybrid and fully online courses; it’s not for web-enhanced face-to-face courses. QM currently has 900 subscribers, and subscription prices are scaled to the size of your online programs.

How does it work?

A subscribing institution (or individual) requests a QM review of a course and submits an application. QM recommends that you familiarize yourself with the rubric through their training process in advance of the review. They also recommend that the course under review not be new: it should have run for a few semesters to work out the bugs. A QM coordinator assigns your course a team of reviewers consisting of a team leader and two other certified peer reviewers, one of whom is a subject matter expert. They read your self-report about the course and review the course using the rubric and guidelines. The rubric covers these general standards: 1. Course Overview & Introduction, 2. Learning Objectives (Competencies), 3. Assessment & Measurement, 4. Instructional Materials, 5. Course Activities & Learner Interaction, 6. Course Technology, 7. Learner Support, and 8. Accessibility & Usability. The team contacts you with questions throughout the 4-6 week process. Then they present their evaluation, with time for you to address any major issues before the report is finalized.

What are the benefits?

Courses that pass the review process receive recognition on the QM website. Even when you meet the standards, the peer reviewers provide recommendations for further improvement. Instructors can use this feedback for other courses they teach or debrief with colleagues about it, so the review serves as an ongoing continuous improvement process. It is also something institutions can promote to their clients and instructors can add to their curricula vitae. From personal experience in becoming a QM-certified peer reviewer, I can attest to the benefits of knowing the best practices and accessibility requirements for online course design. It has helped me become a better online instructor and provided me with a wealth of knowledge for my work as an instructional designer. I’m grateful to the Innovation in Learning Center at the University of South Alabama for training me on the QM process and providing the opportunity to become a certified peer reviewer.

Join me at SITE 2016 in Savannah, GA!

[Photo of Sandra Annette Rogers: Say hello if you see me.]

Two of my proposals were accepted for presentation at the Society for Information Technology and Teacher Education (SITE) International Conference in Savannah, GA.  I’d love to connect with any of my readers who are also going to SITE. This will be my second time to attend this conference and my first time in the city of Savannah.  I can’t wait!

Here’s my current schedule for the conference: (All times are Eastern Standard Time.)

1. Brief Paper: Rubric to Evaluate Online Course Syllabi Plans for Engendering a Community of Inquiry, March 22, 2016, 11:50 A.M.-12:10 P.M., in Hyatt Regency F.

2. Poster Session: Saudi ELLs’ Digital Gameplay Habits and Effects on SLA: A Case Study, March 23, 2016, 5:30-7:00 P.M., in the Hyatt Regency Harborside Center. See my poster below.

My Human Performance Improvement Toolbox


Beresford and Stolovitch (2012) defined human performance improvement (HPI) in terms of three perspectives: vision, concept, and end. Vision is for individuals to succeed in areas that are valued by their organization’s stakeholders. Concept is to use the vision to accomplish the organization’s goals through successful interactions not only with the organization’s stakeholders but also with customers, regulatory agencies, and society. End refers to terminal behaviors, products, and other outcomes that provide a return on investment (ROI). I’ll use Beresford and Stolovitch’s perspectives on HPI in my toolbox to address the needs of an organization.

Gilbert (2007) provided HPI with a formula for worthy performance (Pw): Pw = Av/Bc, where Av refers to valued accomplishments and Bc refers to costly behaviors. The term “costly” can have positive and negative connotations; it references the costs involved with each performance (e.g., salaries, resources, and training). Gilbert’s formula is a powerful tool for determining which performances are worthy.
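As a worked example with hypothetical figures: suppose an accomplishment (say, a completed training manual) is valued at $50,000 and the behaviors to produce it (salary, resources) cost $10,000. Then

\[ P_w = \frac{A_v}{B_c} = \frac{\$50{,}000}{\$10{,}000} = 5. \]

A ratio above 1 means the accomplishment returns more value than the behavior costs; the higher the ratio, the worthier the performance.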

The first step in improving a particular performance is to conduct a needs assessment (NA) to better understand the current performance in relation to desired outcomes, such as industry standards (benchmarking), coupled with the vision of the organization. A NA helps organizations identify the gap (need) between their actual and optimal performance levels. I would rely on Altschuld’s (2010) three-phase NA model (preassessment, assessment, postassessment) as a guide for interacting with a NA team and a NA committee of stakeholders. In the preassessment, my team would gather data on the topic from key informants, literature, and extant resources.

The NA team would follow up on emergent themes describing the perceived need and gather specific information via interviews, questionnaires, and focus groups on what the respondents value as possible solutions. The NA postassessment process identifies the problem succinctly: Is the gap due to a lack of incentives, knowledge, skills, or institutional support? Training is not always the answer. Interactions and behaviors can be improved via instructional and/or noninstructional interventions. For instance, HPI can be as simple as buying a better writing instrument (e.g., a Dr. Grip pen) to expedite note-taking on the job: a noninstructional intervention.

I’d use the various job aids provided in Altschuld’s series of books to identify and address the problem in light of the organization’s concepts. For example, I favor Ishikawa’s fishbone diagram, with the bones representing the various issues within labeled categories of performance. Moreover, I’d collect solutions from stakeholders and conduct a Sork feasibility study to determine the appropriate ones. Given the complexity of a NA, the Altschuld series would serve as another item in my HPI toolbox.

I created a manual of problem analysis (PA) methods for novice instructional designers to use on a daily basis when a full NA is impossible. I studied Jonassen’s typology of problems to determine the problem type and the possible actions required. If a problem is well-structured, a quick solution can be found because it is easily solved. If it is ill-structured, I should conduct a PA to get to the root of the problem. I would use Harless’s (1974) list of 14 questions for PA; his first is the most important: Is there a problem? After a problem is identified, I would use Toyoda’s why tree for root cause analysis; this technique keeps asking why for each response given until the root(s) is identified. Then I would use Sanders and Thiagarajan’s six-box model to see which areas of an organization are affected by the performance problems: knowledge, information, motives, process, resources, and wellness. I also learned from Jonassen’s (2004) work that we should collect our problems in a fault database. This is something I have been doing to improve our turnaround in resolving learning management system (LMS) issues at my workplace and to increase our ROI for cost, labor, and learning outcomes.
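As an illustration of the fault-database idea, here is a minimal sketch in Python; the fields, file name, and example entry are hypothetical, not my actual workplace system.

```python
import csv
import os
from datetime import date

# Columns for a simple fault database of recurring LMS issues.
FIELDS = ["date", "problem", "root_cause", "resolution"]

def log_fault(path: str, problem: str, root_cause: str, resolution: str) -> None:
    """Append one resolved issue so recurring problems can be fixed faster."""
    write_header = not os.path.exists(path)
    with open(path, "a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDS)
        if write_header:
            writer.writeheader()
        writer.writerow({
            "date": date.today().isoformat(),
            "problem": problem,
            "root_cause": root_cause,
            "resolution": resolution,
        })

# Hypothetical entry: a student cannot see a quiz in the LMS.
log_fault(
    "lms_faults.csv",
    problem="Student cannot see the Week 3 quiz",
    root_cause="Availability window set in the wrong time zone",
    resolution="Reset the window to campus time; documented the steps",
)
```

Over time, filtering the file by root cause shows which problems recur and where a noninstructional fix, such as a changed default setting or a job aid, would pay off.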

For interventions at my workplace, I use job aids, embedded performance support systems, and the aforementioned fault database. I purchased Rossett and Gautier-Downes’s (1991) HPI resource book, A Handbook of Job Aids. The book provides matrices (Frames Type II) to help the user discern which job aid should be used with which type of task. I also create job aids for the workplace to facilitate teaching and learning. For example, I create how-to guides for instructional technology software (e.g., Camtasia Studio) for instructors who are unable to attend trainings and must learn on their own. Job aids are useful HPI tools for infrequent tasks, like the occasional instructional video one might need to create for a class. I have also been focusing on providing performance support for students’ and instructors’ right-time needs. I noticed an overreliance on the instructional designer to answer all LMS-related questions, so to provide an embedded support system, I added a webpage to our LMS that answers frequently asked questions. This has greatly reduced my queue of email requests while improving the performance of those affected. In closing, for my general HPI framework, I rely on Beresford and Stolovitch’s perspectives of vision, concept, and end. To put the framework into action, I rely on the works of Gilbert, Altschuld, Jonassen, Harless, Ishikawa, Sanders, Thiagarajan, and Toyoda.

References

Altschuld, J. W., & Kumar, D. D. (2010). Needs assessment. Thousand Oaks, CA: SAGE Publications.

Beresford, B., & Stolovitch, H. D. (2012). The development and evolution of human performance improvement. In R. A. Reiser & J. V. Dempsey (Eds.), Trends and issues in instructional design and technology (3rd ed., pp. 135-146). Boston, MA: Allyn & Bacon Pearson Education.
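
Gilbert, T. F. (2007). Human competence: Engineering worthy performance (Tribute ed.). San Francisco, CA: Pfeiffer.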

Harless, J. H. (1974). An analysis of front-end analysis. Improving Human Performance, 2(4), 229-244.

Jonassen, D. H. (2004). Learning to solve problems: An instructional design guide. San Francisco, CA: Pfeiffer.

Rossett, A., & Gautier-Downes, J. (1991). A handbook of job aids. San Francisco, CA: Pfeiffer & Company.

e-Learning Instructional Strategies to Teach to the Whole Person


Teaching to the whole person is more important than ever. But how can we do this in an online learning environment? I work at a Jesuit and Catholic college where I’ve been learning about Jesuit education and Ignatian pedagogy. The principles of Ignatian pedagogy include context, experience, reflection, action, and evaluation (Korth, 1993). To address these in distance education, I’m developing an instructional design (ID) model that combines learner-centered, experience-centered, activity-centered, and content-centered approaches to fully address the whole person in online courses. Ragan, Smith, and Curda (2008) stated that a combination ID model is possible. Not only is it possible, it is absolutely necessary for providing the diverse and rich experiences that research-based best practices call for in online environments. Otherwise, a single mode of learning becomes monotonous and decreases student motivation to learn.

Table 1 provides instructional strategies for the online environment that engender higher-order thinking (cognitive presence) for each approach. This chart represents an initial listing to assist educators with strategy selection depending on various affordances and constraints, such as time and resources. For example, an activity-centered lesson is based on an interactive task and requires collaborative tools and student groupings. Content-centered lessons are passive tasks where the student generally interacts only with the content; the exception is discussion of the content. Experience-centered activities require a hands-on approach: developing something or serving/working with others. The learner-centered activity provides the learner with more autonomy over their pursuit of knowledge and includes metacognitive actions for self-regulation of learning; the affordances and constraints for this type of activity are highly dependent on the task.

Table 1

Cognitive Online Instructional Strategies to Teach to the Whole Person

Activity-Centered
• Analysis of case studies
• Critically review an article
• HyperInquiry team project
• Academic controversy assignment
• Develop a book trailer on topic
• WebQuest

Content-Centered
• Pretest/posttest
• Write a literature review
• Complete modules on topic in a computer-adapted lab/program
• Write an essay
• Make a presentation
• Discuss content with peers and instructor

Experience-Centered
• Develop questionnaires
• Develop a personal model of topic
• Participate in a simulation
• Develop a workshop
• Develop a wiki on topic
• Develop a podcast on topic or narrated PowerPoint
• Develop a how-to guide or video tutorial on a procedure
• Write a blog post on topic
• Serve others as a mentor, tutor, or volunteer on topic
• Virtual fieldtrip
• Peer review of papers or projects
• Students create multiple-choice questions for review
• Design a project
• Evaluate a program

Learner-Centered
• Write an autobiography of your interaction with topic
• Complete a self-evaluation
• Develop a personal learning network
• Capture reflections in a journal, audio, or video
• Curate digital books and articles on topic for lifelong learning

Note. I linked some of these activities to sources of my own and others. Check back soon for an update!

References

Korth, S. J. (1993). Precis of Ignatian pedagogy: A practical approach.  International Center for Jesuit Education, Rome, Italy.

Ragan, T. J., Smith, P. L., & Curda, L. K. (2008). Outcome referenced, conditions-based theories and models. In J. M. Spector, M. D. Merrill, J. van Merriënboer, & M. P. Driscoll (Eds.), Handbook of research on educational communications and technology (3rd ed., pp. 383-399). New York, NY: Lawrence Erlbaum Associates/Taylor and Francis Group.

5 Important Instructional Strategies


An instructional strategy is something that an instructional designer (or educator) uses as a vehicle to deliver information. Some instructional strategies require the Internet, such as WebQuests, HyperInquiry, and well-designed educational videogames, while others, like mnemonics for memory, are used within the mind metacognitively. The vast majority, however, are used to present instruction in multimodal formats. Other strategies include academic controversy, advance organizers, chunking of information, imagery, and spatial strategies (i.e., Frames Type I and II matrices, concept mapping). The best ones are based on cognitive science and learning theory. Instructional strategies differ from learning strategies in that the latter are for the learner to use for encoding information (also known as cognitive strategies). Here are some useful cognitive strategies for enhancing learning and retention: make the information meaningful, organize it, visualize it, and elaborate on it. In my opinion, learning strategies should be embedded within instruction and modeled by the teacher to increase their use.

Instructional strategies are based on the goals and learning objectives identified during the analysis phase of the instructional design process. The instructional strategies must match the intended end behaviors, conditions, and criteria of the objectives. For example, if you’re developing an online course, it would be important to include an advance organizer (AO) for each unit to build a bridge between previously learned information and the new content. This bridging strategy is based on Ausubel’s subsumption theory because it taps into prior knowledge and adds new information in a structured way to build schema on the topic (West, Farmer, & Wolff, 1991). An AO is written like an abstract: brief, but containing all the key information. AOs have seven features that make them more than simply an introduction to a unit; for example, they must encourage students to tap into their prior knowledge of the topic.

Concept mapping is the most commonly used spatial strategy. It produces a graphical depiction of the content in a connected framework. There are different types of concept maps based on the type of information you need to teach: spider maps for categories (typologies), chain maps for linear processes, and hierarchy maps for complex topics and the interrelationships of the system, subsystems, and parts (West, Farmer, & Wolff, 1991). Concept mapping is related to the instructional strategy of chunking information into meaningful units: you need to chunk the information before you map it.

Chunking and concept mapping are based on some of the same learning theories, such as Sweller’s cognitive load theory, Miller’s seven-plus-or-minus-two principle, and Baddeley’s working memory model. All of these describe the limited capacity of working memory. Cognitive load theory proposes several conditions to optimize learning, such as reducing the amount of “noise” (extraneous elements in the broad sense) during a learning event. For example, long lectures should be broken into segments of five minutes or less because the human brain cannot attend to, process, and store lengthy amounts of information at once.
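As a toy illustration of the chunking principle, here is a minimal Python sketch; the list of terms is hypothetical, and seven is used only as the upper bound suggested by Miller’s principle.

```python
def chunk(items: list, size: int = 7) -> list:
    """Split items into chunks no larger than size (Miller's 7+/-2 upper bound)."""
    return [items[i:i + size] for i in range(0, len(items), size)]

# Hypothetical: twelve vocabulary terms become two manageable chunks.
terms = [f"term_{n}" for n in range(1, 13)]
for group in chunk(terms):
    print(group)
```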

Other spatial strategies are frames, types I and II. Frames, type I is described by Reigeluth (1983) as a combination of “big picture and telescoping.” Instructional designers use frames, type I to unpack and emphasize the big ideas of a unit of information in a meaningful structure that builds on existing schema. Frames, type II is a rule-bound matrix and requires higher-order thinking skills to complete, whereas frames, type I is for simple recall, comprehension, and application (West, Farmer, & Wolff, 1991). Usually, the information for both types of frames is presented in a two-dimensional matrix. These instructional strategies are also based on cognitive load theory in that structuring the information and its relationships reduces extraneous processing so the learner can focus on the intrinsic and germane elements. They are also based on schema theory, first posited by Piaget; frames, types I and II provide the structure to build on existing schema. Of all the instructional strategies, these five are the ones I rely on most as an instructional designer.

References

Reigeluth, C. M. (1983). The elaboration theory of instruction. In C. M. Reigeluth (Ed.), Instructional-design theories and models: An overview of their current status (pp. ). Hillsdale, NJ: Lawrence Erlbaum Associates.

West, C. K., Farmer, J. A., & Wolff, P. M. (1991). Instructional design: Implications from cognitive science. Englewood Cliffs, NJ: Prentice Hall.