My Research Statement
I endeavor to become a public scholar who investigates learning design through a multi-theoretical, sociocultural lens, with insights from cognitive neuroscience, to produce effective, efficient, and engaging instruction. My research interests are the possible affordances of game-based learning and simulations for the classroom, second language acquisition, and transformative distance education course design for universal access. As a public scholar, I freely share the research artifacts, interventions, and inventions I produce through Creative Commons licensing and open educational resource platforms.
Summary of My Research
Early in my career, I have led five of the six research studies in which I have participated, two of which were in preparation for my dissertation on gaming for English language learning (ELL). First, I evaluated a relevant videogame produced by the U.S. Department of State (DOS) called Trace Effects. Second, I conducted a case study on the gaming habits of the dominant group of language learners at the English language center where my dissertation research would take place. Both were foundational to my initiation into gaming research for language learning. Currently, I am conducting a mixed methods case study of instructional designers’ perspectives on developing active learning online within the universal design for learning framework.
Logic Modeling
Logic modeling is a way to graphically illustrate an organization’s efforts and their relationship to its intended outcomes. In 2014, I obtained permission from the DOS to evaluate Trace Effects, a videogame that teaches American English and culture to non-English-speaking youth worldwide. Logic modeling was used to explicate and evaluate the program theory embedded in the videogame, support materials, and stakeholder activities through questionnaires, interviews, gameplay observations, and secondary resources. I utilized the University of Wisconsin-Extension’s (2003) logic model frame.
Data analysis indicated that the evaluand was based on cognitivism, constructivism, the communicative approach to language acquisition, the Teachers of English to Speakers of Other Languages Technology Standards Framework (2008), and gaming as an instructional strategy (Rogers, 2014). Moreover, the following DOS pillars were embedded within the game’s levels: entrepreneurship, community activism, empowering women, science and innovation, environmental conservation, and conflict resolution. The logic model’s inputs, outputs, and short- and long-term outcomes were developed according to the pragmatic use evaluation branch paradigm, as defined by Mertens and Wilson (2012). In alignment with Scriven’s (2007) Key Evaluation Checklist, member checks were conducted to obtain feedback on the accuracy of the logic model.
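For readers unfamiliar with the frame, the sketch below renders the University of Wisconsin-Extension logic model structure (situation, inputs, outputs, outcomes) as a simple data structure. It is a minimal illustration only; the entries are hypothetical stand-ins drawn loosely from the description above, not the actual evaluation data.

```python
# Minimal sketch of the UW-Extension logic model frame as a data structure.
# All entries are illustrative placeholders, not evaluation findings.
from dataclasses import dataclass, field


@dataclass
class LogicModel:
    situation: str
    inputs: list = field(default_factory=list)          # resources invested
    outputs: list = field(default_factory=list)         # activities and participation
    short_term_outcomes: list = field(default_factory=list)
    long_term_outcomes: list = field(default_factory=list)


trace_effects = LogicModel(
    situation="Teach American English and culture to young ELLs worldwide",
    inputs=["Trace Effects videogame", "support materials", "stakeholder activities"],
    outputs=["gameplay sessions", "teacher-led practice"],
    short_term_outcomes=["vocabulary gains", "cultural awareness"],  # hypothetical
    long_term_outcomes=["communicative competence"],                 # hypothetical
)

print(trace_effects.inputs)
```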
Case Study
Qualitative case study. A single, instrumental case study was conducted to understand the digital gameplay habits of Saudi ELLs and the effects of those habits on second language acquisition (SLA). The purpose was to obtain data on the bounded system of Saudi students regarding their gaming habits, general personal attitudes and cultural views toward gaming, and attitudes toward gaming for SLA. The study focused on intermediate- and advanced-level ELL participants’ digital gaming habits during and after school in the US and Saudi Arabia. Interviews, focus groups, and questionnaires were used to collect information.
Participants reported learning history, English vocabulary, and English skills (i.e., listening, speaking, and reading) from gaming (Rogers & Johnson, 2016). Other skills learned included gaming literacy, social cues, and problem-solving. English and problem-solving were the skills participants considered most essential to learn from gaming. Images and rules were the most salient game features for success, while the capability to speak with native English speakers (NES) during gameplay was cited as extremely useful for learning English. Open, axial, and selective coding were used to determine emergent themes, which a key informant then verified. Figure 1 is a hierarchical diagram of the themes culled, with the joy of playing games being the dominant one. This case study helped determine that the use of gaming for SLA would be a good fit for my dissertation with this language group and provided parameters for game selection.
Figure 1. Emergent themes about digital gaming based on Saudi student responses
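To illustrate how open, axial, and selective coding yield a theme hierarchy like the one in Figure 1, the sketch below models nested themes as a small tree. The dominant theme comes from the study; the sub-themes and codes shown are hypothetical placeholders.

```python
# Sketch of a coded theme hierarchy: a selective (core) category at the root,
# axial categories beneath it, and open codes at the leaves.
# Only "joy of playing games" is from the study; the rest is hypothetical.
themes = {
    "joy of playing games": {                                   # selective category
        "language learning": ["vocabulary", "listening"],       # axial -> open codes
        "social interaction": ["playing with NES"],             # hypothetical
    },
}


def print_tree(node, depth=0):
    """Walk the theme hierarchy and print an indented outline."""
    if isinstance(node, dict):
        for theme, children in node.items():
            print("  " * depth + theme)
            print_tree(children, depth + 1)
    else:
        for code in node:
            print("  " * depth + code)


print_tree(themes)
```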
Mixed methods collective case study. My dissertation focused on role-playing videogames combined with strategic language gameplay to improve ELLs’ skills and vocabulary. The design included 15 participants who received 25 hours of weekly English language instruction at school. The treatment group was prescribed EverQuest® II (2016) with SLA-optimizing strategic gameplay for 4 hours a week for a month. The control group participated in the regular extracurricular program activities instead. The Cambridge Michigan Language Assessment (CaMLA, 2016) pretest-posttest composite mean gain scores were used to assess the participants’ grammar, listening, reading, and vocabulary at the term’s end.
The control group outperformed the treatment group on the CaMLA by 1.7 mean gain score units. The treatment group learned, on average, 15 new words, representing a 30% increase on the gameplay vocabulary test. Due to the small sample size and nonrandom assignment, the study lacked the rigor and statistical power to support valid and reliable quantitative claims. Brinkerhoff’s (2003) success case method was used to further investigate extreme cases. Individual impact profiles of these cases were developed in terms of the following learning performance drivers: skills, knowledge, expectations, motivation, and incentives (Rossett & Sheldon, 2001). These profiles provided contextual clues as to what made a case successful or unsuccessful. The extreme case impact profiles suggested the following footprints for success on the CaMLA: reading at the fifth-grade level, interacting with NES, and having decided on a college major. Plausible causes for lack of success with the treatment were cell phone use, use of the native language during gameplay, resistance to the prescribed SLA-optimizing features and activities, and the challenging reading level of the videogame. Figure 2 illustrates emergent themes culled from interviews, focus groups, and questionnaires on gaming and language learning. The mixed methods design helped explain the failure of the treatment.
Figure 2. Emergent themes on gaming and language learning
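For clarity on the pretest-posttest composite mean gain score comparison above, the sketch below computes per-group mean gains and their difference. The score lists are synthetic; only the method (mean of per-participant posttest minus pretest differences, compared across groups) mirrors the study design.

```python
# Sketch of a pretest-posttest mean gain score comparison with synthetic data.
from statistics import mean


def mean_gain(pre, post):
    """Average per-participant gain (posttest minus pretest)."""
    return mean(b - a for a, b in zip(pre, post))


# Hypothetical composite scores, not the study's data.
treatment_pre, treatment_post = [40, 55, 47], [42, 56, 50]
control_pre, control_post = [41, 52, 49], [45, 57, 52]

difference = mean_gain(control_pre, control_post) - mean_gain(treatment_pre, treatment_post)
print(f"Control outgained treatment by {difference:.1f} units")
```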
Action Research
My current research agenda is the replication of our e-learning study with the Online Community of Inquiry Syllabus Rubric© (Rogers & Van Haneghan, 2016) across disciplines and at different institutions to validate the instrument. The rubric is based on general concepts from Garrison, Anderson, and Archer’s (2000) Community of Inquiry (COI) framework; quality distance education rubrics (California State University-Chico, 2009; Johnson, 2007; Quality Matters™, 2014; Roblyer & Wiencke, 2004); and significant literature. It consists of the following categories: instructional design for cognitive presence, technology tools for COI, COI loop for social presence, support for learner characteristics, and instruction and feedback for teaching presence. The 5-point rubric uses the following scale for each criterion: low, basic, moderate, above average, and exemplary. The points awarded determine the course’s potential level for building an online COI (i.e., low, moderate, or high). Our underlying theoretical premise was that the more interactive and cognitively challenging the course, the higher the level of student satisfaction and course achievement. See the rubric (copyright 2015 by Rogers & Van Haneghan).
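To make the scoring mechanics concrete, the sketch below maps category scores to a course’s potential COI level. The five categories and the 1-5 scale come from the rubric as described above; the cut-off points and the scoring function are hypothetical assumptions, not the published instrument.

```python
# Hedged sketch of rubric scoring. Categories and scale follow the rubric;
# the cut-offs for low/moderate/high COI potential are hypothetical.
CATEGORIES = [
    "instructional design for cognitive presence",
    "technology tools for COI",
    "COI loop for social presence",
    "support for learner characteristics",
    "instruction and feedback for teaching presence",
]
SCALE = {1: "low", 2: "basic", 3: "moderate", 4: "above average", 5: "exemplary"}


def coi_potential(scores):
    """Map total rubric points (5-25 possible) to a COI potential level."""
    total = sum(scores[c] for c in CATEGORIES)
    if total <= 10:        # hypothetical cut-off
        return "low"
    if total <= 18:        # hypothetical cut-off
        return "moderate"
    return "high"


sample = dict.fromkeys(CATEGORIES, 3)   # a syllabus scored "moderate" across the board
print(coi_potential(sample))            # -> moderate
```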
The rubric’s purpose is to provide a pragmatic solution to problematic teacher-led (passive knowledge) online courses with little student interaction and few academic challenges. In 2016, good interrater reliability was obtained with the rubric in a review of 23 undergraduate and graduate education online course syllabi, intraclass correlation coefficient (ICC) = .754, p < .001, 95% CI [.514, .892]. Results indicated the potential for above-average cognitive presence (M = 4.7); however, social presence (M = 3.1) was moderate, and teaching presence (M = 2.7) was basic. In 2018, the study was replicated at a different institution across disciplines with 31 syllabi; the findings mirrored the previous study’s levels of COI presence (Rogers & Khoury, 2018). For action research, the rubric criteria and results can serve as talking points between instructional designers and course developers. Table 1 provides common instructional design feedback based on the 2018 online course syllabi analysis, and a sketch of the ICC computation follows the table.
Table 1
Common Feedback Based on the Community of Inquiry Syllabus Rubric Analysis
Category | Recommendations
Instructional Design for Cognitive Presence | Include higher-order thinking activities such as case analyses and papers that require synthesis or evaluation. See the list of cognitive activities in the Online Course Design Guide.
Education Technology for COI |
COI Loop for Social Presence |
Support for Learner Characteristics |
Instruction and Feedback for Teaching Presence |
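As a companion to the interrater reliability result reported above, the sketch below shows how such an ICC could be computed. The ratings are fabricated stand-ins for the syllabus review, and the code assumes the pandas and pingouin libraries; pingouin’s intraclass_corr reports several ICC formulations, and the one matching the rater design would be the one reported.

```python
# Sketch of an interrater reliability check with an intraclass correlation.
# Requires pandas and pingouin; the ratings are hypothetical, not study data.
import pandas as pd
import pingouin as pg

data = pd.DataFrame({
    "syllabus": [1, 1, 2, 2, 3, 3, 4, 4],
    "rater":    ["A", "B"] * 4,
    "score":    [18, 20, 12, 14, 22, 21, 9, 11],   # hypothetical rubric totals
})

icc = pg.intraclass_corr(data=data, targets="syllabus", raters="rater", ratings="score")
# Each row is a different ICC formulation (ICC1, ICC2, ICC3, and their averages);
# report the row that matches the rater design, with its CI and p value.
print(icc[["Type", "ICC", "CI95%", "pval"]])
```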
References
Brinkerhoff, R. O. (2003). The success case method: Find out quickly what’s working and what’s not. Berrett-Koehler.
Cambridge Michigan Language Assessments [Website]. (2016). CaMLA English Placement Test. http://cambridgemichigan.org/institutions/products-services/tests/placement-progress/camla-ept/
Enhancing program performance with logic models. (2003). University of Wisconsin-Extension Program Development. http://www.uwex.edu/ces/pdande/evaluation/evallogicmodel.html
Garrison, D. R., Anderson, T., & Archer, W. (2000). Critical inquiry in a text-based environment: Computer conferencing in higher education. The Internet and Higher Education, 2(2-3), 87-105. https://doi.org/10.1016/s1096-7516(00)00016-6
Johnson, E. S. (2007). Promoting learner-learner interactions through ecological assessments of the online environment. Journal of Online Learning and Teaching, 3(2). http://jolt.merlot.org/vol3no2/johnson.htm
Mertens, D. M., & Wilson, A. T. (2012). Program evaluation theory and practice: A comprehensive guide. Guilford.
QM Higher Education Rubric Fifth Edition. (2014). Quality Matters. https://www.qualitymatters.org/sites/default/files/PDFs/StandardsfromtheQMHigherEducationRubric.pdf
Roblyer, M., & Wiencke, W. (2004). Exploring the interaction equation: Validating a rubric to assess and encourage interaction in distance courses. Journal of Asynchronous Learning Networks, 8(4).
Rogers, S. (2014). Program theory logic model of Trace Effects video game. Proceedings of World Conference on E-Learning in Corporate, Government, Healthcare, and Higher Education, 1662-1674. AACE.
Rogers, S., & Johnson, B. (2016). Saudi English language learners’ digital gameplay: A case study. Proceedings of Society for Information Technology & Teacher Education International Conference, 579-584. AACE.
Rogers, S., & Khoury, S. (2018, October). Rubric to evaluate online course syllabi plans for engendering a community of inquiry: Round II. Paper presented at the meeting of the Association for Educational Communications and Technology, Kansas City, MO.
Rogers, S., & Van Haneghan, J. (2016). Rubric to evaluate online course syllabi plans for engendering a community of inquiry. Proceedings of Society for Information Technology & Teacher Education International Conference, 349-357. AACE.
Rogers, S. A. (2017). A MMORPG with language learning strategic activities to improve English grammar, listening, reading, and vocabulary (Doctoral dissertation). Available from ProQuest Dissertations and Theses database. (UMI No. 10265484)
Rossett, A., & Sheldon, K. (2001). Beyond the podium: Delivering training and performance to a digital world. Jossey-Bass/Pfeiffer.
Rubric for Online Instruction. (2009). Center for Excellence in Learning and Teaching. California State University-Chico. http://www.csuchico.edu/tlp/resources/rubric/rubric.pdf
Scriven, M. (2007). Key Evaluation Checklist. https://wmich.edu/sites/default/files/attachments/u350/2014/key%20evaluation%20checklist.pdf
Technology Standards Framework. (2008). TESOL International Association. http://www.tesol.org/docs/books/bk_technologystandards_framework_721.pdf
Trace Effects [Website]. (2012). U.S. Department of State. http://traceeffects.state.gov/
Sandra Annette Rogers, Ph.D.