I also provided a pre-conference workshop on how to use my Online Community of Inquiry Syllabus Rubric (copyright 2015 by Rogers & Van Haneghan). In both sessions, I used Mentimeter to engage the participants, as well as pair-share activities. Both sessions were well received. Some of the instructional designers stated that they want to use my rubric for their work and research! I had such a wonderful time at the UH, and the ITLD staff and professors were very kind to me. I’m a Texan, so I appreciated the Texas hospitality!
After completing my doctoral program in 2017, I looked for employment that would utilize and reward my Ph.D. and research efforts. I was working for a small college near my alma mater, where I am grateful to have had my start as an instructional designer (ID). My reference to how I got here refers to leaving that small college to work for the number one public university in the US. One of my new acquaintances said my work experience sounded like I was a ’20-year overnight success’! Jokes aside, all my past work experience (20 years as an educator + 7 as an ID) has led me to this new role. View my LinkedIn Profile to learn more.
I’d like to give a shout-out to the Educause Listserv for instructional designers for alerting me to this position. When I read the posting from the University of California-Los Angeles (UCLA), I knew I had a good chance because I met all the criteria, including the preferred ones. This was due to my broad work experience, including fellowships, and partly from my serving as the only ID for a small college and wearing multiple hats (e.g., designer, trainer, learning management system administrator). By 2019, I had fine-tuned my resume, portfolio, and interview skills after submitting 25+ applications and landing relevant interviews with the Carnegie Foundation, Harvard, and Global LT.
My New Role
I’m part of a team of 5 IDs working on the UCLA Chancellor’s initiative for online teaching and learning. We’re a team diverse in skill sets and experience, with shared education and interests. I’ve been on the job for six months now and have learned so much from my team and colleagues across campus. There are other IDs on our campus working to support specific departments or academic units, while we assist any instructor who’s interested in designing a hybrid or fully online course in our new Instructional Design Studio.
I’m currently co-designing two new courses with different instructors that will be offered in Spring quarter. UCLA uses Moodle as their LMS. I blogged about one of the courses I’m co-designing for the new minor in urban literature for the English Department. It’s a hybrid Irish literature course. The other course is for the Classics Department and will cover medical terminology through the sociocultural and historical context of Greek and Latin. For that course, I’m co-developing H5P interactive learning objects to review course concepts and terminology. See my blog post on H5P: Free Software.
My other role as an instructional designer is to support existing online courses and provide technical training. I’ve been able to shadow and learn from one of my new colleagues who has been at UCLA for many years. I’m helping her support existing courses (e.g., refreshing dates, checking links, configuring TA discussion sessions), as well as transition to new technologies. For example, UCLA instructors will use Respondus LockDown Browser and Monitor for unproctored online tests. I used this at my prior workplace, so I’ve taken the initiative to learn all I can to train my team, instructors, and TAs, as well as develop supporting documents (e.g., FAQs, practice tests, student guides).
Any new job comes with a learning curve. For technologists, it’s even steeper! I wasn’t familiar with designing courses on Moodle or Canvas, which are both used for distance education in the UC system. My first month on the job, I had to learn both of these in addition to the workplace culture, university policy, the UCLA campus, and all the acronyms used to describe the various learning communities of practice. Plus, I decided to get a new type of computer, a Microsoft Surface Pro (a tablet with a stylus, detachable keyboard, and special dongle for connections). My transition would have been a lot easier if I had gone with something familiar.
Fortunately, I didn’t have to learn my way around Los Angeles because I lived here 20 years ago. It was a challenge moving across the country and leaving my house behind to be set up as a rental property for the first time. I’m feeling settled in now. I’ve even reconnected with old friends here. I also have family in California. Other LA challenges have been the earthquakes this summer and the nearby fires this fall. I’ve got my emergency kit in the car and backpack in the house.
It looks like our new office space will be ready upon our return on Jan 2nd. I look forward to working with my teammates and instructors in our new space. We’re housed in the Young Research Library on campus, which has great multimedia interactive pod spaces and the 451 Cafe area, where we can meet with instructors outside our office space. I have 4 other new courses in the initial planning phase that I’ll report on as they develop. They are all equally exciting to me. I feel extremely blessed to have this opportunity.
I can’t believe this is my 200th blog! As mentioned in #199, I’ve gone back and revised blogs as I’ve grown academically. If you’ve been with me for the past decade, thank you! If you’re a new reader, welcome. What comes next may be a podcast or vlog. I’d love to hear your feedback.
I started Teacherrogers’ blog in the summer of 2010. It has served as my landing place, a cognitive airstrip of sorts, for sharing ideas and revisiting them often. In the past decade, my blog posts became more academic through my scholarly endeavors. In 2010, I was teaching English as a second language and developmental reading. Back then, my blogs addressed computer-assisted language learning, reading strategies, and online teaching. Nowadays, they address more general teaching and learning theories, instructional design, innovative educational technologies, and digital literacy. My most visited blog of all time was written in 2012, the Instructional Design of an Online Course.
My blog has helped me synthesize what I’ve learned and experienced on the job, in my doctoral studies, and through action research. My blogs have also served as job aids for instructors that I support at work. As my readership has grown, I’d like to believe they have helped inform my readers. Teaching Tips is the most populated category. Based on site visits, I can tell which blogs are popular. I just wish more visitors would leave a comment, especially students who are reading my blog to help them with their homework. I’ve also benefited from critical comments, as I endeavor to adequately and appropriately cover topics. Please keep those comments coming!
In curating my online presence, I recently discovered that Ghent University subscribes to a syndication of my blog through Newstex. Unfortunately, it doesn’t list me as the author. Even worse, Newstex doesn’t have permission to syndicate my blog. I’ve contacted them informally through Twitter to no avail. The next step is to send them a polite email. As my technology manifesto states, as information system users, we should all be benevolent, evaluative, and vigilant of our online interactions.
I’m currently #47 in Feedspot’s Top Educational Technology blogs. There are about 1,000 more subscribers to my blog through their RSS feed. If I used Facebook (FB), I might be reaching more readers, but I consider FB unsafe; read my blog post on that topic. If you’re interested in learning more about my online presence, visit my blog roll.
Even though maintaining this blog is time consuming, I don’t plan on stopping anytime soon. One of my new goals is to use my blog as the basis for a podcast. I’d love to hear from my readers on the topics that matter to them. What should I blog about for my 200th blog? Podcast ideas? Please leave your thoughts in the comment section below or via Twitter @Teacherrogers. To my readers, a huge thank you!
I’m co-designing a new Irish literature hybrid course where college students will use Google Cardboard with a mobile phone application (app) for virtual reality (VR) experiences with 360 media. This is my first time preparing VR learning experiences, and I wanted to share what I’ve figured out so far. This is a work-in-progress in prep for spring quarter, so I’ll continue to return to this blog with updates as I learn more.
The English course is lecture-based and will include other interactive technologies for blogging reflections, annotating text, and georeferencing sites. For their virtual travel blog, students will view selected areas in Ireland that are referenced in the literature and write a reflection. Our team will use both professionally made and self-produced 360 VR media of the Dublin environs that match specific instances described by Irish lyricists, poets, and writers. Here’s a professional VR example of Glendalough, an Irish monastic cemetery.
The purpose of using VR is to provide a sense of being there. It provides the viewer with the sense of being present within the 360 media. It removes the artifice of flattened images and stills. It serves as a virtual field trip for situated learning when actual travel is not a viable option.
Any VR device manufacturer and app will suffice; we selected the Google Cardboard as a low cost option. Our students will install the free Google Cardboard App on their smartphone. Those without a smartphone can tab through the 360 images on their desktop.
Unfortunately, the Google Cardboard app isn’t compatible with all phones! My husband tried to install it on his LG Android that’s only 2 years old, and it states it’s not compatible. Here are industry recommendations: “In general, Cardboard apps and games will work with any Android 4.1 or above phone and even iPhones, as long as they’re running iOS 8 or above” (3G, 2019, para. 12).
We’re using the free Google Cardboard camera app to capture spherical VR images and videos. It’s fairly easy to use and share images between smart devices. However, sharing VR media in a course setting presents a challenge, as it requires a VR hosting platform to view. Our learning management system (LMS) uses Kaltura for video hosting, which states that it supports 360 video for VR interactions. So far, it’s not working. Our workaround is to use a free basic account with 360cities.net to host our VR media for the course. Keep the full size of your original VR image, as reducing the size corrupts (flattens) it.
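For readers who want to script a pre-upload sanity check on their own media, note that full-sphere equirectangular panoramas conventionally use a 2:1 width-to-height ratio, and hosting platforms often rely on that ratio (plus embedded metadata) when deciding whether to treat an upload as 360 media. The sketch below illustrates that idea only; the 2:1 heuristic and the tolerance value are my assumptions for illustration, not any platform’s documented rule, and Cardboard Camera’s cylindrical captures won’t match a strict 2:1 ratio.

```python
def looks_equirectangular(width: int, height: int, tolerance: float = 0.01) -> bool:
    """Heuristic check: full-sphere equirectangular panoramas are 2:1.

    Downscaling or cropping that breaks this ratio is one way a 360
    image can end up displayed as a flat picture instead of VR media.
    """
    if width <= 0 or height <= 0:
        return False
    return abs(width / height - 2.0) <= tolerance


def downscale_preserving_ratio(width: int, height: int, max_width: int) -> tuple:
    """If an image must be shrunk, keep an exact 2:1 ratio while doing so."""
    if width <= max_width:
        return (width, height)
    new_width = max_width - (max_width % 2)  # keep width even so height stays integral
    return (new_width, new_width // 2)
```

For example, a 5376x2688 capture passes the 2:1 check, while a 5376x2000 crop does not; if you must shrink the former to a 4096-pixel maximum width, the helper returns 4096x2048 so the ratio survives.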
I practiced capturing photos with the Google Cardboard Camera app. It instructs you to hold the phone vertically and snap the photo and rotate 360 degrees with your phone to capture your surroundings. I noticed that by focusing on the main object with the first snap, you’re left with a slightly visible vertical line where the images don’t match up. To avoid ruining your focal point, begin the first snap to the side of the main feature. The Cardboard camera photos are cylindrical. They don’t capture the ground or sky above. You’ll see blue for sky and grey for ground, but there’s a distinct line between the image and artifice.
VR Viewing Procedure
From your smartphone, access the linked content via the web or, in our instance, course page on the LMS app. Select the icon for VR to enable it. Then place the phone in the Google Cardboard device. You may need to remove your phone’s protective case for it to fit. The experience will feel as if you’re there instead of looking at a picture. The intended VR experience should provide situated cognition of the environs and, as is the case with our course, neural connections to the topic of study.
Some VR experiences include annotated media. The Google Cardboard device has a metal button on it that you use to select projected annotations. The mobile app also comes with some great examples from around the world. Right now, I’m reviewing Irish content readily available on the free Google Expeditions app that provides both VR and augmented reality (AR) experiences. If you have experience with any of the aforementioned technologies, or want to suggest related ones, please leave a comment below.
This article was originally posted on the AACE Review by Sandra Rogers.
The Data & Society Research Institute has produced and shared informative articles on the many facets of fake news producers, sharers, promoters, and denouncers of real news as part of their Media Manipulation Initiative. In Dead Reckoning (Caplan, Hanson, & Donovan, February 2018), the authors acknowledged that fake news is an ill-structured problem that’s difficult to define in its many disguises (e.g., hoaxes, conspiracy theories, supposed parody or satire, trolling, partisan biased content, hyper-partisan propaganda disguised as news, and state-sponsored propaganda). Nevertheless, they stated the critical need for it to be defined to produce a problem statement. Only in this way can a proper needs assessment and subsequent solutions be explored.
Based on their critical discourse analysis of information reviewed during their field research, they identified two descriptions of fake news: problematic content and the critique of mainstream media’s efforts to produce trustworthy news. They reported how the denouncement of mainstream media as fake news serves to legitimize alternative media sources. Beyond defining fake news, the authors seek parameters for what makes news real in their efforts to address information disorder.
Neither Man nor Machine Can Defeat Fake News
Kurzweil (1999) predicted that in the year 2029 humans will develop software that masters intelligence. However, the idea that cognition can be produced through computation has been refuted (Searle, 1980; McGinn, 2000). In Dead Reckoning, the authors addressed the problem of combating fake news as twofold: artificial intelligence (AI) currently lacks the capability to detect subtleties, and news organizations are unable to provide the manpower to verify the vast proliferation of unmoderated global media. Moreover, once one avenue is addressed, fake news producers circumvent the new safeguards. Several efforts are underway to develop machine learning algorithms, such as PBS’ NewsTracker and Lopez-Brau and Uddenberg’s Open Mind.
Fake News Endangers Our Democracy & Leads to Global Cyberwars
The social media applications that have become part of the fabric of our society are used as propaganda tools by foreign and domestic entities. For example, prior to the 2016 Presidential election, Facebook’s ads and users’ news streams were inundated with fake news that generated more engagement from August to September than that of 19 major news agencies altogether (BuzzFeed News, 2016). The authors shared how concerned parties (e.g., news industry, platform corporations, civil organizations, and the government) have moved beyond whether fake news should be regulated to who will set standards and enforce regulations. “…without systemic oversight and auditing platform companies’ security practices, information warfare will surely intensify” (Caplan, Hanson, & Donovan, 2018, p. 25).
The potential for fake news to reach Americans through digital news consumption from smartphone apps and text alerts compounds the issue. The Pew Research Center surveyed 2,004 randomly selected Americans who consume digital news and, based on two surveys per day for one week, found these habits: 36% used online news sites, 35% used social media, 20% searched the Internet, 15% used email, 7% relied on family, and 9% were categorized as other (Mitchell, Gottfried, Shearer, & Lu, February 9, 2017).
Strategic Arbitration of Truth
Caplan et al. describe how organizations and AI developers approach defining fake news by type, features, and news signifiers of intent (e.g., characteristics of common fake news providers, common indications of fake news posts, and sharing patterns). For example, one common news signifier of fake news is the use of enticing terms such as ‘shocking.’ Digital intervention efforts include developing a taxonomy for verification of content, developing responsive corporate policy, banning accounts of fake news promoters, tightening the verification process for posting and opening accounts, and informing users how to identify fake news. See the Public Data Lab’s Field Guide to Fake News and Other Information Disorders.
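To make the idea of a “news signifier” concrete, here is a toy sketch of how one such signal, the density of enticing terms like ‘shocking’ in a headline, could be scored. The word list and threshold are invented for illustration; real detection systems combine many such features inside trained models rather than relying on a single keyword heuristic.

```python
# Invented word list for this sketch only; production systems learn such
# features from labeled data rather than hard-coding them.
SENSATIONAL_TERMS = {
    "shocking", "unbelievable", "exposed", "miracle",
    "secret", "you won't believe",
}


def sensationalism_score(headline: str) -> float:
    """Score a headline by its density of sensational terms (0.0 to ~1.0)."""
    text = headline.lower()
    words = text.split()
    if not words:
        return 0.0
    hits = sum(1 for term in SENSATIONAL_TERMS if term in text)
    return hits / len(words)


def flag_headline(headline: str, threshold: float = 0.1) -> bool:
    """Flag a headline when its sensationalism score crosses the threshold."""
    return sensationalism_score(headline) >= threshold
```

A headline like “SHOCKING secret the doctors don’t want you to know” gets flagged, while a plain civic headline does not, which is exactly the kind of coarse signal the authors note fake news producers learn to evade.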
Caplan et al. raise many unanswered questions in the struggle to defeat fake news. How can we arbitrate truth without giving more attention to fake news? Will Google’s AdSense allow users to control where their ads are placed? Can Facebook really reduce the influence of fake news promoters on their site all the time? Caplan, Hanson, and Donovan (2018) proposed these powerful strategies to combat fake news:
Trust and verify: by trust, they mean going beyond fact-checking and content moderation to incorporate interoperable mechanisms for digital content verification through collaborative projects with other news agencies;
Disrupt economic incentives: stop the pay-per-click mill of online advertising that gives advertisers no say in the type of site their ads will appear on;
Online platform providers need to ban accounts or otherwise not feature content based on falsehoods, click-bait, or spam; and
Call for news regulation within the boundary of the First Amendment’s Good Samaritan provision.
The blog was originally posted on the AACE Review by Sandra Rogers.
While fake news and information bubbles are not new, awareness of their impact on public opinion has increased. The Wall Street Journal (2016) reported on a study that found secondary and postsecondary students could not distinguish between real and sponsored content in Internet searches. This became apparent when I observed my college-bound niece google her bank, quickly click the name at the top of the results within the sponsored content, and then have her computer freeze from a potential malware attack. If teenagers cannot discern between promoted and regular content, imagine their encounters with fake news. The WSJ article recommended lateral reading (i.e., leaving a site to learn about it) and for adults to ask teens about their selection choices during Internet searches. In the instance with my niece, she was unaware of sponsored content. She also didn’t know that the first item in a browser’s search results is generally pushed to the top strategically through search engine optimization (SEO) with keywords (meta-tagging).
Figure 1. Tag cloud of words from blog post
How can we help? What are good heuristics to determine the quality of online content?
Solution 1. Critical Reading and Thinking Skills
Determine the purpose of the Website by its domain (e.g., .com, .org, .gov). Analyze its content and graphics. Analytical questions to consider are as follows:
Is it current? Broken hyperlinks indicate a lack of attention to the site.
Does it look professional? Is it well written?
Does it have a point of contact?
Does the writer provide proper citations?
What is the author’s tone? Is the content biased toward a view? If so, is it substantiated with empirical evidence? Does the author present the complete narrative or are certain important elements omitted?
Do the graphics illustrate a valid point? Do they make sense statistically?
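The currency check in the list above is one item that lends itself to automation: before judging a site by its broken hyperlinks, you first have to find them. Here is a small sketch, using only Python’s standard library, that collects a page’s outbound links; the live check itself (requesting each URL and inspecting the status code) is left out, since polite checking needs timeouts and rate limiting.

```python
from html.parser import HTMLParser


class LinkCollector(HTMLParser):
    """Collect the href values of anchor tags as a page is parsed."""

    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)


def extract_links(html_text: str) -> list:
    """Return every hyperlink target found in the given HTML."""
    collector = LinkCollector()
    collector.feed(html_text)
    return collector.links
```

Feeding each collected URL to a checker that flags non-200 responses would give a rough “is it current?” signal for the checklist, though a human still has to read the page to answer the other questions.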
Are you an IT specialist, researcher, or educator? Each field has different approaches to thinking. The strategies you select would depend upon the nature of the content, as different content requires different ways of thinking. Bruning, Schraw, and Norby (2011) refer to this as thinking frames such as how one would think about scientific inquiry and the use of research methods. If you’re an educator, you might be interested in a WebQuest I developed to help students create their own job aid for critical thinking. It asks students to tap into the critical lens of their future field of study.
Solution 2. Primary Sources
Combat fake news by seeking the original source of information. Take time to verify the authenticity of what is being shared online. Use various sources whenever possible for triangulation (e.g., interviews, observations, and documentation). This will ensure that what you read is corroborated by other articles presenting the same information. A good legislative resource is the U.S. Government Publishing Office, which provides congressional records, bills, codes, and Federal Register items. Their govinfo.gov website explains how to check the integrity of a government document found on the web by revealing its verification seal upon printing. It’s a digital signature placed in their PDFs; if the document has been modified, it breaks the verification.
Solution 3. Technology Resources
Use technology to decipher the trustworthiness of online content. Several Internet browser extensions provide visible alerts. For example, the Fake News Detector extension displays the word FAKE in red capital letters or orange for CLICKBAIT/Probably FAKE on the web page. It’s available in the Chrome store along with a few others and their user ratings. I started curating reputable fact-checking tools such as PolitiFact and Snopes on my Scoop.it! e-magazine, The Critical Reader. Some extensions are application-specific such as the Official Media Bias/Fact Check Extension that determines the veracity of articles on Facebook. It provides factuality (e.g., High), references, popularity, and positionality (e.g., left-center) at the base of the article on your Facebook feed. I personally use this one displayed in Figure 2.
Figure 2. Facebook post of Smithsonian article with Official Media Bias/Fact Check results
Solution 4. Seek Professional Content
Seek information from reputable researchers and educational leaders. Most professions adhere to ethical standards as a promise to their constituents. For example, the American Educational Research Association (AERA) states that members will not fabricate, falsify, nor plagiarize “in proposing, performing, or reviewing research, or in reporting research results” (AERA Code of Ethics, 2011). This standard is taken very seriously in the field of educational research. Those who didn’t heed ethical rules in the past have paid the price of being exposed by plagiarism tools, as was the case for the German Education Minister and Former Defense Minister, whose plagiarized dissertations led to the unseating of their government appointments (CNN World, 2013).
As an educator, I took the Kappa Delta Pi (KDP) pledge of fidelity to humanity, science, service, and toil, as an initiate into this international honor society. The ideal of science relates to the topic of this discussion. “…This Ideal implies that, as an educator, one will be faithful to the cause of free inquiry and will strive to eliminate prejudice and superstition by withholding judgment until accurate and adequate evidence is obtained. One will not distort evidence to support a favorite theory; not be blinded by the new or spectacular; nor condemn the old simply because it is old. All this is implied in the Ideal of Science” (KDP Initiation Ceremony, 2015, p. 4).
Do you have good fact-checking resources or more solutions to share? Please share those in the comments section.
This was previously posted on the AACE Review by Sandra Rogers.
In Media Manipulation and Disinformation Online, Marwick and Lewis (2017) of the Data & Society Research Institute described the agents of media manipulation, their modus operandi, motivators, and how they’ve taken advantage of the vulnerability of online media. The researchers described the manipulators as right-wing extremists (RWE), also known as alt-right, who run the gamut from sexists (including male sexual conquest communities) to white nationalists to anti-immigration activists and even those who rebuke RWE identification but whose actions confer such classification.
These manipulators rally behind a shared belief on online forums, blogs, podcasts, and social media through anonymous pranks or ruinous trolling; the usurping of participatory culture methods (networking, humor, mentorship) for harassment; and competitive cyber brigades that earn status by escalating bullying, such as the sharing of a target’s private information. The researchers proposed that the use of the more digestible term alt-right to convey the collective agenda of misogynists, racists, and fascists propelled their causes into mainstream discourse through various media streams. Therefore, I’ll use the term RWE instead.
MEDIA ECOSYSTEM MALLEABILITY
The Internet provides a shared space for good and evil. Subcultures such as white nationalists can converge with other anti-establishment doers on an international scale thanks to the connected world we live in. Marwick and Lewis reported on how RWE groups have taken advantage of certain media tactics to gain viewers’ attention such as novelty and sensationalism, as well as their interactions with the public via social media, to manipulate it for their agenda. For instance, YouTube provides any individual with a portal and potential revenue to contribute to the media ecosystem. The researchers shared the example of the use of YouTube by conspiracy theorists, which can be used as fodder for extremist networks as conspiracies generally focus on the loss of control of important ideals, health, and safety.
The more outrageous conspiracies get picked up by the media for their viewers, and in doing so, the media are partly to blame for their proliferation. In the case study provided with this article, The White Student Union, an influencer successfully sought moral outrage as a publicity stunt. Why isn’t the media more astute about this? “The mainstream media’s predilection for sensationalism, need for constant novelty, and emphasis on profits over civic responsibility made them vulnerable to strategic manipulation” (Marwick & Lewis, 2017, p. 47).
ONLINE ATTENTION HACKING
Marwick and Lewis shared how certain RWE influencers gained prominence based on their technological skills and manipulative tactics. One tactic they’re using is to package their hate in a way that appeals to millennials. They use attention-hacking tactics to increase their status, such as hate speech that is later recanted as trickster trolling, all the while gaining the media’s attention for further propagation. Then there are the RWE so-called news outlets and blogs that promote a hyper-partisan agenda and falsehoods. These were successful in attention hacking the nation in the run-up to the 2016 Presidential election at a scale that outpaced regular news outlets on Facebook (BuzzFeed News, 2016). Are they unstoppable?
The researchers indicated that the only formidable enemy of alt-right media is the opposing factions within its fractured, yet shared-hate, assemblage. Unfortunately, mainstream media’s reporting on political figures who engage in conspiracy theories, albeit noteworthy as to their mindset, raises those theories to the level of other newsworthy topics of debate. Berger and Luckmann (1966) referred to this as ‘reality maintenance’ through dialogue: reality confirmation through interactions that are continually modified and legitimized through certain conversations. The media needs to stop the amplification of RWE messages; otherwise, as Marwick and Lewis stated, it could gravely darken our democracy.
ONLINE MANIPULATORS SHARED MODUS OPERANDI
Marwick and Lewis reported the following shared tactics various RWE groups use for online exploits:
Ambiguity of persona or ideology,
Baiting a single or community target’s emotions,
Bots for amplification of propaganda that appears legitimately from a real person,
“…Embeddedness in Internet culture… (p. 28),”
Exploitation of young male rebelliousness,
Hate speech and offensive language (under the guise of First Amendment protections),
Irony to cloak ideology and/or skewer intended targets,
Memes for stickiness of propaganda,
Mentorship in argumentation, marketing strategies, and subversive literature in their communities of interest,
Networked and agile groups,
“…Permanent warfare… (p.12)” call to action,
Pseudo scholarship to deceive readers,
“…Quasi moral arguments… (p. 7)”
Shocking images for filtering network membership,
“Trading stories up the chain… (p. 38)” from low-level news outlets to mainstream, and
Trolling others with asocial behavior.
This is a frightful attempt at the social reconstruction of our reality, as the verbal and nonverbal language we use objectifies and rules the social order (Berger & Luckmann, 1966).
According to Marwick and Lewis, media manipulators are motivated by pushing their ideological agendas, the joy of sowing chaos in the lives of targets, financial gain, and/or status. The RWE’s shared use of online venues to build a counter-narrative and to radicalize recruits is not going away any time soon. This was best explained in their article as, with the Internet, the usual media gatekeepers have been removed.
Some claimed their impetus was financial rather than political, such as the teenagers in Veles, Macedonia, who profited around $16,000 per month via Google’s AdSense from Facebook post engagements with their 100 fake news websites (Subramanian, 2017). “What Veles produced, though, was something more extreme still: an enterprise of cool, pure amorality, free not only of ideology but of any concern or feeling about the substance of the election” (Subramanian, 2017). Google eventually suspended the ads from these and other fake news sites. However, as reported in Dead Reckoning, new provocateurs will figure out how to circumvent Google’s AdSense and other online companies’ gateways as soon as they develop new ones. This is because, as mentioned above, the RWE influencers are tech-savvy.
PUBLIC MISTRUST OF MAINSTREAM MEDIA
Marwick and Lewis acknowledged a long history of mistrust of mainstream media. However, the current distrust appears worse than ever. For example, youth reported having little faith in mainstream media (Madden, Lenhart & Fontaine, 2017). Republicans’ trust in the mainstream media was the lowest ever recorded by the Gallup Poll (Swift, 2016). Why has it worsened? They pinpointed The New York Times’ lack of evidence in various news articles about Iraq’s supposed nuclear arsenal during the Iraq War as an example of long-lasting readership dismay. The researchers reported on how a lack of trust in the mainstream media has pushed viewers to watch alternative networks instead. Moreover, the right-wing extremists’ manipulation of the media demonstrates the media’s weakness, which in turn sows mistrust. Marwick and Lewis acknowledged that the RWE subculture has been around the Internet for decades and will continue to thrive off the mainstream media’s need for novelty and sensationalism if allowed. I, for one, appreciate what Data & Society is doing to shed light on the spread of fake news and hatemongers’ agendas on the Internet.
“The more radical the person is, the more fully he or she enters into reality so that, knowing it better, he or she can transform it. This individual is not afraid to confront, to listen, to see the world unveiled.” ― Paulo Freire