The Challenges of Combating Online Fake News: A Review of ‘Dead Reckoning’


This article was originally posted on the AACE Review by Sandra Rogers.

As part of its Media Manipulation Initiative, the Data & Society Research Institute has produced informative articles on the many facets of fake news: its producers, sharers, and promoters, as well as those who denounce real news as fake. In Dead Reckoning (Caplan, Hanson, & Donovan, 2018), the authors acknowledged that fake news is an ill-structured problem that is difficult to define across its many disguises (e.g., hoaxes, conspiracy theories, supposed parody or satire, trolling, partisan-biased content, hyper-partisan propaganda disguised as news, and state-sponsored propaganda). Nevertheless, they argued that it must be defined in order to produce a problem statement; only then can a proper needs assessment and subsequent solutions be explored.

Based on a critical discourse analysis of information reviewed during their field research, they identified two common uses of the term fake news: as a label for problematic content and as a critique of mainstream media's efforts to produce trustworthy news. They reported that denouncing mainstream media as fake news serves to legitimize alternative media sources. Beyond defining fake news, the authors seek parameters for what makes news real in their effort to address information disorder.

Neither Man nor Machine Can Defeat Fake News

Kurzweil (1999) predicted that by 2029 humans will develop software that masters intelligence. However, the idea that cognition can be produced through computation has been refuted (Searle, 1980; McGinn, 2000). In Dead Reckoning, the authors described the problem of combating fake news as twofold: artificial intelligence (AI) currently lacks the capability to detect subtleties such as satire, and news organizations cannot provide the manpower to verify the vast proliferation of unmoderated global media. Worse, once a detection measure is in place, fake news producers find ways to circumvent it. Several machine-learning efforts are underway, such as PBS' NewsTracker and Lopez-Brau and Uddenberg's Open Mind.
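To make the twofold problem concrete, here is a minimal, hypothetical sketch (in Python, using scikit-learn) of the kind of bag-of-words text classifier such efforts build on; it is not NewsTracker's or Open Mind's actual code, and the toy headlines and labels are invented. It hints at why subtleties such as satire slip through: the model sees word statistics, not intent.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Invented toy data; a real system needs large, carefully labeled corpora.
headlines = [
    "Shocking cure doctors don't want you to know about",   # suspect
    "You won't believe what this celebrity did next",       # suspect
    "City council approves budget for road repairs",        # legitimate
    "Federal Reserve holds interest rates steady",           # legitimate
]
labels = [1, 1, 0, 0]  # 1 = suspect, 0 = legitimate

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(headlines, labels)

# A satirical headline reads much like its target, so the model can only
# guess from word statistics; it has no notion of the writer's intent.
print(model.predict(["Area man shocked to learn his satire was shared as news"]))
```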

Fake News Endangers Our Democracy & Leads to Global Cyberwars

The social media applications that have become part of the fabric of our society are used as propaganda tools by foreign and domestic entities. For example, in the months before the 2016 Presidential election, Facebook ads and users' news feeds were inundated with fake news that generated more engagement than the content of 19 major news outlets combined (BuzzFeed News, 2016). The authors shared how concerned parties (e.g., the news industry, platform corporations, civil organizations, and the government) have moved beyond whether fake news should be regulated to who will set standards and enforce regulations: "…without systemic oversight and auditing platform companies' security practices, information warfare will surely intensify" (Caplan, Hanson, & Donovan, 2018, p. 25).

The potential for fake news to reach Americans through digital news consumption via smartphone apps and text alerts compounds the issue. The Pew Research Center surveyed 2,004 randomly sampled American digital news consumers, who completed two short surveys per day for one week, and found these pathways to news: 36% used online news sites, 35% used social media, 20% searched the Internet, 15% used email, 7% relied on family, and the remaining 9% fell into other categories; because respondents could report multiple pathways, the percentages sum to more than 100 (Mitchell, Gottfried, Shearer, & Lu, 2017).

Strategic Arbitration of Truth

Caplan et al. state that organizations and AI developers approach defining fake news by type, feature, and signifier of intent (e.g., characteristics of common fake news providers, common indicators within fake news posts, and sharing patterns). For example, one common signifier of fake news is the use of enticing terms such as 'shocking'; a toy illustration of such a signifier check appears below. Digital intervention efforts include developing a taxonomy for content verification, drafting responsive corporate policy, banning the accounts of fake news promoters, tightening verification processes for posting and opening accounts, and teaching users how to identify fake news. See the Public Data Lab's Field Guide to Fake News and Other Information Disorders.
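As a minimal sketch, here is what a keyword-based signifier check might look like in Python. The term list and scoring are hypothetical; real systems weigh many signals (source history, sharing patterns, account behavior), not a single word list.

```python
# Hypothetical, illustrative signifier check; not any organization's actual detector.
ENTICING_TERMS = {"shocking", "you won't believe", "miracle cure", "exposed", "secret"}

def signifier_score(headline: str) -> int:
    """Count how many enticing, clickbait-style terms appear in a headline."""
    text = headline.lower()
    return sum(1 for term in ENTICING_TERMS if term in text)

for headline in ["Shocking! The secret they exposed", "Senate passes farm bill"]:
    print(headline, "->", signifier_score(headline))  # prints 3, then 0
```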

Caplan et al. raise many unanswered questions in the struggle to defeat fake news. How can we arbitrate truth without giving more attention to fake news? Will Google's AdSense let its users control where their ads are placed? Can Facebook reliably reduce the influence of fake news promoters on its site? Caplan, Hanson, and Donovan (2018) proposed the following strategies to combat fake news:

  • Trust and verify: by trust, they mean going beyond fact-checking and content moderation to incorporate interoperable mechanisms for digital content verification through collaborative projects with other news agencies;
  • Disrupt economic incentives: stop the pay-per-click mill of online advertising that gives advertisers no say in the type of site their ads appear on;
  • Ban accounts and de-prioritize offending content: online platform providers need to ban accounts, or decline to feature content, built on falsehoods, click-bait, or spam; and
  • Regulate: call for news regulation within the boundaries of the First Amendment and the 'Good Samaritan' provision (Section 230 of the Communications Decency Act).

For information on single-user technology and critical thinking skills to avoid fake news, visit my previous AACE Review blog on Navigating Post-truth Societies: Strategies, Resources and Technologies.

References 

Caplan, R., Hanson, L., & Donovan, J. (2018, February). Dead reckoning: Navigating content moderation after "fake news". Data & Society. Retrieved from https://datasociety.net/output/dead-reckoning/

Kurzweil, R. (1999). The age of spiritual machines: When computers exceed human intelligence. New York, NY: Penguin Books.

McGinn, C. (2000). The mysterious flame: Conscious minds in a material world. New York, NY: Basic Books.

Mitchell, A., Gottfried, J., Shearer, E., & Lu, K. (2017, February 9). How Americans encounter, recall, and act upon digital news. Pew Research Center. Retrieved from http://www.journalism.org/2017/02/09/how-americans-encounter-recall-and-act-upon-digital-news/

Searle, J. (1980). Minds, brains and programs. Behavioral and Brain Sciences, 3(3), 417–457. doi:10.1017/S0140525X00005756


P.S. Disinformation (a.k.a. fake news) is false content spread with intent to deceive, as opposed to misinformation, which is shared unintentionally.


Sandra Annette Rogers, Ph.D.


Navigating Post-Truth Societies: Strategies, Resources, and Technologies

The blog was originally posted on the AACE Review by Sandra Rogers.

The Problem

While fake news and information bubbles are not new, awareness of their impact on public opinion has increased. The Wall Street Journal reported on a Stanford study that found secondary and postsecondary students could not distinguish between real and sponsored content in Internet searches (Shellenbarger, 2016). This became apparent to me when I watched my college-bound niece google her bank, quickly click its name at the top of the sponsored results, and then have the computer freeze from a potential malware attack. If teenagers cannot discern promoted from regular content, imagine their encounters with fake news. The WSJ article recommended lateral reading (i.e., leaving a site to learn what other sources say about it) and that adults ask teens about their selection choices during Internet searches. My niece was unaware of sponsored content; she also didn't know that the first item in a browser's search results is often pushed to the top strategically through search engine optimization (SEO) with keywords (meta-tagging).

Figure 1. Tag cloud of words from blog post

How can we help? What are good heuristics to determine the quality of online content?

Solution 1. Critical Reading and Thinking Skills

Determine the purpose of the website by its domain (e.g., .com, .org, .gov), and analyze its content and graphics. Analytical questions to consider are as follows:

  • Is it current? Broken hyperlinks indicate a lack of attention to the site (a small link-checking sketch follows this list).
  • Does it look professional? Is it well written?
  • Does it have a point of contact?
  • Does the writer provide proper citations?
  • What is the author’s tone? Is the content biased toward a view? If so, is it substantiated with empirical evidence? Does the author present the complete narrative or are certain important elements omitted?
  • Do the graphics illustrate a valid point? Do they make sense statistically?
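As a small illustration of the currency heuristic above, the following Python sketch (assuming the third-party requests and beautifulsoup4 packages are installed) collects a page's outbound links and reports those that no longer respond; the URL is a placeholder.

```python
import requests
from bs4 import BeautifulSoup

def broken_links(page_url: str, timeout: float = 5.0) -> list[str]:
    """Return outbound links on a page that fail or answer with HTTP >= 400."""
    html = requests.get(page_url, timeout=timeout).text
    soup = BeautifulSoup(html, "html.parser")
    links = [a["href"] for a in soup.find_all("a", href=True)
             if a["href"].startswith("http")]
    dead = []
    for url in links:
        try:
            response = requests.head(url, timeout=timeout, allow_redirects=True)
            if response.status_code >= 400:
                dead.append(url)
        except requests.RequestException:
            dead.append(url)
    return dead

print(broken_links("https://example.com"))  # placeholder URL
```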

Are you an IT specialist, researcher, or educator? Each field has a different approach to thinking, and the strategies you select depend on the nature of the content, as different content requires different ways of thinking. Bruning, Schraw, and Norby (2011) refer to these as thinking frames, such as how one thinks about scientific inquiry and the use of research methods. If you're an educator, you might be interested in a WebQuest I developed to help students create their own job aid for critical thinking; it asks students to tap into the critical lens of their future field of study.

Solution 2. Primary Sources

Combat fake news by seeking the original source of information. Take time to verify the authenticity of what is being shared online. Use various sources whenever possible for triangulation (e.g., interviews, observations, and documentation); this ensures that what you read is corroborated by other articles presenting the same information. A good legislative resource is the U.S. Government Publishing Office, which provides congressional records, bills, codes, and Federal Register items. Its govinfo.gov website explains how to check the integrity of a government document found on the web: a digital signature embedded in each PDF reveals a verification seal upon printing, and if the document has been modified, verification breaks.
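govinfo.gov's seal relies on digital signatures embedded in the PDF itself; as a simplified stand-in for that idea, the sketch below uses a SHA-256 digest to show how any modification to a document breaks an integrity check. The document bytes are placeholders, and this is the underlying concept only, not govinfo's actual mechanism.

```python
import hashlib

def digest(data: bytes) -> str:
    """Return the SHA-256 hex digest of a document's bytes."""
    return hashlib.sha256(data).hexdigest()

original = b"Congressional Record, page 1 ..."   # placeholder document content
modified = b"Congressional Record, page 1 ...!"  # a one-character change

print(digest(original))
print(digest(modified))
print("intact:", digest(original) == digest(modified))  # False: document was altered
```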

Solution 3. Technology Resources

Use technology to help decipher the trustworthiness of online content. Several Internet browser extensions provide visible alerts. For example, the Fake News Detector extension displays the word FAKE in red capital letters, or CLICKBAIT/Probably FAKE in orange, on the web page; it's available in the Chrome store along with a few others and their user ratings. I've started curating reputable fact-checking tools such as PolitiFact and Snopes on my Scoop.it! e-magazine, The Critical Reader. Some extensions are application-specific, such as the Official Media Bias/Fact Check extension, which assesses the veracity of articles on Facebook. It provides factuality (e.g., High), references, popularity, and political position (e.g., left-center) at the base of the article in your Facebook feed. I personally use this one, as displayed in Figure 2.

Figure 2. Facebook post of Smithsonian article with Official Media Bias/Fact Check results

Solution 4. Seek Professional Content

Seek information from reputable researchers and educational leaders. Most professions adhere to ethical standards as a promise to their constituents. For example, the American Educational Research Association (AERA) states that members will not fabricate, falsify, or plagiarize "in proposing, performing, or reviewing research, or in reporting research results" (AERA Code of Ethics, 2011). This standard is taken very seriously in the field of educational research. Those who have flouted ethical rules have paid the cost of being outed by plagiarism-detection tools, as was the case for Germany's education minister and former defense minister, whose plagiarized dissertations cost them their government posts (Brumfield, 2013).

As an educator, I took the Kappa Delta Pi (KDP) pledge of fidelity to humanity, science, service, and toil, as an initiate into this international honor society. The ideal of science relates to the topic of this discussion. “…This Ideal implies that, as an educator, one will be faithful to the cause of free inquiry and will strive to eliminate prejudice and superstition by withholding judgment until accurate and adequate evidence is obtained. One will not distort evidence to support a favorite theory; not be blinded by the new or spectacular; nor condemn the old simply because it is old. All this is implied in the Ideal of Science” (KDP Initiation Ceremony, 2015, p. 4).

Do you have good fact-checking resources or more solutions to share? Please share those in the comments section.

References

Brumfield, B. (2013, February 6). German education minister loses Ph.D. over plagiarized thesis. CNN World. Retrieved from https://www.cnn.com/2013/02/06/world/europe/german-minister-plagiarism/index.html

Bruning, R. H., Schraw, G. J., & Norby, M. M. (2011). Cognitive psychology and instruction. New York, NY: Pearson.

Caplan, R., Hanson, L., & Donovan, J. (2018). Dead reckoning: Navigating content moderation after "fake news". Data & Society. Retrieved from https://datasociety.net/pubs/oh/DataAndSociety_Dead_Reckoning_2018.pdf

Code of ethics. (2011). American Educational Research Association. Educational Researcher, 40(3), 145-156. doi:10.3102/0013189X11410403

Ceremonies and rituals. (2015). Kappa Delta Pi International Honor Society in Education.

Shellenbarger, S. (2016, November 21). Most students don’t know when news is fake, Stanford study finds. The Wall Street Journal. Retrieved from https://www.wsj.com/articles/most-students-dont-know


Sandra Annette Rogers, Ph.D.


A Review of ‘Media Manipulation and Disinformation Online’

This was previously posted on the AACE Review by Sandra Rogers.

Digital screen with green code on a black background. Photo by Markus Spiske on Pexels.com.

In Media Manipulation and Disinformation Online, Marwick and Lewis (2017) of the Data & Society Research Institute described the agents of media manipulation: their modus operandi, their motivators, and how they have taken advantage of the vulnerability of online media. The researchers described the manipulators as right-wing extremists (RWE), also known as the alt-right, who run the gamut from sexists (including male sexual conquest communities) to white nationalists to anti-immigration activists, and even include those who rebuke the RWE label but whose actions confer such classification.

These manipulators rally behind shared beliefs on online forums, blogs, podcasts, and social media. Their methods include pranks and ruinous trolling under cover of anonymity; the usurping of participatory-culture methods (networking, humor, mentorship) for harassment; and competitive cyber brigades that earn status by escalating bullying, such as sharing a target's private information. The researchers proposed that the more digestible term alt-right, by packaging the collective agenda of misogynists, racists, and fascists, propelled their causes into mainstream discourse through various media streams. Therefore, I'll use the term RWE instead.

MEDIA ECOSYSTEM MALLEABILITY

The Internet provides a shared space for good and evil. Subcultures such as white nationalists can converge with other anti-establishment actors on an international scale thanks to the connected world we live in. Marwick and Lewis reported on how RWE groups exploit media tactics that attract viewers' attention, such as novelty and sensationalism, as well as outlets' interactions with the public via social media, to advance their agenda. For instance, YouTube provides any individual with a portal, and potential revenue, to contribute to the media ecosystem. The researchers shared the example of conspiracy theorists on YouTube, whose videos become fodder for extremist networks because conspiracies generally focus on the loss of control over important ideals, health, and safety.

The more outrageous conspiracies get picked up by the media for their viewers, and the media, in doing so, are partly to blame for their proliferation. In the case study accompanying the article, The White Student Union, an influencer successfully sought moral outrage as a publicity stunt. Why isn't the media more astute about this? "The mainstream media's predilection for sensationalism, need for constant novelty, and emphasis on profits over civic responsibility made them vulnerable to strategic manipulation" (Marwick & Lewis, 2017, p. 47).

ONLINE ATTENTION HACKING

Marwick and Lewis shared how certain RWE influencers gained prominence through technological skill and manipulative tactics. One tactic is packaging their hate in a way that appeals to millennials. They use attention hacking to raise their status, such as hate speech that is later recanted as trickster trolling, all the while gaining the media's attention for further propagation. Then there are the RWE so-called news outlets and blogs that promote a hyper-partisan agenda and falsehoods. These succeeded in attention hacking the nation in the run-up to the 2016 Presidential election, generating Facebook engagement at a scale that outpaced the regular news outlets (BuzzFeed News, 2016). Are they unstoppable?

The researchers indicated that the only formidable enemy of alt-right media is the opposing factions within its fractured, yet shared-hate, assemblage. Unfortunately, mainstream reporting on political figures who engage in conspiracy theories, albeit noteworthy as to their mindset, elevates those theories to the level of legitimate newsworthy debate. Berger and Luckmann (1966) referred to this as 'reality maintenance': reality is confirmed through dialogue and interactions, continually modified, and legitimized through certain conversations. The media needs to stop amplifying RWE messages; otherwise, as Marwick and Lewis stated, it could gravely darken our democracy.

ONLINE MANIPULATORS' SHARED MODUS OPERANDI

Marwick and Lewis reported the following shared tactics various RWE groups use for online exploits:

  • Ambiguity of persona or ideology,
  • Baiting a single or community target’s emotions,
  • Bots that amplify propaganda while appearing to be real people,
  • “…Embeddedness in Internet culture… (p. 28),”
  • Exploitation of young male rebelliousness,
  • Hate speech and offensive language (under the guise of First Amendment protections),
  • Irony to cloak ideology and/or skewer intended targets,
  • Memes for stickiness of propaganda,
  • Mentorship in argumentation, marketing strategies, and subversive literature in their communities of interest,
  • Networked and agile groups,
  • “…Permanent warfare… (p.12)” call to action,
  • Pseudo scholarship to deceive readers,
  • “…Quasi moral arguments… (p. 7),”
  • Shocking images for filtering network membership,
  • “Trading stories up the chain… (p. 38)” from low-level news outlets to mainstream, and
  • Trolling others with asocial behavior.

This is a frightening attempt at the social reconstruction of our reality, since the verbal and nonverbal language we use objectifies and orders our world (Berger & Luckmann, 1966).

DISINFORMATION MOTIVATORS

According to Marwick and Lewis, media manipulators are motivated by ideological agendas, the joy of sowing chaos in the lives of targets, financial gain, and/or status. The RWEs' shared use of online venues to build a counter-narrative and radicalize recruits is not going away any time soon; as the authors explain, the Internet removed the usual media gatekeepers.

Some claimed their impetus was financial rather than political, such as the teenagers in Veles, Macedonia, who earned around $16,000 per month via Google's AdSense from Facebook engagement with their 100 fake news websites (Subramanian, 2017). "What Veles produced, though, was something more extreme still: an enterprise of cool, pure amorality, free not only of ideology but of any concern or feeling about the substance of the election" (Subramanian, 2017). Google eventually suspended the ads from these and other fake news sites. However, as reported in Dead Reckoning, new provocateurs will figure out how to circumvent Google's AdSense and other online companies' gateways as soon as new ones are developed, because, as aforementioned, RWE influencers are tech-savvy.

PUBLIC MISTRUST OF MAINSTREAM MEDIA

Marwick and Lewis acknowledged a long history of mistrust of mainstream media. However, the current distrust appears worse than ever: youth report having little faith in mainstream media (Madden, Lenhart, & Fontaine, 2017), and Republicans' trust in mass media was the lowest ever recorded by the Gallup poll (Swift, 2016). Why has it worsened? The authors pointed to The New York Times' thinly evidenced reporting on Iraq's supposed weapons programs before the Iraq War as one source of long-lasting readership dismay. They also reported how a lack of trust in the mainstream media has pushed viewers to alternative networks. Moreover, right-wing extremists' manipulation of the media demonstrates the media's weakness, which in turn sows more mistrust. Marwick and Lewis acknowledged that the RWE subculture has been around the Internet for decades and will continue to thrive off the mainstream media's need for novelty and sensationalism if allowed. I, for one, appreciate what Data & Society is doing to shed light on the spread of fake news and hatemongers' agendas on the Internet.

Instructional Material

If you’re a college instructor of communications or teach digital literacy as a librarian, see the corresponding syllabus for this article. It provides discussion questions and assignments for teaching students about media manipulation. To teach your students how to combat fake news online, see my other AACE Review post on Navigating Post-Truth Societies: Strategies, Resources, and Technologies.

References

Berger, P. L., & Luckmann, T. (1966). The social construction of reality: A treatise in the sociology of knowledge. New York, NY: Anchor Books.

Madden, M., Lenhart, A., & Fontaine, C. (2017, February). How youth navigate the news landscape. Retrieved from https://kf-siteproduction.s3.amazonaws.com/publications/pdfs/000/000/230/original/Youth_News.pdf

Marwick, A. & Lewis, R. (2017). Media manipulation and disinformation online. Retrieved from https://datasociety.net/pubs/oh/DataAndSociety_MediaManipulationAndDisinformationOnline.pdf

Subramanian, S. (2017, February). Inside the Macedonian fake-news complex. Wired. Retrieved from https://www.wired.com/2017/02/veles-macedonia-fake-news/

Swift, A. (2016). Americans' trust in mass media sinks to new low. Gallup. Retrieved from http://www.gallup.com/poll/195542/americans-trust-mass-media-sinks-new-low.aspx


Sandra Annette Rogers, Ph.D.


Interview with the Creators of Hoaxy® from Indiana University

This post was previously published on the AACE Review by Sandra Rogers.

Figure 1. A Hoaxy® diffusion network showing the spread via Twitter of a misleading news article making claims about the HPV vaccine.

Falsehoods spread due to biases in the brain, society, and computer algorithms (Ciampaglia & Menczer, 2018). Compounding the problem, "information overload and limited attention contribute to a degradation of the market's discriminative power" (Qiu, Oliveira, Shirazi, Flammini, & Menczer, 2017). Falsehoods spread quickly in the US through social media because this has become Americans' preferred way to read the news (59%) in the 21st century (Mitchell, Gottfried, Barthel, & Shearer, 2016). While a mature critical reader may recognize a hoax disguised as news, others share it anyway: a 2016 US poll found that 23% of American adults had shared misinformation, unwittingly or on purpose, even as most respondents reported high to moderate confidence in their ability to identify fake news, with only 15% not very confident (Barthel, Mitchell, & Holcomb, 2016).

What’s the big deal?

The Brookings Institution warned that organized disinformation campaigns are especially dangerous for democracy: "This information can distort election campaigns, affect public perceptions, or shape human emotions" (West, 2017). Hoaxes are being revealed through fact-checking sites such as FactCheck.org, Politifact.com, Snopes.com, and TruthorFiction.com, which expose falsehoods and provide any corresponding truth in the details. For example, PolitiFact's Truth-O-Meter is run by the editors of The Tampa Bay Times; the tool proved so crucial for checking the veracity of candidates' statements during the 2008 Presidential campaign that PolitiFact won a Pulitzer Prize in 2009.

Hoaxy® (beta)

Hoaxy® takes fact-checking one step further and shows you who is spreading or debunking a hoax or disinformation on Twitter. It was developed by the Indiana University Network Science Institute and the Center for Complex Networks and Systems Research, with support from the Knight Prototype Fund and the Democracy Fund. The visuospatial interactive maps it produces are called diffusion networks and provide real-time data if you grant the program access to your Twitter account. Hoax purveyors be warned: Hoaxy displays in gray the actual Twitter users and bots promoting low-credibility claims and, conversely, in yellow the accounts fact-checking those claims.

Bot likelihood is determined by computer algorithms and scored using the science behind the Botometer, with red identifying the most bot-like accounts and blue the most human-like. The website's landing page provides trending news, popular claims, popular fact-checks, and a search box for queries. The site's dashboard lists influential Twitter accounts and the number of tweets for those sharing claims or fact-checking articles, with the corresponding Botometer score.
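For readers who want to see the idea behind these diffusion networks, here is a toy sketch in Python using the networkx package; it is not Hoaxy's actual code, and the account names and bot scores are invented.

```python
import networkx as nx

# Invented accounts and retweet edges; edges point from source to spreader.
G = nx.DiGraph()
G.add_edges_from([
    ("@magician_earl", "@account1"),
    ("@magician_earl", "@account2"),
    ("@account2", "@account3"),
])
# Hypothetical bot scores in [0, 1]: high = bot-like (red), low = human-like (blue).
bot_scores = {"@magician_earl": 0.1, "@account1": 0.9,
              "@account2": 0.8, "@account3": 0.2}

# The approximate epicenter of the cascade is the account retweeted the most:
epicenter = max(G.nodes, key=G.out_degree)
print("epicenter:", epicenter, "| bot score:", bot_scores[epicenter])
```

Hoaxy layers this kind of graph with interactive color coding and live Twitter data; the sketch only computes the cascade's epicenter.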

Use Hoaxy® to find out who is at the center of a hoax by clicking a node to reveal the Twitter account. It's also interesting to see who the outliers are and their degrees of separation. Select a node, and Hoaxy provides the Botometer score and whether the account quoted someone or was quoted (retweeted) on Twitter. For example, a magician named Earl is the approximate epicenter of misinformation claiming the human papillomavirus (HPV) vaccine caused an increase in cervical cancer among Swedish girls; see the vaccine query visualized in Figure 1. Based on sharing of the article from Yournewswire.com, 1,665 accounts promoted the claim and zero disclaimed it on Twitter as of 7/18/18. As for the facts: according to the Centers for Disease Control and Prevention (2018), HPV vaccines are safe and prevent cervical cancer.

It was a privilege to talk to the Hoaxy project coordinator, Dr. Giovanni Ciampaglia, on behalf of his co-coordinators Drs. Alessandro Flammini and Filippo Menczer:

What was the inspiration or tipping point to invent Hoaxy®?

We started Hoaxy because we could not find a good tool that would let us track the spread of misinformation on social media. The main inspiration was a project called Emergent (emergent.info), which was a really cool attempt at tracking rumors spreading through the news media. However, it was a completely manual effort by a group of journalists, and it was hard to scale to social media, where there are just so many stories published at once. So, we set out with the idea in mind of building a platform that would work in a completely automated fashion.

Since its creation in 2016, what are some of the overhauls that the Hoaxy® software program required for updates?

Hoaxy has evolved quite a bit since we first launched in 2016. The main overhaul was a complete redesign of its interface, during which we also integrated our social bot detection classifier called Botometer. In this way, Hoaxy can be used to understand the role and impact of social bots in the spread of both misinformation, and of low-credibility content in general.

What are some of the unexpected uses of Hoaxy?

We were not entirely expecting it when we first heard it, but several educators use Hoaxy in their classrooms to teach about social media literacy. This is of course really exciting for us because it shows the importance of teaching these skills and of using interactive, computational techniques for doing so.

What hoax is currently fact-checked the most?

Hoaxes are constantly changing, so it’s hard to keep track of what is a most fact-checked hoax. However, Hoaxy shows what fact-checks have been shared the most in the past 30 days, which gives you an idea of the type of hoaxes that are circulating on social media at any given time.

What’s the most absurd claim you encountered?

There are just too many… my favorite ones have to do with ancient prophecies and catastrophes (usually about asteroids and other astronomical objects).

Has Hoaxy® won any awards? (If not, what type of award categories does it fit in?)

It has not won an award (yet!). We are grateful however to the Knight Foundation Prototype Fund and to the Democracy Fund, who supported the work of integrating Botometer into Hoaxy.

I noted Mihai Avram’s (Indiana University graduate student) work on Fakey, a teaching app on discerning disinformation that is gamified. Are you involved with overseeing that project as well?

Yes, Filippo is involved in it. Mihai has also worked on Hoaxy; in fact, without him, the current version of Hoaxy would have certainly not been possible!

What are some other resource projects your team is working on now?

Hoaxy is part of the Observatory on Social Media (osome.iuni.iu.edu), and we provide several other tools for open social media analytics (osome.iuni.iu.edu/tools). We are working on improving Hoaxy and making it operable with other tools. The ultimate goal would be to bring Hoaxy into the newsroom so that reporters can take advantage of it as part of their social media verification strategies.

What type of research is critically needed to better understand the spread of disinformation and its curtailing?

We definitely need to better understand the “demand” side of dis/misinformation — what makes people vulnerable to misinformation? The complex interplay between social, cognitive, and algorithmic vulnerabilities is not well understood at the moment. This will need a lot of investigation. We also need more collaboration between academia, industry, civil society, and policymakers. Platforms are starting to open up a little to partnering with outside researchers, and there will be much to learn on all sides.

Is there anything else you would like to share?

Yes! We are always happy to hear what users think about our tools, and how we can improve them. To contact us you can use email, Twitter, or our mailing list. More information here: http://osome.iuni.iu.edu/contact/

About Giovanni Ciampaglia

Dr. Ciampaglia is an assistant professor in the Department of Computer Science and Engineering at the University of South Florida. Previously, he was an assistant research scientist and postdoctoral fellow at the Indiana University Network Science Institute, where he collaborated on information diffusion with Drs. Menczer and Flammini and co-created Hoaxy. Prior to that, he was a research analyst contractor at the Wikimedia Foundation. He has a doctorate in Informatics from the University of Lugano in Switzerland. His research interest is in large-scale, collective, social phenomena on the Internet and other complex social phenomena such as the emergence of social norms.

References

Barthel, M., Mitchell, A., & Holcomb, J. (2016, December). Many Americans believe fake news is sowing confusion. Pew Research Center. Retrieved from http://www.journalism.org/2016/12/15/many-americans-believe-fake-news-is-sowing-confusion/

Ciampaglia, G. L., & Menczer, F. (2018, June). Misinformation and biases infect social media, both intentionally and accidentally. The Conversation. Retrieved from http://theconversation.com/misinformation-and-biases-infect-social-media-both-intentionally-and-accidentally-97148

Mitchell, A., Gottfried, J., Barthel, M., & Shearer, E. (2016, July). The modern news consumer. Pew Research Center. Retrieved from http://www.journalism.org/2016/07/07/the-modern-news-consumer/

Qiu, X., Oliveira, D., Shirazi, S., Flammini, A., & Menczer, F. (2017). Limited individual attention and online virality of low-quality information. Nature Human Behaviour, 1, Article 0132. doi:10.1038/s41562-017-0132

West, D. M. (2017). How to combat fake news and disinformation. Brookings Institution. Retrieved from https://www.brookings.edu/research/how-to-combat-fake-news-and-disinformation/


Sandra Annette Rogers, Ph.D.


Minimum Technical Skill Requirements for Online Learners


One of my tasks as an instructional designer on my college campus is to provide learning guidelines and protocols for distance education. One way to prepare students for online learning is to provide a list of the minimum technical skills required and to recommend where they can seek help if they lack those skills. Below is what I prepared for our students. I'd love your feedback on it.


Students,

The following is a list of basic technical skills you should have to engage productively in an online course:
● use the learning management system (e.g., Schoology) tools to post discussions and upload assignments;
● use different browsers, clear browsing history, cache, and cookies, and refresh the screen;
● use email with attachments;
● create and submit electronic files in word processing program formats and save files to PDFs;
● copy and paste text;
● download and install software (e.g., media applications);
● download a media file for viewing or listening;
● use spreadsheet programs (e.g., Excel, Google Sheets, etc.);
● use presentation and simple graphics programs;
● use collaborative tools like Google Docs and shared folders on Google Drive; and
● use search engines to access digital books and articles from library databases.


Sandra Annette Rogers, Ph.D.


Using Google Suite for the Universal Design of Learning

Design for a gardening website interface, displaying tools and supplies as icons
Bad example: This Google Drawing was created for a doctoral mini-project, an interface design task for developing a gardening website with one of my peers in an online course, before I understood accessibility issues. Notice that not all icons are labeled, so the design would not be accessible to all; additionally, alternative text would need to be embedded with each image.

Google Suite, along with Google’s Chrome browser’s Omnibox and useful extensions, can be used to enhance the teaching of all learners with universal instructional design principles. Google Suite is the new name for these features: Google Apps (Docs, Forms, Sheets, Slides), Classroom, and Drive. This blog focuses on the use of technology to augment instruction through differentiation via scaffolding, formative assessments, and student collaboration. Google’s professional development opportunities and teacher resources are also addressed.

There are several efforts to design education with universal design in mind. Palmer and Caputo (2003) proposed seven principles for universal instructional design (UID): accessibility, consistency, explicitness, flexibility, accommodating learning spaces, minimization of effort, and supportive learning environments. The UID model recognizes those needs for course design; its main premise is equal access to education, extended to all types of learners and not just those with disabilities. For example, all learners can benefit from multi-modal lessons. Keep Palmer and Caputo's principles in mind as you develop differentiated instructional learning scenarios with Google Suite, and see my blog post to learn more about the universal design for learning.

My college is a Google Apps for Education campus, which means we have unlimited storage on our Drive and seamless access to Google Suite through our school Gmail. Speak with your Google Suite administrator to learn about the features and functions of your access, as some institutions, like my alma mater, block YouTube and Google+.

The following scenarios address possible technology solutions for teaching all learners. For instance, scaffolding supports different learners’ preferences, as well as the needs of lower-performing students. Formative assessments are important to obtain ongoing feedback on student performance; use these often. They can be formal or informal (practice tests, exit tickets, polls). Formative tests promote active learning, which leads to higher retention of information learned. Use the following list to add your ideas and scenarios for differentiated lesson planning.

| Scaffold Learning | Google Tools & Features | Formative Assessments | Your Ideas & Scenarios |
| --- | --- | --- | --- |
| Provide visuals for structure, context, or direction, plus just-in-time definitions | Google Drawings, Docs' Explore tool, & Drive | Students make their own graphic representation of a concept or complete guided tasks with the frame provided by an instructor. | |
| Provide authentic speaking practice prior to an oral test/presentation | Google Docs' Voice Typing, the Chrome browser's Omnibox for a timer, & Drive | Students work individually or in small groups, taking turns voice-typing their scripts/stories in a Google Doc within a timed parameter on a split screen. | |
| Check for comprehension to obtain data to drive instruction/remediation | Google Forms, Sheets, Classroom, & Drive (alternative: Google Slides' new feature allows asking questions & polling question priority live from a slide) | Students take a quiz on Google Forms to demonstrate knowledge after a lesson (exit ticket) or homework. Instructors receive Form responses in a Google Sheet; Sheets' Explore tool analyzes the data for visual display in data-driven discussions among teacher cohorts/supervisors. Grades auto-import from Forms to the Classroom gradebook. | |
| Students use an app with embedded choices to check their own grammar | Grammarly (free Chrome extension and/or app) | Students correct errors in their first writing drafts in the app or within online writing platforms (e.g., wiki, blog, or email). Grammarly is also available for MS Office and Windows but not for Google Docs; use its app to check Docs or other writing by pasting content into a New Document. | |
| High/low peer collaboration and/or tutoring | Google Apps, Classroom, & Drive | Students share settings on project Docs, Drawings, etc., to collaborate via text comments or synchronous video chat sessions. | |

Resources for Digital Literacy Skill Training

  • Did you know that Google provides lesson plans for information literacy?
  • Do you need to teach your students how to refine their web searches? See Google Support.
  • Internet safety tip: Recommend that students use incognito browsing on Google Chrome when conducting searches to reduce their digital footprint. See Google's YouTube playlist, Digital Citizenship and Security, and their training site for more information.

Accessibility Resources for Assistive Technology

Here’s the link to the G Suite User Guide for Accessibility.

  • ChromeVOX – Google’s screen reading extension for the Google Chrome browser and the screen reader used by Chrome Operating System (OS).
  • TalkBack – This is Google’s screen reading software that is typically included with Android devices. Due to the design of Android and its customizability by hardware manufacturers, TalkBack can vary and may not be included on some Android devices.
  • Screen Magnifier – This is the screen magnification software included with ChromeOS. The magnification function in ChromeOS doesn’t have a unique product name like other platforms.
  • Hey, Google – This is Google’s personal assistant, which is available on the Google Chrome browser, ChromeOS, and many Android devices.


References

Palmer, J., & Caputo, A. (2003). Universal instructional design: Implementation guide. Guelph, Ontario: University of Guelph.

Checklist for Novice Education Gaming Researchers

EverQuestII Paladin character is a human-like female puma in armor at home near Frostfang Sea

This is a cursory list of important concepts and items to consider when preparing to conduct educational research that involves the use of videogames.

  • Use media selection criteria (e.g., Chapelle's (2001) computer-assisted language learning media criteria or the revised version by Jamieson, Chapelle, & Preiss (2005)).
  • Determine the reading level of videogame text by analyzing chat logs with the Flesch-Kincaid readability index; make sure participants' reading levels are within two grade levels of the index (see the sketch after this list).
  • Use a vocabulary concordancer (e.g., Range software) to obtain frequently occurring words from chat-log texts for assessment.
  • Learn the commands pertinent to research analysis for capturing chat logs (e.g., /log) and/or images (e.g., print screen) to a computer station's public folder.
  • Determine participants' gaming literacy skills and the complexity of the game.
  • Determine participants' propensity for pathological gaming behavior: low social competence, high impulsivity, and excessive gameplay (i.e., around 30 hours per week) (Gentile et al., 2011).
  • Determine participants’ perceived relevance of gaming as a learning tool.
  • Provide videogame tutorial and ongoing support.
  • Provide explicit instruction on the benefits of strategies used to enhance learning.
  • Consider participants’ preferences for gaming session location, time, and features.
  • Consider Reese’s (2010) Flowometer to determine gamers’ self-perception of flow and other mental states of engagement to achieve optimal learning condition (i.e., advanced skill use during challenging gaming tasks).
  • Provide warning of photosensitivity to persons with epilepsy (Daybreak Games, 2016).
  • New! Use Discord as a communication backchannel during gameplay.
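As a rough sketch of the two text-analysis items above (reading level and word frequency), the following Python snippet computes a Flesch-Kincaid grade using a crude vowel-group syllable heuristic and tallies frequent words. The chat-log line is a placeholder; dedicated tools, or the Range software mentioned above, will be more accurate.

```python
import re
from collections import Counter

def syllables(word: str) -> int:
    """Crude syllable estimate: count groups of consecutive vowels."""
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def fk_grade(text: str) -> float:
    """Flesch-Kincaid grade = 0.39*(words/sentences) + 11.8*(syllables/words) - 15.59."""
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    total_syllables = sum(syllables(w) for w in words)
    return 0.39 * len(words) / sentences + 11.8 * total_syllables / len(words) - 15.59

chat_log = "You need a healer for that raid. Meet at the gate?"  # placeholder chat text
print("FK grade:", round(fk_grade(chat_log), 1))
print(Counter(w.lower() for w in re.findall(r"[A-Za-z']+", chat_log)).most_common(5))
```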

This list was shared during a gaming panel at the SITE 2017 conference in Austin, TX. Here’s the citation if you would like to reference it:

Willis, J., Greenhalgh, S., Nadolny, L., Liu, S., Aldemir, T., Rogers, S., Trevathan, M., Hopper, S. & Oliver, W. (2017). Exploring the Rules of the Game: Games in the Classroom, Game-Based Learning, Gamification, and Simulations. In Proceedings of Society for Information Technology & Teacher Education International Conference 2017 (pp. 475-480). Chesapeake, VA: Association for the Advancement of Computing in Education (AACE).

What advice would you add?

References

Chapelle, C. A. (2001). Computer applications in second language acquisition: Foundations for teaching, testing, and research. Cambridge, UK: Cambridge University Press.

Jamieson, J., Chapelle, C., & Preiss, S. (2005). CALL evaluation by developers, a teacher, and students. CALICO Journal, 23(1), 93-138. Retrieved from http://lib.dr.iastate.edu/cgi/viewcontent.cgi?article=1045&context=engl_pubs

Daybreak Games. (2016). Photosensitive warning [Webpage]. Retrieved from https://www.daybreakgames.com/photosensitive?locale=en_US

Gentile, D., Choo, H., Liau, A., Sim, T., & Li, D. (2011). Pathological video game use among youths: A two-year longitudinal study. Pediatrics, 127(2). doi:10.1542/peds.2010-1353

Range [Software application]. (2016). Retrieved from http://www.victoria.ac.nz/lals/about/staff/paul-nation

Reese, D. D. (2010). Introducing Flowometer: A CyGaMEs assessment suite tool. In R. Van Eck (Ed.), Gaming and cognition: Theories and practice from the learning science. Hershey, PA: Information Science Reference.