Misinformation abounds. This has always been the case, but the problem has become acute in the age of digital communication. As Mike Caulfield and Zeynep Tufekci have been showing in the weeks following the 2016 US presidential election, Facebook is particularly susceptible to this problem. Of course, Facebook is not alone. The ease with which we can share “news” on social media platforms makes it increasingly easy to contribute to the virality of falsehoods. Take this video, which reached viral status on Twitter. Those who passed it along claimed it was a video of anti-Trump protests in Los Angeles, but it is really a video of an anti-Maduro protest in Venezuela.

But this problem is bigger than the proliferation of misinformation in today’s media landscape. Sure, captioning that Venezuelan video as Los Angeles could have been an honest mistake. While I think that’s true for those retweeting it, I find it hard to believe that whoever originally shared it made an honest mistake, given the architecture and the chant, not to mention all the Venezuelan flags! I think it more likely that the originator of this misinformation had something insidious in mind: something that takes advantage of how easy it is today to pass along media uncritically.

Digital media scholars like Howard Rheingold and Cathy Davidson (and, before them, media critics like Neil Postman and science communicators like Carl Sagan) have been advocating new literacies for some time now ― literacies that include attention management, network smarts, collaboration, and crap detection, the modern spin on critical thinking. Rheingold begins his book Net Smart: How to Thrive Online:

The future of digital culture ― yours, mine, and ours ― depends on how well we learn to use the media that have infiltrated, amplified, distracted, enriched, and complicated our lives.

But recent events have impressed upon me even more the importance of these literacies, especially crap detection. Not only is there a lot of crap out there, but there are also increasingly active forces that seek both to deceive and to discredit their opponents. Whether we simply seek to know the truth, or to battle these forces through our own media activity, we need these literacies. We need to know what we’re up against, and how to counter it. As Rheingold writes, “The mindful use of digital media doesn’t happen automatically.” And while Rheingold was writing about the new challenges posed by digital media technologies, his plea for mindful media creation and consumption resounds still more loudly in an environment full of insidious actors.

So how do we do this? Let me begin with an example, something that happened to me this past month.

Amid a stream of comments I posted to Twitter about deceptive media practices, I received this two-part response from someone I don’t know:

And that does not count the obvious foreign influence of people like Soros.
I’ll throw RT and Putin in there just to keep the balance of the narrative, but the point stands.

This is subtle deception, and rather artful, when I think about it. Let’s unpack it.

The first message is a jab at businessman George Soros, someone the far right often accuses of manipulating leftist activists to suit his own aims. Though it is a common accusation among white nationalists, I hadn’t heard it before. So I Googled “Soros” to see what the story was. Among results like Wikipedia and Soros’s own web page are white nationalist, fake-news sites discussing the reasons Soros is “dangerous,” a co-conspirator with the Clintons, and the “hidden hand” behind anti-Trump protests. If I weren’t suspicious of these claims to begin with, these Google results might make me sympathetic to the deception. I’m already distrustful of billionaires trying to influence politics, education, and the media, and one can easily assume that top Google results mixed in with Wikipedia and the New York Review of Books would be fairly legitimate sources. The combination of innocuous- and reputable-sounding sources on the Google results page (gleaned just from their domain names) and a healthy dose of confirmation bias (he’s a billionaire “activist,” after all!) is dangerous for liberal readers trying to keep up with Twitter’s information bombardment.

The second part of the message is more insidious, though. It makes reference to RT (the state-run Russian news service), Putin, and the “balance of the narrative.” In an apparent attempt at balance, the author references two known sources of false information, one of which is admired by Trump and has been potentially (but not definitively) linked to the leaking of information that may have cost Hillary Clinton the presidency. This combines two subtle and effective forms of deception: linking a lie to a truth to make it more credible (we know that RT and Putin are foreign actors that spread falsehood through the media to manipulate people, so why not Soros?), and linking a conspiracy theory (Soros manipulating the left) to an actual conspiracy (Russian involvement in the DNC server hack) to give the conspiracy theory more credibility. This combination of truth with “truthy” lies, aimed at both the propagation of lies and the questioning of what we already know to be true, is a psychological abuse tactic called gaslighting. (For more on the impact of repeating “truthy” claims until they are taken for truth, see Audrey Watters’s “Education Technology and the Age of Wishful Thinking.”)

Let me clarify. This isn’t classic gaslighting, which tends to come as a sustained, subtle abuse of one person by another, usually someone in a close relationship with the abused. This is a new brand of digital gaslighting, in which large groups of people (and their sock-puppet Twitter accounts and bots) attack both individuals and groups, with the effect that the targeted group loses its collective grip on reality. (This was a common tactic during the GamerGate movement.) Individuals may not question their own sanity, but they question the reality of their friends and allies. They question the truth of things for which there is good evidence, and they become susceptible to truthy lies. And when they uncritically retweet those truthy lies, the untruths circulating alongside sometimes surreal truths fuel the uncertainty people have started to feel about their own movement. Worse yet, they give the attackers evidence to point to about the lies told by the movement, discrediting it in the eyes of moderates and the undecided. (Several in my social media circles suggested that this was the motivation behind the sharing of the video of Venezuelan protests, mentioned above, captioned as anti-Trump protests in Los Angeles.)

Facing tactics like these, digital literacy and critical thinking about digital media require far more than knowledge of classic rhetorical moves and the fallacies of informal logic. Ad hominem attacks, reductio ad absurdum, the intentional fallacy — these pale in comparison to coordinated digital deception, powered by sock-puppet Twitter accounts, SEO expertise, and a Facebook algorithm that privileges fake news. We need a new, critical digital literacy: a deep understanding of the technological, sociological, and psychological implications of connective digital media and how people use them, with a view towards mindful, ethical media creation and consumption. This is more than traditional information literacy applied to digital media, and more than technical knowledge of digital media production and network protocols. Connective digital media enable new modes of media creation and human activity, and critical digital literacy requires grokking those new modes, in addition to grasping the implications of porting “traditional” media practices to the digital. Seeing both the good and the bad ways in which these new modes are enacted (and the good responses to the bad), it is clear that we must engage them, especially those of us who educate.

But how do we do that?

Following are a few guidelines I’ve assembled, based on my experience and the experiences of others I have observed or researched.

Consumption

Double-check every claim before you re-share. As Mike Caulfield points out, social media platforms are designed to keep you on the platform. That means they make it easy to get the “gist” of a “news” story without going to the actual article, and they draw attention away from the name of the source in their news-story previews. It’s important to read the full article and to verify the source, its credibility, and, to the extent possible, the veracity of the specific claims being made. (Sites like Snopes.com can be helpful, as can Melissa Zimdars’s growing resource on fake and misleading news sources.)

Be wary of casual scrolling. Double-checking is hard work, but here’s the thing: casual scrolling impacts your memory, consciously and unconsciously, and reframes your expectations, which in turn shapes what sounds “truthy” and what sounds surprising or shocking. We need to keep our crap detectors on high alert and double-check everything we can, especially anything we find popping back into our minds later.

Don’t automatically disbelieve the “falsy.” As I hinted above, we live in a time in which many lies sound truthy and many truths seem surreal. “They say that, but they wouldn’t really do it.” Just as we need to be careful not to believe truthy claims without evidence, we need to be careful not to disbelieve the surreal simply because of our “gut” impression. This is particularly difficult when a simple conspiracy theory is offered as an explanation for a complex phenomenon. Sometimes the world is messy, and ― Occam’s Razor notwithstanding ― simple explanations can be a cop-out. We can’t let the speed of our various news feeds keep us from slowing down and thinking deeply when deep thinking is required. Wisdom and nuanced critique do not come quickly or cheaply.

Production

Do not exaggerate your own claims. When you go even a little bit beyond the truth in a claim, especially a critique of the other side, it has two negative effects: it weakens your own credibility, and it contributes to the uncertainty of your allies. Deceptive opponents can use both to great effect.

Be prepared to repeat the truth over and over. Misinformation is everywhere, and it can go viral easily. You’ll see the same misinformation repeatedly, whether from a single individual or, more likely, shared by a variety of people you follow. Corrections rarely have the same viral reach as fake news and propaganda, which are engineered for clicks, and each new deceptive post comes without the chain of comments that discredited the previous lies. Further, when many sock-puppet accounts share the same thing in different locations, the lies can start to dominate the search results unless the truth is repeated as well. Fighting such a campaign can involve a lot of copy-and-pasting into comment feeds and mentions, and success almost always requires a networked team effort. Sometimes we need to sound like a broken record, or rather a choir of distributed voices, steadily chanting a tenor against which untruths sound dissonant.

Curate good resources. The nature of social-media feeds means that information flows past quickly and often doesn’t stick. Dumping good information and critique into the stream is necessary, but not sufficient. If we each pick an issue we know well and begin to curate the best resources on it, perhaps on our own domains, we can collectively amass a growing network of robust resources to combat attempts at mass digital deception. Then, instead of sharing one article at a time into a stream that will quickly wash it out of people’s consciousness, we can share updates to growing resources, and that sharing serves a double purpose: bringing new information to light and bringing old, but still valuable, information back into the conversation without looking repetitive. A further advantage is that each time we share a curated resource, we share the same URL, making it look more popular to algorithmic platforms like social media networks and search engines, which will in turn rank it higher in people’s feeds and results.

Relation

Stand up for others. Some people ― because of their visibility, their marginality, or their social status ― will bear the brunt of the deception and abuse. If you’re someone who enjoys a relatively privileged status, be willing to play tag team and give them a break. We’re all in this together.

Be helpful to your allies who inadvertently share misinformation. Don’t attack them, especially publicly. Simply offer them the information you’ve found to be reliable, and if you’re sure they shared something false, encourage them to delete it and share the more accurate version. Again, we’re all in this together. That said…

Recognize that acquaintances and casual friends may turn out to be on a different side in the fight against deception and oppression. The truth is more important than casual friendships, especially when the stakes are high. Don’t be a jerk on purpose, but if a casual acquaintance tries to call you out for being too picky, too uncivil, maybe even mansplaining, be careful. Check your behavior, and make sure you’re not guilty of being a mansplainer, whitesplainer, etc., but then get back to work. When the stakes are high, so are emotions, and some people want their social media activity to be an escape. But when the stakes are truly high, we can’t afford that escape, and calls for “civility” amount to little more than calls to perpetuate oppression. As the old (and oft-misattributed) adage goes, “The only thing necessary for the triumph of evil is that good [people] do nothing.” When we witness oppression, abuse, or harassment and do nothing, we allow it to continue, to gain inertia, to be normalized. Even worse, when we turn a blind eye to oppression in a public venue (like social media), we make it less likely that others will stand up, as well. (This is known as the bystander effect: the more observers there are, the more each individual relies on others to take action.)

It can be easy to write off large-scale deception, systemic injustice, even psychological abuse as non-violent, and thus as less pressing, less worthy of our collective voices and collective action. But these forms of oppression are violence, even when they don’t lead to immediate physical harm. As Paulo Freire writes, “With the establishment of a relationship of oppression, violence has already begun” (Pedagogy of the Oppressed, 2005 edition, p. 55). That doesn’t necessarily mean that physical violence is the proper response. But when we don’t recognize acts like massive-scale gaslighting as truly oppressive acts, we run the risk of becoming mere observers. Or worse, spectators. While the appropriate response in any situation is highly contextual, oppression demands a response. That is the truly civil thing to do. When arguing about the best way to make coffee, civility means toning it down and preserving the relationship. But when faced with the rise of fascism and white supremacy, the civil thing to do is to take a stand.

Education

Empower your students to do the same. Critical pedagogy is about raising one’s critical consciousness, empowering students to be transformative agents in the world. With fascism and white supremacy on the rise, critical pedagogy ― education as the practice of freedom ― is the necessary and moral response. Don’t settle for instrumental uses of technology. Don’t stop with informal logic and its historical catalog of fallacies. Help awaken your students to these new practices of digital deception, and help them counter those practices effectively. If they are going to be transformative agents of change in the world, they need this knowledge.

So what does that look like in practice?

This semester, I taught Digital Storytelling (#DS106) at the University of Mary Washington. Most of the class is centered on digital media creation and on reflection about our own creations and those of others in the course (or in the broader network of courses). The students and I chose “The Cover” as this semester’s theme, so we explored remakes and remixes of a variety of types. In addition to exploring the technical and creative challenges of a variety of media (re)creations, we undertook several tasks that engaged these critical digital literacies, one of which was building Twitter bots.

Twitter bots are a fun way to engage in remixing text. (See Audrey Watters’s Twitter list of Twitter bots for some brilliant examples.) But as Mark Sample discusses, it is also possible to create “bots of conviction.” As my students created, tweaked, and reflected on their bots, many of them remarked that Twitter in general, and bots in particular, had a potential power they were previously unaware of, a power they didn’t realize was within their reach. Seeing the results of their own bots, and the approaches that others took, gave them more ideas about the positive things they could do, as well as the negative results that could emerge.
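
To make that concrete, here is a minimal sketch, in Python, of the kind of fragment-remixing logic such a bot might use. Everything in it is hypothetical (the source text, the function names), and the posting step is omitted; a library like tweepy could handle scheduling and posting to Twitter.

```python
import random
import re

# A stand-in source text (hypothetical; any uplifting text would work).
SOURCE = (
    "Be confident with who you are. My happiness will draw people in. "
    "You are enough, just as you are. Joy is a choice I make every day."
)

def sentence_halves(text):
    """Split the source into sentences, then split each sentence in half."""
    sentences = [s.strip() for s in re.split(r"[.!?]", text) if s.strip()]
    halves = []
    for s in sentences:
        words = s.split()
        mid = len(words) // 2
        halves.append((words[:mid], words[mid:]))
    return halves

def remix(halves):
    """Pair the opening of one sentence with the closing of another."""
    opening, _ = random.choice(halves)
    _, closing = random.choice(halves)
    return " ".join(opening + closing) + "."

if __name__ == "__main__":
    halves = sentence_halves(SOURCE)
    for _ in range(3):
        # A real bot would post each remix on a schedule instead of printing it.
        print(remix(halves))
```

Even a toy like this shows how quickly recombination produces sentences the source author never wrote, which is exactly where the bot “rebellion” described below comes from.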

One student wrote a bot that remixed a positive, uplifting text in order to give readers novel encouraging messages on a regular basis. Tweets like “Be confident with who you are” and “My happiness will draw people in” formed the bulk of the bot’s output. But interspersed were messages like “Stop trying to be you” and “I’m happy because you looked like someone else.” In her blog post about her bot-making experience, she writes:

After watching my bot rebel for 8 hours, I decided to fight back! I changed the original punctuation and took out words like… “hate” and “unhappy” so that the bot did not have negative words to include while pulling from the source.
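
Her fix amounts to filtering the source text before the bot ever recombines it. A hypothetical reconstruction of that step (the word list is illustrative, extrapolated from the two words she names; this is not her actual code):

```python
# Words the bot should never have available to recombine
# (an illustrative list; only "hate" and "unhappy" come from her post).
BLOCKLIST = {"hate", "unhappy", "stop", "alone"}

def clean(text):
    """Drop blocklisted words from the source before remixing."""
    kept = [word for word in text.split()
            if word.lower().strip(".,!?") not in BLOCKLIST]
    return " ".join(kept)

# The bot would then remix clean(SOURCE) instead of SOURCE.
```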

Though this assignment was primarily technical, many critical lessons were learned: making a bot (even a sock-puppet account) is relatively easy; social tools can be used for education, inspiration, and activism; even simple algorithms have unintended consequences; and who is responsible when an algorithm does unexpected harm is a complex question. (Thankfully, that last one was learned speculatively, not experientially.) And though the bots’ “rebellion” was mostly comical or nonsensical, many of the students were able to see in those algorithmic failures the seeds of other, less innocuous things they had seen on social media, and to draw important connections. Now when they see a sock-puppet bot or problematic content in an algorithmic “news” feed, they have a better idea of what they are seeing, and of how they might counter it.

These issues are a primary focus for many of my colleagues at UMW, particularly those asking students to work on the open web as part of our Domain of One’s Own initiative. We recently released a Domain of One’s Own Curriculum, a set of modules that can be incorporated into courses, helping students learn to manage their own digital identity and to think critically about (mis)representation online. Three of these modules directly relate to the issue of critical digital literacy:

  • Digital Identity explores “how online identity is created, managed, and represented.”
  • Digital Citizenship helps students “think critically about the ‘rules’ of citizenship on the web” and “how they will engage in online environments.”
  • Representation dives deep into the realm of online harassment and systemic prejudice, and interrogates “the various identities that impact how one is received and allowed to perform online. Understanding the various racist and sexist underpinnings of supposed neutral platforms and programs.”

These modules are designed to be incorporated into a variety of types of courses, so if you are working online with your students and don’t know where to start, they can give you a launching point as you help them build their critical digital literacy.

A call to action

In Harry Potter and the Goblet of Fire, headmaster Albus Dumbledore warns that a time will come when we will have to make a choice “between what is right and what is easy.” Following the path laid out by corporate media platforms is easy. Believing a truthy lie is easy. Teaching old content with new digital tools is easy. Clicking “share” or “retweet” is easy.

But what is right? What is true? That’s the stuff of education.

It never has been easy.

But we need it ― now more than ever.