During the summer of 2013, I scratched my seven-year itch. I broke up with Facebook.

I first met Facebook in 2006, when the platform was in many ways an energetic youth bent on embracing all of life’s pleasures. Our relationship, like so many others, was full of vibrancy in the beginning. We shared our hopes and dreams, pictures of silly and significant events, and even music to chronicle our strengths and shadows. As our partnership matured, and Facebook began trying on new looks like the newsfeed, different privacy options, and even chat and mail, the once dazzling space dulled. We used to communicate so clearly, uninhibited by outside influence. That quickly changed. Almost overnight, Facebook began acting out, hiding important posts on the newsfeed and showing only what it thought was most relevant or important. I began distancing myself in 2010. I could feel the shift in the site’s behaviors and habits, and I wasn’t sure then what that change meant for me or our relationship. In the final few years, I still occasionally flirted with Facebook, but I kept my distance; and in 2013, I finally left Facebook for good.

The shift from Facebook occurred for a couple of reasons, and these motives helped me through the transition. The revelations from Edward Snowden, reported by journalist Glenn Greenwald, about the National Security Agency’s deep ties to data collection on citizens both at home and abroad piqued my interest in public debates on privacy and surveillance in Internet and mobile app technology. The NSA collected Internet and telephone communications data, including data from Facebook, under the PRISM program, a directive born of a post-9/11 culture of assessing and preventing threats to the American public.

Mostly, though, the addition of Facebook’s Graph Search, a super search engine based on phrases, affirmed how much data the social media company had about users and established Facebook’s willingness to share such data with just about whoever wanted access. This feature became my tipping point to leave Facebook, and it began my journey of talking with my colleagues and students about online surveillance.

On Facebook, when we engage in any activity, our clicks are recorded, our newsfeed posts archived, and our digital lives monitored. We may sense this happening because we see personalized advertisements alerting us to products we viewed on other sites. Underneath the interface, however, lie computer algorithms that filter the content that ends up in our newsfeeds. Eli Pariser, who has an excellent TED talk and book on what he calls the “filter bubble,” chronicles what we experience in some online spaces: tracking technologies mine our browsers for data; computer algorithms take the data, run calculations, and output personalized information for the user.

While tracking technologies may seem trivial in light of the benefits (including free access) of the sites that use them, filtering content means information is refracted online and in mobile spaces, which leads to a myopia of information. With Facebook, for example, users no longer see all of the content from their friends; the algorithms filter content based upon proximity between users. The more a user comments on or views another person’s posts or page, the more that person’s posts will appear in the user’s newsfeed. As Pariser notes, we each have our own unique filter bubble, which means our preferences, habits, and identities are categorized by algorithms for targeted advertising and search results. Nor is Facebook the only company that uses personalization. Tech giants like Amazon, Google, and Netflix, among many others, use algorithmic surveillance to personalize products and goods, movie recommendations, and even search results.
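To make the idea concrete, here is a minimal sketch, in Python, of how interaction-based filtering might work. It is a toy model, not Facebook’s actual algorithm; the scoring rule, field names, and cutoff are all assumptions for illustration.

```python
# Toy model of proximity-based newsfeed filtering (not Facebook's real algorithm).
# Assumption: the more a user has interacted with a friend, the higher that
# friend's new posts rank; everything below a cutoff is silently dropped.

interactions = {   # hypothetical counts of the user's past clicks and comments
    "alice": 42,
    "bob": 3,
    "carol": 0,
}

posts = [          # hypothetical new posts from friends
    {"author": "carol", "text": "Big news about my new job!"},
    {"author": "alice", "text": "Look at this cat."},
    {"author": "bob", "text": "Thoughts on net neutrality."},
]

def personalize(posts, interactions, cutoff=1):
    """Rank posts by past interaction count and hide low-scoring authors."""
    scored = [(interactions.get(p["author"], 0), p) for p in posts]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [p for score, p in scored if score >= cutoff]

for post in personalize(posts, interactions):
    print(post["author"], "->", post["text"])
# Carol's "big news" never appears: the filter bubble in miniature.
```

Even in this toy version, the consequence is visible: content from a less-visited friend simply disappears, and the user never learns what was filtered out.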

Ultimately, the discussion of a refracted online experience points to a larger conversation about the persuasive abilities of computer algorithms and code. Certainly, it is possible to consider computer technologies simply as tools performing functions. At a deeper level, however, when we [re]act to the personalized content on our screens, we have been influenced by computer algorithms to change our behaviors, beliefs, intentions, or motivations. If algorithms can filter our content and modify our thinking through the content they provide, then we might consider computer algorithms and code persuasive agents.

If we consider algorithms persuasive agents, then the ways we teach our students (and ourselves) may shift. For example, in the classroom, some of us train our students to work with the tools and content of online, networked spaces. We may compel students to learn how to use tools like Dropbox for document archival and sharing, Google Drive for collaborative exchanges, Facebook for class group discussion, even Tapestry, a mobile app of user-generated click-through stories… and the list goes on. While we find comfort in the affordances of the collaborative Web 2.0 to share content, what if we integrated discussions about the persuasive agents embedded in these tools? How might we develop a renewed critical digital pedagogy addressing these concerns?

After all, we share these digital tools in the service of preparing students, in the words of researcher Stuart Selber, for the functional (how to use the tool), critical (how to analyze the tool), and rhetorical (how to create with the tool) literate practices necessary for professional and personal communication in the age of the digital. As the digital seduces us to continue teaching students how to be critical and reflective prosumers (cf. George Ritzer & Nathan Jurgenson, who outline this portmanteau of producer and consumer and link it to the unpaid labor companies collect from online users) of technology, we also have to remind ourselves, and our students, that many of these platforms and applications track our movements online and create rich data portraits of who we are from our digital debris. As a result, if we continue to inhabit these online spaces and encourage our students to do so as well, we must also attend to why we should care about online surveillance and persuasive algorithms.

Writing teachers and researchers, for example, have long recognized the need for critical engagement and inquiry in the classroom. This year marks the 20th anniversary of Cynthia Selfe and Richard Selfe’s “The Politics of the Interface,” an intellectually stimulating examination of the ideological and material power relations embedded in computing technology, which was also the theme of this year’s Computers & Writing conference. Selfe & Selfe’s work helps frame the call educators should heed when addressing online surveillance.

If we, as teachers, ask our students to go online to sites that use tracking technologies, then we bear responsibility for teaching students critically literate practices of analysis, evaluation, explanation, interpretation, and observation in connection with digital surveillance. Why? Quite simply, we are simultaneously shaping and being shaped by the digital technologies and the personalized results we see on our screens. Cookies and beacons monitor our clicks and keystrokes for web personalization and tailored recommendations, and because of the filter bubble we see only a fraction of the content on the web. Confronted with personalized advertisements and recommendations, we might ask: what are we not seeing? How are we being categorized? Might this be a form of discriminatory practice?
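For readers who have not looked at one up close, here is a minimal sketch of the kind of request a web beacon (tracking pixel) can fire on page load. The domain, path, and parameter names below are hypothetical; the point is simply that each page view can ship an identifier plus context about what we are reading off to a third party.

```python
from urllib.parse import urlencode

# Hypothetical example of a tracking-pixel request built on page load.
# The endpoint and parameter names are invented for illustration only.
params = {
    "uid": "3f9a1c77",  # identifier stored in a third-party cookie
    "page": "/articles/surveillance-and-pedagogy",
    "ref": "https://www.google.com/search?q=filter+bubble",
    "event": "pageview",
}
beacon_url = "https://tracker.example.com/pixel.gif?" + urlencode(params)
print(beacon_url)
# A 1x1 transparent image comes back; the real payload is the query string
# and the cookie header, which let the tracker link this view to past visits.
```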

Each time we engage online, we have to attend to what Cynthia Selfe calls the “ideological freight” embedded in the interface. As teachers, we are poised to give students the critical skills necessary to untangle surveillance and tracking technologies as part of instruction. As educators, we’re prepared for these discussions; we already teach students how to evaluate sources online, for example. Why not teach students, as Heidi McKee eloquently argues in her article about net neutrality, how to evaluate privacy policies (and, I would add, surveillance) online as well? In many classes we ask students to assess credentials, bias, publication type, and the ethos of the author and publisher. We can integrate instruction that asks students to assess privacy policies (Stephanie Vie has a forthcoming chapter in Technical Communication and Video Games that explores privacy policies and user agency), cookie use, and personalized advertising. Armed with this information, we can make better decisions about which sites we feel comfortable sharing our data with, or when we want to opt out, even if the decision to do so feels uncomfortable at first.

Because of the tracking mechanisms embedded in the architecture of the Internet and app technologies, educators need a frame for critical awareness of online surveillance that aligns with critical thinking skills. As McKee notes, teachers do not have to reinvent curricula in order to have students engage with the underlying norms of the interface. Instead, educators can fold these conversations into existing learning outcomes and goals. When showing students a new online tool, educators can highlight the privacy policy and the terms of service, and even discuss third-party cookie collection. These critical micro-gestures, incorporated and scaffolded throughout a semester, may help students learn that evaluation, assessment, and observation occur with each interaction in an online space, and reinforce a habit of engaging critically both in and out of the classroom.

The following activities offer starting points for educating students about online tracking and surveillance. They connect with a criticality of analysis, evaluation, explanation, interpretation, and observation, but they represent only one path into the discussion.

Analysis: Have students analyze the different types of personalized content they see on Google, Facebook or even Amazon to learn more about the filter bubble and the refracted content they experience online.

Evaluation: Ask students to install and use privacy tracking tools to see what companies track them online and learn ways to block or monitor the trackers if need be. Students may also conduct evaluative research into how companies use data.

Interpretation: Incorporate policy papers and educational materials authored by non-profit organizations to help students understand the larger debates about online surveillance and privacy.

Explanation: Encourage students to craft arguments about the benefits or constraints of surveillance by having them explain the effects surveillance has upon users.

Observation: Visit marketing and behavioral advertising sites and run scripts that check for browser cookies to learn which companies observe our web habits (see the sketch after this list for one way to start).
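As one concrete way into the Observation activity, here is a minimal Python sketch that fetches a page and lists the cookies it sets. This is only a partial view, under stated assumptions: it sees cookies set in the HTTP responses themselves, not trackers loaded by JavaScript, and the URL below is a placeholder to be swapped for a site students want to study.

```python
import requests  # third-party library: pip install requests

# Minimal sketch: fetch a page and report which domains set cookies on us.
# Trackers that load via JavaScript will not appear here; browser developer
# tools or a privacy extension are needed to observe those.
URL = "https://www.example.com/"  # placeholder; substitute a site to study

session = requests.Session()
response = session.get(URL, timeout=10)

print(f"Fetched {response.url} ({response.status_code})")
for cookie in session.cookies:
    # Crude heuristic: flag cookies whose domain does not appear in the page URL.
    flag = "" if cookie.domain.lstrip(".") in response.url else " (third party?)"
    print(f"  cookie {cookie.name!r} set for domain {cookie.domain}{flag}")
```

Students can then compare this limited, script-level view with what a browser privacy extension reports for the same page, which makes the gap between visible and invisible tracking part of the lesson.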

The surveillance of web habits by so many companies may lead people to opt out of participating in sites like Facebook, just as I did last summer. I realize that leaving sites that use an immense amount of tracking technology does not address the core concern of companies collecting vast amounts of prosumer data and possibly selling that data to other organizations. But something has to change. Do we want to live our digital lives being constantly tracked? Do we want our legally tracked digital data sold and possibly used in ways that harm rather than support us? Make no mistake: we are on a precipice. Tracking technologies will only increase, especially with Google’s work on the “Internet of Things,” and digital surveillance may become more pervasive and invasive than it already is. I think educators can play an important role in fostering critical engagement with surveillance technologies and in helping others understand the freight certain websites bear.

By developing critical engagement with students, we as educators may help them (and ourselves) make informed decisions about which sites and trackers to opt in to and out of online. We may decide to fully opt out of, or block, certain tracking technologies. We may even initiate classroom projects where students investigate surveillance and privacy online and write to legislators, or even to leaders at tech giants like Google and Facebook, about increasing protections for consumers in spaces that use tracking technologies. It will take a concentrated effort to have our voices heard, and perhaps we may do so in ways that allow us to sustain our relationships with web and mobile companies instead of leaving. We can no longer afford to sit passively by while surveillance technologies harvest our data; there is too much at stake. We can educate our students, who may become future policy leaders, computer programmers, and educators, on the real effects of online surveillance, and hope that both we and they can lead the way to positive change online. With such a concentrated effort, we are in a position to change our relationships with companies that use surveillance for the better, mending broken connections by focusing on positive and reciprocal relations for all parties involved. Perhaps one day, Facebook and I will find our common ground and reconnect, sharing our lives once more without the ever watchful eye of surveillance tracking every movement.