“Learners are classified based on their patterns of interaction with video lectures and assessments, the primary features of most MOOCs to date.” — René F. Kizilcec et al.

It’s the first thing in the name. MOOCs are primarily massive. They reach huge numbers of students. “Graduating even 5 percent of 100,000 students in a MOOC provides many instructors with substantially greater reach than an entire lifetime of teaching in a conventional classroom.” Educause’s “What Campus Leaders Need to Know about MOOCs” starts with scale: “MOOCs (massive open online courses) are courses delivered over the web to potentially thousands of students at a time.”

With this massive scale comes great responsibility. Discussions of teaching at “mass” scale can set the tone and context of a conversation about pedagogy: a conversation with faculty, CIOs, and CEOs, but also, potentially, a conversation which limits the voice of students. Especially at a time when Diane Dagefoerde of Ohio State University can say “We have a couple of MOOCs going on right now… I’m sure we all do, right?” in an Educause Top Ten Issues in Higher Education IT webinar, we all need to be careful how this conversation begins. The quote that starts this article is drawn from a paper co-written by three Stanford faculty, which argues that a reasonable classification of learners in MOOCs can be drawn from their interactions with video lectures and assessments (multiple-choice quizzes and unmoderated forums).

I find it difficult to express how silly I find this idea to be. Drawing conclusions about the interests and purposes of learners from video lecture hours (the virtual equivalent of “butts in seats”), from discussions so bad that one Coursera professor left a MOOC because of them, and from multiple-choice quizzes seems like an awfully thin set of data points for so huge a population. To be fair, the Stanford article by Kizilcec, Piech, and Schneider goes on to encourage pedagogical inventiveness: “Other innovative designs of MOOC instruction, content, or platform features — based on principles of the learning sciences or human-computer interaction — should likewise be subject to experimentation and evaluation.” Despite this call, Coursera’s founders and others have, more recently, continued to spread the message that data from the currently limited field of MOOCs can paint an accurate picture of students’ desires. Even some who decry the current quality of MOOCs write about new qualities arising in response to the presumed “intention of the learner.”

In a recent Educause article, Daphne Koller, Andrew Ng, Chuong Do, and Zhenghao Chen (all four work for Coursera; three of them are Stanford professors) expand and explain the Stanford classification, saying “retention in MOOCs should be considered carefully in the context of learner intent, especially given the varied backgrounds and motivations of students who choose to enroll.” They bring to the conversation a host of data and analytics from their impressively large user base, as well as a framework, established by Koller et al. and Phil Hill, for understanding patterns of MOOCer behavior based upon their activity in the MOOC.
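To make concrete how little information such an activity-based classification actually works with, here is a minimal sketch, in Python, of the kind of clustering analysis the Kizilcec, Piech, and Schneider paper describes. The label scheme, the numeric encoding, and the toy data are my assumptions for illustration; this is not the authors’ code.

```python
# A minimal sketch (not the authors' code) of an activity-based learner
# classification: each learner is reduced to a short vector of weekly
# engagement labels, then clustered into a handful of "types".
import numpy as np
from sklearn.cluster import KMeans

# Assumed encoding per week: 3 = on track (submitted the assessment),
# 2 = behind (watched the video, assessment late or missing),
# 1 = auditing (video only), 0 = out (no activity).
learners = np.array([
    [3, 3, 3, 3, 3, 3],   # keeps up all six weeks
    [3, 2, 2, 1, 0, 0],   # falls behind, then disappears
    [1, 1, 1, 1, 1, 1],   # watches videos, never submits work
    [3, 1, 0, 0, 0, 0],   # samples the first weeks, then gone
    [1, 0, 0, 0, 0, 0],   # watches one video and leaves
    # ... tens of thousands more rows in a real MOOC
])

# Four clusters, roughly echoing the completing / auditing / disengaging /
# sampling groupings reported in such analyses.
model = KMeans(n_clusters=4, n_init=10, random_state=0).fit(learners)
print(model.labels_)  # the "type" assigned to each learner
```

What the sketch makes plain is how much is discarded before the analysis even begins: a handful of integers per person, and nothing about intent, context, or the quality of what the learner actually encountered.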

Student activity, however, is not so limited. It is not predetermined only by the “type” of learner, nor is its story told by automated data-gathering systems. Students are alive; they respond to quality. And the quality of a MOOC depends on the quality of the pedagogy, the quality of the content, how it is strung together, and how a community arises out of it.

And that is the problem. Koller, Ng, Do, Chen, Hill, Kizilcec, Piech, Schneider and others evaluate the intention of MOOC students by their behavior in those MOOCs, but miss one very important and obvious possibility: What if the students’ behavior in the MOOC is influenced by the quality of the resources and community they find there?

Students with great intentions — to complete, to engage in a broader community, to gather research for a project — will likely not continue to engage in a MOOC if the quality of the MOOC does not serve their intentions better than other resources. Instead of confronting this issue, Koller and others throw out the red herring of cost: “Since there is no financial cost or barrier to entry, there is little reason to believe that even a majority of the students who enroll in a MOOC intend to complete the class.”

But we are not only talking about completion, and these are not students who try for a few weeks and then trickle out as they get the piece of the course they wanted. Koller et al. also write that of the 50-60% of students who watch the first lecture, only about 15-20% ever submit an assignment. Phil Hill records the sharp drop-off of viewers after the first week. In a Bioelectricity MOOC, 8,000 students watched videos — but only 3,000 of those watched the week 2 intro. Hill reports that anywhere from 60-80% of a course’s students are “lurkers” who participate in the first week but, by the second week, are gone.

I am also speaking from my own experience. I am what Koller et al. would class as a “passive participant” and what Phil Hill would call a “lurker,” but I did not have the intentions those terms imply. I signed up for several Coursera MOOCs in the last quarter, one of which was the ill-fated Georgia Tech “Fundamentals of Online Education — Planning and Application” course.

I was excited — I am an educational technology professional, pursuing a graduate degree in educational psychology. Quality online learning is a daily point of discussion in my job and my life. I intended to complete the course. In the week before the course was set to start, I received an email asking me to join one of the discussion groups. Very good, I thought to myself: well-plotted group discussion is a much-used and much-praised component of online learning. I clicked on the “join group” link. The Google Spreadsheet was down. It had only been a few hours since the email arrived. The next email I received read:

“It has been an exciting few hours. The course has just started and some of you have managed to delete entire rows and columns in Join A Group Google spreadsheet, removed people from their groups, crashed the Google server, and rebuilt the page back up. This is exciting for me because you are figuring out how to work with each other. I am also excited to see that you are personalizing your groups. So here are some suggestions and tips based on what we have seen so far:

  • You do not need a Google account to enter your name on the Google Spreadsheet.
  • If you get a “We’re sorry. Our servers are busy. Please wait a bit and try again.” message, please wait and try again. This just means there are too many people trying to access the site.
  • Make sure you enter your names in the correct cells. Some of you are entering your names in cells that have the Group numbers. Go back and make sure you are in a group with 19 other people.
  • Do not delete someone else’s name just because you want to be in that group. Find an empty cell to enter your name in.
  • Make sure you are still in the group that you signed up for. If for some reason your name is missing from that group, go to a different group.”

Note the language: “You managed to delete entire rows,” “you removed people from groups,” “(you) go back and make sure you are in a group.” Not “we failed to create protected ranges” or “we did not properly test-run our solution before trying it with 40,000 students.”

Let us be clear: The students did not crash the Google servers. Google Spreadsheets has a stated limitation of 50 simultaneous editors per spreadsheet. The Fundamentals of Online Education course creators did not test their application with more than 50 students in a class of 40,000, did not carefully read their infrastructure contracts, and apparently did not realize that people would be able to remove others from groups they had already signed up for.
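On the protected ranges in particular: locking down the cells students should never have been able to touch is a small amount of work. The sketch below is mine, using today’s Google Sheets API (v4, which postdates the 2013 course); the spreadsheet ID, sheet ID, credentials file, instructor address, and assumed layout are all placeholders. Even this would not lift the 50-simultaneous-editor limit or stop students from overwriting one another’s names in the open cells; a sign-up form feeding a sheet, rather than a shared spreadsheet, would have been the sturdier design.

```python
# A minimal sketch of protecting the header row and the group-label column
# of a shared sign-up sheet, via the Google Sheets API v4.
# All IDs, the credentials file, the instructor address, and the assumed
# sheet layout are placeholders.
from google.oauth2 import service_account
from googleapiclient.discovery import build

creds = service_account.Credentials.from_service_account_file(
    "course-admin.json",  # hypothetical service-account key
    scopes=["https://www.googleapis.com/auth/spreadsheets"],
)
sheets = build("sheets", "v4", credentials=creds)

requests = [
    {   # protect the header row so only instructors can edit it
        "addProtectedRange": {
            "protectedRange": {
                "range": {"sheetId": 0, "startRowIndex": 0, "endRowIndex": 1},
                "description": "Header row: instructors only",
                "warningOnly": False,
                "editors": {"users": ["instructor@example.edu"]},
            }
        }
    },
    {   # protect column A, assumed here to hold the group numbers
        "addProtectedRange": {
            "protectedRange": {
                "range": {"sheetId": 0, "startColumnIndex": 0, "endColumnIndex": 1},
                "description": "Group numbers: instructors only",
                "warningOnly": False,
                "editors": {"users": ["instructor@example.edu"]},
            }
        }
    },
]
sheets.spreadsheets().batchUpdate(
    spreadsheetId="SPREADSHEET_ID", body={"requests": requests}
).execute()
```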

No matter, I thought to myself. I’ll play the lurker role for now and go watch some of the videos. Many articles have commented on the group failure; few have mentioned the quality of the videos for this course. I cannot return to the videos and watch them again, so my memory may be faulty, but this is what I remember: the videos consisted of a teacher speaking directly to the camera against what I assume (though am not sure) was a green-screen backdrop, composited over a cityscape, which created distracting visual artifacts around the low-quality overlay. The teacher stared directly into the camera and delivered a dry lecture, summarized in wordy bullet points in a faux-chalkboard font reminiscent of Comic Sans, on a faux-chalkboard background, complete with faux wood edging, below the teacher video. While I checked the second week’s video to see if the content was worth the ocular pain, the Google Spreadsheet fiasco continued. I soon received another email:

“I was hoping that the Google Spreadsheet would work after a day but it looks like it will not work at all for our purposes. So I have gone to Plan B. I have created a new Group Sign Up forum. To differentiate this from the groups on the Google Spreadsheet, the group names start with Group A and continues. You can join any group. It does not have to be in order. In order to sign up, look at the Posts. If there are 21 posts, join the next group that does not have 21 posts. If all groups are filled up, start a new Thread with a Group number or name.”

Curious again, I entered the course. As I suspected, and could have predicted, there was no way to sort threads by number of posts, threads did not cap at 21 posts, and several (likely because of simultaneous posting, introductions happening in the boards, and aggravation) had more than 50 posts. There were groups AA, BB, XXZ, and others.

I fled to Reddit.

Then another email came, “All those who have joined groups, please stay in your groups. Those of you who still have not joined groups and would like us to assign you a group please click on the Assign Me A Group link on the left navigation. We will send you information on your group. I hope this helps to resolve some of the group issues.” I ignored it. I was unsurprised to receive yet another email several days later, almost a week after the course had started:

“We want all students to have the highest quality learning experience. For this reason, we are temporarily suspending the ‘Fundamentals of Online Education: Planning and Application’ course in order to make improvements. We apologize for any inconvenience that this may cause. We will inform you when the course will be reoffered.”

And like the drummer from Spinal Tap, the course vanished.

Though this is an extreme example of the quality issues in MOOCs, and of the tendency of MOOC instructors to lay both the blame and the responsibility for MOOC quality on the shoulders of the students, it is unfortunately not singular. I was excited about a Coursera Data Analysis course, until I discovered that it functionally required a prerequisite in R programming. I enjoyed a class on the Ancient Greek Hero (this one on edX) until I tired of reading the long-form prose of its functionally open-source “sourcebook” on an LCD screen, a text the course team did not publish in an ereader-consumable format, though they admitted they could. I think they wanted to, but already had too much to do, and chose not to use the tremendous crowd-sourcing resources of a MOOC.

MOOCs are new, and we are still learning how to build and use them. I write to make a point: blaming the students is occurring at the highest and lowest levels of this system, and it is not helping us learn how to build and use MOOCs. The founders of a major MOOC platform, Koller et al., say that students who don’t finish a course didn’t intend to do so. Is the equivalent true when Coursera or its faculty do not finish a course? Should we conclude they never intended to finish Fundamentals of Online Education?

MOOC students are busy people, taking time out of their lives to try to learn something. In an online situation, quality issues can be checked ahead of time. Just as we now have the potential to reach more students than ever, we have the potential to erode the trust of students in education, to discourage young minds from pursuing resources, to make the educational system seem all the more staid, unhelpful, and impotent. When we fail to check our quality, we shouldn’t assume students are willing to put up with our crap.

Just as our reach now has greater power than ever, so does our data. It does look impressive to say “we looked at 100 courses of 40,000-60,000 students each.” There is distinct and great power in the ability to measure so many learners quantitatively, and Coursera and others should be watched for that ability alone. That said, if we attempt to infer the psychological intention of those 40,000-60,000 students from their activity in a set of courses, without asking them what motivated their abandonment of those courses, without asking them who they talked to before watching the first video, without asking them about their intentions, we are drawing conclusions about the state of the ocean from the metabolism of a clam. Insofar as a student’s activity serves our data gathering more than our teaching serves the student, MOOCs extend access to education only insofar as that activity extends our access to data. Not only will that attitude erode trust, outcomes, and motivations; that erosion of trust will also erode the ability of MOOCs to gather meaningful data.

In closing, I would like to make three recommendations:

  1. MOOC students are generally happy to fill out surveys (at least 25-30% of them are). Ask the students who leave a course why they did. It is simple to ask and to analyze (see the sketch after this list), and it is telling that Koller, Ng, Do, Chen, and Hill do not cite data about student feedback. (Credit where due: Duke did so for their Bioelectricity course.)
  2. Put basic quality control measures in place. Surveying is good, but asking students why they left is disrespectful when the lack of quality should be obvious.
  3. In thinking about retention of students for a course, don’t lose sight of retention for an education.
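On the first recommendation, “simple to ask and analyze” is not an exaggeration. A minimal sketch, assuming an exported CSV of exit-survey responses with hypothetical column names:

```python
# A minimal sketch of exit-survey analysis, assuming a CSV export with
# hypothetical columns "reason_for_leaving" and "weeks_active".
import pandas as pd

responses = pd.read_csv("exit_survey.csv")  # hypothetical export

# Share of leavers citing each reason (poor video quality, broken groups,
# found a better resource, only wanted one unit, ran out of time, ...)
print(responses["reason_for_leaving"].value_counts(normalize=True))

# How long students citing each reason stuck around before leaving
print(responses.groupby("reason_for_leaving")["weeks_active"].median())
```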

Much has been made of “student outcomes” recently, and the idea should be applied to MOOCs, and applied intelligently. Finishing a course is not a student outcome. Having learned something is. This is especially vital as people speak of MOOCs standing in for foundational and general education courses, because the necessary outcome of those courses is not that a student be able to repeat back knowledge at the end of the course, but that the student’s brain become more efficient at gathering and communicating the information of that field. Education, at least in part, is about making the brain an efficient academic worker over the long term, so that students can enjoy being part of an information economy.

Koller, Ng, Do, and Chen say “When viewed in the appropriate context, retention in MOOCs is often quite reasonable.” I disagree. I think it may be helpful in this case to think of MOOCs like video games, TV shows, or other forms of equivalent mass media. If a TV show retains only 5% of its initial audience, even an audience engaged enough to have joined an online community about the show, the studio does not assume the other 95% just didn’t intend to finish it. Instead, the studio fires writing staff, brings on new actors, or does not renew the show.

[Photo by eflon]