A post-rubric approach to online course quality

Toward a Critical Instructional Design

This article is part of the edited collection, Toward a Critical Instructional Design, from Hybrid Pedagogy. Together with its sister volume, Designing for Care, it strives to imagine a more humanizing and problem-posing approach to the design of education.

Both books are available in paperback and Kindle editions, and chapters can be read online as open-access articles. Proceeds from both books help continue the mission of the Hybrid Pedagogy 501(c)(3) non-profit.


There’s a phrase that has stuck with me since I first heard it a few years ago: security theater. It’s the practice of looking like you are taking active steps to create a safer environment when those actions do nothing of the sort. They have the appearance of security without actually making things more secure. I think we indulge in the same theatrical, wishful thinking in our rubric-based approaches to ensuring the quality of an online course. We make a lot of noise and movement, spending hours preparing a course for a quality review, coaching the instructor on how to pass the review, reviewing the course, fixing the course, and then re-reviewing that same course. But most of our energy is spent checking off items on a list that have very little impact on the online learner’s experience. Instructional designers have become actors in our very own Quality Theater. If we take a critical look at what is a ubiquitous and often unquestioned process in online learning, we might build a path towards a more effective, equitable, and autonomous approach that requires far less time.

Building that path is the work of critical instructional design. It’s looking at the systems we have, the ones we build, and the ones we propel forward. Critical instructional designers prioritize collaboration, participation, social justice, and agency, building not only connections between learners and instructors but also relationships between instructors and instructional designers (Morris, 2018). The critical instructional designer rebels against the standardization of one-size-fits-all processes and technologies, because when we choose practices that are rooted in equity and autonomy, in collaboration and self-determination, in the back-and-forth dialogue of critical consciousness, we can build something inclusive and powerful (Morris, 2018). A deeply common and, I would argue, deeply problematic area of instructional design in education that needs this kind of examination is the unquestioned adoption and propagation of online course quality rubrics.

When I talk about applying a quality rubric, I’m referring to an increasingly common approach to online course quality control. The assumption is that specific design characteristics of an online course can make a better learning experience for students. A course quality rubric typically has a set of categories (like assessment) with multiple items assigned to each category. One or more reviewers, sometimes a peer and sometimes an instructional designer, search for evidence of the specified criteria and determine if the course in question meets the standards set by the rubric. If the course meets enough of the set standards, it “passes” the course quality review. These reviews focus entirely on what can be viewed in the online course content in the learning management system (LMS). They do not examine the teaching or learning practice.

To think that we can neatly separate the content and structure of a course from the teaching of a course in terms of defining what makes a quality online learning experience is an exercise in magical thinking. By ignoring the teaching, we knowingly choose efficiency of process over effectiveness of impact. There’s a reason most online learning isn’t in the form of a correspondence course where all the content is there for you to work through on your own with minimal, if any, instructor interaction. Interaction matters. It’s not the content that predominantly makes the online learning experience work, it’s the experience itself. The rubric approach to quality looks at what content is present and absent in a course, but not at what happens in the teaching of the course. While that is a very “scalable” approach to quality assurance, online learning is a slippery creature totally dependent on context and participants. It is not a widget on an assembly line that can be reproduced interchangeably with the right tooling. In this chapter, I will detail how the rubric-based quality approach to online courses is ineffective at producing engaging online courses. I will then identify what kinds of approaches might help create better online experiences that improve learner retention and motivation.

Quality Outcomes: What Are We Looking to Improve?

Why go through the hassle of doing these course reviews in the first place? It’s well within reason to think instructors and instructional designers can have a positive impact on the learner’s experience with a pre-semester course check. The first step is to acknowledge what impact we can and can’t have with a course check. We cannot instructional design our learners out of food insecurity, family caretaking responsibilities, or physical and mental health challenges, nor can we lower the outrageous costs of higher education through practices of quality assurance. These are the far more common reasons learners leave college. But we can help craft a more meaningful and supportive online classroom experience. We do this by approaching the work in a way where we can show that our efforts are worth the time investment. If quality course reviews aren’t improving the learner experience, then we shouldn’t waste our time doing them. We need to name our outcomes.

I’m deeply tempted to point towards indicators of humanizing online learning as our outcomes of choice. Getting individual qualitative and quantitative feedback from thousands of learners would absolutely provide rich evidence of effectiveness or the lack thereof. That would be downright glorious. But my pragmatism will not allow me to dream that big. I have never known an instructional design department to have the resources, skill sets, determination, and institutional support to deploy that level of impact analysis with all the learners in all their online courses. If you can, go for it! But for the rest of us, we need a more mundane marker to use as a proxy for gauging effectiveness. I suggest learner outcomes.

When I talk about learner outcomes, I’m specifically referring to grades and drop-fail-withdraw (DFW) rates. Personally, I hate grades and the performance approaches to learning they create. But they are also pervasive and available. Anonymized grades and DFW rates can often be requested from the registrar’s office without much fanfare. And like it or not, grades and GPAs are what determine if scholarships are kept, if courses need to be repeated, and even if future enrollment is allowed. Grades are the gatekeepers in determining if learners are allowed to continue their college dreams. But measuring impact from those outcomes is a tricky business.
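For readers who want to see how mundane this proxy really is, here is a minimal sketch of the baseline calculation, assuming an anonymized grade export from the registrar. The file name, column names, and grade markers are hypothetical placeholders, not a prescription for any particular student information system.

```python
# A minimal sketch of computing DFW rates from an anonymized grade export.
# The column names ("course_id", "final_grade") and the grade markers are
# assumptions; adjust them to whatever your registrar's office provides.
import pandas as pd

DFW_GRADES = {"D", "D+", "D-", "F", "W"}  # drop/fail/withdraw markers

def dfw_rates(grades_csv: str) -> pd.Series:
    """Return the share of D/F/W grades for each course in the export."""
    grades = pd.read_csv(grades_csv)
    is_dfw = grades["final_grade"].str.strip().isin(DFW_GRADES)
    return is_dfw.groupby(grades["course_id"]).mean().sort_values(ascending=False)

if __name__ == "__main__":
    # Hypothetical file name; print the ten courses with the highest DFW rates.
    print(dfw_rates("anonymized_grades.csv").head(10))
```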

Finding a direct line from any variable to student outcomes is a challenging task, and connecting course quality rubrics to student outcomes is no exception. Ron Legon, the Executive Director of Quality Matters™, argues that the comprehensiveness of the Quality Matters™ rubric standards makes measuring the impact of the rubric across an institution, let alone multiple institutions, a “practical impossibility” (Legon, 2015). I would argue that if measuring the impact of a quality rubric is a practical impossibility, then that is an intentional design choice in creating and maintaining that particular rubric. Course quality rubrics tend to have over 50 items and multiple standards of practice, and they are divorced from any overarching learning theory. This is critical. Without an overarching theory of practice, there’s no clear way to measure whether the practice works. For example, you could measure motivation using a scale derived from a motivational theory, like Bandura’s self-efficacy or Deci and Ryan’s self-determination theory. Those theories give a marker for what counts as motivation. Without a theory for how something works, it’s difficult to find evidence that it actually works; you don’t know what to look for. Having such a huge number of items on a rubric also makes it overwhelming. And in the academic world, we tend to confuse being overwhelmed by information with being rigorous in knowledge. This is the “spaghetti-at-the-wall” approach to effectiveness: it’s not purposeful, it’s not intentional, it’s not focused, but if we throw enough standards (rubric items that are good in intention) at the course, something is bound to stick eventually and make the course a “quality” experience.

I will dip into some of the research on quality course reviews, but first I think there’s a simpler way to make the case for why the rubric-centric approach doesn’t work. I invite you to play along. Find a copy of the quality rubric you use to review online courses. If you are an instructional designer, I know you have one. Read the first checklist item out loud. Now answer this question: is that a common reason learners drop, fail, or withdraw from online courses in your context?

No?

Then why are we spending time on it as if it were?

This was the fatal crack in the dam for me. I pulled out the 48-item rubric we used at the institution where I was an instructional designer at the time, and I read through each and every item, asking myself if the presence or absence of that item is something that causes learners to leave or fail online classes. I ended up identifying four of those 48 items that were factors in completion rates. And no, “measurable learning objectives” was not one of them. I have never heard a learner say, “I failed that online class because the instructor used the verb ‘understand’ in the objectives.” Most of the items on that 48-item rubric were perfectly fine items, nominally good things in and of themselves. But they were not meaningful things, not impactful things. I realized that I was spending hours of my labor on each course, every week, on items that don’t impact the student experience in meaningful ways. I think this feeling is common no matter the particular rubric. Quality control rubrics take many shapes and forms in academia, but the approach originated with one very well-known rubric.

Quality Evidence: If It Works, We Should Be Able to Tell It Works

I’ve been working in online learning in higher education since 2007. In those early days of online courses, learning management systems (LMSs) were often little more than a place to put folders full of files. There were few constraints and few enforced organizational structures. In practice, this meant every course was organized differently. One course I worked on in an early iteration of Blackboard’s™ LMS used a system of nested folders fourteen levels deep as its organizational structure. The instructor was meeting with me because they noticed their learners were rarely turning in assignments on time, if at all! I got to be the one to let them know that it was because their fourteen-level organization approach made absolutely no sense. It was in these first days of online learning that course quality rubrics started to emerge, most notably the Quality Matters™ rubric from Maryland Online in 2003 (Quality Matters, n.d.). Quality Matters™, and the many imitators that came after, created these rubrics in an effort to provide a system to “ensure course quality” for learners across the institution (Quality Matters, n.d.). As LMSs have become more restrictive about what an online course can look like, the need for a checklist of present and absent items seems less pertinent because so much of the structure is dictated by the LMS itself. Many of the syllabus-related items and policies can even be automated by the LMS administrator.

My purpose is not to dedicate this chapter to directly critiquing Quality Matters™ (QM™), but it would be impossible for me to talk about online course quality without speaking directly about them. They have been the most successful group at pushing the conversation about online course quality around the world, while also creating a market for solutions to the challenges of quality. Every other major quality rubric out there has stood on the shoulders of the QM™ process and QM™ content to create its version of a quality rubric. Countless universities have “home-grown” versions of quality rubrics based on QM™, and countless more do unofficial reviews with bootlegged QM™ handbooks. Their influence runs deep, well beyond officially Quality Matters™ certified courses. I have to talk about Quality Matters™ because so many of our unchallenged assumptions about quality online experiences come from their legacy.

There are many online course quality rubrics, most of which follow a construction similar to the Quality Matters™ rubric. Baldwin, Ching, and Hsu looked at the six most prevalent publicly available online course quality rubrics to compare their features, emphasis, and course quality criteria (Baldwin et al., 2018). They found that the rubrics had an average of 59 items, with twelve common overarching standards. These standards are:

  1. objectives are available
  2. navigation is intuitive
  3. technology is used to promote learner engagement/facilitate learning
  4. student-to-student interaction is supported
  5. communication and activities are used to build community
  6. instructor contact information is stated
  7. expectations regarding quality of communication/participation are provided
  8. assessment rubrics for graded assignments are provided
  9. assessments align with objectives
  10. links to institutional services are provided
  11. course has accommodations for disabilities
  12. course policies are stated for behavior expectations.

The application of a rubric follows a similar process, too. One or more reviewers check for compliance with the standards, feedback is offered on at least the missed standards, and the instructor is given the opportunity to revise their online course based on that feedback in order to meet the standards.

Jaggars and Xu worked to isolate which, if any, standards are directly connected to student outcomes (2016). After comparing multiple course quality rubrics they name four general categories of quality from all the collected rubrics: (1) organization and presentation, (2) learning objectives and assessments, (3) interpersonal interaction, and (4) use of technology (Jaggars & Xu, 2016). Using these categories, they examined 23 online courses and found that only one category, the quality of interpersonal interaction within a course, positively and significantly connects to student grades. They go on to explain that interaction is most impactful with frequent and effective student-instructor interactions. Despite the large collection of standards and items on quality rubrics, it seems that proportionally few of them are directly connected to improving student outcomes in online classes. In fact, they conclude by suggesting the creation of rubrics that move beyond checking for the mere presence of an item or standard and instead note how they are being used in the learning environment. They also encourage practitioners to validate each component of the new rubrics against student outcomes.
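To make that closing suggestion concrete, here is a hypothetical sketch of what validating a rubric component against student outcomes could look like. This is not Jaggars and Xu’s analysis; the data file, the column names, and the choice of a simple correlation are all my own assumptions for illustration, and a real validation effort would call for more careful modeling.

```python
# A hypothetical sketch of checking one rubric category against outcomes.
# Data columns ("interaction_score", "organization_score", "dfw_rate") are
# invented for the example; substitute whatever your review data contains.
import pandas as pd
from scipy import stats

def validate_category(df: pd.DataFrame, category: str, outcome: str = "dfw_rate") -> dict:
    """Correlate a per-course rubric category score with a per-course outcome."""
    clean = df[[category, outcome]].dropna()
    r, p = stats.pearsonr(clean[category], clean[outcome])
    return {"category": category, "n_courses": len(clean), "r": round(r, 3), "p": round(p, 4)}

# Hypothetical reviews.csv with one row per reviewed course.
reviews = pd.read_csv("reviews.csv")
for cat in ["interaction_score", "organization_score"]:
    print(validate_category(reviews, cat))
```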

Validity and reliability are two foundational terms in statistics and measurement conversations. Validity means your measurement tool (like a ruler) actually measures the thing it was designed to measure (like the length of a pencil). This is a crucial concept for measuring things that are hard to measure, like learning or depression. If a depression questionnaire can’t help a doctor determine whether someone is showing symptoms of depression, then it doesn’t work; it does not have validity in the doctor-patient context. Reliability means your measurement tool works consistently over and over again, across multiple populations. If we are going to claim our rubrics are measures of course quality, then they need to be valid and reliable tools.

Noting that there is very little public research available on the reliability and validity of online course quality rubrics, Yuan and Recker (2015) completed one of the more comprehensive studies on the topic. They looked at the validity and reliability of their university’s course quality rubric, which was adapted from the Quality Matters™ rubric, and found that one-fourth of the total items were problematic and had to be eliminated. From the remaining items they identified nine factors explaining 73% of the total variance, with “learning activities & materials” explaining the largest share of the variance in course quality (Yuan & Recker, 2015). Explaining variance is a good thing; it means something important is happening in that area and we should pay attention. Their conclusions state that only rubric items related to learner engagement and interaction have a significant and positive effect on online interactions, while only student-content interaction significantly and positively influences course passing rates.
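For those curious what even a first-pass reliability check involves, here is a minimal sketch of Cronbach’s alpha computed over hypothetical reviewer ratings. It is far simpler than the kind of factor analysis Yuan and Recker report, and the data below are random numbers, but it shows how little code stands between “we have a rubric” and “we have checked whether the rubric items hang together.”

```python
# A minimal reliability check on hypothetical rubric ratings: Cronbach's alpha.
# alpha = k/(k-1) * (1 - sum(item variances) / variance of total scores)
import numpy as np

def cronbach_alpha(ratings: np.ndarray) -> float:
    """ratings: one row per reviewed course, one column per rubric item (numeric scores)."""
    k = ratings.shape[1]                          # number of rubric items
    item_vars = ratings.var(axis=0, ddof=1)       # variance of each item across courses
    total_var = ratings.sum(axis=1).var(ddof=1)   # variance of course total scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical matrix: 40 reviewed courses scored on 10 rubric items (0-3 scale).
rng = np.random.default_rng(0)
fake_ratings = rng.integers(0, 4, size=(40, 10)).astype(float)
print(round(cronbach_alpha(fake_ratings), 2))  # random data should score poorly (near 0)
```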

There’s more research out there, but the main takeaways I have found are:

  • Many rubric items will not survive statistical validation (Yuan & Recker, 2015)
  • Course quality rubrics average 59 items over twelve common themes (Baldwin et al., 2018)
  • Learners are not in agreement with rubric publishers about which standards are most important in online courses (Ralston-Berg, 2014)
  • In practice, the multiple reviewer process doesn’t provide useful specific examples of aligning the item in question with the standard (Schwegler & Altman, 2015)
  • Faculty resist course quality rubrics mainly because of the overwhelming time commitment involved in the process (Gregory et al., 2020; McGahan et al., 2015; Roehrs et al., 2013)
  • Quality course reviews are often done in tandem with faculty development programs, so separating the effects of the programs from the effects of the rubric is challenging if not impossible (for examples of overlapping programs and their effects, see Harkness, 2015; Swan et al., 2012)
  • Few standards connect to outcomes, mainly the standards that relate to faculty-student interactions are traceable to improving student outcomes (Jaggars & Xu, 2016)

I realize there are more reasons to use these rubrics than improving student outcomes. But honestly, I don’t care about those reasons. I want to keep students enrolled and on scholarship, which means outcomes. I still believe a degree opens doors in life that are unavailable otherwise. It changes people’s futures, their families’ futures, and their communities’ futures. That’s why outcomes are my focus. I also realize that if the course review process is too suffocating for instructors, then we have lost the opportunity to improve outcomes for their students. We don’t just need a better course quality tool, we need a better approach to course reviews. We need an approach that feels empowering as opposed to punitive.

Quality Feels: Power and Authority for the Precarious Worker

If you want to pick a fight with instructional designers, just throw shade at their quality rubrics. When I started questioning these rubrics, the reactions from my colleagues were fast and furious. I was caught completely off-guard. Johanna Inman addresses this in her comments on using evidence-based research when talking with instructors about their courses. She says, “it is important to address the feelings about the evidence as much as it is to share the evidence” (Inman, 2021, p. 71). So when I push against quality review processes, processes that bring feelings of clarity and authority to the work we instructional designers do, I’m also pushing against something that feels safe. I think it’s challenging that feeling of safety that gets such visceral reactions from my colleagues. There is a large group of instructional designers who will defend quality rubrics with their lives. (You can find them trolling me on Twitter right now.) Instructional designers roll their eyes at me when I roll my eyes at learning objective measurability. But when I publicly question the power of the rubric, I get a disproportionate reaction of anger, as if these instructional designers gave birth to the quality control rubrics and I’m insulting their babies. It’s curious, but I think I understand why this topic stirs such forceful feelings.

I have a theory about the relationship between course quality rubrics and instructional designers. I don’t have any “empirical evidence” for it, just observations and ideas. Instructional designers have expertise and experience in their profession. However, we often work in a culture that puts us (and other staff) in a power dynamic where we are treated as subservient to the faculty (or the SME). That context implies that our expertise is without merit. That wears on you. It can make you second-guess your professional (and personal) value. Instructional design is also work that takes broad expertise and some pretty good social negotiation skills to really feel like you know what you are doing. Every person you work with is different, and every context you work in is different. It is often nebulous work. That’s actually what I enjoy about the job: it’s always changing, so it’s always challenging. But if you are new to the work, or something has shaken your confidence in your skill, it’s easy to shame-spiral into a feeling of incompetence. Enter the quality rubric.

You know what feels authoritative? A rubric of 50-ish items, all annotated in annoying yet ambiguous detail with the word “QUALITY” in the title. Preferably in all caps. If you are the keeper of that powerful rubric, you become the expert. If you work with one for even a short time, you can get to know the standards and the required items thoroughly. The nebulous work of instructional design becomes clear in the light of rubric-based compliance mandates. Enforcing a course rubric is work that is anything but ambiguous. When your work is checking off boxes, you never really feel like an imposter. It’s just box checking, after all.

When your work is checking boxes, it’s really easy to quantify your value to university administrators, few of whom are interested in the nuances of instructional design consultations. Telling the people who determine your employment status that you completed 25 quality course reviews this month and have “certified” those courses sounds concrete and valuable. Few people are asking if that work actually made a difference with the learners, but numbers and certifications are easy to support. Entire instructional design departments are anchored in these course reviews. But box checking isn’t instructional design; at best, it’s compliance monitoring. When instructional designers push back against my criticisms with such venom, it’s partly because, by questioning the core work of their departments, I’m questioning their value as professionals.

Enforcing a rubric is safe. It’s a lot of effort, but it doesn’t demand much risk. Even when things get difficult with the instructor, it’s easy to deflect blame to the rubric itself or the administration that prescribed it. But it also positions the instructional designer as the gatekeeper of quality. For a staff member who is typically treated as inferior to the great and powerful tenured faculty, it can taste pretty delicious to be in a position of authority for a change. These rubrics and their associated processes can bring a sense of value and clarity to instructional designers when so much of our work is ambiguous and often dismissed. These rubrics are safe. They are authoritative. They are powerful. They are really easy to explain to administrators who decide if your department gets to keep its funding. When the clarity and security of your career are questioned, you can get really mad, really fast.

But if we can move away from the work of compliance, we can lean into the work of instructional design. It’s there we can put our energies into the things that matter most to our learners in online courses. We leverage our professional skills to help create an environment that better connects, empowers, and includes both the learners and the instructors.

Quality Focus: What Matters Most?

Earlier I noted how the rubric items that influence faculty-student connections are the ones most likely connected to outcomes. I also criticized the “spaghetti-at-the-wall” approach as being both ineffective and overwhelming to instructors. To move forward, we need to reject the 59-items-on-a-rubric approach in favor of one that focuses on a smaller set of the categories with the most impact on online learning: faculty-student connections, inclusion, and clarity of structure. I use the term categories deliberately because there are many practices in each of these categories that could be impactful given a specific context and population. By utilizing an approach that asks instructors to commit to categories of practice instead of prescribing universal, one-size-fits-all actions, we offer instructors and instructional designers the autonomy to leverage their professional experience in selecting (and changing as needed) the specific practices that might be the most interesting and the most effective in their own unique courses.

Faculty-student connections

The largest impact instructional designers can have on the learner experience in online classes is through creating an environment with deep and wide faculty-student connections. “Decades of research demonstrate that peer-to-peer, student-faculty, and student-staff relationships are the foundation of learning, belonging, and achieving in college” (Felten & Lambert, 2020, p. 5). Faculty-student connections tend to directly improve cognitive skill development while also directly encouraging classroom engagement (Kim & Lundberg, 2016). These same interactions also seem to be the most significant factor in positive outcomes for both first-generation learners and non-white learners (Felten & Lambert, 2020, p. 83). Not all interactions are positive ones, but overall, knowing that instructors care for the learners and are actively working for their success is the major factor in learner outcomes.

Instructional designers can assist with these connections by helping instructors create online courses full of opportunities for connection and communication. Instructional designers can equip instructors to host conversations outside the LMS using text and messaging tools like Slack, Discord, and GroupMe. But the purpose isn’t the tools (just to be clear); the purpose is to help create discussion and conversation that goes beyond “post once, reply twice” to feel more like regular, dare I say actual, conversations. I worked with an instructor on a fluid mechanics engineering course, and we created an activity called “Fluid Mechanics in the Wild.” The idea was to take a selfie video wherever you saw fluid mechanics happening in your daily life, and then explain as best you could the principles at work. The instructor would give weekly examples of this and ask the learners to try the practice themselves. This could be the steam pipes in the engineering building or the large margarita with an upside-down bottled beer in it. These were shared and would provoke conversation (and connection) about the everyday relevance of their classwork. These connection practices don’t have to be new practices; they can just as easily be a refocusing of old practices: instructors participating in discussion forums instead of merely observing them, modeling discipline-specific reading practices through social annotation as opposed to just assigning journal articles, or trying contract or negotiated grading practices (think ungrading) instead of one-way evaluation. Again, we are not talking about prescribing a single required item or practice but drawing from a broad category of approaches that can create connection while offering instructor autonomy. Instructional designers can use their professional experience to help guide instructors towards more meaningful connections that build both relatedness and competence in their unique population of learners.

Inclusion

Knowing and feeling like you belong in a space is a powerful motivator. When I talk about designing for inclusion, I am referring to practices like culturally responsive teaching, accessibility, universal design for learning, trauma-informed pedagogy, and designing for neurodiverse learners. I’m casting a wide net here for sure, but I would argue that any steps that make the online classroom more inclusive are positive steps. Inclusion in these terms also means establishing a sense of belonging, which has been demonstrated as a factor in keeping learners enrolled (Tinto, 2005). That sense of belonging helps retain learners in college programs, from women in STEM (Banchefsky et al., 2019) to diverse and underrepresented groups in higher education (Thomas, 2015). Instructional designers can influence online courses through inclusive design practices. Amy Collier describes inclusive design as a practice that “goes beyond accessibility, though accessibility is considered within inclusive design. Inclusive design celebrates difference and focuses on designs that allow for diversity to thrive. In higher education, this means asking ourselves, ‘Who has been served, supported, or allowed to thrive by our educational designs and who has not?’ And, ‘How might we design for inclusion of more students?’” (Collier, 2020). While instructional designers can begin with content accessibility and perhaps even “Decolonizing the Syllabus” (DeChavez, 2018), depending on the experience of the instructor and the population of learners, we can also help guide instructors in creating approaches like land-based pedagogies (Sam et al., 2021) or legacy assessments, in which projects engage with the community and the environment and take the purpose of learning beyond the individual, toward the betterment of the community (Chavez & Longerbeam, 2016). Again, this isn’t a checklist of inclusive practices to be included in a course without consideration of context. This is a wide category of options that build a sense of relatedness in learners while instilling autonomy by letting them engage with the course material in more culturally responsive ways.

Clarity of structure

In my initial investigations of the most impactful factors in online learning, I tried my best to stay away from specifics on content and structure. The rubric-centric approach is entirely focused on content and structure, and my hesitation was that I would be trading one list of prescriptive rules for a different list of prescriptive rules. But I couldn’t get away from it. Clarity of structure matters. Not having to navigate fourteen levels of nested folders on Blackboard™ matters to the learning experience. Not having to play “read my mind” games with the instructor matters. When I talk about clarity, what I’m getting at is structure that allows autonomy while empowering learners (and instructors) to feel competent in engaging in the tasks at hand. But there’s not one structure to rule them all, no one template that will open access to all learners. “One right way” is the kind of thinking that replicates white supremacy principles in the workplace and in education (Okun, 2021). Course quality rubrics are rooted in the practice of “sameness” across every course in every space. The first step of critical instructional design is simply asking, “why?” Why is sameness mission critical? Do we ask on-campus instructors to conform to the practice of sameness? Do we tell on-campus algebra instructors to script their courses exactly like the anthropology or astronomy courses? Of course not! That’s ludicrous! We expect, and even encourage, on-campus instructors to shape their courses with their professional experience, in accordance with the expectations of the discipline, and in conversation with their learners. So why do we prescribe sameness for those courses when they move online? This is a side effect of an earlier issue: thinking we can separate the design of the course from the teaching of the course. Templates make processes scalable, not learning. Learning is contextual, and we can shape that context to empower and encourage learner participation. Clarity of structure simply means that the learning process should not be an unsolvable mystery to the learners. It does not mean that everything is always the same. Clarity happens within the chosen structure, not from requiring the same structure.

Another way of thinking about structure is in terms of making implicit expectations explicit instructions. For example, instructional designers can share the Transparency in Learning and Teaching (TILT) framework for assessments with instructors. This approach doesn’t require instructors to change their assignments, but it provides a structure for explaining the expectations. TILT asks instructors to describe the assignment in three sections: purpose, task, and criteria for evaluation (Winkelmes et al., 2019). This framework, with its relatively simple implementation, tends to improve learners’ confidence, belonging, and skill development, and it shows even more benefit for learners typically underserved by higher education institutions (Winkelmes et al., 2016). Instructional designers can use their professional experience to help guide instructors towards content structures and greater clarity that empower their unique population to navigate the experience with competence and autonomy.

Overarching theory

Earlier in this chapter I claimed, “without an overarching theory of practice, there’s no clear way to measure if the practice works or not.” So what would be the overarching theory that provides a framework for measuring the effectiveness of the collection of faculty-student connections, inclusion, and clarity of structure? I think there are multiple frameworks that could potentially work, but for me, as a quantitative-minded educational psychology researcher interested in motivation and learning, the immediate connection is the self-determination theory of motivation. Self-determination theory (often abbreviated as SDT) states that people have a motivational need for competence, autonomy, and relatedness, and these three needs can guide a person’s behavior (Schunk et al., 2014). Individuals need to feel competent when interacting with others and when engaging with various tasks and activities. The need for autonomy refers to the need for a person to feel a sense of control or agency in their interactions with the environment. A sense of relatedness feels like belonging to a group of people, being included and connected to others (Schunk et al., 2014). I see pairs of SDT needs working together in each of the high-impact categories.

  • Faculty-student connection = relatedness + competence. Relatedness because of the personal connection with the instructor, but also competence because that connection can lead to clarity of understanding of the course material.
  • Inclusion = autonomy + relatedness. Inclusion can bring autonomy when we look at factors like accessibility, universal design for learning, and neurodiverse instructional approaches. It also leads to feelings of relatedness through culturally responsive instruction and creating a learning environment where learners can bring as much of themselves as they want to the experience.
  • Clarity of structure = competence + autonomy. Clear instructions and transparent expectations let the learners know without ambiguity what they are responsible for. A navigable structure also creates marked paths for completing the course and engaging with the materials.

Self-determination theory has been used in educational research in multiple contexts, including online learning, to measure engagement and the effectiveness of approaches (Hsu et al., 2019; Niemiec & Ryan, 2009; Wang et al., 2019). While SDT is my choice of theoretical framework, it’s far from the only candidate. The point is that in order to claim effectiveness, quality, or impact, we need a way to name and measure the factors that make something effective, high quality, and impactful. Otherwise, we risk spending our time and energy putting faith in the marketing promises of companies trying to sell us their products.
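As one illustration of what naming and measuring could look like under SDT, here is a hypothetical sketch that scores the three needs from a short learner survey and then combines them into the category pairings above. The survey items, the 1-5 scale, and the equal-weight pairings are all my own assumptions for the example; any real measurement effort should use a validated SDT instrument.

```python
# A hypothetical sketch of scoring the three SDT needs from a short learner survey,
# then combining them into the category pairings described above. The items, the
# 1-5 scale, and the equal weights are invented for illustration only.
from statistics import mean

# Map each invented survey item to the SDT need it probes; responses are 1-5.
ITEM_MAP = {
    "I understood what I was expected to do each week": "competence",
    "I could complete the course tasks successfully": "competence",
    "I had meaningful choices in how I engaged with the course": "autonomy",
    "I could bring my own background into the coursework": "autonomy",
    "My instructor knew who I was": "relatedness",
    "I felt connected to others in this course": "relatedness",
}

PAIRINGS = {
    "faculty_student_connection": ("relatedness", "competence"),
    "inclusion": ("autonomy", "relatedness"),
    "clarity_of_structure": ("competence", "autonomy"),
}

def score_needs(responses: dict[str, int]) -> dict[str, float]:
    """Average the 1-5 responses for each SDT need."""
    needs = {"competence": [], "autonomy": [], "relatedness": []}
    for item, value in responses.items():
        needs[ITEM_MAP[item]].append(value)
    return {need: mean(values) for need, values in needs.items()}

def score_categories(needs: dict[str, float]) -> dict[str, float]:
    """Combine need scores into the three high-impact categories (equal weights)."""
    return {cat: mean(needs[n] for n in pair) for cat, pair in PAIRINGS.items()}

one_learner = {item: 4 for item in ITEM_MAP}  # a single hypothetical response set
print(score_categories(score_needs(one_learner)))
```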

I realize many of my colleagues will not hesitate to express their disgust at my desire for quantification and measurement in a new course quality process. I absolutely understand that aversion. Statistics and measurement have been wielded as weapons of institutional compliance (and worse things) since they were created. Statistics, despite its rooting in seemingly objective numbers, is a deeply subjective approach, just like every other approach we humans use to make sense of our world. Statistics, however, is also a very useful approach when we are trying to make decisions that will impact the time and energy of thousands of people. It helps generalize which practices are the most effective for the most people. I can’t in good conscience ask the 3,000 online instructors at my university to devote their time, energy, and often unpaid labor to a course quality process that merely sounds like it should work. I can’t ask instructional designers to leave the main stage of the Quality Theater behind just to walk a block over and participate in the Off-Broadway Quality Theater. When we implement a system of practice that impacts thousands of educators, we need evidence that the practice makes a positive difference with learners. These online course quality practices affect thousands of educators because so many universities require some flavor of a quality check out of fear of being out of compliance with federal regulations.

What about federal requirements?

If you’re in the United States, there are federal requirements for online courses and programs that keep those courses connected to federal funding. Back in the dark ages of online classes (2008), the federal government stipulated that in order to meet the requirements for accreditation, online classes needed to have “regular and substantive interaction” between the learners and the instructor. It took an additional decade to define what that meant.

Substantive interaction is the push to make the online learning experience connected and engaging. It requires at least two of the following: “(i) Providing direct instruction; (ii) Assessing or providing feedback on a student’s coursework; (iii) Providing information or responding to questions about the content of a course or competency; (iv) Facilitating a group discussion regarding the content of a course or competency; or (v) Other instructional activities approved by the institution’s or program’s accrediting agency” (Code of Federal Regulations Title 34, 2022).

The definition of regular interaction is “(i) Providing the opportunity for substantive interactions with the student on a predictable and scheduled basis commensurate with the length of time and the amount of content in the course or competency; and (ii) Monitoring the student’s academic engagement and success and ensuring that an instructor is responsible for promptly and proactively engaging in substantive interaction with the student when needed on the basis of such monitoring, or upon request by the Student” (Code of Federal Regulations Title 34, 2022).

Let’s be honest here: I can’t imagine a lower bar for regular and substantive interaction. Instructors have to give feedback on assessments and answer questions? Preposterous! The point of these regulations is that the Department of Education doesn’t want universities to sell correspondence courses to unsuspecting learners under the banner of online courses. You don’t need an extensive rubric and a drawn-out peer review process to meet the low-bar requirements of regular and substantive interaction set by the federal government. However, this is the point where I see rubric creators leveraging administrators’ fear of losing federal funds to promote their own products and services. In a recent announcement from the Online Learning Consortium about their OSCQR quality rubric, they explicitly fuel this fear by saying, “Institutions are seeking assistance in successfully navigating the new RSI [regular and substantive interaction] regulation and risk losing access to student financial aid if the institution is audited and found to be out of compliance by the DoE [Department of Education] Office of Inspector General, or as part of a periodic Departmental financial aid program review” (Chmura, 2022). Few things prompt university administrators into action as quickly as the threat of losing money. But the federal requirements are so minuscule that the quality rubric process is a monumentally over-engineered approach to what in practice is a very low expectation.

Quality Conclusion: New Ideas for a Post-Rubric Approach

There are groups out there that also recognize and are pushing against the tensions of the quality rubric approach. The Peralta Community College District has moved beyond content and created a rubric for improving online equity (Peralta District, 2020). Hybrid Pedagogy published a piece last year from Martha Burtis and Jesse Stommel that explained their tensions with implementing an unchallenged and unquestioned rubric approach while offering a list of course design considerations for instructional designers to leverage instead (Burtis & Stommel, 2021). While I’ve detailed some of the research regarding the effectiveness of quality rubric processes, I also have years of experience enforcing these rubrics. I am challenging this rubric-centric approach because I have been actively involved in these reviews for much of my professional career. I have been a Quality Matters™ certified peer reviewer and have served as a reviewer on many Quality Matters™ course reviews. My previous institution created a customized version of the QM™ rubric, and I was a reviewer for many courses using that rubric as well. All said, I’ve been part of more than 200 online course reviews over the past six years. These are the primary tensions I’ve felt in my experience with the rubric-centric approach.

  • The experience almost always feels punitive for the instructor. No matter how experienced or how skilled of an educator they are, “failing” a review feels like an attack.
  • No one passes a review without coaching, which requires even more time committed to the process. Even a great online educator will not “naturally” build in the 40-60 items the rubric requires because the rubric is looking for very specific, often unintuitive, things.
  • Our reviews averaged around 15 hours of work each between the reviewers (not including the revision work from the instructor or pre-review coaching). This same time commitment is echoed in other groups (Gregory et al., 2020).
  • There’s surprisingly little published evidence that course checks using common rubrics make a measurable impact on student outcomes or experiences. That’s a lot of time and energy invested in something with fuzzy impacts.
  • The prescriptive nature of the rubrics along with their processes convey, sometimes subtly and sometimes overtly, that there is “one right way” to teach or design a course. “One way” approaches are not inclusion-oriented practices.
  • It universally felt like a hoop to jump through and not a genuine path to make teaching more enjoyable and learning more engaging.
  • It dismisses the years of online teaching experience many faculty bring to these conversations by implying it’s these items that make a course “quality” and not their skill and dedication.

The whole idea of a 50-point quality checklist with multiple reviewers, deployed through a drawn-out song-and-dance review process, is an approach that will never prioritize collaboration, relationship, social justice, agency, or any other value of critical instructional design. By doing rubric-centric reviews in this manner, I think we are investing in something that will never bring value on the scale of its costs, while along the way we steal autonomy, competence, and relationships from our instructors in the name of scalability and a job description we can easily quantify to administrators.

I think a better, measurable, and more effective path to online course quality lies in a process that respects the experience of the instructors, focuses on high-impact items, approaches quality in a strengths-based framework, and empowers instructors with the support and autonomy to improve the learning experience in their online courses in ways that are uniquely appropriate to their learners, disciplines, and teaching approaches. I think this would need to be a reflective and possibly co-designed approach that respects instructor experience as opposed to templated task lists. We have mounds of research to point us toward the items that make known impacts on the online learning experience. We also have a community of human-centered instructional designers with deep skill sets and even deeper passion. We have plenty of barriers to break through, but the time is now for us to create a more effective, equitable, and empowering approach to online course quality. That starts with stepping down from the stage and leaving the Quality Theater behind.


Quality References

Baldwin, S., Ching, Y.-H. & Hsu, Y.-C. (2018). Online Course Design in Higher Education: A Review of National and Statewide Evaluation Instruments. TechTrends, 62(1), 46–57. https://doi.org/10.1007/s11528-017-0215-z

Banchefsky, S., Lewis, K. L. & Ito, T. A. (2019). The Role of Social and Ability Belonging in Men’s and Women’s pSTEM Persistence. Frontiers in Psychology, 10, 2386. https://doi.org/10.3389/fpsyg.2019.02386

Burtis, M. & Stommel, J. (2021). The Cult of Quality Matters. Hybrid Pedagogy. https://hybridpedagogy.org/the-cult-of-quality-matters/

Chavez, A. F. & Longerbeam, S. D. (2016). Teaching across cultural strengths: A guide to balancing integrated and individuated cultural frameworks in college teaching. Stylus Publishing.

Chmura, M. (2022). OLC and SUNY Online update course quality rubric based on new federal requirements for distance education. Online Learning Consortium. https://onlinelearningconsortium.org/news_item/olc-and-suny-online-update-course-quality-rubric-based-on-new-federal-requirements-for-distance-education/

Code of Federal Regulations Title 34, § 600.2 Definitions (2022). https://www.ecfr.gov/current/title-34/subtitle-B/chapter-VI/part-600

Collier, A. (2020). Inclusive design and design Justice: Strategies to shape our classes and communities. Educause Review. https://er.educause.edu/articles/2020/10/inclusive-design-and-design-justice-strategies-to-shape-our-classes-and-communities

DeChavez, Y. (2018). It’s time to decolonize that syllabus. Los Angeles Times. https://www.latimes.com/books/la-et-jc-decolonize-syllabus-20181008-story.html

Felten, P. & Lambert, L. M. (2020). Relationship-Rich Education. Johns Hopkins University Press. https://doi.org/10.1353/book.78561

Gregory, R. L., Rockinson-Szapkiw, A. J. & Cook, V. S. (2020). Community college faculty perceptions of the Quality Matters™ Rubric. Online Learning, 24(2). https://doi.org/10.24059/olj.v24i2.2052

Harkness, S. S. J. (2015). How a Historically Black College University (HBCU) established a sustainable online learning program in partnership with Quality Matters™. American Journal of Distance Education, 29(3), 198–209. https://doi.org/10.1080/08923647.2015.1057440

Hsu, H.-C. K., Wang, C. V. & Levesque-Bristol, C. (2019). Reexamining the impact of self-determination theory on learning outcomes in the online learning environment. Education and Information Technologies, 24(3), 2159–2174. https://doi.org/10.1007/s10639-019-09863-w

Inman, J. (2021). Grounded in research: Be good, or at least evidence-based. In J. Quinn (Ed.), The Learner-Centered Instructional Designer: Purposes, Processes, and Practicalities of Creating Online Courses in Higher Education (pp. 69–78). Stylus Publishing.

Jaggars, S. S. & Xu, D. (2016). How do online course design features influence student performance? Computers & Education, 95, 270–284. https://doi.org/10.1016/j.compedu.2016.01.014

Kim, Y. K. & Lundberg, C. A. (2016). A structural model of the relationship between student–faculty interaction and cognitive skills development among college students. Research in Higher Education, 57(3), 288–309. https://doi.org/10.1007/s11162-015-9387-6

Legon, R. (2015). Measuring the impact of the Quality Matters™ rubric: A discussion of possibilities. American Journal of Distance Education, 29(3), 166–173. https://doi.org/10.1080/08923647.2015.1058114

McGahan, S., Jackson, C. & Premer, K. (2015). Online course quality assurance: Development of a quality checklist. InSight: A Journal of Scholarly Teaching, 10, 126–140. https://doi.org/10.46504/10201510mc

Morris, S. M. (2018). Critical instructional design. In S. M. Morris & J. Stommel (Eds.), An Urgency of Teachers. Hybrid Pedagogy Inc.

Niemiec, C. P. & Ryan, R. M. (2009). Autonomy, competence, and relatedness in the classroom. Theory and Research in Education, 7(2), 133–144. https://doi.org/10.1177/1477878509104318

Okun, T. (2021). White supremacy culture. https://www.whitesupremacyculture.info

Peralta Community College District. (2020). Peralta online equity rubric. https://www.peralta.edu/distance-education/online-equity-rubric

Quality Matters. (n.d.). About QM. https://www.qualitymatters.org/why-quality-matters/about-qm

Ralston-Berg, P. (2014). Surveying student perspectives of quality: Value of QM rubric items. Internet Learning. https://doi.org/10.18278/il.3.1.9

Roehrs, C., Wang, L. & Kendrick, D. (2013). Preparing faculty to use the Quality Matters model for course improvement. MERLOT Journal of Online Learning and Teaching, 9(3).

Sam, J., Schmeisser, C. & Hare, J. (2021). Grease trail storytelling project: Creating indigenous digital pathways. KULA: Knowledge Creation, Dissemination, and Preservation Studies, 5(1). https://doi.org/10.18357/kula.149

Schunk, D. H., Meece, J. L. & Pintrich, P. R. (2014). Motivation in education: Theory, research and applications. Pearson.

Schwegler, A. F. & Altman, B. W. (2015). Analysis of peer review comments: QM recommendations and feedback intervention theory. American Journal of Distance Education, 29(3), 186–197. https://doi.org/10.1080/08923647.2015.1058599

Swan, K., Matthews, D., Bogle, L., Boles, E. & Day, S. (2012). Linking online course design and implementation to learning outcomes: A design experiment. The Internet and Higher Education, 15(2), 81–88. https://doi.org/10.1016/j.iheduc.2011.07.002

Thomas, L. (2015). Developing inclusive learning to improve the engagement, belonging, retention, and success of students from diverse groups. In M. Shah, A. Bennett & E. Southgate (Eds.), Widening Higher Education Participation (pp. 135–159). https://doi.org/10.1016/b978-0-08-100213-1.00009-3

Tinto, V. (2005). Reflections on retention and persistence: Institutional actions on behalf of student persistence. Studies in Learning, Evaluation, Innovation, and Development, 2, 89–97.

Wang, C., Hsu, H.-C. K., Bonem, E. M., Moss, J. D., Yu, S., Nelson, D. B. & Levesque-Bristol, C. (2019). Need satisfaction and need dissatisfaction: A comparative study of online and face-to-face learning contexts. Computers in Human Behavior, 95, 114–125. https://doi.org/10.1016/j.chb.2019.01.034

Winkelmes, M.-A., Bernacki, M., Butler, J., Zochowski, M., Golanics, J. & Weavil, K. H. (2016). A teaching intervention that increases underserved college students’ success. Peer Review, 18, 31–36.

Winkelmes, M.-A., Boye, A. & Tapp, S. (2019). Transparent design in higher education teaching and leadership: A guide to implementing the transparency framework institution-wide to improve learning and retention. Stylus Publishing.

Yuan, M. & Recker, M. (2015). Not all rubrics are equal: A review of rubrics for evaluating the quality of open educational resources. The International Review of Research in Open and Distributed Learning, 16(5). https://doi.org/10.19173/irrodl.v16i5.2389