Linda Suskie

A Common Sense Approach to Assessment in Higher Education


Blog

I've had to disable the Comments feature of my blog because it's been targeted by spammers. But I'd still love to hear your thoughts and reactions! Please post a response to any blog post wherever you saw a link to my blog--Twitter, LinkedIn, or the ASSESS listserv--or contact me directly.


A few suggestions on working from home

Posted on March 25, 2020 at 6:05 AM

The entire planet, including the higher education community, has been rocked to the core by the COVID-19 virus and efforts to contain it. Many people are suddenly teaching and working from home for the first time in their lives.

 

I’ve been working from home for decades, starting when I was on parental leave when my children were born in the 1980s. Starting in 1999, my jobs had extremely long commutes, so I negotiated working from home one or two days a week. For the last eight years, I’ve been a freelance consultant working exclusively from home. My experience hasn’t been the same as yours, but I’ve got some suggestions that may help you.

 

1. Go easy on yourself. What many of us have been through in the past few weeks is akin to starting a new job in a new environment under incredibly stressful circumstances—learning new job skills on a steep learning curve while simultaneously caring for children, supervising home-schooling, and dealing with debilitating anxiety over the health and safety of our loved ones and the practicalities of providing household necessities. No wonder we’re overwhelmed! But recognize that, in some ways, things will start to get better. You’ll figure out a routine; you’ll figure out how best to use your college’s learning management system; you’ll figure out how to adjust this semester’s assignments; and you’ll figure out how to use a gas pump without infecting yourself (a tip I learned yesterday: put your hand in a dog poop bag). It will not always be this bad.

 

2. The hardest part of working from home may be self-discipline. Unless you’re facing an imminent, intractable deadline, home has plenty of distractions. There have been plenty of times the laundry has looked a lot more appealing than the project facing me! I think the only way to address this is to appraise yourself honestly and build in whatever discipline you need. For example, if I have to read something deadly dull, I promise myself a cup of coffee when I’m done, but not before.

 

3. Carve out the right workspace. Again, understand what you need to make working at home work for you. I’ve learned that I must be in front of a window, the bigger the better. So, though I initially created a workspace in a spare bedroom, our kitchen island has turned into my office, because it faces the biggest window in the house, with a great view of our backyard. I think I do some of my best thinking while looking out the window. But maybe you need that spare bedroom, so you can literally close the door and turn off work at the end of the day. When our children were young, my workspace was in the family room so I could be there for them.

 

4. Working from home is lonely. I’m an off-the-charts introvert and even I miss water-cooler camaraderie. Because most of my work is confidential, I can’t vent about that asinine e-mail to anyone. Here’s where technologies such as e-mail listservs, social media, video technologies (Skype, FaceTime, Zoom), texts, and old-fashioned phone calls become really important. The ASSESS listserv has been a lifesaver for me in terms of staying connected with professional peers. I’ve also joined some Facebook groups focused on some of my outside interests. If you have friends or colleagues who have work-at-home experience, tap them for ideas.

 

5. Evolve into a routine that works for you. I’m an early riser, so my day starts early, in my PJs, before anyone else is up, with a cup of coffee, checking routine e-mails while the caffeine sinks in. Once I’m fully caffeinated, I tackle the meatier stuff. Because I start early, I stop most work by late afternoon. I do work on weekends, because that’s when e-mails, phone calls, and appointments die down and I can work on things that require blocks of uninterrupted time, like writing or preparing a workshop. But this is what works for me. The point is to figure out a routine that works best for you.

 

6. Keep e-mails under control. While meetings can still be held by conference call, Skype, or Zoom, your work-from-home life will probably have fewer meetings and a lot more e-mails. E-mails are the one thing I monitor 24/7, constantly deleting the ones I don’t need to read and replying to the easily answered ones. Otherwise, they balloon out of control, and I’ve learned that an overflowing e-mail in-box stresses me. To keep my in-box under control, I also use a lot of e-mail folders. There’s one for each project, and when the project is done, the folder goes into a “Past Projects” mega-folder so it’s out of sight but there if I need to refer to it. I also have a “Read” folder for those interesting e-mails that I’d like to read…someday. And there’s a “Hold” folder for e-mails for which I’m awaiting an answer. Those folders really help keep me sane.

 

7. Stay healthy. Remember I told you our kitchen island is my office? I’ve gained weight since I started working from home full-time. Stocking only healthy food helps, but my problem is I simply eat too much—it’s easy to turn a work break into a snack break. Exercise is really, really important.

 

8. Celebrate the positives of working from home. You don’t have to get dressed up for work. You don’t have that commute. If you’ve had fixed work hours and now have a bit more flexibility, take advantage of it. Go for a walk when the day’s weather is at its nicest, and truly enjoy springtime. If you live in a region where stores are still open, weekday mornings are wonderful times to shop—the stores are empty and service is great (the staff are so bored that they’re eager to help). Your dog loves having you home, and being around a pet is a great de-stresser. And, unlike millions of Americans, you still have a job. Working at home is minimizing your chance of contracting COVID-19 and spreading it to loved ones.

 

9. Start a post-COVID-19 to do list. This will pass, though not without lasting hardship and loss. Most of those who contract COVID-19 will recover and, once they do, they’ll be immune, no longer contagious, and able to resume their normal lives. For the rest of us, there will eventually be a vaccine. Start a list of things you want to do after COVID-19 subsides: dinners with friends and family, concerts and museums you want to go to, vacations you want to take, home improvements that aren’t possible now, conferences you enjoy. It will give you something to look forward to, and that will help you get through this.

 

 

Would competition among regional accreditors diminish the quality of American higher education?

Posted on February 27, 2020 at 11:35 AM

Originally posted 2/21/2020

 

First, my apologies to anyone who had trouble accessing my February 18, 2020, blog post on the prospect of regional accreditors’ boundaries evaporating. My website hosting service chose that day for a glitch that made my entire blog vanish. I appreciate everyone’s patience while the service restored my blog.

 

I also appreciate the very thoughtful replies I’ve received to that blog post. Most of the comments were on the last paragraph, in which I speculated that competition among the regionals might be healthy and lead to differentiation in ways that better serve the United States’ diverse array of higher education institutions. Some people worried that competition among the regionals would lead to a race to the bottom, with everyone moving to what they perceive as the least-rigorous accreditor.

 

I truly don’t see that happening (at least, not en masse) for several reasons.

 

First, regionally accredited institutions already have that choice, and they’re not going there. They can move to one of the national accreditors, which some perceive as less rigorous than the regionals. (They don’t have the same standards for general education, for example.) In fact, the opposite is happening: institutions with national accreditation regularly apply for regional accreditation. Why do they do this, considering all the time, work and money involved in complying with more extensive or rigorous standards? It’s because U.S. regional accreditation is viewed as the international gold standard of higher education quality assurance. Today some employers and graduate schools require applicants to hold a degree from a regionally accredited institution. I could see them someday saying applicants must hold a degree from an institution accredited by a shorter list of the regionals: those that continue to be highly regarded for their high standards.

 

Second, we’ve already seen the choice-among-accreditors model play out successfully. There are three business program accreditors: AACSB, ACBSP, and IACBE. They each aim to serve different kinds of business programs at different kinds of institutions. An institution that doesn’t qualify for one accreditor doesn’t have to try to force the square peg of its program into the round hole of that accreditor’s requirements; it can choose another accreditor that’s a better fit with the program and institution.

 

Third, accreditation standards are set by the member institutions, and there are plenty of institutions that wouldn’t want to be associated with a race to the bottom. They want their accreditation to be evidence of their quality.

 

Finally, as Joan Hawthorne pointed out, changing accreditors is a long, hard, expensive process. No institution is going to do it unless it feels it has a darned good reason. Yes, there already is some accreditation shopping—by state as well as by accreditor—by some newer, non-traditional institutions. But the worst players here, such as diploma mills, are institutions without regional accreditation. I think the regionals are getting better all the time at dealing with institutions that try to shop among the regionals and eventually squeak by one.

 

In my last blog post, I speculated a bit on how the regionals might eventually differentiate themselves. Let me get more specific here.

 

1. I think one accreditor would eventually have standards that emphasize the old inputs accreditation model--endowment size, faculty credentials, student selectivity, research dollars, etc.—and downplay the outcomes—especially student learning outcomes—that most regionals now emphasize. I’ll refrain from snarky comments about the institutions that would migrate to this accreditor.

2. I think we need a regional accreditor whose standards are a better fit for newer institutions with non-traditional models of instructional delivery, governance, and general education. I’ve worked with many of these institutions as they seek regional accreditation, and they’re often square pegs struggling to fit themselves into the round holes of regional standards. I wouldn’t want to see less rigorous standards, just standards that are more flexible without adversely impacting educational quality and institutional viability and integrity.

3. I would love to see one regional accreditor REALLY emphasize student learning and success (more than student learning assessment!), including a requirement to use research-informed strategies to promote student learning and success. I fantasize that this would become the truly prestigious regional, with students flocking to its institutions, because they know they’ll get great educations there.

4. You know those students who just want their faculty member to tell them exactly what they want to see in an assignment, so the students don’t have to think on their own? Guess what—many institutions want the same thing from their accreditors! Some would be delighted with an accreditor whose processes consist of filling out some straightforward forms and submitting them along with some documentation. So maybe one accreditor would evolve into the checkbox accreditor—the one focused on compliance much more than improvement.

5. Yes, there would probably be one accreditor that would be perceived as the easy accreditor. Some institutions would move there, and they’d get what they deserve in terms of reputation.

6. And, yes, one or two might eventually go out of business.

 

But, as I emphasized in my last post, I don’t see any of this happening soon. The new USED regulations remove some legal barriers to this evolution but not the logistical barriers.

 

Are the regional accreditors' boundaries evaporating?

Posted on February 18, 2020 at 2:30 PM

In November 2019 the US Department of Education (USED) issued “final” regulations for “the secretary’s recognition of accreditation agencies” among other matters. (I put “final” in quotes because things in Washington tend to change every few years.) You can find a link to the relevant pages of the Federal Register here. The new regulations go into effect on July 1, 2020.

 

Under the new regulations, regional accreditors are no longer required to get Federal approval to change the geographic region in which they accredit (page 58893). A regional accreditor based on the East Coast could, for example, start accepting applications for accreditation from institutions whose main campuses are in Oklahoma.

 

Will this dramatically change the face of regional accreditation? Let me begin with the caveat that I have not spoken about these regulations with anyone at any of the regional accreditors or anyone involved in the negotiated rulemaking process. I’m just interpreting the language in the Federal Register.

 

The announcement in the Federal Register explains that “the Department seeks to provide increased transparency and introduce greater competition and innovation that could allow an institution…to select an accrediting agency that best aligns with the institution’s mission, program offerings, and student population” (page 58893). The announcement goes further: “The Department expects that the landscape of institutional accrediting agencies may change over time from one where some agencies only accredit institutions headquartered in particular regions to one where institutional accrediting agencies accredit institutions throughout many areas of the United States based on factors such as institutional mission rather than geography” (page 58894). And the announcement speculates, “A shift from strictly geographic orientation may occur over time, probably measured in years, as…greater competition occurs, spurring an evolving dynamic marketplace. Accrediting agencies may align in different combinations that coalesce around specific institutional dimensions or specialties, such as institution size, specialized degrees, or employment opportunities” (page 58897).

 

So is there going to be a sudden, huge rush among institutions to move from one regional to another? No way. Here’s one of the roadblocks: “[USED will not] require an agency to accept a new institution…for which it did not have capacity or interest to accredit” (page 58894). The regional accreditors are funded by dues paid by member institutions. They run lean operations, both in terms of staffing and dollars. They don’t have the capacity to accept and process applications from significant numbers of institutions without major increases in staffing and funding. While the announcement in the Federal Register speculates, “Accrediting agencies may develop a new focus area or geographic scope over time as they increase resources to expand their operations” (page 58901), I just don’t see a significant increase in resources happening anytime soon, if ever.

 

Here’s the second roadblock: “[USED] will not require any institution…to change to a different accrediting agency as a result of these regulatory changes” (page 58894). Let’s imagine a wildly hypothetical scenario: one of the regionals decides it wants to accredit only doctoral institutions. It can’t, because it now accredits community colleges, and USED will not require those community colleges to move to another accreditor. Yes, this accreditor could conceivably put in place standards such as for faculty credentials that are amenable to research universities and difficult for community colleges to comply with. But it couldn’t do that until community colleges have another accreditation home, which brings us back to the first roadblock.

 

I actually like the idea of the regional accreditors going national. I think competition can be healthy, and I like the idea of the regionals differentiating themselves in ways that better serve the incredible diversity of higher education institutions in the United States. I can envision one accreditor developing standards and processes that are particularly suitable for distance learning institutions, another doing the same for traditional institutions, another doing the same for complex institutions… maybe one doing the same for institutions that want an approach to accreditation that relies on documentation without the effort of extensive institutional self-study or analysis. But I don’t think these new regulations are going to move us appreciably down that road.

 

Some assessment smiles for the holidays

Posted on December 22, 2019 at 8:50 AM

I stumbled across an old folder of assessment-related witticisms that I’ve collected over the years—literally decades. Here are some of my favorites. Unfortunately, the sources of some are lost to time. If you know any missing sources, or if you know any other good witticisms, please let me know!

 

I’m all in favor of keeping dangerous weapons out of the hands of fools. Let’s start with surveys. (Frank Lloyd Wright)

 

Measurements are not to provide numbers but insight. (Ingrid Bucher)

 

You cannot fix through analysis what you bungled by design. (Karen Zaruba)

 

The lasting measure of good teaching is what the individual student learns and carries away. (Stanford Erickson)

 

Remember that ‘average’ is simply the best of the poorest and the poorest of the best. (Dan Galvin)

 

Description of a grade: An inadequate report of an inaccurate judgment by a biased and variable judge of the extent to which a student has attained an undefined level of mastery of an unknown proportion of an indefinite amount of material. (P. Dressel)

 

He uses statistics as a drunken man uses lampposts—for support rather than illumination. (Andrew Lang)

 

You got to be careful if you don’t know where you’re going, because you might not get there. (Yogi Berra)

 

The way a question is asked limits and disposes the ways in which any answer to it—right or wrong—may be given. (Susanne Langer)

 

We don’t know who we are until we see what we can do. (Martha Grimes)

 

University politics are vicious precisely because the stakes are so small. (Henry Kissinger)

 

What gets measured, gets managed. (Peter Drucker)

 

To teachers, students are the end products—all else is a means. Hence there is but one interpretation of high standards in teaching: standards are highest where the maximum number of students—slow learners and fast learners alike—develop to their maximal capacity. (Joseph Seidlin)

 

Education has produced a vast population able to read but unable to distinguish what is worth reading. (G. M. Trevelyan)

 

It’s easier to see the mistake on someone else’s paper. (Cynthia Copeland Lewis)

 

For every complex question there is a simple answer—and it’s wrong. (H. L. Mencken)

 

For so it is, O Lord my God, I measure it! But what it is I measure, I do not know. (St. Augustine)

 

To those of you who have received honors, awards and distinctions, I say well done. And to the “C” students, I say: You, too, can be president of the United States. (George W. Bush)

 

It is easier to perceive error than to find truth, for the former lies on the surface and is easily seen, while the latter lies in the depth, where few are willing to search for it. (Johann von Goethe)

 

Old teachers never die, they just grade away. (Henny Youngman)

 

Given particular subject matter or a particular concept, it is easy to ask trivial questions or to lead the child to ask trivial questions. It is also easy to ask impossibly difficult questions. The trick is to find the medium questions that can be asked and take you somewhere. This is the big job of teachers and textbooks. (David Page)

 

The color of truth is gray. (Andre Gide)

 

Consistency is always easier to defend than correctness.

 

Every bureaucracy generates paperwork in a logarithmic fashion. A one-page directive will inevitably lead to a five-page guideline, a ten-page procedure, and a 25-page report. (Ed Karl)

 

The facts, although interesting, are irrelevant to your critics.

 

The more time you spend in reporting on what you are doing, the less time you have to do anything. (Dan Galvin)

 

It’s hard to be nostalgic when you can’t remember anything. Keep critical documents to verify your conclusions.

 

Stability is achieved when you spend all your time doing nothing but reporting on the nothing you are doing.

 

When confronted by a difficult problem, you can solve it more easily by reducing it to the question, “How would the Lone Ranger have handled this?” (Karyn Brady)

 

The last grand act of a dying institution is to issue a newly revised, enlarged edition of the policies and procedures manual. (Eric Hoffer)

 

If you do a job too well, you’ll get stuck with it. (Roy Slous)

 

How many samples of student work do you need to assess?

Posted on November 22, 2019 at 7:00 AM

It’s a question I get a lot! And—fair warning!—you probably won’t like my answers.

 

First, the learning goals we assess are promises we make to our students, their families, employers, and society: Students who successfully complete a course, program, gen ed curriculum, or other learning experience can do the things we promise in our learning goals. Those learning goals also are (or should be) the most important things we want students to learn. As a matter of integrity, we should therefore make sure, through assessment, that every student who completes a learning experience has indeed achieved its learning goals. So my first answer is that you should assess everyone’s work, not a sample.

 

Second, if you are looking at a sample rather than everyone’s work, you must look at a large enough sample (and a representative enough sample) to be able to generalize from that sample to all students. Political polls take this approach. A poll may say, for example, that 23% of registered voters prefer Candidate X with (in fine print at the bottom of the table) an error margin of plus or minus 5%. That means the pollster is reasonably confident that, if every registered voter could be surveyed, between 18% and 28% would prefer Candidate X.

 

Here’s the depressing part of this approach: An error margin of 5%—and I wouldn’t want an error margin bigger than that—requires looking at about 400 examples of student work. (This is why those political polls typically sample about 400 people.) Unless your institution or program is very large, once again you need to look at everyone’s work, not a sample. Even if your institution or program is very large, your accreditor may expect you to look separately at students at each location or otherwise break your students down into smaller groups for analysis, and those groups may well be under 400 students.
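If you’re curious where that “about 400” figure comes from, here’s a back-of-the-envelope sketch of the standard sample-size calculation, assuming the usual 95% confidence level (z = 1.96) and the worst-case 50/50 split that pollsters plan for:

```latex
% Sample size needed for a margin of error of plus or minus 5% (E = 0.05),
% assuming 95% confidence (z = 1.96) and a worst-case proportion p = 0.5
n = \frac{z^{2}\, p\,(1-p)}{E^{2}}
  = \frac{(1.96)^{2}(0.5)(0.5)}{(0.05)^{2}}
  \approx 385
```

Rounding up to cushion against unusable or unrepresentative pieces of student work is how you land at roughly 400; tightening the margin to plus or minus 3% pushes the requirement past 1,000.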

 

I can think of only three situations in which samples may make sense.

 

Expensive, supplemental assessments. Published or local surveys, interviews, and focus groups can be expensive in terms of time and/or dollars. These are supplemental assessments—indirect evidence of student learning—and it’s usually not essential to have all students participate in them.

 

Learning goals you don’t expect everyone to achieve. Some institutions and programs have some statements that aren’t really learning goals but aspirations: things they hope some students will achieve but can’t realistically promise that every student will achieve. Having a passion for lifelong learning or a commitment to civic engagement are two examples of such aspirations. It may be fine to assess aspirations by looking at samples to estimate how many students are indeed on the path to achieving them.

 

Making evidence of student learning part of a broader analysis. For many faculty, the burden of assessment is not assessing students in classes—they do that through the grading process. The burden is in the extra work of folding their assessment into a broader analysis of student learning across a program or gen ed requirement. Sometimes faculty submit rubric or test scores to an office or committee; sometimes faculty submit actual student work; sometimes a committee assesses student work. These additional steps can be laborious and time consuming, especially if your institution doesn’t have a suitable assessment information management system. In these situations, samples of student work may save considerable time—if the samples are sufficiently large and representative to yield useful, generalizable evidence of student learning, as discussed above.

 

For more information on sampling, see Chapter 12 of the third edition of Assessing Student Learning: A Common Sense Guide.

 

General education as an economic driver?

Posted on October 19, 2019 at 8:25 AM

At a recent workshop on general education in a region with low educational and income levels, I pointed out that general education can be a driver of economic development.


Generally speaking, people with more education earn more money. They spend more, pay more taxes, and in other ways contribute to regional economic development.

 

We also know that college students are most likely to drop out of college during their first year or two—when they’re taking gen ed courses. This means gen ed courses can have a significant impact on degree completion and therefore income and regional economic development.


We also know that students are more likely to persist when they’re actively engaged in their learning and see relevance in their learning.


This means that what is taught in gen ed courses and how they’re taught can have a significant impact on degree completion and therefore on income and regional economic development. Focus on teaching transferable thinking skills more than memorized knowledge, and use active learning strategies rather than having students sit passively through a lecture, and you may make an important contribution to your region’s economic development.

What's the difference between course and program learning goals?

Posted on September 5, 2019 at 8:15 AM

Let me begin with a brief sidebar on assessment vocabulary. Assessment in higher education is relatively new—only a few decades old—and we don’t yet have a standard vocabulary. Specifically, we don’t have agreement on the terms “learning objectives,” “learning competencies,” “learning goals,” and “learning outcomes.” Some people draw distinctions among these terms; I don’t. Many people use the term “learning outcome,” even creating acronyms for course learning outcomes (CLOs) and program learning outcomes (PLOs). I prefer the term “learning goal” because I’ve found some people think “learning outcomes” refer to assessment results—the actual learning outcome as opposed to the intended or expected learning outcome. I don’t want to make assessment any more confusing than it already is!


Learning goals (or whatever you want to call them) describe what students will be able to do as a result of successful completion of a learning experience, be it a course, program or some other learning experience. So course learning goals describe what students will be able to do upon passing the course, and program learning goals describe what students will be able to do upon successfully completing the (degree or certificate) program.


Course and program learning goals are not comprehensive lists of every single minute thing students will learn. (An important exception: some specialized accreditors do have long lists of required competencies.) Instead, an effective course or program focuses on a few key learning goals that are so important that they are addressed throughout the curriculum. Key course learning goals should be addressed through multiple assignments. Key program learning goals should be addressed in at least two required courses or other program requirements. The reason is that we want students to learn these important things really well, and students learn best through repeated practice in a variety of contexts. It’s simply unfair to both students and faculty to place full responsibility for student achievement of a key course learning goal on just one assignment. It’s similarly unfair to students and faculty to place full responsibility for student achievement of a key program learning goal on just one faculty member or one required course.


Because programs are, of course, broader than courses, program learning goals are typically broader than course learning goals. Course learning goals may address the building blocks necessary to achieve the program learning goal. Or they may address aspects or contexts of the program learning goal.


Here are three examples from Chapter 4 of my book Assessing Student Learning: A Common Sense Guide.

  • Several courses in a program may each help students develop a specific technological skill. Those course learning goals collectively help students achieve a program learning goal to use technologies appropriately and effectively.
  • A course learning goal that students solve a specific kind of problem helps students prepare to achieve a program learning goal to design appropriate approaches to solving a variety of problems in the discipline.
  • An English course on Shakespeare might have a course learning goal to analyze scholarly views on character motivations. This learning goal, along with other course learning goals in other English literature courses, prepares students to achieve the English program learning goal to conduct research on issues in the study of literature.


By the time students reach the program’s capstone requirement, the course and program learning goals may be the same. If the capstone is a research project, for example, the capstone’s learning goals may include program learning goals addressing research, written communication, and information literacy skills. If the capstone is a field experience, the capstone’s learning goals may include program learning goals addressing clinical, technology, communication, and interpersonal skills.


I’ve found that, if faculty are struggling to articulate program learning goals, the problem is often the program’s curriculum. As I frequently point out, a collection of courses is not a program. But I see a lot of academic programs that are exactly that: collections of courses, nothing more. They lack coherence and focus; there are no common threads of shared program learning goals that bind the courses together.


For more information on learning goals and curriculum design, see Chapters 4 and 5 of the 3rd edition of Assessing Student Learning: A Common Sense Guide.

Who is your audience?

Posted on August 7, 2019 at 6:30 AM

In my July 9, 2019, blog post I encouraged using summertime to reflect on your assessment practices, starting with the question, “Why are we assessing?”


Here are the next questions on which I suggest you reflect:

  • Who are our audiences for the products we’re generating through our assessment processes?
  • What decisions are they making?
  • How can the products of our assessment work help them make better decisions?


In other words, before planning any assessment, figure out the decisions the assessment results should inform, then design the assessment to help inform those decisions.


Your answers to the questions I’ve listed will affect the length, format, and even the vocabulary you use in each product. Consider these products of assessment processes:


Assessment results. The key audience for assessment results should be obvious: the faculty and administrators who need them to make decisions, especially what and how to teach, how to help students learn and succeed, and how best to deploy scarce resources.


Faculty and administrators are always making these decisions. The problem is that many people make those decisions in a “data-free zone.” They make a decision simply because someone thinks they have a good idea or perhaps because a couple of students complained about something.


Today time and resources at virtually every institution are limited. We can no longer afford to make decisions simply because people think they have a good idea. Before plunging ahead with a decision, we first need some evidence that we’ve identified the problem correctly and that our solution has a good chance of solving the problem.


In his book How to Measure Anything, Douglas Hubbard points out we’re not aiming to make infallible decisions, just better decisions than we would without assessment results.


Many reports of assessment results say the results have been used only to make tweaks to assignments and course curricula (“We’ll emphasize this more in class.”). But what if, say, six programs all find that their seniors can’t analyze data well? That calls for another audience: your institution’s academic leadership team. Your institution needs a process to examine assessment results across programs holistically—and probably qualitatively—to identify any pervasive issues and bring them to the attention of academic leaders so they can provide professional development and other support to address those issues across your institution.


All this suggests that you need to involve your audiences in designing your assessments and your reports of assessment results, both to make sure you’re providing the information they need and to make sure it’s in a format they can easily understand and use.


Learning goals have several audiences. The most important audience is students, because research has shown that many students learn more effectively when they understand what they’re supposed to be learning. Prospective students—those who are considering enrolling in your institution, program, course, or co-curricular learning experience—are another important audience. Key learning goals might help convince them to enroll (“I’ve always wanted to learn that” or “I can see why it would be important to learn that.”). A third audience is potential employers (“These are the skills I’m looking for when I hire people, so I’m going to take a close look at graduates of this program.”). And a fourth audience is potential funding sources such as foundations, donors, and government policymakers (“This institution or program teaches important things, the kinds of skills people need today, so it’s a good place to invest our funds.”).


All these audiences need learning goals stated in clear, simple terms that they will easily understand. Academic jargon and complex statements have no place in learning goals.


Strategic and unit goals. These often have two key audiences: the employees who will help accomplish the goals and potential funding sources such as donors. Both need goals stated in clear, simple terms that they will easily understand so they can figure out where the institution or unit is headed, what it will look like in a few years, and how they can help achieve the goals.


Curriculum maps. Curriculum maps are a tool to help faculty (1) analyze the effectiveness of their curricula and (2) identify the best places to assess student achievement of key learning goals. So they need to be designed in ways that help faculty do both quickly and easily.


Student assignments. In many assignments we give students, the implicit audience for their work—be it a paper, presentation, or performance—is us: the faculty or staff member giving the assignment. That doesn’t prepare students well for creating work for other audiences. When I taught first-year writing, one assignment was to write solicitations for gifts to a charity to two different audiences (and a third statement comparing the two). When I taught statistics, I had students write a one-paragraph summary of their statistical test, addressed to the hypothetical individual who requested the analysis. When I taught a graduate course in educational research methods, I had students not only draft the first three chapters of their theses but deliver mock presentations to a foundation explaining, justifying, and seeking funding for their research.


Documentation of assessment processes (how each learning goal has been assessed). The key audience here is the faculty and staff responsible for the program, course, or other learning experience being assessed. They can use this documentation to avoid reinventing the wheel (“How did we assess this last time?”).


Another audience for documentation of assessment processes is whatever group is overseeing and supporting assessment efforts at your institution, such as an assessment committee. This group can use this documentation to (1) recognize and honor good practices, (2) share those good assessment practices with others at your college, (3) give each program or unit feedback on how well its assessment work meets the characteristics of good assessment practices, and (4) plan professional development to address any pervasive issues they see in how assessment is being done.


Documentation of uses of assessment results. Here again the key audience is the faculty and staff responsible for the program, course, or other learning experience being assessed. They can use this information to track the impact of improvements they’ve attempted (“We tried adding more homework problems but that didn’t help much. Maybe this time we could try incorporating these skills into two other required courses.”).


Why haven’t I mentioned your accreditor as an audience of your assessment products? Accreditors are a potential audience for everything I’ve mentioned here, but they’re a secondary audience. They are most interested in the impact of your assessment products on students, colleagues, and the other audiences I’ve mentioned here. They want to see what you’ve shared with your key audiences—and how those audiences have used what you’ve shared with them. Most of all, they want your summary and candid, forthright analysis of the overall effectiveness of your institution’s or program’s assessment products in helping those audiences make decisions.

Why are you assessing?

Posted on July 9, 2019 at 3:50 PM

Summer is a great time to reflect on and possibly rethink your assessment practices. I’m a big believer in form following function, so I think the first question to reflect on should be, “Why are we doing this?” You can then reflect on how well your assessment practices achieve those purposes.


In Chapter 6 of my book Assessing Student Learning I present three purposes of assessment. Its fundamental purpose is, of course, giving students the best possible education. Assessment accomplishes this by giving faculty and staff feedback on what is and isn’t working and insight into changes that might help students learn and succeed even more effectively.


The second purpose of assessment is what I call stewardship. All colleges run on other people’s money, including tuition and fees paid by students and their families, government funds paid by taxpayers, and scholarships paid by donors. All these people deserve assurance that your college will be a wise steward of their resources, spending those resources prudently, effectively, and judiciously. Stewardship includes using good-quality evidence of student learning to help inform decisions on how those resources are spent, including how everyone spends their time. Does service learning really help develop students’ commitment to a life of service? Does the gen ed curriculum really help improve students’ critical thinking skills? Does the math requirement really help students analyze data? And are the improvements big enough to warrant the time and effort faculty and staff put into developing and delivering these learning experiences?


The third purpose of assessment is accountability: assuring your stakeholders of the effectiveness of your college, program, service, or initiative. Stakeholders include current and prospective students and their families, employers, government policy makers, alumni, taxpayers, governing board members…and, yes, accreditors. Accountability includes sharing both successes and steps being taken to make appropriate, evidence-based improvements.


So your answers to “Why are we doing this?” will probably be variations on the following themes, all of which require good-quality assessment evidence:

  • We want to understand what is and isn’t working and what changes might help students learn and succeed even more effectively.
  • We want to understand if what we’re doing has the desired impact on student learning and success and whether the impact is enough to justify the time and resources we’re investing.
  • Our stakeholders deserve to see our successes in helping students learn and succeed and what we’re doing to improve student learning and success.

Culturally Responsive Assessment

Posted on June 8, 2019 at 6:25 AM

I have the honor of serving as one of the faculty of this year's Mission Fulfillment Fellowship of the Northwest Commission on Colleges and Universities (NWCCU). One of the readings that’s resonated most with the Fellows is Equity and Assessment: Moving Towards Culturally Responsive Assessment by Erick Montenegro and Natasha Jankowski. 


A number of the themes of this paper resonate with me. One is that I’ve always viewed assessment as simply a part of teaching, and the paper confirms that there’s a lot of overlap between culturally responsive pedagogy and culturally responsive assessment.


Second, a lot of culturally responsive assessment concepts are simply about being fair to all students. Fairness is a passion of mine and, in fact, the subject of the very first paper I wrote on assessment in higher education twenty years ago. Fairness includes:

  • Writing learning goals, rubrics, prompts (assignments), and feedback using simple, clear vocabulary that entry-level students can understand, including defining any terms that may be unfamiliar to some students.
  • Matching your assessments to what you teach and vice versa. Create rubrics, for example, that focus on the skills you have been helping students demonstrate, not the task you’re asking students to complete.
  • Helping students learn how to do the assessment task. Grade students on their writing skill only if you have been explicitly teaching them how to write in your discipline and giving them writing assignments and feedback. 
  • Giving students a variety of ways to demonstrate their learning. Students might demonstrate information literacy skills, for example, through a deck of PowerPoint slides, poster, infographic, mini-class, graphic novel, portfolio, or capstone project, to name a few.
  • Engaging and encouraging your students, giving them a can-do attitude.


Third, a lot of culturally responsive pedagogy and assessment concepts flow from research over the last 25 years on how to help students learn and succeed, which I’ve summarized in List 26.1 in my book Assessing Student Learning: A Common Sense Guide. We know, for example, that some students learn better when:

  • They see clear relevance and value in their learning activities.
  • They understand course and program learning goals and the characteristics of excellent work, often through a rubric.
  • Learning activities and grades focus on important learning goals. Faculty organize curricula, teaching practices, and assessments to help students achieve important learning goals. Students spend their time and energy learning what they will be graded on.
  • New learning is related to their prior experiences and what they already know, through both concrete, relevant examples and challenges to their existing paradigms.
  • They learn by doing, through hands-on practice engaging in multidimensional real world tasks, rather than by listening to lectures.
  • They interact meaningfully with faculty—face-to-face and/or online.
  • They collaborate with other students—face-to-face and/or online—including those unlike themselves.
  • Their college and its faculty and staff truly focus on helping students learn and succeed and on improving student learning and success.

These are all culturally responsive pedagogies.


So, in my opinion, the concept of culturally responsive assessment doesn’t break new ground as much as it reinforces the importance of applying what we already know: ensuring that our assessments are fair to all students, using research-informed strategies to help students learn and succeed, and viewing assessment as part of teaching rather than as a separate add-on activity.


How do we apply what we know to students whose cultural backgrounds and experiences are different from our own? In addition to the ideas I’ve already listed, here are some practical suggestions for culturally responsive assessment, gleaned from Montenegro and Jankowski’s paper and my own experiences working with people from a variety of cultures and backgrounds:

  1. Recognize that, like any human being, you’re not impartial. Grammatical errors littering a paper may make it hard, for example, for you to see the good ideas in it.
  2. Rather than looking on culturally responsive assessment as a challenge, look on it as a learning experience: a way to model the common institutional learning outcome of understanding and respecting perspectives of people different from yourself.
  3. Learn about your students’ cultures. Ask your institution to develop a library of short, practical resources on the cultures of its students. For cultures originating in countries outside the United States, I do an online search for business etiquette in that country or region. It’s a great way to quickly learn about a country’s culture and how to interact with people there sensitively and effectively. Just keep in mind that readings won’t address every situation you’ll encounter.
  4. Ask your students for help in understanding their cultural background.
  5. Involve students and colleagues from a variety of backgrounds in articulating learning goals, designing rubrics, and developing prompts (assignments).
  6. Recognize that students for whom English is a second language may find it particularly hard to demonstrate their learning through written assignments and oral presentations. They may demonstrate their learning more effectively through non-verbal means such as a chart or infographic.
  7. Commit to using the results of your assessments to improve learning for all students, not just the majority or plurality.

Understanding direct and indirect evidence of student learning

Posted on May 10, 2019 at 8:50 AM

A recent question posted to the ASSESS listserv led to a lively discussion of direct vs. indirect evidence of student learning, including what they are and the merits of each.


I really hate jargon, and “direct” and “indirect” is right at the top of my list of jargon I hate. A few years ago I did a little poking around to try to figure out who came up with these terms. The earliest reference I could find was in a government regulation. That makes sense—governments are great at coming up with obtuse jargon!


I suspect the terms came from the legal world, which uses the concepts of direct and circumstantial evidence. Direct evidence in the legal world is evidence that supports an assertion without the need for additional evidence. A witness’s firsthand knowledge or direct recollection is an example of direct evidence. Circumstantial evidence is evidence from which reasonable inferences may be drawn.


In the legal world, both direct and circumstantial evidence are acceptable and each alone may be sufficient to make a legal decision. Here’s an often-cited example: If you got up in the middle of the night and saw that it was snowing, that’s direct evidence that it snowed overnight. If you got up in the morning and saw snow on the ground, that’s circumstantial evidence that it snowed overnight. Obviously both are sufficient evidence that it snowed overnight.


But let’s say you got up in the morning and saw that the roads were wet. That’s circumstantial evidence that it rained overnight. But the evidence is not as compelling, because there might be other reasons the roads were wet. It might have snowed and the snow melted by dawn. It might have been foggy. Or street cleaners may have come through overnight. In this example, this circumstantial evidence would be more compelling if it were accompanied by corroborating evidence, such as a report from a local weather station or someone living a mile away who did get up in the middle of the night and saw rain.


So, in the legal world, direct evidence is observed and circumstantial evidence is inferred. Maybe “observed” and “inferred” would be better terms for direct and indirect evidence of student learning. Direct evidence can be observed through student products and performances. Indirect evidence must be inferred from what students tell us through things like surveys and interviews, from what faculty tell us through things like grades, or from student behaviors such as graduation or job placement.


But the problem with using “observed” and “inferred” is that all student learning is inferred to some extent. If a crime is recorded on video, that’s clearly direct, observable evidence. But if a student writes a research paper or makes a presentation or takes a test, we’re only observing a sample of what they’ve learned, and maybe it’s not a good sample. Maybe the test happened to focus heavily on the concepts the student didn’t learn. Maybe the student was ill the day of the presentation. When we assess student learning, we’re trying to see into a student’s mind. It’s like looking into a black box fitted with lenses that are all somewhat blurry or distorted. We may need to look through several lenses, from several angles, to infer reasonably accurately what’s inside.


In the ASSESS listserv discussion, John Hathcoat and Jeremy Penn both suggested that direct and indirect evidence fall on a continuum. This is why. Some lenses are clearer than others. Some direct evidence is more compelling or convincing than others. If we see a nursing student intubate a patient successfully, we can be pretty confident that the student can perform this procedure correctly. But if we assess a student essay, we can’t be as confident about the student’s writing skill, because the skill level displayed can depend on factors such as the essay’s topic, the time and circumstances under which the student completes the assignment, and the clarity of the prompt (instructions).


So I define direct evidence as not only observable but sufficiently convincing that a critic would be persuaded. Imagine someone prominent in your community who thinks your college, your program, or your courses are a joke—students learn nothing worthwhile in them. Direct evidence is the kind that the critic wouldn’t challenge. Grades, student self-ratings, and surveys wouldn’t convince that critic. But rubric results, accompanied by a few samples of student work, would be harder for the critic to refute.


So should faculty be asked or required to provide direct and indirect evidence of student learning? If your accreditor requires direct and indirect evidence, obviously yes. Otherwise, the need for direct evidence depends on how it will be used. Direct evidence should be used, for example, when deciding whether students will progress or graduate or whether to fund or terminate a program. The need for direct evidence also depends on the likelihood that the evidence will be challenged. For relatively minor uses, such as evaluating a brief co-curricular experience, indirect evidence may be just as useful as direct evidence, if not even more insightful.


One last note on direct/observable evidence: learning goals for attitudes, values, and dispositions can be difficult if not impossible to observe. That’s because, as hard as it is to see into the mind (with that black box analogy), it’s even harder to see into the soul. One of the questions on the ASSESS listserv was what constitutes direct evidence that a dancer dances with confidence. Suppose you’re observing two dancers performing. One has enormous confidence and the other has none. Would you be able to tell them apart from their performances? If so, how? What would you see in one performance that you wouldn’t see in the other? If you can observe a difference, you can collect direct evidence. But if the difference is only in their soul—not observable—you’ll need to rely on indirect evidence to assess this learning goal.

What is good assessment, revisited

Posted on April 17, 2019 at 9:00 AM

Another week, another critique of assessment, this one at the Academic Resource Conference of the WASC Senior College and University Commission.


The fundamental issue is that, more than a quarter century into the higher ed assessment movement, we still aren’t doing assessment very well. So this may be a good time to reconsider, “What is good assessment?”


A lot of people continue to point to the nine Principles of Good Practice for Assessing Student Learning developed by the old American Association for Higher Education back in 1992. In fact, NILOA once published a statement that averred that they are “aging nicely.” I’ve never liked them, however. One reason is that they combine principles of good assessment practice with principles of good assessment results without distinguishing the two. Another is that nine principles are, I think, too many—I’d rather everyone focus on just a few fundamental principles.


But most important, I think they don’t focus on the right things. They overemphasize some minor traits of good assessment (I’ve seen plenty of good assessments conducted without much student involvement, for example) and are silent on some important ones. They say nothing, for example, about the need for assessment to be cost-effective, and I think that omission is a big reason why assessment is under fire today. A year ago, for example, I did a content analysis of comments posted in response to two critiques of assessment published in the Chronicle of Higher Education and the New York Times. Almost 40% of the comments talked about what a waste of time and resources assessment work is.


When I was Director of AAHE’s Assessment Forum in 1999-2000, I argued that it was time to update them, to no avail. In the mid-2000s, I did a lit review of principles of good assessment practice. (You’d be amazed how many there are! Here’s an intriguing one from 2014.) I created a new model of just five principles, which I presented at several conferences. Good assessment practices:

  1. Lead to results that are useful and used.
  2. Flow from and focus on clear and important goals.
  3. Are cost-effective, yielding results that are useful enough to be worth the time and resources invested.
  4. Yield reasonably accurate and truthful results.
  5. Are valued.


These are not discrete, of course, and since developing this model I’ve played around with it. About five years ago I took it down to two principles. Under this model, good assessment practices:

  1. Yield results that are used in meaningful ways to improve teaching and learning. This can only happen if assessment practices focus on clear and important goals and yield reasonably accurate and truthful results. And using assessment results to inform meaningful decisions is the best way to show that assessment work is valued.
  2. Are sustained and pervasive. This can only happen if assessment practices are cost-effective and are valued.


While I like the simplicity of this model, it buries the idea that assessments should be cost-effective, which we really need to highlight. Today when I do presentations on good assessment, I present the following four traits, because these are the traits we most need to focus on today. Good assessment practices:

  1. Lead to results that are useful and used. This is what psychometricians call consequential validity. I continue to think that this is THE most important characteristic of effective assessment practices—all other traits of good assessment practice flow from this one. One corollary, for example, is that assessment results must be conveyed clearly, succinctly, and meaningfully, in ways that facilitate decision-making.
  2. Flow from and focus on clear and important goals. While this is a corollary of the useful-and-used principle, it is so important, and so frequently a shortcoming of current assessment practices, that I highlight it separately. Learning goals need to be not only clear but relevant to students, employers, and society. They represent not what we want to teach but what students most need to learn. And those goals are treated as promises to students, employers, and society: if you pass this course or graduate, you will be able to do these things, and we will use assessments to make sure.
  3. Are cost-effective, yielding results that are useful enough to be worth the time and resources invested. This is a major shortcoming of many current assessment practices. They suck up enormous amounts of time and dollars, and whatever is learned from them just isn’t worth the time and money invested.
  4. Are part of the everyday life of the college community. In other words, the culture is one of collaboration and evidence-informed planning and decision making.

Is it time to update our learning goals?

Posted on March 27, 2019 at 5:40 AM

Burning Glass Technologies recently released a report on a study of the skills that employers included in online job postings gathered from over 50,000 online job boards, newspapers, and employer websites.


Before I get to the meat of their findings, an important caveat: while 50,000 online employment sites may sound impressive, they’re clearly not representative of all jobs sought and filled by college graduates. The jobs discussed in the report are heavy on information technology and business. There’s no mention of many other fields, such as teaching, social work, the sciences, the clergy, or music. The report acknowledges the heavy weighting toward IT by separating results for digital occupations from other occupations, but I still don’t think the results are representative of all employers everywhere. That said, let’s dive in.


A few of the skills that employers seek are ones that already show up on virtually every college’s list of institutional or general education learning goals: communication, critical thinking, and analytical skills. Two others—collaboration and creativity—show up occasionally although, in my view, far too infrequently.


The remaining skills are largely absent from institutional or general education learning goals:

  • Analyzing data
  • Communicating data
  • Digital design
  • Project management
  • “Business process” (skills with cost control, business operations, planning, and strategy)
  • IT skills including computer programming, software development, data management, and digital security


I’m not going to recommend anything based on this one, somewhat flawed study. But it generates some ideas for all of us to think about:

  • Should our curricula be giving greater emphasis to creativity, collaboration, and visual communication?
  • I’ve heard arguments that gen ed math courses should be statistics courses, and this study, showing the need for skills in analyzing and communicating data, reinforces them.
  • Should we not only require program capstones but require that they be projects that students are responsible for planning and completing, thereby developing project management skills? Should we encourage group capstone projects, thereby helping students develop collaboration skills?
  • Would liberal arts students benefit from a course or badge that gives them basic skills with IT and business processes?

Why are we doing curriculum maps?

Posted on February 23, 2019 at 5:55 AM

Curriculum maps have become trendy in the last few years. They’re built into some commercial assessment management systems. But to some faculty they’re simply one more pointless chore to be completed. Why bother creating a curriculum map?


First, what is a curriculum map? It’s a simple chart identifying the key learning goals addressed in each of the curriculum’s key elements or learning activities. A curriculum map for an academic program identifies the program learning goals addressed in each program requirement. A curriculum map for a course identifies the course learning goals addressed in each learning experience and assessment.


So why are we creating curriculum maps? They’re handy tools for analyzing how well a curriculum meets many of the traits of effective curricula discussed in Chapter 5 of my book Assessing Student Learning: A Common Sense Guide:


Is the curriculum designed to ensure that every student has enough opportunity to achieve each of its key learning goals? A program curriculum map will let you know if a program learning goal is addressed only in elective courses or only in one course.


Is the curriculum appropriately coherent? Is it designed so students strengthen their achievement of program learning goals as they progress through the program? Or is attention to program learning goals scattershot and disconnected?


Does the curriculum give students ample and diverse opportunities to achieve its learning goals? Many learning goals are best achieved when students experience them in diverse settings, such as courses with a variety of foci.


Does the curriculum have appropriate, progressive rigor? Do higher-numbered courses address program learning goals on a more advanced level than introductory courses? While excessive prerequisites may be a barrier to completion, do upper-level courses have appropriate prerequisites to ensure that students in them tackle program learning goals at an appropriately advanced level?


Does the curriculum conclude with a capstone experience? Not only is this an excellent opportunity for students to integrate and synthesize their learning, but it’s an opportunity for students to demonstrate their achievement of program learning goals as they approach graduation. A program curriculum map will tell you if you have a true capstone in which students synthesize their achievement of multiple program learning goals.


Is the curriculum sufficiently focused and simple? You should be able to view the curriculum map on one piece of paper or computer screen. If you can’t do this, your curriculum is probably too complicated and therefore might be a barrier to student success.


Is the curriculum responsive to the needs of students, employers, and society? Look at how many program learning goals are addressed in the program’s internship, field experience, or service learning requirement. If a number of learning goals aren’t addressed there, the learning goals may not be focusing sufficiently on what students most need to learn for post-graduation success.


(Oh, and, yes, curriculum maps can also be used to identify the best places to assess the curriculum’s learning goals—typically in courses or other requirements that students complete right before graduating. But I don’t think that should be the main purpose of a curriculum map, because you can figure that out without going to the trouble of creating one.)
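

To make a couple of these checks concrete, here is a minimal sketch in Python of a program curriculum map represented as a simple data structure, along with a coverage check for each program learning goal. The requirements, goal names, and mappings are hypothetical, invented purely for illustration; a real map would of course use your own program’s requirements and goals.

    # A minimal sketch: a program curriculum map as a dictionary mapping each
    # program requirement to the program learning goals it addresses.
    # All requirements, goals, and mappings below are hypothetical examples.

    curriculum_map = {
        "ENG 101 (required)": {"Written Communication"},
        "MTH 210 (required)": {"Quantitative Reasoning"},
        "BUS 305 (required)": {"Written Communication", "Teamwork"},
        "BUS 490 (capstone)": {"Written Communication", "Quantitative Reasoning",
                               "Teamwork", "Ethical Reasoning"},
    }

    program_goals = ["Written Communication", "Quantitative Reasoning",
                     "Teamwork", "Ethical Reasoning"]

    # How many required elements address each program learning goal?
    for goal in program_goals:
        count = sum(goal in goals for goals in curriculum_map.values())
        flag = "  <-- addressed in only one requirement" if count <= 1 else ""
        print(f"{goal}: addressed in {count} requirement(s){flag}")

Run against a real map, a check like this immediately surfaces goals that rest on a single requirement, or on none at all.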


Program curriculum maps with the following traits can best help answer these questions.


Elective courses have no place in a curriculum map. Remember, one purpose of the map is to confirm that the curriculum is designed so that every student has enough opportunity to achieve every learning goal. Electives don’t help with this analysis.


List program requirements, not program courses. If students can choose from any of four courses to fulfill a particular requirement, for example, group those four courses together and mark only the program learning outcomes that all four courses address.


Codes can help identify if the curriculum has appropriate, progressive rigor. Some assessment management systems require codes indicating whether a learning goal is introduced, developed further, or demonstrated in each course, rather than simply whether it’s addressed in the course.
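

Building on the sketch above, here is a similarly hypothetical sketch that uses I/D/M codes (introduced, developed further, demonstrated) to flag goals whose coverage does not build toward a demonstration at an advanced level. Again, the courses, goals, and codes are invented for illustration.

    # A minimal sketch of checking progressive rigor with I/D/M codes
    # (Introduced, Developed further, Demonstrated).
    # Courses, goals, and codes are hypothetical examples.

    LEVELS = {"I": 1, "D": 2, "M": 3}

    # Requirements listed in the order students typically take them.
    curriculum_map = [
        ("PSY 101", {"Research Methods": "I", "Written Communication": "I"}),
        ("PSY 210", {"Research Methods": "D"}),
        ("PSY 490", {"Research Methods": "M", "Written Communication": "D"}),
    ]

    for goal in ["Research Methods", "Written Communication"]:
        sequence = [codes[goal] for _, codes in curriculum_map if goal in codes]
        builds = all(LEVELS[a] <= LEVELS[b] for a, b in zip(sequence, sequence[1:]))
        reaches_m = bool(sequence) and sequence[-1] == "M"
        note = "" if builds and reaches_m else "  <-- check progression"
        print(f"{goal}: {' -> '.join(sequence)}{note}")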


Check off a course only if students are graded on their progress toward achieving the learning goal. Cast a suspicious eye at courses for which every program learning goal is checked off. How can those courses meaningfully address all those goals?

Why Do I Assess?

Posted on January 31, 2019 at 7:45 AM

Last year was not one of the best for higher ed assessment. A couple of very negative opinion pieces got a lot of traction among higher ed people who had been wanting to say, “See? Assessment is really as stupid and pointless as I’ve always thought it was.” At some American universities, this was a major setback for assessment progress.


The higher ed assessment community came together quickly with a response that I was proud to contribute to. But now that we’re in 2019, perhaps it would help if each of us in the assessment community reflects on why we’re here. Here’s my story, in three parts, about why assessment is my passion.


The first part is that I’m a data geek, so I find assessment fun. My first job out of grad school was in institutional research, and my favorite part of the job was getting a printout of student survey results and poring over it, trying to find the story in the numbers (to me it’s a treasure hunt), and sharing that story with others in ways that would get them excited about either feeling good about what’s going well or doing something about areas of concern.


The second part is that I love to teach. I’m not a great teacher, but I want to be the best teacher I can. I’ve always looked forward to seeing how my students do on tests and assignments. I can’t wait to tally up how they did on each test question or rubric criterion (that’s the data geek part of me). I cheer the parts they did well on and reflect on the parts where they didn’t. Why did so many miss Question 12? Can I do anything to help them do better, either during what’s left of this class or in the next one? If I can’t figure out what happened, I ask my students at the next class and, trust me, they’re happy to tell me how I screwed up!


The final reason that assessment is my passion is that I’m convinced that part of the answer to the world’s problems today is to help everyone get the best possible education. This dawned on me about 25 years ago, when I was on an accreditation team visiting a seminary. The seminary’s purpose was to educate church pastors (as opposed to, say, researchers or scholars). It was doing a thorough job educating students on doctrine, but there was very little in the curriculum on preparing students to help church members and others hear what Christians call the Good News. There was little attention to helping students develop skills to listen to and counsel church members, communicate with people of diverse backgrounds, and assess community needs, not to mention the practical skills of running a church such as budgeting and fundraising. While I’m not one to push my faith on others, I think the world might be a better place if people truly understood and truly followed the teachings of many faiths. If that’s the case, the world needs pastors well-prepared to do this. The seminary I visited had, I thought, a moral obligation to ensure—through assessment—that its graduates are prepared to be the best possible pastors, with all the skills that pastors need.


Since then, I’ve felt the same about many other colleges and many other disciplines. The world needs great teachers, nurses, lawyers, accountants, and artists. When I’ve visited U.S. service academies, I’m reminded that the U.S. needs great military officers.


Even more, the world needs people who can do all the things we promise in our gen ed curricula. The world needs people who can think critically, who recognize and avoid unethical behavior, who are open to new ideas, who can work with people from diverse backgrounds, who can evaluate the quality of evidence, arguments or claims, who are committed to serving their communities. Again I’m convinced that the world would be a far better place if everyone could do these things well.


None of us can change the world alone. But each of us can do our best with the students in our orbit, trying our best to make sure—through decent-quality assessments—that they’ve really learned what’s most important. Whenever anyone looks at the results of any assessment, be it a class quiz or a college-wide assessment, and uses those results to change what or how they teach, at least some students will get a better education as a result.


We need those better-educated students. This is what drives me. This is why I am devoting my life to helping others learn how to assess student learning. Assessment is one way each of us can help make the world a better place.

Setting meaningful benchmarks and standards, revisited

Posted on January 16, 2019 at 7:45 AM

A recent discussion on the ACCSHE listserv reminded me that setting meaningful benchmarks or standards for student learning assessments remains a real challenge. About three years ago, I wrote a blog post on setting benchmarks or standards for rubrics. Let’s revisit that and expand the concepts to assessments beyond rubrics.


The first challenge is vocabulary. I’ve seen references to goals, targets, benchmarks, standards, thresholds. Unfortunately, the assessment community doesn’t yet have a standard glossary defining these terms (although some accreditors do). I now use standard to describe what constitutes minimally acceptable student performance (such as the passing score on a test) and target to describe the proportion of students we want to meet that standard. But my vocabulary may not match yours or your accreditor's!


The second challenge is embedded in that next-to-last sentence. We’re talking about two different numbers here: the standard describing minimally acceptable performance and the target describing the proportion of students achieving that performance level. That makes things even more confusing.


So how do we establish meaningful standards? There are four basic ways. Three are:

1. External standards: Sometimes the standard is set for us by an external body, such as the passing score on a licensure exam.

2. Peers: Sometimes we want our students to do as well as or better than their peers.

3. Historical trends: Sometimes we want our students to do as well as or better than past students.


Much of the time none of these options is available to us, leaving us to set our own standard, what I call a local standard and what others call a competency-based or criterion-referenced standard. Here are the steps to setting a local standard:


Focus on what would not embarrass you. Would you be embarrassed if people found out that a student performing at this level passed your course or graduated from your program or institution? Then your standard is too low. What level do students need to reach to succeed at whatever comes next—more advanced study or a job?


Consider the relative harm in setting the standard too high or too low. A too-low standard means you’re risking passing or graduating students who aren’t ready for what comes next and that you’re not identifying problems with student learning that need attention. A too-high standard may mean you’re identifying shortcomings in student learning that may not be significant and possibly using scarce time and resources to address those relatively minor shortcomings.


When in doubt, set the standard relatively high rather than relatively low. Because every assessment is imperfect, you’re not going to get an accurate measure of student learning from any one assessment. Setting a relatively high bar increases the chance that every student is truly competent on the learning goals being assessed.


If you can, use external sources to help set standards. A business advisory board, faculty from other colleges, or a disciplinary association can all help get you out of the ivory tower and set defensible standards.


Consider the assignment being assessed. Essays completed in a 50-minute class are not going to be as polished as papers created through scaffolded steps throughout the semester.


Use samples of student work to inform your thinking. Discuss with your colleagues which seem unacceptably poor, which seem adequate though not stellar, and which seem outstanding, then discuss why.


If you are using a rubric to assess student learning, the standard you’re setting is the rubric column (performance level) that defines minimally acceptable work. This is the most important column in the rubric and, not coincidentally, the hardest one to complete. After all, you’re defining the borderline between passing and failing work. Ideally, you should complete this column first, then complete the remaining columns.


Now let’s turn from setting standards to setting targets for the proportions of students who achieve those standards. Here the challenge is that we have two kinds of learning goals. Some are essential. We want every college graduate to write a coherent, grammatically correct paragraph, for example. I don’t want my tax returns prepared by an accountant who can complete them correctly only 70% of the time, and I don’t want my prescriptions filled by a pharmacist who can fill them correctly only 70% of the time! For these essential goals, we want close to 100% of students meeting our standard.


Then there are aspirational goals, which not everyone need achieve. We may want college graduates to be good public speakers, for example, but in many cases graduates can lead successful lives even if they’re not. For these kinds of goals, a lower target may be appropriate.


Tests and rubrics often assess a combination of essential and aspirational goals, which suggests that overall test or rubric scores often aren’t very helpful in understanding student learning. Scores for each rubric trait or for each learning objective in the test blueprint are often much more useful.
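

To illustrate, here is a minimal sketch in Python of reporting rubric results trait by trait against a standard and trait-specific targets. The scores, the standard of 3 on a 4-point scale, and the targets are all hypothetical; the only point is that essential traits carry a higher target than aspirational ones.

    # Hypothetical rubric scores (1-4 scale), one list per rubric trait.
    scores = {
        "Organization (essential)":     [4, 3, 3, 2, 4, 3, 3, 4],
        "Grammar (essential)":          [3, 3, 4, 4, 3, 3, 3, 3],
        "Oral delivery (aspirational)": [2, 3, 4, 2, 3, 3, 4, 3],
    }

    standard = 3  # hypothetical minimally acceptable performance level
    targets = {   # hypothetical proportions of students expected to meet the standard
        "Organization (essential)":     0.95,
        "Grammar (essential)":          0.95,
        "Oral delivery (aspirational)": 0.70,
    }

    for trait, trait_scores in scores.items():
        met = sum(s >= standard for s in trait_scores) / len(trait_scores)
        verdict = "meets" if met >= targets[trait] else "falls short of"
        print(f"{trait}: {met:.0%} at or above the standard "
              f"({verdict} the {targets[trait]:.0%} target)")

Reported this way, the results show at a glance which traits fall short of their targets, something a single overall score would hide.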


Bottom line here: I have a real problem with people who say their standard or target is 70%. It’s inevitably an arbitrary number with no real rationale. Setting meaningful standards and targets is time-consuming, but I can think of few tasks that are more important, because it’s what helps ensure that students truly learn what we want them to…and that’s what we’re all about.


By the way, my thinking here comes primarily from two sources: Setting Performance Standards by Cizek and a review of the literature that I did a couple of years ago for a chapter on rubric development that I contributed to the Handbook on Measurement, Assessment, and Evaluation in Higher Education (https://www.amazon.com/Handbook-Measurement-Assessment-Evaluation-Education/dp/1138892157). For a more thorough discussion of the ideas here, see Chapter 22 (Setting Meaningful Standards and Targets) in the new 3rd edition of my book Assessing Student Learning: A Common Sense Guide.

Getting NSSE (and other assessment) results used

Posted on December 19, 2018 at 10:55 AM

One of my treats this time of year is getting the latest annual report from the National Survey of Student Engagement. I’m an enormous fan of this survey. One reason is that it’s research-based: the questions are all about practices that research has shown help students learn and succeed. Another is that, because the questions mostly ask about specific experiences rather than satisfaction, the results are “actionable”: they make clear what institutions need to do to improve student learning and success.


I’m also a fan of NSSE because of its staff and the time, energy, and thought they’ve put into validating the survey and making the results as relevant and useful as possible.


But I’m also struck by how many institutions I encounter that still aren’t using NSSE results in meaningful ways. If an institution isn’t making good use of NSSE results, what hope is there for it using its student learning assessment results?


I’ve done presentations on why assessment results aren’t used…and keep in mind that’s a different question than why assessment isn’t getting done. There are a lot of potential reasons, some of which may apply to your institution and some of which may not. Several years ago I wrote a blog post that highlighted four possible reasons:

  1. Change must be part of academic culture.
  2. Institutional leaders must commit to and support evidence-based change.
  3. We don’t have a clear sense of what satisfactory results are and aren’t.
  4. Assessment results must be shared clearly and readily.

Most of you reading this aren’t empowered to do much about #1 and #2, and I’ve written a blog post on #3. So let’s focus on #4: Assessment results must be shared clearly and readily. Here are some suggestions:


“Most information is useless. Give yourself permission to dismiss it” (Harris & Muchin, 2002). I think one of the barriers to using NSSE is the sheer volume of information it yields, not to mention the myriad opportunities to slice and dice that information. Before you share NSSE results with anyone, ask yourself, “What are the three most important things I want people to learn from these results?” Here’s an example:

  • In many respects, our students are engaging in their learning more than students at peer institutions.
  • Our first-year students’ study time is declining.
  • Our seniors have fewer capstone experiences than their peers.

That’s plenty for people to chew on! And note that there’s a combination of good news and bad news—it’s not all doom-and-gloom.


Share only what people are willing to act upon. If your institutional community is unwilling to rethink its senior capstone experiences, for example, is it worth sharing NSSE results on those experiences?


Different people need different results. When I've shared NSSE results, I've prepared separate summaries for faculty, for student affairs staff, and for admissions staff (they got all the good news). Know what decisions each group is facing and share only results that will help inform those decisions.


Share a story with a clear point. Give every table, graph, and bulleted list a title that is a sentence that conveys the point of the table. The three points I listed above would make great titles for graphs or bulleted lists.


Consider sharing results through a live slide presentation. After too many years generating reports that no one looked at, I stopped writing reports and instead put key results on PowerPoint slides. Then I invited myself to various meetings to share those slides. This virtually ensured that the results would be at least discussed, if not used. It also forced me to keep the slides and my remarks short and very focused, because my time on the agenda was limited.


Use graphs rather than tables. You want the point to pop out at your audience. NSSE’s website has numerous examples of good visual presentations of results.
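

For example, here is a minimal matplotlib sketch of that kind of graph. The study-time numbers are invented for illustration (they are not NSSE results), and the sketch assumes matplotlib is installed; the point is simply that the title is a sentence stating the finding.

    # A minimal sketch of a bar graph whose title is a sentence that makes the point.
    # The values below are invented for illustration; they are not NSSE results.
    import matplotlib.pyplot as plt

    years = ["2016", "2017", "2018"]
    hours = [15.2, 14.1, 13.0]  # hypothetical average weekly study hours, first-year students

    fig, ax = plt.subplots(figsize=(5, 3))
    ax.bar(years, hours, color="steelblue")
    ax.set_ylabel("Average hours per week")
    ax.set_title("Our first-year students' study time is declining")
    fig.tight_layout()
    plt.show()

Because the title carries the finding, the graph makes its point even when the audience only skims it.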


Make results easy to find and access. If you put your results on a web page, for example, you’ll need strategies to draw your audience to the web page (Jankowski et al., 2012).


For more ideas on sharing assessment results, see Chapter 25 in my new 3rd edition of Assessing Student Learning: A Common Sense Guide.

I'm not a fan of Bloom's

Posted on November 13, 2018 at 6:50 AM

I’m mystified by how Bloom’s taxonomy has pervaded the higher education assessment landscape. I’ve met faculty who have no idea what a rubric or a test blueprint or a curriculum map is, but it’s been burned into their brains that they must follow Bloom’s taxonomy when developing learning goals. This frustrates me no end, because I don’t think Bloom’s is the best framework for considering learning outcomes in higher education.


Bloom’s taxonomy of educational objectives is probably older than you are. It was developed by a committee led by Benjamin Bloom in the 1950s. It divides learning goals into three domains: cognitive, affective (attitudinal), and psychomotor. Within the cognitive domain, it has six levels. Originally these were knowledge, comprehension, application, analysis, synthesis, and evaluation. A 2001 update renamed these levels and swapped the positions of the last two: remember, understand, apply, analyze, evaluate, and create. The last four levels are called higher-order thinking skills because they require students to do more than understand.


So why don’t I like Bloom’s? One reason is that I’ve seen too many faculty erroneously view the six cognitive levels as a hierarchy of prerequisites. Faculty have told me, for example, that first-year courses can only address knowledge and comprehension because students must thoroughly understand a subject before they can begin to think about it. Well, any elementary school teacher can tell you that’s bunk, but the misperception persists.


Even more important is that Bloom’s doesn’t highlight many of the skills and dispositions needed today. Teamwork, ethical judgment, professionalism, and metacognition are all examples of learning goals that don’t fit neatly into Bloom’s. That’s because they’re a combination of the cognitive and affective domains: what educators such as Costa & Kallick and Marzano and his colleagues call habits of mind.


I’m especially concerned about professionalism: coming to work or class on time, coming to work or class prepared to work, completing work on time, planning one’s time, giving work one’s best effort, self-evaluating one’s work, etc. Employers very much want these skills, but they get short shrift in Bloom’s.


So what do I recommend instead? In my workshops I suggest five categories of learning goals:

  • knowledge and understanding
  • career-specific thinking and performance skills
  • transferrable thinking and performance skills (the kinds developed in the liberal arts)
  • attitudes and values
  • habits of mind

But I also like the taxonomies developed by Dee Fink and by Marzano et al.


I wouldn’t expect every course or program to have learning goals in all five of these categories, of course. But I do suggest that no more than half of a course or program’s learning goals be in the knowledge and understanding category.


For more information, see Chapter 4 (Learning Goals: Articulating What You Most Want Students to Learn) in the new 3rd edition of my book Assessing Student Learning: A Common Sense Guide.

Grading group work

Posted on October 27, 2018 at 10:30 AM

Collaborative learning, better known as group work, is an important way for students to learn. Some students learn better with their peers than by working alone. And employers very much want employees who bring teamwork skills.


But group work, such as a group presentation, is one of the hardest things for faculty to grade fairly. One reason is that many student groups include some slackers and some overactive eager beavers. When viewing the product of a group assignment—say they’ve been asked to work together to create a website—it can be hard to discern the quality of individual students’ achievements fairly.


Another reason is that group work is often more about performances than products—the teamwork skills each student demonstrates. As I note in Chapter 21 of Assessing Student Learning: A Common Sense Guide, performances such as working in a team or delivering a group presentation are harder to assess than products such as a paper.


In their book Collaborative Learning Techniques: A Handbook for College Faculty, Elizabeth Barkley, Claire Major, and K. Patricia Cross acknowledge that grading collaborative learning fairly and validly can be challenging. But it’s not impossible. Here are some suggestions.


Have clear learning goal(s) for the assignment. If your key learning goal is for students to develop teamwork skills, your assessment strategy will be very different than if your learning goal is for them to learn how to create a well-designed website.


Make sure your curriculum includes plenty of opportunities for students to develop and achieve your learning goal. If your key learning goal is for students to develop teamwork skills, for example, you’ll need to provide lessons, classwork, and homework that help them learn what good and poor teamwork skills are and practice those skills. Just putting students into a group and letting them fend for themselves won’t cut it—students will just keep using whatever bad teamwork habits they brought with them.


Deal with the slackers--and the overactive eager beavers--proactively. Barkley, Major and Cross suggest several ways to do this. Design a group assignment in which each group member must make a discrete contribution for which they’re held accountable. Make these contributions equitable, so all students must participate evenly. Make clear to students that they’ll be graded for their own contribution as well as for the overall group performance or product. And check in with each group periodically and, if necessary, speak individually with any slackers and also those eager beavers who try to do everything themselves.


Consider observing student groups working together. This isn’t always practical, of course—your presence may stifle the group’s interactions—but it’s one way to assess each student’s teamwork skills. Use a rubric to record what you see. Since you’re observing several students simultaneously, keep the rubric simple enough to be manageable—maybe a rating scale rubric or a structured observation guide, both of which are discussed in the rubrics chapter of Assessing Student Learning.


Consider asking students to rate each other. Exhibit 21.1 in Assessing Student Learning is a rating scale rubric I’ve used for this purpose. I tell students that their groupmates’ ratings of them will be averaged and be 5% of their final grade. I weight peer ratings very low because I don’t want students’ grades to be advantaged or disadvantaged by any biases of their peers.


Give each student two grades: one grade for the group product or performance and one for his or her individual contribution to it. This only works when it’s easy to discern each student’s contribution. You can weight the two grades however you like—perhaps equally, or perhaps weighting the group product or performance more heavily than individual contributions, or vice versa.
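

As a concrete illustration, here is a minimal sketch in Python of one way to combine a group grade, an individual grade, and averaged peer ratings into a final grade. The weights, including the 5% peer-rating weight mentioned above, are examples rather than a recommendation, and the scores are invented.

    # A minimal sketch of combining a group grade, an individual grade, and
    # averaged peer ratings into one final grade. All weights and scores are
    # hypothetical examples, not a prescription.

    def final_grade(group_score, individual_score, peer_ratings,
                    group_weight=0.50, individual_weight=0.45, peer_weight=0.05):
        """All scores on a 0-100 scale; the three weights should sum to 1.0."""
        peer_avg = sum(peer_ratings) / len(peer_ratings)
        return (group_weight * group_score
                + individual_weight * individual_score
                + peer_weight * peer_avg)

    # Example: a strong group product, a somewhat weaker individual contribution,
    # and ratings from three groupmates.
    print(round(final_grade(group_score=92, individual_score=84,
                            peer_ratings=[90, 85, 95]), 1))

With weights like these, a strong group product cannot fully offset a weak individual contribution, and peer ratings nudge the grade rather than drive it.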


Give the group a total number of points, and let them decide how to divide those points among group members. Some faculty have told me they’ve used this approach and it works well.


Barkley, Major and Cross point out that there’s a natural tension between promoting collaborative learning and teamwork and assigning individual grades. Whatever approach you choose, try to minimize this tension as much as you can.

Consider professionalism as a learning goal

Posted on September 23, 2018 at 10:35 AM

A recent Inside Higher Ed piece, “The Contamination of Student Assessment” by Jay Sterling Silver, argued that behaviors such as class attendance and class participation shouldn’t be factored into grades because grades should be “unadulterated measurements of knowledge and skills that we represent them to be—and that employers and graduate admissions committees rely on them to be.” In other words, these behaviors are unrelated to key learning goals.


He’s got a point; a grade should reflect achievement of key learning goals. (That’s what competency-based education tries to achieve, as I discussed in a blog several years ago.) But I think behaviors like coming to class, submitting work on time, giving assignments one’s best effort, and participating in class discussions are important. They fall under what I call professionalism: traits that include coming to work on time and prepared to work, dependably completing assigned work thoroughly and on time, giving one’s work one’s best effort, and managing one’s time.


Surveys of employers confirm that these are important traits in the people they hire. Every few years, for example, Hart Research Associates conducts a survey for AAC&U on how well employers think college graduates are prepared on a number of key learning outcomes. The 2018 survey added two learning outcomes that weren’t in previous surveys:

  • Self-motivated: ability to take initiative and be proactive
  • Work independently: set priorities, manage time and deadlines

Of the 15 learning outcomes in the 2018 survey, these were tied for #4 in importance by hiring managers.


So I think the answer is to add professionalism as an additional learning goal. Of course “professionalism” isn’t a well-stated learning goal; it’s a category. I leave it up to college and program faculty to decide how best to articulate what forms of professionalism are most important to their students and prospective employers.


Then an assignment like a library research paper might have three learning goals—information literacy, writing, and professionalism—and be graded on all three. Professionalism might be demonstrated by how well students followed the directions, whether the assignment was turned in on time, and whether the student went above and beyond the bare minimum requirements for the assignment.


Professionalism, by the way, isn’t just a skill and isn’t just an attitude. It’s a combination of both, similar to what Arthur Costa and Bea Kallick call habits of mind, which include things like persisting, managing impulsivity, taking responsible risks, and striving for accuracy. One of the reasons I’m not a fan of Bloom’s taxonomy is that it doesn’t really address habits of mind, which—as evidenced by Hart’s new survey—are becoming increasingly important learning goals of a college education.

