|Posted on February 18, 2020 at 2:30 PM|
In November 2019 the US Department of Education (USED) issued “final” regulations for “the secretary’s recognition of accreditation agencies” among other matters. (I put “final” in quotes because things in Washington tend to change every few years.) You can find a link to the relevant pages of the Federal Register here. The new regulations go into effect on July 1, 2020.
Under the new regulations, regional accreditors are no longer required to get Federal approval to change the geographic region in which they accredit (page 58893). A regional accreditor based on the East Coast could, for example, start accepting applications for accreditation from institutions whose main campuses are in Oklahoma.
Will this dramatically change the face of regional accreditation? Let me begin with the caveat that I have not spoken about these regulations with anyone at any of the regional accreditors or anyone involved in the negotiated rulemaking process. I’m just interpreting the language in the Federal Register.
The announcement in the Federal Register explains that “the Department seeks to provide increased transparency and introduce greater competition and innovation that could allow an institution…to select an accrediting agency that best aligns with the institution’s mission, program offerings, and student population” (page 58893). The announcement goes further: “The Department expects that the landscape of institutional accrediting agencies may change over time from one where some agencies only accredit institutions headquartered in particular regions to one where institutional accrediting agencies accredit institutions throughout many areas of the United States based on factors such as institutional mission rather than geography” (page 58894). And the announcement speculates, “A shift from strictly geographic orientation may occur over time, probably measured in years, as…greater competition occurs, spurring an evolving dynamic marketplace. Accrediting agencies may align in different combinations that coalesce around specific institutional dimensions or specialties, such as institution size, specialized degrees, or employment opportunities” (page 58897).
So is there going to be a sudden, huge rush among institutions to move from one regional to another? No way. Here’s one of the roadblocks: “[USED will not] require an agency to accept a new institution…for which it did not have capacity or interest to accredit” (page 58894). The regional accreditors are funded by dues paid by member institutions. They run lean operations, both in terms of staffing and dollars. They don’t have the capacity to accept and process applications from significant numbers of institutions without major increases in staffing and funding. While the announcement in the Federal Register speculates, “Accrediting agencies may develop a new focus area or geographic scope over time as they increase resources to expand their operations” (page 58901), I just don’t see a significant increase in resources happening anytime soon, if ever.
Here’s the second roadblock: “[USED] will not require any institution…to change to a different accrediting agency as a result of these regulatory changes” (page 58894). Let’s imagine a wildly hypothetical scenario: one of the regionals decides it wants to accredit only doctoral institutions. It can’t, because it now accredits community colleges, and USED will not require those community colleges to move to another accreditor. Yes, this accreditor could conceivably put in place standards such as for faculty credentials that are amenable to research universities and difficult for community colleges to comply with. But it couldn’t do that until community colleges have another accreditation home, which brings us back to the first roadblock.
I actually like the idea of the regional accreditors going national. I think competition can be healthy, and I like the idea of the regionals differentiating themselves in ways that better serve the incredible diversity of higher education institutions in the United States. I can envision one accreditor developing standards and processes that are particularly suitable for distance learning institutions, another doing the same for traditional institutions, another doing the same for complex institutions… maybe one doing the same for institutions that want an approach to accreditation that relies on documentation without the effort of extensive institutional self-study or analysis. But I don’t think these new regulations are going to move us appreciably down that road.
|Posted on December 22, 2017 at 7:15 AM|
Virtually all U.S. accreditors (and some state agencies) require the assessment of student learning, but the specifics--what, when, how--can vary significantly. How can programs with multiple accreditations (say regional and specialized) serve two or more accreditation masters without killing themselves in the process?
I recently posted my thoughts on this on the ASSESS listserv, and a colleague asked me to make my contribution into a blog post as well.
Bottom line: I advocate a flexible approach.
Start by thinking about why your institution's assessment coordinator or committee asks these programs for reports on student learning assessment. This leads to the question of why they're asking everyone to assess student learning outcomes.
The answer is that we all want to make sure our students are learning what we think is most important, and if they're not, we want to take steps to try to improve that learning. Any reporting structure should be designed to help faculty and staff achieve those two purposes--without being unnecessarily burdensome to anyone involved. In other words, reports should be designed primarily to help decision-makers at your college.
At this writing, I'm not aware of any regional accreditor that mandates that every program's assessment efforts and results must be reported on a common institution-wide template. When I was an assessment coordinator, I encouraged flexibility in report formats (and deadlines, for that matter). Yes, it was more work for me and the assessment committee to review apples-and-oranges reports but less work and more meaningful for faculty--and I've always felt they're more important than me.
So with this as a framework, I would suggest sitting down with each program with specialized accreditation and working out what's most useful for them.
- Some programs are doing for their specialized accreditor exactly what your institution and your regional accreditor want. If so, I'm fine with asking for a cut-and-paste of whatever they prepare for their accreditor.
- Some programs are doing for their specialized accreditor exactly what your institution and your regional accreditor want, but only every few years, when the specialized review takes place. In these cases, if the last review was a few years ago, I think it's appropriate to ask for an interim update.
- Some programs assess certain learning goals for their specialized accreditor but not others that either the program or your institution views as important. For example, some health/medical accreditors want assessments of technical skills but not "soft" skills such as teamwork and patient interactions. In these cases, you can ask for a cut-and-paste of the assessments done for the specialized accreditor but then an addendum of the additional learning goals.
- At least a few specialized accreditors expect student learning outcomes to be assessed but not that the results be used to improve learning. In these cases, you can ask for a cut-and-paste of the assessments done but then an addendum on how the results are being used.
- Some specialized accreditors, frankly, aren't particularly rigorous in their expectations for student learning assessment. I've seen some, for example, that seem happy with surveys of student satisfaction or student self-ratings of their skills. Programs with these specialized accreditations need to do more if their assessment is to be meaningful and useful.
Again, this flexible approach meant more work for me, but I always felt faculty time was more precious than mine, so I always worked to make their jobs as easy as possible and their work as useful and meaningful as possible.
|Posted on July 25, 2016 at 11:10 AM|
American accreditors fall into three broad groups: regional, national, and specialized. Of the three, regional accreditation is often seen as the most desirable for several reasons. First, regional accreditors are among the oldest accreditors in the U.S. and accredit the most prestigious institutions, giving them an image of quality. Second, employers are increasingly requiring job applicants to hold degrees from regionally accredited institutions. Third, some specialized accreditors require accredited programs to be in a regionally accredited institution. And finally, despite Federal regulations to the contrary, students from nationally-accredited institutions sometimes find it hard to transfer their credits elsewhere or to pursue a more advanced degree.
For all these reasons, nationally-accredited institutions sometimes consider pursuing regional accreditation. Unfortunately, in many instances regional accreditation is simply not a good fit—it’s like trying to fit a square peg into a round hole. Then the institution may either fail in its efforts to earn regional accreditation or, once accredited, run into problems maintaining its accreditation.
When might regional accreditation be a good fit?
1. Regional accreditation is only open to institutions that award at least one degree. If your institution offers only certificates and/or diplomas, it isn’t eligible.
2. Regional accreditors require all undergraduate degree programs to include certain components, including a general education or core curriculum studying the liberal arts and the development of certain skills and competencies.
3. Regional accreditors require a system of shared collegial governance. While none prescribes a particular governance system, all require that the respective roles, responsibilities, and authority of the board, leadership, administration, and faculty be clearly articulated. And an implicit expectation is that the institutional culture be one of communication and collaboration; regional accreditation simply becomes very difficult without these.
4. Because regional accreditors accredit a vast array of institutions, their standards are relatively imprecise, more a set of principles that are applied within the context of each institution’s mission. Regional accreditation is therefore a process that requires considerable time, thought, and effort by many members of the institutional community, not a task that can simply be delegated to one person.
5. Regional accreditors expect a commitment to ongoing improvement beyond the minimum required for accreditation. Regional accreditation is not appropriate for an institution content to teeter on the edge of the bare minimum required for compliance.
6. Regional accreditors expect a commitment to collegiality within and across institutions. Volunteer peers from other institutions will work with your institution, and the accreditor expects your institution to return the favor once accredited, providing volunteer peer evaluators, presenting at conferences, and so on.
7. Regional accreditors expect a board that is empowered and committed to act in the best interests of the institution and its students. Again, regional accreditors are not prescriptive about board make-up and duties, but they want to see a board that has the commitment, capacity and authority to act in the institution’s best interests. Suppose, for example, that the president/CEO/owner develops early-onset Alzheimer’s and begins to make irrational decisions that are not in the best interest of the institution. Can the board bring about a change in leadership? If the board heads a corporation, can it put institutional quality ahead of immediate shareholder return on investment? If the board oversees other entities that are troubled, such as a church, hospital, or another educational institution, can it put the best interests of the accredited institution first, or will it be tempted to rob Peter to pay Paul?
Some shameless self-promotion here: my book Five Dimensions of Quality: A Common Sense Guide to Accreditation and Accountability aims to explain what regional accreditors are looking for in plain terms. If your nationally-accredited institution is considering moving to regional accreditation, I think the book is a worthwhile investment.
|Posted on February 23, 2015 at 5:45 AM|
While some national and specialized accreditors want reports that are just the facts, ma'am, U.S. regional accreditors generally want not just information but analysis of that information. To help ensure that your accreditation report doesn't just describe what you're doing but provides evidence and analysis, look on it as a cousin of the scholarly research paper with which many faculty and administrators are familiar.
- Both begin with an introduction or overview for readers who may not be familiar with your college.
- Both have hypotheses; the accreditation report is investigating the achievement of key goals, with the hypotheses being that key targets for those goals are being achieved.
- Both briefly summarize how evidence was collected, with enough information to assure the reader of the quality and value of the evidence. (In an accreditation report, these summaries are often in appendices.)
- Both summarize the results of evidence collected, often through simple charts.
- Both analyze the evidence, discussing what it is telling you and its implications.
- Both present conclusions from the evidence.
- Both identify further actions based on the evidence. A key difference is that a research study recommends actions, while an accreditation report documents that actions have been taken.
|Posted on July 10, 2014 at 5:35 AM|
I love Alison Head and John Wihbey’s piece, “At Sea in a Deluge of Data” in this week’s Chronicle of Higher Education. They talk about a particular skill that’s growing in importance in the 21st century, what I call seeing the 30,000-foot picture: taking a lot of information, seeing the big ideas from all that information, and communicating the big points clearly and understandably.
Many colleges have a hard time helping their students develop this skill. Traditional library research papers may help, but they don’t give students the real-world integrative skills that employers are looking for: separating the information wheat from the chaff (the relevant from the irrelevant and the credible from what I like to call the incredible) and communicating big points in short, succinct ways that people can quickly and easily understand (see my earlier blog on infographics).
One reason that I think we have a hard time helping students develop this skill is because so many of us struggle with this ourselves. Seeing the 30,000-foot picture doesn’t come naturally to most people. David Keirsey has found that only about 5-10% of the population has the inherent temperament for big-picture analysis; people are far more likely to be detail-oriented. (You can take the Keirsey Temperament Sorter at www.keirsey.com and see where you fit.)
I see this a lot in work on assessment and accreditation. People are good at saying, “We used this rubric and here are the scores,” “Students took this survey and here are their responses,” “Here are grade distributions from key gateway courses.” But people often struggle to connect those pieces. What do your rubric, survey, and grade distribution results each say about students’ writing skills, for example? What are they telling you overall about students’ writing skills? Are the survey results and grades helping you understand why you’re getting your rubric results? Accreditors are less interested in a table of results than in what the results are saying to you. What overall conclusions can you draw about your students’ writing skills?
We need both detail and 30,000-foot people working on assessment and accreditation activities. Make sure you’ve got both on your team.
|Posted on May 22, 2014 at 3:25 PM|
I have a new book coming out this fall! Five Dimensions of Quality: A Common Sense Guide to Accreditation and Accountability, with a marvelous foreword by Stan Ikenberry, will be released by Jossey-Bass in October.
The book offers straightforward guidance on understanding and meeting calls for ensuring and advancing quality, including responding to calls from accreditors and from others asking for greater accountability, by answering questions such as:
• What is a quality education? What is a quality college? What is an effective college?
• How can colleges ensure that their students are receiving the best possible education?
• How can colleges demonstrate their quality and effectiveness to accreditors, government policymakers, students, and others?
The book takes all the things that U.S. accreditors, government policymakers, and other stakeholders are expecting of colleges and universities today and organizes them into a simple model of five dimensions of quality that will help you understand not only what your accreditor expects but why.
The five dimensions of quality are:
I. A culture of relevance
II. A culture of community
III. A culture of focus and aspiration
IV. A culture of evidence
V. A culture of betterment
As you might expect from me, the book offers plenty of practical tips. And there’s so much jargon today that I’ve populated the book with “Jargon Alerts!”—sidebars that explain jargon in everyday terms.
For more information, including the table of contents, and to pre-order a copy, visit http://www.wiley.com/WileyCDA/WileyTitle/productCd-111876157X.html.
|Posted on November 24, 2013 at 8:10 AM|
Here are six ways:
1. Ignore what your accreditor says. Ignore the accreditor's requests for specific information, and don't bother reading the accreditor's standards and guidelines.
2. Fill your report with platitudes and sweeping generalizations... and no supporting evidence. Include statements such as "Faculty are dedicated to teaching" and "Students thrive here both academically and in terms of personal development" without any documentation that these are indeed true.
3. Use rose-colored glasses for everything. Don't even hint that anything is less than perfect. Make your only recommendations for "improvement" to stay the course or maybe do a few minor tweaks around the edges.
4. Share everything. Throw into the appendices everything but the kitchen sink--anything that remotely looks like it's related to, say, assessment...including surveys of student satisfaction from eight years ago.
5. Or share just one or two "examples." Never mind that they aren't really a representative sample--they're your best examples or maybe the only things you're doing.
6. Make it as hard as possible for the reviewer to find evidence of compliance with the accreditor's standards. Never mind that the reviewer is probably a volunteer with a day job. Provide only basic documents with no summaries or analyses of what the documents are telling you. Attach every faculty member's resume, for example, and leave it to the reviewer to read them all and decide if the faculty are appropriately qualified.