The Current Institutional Context
The social, economic, and political forces framing contemporary higher education in the United States have largely discouraged undergraduate teaching improvement rather than supported it. We note three trends: institutional competition for resources; the rise of public accountability systems; and changing definitions of scholarship in the academy.
First, institutions of higher education compete with one another for goods that have little bearing on the quality of teaching and learning: namely, prestige, legitimacy, dollars, and students.2 The pattern is rampant and deeply ingrained in the American higher education system. Even the neutral Carnegie institutional classifications can detract from attention to teaching, as institutions strive to position themselves with increasingly prestigious (i.e., research-intensive) peers, or to "move up" in the rankings in their own classification. Efforts to respond to the institutional rankings criteria, or to appeal to a mass public, may draw time, attention, and resources away from faculty teaching.3 When faculty reward structures and professional development opportunities emphasize securing external grants, many faculty will follow suit; and since time is not infinitely expansible, more time spent on research activities will likely mean less time spent thinking about teaching.
Further, rankings and other public measures of institutional effectiveness are sites for institutional competition and "gaming the system." Years of research have demonstrated that college rankings (such as U.S. News & World Report) privilege the incoming characteristics of students over education practices or student outcomes.4 Higher education scholars and foundations hypothesized that changing the bases for the rankings and/or providing additional information about teaching and learning to the public might incentivize institutions to focus more on teaching and learning than on the incoming characteristics of students. But there is little evidence to date that changes in the data made available to the public (admittedly primitive and difficult to make sense of) have had this effect.
Higher education scholars and sociologists have noted the ways in which rankings and other accountability measures evoke changes in institutional behavior, often unintended, in response to being rated or evaluated.5 For example, institutions of higher education that strive to move up in the rankings have focused on recruiting more applicants each year while admitting the same number, on increasing research expenditures and spending on administration, and on hiring faculty who are experts and promoting them based on their research prowess.6 Conversely, such institutions also decrease behaviors that do not garner status or count toward the ratings to which they attend, such as admitting a broad spectrum of students, emphasizing teaching in the campus reward structure, and increasing instructional expenditures. Institutions mimic behaviors that are rewarded in the prestige hierarchy (i.e., admissions selectivity, research productivity), and dissociate from behaviors that are unrewarded (e.g., teaching quality).7 Since the current generation of college ratings does not address teaching quality or student learning outcomes, it is not surprising that the ratings do not drive institutions to attend to undergraduate teaching improvement.
Second, government has taken a more active role in developing public accountability systems for institutions receiving public funds, even in the form of student loans. But accountability focuses the attention of policy-makers and institutional leaders on outcomes as markers of institutional success, with much less attention to the educating processes that produce these outcomes.
The increase in accountability practices has largely been driven by calls for transparency, efficiency, and return on investment. One clear example of this practice, and its application to teaching and learning in higher education, was the Spellings Commission on the Future of Higher Education, so named for Margaret Spellings, Secretary of Education under President George W. Bush. The Commission's 2006 report called for institutions of higher education to document the "value-added" to students in the form of learning outcomes in a "consumer-friendly" way.8 Because the report came soon after the passage of the No Child Left Behind Act in 2002, which mandated virtually universal testing of students in grades three through eight in English and mathematics, institutions of higher education were very concerned about a broad federal mandate for parallel testing in postsecondary institutions.9
Although no such mandate emerged from the Spellings Commission's recommendations, the consequences of the emphasis on value and transparency rippled across higher education institutions and trickled down into faculty life. The regional accreditors charged by the U.S. Department of Education accelerated a shift in their philosophy and standards toward what became known as "outcomes-based accreditation," a model that obliged institutions to define desired student and organizational outcomes (such as student learning outcomes) and to demonstrate a continuous quality improvement mechanism in which measured outcomes would drive changes in institutional policies and practices.10 The accountability movement led to an increase in standardized institutional assessments of student engagement and learning, such as the National Survey of Student Engagement (NSSE), the Collegiate Learning Assessment (CLA), and the Association of American Colleges and Universities' (AAC&U) Valid Assessment of Learning in Undergraduate Education (VALUE) rubrics.
Generalized assessments such as these appear to shift institutional attention toward student engagement and learning. However, little is known about whether student learning does actually improve at the institution level in response to these accountability efforts. Institutions that adopt such assessments report increased faculty understanding of assessment,11 but there is little evidence that student learning increases or improves as a result. Likewise, there is little attention to college teaching in these assessments and improvement mechanisms.
Undergirding the student learning assessment movement and its counterpart, outcomes-based accountability, is the assumption that a focus on the student experience and student learning will reinforce and improve the educational practices at the institution. Assessment has been used in strategic planning, increasing student engagement, developing databases to inform institutional decision-making, enhancing faculty collaboration, and aligning curricula. Yet outcomes-based accountability has not sought to understand teaching in classrooms or the connections between teaching and the desired learning outcomes. Rather, there is a broad but generic notion that data on student learning outcomes might be examined via a feedback process that redirects faculty and administrators' attention to curriculum and teaching practice, but with little guidance on how specifically to improve teaching.
The accountability movement has, we believe, encouraged data-based decision-making in higher education, what is sometimes referred to as a "culture of evidence" in organizational decision-making.12 A culture of evidence is a culture "in which colleagues from varied disciplinary contexts and roles (including student affairs) share information and judgments about what is and isn't working and commit as a community to ongoing improvement."13 What is particularly notable about this definition is that the focus is on the process of assessing, without much regard for the content of what is being assessed (e.g., teaching, learning, etc.). In recent years, the culture of evidence has been associated with student learning assessment, but the attention is on the institutional commitment to the assessment process (collecting and using data as evidence to guide practices) rather than on undergraduate teaching and teaching improvement.
Third, over the past three decades, there has been a systematic effort to redefine scholarship in the academy, pushing it increasingly to encompass teaching. But this work has not reached deeply enough into teaching practice to make a lasting difference. Ernest Boyer's seminal work for the Carnegie Foundation for the Advancement of Teaching, Scholarship Reconsidered: Priorities of the Professoriate, sought to expand what counts as scholarship. Recalling Aristotle, Boyer remarked, "teaching is the highest form of understanding," as he attempted to elevate teaching from the lowest common denominator among faculty to a fundamental and revered form of scholarship. Boyer's report caused a significant ripple in the field, with faculty and administrators seeking to integrate these ideas into academic discussions in institutions across the nation.14 Many institutions revised their tenure, promotion, and merit reward structures to include forms of scholarship beyond basic research, such as the scholarship of teaching (i.e., the study of one's own teaching practice, and that of others).15 Five years after Boyer's report, almost half of faculty responding to a national survey stated that there was a greater emphasis on teaching in their institutions and roles than before the report.16
In spite of Boyer's symbolic elevation of the importance of college teaching, there is little evidence of a fundamental restructuring of faculty reward systems in the wake of the movement he initiated. Although teaching frequently is institutionalized as a regular (and measurable) part of faculty workload, the campus values and assumptions supporting college teaching are often tacit.17 Virtually all full-time faculty can describe their work in terms of their teaching "load" (a term that by its nature connotes a burden), but not in terms of teaching's qualities, or its value to the institution and its students. In many institutions, teaching remains a "second among equals," overshadowed by research productivity, though typically of greater importance than service.18
There is one additional feature of contemporary higher education worthy of note: technological change and its potential to transform college teaching and learning and the professional development of college teachers. Increasingly, technology can mediate the relationships among teachers, learners, and subject matter, in the form of online classes and "flipped" classrooms, to name but two increasingly salient innovations. Some observers are convinced that technological change will fundamentally disrupt and alter existing institutional arrangements;19 others, drawing on the history of technological change in K-12 schools, are more skeptical about that possibility.20 We acknowledge the potential for technological change to reconstruct the college classroom, but it is not a central focus of our analysis.
This overview summarizes evidence that the external environments of colleges and universities shape their internal cultures, norms, and practices, which in turn influence faculty work priorities, experiences, and learning.21 But attention to high-quality teaching and learning is largely absent here. Changes to institutional decision-making and reward structures can, in some cases, turn faculty attention to teaching, motivating them to teach more, and altering their priorities among research, teaching, and service. What these processes cannot do, however, is alter the content and quality of undergraduate teaching. Only the faculty who are charged with teaching can do this.
We note as well that even for institutions with prominent undergraduate teaching missions, what counts as high-quality teaching is not at all clear. But we believe that these two aims are attainable: meaningful improvement in undergraduate teaching, and making such improvement an organizational goal. We address these concerns in the next two sections of the paper, first by responding to the key question of "What is good teaching?" and then by examining six cases of attempts to improve classroom teaching.
ENDNOTES
2. Mitchell Stevens, Creating a Class: College Admissions and the Education of Elites (Cambridge, MA: Harvard University Press, 2009).
3. KerryAnn O'Meara, "Striving for What? Exploring the Pursuit of Prestige," in Higher Education: Handbook of Theory and Research, vol. 22, ed. John C. Smart (New York: Springer International Publishing, 2007), 241–306; Christopher Morphew and Bruce Baker, "The Cost of Prestige: Do New Research Universities Incur Higher Administrative Costs?" The Review of Higher Education 27 (3) (2004): 365–384.
4. Morphew and Baker, "The Cost of Prestige: Do New Research Universities Incur Higher Administrative Costs?"; Gary Pike, "Measuring Quality: A Comparison of US News Rankings and NSSE Benchmarks," Research in Higher Education 45 (2) (2004): 193–208; O'Meara, "Striving for What? Exploring the Pursuit of Prestige."
5. Ibid.
6. Ronald G. Ehrenberg, "Reaching for the Brass Ring: The U.S. News and World Report Rankings and Competition," The Review of Higher Education 26 (2) (2003): 145–162; Susan K. Gardner, "Keeping Up with the Joneses: Socialization and Culture in Doctoral Education at One Striving Institution," The Journal of Higher Education 81 (6) (2010); Tatiana Melguizo and Myra Strober, "Faculty Salaries and the Maximization of Prestige," Research in Higher Education 48 (6) (2007): 633–668; Marc Meredith, "Why Do Universities Compete in the Ratings Game? An Empirical Analysis of the Effects of the U.S. News and World Report College Rankings," Research in Higher Education 45 (5) (2004): 443–461.
7. Jerome Barkow et al., "Prestige and Culture: A Biosocial Interpretation," Current Anthropology 16 (4) (1975): 553–572; Paul DiMaggio and Walter Powell, "The Iron Cage Revisited: Collective Rationality and Institutional Isomorphism in Organizational Fields," American Sociological Review 48 (2) (1983): 147–160.
8. "A Test of Leadership: Charting the Future of U.S. Higher Education" (Washington, D.C.: U.S. Department of Education, September 22, 2006).
9. Corbin M. Campbell, "Serving a Different Master: Assessing College Educational Quality for the Public," in Higher Education: Handbook of Theory and Research, vol. 30, ed. Michael Paulsen (New York: Springer International Publishing, 2015), 525–579; Peter Ewell, "Assessment and Accountability in America Today: Background and Context," in New Directions for Institutional Research (San Francisco, CA: Jossey-Bass, 2008), 7–17.
10. Balancing Competing Goods: Accreditation and Information to the Public About Quality (Washington, D.C.: Council for Higher Education Accreditation, 2004).
11. Esther Hong Delaney, "The Professoriate in an Age of Assessment and Accountability: Understanding Faculty Response to Student Learning Outcomes Assessment and the Collegiate Learning Assessment," Ph.D. diss., Columbia University, 2015.
12. Catherine Millet et al., A Culture of Evidence: An Evidence-Centered Approach to Accountability for Student Learning Outcomes (Princeton, NJ: Educational Testing Service, 2008).
13. Pat Hutchings, Mary Taylor Huber, and Anthony Ciccone, The Scholarship of Teaching and Learning Reconsidered (San Francisco, CA: Jossey-Bass, 2011).
14. Charles E. Glassick, Mary Taylor Huber, and Gene I. Maeroff, Scholarship Assessed: Evaluation of the Professoriate (San Francisco, CA: Jossey-Bass, 1997); Adrianna Kezar, "Higher Education Research at the Millennium: Still Trees Without Fruit?" The Review of Higher Education 23 (4) (2000): 443–468; KerryAnn O'Meara, "Encouraging Multiple Forms of Scholarship in Faculty Reward Systems: Does It Make a Difference?" Research in Higher Education 46 (5) (2005): 479–510, doi:10.1007/s11162-005-3362-6.
15. Glassick, Huber, and Maeroff, Scholarship Assessed: Evaluation of the Professoriate; O'Meara, "Encouraging Multiple Forms of Scholarship in Faculty Reward Systems."
16. Mary Taylor Huber, Balancing Acts: The Scholarship of Teaching and Learning in Academic Careers (Washington, D.C.: American Association for Higher Education, 2004).
17. John Braxton, William Luckey, and Patricia Helland, Institutionalizing a Broader View of Scholarship Through Boyer鈥檚 Four Domains (San Francisco, CA: Jossey-Bass, 2002).
18. We acknowledge that these hierarchies differ by institutional type, as a research-intensive university will have different values and reward structures than, say, an urban community college.
19. Kevin Carey, The End of College: Creating the Future of Learning and the University of Everywhere (New York: Riverhead Books, 2015); Ryan Craig, College Disrupted: The Great Unbundling of Higher Education (New York: St. Martin's Press, 2015); Jeffrey J. Selingo, College (Un)Bound: The Future of Higher Education and What It Means for Students (Boston: New Harvest, 2013); Henry C. Lucas, Technology and the Disruption of Higher Education (Hackensack, NJ: World Scientific Publishing Company, 2016).
20. Larry Cuban, Oversold and Underused: Computers in the Classroom (Cambridge, MA: Harvard University Press, 2003); Karen J. Head, Disrupt This! MOOCs and the Promise of Technology (Lebanon, NH: University Press of New England, 2017); Susan M. Dynarski, "For Better Learning in College Lectures, Lay Down the Laptop and Pick Up a Pen" (Washington, D.C.: The Brookings Institution, August 10, 2017).
21. Adrianna Kezar, Understanding and Facilitating Organizational Change in the 21st Century (San Francisco, CA: Jossey-Bass, 2011); Judith Gappa, Ann E. Austin, and Andrea G. Trice, Rethinking Faculty Work (San Francisco, CA: Jossey-Bass, 2007); KerryAnn O'Meara and Corbin M. Campbell, "Faculty Sense of Agency in Decisions About Work and Family," The Review of Higher Education 34 (3) (2011): 447–476, doi:10.1353/rhe.2011.0000.