The Balanced Scorecard - Beyond Reports and Rankings
More commonly used in the commercial sector, this approach to strategic assessment can be adapted to higher education.
Since the 1990s, accountability has become a challenging issue for higher education. Increasingly, institutions of higher learning have been required to provide performance indicators, empirical evidence of their value, to states, alumni, prospective students, and other external stakeholders. State commissions of higher education and boards of regents have, in numerous states, developed “report cards” that grade colleges and universities according to their level of performance in a variety of categories. Surveys in the popular press and on the Internet rank institutions according to their retention and graduation rates, resources, academic reputation, and more.
Though substantial energy and effort have been expended to collect, organize, and present performance information, few would argue that the emphasis on the various report cards and surveys has dramatically changed the operational performance of most major universities. Commenting on the inadequacy of performance indicators for higher education, H.R. Kells (1990) warns of the following:
[This] notion to reduce complexity is acceptable if such reduction does not remove or reduce our ability to judge true worth. . . . The lists of performance indicators presented in study after study make little or no reference to the intentions (goals) of the organization to be described and virtually no reference to programme quality with respect to the specific results of instruction and research. (p. 261–62)
With important stakes such as increasing financial resources, encouraging high-quality student applicants, and attracting faculty dependent upon how they “measure up,” universities are rightly concerned with how best to present themselves. Institutions attempt to improve accountability while dealing with the more difficult and complex issue of how to improve university effectiveness. The assumption of many externally derived accountability programs is that emphasis on the former will produce the latter. However, until performance indicators are linked to the drivers of institutional effectiveness in a meaningful way, the desired improvements in service, productivity, and impact are unlikely to occur. The real test for institutions is to create meaningful systems for strategic organizational assessment and then use that information in internal policy and resource allocation decisions.
Performance indicators can be powerful tools, at both the university and the college/department levels, for internal evaluation and strategic assessment. Though similarities exist between the indicators used for external reporting and internal assessment - indeed, many of the same data can be used for both - the development of internal indicators requires more attention to the contextual characteristics and operational goals of the university. Under these circumstances, performance indicators can provide substantive information for strategic decision making.
External Accountability Versus Internal Assessment
The differences between the use of performance indicators for external accountability and internal assessment are clear (see table 1). Performance indicators developed for external audiences are generally aimed at informing three types of stakeholders: consumers (i.e., students and parents), governing bodies (i.e., legislators and accrediting agencies), and potential revenue providers (i.e., alumni, donors, and funding agencies). The external audiences are often limited in their area of interest and have specific ideas of what might be acceptable institutional outcomes. These external audiences tend to adopt incomplete and one-dimensional views of performance. A quick review of higher education report cards used to assess public colleges and universities in various states shows a principal focus on undergraduate education. This focus is consistent with the interest of many consumer groups and governing bodies associated with higher education. To present complex information in an easy-to-read and attractive format, external indicators are often presented in the form of rankings or report cards. Furthermore, it is common for external bodies to use a single set of indicators to measure many institutions across a wide range of missions.
For colleges and universities affected by external assessment, the management task is to learn the art of image management (Wu and Petroshius 1987). Since many external stakeholders have resources (financial, student, and accreditation) that are of interest to the institution, understanding the formulaic relationships between the performance numbers and how they influence perceptions of success or failure is key. Thus, the emphasis of the university is primarily on external perception of success and manipulation of image and only secondarily on improved institutional effectiveness. This conclusion is based not on cynicism, but on the reality that the former is more easily and more quickly influenced and changed than the latter.
To be useful internally, performance indicators must be tied to the values and goals of the particular university and should emanate from the institution’s performance objectives. These objectives translate the broad goals of the institution into specific research problems that can be studied and around which strategies for improvement can be developed. A different type of institutional stakeholder—university decision makers (i.e., faculty, academic administrators, and nonacademic administrators)—uses performance indicators developed for internal audiences. The internal audience represents a very broad spectrum of perspectives and interests with a wide range of opinions regarding what might be acceptable institutional outcomes. These internal audiences tend to adopt multidimensional views of performance. Often, issues are studied in great depth with information presented in the form of long, complex faculty reports. At times, the focus on the higher goals and values precludes specific action due to a lack of a supporting political coalition and/or criteria by which to evaluate the plan. Though institutional effectiveness and enhanced academic reputation are common goals, there is often lack of consensus about how institutional processes may actually have an impact on those goals.
For college and university decision makers engaged in internal assessment, the management task is to learn the art and science of institutional strategic assessment. Since consensus and buy-in are critical to many university initiatives, providing an acceptable mechanism or process for thinking about difficult strategic questions is key to any real institutional improvement. And because the training of many faculty and academic administrators creates respect for theory and data analysis, presentation of institutional information in a conceptual model with supporting data can often facilitate both debate and decision making. Using data to support hypotheses about institutional strengths and weaknesses can affect decision processes and increase the speed of both decision making and implementation of program changes. Making the appropriate linkage between the values and goals of the internal audience, the strategic tasks required, and the data collection and analysis necessary is important for useful internal performance assessment.
The Balanced Scorecard
In 1992, Robert S. Kaplan and David P. Norton introduced the balanced scorecard, a set of measures that allow for a holistic, integrated view of business performance. The scorecard was originally created to supplement “traditional financial measures with criteria that measured performance from three additional perspectives—those of customers, internal business processes, and learning and growth” (Kaplan and Norton 1996, p. 75). By 1996, companies using the scorecard had further developed it into a strategic management system linking long-term strategy to short-term targets. The balanced scorecard method emerged because many business organizations realized that focus on a one-dimensional measure of performance (such as return on investment or increased profit) was inadequate. Too often, bad strategic decisions were made in an effort to increase the bottom line at the expense of other organizational goals. The theory of the balanced scorecard suggested that rather than being the focus, financial performance is the natural outcome of balancing other important goals. These other organizational goals interact to support excellent overall organizational performance. If any individual goal is out of balance with the others, the performance of the organization as a whole will suffer. The balanced scorecard system also emphasizes articulation of strategic targets in support of goals. In addition, measurement systems are developed to provide the data necessary to know when targets are being achieved or when performance is out of balance or being negatively affected.
The Kaplan and Norton balanced scorecard looks at a company from four perspectives:
Financial: How do we look to shareholders?
Internal business processes: What must we excel at?
Innovation and learning: Can we continue to improve and create value?
Customer: How do customers see us?
By viewing the company from all four perspectives, the balanced scorecard provides a more comprehensive understanding of current performance. While these perspectives are not completely inappropriate for use by colleges and universities, it is possible to adapt the balanced scorecard theory using a paradigm more traditional to higher education.
Creating a Balanced Scorecard
If decision making is to be strategic, the strategy must be directed toward some overarching objective. Most colleges and universities have a mission or vision statement in place that sets out in very broad terms the goals of the institution. It is within the context of these goals that an institution must decide what it will benchmark and what performance it will measure, a process that Kaplan and Norton (1996) describe as “translating the vision.” “For people to act on the words in vision and strategy statements, those statements must be expressed as an integrated set of objectives and measures, agreed upon by all senior executives, that describe the long-term drivers of success” (p. 76).
The Ohio State University—a large, Midwestern land-grant university—has the vision of becoming “internationally recognized in research, teaching and service.” This has been translated into five specific organizational areas deemed necessary for achievement of the vision:
Academic excellence: What is the university’s contribution to the creation of knowledge?
Student learning experience: How effectively does the university transfer knowledge to its students?
Diversity: How well does the university broaden and strengthen its community?
Outreach and engagement: How effectively does the university transfer knowledge to local, national, and international communities?
Resource management: How well does the university develop and manage resources?
Based on this broadly accepted articulation of the vision, an academic scorecard can be developed by identifying long-term strategic objectives associated with each of these organizational areas. Each objective will, in turn, have specific performance measures that indicate progress toward attaining improvement in the designated performance area. Table 2 provides an example of the scorecard and associated objectives.
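To make the structure concrete, the area-objective-measure hierarchy can be sketched as a simple data structure. The five area names come from the Ohio State example above; the objectives and measures shown are hypothetical placeholders for illustration, not the actual contents of table 2.

```python
# A minimal sketch of an academic scorecard: each organizational area
# carries strategic objectives, and each objective carries the
# performance measures that track progress toward it.
# All objectives and measures below are illustrative placeholders.
scorecard = {
    "Academic excellence": {
        "Strengthen research productivity": [
            "Refereed publications per faculty FTE",
            "External research expenditures",
        ],
    },
    "Student learning experience": {
        "Improve undergraduate persistence": [
            "First-year retention rate",
            "Six-year graduation rate",
        ],
    },
    "Resource management": {
        "Diversify revenue sources": [
            "Share of revenue from non-state sources",
        ],
    },
}

def all_measures(card):
    """Flatten the scorecard into (area, objective, measure) triples."""
    return [
        (area, objective, measure)
        for area, objectives in card.items()
        for objective, measures in objectives.items()
        for measure in measures
    ]

for area, objective, measure in all_measures(scorecard):
    print(f"{area} | {objective} | {measure}")
```

A structure like this makes the "communicating and linking" step discussed below easier to operationalize: a department's scorecard can reuse the same area keys, so unit-level measures roll up to the institutional ones.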
Linking the Theoretical Model and Data Needs
Key to the use of a balanced scorecard methodology are the steps that link the larger goals of the university to specific problems to be solved, decisions to be made, and resource allocation choices that present themselves. While the balanced scorecard is not a recipe for correct decisions, it provides an integrated perspective on goals, targets, and measures of progress. It ties together information from a variety of perspectives so that trade-offs can be weighed.
After translating the vision, communicating and linking is the second step of the balanced scorecard process. Academic departments and academic support units must fully understand the macro-level goals so that objectives and measures for their individual units are linked to those of the entire institution. Kaplan and Norton’s third step, business planning, is more properly termed “academic planning” in the higher education setting. Academic planning calls for administrators to focus resources and set priorities. Administrators must link unit goals to macro goals in all scorecard areas, develop strategies to achieve those goals, and allocate resources to those strategies. In addition, they must develop credible measures of progress toward those goals. Finally, the feedback and learning step requires universities to evaluate their performance based on updated indicators and to revise strategies as appropriate. Though the timeline for the feedback and learning loop may be months or even years long, the process itself is vitally important. It is no less true in academia than in business that “just getting managers to think systematically about the assumptions underlying their strategy is an improvement” (Kaplan and Norton 1996, p. 85).
Linking Strategic Analysis to the Balanced Scorecard Model
An example may provide insight into how the balanced scorecard model and more traditional data collection and analysis can be linked. A strategic question that has been raised on many university campuses is “What types of students should the university attract?” An analysis of the environment, through scanning and benchmarking efforts, suggests that the nontraditional student population may be an appropriate target. How, then, might the analysis of this question be informed by the use of the balanced scorecard?
Under the diversity component, there may be an analysis of the demographic components associated with current and potential nontraditional students. Will attracting more of this student type add to or limit progress toward diversity objectives? Under the student experience component, there may be an analysis of retention and graduation rates of this subpopulation of students. Will emphasis on this group affect goals in those areas? Are the support needs of nontraditional students the same as those of the more traditional population? What must be done to ensure good results in student satisfaction?
Under the outreach component there may be an analysis of where nontraditional students might emerge. Are there businesses or industries that might require that an increased proportion of their workforce be college educated?
This analysis may feed into the resource management component if results suggest that this segment is not currently being served and can be added to the university enrollment without additional capacity expansion. There may also be analysis of whether the increased revenue from expansion into this potential population of students will cover the costs of additional services identified in the student learning experience analysis.
Finally, how will the change of revenue, the outreach possibilities, the student support demands, and the diversity of the new population affect the academic excellence of the university? Will increasing emphasis on this type of student particularly affect certain colleges, programs, or delivery systems? How will the institution adjust the college retention or student satisfaction objectives if it chooses to expand its services to this type of student? Will academic resources be efficiently utilized or strained by adding an additional student population? If additional faculty resources are required, what area of teaching or research will benefit?
The comprehensive nature of the analysis will allow for a wider examination of the trade-offs associated with developing service to this particular student population. One university raising this question may find that the benefits of increased performance in revenue management and outreach outweigh possible negative performance consequences associated with retention rates or increased costs of student services. Another university may evaluate the trade-offs with a different eye. However, what is gained is a holistic view of possible performance gains and losses associated with the decision as well as which institutional and administrative unit goals may be affected by the implementation of the strategy. In addition, by examining the question from multiple perspectives, appropriate performance and evaluation mechanisms associated with this strategy can be integrated into the balanced scorecard.
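One way an institution might make such trade-offs explicit is a simple weighted tally of a strategy's expected effect on each scorecard area. The five areas come from the Ohio State example; the weights and impact scores below are invented purely for illustration, and a real assessment would rest on the measures and targets the institution has actually agreed upon.

```python
# Hypothetical sketch: score a proposed strategy (recruiting more
# nontraditional students) by its expected impact on each scorecard
# area. Impacts run from -2 (strongly negative) to +2 (strongly
# positive); weights reflect institutional priorities and sum to 1.
# Every number here is illustrative, not drawn from any real analysis.
weights = {
    "Academic excellence": 0.25,
    "Student learning experience": 0.25,
    "Diversity": 0.20,
    "Outreach and engagement": 0.15,
    "Resource management": 0.15,
}

expected_impact = {
    "Academic excellence": 0,           # little direct effect assumed
    "Student learning experience": -1,  # added support costs, retention risk
    "Diversity": 1,
    "Outreach and engagement": 2,
    "Resource management": 1,
}

def weighted_score(weights, impacts):
    """Weighted sum of impacts; the sign shows the net expected direction."""
    return sum(weights[area] * impacts[area] for area in weights)

print(f"Net weighted impact: {weighted_score(weights, expected_impact):+.2f}")
```

A university that prizes outreach would assign it a larger weight and could reach the opposite conclusion from one that prizes retention, which is precisely the "different eye" point made above: the model does not dictate the answer, it surfaces the trade-offs.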
Though there is no guarantee that any decision will be “correct,” the balanced scorecard mechanism can provide a common frame of reference to all parties to the decision and clarify the choices and performance challenges involved. The presence of an accepted model, with data framed in the context of performance on organizational goals, can facilitate conversation, decision making, and ease of implementation for many strategic decisions. The balanced scorecard approach thus provides a framework for real conversation about the values and strategic objectives of the institution and the contributions of individual units to those objectives. Rewards can be linked to accomplishment of performance objectives, and resources can be more easily allocated to the priorities of the institution.
Translating the balanced scorecard to the complex world of academia is a challenge. Skepticism exists on campuses regarding the notion that a university’s performance can be measured quantitatively. Published rankings systems that change methodology and produce new orderings or that can be “gamed” encourage distrust in new institutional evaluation schemes. Using the balanced scorecard process, with its emphasis on integrative analysis and trade-offs, can move the discussion of performance management from an externally driven concern for image and rankings to an internally driven concern for improved institutional effectiveness.
Kaplan, R., and D. Norton. 1996. Using the Balanced Scorecard as a Strategic Management System. Harvard Business Review (January–February): 75–85.
Kells, H.R. 1990. The Inadequacy of Performance Indicators for Higher Education. Higher Education Management 2(3): 258–70.
Wu, B., and S. Petroshius. 1987. The Halo Effect in Store Image Management. Academy of Marketing Science Journal 15(3): 25–45.
About the Authors
Alice C. Stewart is director of strategic analysis and planning and assistant professor of strategic management at The Ohio State University. Her work has been presented at the National Academy of Management and the Strategic Management Society and has been published in such outlets as the Journal of Business Venturing and Advances in International Comparative Management.
Julie Carpenter-Hubin is strategic initiatives project manager at The Ohio State University, where she received her bachelor’s. She is currently pursuing a master’s in public administration. Her research interests include strategic decision making and university organization.