
Source: Voices in Urban Education

When Districts Use Evidence to Improve Instruction: What Do We Know and Where Do We Go from Here?

By Meredith I. Honig and Cynthia E. Coburn, Spring 2005

How do central office administrators use evidence in their practice? A research review revealed that district leaders use evidence far more often than is commonly assumed, although there is little concrete evidence so far that this has led to improved school outcomes. Districts need ways to translate evidence into usable forms.

In recent years, school district central offices have faced unprecedented demands to use evidence in their decision making. For example, the federal No Child Left Behind Act requires that all programs funded under this initiative stem from "scientifically based research" and student-performance data – a requirement that potentially affects decisions throughout central offices, from the selection of professional development approaches to debates about the inclusion of the arts in the curriculum. The U.S. Department of Education's (2002b) Strategic Plan for 2002–2007 explicitly calls for the "transform[ation of] education into an evidence-based field" by strengthening the quality and use of educational research.

Much attention has been paid to central offices as supporters of schools' use of evidence, but what do we know about the use of evidence in district central office administrators' own decision making? To address this question, we conducted a comprehensive review of research on district central office decision making and evidence use. We cast a broad net for empirical studies on how central office administrators use various forms of evidence, including student data, research-based practice, evaluation data, and feedback from teachers, school principals, and community members. In all, we reviewed thirty-nine empirical studies, literature reviews, and descriptions of evidence use published since 1975, as well as various other studies on district central office decision making.

Ironically, we found that pressures on district central offices to use evidence are not themselves based on substantial evidence that evidence-based decision making in central offices leads to improvements in academic, managerial, or other kinds of outcomes. These studies are few and far between and have many shortcomings. Often, studies refer to "district" use of evidence without distinguishing among different individuals throughout central offices who may or may not use evidence for a range of purposes, from choosing reading curricula to deploying school buses. Studies also tend not to distinguish among types of information in use (or not in use), even though research in the business sector and elsewhere reveals that using student-performance data, for example, should pose very different challenges and opportunities than using an academic research article or a research-based school reform model. Despite these limitations of the research, we were able to glean several key lessons about what evidence use by district central offices involves and what it takes.

Lessons from Research about District Use of Evidence

The body of research we reviewed revealed that central office evidence use is a far more frequent and complex process than typically represented in federal and state mandates. Several lessons emerged from our study for central offices to keep in mind as they use evidence for decision making. We conclude that evidence use can be strengthened and expanded, in part, by increased collaboration within district central offices and stronger roles for external support providers in helping central office administrators translate evidence into usable forms.

District central offices are no strangers to evidence use.

Despite some policy claims to the contrary, superintendents and other district central office administrators have long used a variety of locally and externally generated evidence in all sorts of decision-making processes. Local evidence – community assessments, a district's evaluations of its own programs, and student and family surveys – grounds many central office decisions related to resource allocation and policy development (Honig 2001, 2003; Marsh 2002; Massell 2001). District central offices use standardized-test scores and school-improvement plans as a regular part of strategic planning processes (Massell 2001; Massell & Goertz 2002).

District central offices draw on external research studies to inform or justify decisions to choose or abandon instructional programs and school reform models (Corcoran, Fuhrman & Belcher 2001; Kean 1983; Newman, Brown & Rivers 1983; Robinson 1988). They also seek out research and research-based "best practices" through professional conferences (Datnow, Hubbard & Mehan 2002; Osheka & Campagne 1989) and from visiting researchers (Boeckx 1994), local universities (Nafziger, Griffith & Goren 1985), community-based organizations (Honig 2004a), research and development organizations (Corcoran & Rouk 1985), and voluntary associations such as the National School Boards Association and the American Association of School Administrators, among others.

Evidence use may provide light in the dark, but it's more like striking a match than turning on the floodlights.

The word evidence, derived from the same root as evident – meaning, literally, conspicuous, apparent, or obvious – may create the impression that the information being used as evidence somehow speaks for itself. But central office administrators report that the process of using evidence is usually riddled with ambiguity: which information to collect, what it means, and how to use it.

The ambiguity stems, in part, from the form evidence typically takes. Research papers that emphasize the abstract rather than the concrete, that use technical language, and that are very long are more likely to be perceived as ambiguous by district administrators (Kean 1981, 1983; Roberts & Smith 1982; West & Rhoton 1994). Ambiguity also stems from the sheer volume of evidence available at any given time (Massell & Goertz 2002). Ultimately, though, implications for action are often unavoidably ambiguous, especially in complex systems like district central offices (Honig 2001; O'Day 2002). Central office administrators report that social science research and evaluation findings, in particular, often fail to provide direct guides for action (Corcoran, Fuhrman & Belcher 2001; Fullan 1980).

Ambiguity can curtail the consensus sometimes essential to central office decision making. For example, Kennedy (1982a) has revealed how researchers' attempts to balance positive and negative findings can allow central office administrators with different views to "freely infer what they wanted" about the degree to which the findings supported their position and thereby frustrate consensus (p. 82). But the ambiguity of evidence is not inherently problematic. For example, ambiguity of information about how best to meet the needs of students can open opportunities for deliberation and for tailoring reform strategies to individual student needs (e.g., Honig 2001).

The connection between evidence used by central office and instructional improvement isn't always direct or predictable.

Sometimes central office administrators use evidence for purposes that seem directly related to instructional improvement, such as allocating resources based on data about student needs. Perhaps more often, however, district central office administrators use evidence in a range of other, largely political decisions. These political decisions – not the evidence, per se – may help or hinder instructional improvement. For example, district central office administrators use evidence to bolster their arguments at school board meetings and various community events to increase board and community support for particular education reform strategies (Corcoran, Fuhrman & Belcher 2001; Marsh 2002). One superintendent recounted how he used research to "stabilize the environment" among his own central office staff to advance an improvement strategy: "When confronted with research, our teachers and administrators began to 'buy in' to the program" he was trying to implement (Boeckx 1994, 24).

Occasionally, research grounds central office presentations of particular reform approaches at school board meetings as a way to influence school board opinions, even if that research was not used in the development, selection, or implementation of those programs (Robinson 1988). In these ways, evidence is used to influence public opinion or group consensus, which, in turn, impacts decision making and, other conditions permitting, improvement (Englert, Kean & Scribner 1977; Kennedy 1982a, 1982b).

If it's "useful," they will use it.

Despite policies promoting evidence use and the sheer quantity of information available from a variety of sources, district central office administrators frequently report limited access to evidence they consider relevant to their most pressing concerns (Corcoran et al. 2001). Available evidence tends to come in non-user-friendly forms. Central office administrators seem to want concise research syntheses (Corcoran, Fuhrman & Belcher 2001) based on up-to-date studies (Corcoran, Fuhrman & Belcher 2001; Roberts & Smith 1982; West & Rhoton 1994). Student-outcome data may be difficult to obtain from state educational agencies in usable form (Massell 2001).

Some central offices lack the technological infrastructure to use their own data to answer pressing questions (Reichardt 2000). Time pressures also curtail the collection of relevant evidence; in particular, district personnel cannot always wait for the results from evaluation studies or pilot programs before they take action, either because they need to react to an immediate need or because they feel pressured to appear decisive (Bickel & Cooley 1985; Corcoran, Fuhrman & Belcher 2001; Englert, Kean & Scribner 1977; Kean 1981, 1983).

How evidence is used depends on what central administrators already know, can do, and need to do.

Central office administrators are hardly passive recipients of evidence. They actively search for it and grapple with how to incorporate it into their decision making. Which evidence they find and bring back to the central office and how they choose to use it depends largely on their prior knowledge. For example, Mary Kennedy (1982a) has shown that when central office administrators select and use evidence, they filter or screen it through their beliefs, assumptions, interests, and experiences. According to Kennedy, "When people say they have used evidence, what they really mean is that they have rendered it meaningful by connecting it to a prevailing and usually very powerful point of view. Having done so, they can claim the evidence is relevant, timely, and compelling" (p. 101).

The process of rendering evidence meaningful typically involves the translation of evidence into forms that central office administrators consider clear and "actionable." For example, studies show that, especially when faced with large volumes of evidence, central office administrators tend to gravitate towards evidence that is congruent with preexisting beliefs and pay less attention to that which challenges their experiences or assumptions (Coburn & Talbert, forthcoming; Kennedy 1982a, 1982b; Spillane, Reiser & Reimer 2002). They also tend to break large, complex studies into smaller pieces that they consider more manageable (Hannaway 1989). These patterns appear particularly prevalent when consequences for poor performance are high and when available evidence is complex or ambiguous (Honig 2001).

Evidence is not just for the research and evaluation unit any more; broad participation bolsters evidence use.

By participating in evidence gathering and translation, central office staff become more familiar with the evidence, develop more confidence that they understand it and know how to use it, and strengthen their belief that they should use it. Broad participation also is essential because different central office administrators tend to be skilled at different aspects of using evidence. For example, certain administrators are skilled at conducting research or otherwise collecting evidence. These are not always the same people who are able interpreters of the large volumes of data that central offices are now routinely required to manage. Nor are these administrator-researchers always the same people who have the authority to decide how evidence should be used to guide central office operations (Honig 2003; Reichardt 2000). Central offices that are well organized to use research seem to distribute various evidence-related functions across staff and to have high degrees of coordination.

Collaboration helps central office administrators create common beliefs and understandings essential to making sense of evidence. Through collaboration, central office staff may increase their social capital – in this case, trusting relationships between those who have evidence and those who will use it – which can make evidence use more effective. For example, central office administrators may have evidence of school difficulties that could either help direct new resources for school improvement or increase threats of district sanctions. Without trust that the information will be used for support rather than punishment, such evidence may never see the light of day (Honig 2003; Marsh 2002).

Send time and models, not just money.

Central offices seem to have access to new funding for data systems and other computer technologies, but they often lack other resources, namely structured time and models of professional practice, that support their use of evidence. For example, central office administrators often face multiple goals, demands, and priorities that divide their attention (Hannaway 1989; Holley 1980; Peterson 1998) and thus may report that they have little time to "consult the evidence" (Holley 1980).

To use research and data to drive decision making, district administrators must play new and sometimes unfamiliar roles. Demands to use data as part of accountability requirements, for example, call for a shift in orientation in collecting data away from compliance reporting to the government and toward making data accessible to inform ongoing decision making. However, central office administrators don't always have access to models of professional practice that reflect these roles (Burch & Thiem 2004; Reichardt 2000).

External supports seem essential.

Various organizations outside central office jurisdictions – including professional associations, reform support providers, intermediary organizations, and research and development agencies, among others – play essential roles in supporting district central office evidence use. These organizations often form "natural channels" through which information flows, because they have credibility with school and central office personnel and the ability to integrate research knowledge with an awareness of local needs and conditions (Corcoran & Rouk 1985; Datnow, Hubbard & Mehan 2002; Honig 2004a, 2004b; Kean, 1981, 1983; Osheka & Campagne 1989; Roberts & Smith 1982). James Spillane (1998) showed that information garnered through ties to professional associations shaped the assumptions and beliefs that district personnel use to interpret information and that such ties were more salient than state policy in shaping districts' instructional agendas.

By contrast, federal and state agencies have mainly mandated the use of evidence, sanctioned the use of specific evidence, invested in particular forms of research, and tied penalties to the failure of district central offices to use evidence. While focusing increased attention on evidence, these steps have not necessarily increased capacity or led to substantive use in district central offices. As Diane Massell (2001) found, state policy provides the conditions to encourage data use, but whether districts embrace the approach depends on district conditions, such as whether district staff view outcomes and performance goals as important, relevant, and attainable.

Selected Implications for Practice

The lessons suggested by the research have important implications for the way central offices use evidence, leading to a series of recommendations.

Be realistic about what evidence offers and how it functions in decision making.

District central offices and the organizations that support them should understand that evidence use and improvement do not operate in a one-to-one relationship and that evidence rarely points to an unambiguous path to improvement. For central office administrators to make productive use of evidence, they must have opportunities to interpret and translate evidence.

Improve access to "useful" evidence.

Evidence is useful when it clearly relates to pressing central office matters, is available in a timely manner, and comes in relatively straightforward forms. However, evidence translators should take care to ensure that their reformulations of evidence are simple enough that they will be used – but not so simple that they strip their original source of potentially valuable information. Educational researchers may have particularly important roles to play in this process by asking research questions that relate more closely to central office practice and by translating (or collaborating with someone to translate) their research into more accessible formats.

Support nontechnical capacity: professional models and collaboration.

Improving evidence use isn't simply a matter of building a better management information system. Central office administrators need models of professional practice that include evidence use as part of their day-to-day routines – especially those who may perceive evidence use as a job for "experts." These routines include substantive collaboration among central office administrators overall and, in particular, between those who may be designated to acquire evidence and those charged with incorporating it into central office decisions.

Meredith I. Honig is an assistant professor of education at the University of Maryland College of Education. Cynthia E. Coburn is an assistant professor at the University of California, Berkeley, Graduate School of Education.

Note: This paper is based on a book chapter, written by the present authors and Mary Kay Stein, forthcoming in Conference Volume on Evidence Use [provisional title]. The authors thank the MacArthur Network for Teaching and Learning (funded by the John D. and Catherine T. MacArthur Foundation) for their support of this research. The authors also thank Carolina Belalcazar, Mika Yamashita, Marlene Mullin, and Scott Mergl for their various contributions to this article. Any opinions expressed herein are those of the authors and do not necessarily represent the views of the MacArthur Foundation.
