Presidential Essay: AEFP’s Role in an Evidence-Informed Education Ecosystem
Cara Jackson
Abstract
Historically, the U.S. federal government has invested in education research as a public good, with the implicit assumption that such research can support more efficient and effective educational opportunities nationwide. These investments were formalized in the Education Sciences Reform Act (ESRA) in 2002, which authorized federal government efforts to collect statistics and conduct research on the U.S. educational system. ESRA established the Institute of Education Sciences (IES) as an independent research institute, and for over two decades IES led the development of research infrastructure and provided funding for research. Along with many other members of the Association for Education Finance and Policy (AEFP), I have benefitted from IES-funded training workshops on causal inference and cost-benefit analysis.

In the weeks leading up to AEFP's fiftieth annual conference, the Trump administration began dismantling the federal education research infrastructure.1 Nearly all IES contracts were terminated in February, leading to reductions in force at research firms. Most IES staff were put on administrative leave in early March, prior to a massive reduction in force in August 2025.

AEFP's mission is to promote research and partnerships that can inform education policy and finance and improve education outcomes. Given the alignment of our mission with the notion of education research as a public good, we collaborated with others to advocate for continued federal investments in education research. Joining forces with leaders of our sister professional associations, we highlighted how education research can lead to a more efficient and effective educational system (Jackson et al. 2025). AEFP members wrote op-eds and spoke to journalists covering the administration's actions. In partnership with the Institute for Higher Education Policy, AEFP also filed a lawsuit against the Department of Education.
The lawsuit argues that the administration's actions were contrary to the ESRA (P.L. 107–279), which requires IES to exist and to conduct and disseminate research. Though the judge denied the preliminary injunction, he acknowledged the validity of specific complaints around losing remote access to student data for research purposes (Barshay 2025). By July 2025, the Department had reversed its decision to terminate remote access to data (Public Citizen 2025). Yet the fate of IES remains uncertain as I write this, on day 37 of the longest government shutdown in our nation's history.

As we navigate reduced research funding and infrastructure, partnerships with policy makers and practitioners at the federal, state, and local levels are essential to ensure that their evidence needs are met efficiently. Drawing on the wisdom of past presidents, lessons from the field, and prior research, I explore how AEFP has approached answering the questions of decision makers. I conclude by calling for an evidence-informed education ecosystem grounded in the teamwork and intellectual humility needed to achieve AEFP's mission.

Presidential essays over the years shed light on the multitude of legal, political, and financial issues in education that AEFP has addressed. These essays describe education finance court cases (Guthrie 2006), critical junctures in federal education policy (Hannaway 2015), specific federal policy initiatives such as school turnaround (Orland 2011), and state policies (Cunningham 2014; Wyckoff 2006). As modern education finance research extended into the operations of education (Guthrie 2006), essays on teacher quality reflected this shift (Rice 2013; Roellke 2007).
Other essays focus on AEFP's role in the education ecosystem (Brewer and Miranda 2016; Goldhaber 2017; Herrington 2013), and how the association has navigated challenging times while continuing to pursue our mission (Downes 2021; Iatarola 2022; Strunk 2023).

Presidential essays often address what Plecki (2006) calls “persistent dilemmas” that arise in education research aimed at informing education decisions for the benefit of students (Loeb 2012). What educational opportunities are available to whom? What resources are required to provide these educational opportunities, and how are these resources distributed? And do programs and policies achieve intended outcomes? A common theme is efficient resource use to improve outcomes (Stiefel 2006), which is highly relevant as the Elementary and Secondary School Emergency Relief funds run out.

Policy makers and practitioners benefit from a comprehensive body of evidence that addresses all three types of questions. Addressing this variety of questions requires multiple methodological approaches and data sources, and the intellectual humility to recognize how different methods complement one another. As Cartwright et al. (2022) note, privileging some methods as “most rigorous” limits the findings to the questions that can be answered by those methods. Relevant and compelling policy research must answer a variety of questions.

Regarding access to educational opportunities, descriptive quantitative and qualitative analyses offer essential insights. Members of AEFP's policy maker and practitioner community have called for descriptive analyses to help agencies understand trends and context (Association for Education Finance and Policy 2025), and noted that such analyses can shift policy makers’ conversations about research and demonstrate their commitment to data-driven decision making (Conaway, Keesler, and Schwartz 2015).
Descriptive information can shed light on “capacities, needs, methods, practices, policies, populations, and settings in a manner that is relevant to a specific research or policy question” (Loeb et al. 2017, p. 1). To understand why we see differential access to opportunities, interviews, focus groups, or observations are useful. For example, focus groups with students might reveal that guidance counselors are steering English learners away from Advanced Placement courses, even if the school district policy allows any interested student to enroll in such courses.

Policy makers and practitioners have also called for more information on implementation. Such information includes the resources required for implementation to support planning (Cunningham 2014), advice on implementing evidence-based practices (What Works Clearinghouse 2024), and data on implementation quality to improve the chances of having a positive impact (Association for Education Finance and Policy 2025). Determining the full cost of implementation helps school and district leaders decide whether they can feasibly adopt a program with fidelity (IES 2020). Such information is typically obtained from a combination of document analysis and interviews with program personnel (cf. Bowden et al. 2016). Information gathered from systematic examination of resource use and programming can be used to improve the efficacy of existing programs (Yan and Aberli 2024).

Accurately identifying the benefits of programs and policies requires credible, relevant causal impact estimates. Policy makers and practitioners encounter a variety of claims regarding the effectiveness of programs. To meet decision-making needs, researchers must strive to provide valid estimates of causal impact through rigorous quasi-experimental or experimental designs.
We can strengthen the credibility of such evidence by following the Standards for Excellence in Education Research, including preregistering confirmatory impact analyses for quasi-experimental as well as experimental designs (Dee 2025; Peck and Litwok 2024). To be relevant to policy makers and practitioners, the impact estimate must focus on outcomes of interest to the decision makers who will ultimately use the research findings.

Answering policy-relevant questions in methodologically appropriate ways is only the beginning of the journey to incorporate evidence in decision making. For policy makers and practitioners to use evidence, they must perceive the benefits of change as outweighing the costs—that is, the evidence must overcome status quo bias. Some people call this the “last mile” problem, but, as Orland (2011) suggested, it is a marathon, not a sprint.

A recent study suggested that the rigor of research, size of effects, and generalizability are not factors in whether evidence influenced policy spending (Rao 2025).2 Further, education policy makers do not have a preference between experimental and observational studies (Nakajima 2021). These findings may come as an unpleasant surprise to those of us who have invested a great deal of time to increase rigor and generalizability. Yet, for any given decision maker, generalizability is less about studies with large, demographically diverse samples and more about the plausibility of positive outcomes replicating in their context. Plausibility may depend on the availability of resources, local policies, and legislation that affect the ability to successfully implement a strategy. For example, policies that encourage differential retention of teachers may be more likely to lead to desired outcomes in schools and subjects with an oversupply of teacher candidates.
For schools and subjects with teacher shortages, decision makers might not see the relevance of strategic retention, since they may feel pressure to retain as many teachers as possible.

As such, good decision making requires considering both the available research and the context in which the research would be applied. As seen in figure 1, research evidence is a subset of the knowledge decision makers use, and it is situated in a complex education ecosystem. The ecosystem consists of federal, state, and local policies; local labor market conditions; priorities of the superintendent and school board; current student achievement levels; educator, parent, and student concerns; and resource constraints. While education leaders do report using research to inform their decisions (Penuel et al. 2017), they also draw on a wide variety of contextual information and evidence sources (Farley-Ripple et al. 2018).

Figure 1. Knowledge that informs decisions, with research evidence as a subset.

Knowing that a program has significant positive impacts on outcomes is not sufficient to compel use of evidence if policy makers and practitioners have limited knowledge of how to successfully implement the program in their own context. Educators have praised IES's practice guides for including implementation advice, including tips to address potential obstacles (What Works Clearinghouse 2024). The Annenberg Institute and Results for America have generated a series of EdResearch for Action briefs, which summarize evidence-based strategies for various topics. The National Student Support Accelerator provides a variety of tools related to tutoring implementation. The Center for Outcomes Based Contracting's Standards of Excellence encourage district leaders to build mutual accountability for implementation into contracts with external tutoring, edtech, and professional learning providers.
IES had begun to require grantees to collect data on costs to provide key information to education decision makers about the resources needed to implement programs with fidelity, and it provided resources to support researchers in this process (National Center for Education Research 2023). These initiatives provide compelling and relevant information to potential evidence users.

AEFP has a history of bringing together practitioners and researchers in ways that strengthen the credibility, relevance, and timeliness of evidence. Members have led and advocated for partnerships between education practitioners and researchers (Conaway 2019; Goldhaber 2017; Loeb 2012; Schwartz 2010). Such partnerships can help researchers design studies that address the needs of practitioners and policy makers (Goldhaber 2017). One step in this process, championed under the Foundations for Evidence-Based Policymaking Act of 2018, is to develop a learning agenda with input from educators, community members, parents, and students. The learning agenda can then guide the development of evidence that is relevant and useful to the current needs of decision makers (Jackson 2022). For example, co-developed learning agendas might focus on cell phone policies or evaluate claims made by providers of edtech products that a school district has purchased. Figure 2 provides a simplified model of how research partnerships generate more use of evidence and, ultimately, improved educational outcomes.

Figure 2. Simplified model for strengthening evidence use to improve outcomes.

As Figlio, Karbownik, and Salvanes (2017) note, administrative data provide an opportunity to generate more policy-relevant research findings. Using existing administrative data is an efficient way to track people over time, including their participation in programs of interest.
While this reduces researcher costs by minimizing primary data collection, education agencies bear the costs of governing use of these data, which they typically collect and store for purposes other than research. Thus, researchers have a particularly strong ethical imperative to ensure that the information produced benefits the agencies whose resources are used to generate these data, and the taxpayers who subsidized their existence.

Embedding researchers in education agencies, as the Strategic Data Project does, can expedite research and promote evidence use. Embedded researchers have several advantages: trusted relationships with decision makers, familiarity with the datasets, and knowledge of policy timelines. Those with quantitative training can run code provided by external research partners and return the output, forgoing the lengthy processes of developing data-sharing agreements and anonymized datasets while minimizing data security risks.

Partnerships between researchers and practitioners strengthen interpretation of evidence and support organizational learning on both the research and practice sides of the equation. Conaway (2019) describes the value in sharing preliminary findings and discussing and interpreting results, giving equivalent weight to the perspectives of policy makers, practitioners, and researchers. Relational and interpretative research use, paired with organizational norms of asking questions and taking appropriate action based on the answers, creates the conditions for continuous improvement. Ongoing partnerships can support the collection and analysis of data to monitor process outcomes and implementation fidelity.
Doing so both informs practitioners’ improvement efforts and provides researchers with insight into the resources required for effective implementation, which is needed to determine the full cost of achieving a specific effect size.

Partnerships between researchers and practitioners also support evidence use by increasing the likelihood that research will be timely and presented in a policy-friendly manner. Education agency staff can raise awareness of policy windows in which research findings might be most relevant, and, as Rao (2025) notes, timeliness is an important factor in the evidence-to-policy pipeline. Additionally, partnerships give researchers opportunities to share the brief, accessible explanations of research design that generate persistent changes in policy makers’ beliefs in response to research evidence (Nakajima 2021).

The evolution of AEFP over the past fifty years reflects a commitment to conducting actionable research in partnership with policy makers and practitioners. While we started as a school finance organization, modern-era education finance “is concerned with relationships of inputs to schooling outcomes. This modern education finance paradigm provokes a need for . . . research and data extending into the operations of education itself, not just financing” (Guthrie 2006, p. 3). Our questions have shifted away from “does money matter?” toward more nuanced questions of how and why money matters, such as in recent work from Nguyen, Anglum, and Crouch (2023) on school finance reform.

Given the high proportion of school budgets devoted to staff salaries and teachers’ critical role as the most important in-school input into students’ educational outcomes, it is not surprising that Education Finance and Policy’s most cited publications include teacher-focused articles and policy briefs. These include studies of changes in teacher entry requirements (Boyd et al.
2006), teacher experience (Rice 2013), teacher effectiveness and evaluation (McCaffrey et al. 2009; Rockoff et al. 2010; Steinberg and Donaldson 2016), and racial disparities in access to high-quality teachers (Clotfelter, Ladd, and Clifton 2023). Additionally, AEFP's Live Handbook devotes a substantial number of chapters to the teacher workforce.

AEFP's policy maker and practitioner members provide a solid foundation for evidence use, with generous support from foundations that have funded their attendance at AEFP's annual conference. In 2025, Arnold Ventures funded AEFP's Innovation Day, coordinated by Matthew Duque and Shanna Ricketts, who co-led the Policymakers & Practitioners Community Group. Several attendees of Innovation Day became contributors to AEFP's Evidence Use Spotlight series. Since these members are more likely than researchers in academia or research firms to have regular communications with decision makers, evidence use in education is driven by this community.

Many AEFP members have gone the extra mile to ensure that research findings are accessible to the public. Gema Zamarro's work on dual language and language immersion programs was cited by the D.C. Policy Center (Mason 2024), and she has been cited by the Washington Times on the implications of enrollment declines for school funding (Salai 2024). Constance Lindsay was quoted in The 74 on the benefits of a diverse teacher workforce. Brookings's Jon Valant has appeared on PBS NewsHour to discuss the priorities of the Department of Education under the Trump administration (Bennett and Cuevas 2024). The journalists who cover education issues are an important part of the ecosystem, and responding to their calls is a step toward a more evidence-informed education sector.

Developing evidence for decision makers that is compelling, relevant, and actionable requires working with a broad variety of actors across the education ecosystem.
State, district, and school leaders, educator preparation programs, professional development providers, and program developers all play a role in whether evidence is used in the education sector. Policy influencers such as journalists, teachers, and parents also play a role. To win a relay marathon, we must connect the various actors around our common goal of improving educational outcomes.

Of course, operationalizing that common goal is complex: individuals vary in the weight they place on different measures of educational outcomes, such as test scores, educational attainment, or assessments of social–emotional learning (Goldhaber 2017). Parents might prioritize safety over academic outcomes. Policy makers might emphasize short-term test score gains over long-term educational attainment. Voters may attend less to education's societal benefits and positive externalities than we might like (Brighouse et al. 2018).

By partnering with policy makers and practitioners, researchers may be better situated to provide evidence that speaks to the contexts of these various actors and is more likely to inform decisions about education. Evidence use grows through trusted relationships and organizational learning (Conaway 2019). Most people do not think of the time spent building relationships as a cost to tally for the purposes of a cost-benefit analysis. Yet I would bet money that if we do not invest in relationships, decision makers will be less likely to use research. Researchers who hope to build trust with others in the education ecosystem will need to do the work of learning from and partnering with policy makers and practitioners. Embedded researchers have more time with practitioners than external researchers do, so partnering with agencies and their staff may be critical for the future of education research.
I hope AEFP will continue to draw on the strengths of our members to partner across the education ecosystem in pursuit of improving educational opportunities and outcomes for all students. I am grateful to those who have served at the Institute of Education Sciences. The field has better data, resources, and evidence because of federal investments in education research as a public good. I also owe thanks to the members of our community of education practitioners for many conversations about evidence use.