By Watson Scott Swail, President and CEO, Educational Policy Institute
After 25 years as an educator and 15 as an evaluator, data geek, and policy wonk, I’m forever fascinated and amazed at the arrogance and ignorance people have toward evaluation of educational programs. Maybe I’ve been hanging out with the wrong crowd for too long, but I don’t think so. I’ve worked for a variety of research and membership organizations since the mid-1990s, and I have a lot of respect for my colleagues in the research arena, as well as for those working hard in the field to better the lives of students at all levels.
What gets me is the attitude that many people in the field have toward evaluation. This certainly isn’t a blanket attribution, because I know directors of educational programs at the district and state level who clearly understand the importance of solid, empirically driven program evaluations. This is aimed at those who clearly do not understand the nature of evaluation and see it only as pure evil standing in the way of their desire to do the “public good.” The truth is, by circumventing the evaluation process, the public good is not served well at all; more often, it is undermined.
I’ve been mostly involved in the evaluation of early childhood education programs, college preparatory programs, and college success programs. My research has resulted in either the proliferation or elimination of programs. Depending on the circumstances, both alternatives are appropriate avenues upon the completion of a well-conducted program evaluation.
In the first case, programs that are run well and pursue their goals in a prudent and appropriate manner value what they learn from evaluations, because, generally speaking, the people behind them understand how evaluation findings drive the continuous improvement of their program. By contrast, those who are scared of evaluations do not value critical feedback or program improvement; they just want to promulgate their program and keep bringing in the money. They don’t want to expose the fact that they are in over their heads. The goal of serving students and the “public good” becomes far from primary.
We see this often in federal programs and surely in state/provincial programs. Let me give you the example of the federal GEAR UP program. GEAR UP is a pre-college effort to help middle and high school students prepare for college. States and partnership programs are given six-year grants to focus on schools (and students) in low-income areas, those with the fewest resources to prepare for college. Ten years later, we’re not sure how well GEAR UP has done in this area, in part because of a relatively botched, multi-million-dollar national evaluation. At the state and local levels, we’re still not sure of the impact or efficacy of the program because we don’t have a clear idea, from a research perspective, of what is happening from program to program. We know, from experience, that some of these programs are on the cutting edge of research and evaluation. But others aren’t even in the ballpark.
Let me give you a further case in point, from EPI’s perspective as a research organization. We have been written into a number of GEAR UP grant proposals as evaluators. In these cases, the institutions of higher education that prime (lead) these grants write us in by name, along with our other research scientists. Because of our knowledge and our reputation, I think it is safe to say this helps them win (or lose, to be fair) the competition, especially given that 20 percent of the proposal’s scoring rubric is based on evaluation and data management.
However, we have been involved in several cases where, once the contract was awarded, the institution (or state) decided either not to allocate the money for evaluation or to turn around and use an in-state firm for the evaluation, at a much lower level of effort than proposed in the winning application. You might understand, purely from a business standpoint, how frustrating this can be: to learn of a winning grant and then not receive the evaluation contract that had been promised (verbally, because they can’t write contracts ahead of funding, so it’s all “good faith”). We have lost well over a million dollars in evaluation contracts in GEAR UP alone due to this issue. As a non-profit organization, you might imagine that a million or so dollars has an impact. In one case, I had to cut staff when we didn’t receive a $600,000 promised contract from a southwestern state. These decisions have consequences, and the bureaucrats on the ground don’t seem to get it. They’re just trying to save money, even though they had promised it to the federal government.
A second level of frustration comes from a public policy perspective. If a grantee wins a competition based on a proposal, does it not owe the public an obligation to do what it has been entrusted to do? In GEAR UP and other federal programs, I can name countless examples of what could constitute fraud in the use of federal monies, mostly in the form of redistributing funds for discretionary purposes. Somehow, when states and partnerships get money from the federal government, they choose how to spend it regardless of the budget they proposed. This does not happen all the time, but it happens far too often.
The federal role in education is extraordinarily limited. The real funders of education are the states. The feds can only dabble in education, which means every federal dollar matters. To that end, we must ensure that every dollar is leveraged to do the most it can possibly do; having the recipients of those funds undermine the integrity of a federal program is unconscionable. But it happens.
A further point worthy of discussion, especially for those I am perhaps singling out in this article, is the definition of evaluation. It seems, in this new age of accountability, that data rule all discussions. Again, using GEAR UP as an example, we have seen several RFPs (Requests for Proposals) from states and partnership programs seeking evaluators for their programs. All is well and good until we read through the RFP and realize that they don’t want evaluators at all. They simply want a data system to collect data. This stems, in part, from a federal requirement for an Annual Progress Report, better known as the APR, due each April 15. The APR provides basic data on student progress and activities that substantiate that grantees are, in effect, doing what they are supposed to be doing with their money. This necessitates the collection of data on an ongoing basis. EPI, like other research organizations, has developed a sophisticated web-based system for doing this.
The problem comes when programs consider completion of the APR to be evaluation. It is not. It is the production of an APR, a federal report that meets a legal requirement. But it isn’t evaluation in any form. Going back to the premise of this commentary, a true evaluation provides not only important information on the educational outcomes of a program, but also formative information, often in qualitative form, on how the program is operating. That is, does the program meet its goals? Are the strategies and internal processes working effectively and efficiently? Are there ways to improve the program and, in turn, improve student progress? Neither the APR nor basic data collection informs these questions. Focus groups, interviews, data analysis, document review, and other activities do. And these are the types of evaluation strategies that many programs are eliminating because they cost money. Give us the data system. Get us an APR. We’re done.
I understand that programs want to maximize the amount of money that goes into staffing and, indirectly, to students. But if we don’t conduct proper evaluations of these programs, how do we know that the funds are being used prudently at all? How does a program learn that certain strategies work and others don’t? Or that it could move funds from one place to another and have a greater per-dollar impact?
The truth is, they don’t. And that is a crime against taxpayers and students. If we’re going to leverage federal funds, we need to know that they are being used prudently. And if programs aren’t doing what they are supposed to be doing, and cannot produce the evaluation evidence to prove their worth, then their federal funding should be taken away.