By Watson Scott Swail, President & CEO, Educational Policy Institute/EPI International
Dear Prospective Client:
My organization and thousands of others, including university-based, non-profit, and for-profit organizations, are pleased that you are soliciting our help in evaluating your project. This is how we keep the doors open, and, in our case as a non-profit organization, it is how we pursue our mission of expanding educational opportunity through high-level research.
We understand that you want the best possible evaluation of your project. We are pleased that you share our commitment to excellence. That’s a great start. Really.
But we are perplexed by some of the conditions set forth in your RFP (Request for Proposals). Let me call a few into question and ask why on earth you would put us and our colleagues at competing companies through the hell you quite evidently want us to go through just to submit, let alone win, your competition and then complete your evaluation. I understand that you may think I’m overblowing this issue. I’m not.
Project Budget. I’m not sure why this happens, but it is extraordinarily frustrating that organizations like yours (especially you state and provincial government entities) do not set a parameter for funding. It’s like telling the vendor, “We don’t really care about the budget, as long as it is the cheapest.” Put another way: “We also don’t care about the quality of the project, because we only want the least expensive.” Not providing budget parameters is akin to telling us we can do almost anything we want. Cost doesn’t matter. Except the same agencies will then also place 40 percent of the proposal score on cost. Case in point: Maryland put out an evaluation with no parameters. The highest proposal came in at $1.9 million, EPI came in at $600k, and the winner was at the bottom: $220k. A range of $220k to $1.9 million…something is very wrong with this scenario.
Do us a favor. Set a cost. You have a budget. You know how much you can spend. Set it, let us come up with the best design for that budget, and let that be the competition. Cheaper is not better. Cheaper means something gets left off the table.
Not providing a budget parameter also increases the possibility that companies like ours will not submit a proposal. We regularly dump RFPs because they are too risky a use of our time. We can’t take 40–60 hours to prepare a proposal just to find out we were way too high (or worse, too low!) to compete. Time is money. It costs us between $5,000 and $20,000 to prepare a proposal, and that’s real money to us. So be considerate of others, and realize that this isn’t “free.” It is only “free” to you. This is a killer.
Our Ideas. Another pet peeve of mine. We’ve been involved in several competitions that asked us for our best ideas on how to conduct an evaluation. We didn’t win, but then the company that won—typically a small, local entity—used our evaluation strategies. It’s unethical to have us spend time generating a comprehensive scheme for an evaluation and then pass it on to your “preferred” vendor. If we come up with the idea and you like it, pay us for it. Don’t give it to your Shiraz-guzzling friend who will do the $150,000 evaluation poorly for $15,000.
Misuse of Government Money. I cannot count how many state and local RFPs/contracts we see that misuse federal funds intended for evaluation activities. Well, to be clear, use them for non-evaluation activities in many cases. Federal projects typically require a third-party evaluator. This makes sense. You need an arms-length organization to measure, in an unbiased manner, the impact of the program or intervention on student outcomes. This is most of the work that we do. However, the misuse of federal funds is STUNNING.
Two recent examples (one from this morning) come to mind. The first is a state with a large federal program. The state set the research budget at over $300,000 but provided only $50,000 for the external evaluator. So what did they do with the rest of the money, you ask? They used $300,000 annually to fund state employees in their evaluation department, plus funds to further develop their state data system. Not exactly what this project was supposed to support.
The example from this morning is a state that is “willing” to provide $50,000 annually for five years to evaluate a $33 million project—$33 million!!! Typical evaluation budgets run approximately 7.5 percent of the total budget. That would set the evaluation budget at about $500,000 a year. Even if you scrimped at 5 percent, it would be $330,000 a year. But they want it done (across 46 separate school sites, no less) for $50 grand a year.
These sites don’t want an evaluation. They want a rubber stamp.
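The shortfall in that example is simple arithmetic. A minimal sketch of the rule-of-thumb calculation, using the letter’s own figures (the 5–10 percent range is the de facto rate cited here; the function name is mine):

```python
def annual_evaluation_budget(total_budget, years, rate):
    """Evaluation dollars per year at a given share of the total project budget."""
    return total_budget * rate / years

TOTAL = 33_000_000   # total project budget from the example above
YEARS = 5

typical = annual_evaluation_budget(TOTAL, YEARS, 0.075)  # ~7.5% rule of thumb
lean    = annual_evaluation_budget(TOTAL, YEARS, 0.05)   # scrimping at 5%
offered = 50_000                                         # what the state offered

print(f"Typical (7.5%): ${typical:,.0f}/year")   # $495,000/year
print(f"Lean (5%):      ${lean:,.0f}/year")      # $330,000/year
print(f"Offered:        ${offered:,}/year ({offered / typical:.0%} of typical)")
```

The state’s $50,000 offer works out to roughly a tenth of the conventional rate, which is the gap the letter is objecting to.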
Just as bad are the states that delay the evaluation for one, two, three years or more. It happens all the time. Then they bring in an evaluator at the end of their five-year project to pull all the pieces together, but pay them for only one year. It’s all ex post facto analysis: no hand in the development of instruments, no voice in how the experimental groups are formed (usually there are none, by the way).
Timeline. This is a good one. Ever had to respond to an RFP that wanted a year-long study done in a month, because the legislature demanded it? It happened to us this past December, when a state was obviously behind the proverbial eight ball because the previous vendor had dropped that same ball. You can’t help but wonder whether the state unit itself dropped it and needed someone to clean up the mess. Trust me, this happens all the time, and it showcases how many states (and perhaps provinces, though these issues do not seem to occur at the same frequency in Canada) just don’t have their act together.
Project Planning. There are also several examples of states or localities that receive large grants (we’re talking $25 million) from the federal government—and do not even have an evaluator. What was the federal government thinking?
Be Truthful. We had a $250,000 evaluation a few years ago. We were essentially released from the project (as was the previous evaluator) because we “offered” to get the client an additional $1,000,000 from the federal government to extend the evaluation. Why would they not want another million for a better evaluation? Ahhhh. Because they knew their project didn’t hold up well after the initial year. The long-term findings would be null, and an official longitudinal study would negatively impact their local fundraising efforts. Can you believe it? They knew their program didn’t work beyond the first year, but couldn’t risk their funding. In truth, the leadership needed to find out why this was happening and what they could do about it. And if the program really doesn’t work, why does it exist at all? A waste of money.
As well, demand truth from your evaluators. Many evaluators will say almost anything to get a job. Hate to say it, but it’s true. We’ve actually lost jobs before for being honest. I personally have travelled across the country for a final proposal presentation only to say to the working group: “Well, if that’s what you want, we can’t do it; nobody can.” We didn’t get that job. But what they wanted could not be done—period. They surely hired someone who said they could. I’ll go with honesty every time, even if it costs us work. Accept the honest ramblings of an evaluator. If they say it can’t be done, they either don’t know what they are talking about or are being truthful.
Give it to the group or individual you want doing it. I have been involved in several RFPs where the final project was given to the person the organization wanted in the first place. Some of these are “wired,” or set up for a particular bidder to win. You can sometimes tell this in advance from the wording. In other cases, they look at the 10 or 12 proposals and decide that their original confidant had it better all along. Hire them at the start and leave the rest of us alone. It’s too much time and expense to compete when we don’t realistically have a shot at the project. And it’s maddening. We’ve benefited from “wired” projects before, but not many. We’ve lost a lot to them. Nothing but frustrating.
These are all examples of the problems we face. I have plenty more, but I’m depressed thinking about them. We’re here, like so many others, because we enjoy research and want our research to matter. We want to be unbiased and tell a story so that programs can improve. If you don’t know where you are now, how do you know how far you’ve come?
So, to our future clients (if I haven’t scared them off):
- Set a budget. Just do it. Set your expectations for the evaluation and then evaluate vendors on what they propose to do for that budget. Take money out of the equation when possible. If you still want to factor cost into the scoring, then at least provide a budget parameter or a level of effort. Otherwise, we’re flying blind.
- Understand that a good evaluation helps you. Too many people are scared of numbers and the thought that their program isn’t doing what it should. Use that information! Knowledge is Power, and those with it rule. If you know the weaknesses of your program, you can deal with them. If you know the strengths of your program, you can build upon those.
- Don’t fleece the system. Feel fortunate that there are federal funds to support what you need to do at the state or local level. But don’t abuse it by misusing funds. That hurts all of us. It’s unethical. It’s wrong.
- Be realistic about your expectations. You can’t have everything for nothing, and sometimes you can’t do everything you want. Be real about what you need and what you want. Budgets do limit this sometimes, but don’t be artificial about your budget parameters and evaluation needs. You must reconcile the two. And remember, if a contractor’s promises are too good to be true, they are. I’ve seen too many evaluators promise things that simply can’t be done.
- Don’t waste our time. Time is valuable, and putting out an RFP is a big deal. Only do so if you have to. If you have already chosen a vendor, steer the work there directly rather than running a sham competition. If you are going to do an RFP, do it fairly. For everyone.
But don’t waste our time.
6 thoughts on “An Open Letter to All Organizations in Need of a Program Evaluator”
This letter makes excellent points. I hope it is read widely by the right people.
A few months back I posed the same questions you raised in a blog post to the American Evaluation Association membership and got a litany of complaints like the ones you list in your open letter. Some also blame the funders, who continue to demand proof of impact but allow only a minimal amount of the budget, if any, to be dedicated to the evaluation. I just went through the budget conversation last week with a client with a $100,000-a-year budget with 10% for evaluation. What could I offer for $10,000? Her answer was that she would go to the university, where they can do it cheaper because they use graduate students! I said, “Go for it!”
We share these frustrations but the problem is substantially bigger than just being a burden to evaluators:
** Lowest-bidder decisions based on scores weighted to focus on price, even when put in place in good faith efforts to control costs to agencies, invariably sacrifice VALUE. A better practice would be to solicit capacity statements through an RFQ (for Qualifications) process, then go back to the most attractive offerors for detailed conversations about evaluation study scope, rigor, and cost, on which a final decision can be made.
** The ultimate burden of bad evaluation procurement decisions is borne by the programs – many of which are multimillion-dollar public-money undertakings paid for by US taxpayers. The response to this should not be “Kill the programs!” but instead, to demand that such programs hold high expectations for evaluation at all levels, and promulgate those expectations to grantees and sub-grantees.
** Unethical low-ball responses, encouraged by the types of procurement practices described above, cheapen an entire field of practice. If you need evaluation research services, hire trained professionals. Just as being a student does not qualify one to be a teacher, being a (whatever) is not sufficient preparation to evaluate what (whatevers) do with rigor, value, and utility. Quality costs money, so learn to balance these two attributes, make informed decisions, and demand good work.
Another way that evaluation procurement can break down is through the well-intentioned-but-badly-written RFP. We see this, for example, when state agencies solicit competitive sub-grants under federal programs. The states know what they are expected to report to the feds but are often unable to effectively make the translation to evaluation reporting requirements for local awards. The state’s RFP to sub-grant proposers ends up missing the mark evaluation-wise, and (where they happen) the locale’s RFPs to potential evaluators are even further askew. It’s risky to contract for kitchen remodeling without knowing what you want and the lingo required to describe it. The same thing applies to securing an evaluator. Determine what you need and how to effectively ask for it, or get help from someone who can help you figure it out.
Finally – and even more fundamental – is the eternal challenge of RFPs that solicit evaluation proposals without actually explaining what is to be evaluated. Agencies seeking an evaluator: Please help us be responsive to your needs, and be more likely to hit your budget nail on the head, by clearly describing (a) what you plan to do, and (b) what you hope will happen as a result. Arrive at consensus among agency staff about the value or anticipated outcomes of something before you search out evaluation services, or make that process part of what you ask us to do for you. You will likely be disappointed otherwise.
We evaluator-types might get cranky but many of us do want to do a professional job for you.
One question for you …. where did you get this figure “Typical evaluation budgets run approximately 7.5 percent of total budget.” I would find any reference to evaluator payment standards most helpful, especially in regard to federal grants. Thanks!
I’ve seen it in government documents (they often say between 5 and 10 percent) and through working with colleagues for several decades; this is the de facto percentage rate.