Category Archives: evaluation

Fighting to Avoid Change

I recently had a contract to evaluate an organization’s safe schools program. My job was to evaluate the degree to which they were achieving their identified objectives and implementing their program as designed.

The bottom line is that they were neither achieving objectives nor implementing their program as designed. Every time the project director tried to do something she was supposed to do, she was foiled by upper administration. They said they wanted change, but they did everything in their power to stop change. So, the project director stayed busy doing other things – good things – while staying away from any controversy that might affect her job.

Halfway through their 4-year grant period, they were subject to a federal monitoring visit because of a clash between the grant’s lead partners and the dysfunctional administration of the grantee (my client). I was asked to share results.  I did. I said they were neither achieving objectives nor implementing the program as designed.

Until that moment, I had no idea how far people would go to cover their tracks and avoid change. The administration rose up and started pointing at all the wonderful things they were doing. I made the point that those activities were, indeed, wonderful, but they would not do a thing to get them closer to achieving their objectives. I also reminded them that they had selected the activities in their grant because they were evidence-based practices that would likely lead to positive changes in the areas targeted by their objectives.

Things got ugly. Soon, fingers started pointing at the evaluation as the culprit.  That perplexed me because the evaluation had no role in implementation at all.  How could it possibly be our fault that they were not doing what they had agreed to do?

But they were persistent and brutal.

They asked for (and were granted) permission to change some of their objectives to say simply that they were successful at doing what they were doing.

Six months later, when it came time to contract for another year, I declined and walked away. Clearly, the administration was more interested in avoiding change than making their schools any safer. I know that sounds harsh, and I know that those administrators would never, ever admit to such a thing.  Maybe they don’t even realize what they are doing, but avoiding accountability is avoiding change and fighting to keep the status quo. I couldn’t be part of that anymore.

The result?

They hired another evaluator, presumably one they hope will tell them what they want to hear.

And now, at the end of the grant period, the schools are no safer than they were before the grant was written, nothing has really changed in the infrastructure of the organization that can reasonably be expected to make their schools safer, and there is even more gang activity (and it’s more violent) in the community than there was before.

Millions of federal dollars were spent and nothing significant has changed.

Why?

Because it’s human nature for people to avoid change and, if their jobs may be affected in any way, they will fight to avoid it. The status quo, the “way things have always been done,” is a very powerful force. Clearly, throwing money at it is not the key to change. Don’t get me wrong. Financial resources may be necessary for change, but they are not the most important part.

The most important part is buy-in, and not just the buy-in of your collaborative partners, although that is very important.  Often, the buy-in you need most to make anything real happen is on the part of people you didn’t even think to bring to the table.

So, as you are thinking about applying for a collaborative grant, ask yourself, “If we get this grant, who could really sabotage our efforts and cause us to fail?” That is who also needs to be at the table from the beginning.

Published by Creative Resources & Research http://grantgoddess.com

When Is It Time to Let a Client Go?

If you’re like me, you want to think that you can help everybody.  The truth, of course, is that you can’t. That is true in life, and it’s true in the world of grant writing and program evaluation, too.

I recently let a long-time client go. At the same time, I released about $70,000 in income they would have provided over the next year and a half.

Why?

Because it was the right thing to do.

The bottom line is that when the relationship isn’t helping the client anymore and it’s making you crazy, it’s time to step back. I reached that point with this client.

My contacts for the organization were not taking any of my suggestions (which is their prerogative, of course) and they were making really poor decisions that were not good for anyone, especially the youth served by that organization. There was so much infighting and backstabbing and lying within their organization that nothing got done and no one knew who to trust.

After working with them for 8 years in various capacities, I spent the last two years focused on my role with them and just trying to stay alive.

Just trying to stay alive…..seriously.  My health suffered. I wasn’t sleeping. I had convinced myself that to walk away meant failure, and I just don’t do failure. So I was banging my head against the wall until I realized that my work with them wasn’t helping anyone.

Since they were ignoring my reports and advice, not letting me do my job (everyone’s an expert, ya know), and I was literally sick from all the stress, it made no sense to continue the relationship.

Sure, that was a lot of money to walk away from, and it made me nervous, but money was not a good enough reason to stay. Money should never be the main reason for taking or keeping a consulting job.  It’s about making a difference.  If you are not making a difference, what’s the point?

Walking away wasn’t easy. I knew there would be gossip and speculation about what happened, and there was. I knew professional ethics wouldn’t let me speak about the details of what happened, and I didn’t – even when I heard untrue rumors floating around. I also knew that there were some very bad things going on related to youth that I would not be able to even attempt to remedy if I walked away, but I had to. That was the really hard part.

So I walked away. What happened?  My health has improved dramatically.  I’m sleeping well again. I have time now to take on new clients who want to work with me, so I’m developing new relationships and my work is fun again and more fulfilling.

Oh yeah….and these new clients have just about replaced the income I lost from the old one, and it only took a couple of months. So my biggest fear – losing the income – was just a boogieman that couldn’t survive in the light of reality.

The client hired another firm to handle the work.  Maybe that will work out really well for them.

Maybe the change I made will end up being better for everyone in the long run.

I learned a valuable lesson from this experience – walking away from a client when it’s not good for anyone is not a failure. It’s an opportunity to grow. Sometimes it’s the only right thing to do.


Silent Fraud in Federal Grant Evaluations Costs Billions

I’m stuck in a very difficult position with one of my evaluation clients right now. I have a report due very soon, and there are some poor outcomes to report and some whistle-blowing that needs to be done. This is the very reason this particular program requires that all grantees hire independent external evaluators. Many federal programs have the same requirement. It’s an effort to ensure that grantees don’t fudge their evaluation results to make themselves look better and worthy of continued funding.

The problem is that most external evaluators are not independent.  In fact, they are very dependent on the grantees for their livelihood.  Sure, they aren’t employees of the grantees; they are usually independent contractors, but bias is inherently built into the relationship by the very people who want to ensure an unbiased evaluation – the funders.

The problem: Grantees have the freedom to fire evaluators who say things that they don’t want to hear and hire someone else who will be more amenable to telling the story the way the grantee wants it told. And in this time of economic hardship and massive budget cuts impacting almost every organization in the country, grantees have a powerful incentive to look good at all costs just to keep the dollars flowing.

Sure, you can say that an evaluator with integrity will tell the truth anyway, and I agree with you to some extent.  Unfortunately, in today’s economy jobs are hard to come by and independent contractors have to do everything they can to get and keep jobs, so many are faced with this ethical conundrum at a time when they will pay a very high price for their integrity. They are faced with biting the hand that feeds them, and hoping that the hand doesn’t bite back.

And for every honest evaluator who stands her ground, there are 20 unscrupulous ones ready and willing to step in and say whatever the client wants to hear.

And it’s not just about the integrity of the evaluator in that situation or keeping that job. The grant world is a fairly small one and word spreads.  No one wants the reputation of being someone who isn’t afraid to make their client look bad.  It makes you a hero among evaluators and funders, but it also makes you untouchable to clients, and they are the folks who make the hiring decisions.

Here’s another problem: Many external evaluators write the federal performance reports for their clients. In many ways this makes sense because they are the ones most familiar with the data and in the best position to describe and report the outcomes. However, performance reports are technically the responsibility of the grantee, and they are submitted by the grantee as its statement of progress. In a performance report, the grantee has every right to change what the evaluator writes to align it with its own perspective. So, even if the evaluator has the integrity to tell the ugly truth, the funder won’t see it – unless, of course, the grantee doesn’t read its own report before it is submitted, which is an unfortunate but very common practice.

Unlike performance reports, evaluation reports cannot be tampered with by the grantee, but the evaluator has to deal with the first problem I described – biting the hand that feeds them.

So here I sit, staring at some data that tell a very unflattering story. I’ll write the performance report that tells the truth and the client will get very upset and change it before they submit it. Then we’ll have some tension in our professional relationship, which I’ll spend the next 5 months trying to repair before the decision about contracting with me next year has to be made.

Yes, my friends, these are your tax dollars at work. It’s a corrupt system. Because performance reports are used by the federal government to make decisions about continuation funding, lying in performance reports constitutes fraud, but everyone looks away.  Looking away is the only way the corrupt system can continue.

In a time when banks and big businesses are being vilified for their fiscal practices, this fraud – which amounts to billions of dollars a year – goes unexamined and continues to thrive in every corner of the country.


Changing the World

“Never doubt that a small group of thoughtful, committed people can change the world. Indeed, it is the only thing that ever has.”  –Margaret Mead.

I had one of those awesome grant writer payoff moments yesterday.  I was sitting in an end-of-year evaluation meeting with a group of collaborative partners that has been implementing a grant funded, school-based violence prevention program for the last four years. The group was discussing the outcomes for the past year and plans for the next year.

It was an unusually lighthearted and joyful meeting.  Of course, there were many educators around the table and school is out for the summer, but even in that situation grant evaluation meetings are typically not that celebratory, at least not in my experience. However, this group had good reason to be proud. There was good improvement in our targeted outcomes in spite of the fact that the sites involved had been hit hard by budget cuts and had suffered several dramatic challenges late in the year (the death of a teacher at one school; the arrest of a teacher at the other).

As we were discussing the outcomes and fine tuning the plans for next year, the real magic happened. A student walked in the room bringing some copies to the meeting facilitator.  After the student left, one of the principals said, “Now she’s a real success story!” and he proceeded to tell us how troubled that young woman had been and how many thought that she might be in real trouble and lost beyond the ability of anyone at the school to help.

Then he talked about the services provided to the young woman through the project – not just through the grant, but through the entire collaborative effort.  We learned that she had been assisted in various ways by at least  8 of the project partners in that room, and that the grant had helped coordinate those services so the community could actually wrap its arms around that young woman and walk her through the difficult time in her life. Then he told us how well she is doing now (including earning a 3.5 GPA!). The principal finished his remarks with the words, “Seriously, we saved a life.”

I sat there listening quietly, but the truth is that it was a moment that took my breath away.  I couldn’t speak because there was a lump in my throat. There is no question that moments like that are the real payoff in grant writing, and they are the reason I do it.

Most of the time, I work in isolation as I write. I communicate with people as much as I need to in order to gather the information required to put together a high quality proposal, but hours and hours are spent alone with my notes and my computer. The process is so separate from the ultimate result (changing lives) that it’s very hard to see sometimes, especially when I’m backed up against multiple deadlines, and I’m tired, and my client is being difficult (yes, it happens at times).

Because I also serve as a program evaluator, I have the incredible honor of being able to see the result of my writing efforts. I get to see programs in place that weren’t there before, services that weren’t offered before, and yes, I get to meet people whose lives are forever changed for the better because of those hours I spent in isolation doing what I do best.

So, the experience yesterday will provide some good motivating fuel for my writing for a while.  When I’m tired of writing and I want to quit or I want to take a shortcut or two instead of giving it my best effort, I’ll remind myself that I’m not writing, I’m changing the world.

——————————–

Related Post: The Real Payoff

The Grant Goddess’ Online Learning Center opens in a few days! Keep checking back here or visit GrantGoddess.com to see the link on the home page.

Want to supercharge your grant writing?  Become a member at GrantGoddess.com! You’ll have access to a huge selection of grant writing, program evaluation, non-profit development, and research tools.


Distinguish Implementation Objectives from Outcome Objectives

Writing good objectives for your grant can be a challenge.   This post is about distinguishing between implementation objectives and outcome objectives. You can also check out Five Tips for Writing Good Grant Objectives.

Implementation objectives define your targets for implementing the program (e.g., Fifty program participants will be enrolled by June 30, 2010, as measured by intake records) and outcome objectives define your ultimate achievement targets (e.g., Forty students will complete the program each year, as measured by achievement of a passing score on the XYZ exam).

Think of it this way: the achievement of an implementation objective proves that you are implementing the program. The achievement of an outcome objective proves that the program works. While implementation objectives are good, outcome objectives are the true measures of your effectiveness. Generally speaking, funding sources are most interested in your outcome objectives, and when an RFA refers to “Goals and Objectives,” it is referring to goals and outcome objectives.

Implementation objectives can also be used, but only when you clearly distinguish them from outcome objectives. Occasionally, a funding source will specifically ask you to list your implementation objectives. In that case, of course, you should follow the directions and provide the requested information, but typically implementation information belongs in the design section of the proposal.
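One way to see the distinction is to model each objective as data with a type, a target, and a measured result, then check it. This is only an illustrative sketch; the objective wording follows the examples above, and the numbers are made up.

```python
from dataclasses import dataclass

@dataclass
class Objective:
    kind: str        # "implementation" or "outcome"
    statement: str   # the objective as written in the grant
    target: int      # numeric target from the objective
    actual: int      # measured result at reporting time

    def met(self) -> bool:
        return self.actual >= self.target

objectives = [
    # Implementation objective: proves you are delivering the program.
    Objective("implementation",
              "Fifty program participants will be enrolled by June 30, 2010, "
              "as measured by intake records", 50, 53),
    # Outcome objective: proves the program works.
    Objective("outcome",
              "Forty students will complete the program each year, as measured "
              "by achievement of a passing score on the XYZ exam", 40, 36),
]

for obj in objectives:
    print(f"{obj.kind}: {'MET' if obj.met() else 'NOT MET'}")
```

With these hypothetical numbers, the program is clearly being implemented (enrollment exceeded its target) while the outcome target was missed, which is exactly the gap a funder cares about.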

———————————–
 
This is Tip 35 from 101 Tips for Aspiring Grant Writers.  Check out the book to see all 101 Tips!


Including Data Analysis in Your Grant Evaluation Section

I know, I know.  Data analysis is not everyone’s favorite topic, but it’s a topic you can’t ignore if you want to be successful with grant writing.  Not only do you need to be able to analyze your data appropriately to accurately and effectively describe your need for the project in the needs section, but you also need to describe how you will analyze data as part of your evaluation plan.

I have read many grant evaluation plans. Most do a decent job of describing what data will be collected and how/when it will be collected.  The majority also discuss how the data will be used for program improvement purposes.  But when it comes to talking about how the data will be analyzed (one of the scoring criteria in most government grants, and many private ones, too), that’s when most grant writers fall apart.

There isn’t enough time here to discuss all of the detail you need to know regarding data analysis (hmmm….do I sense a series coming on?), but let’s start with three basic concepts in analyzing the data that you should address.

Data Collection – Like I said, most people cover this pretty well in their evaluation plans.  You need to include what data you will be collecting, how you will collect it, when you will collect it, and who will collect it.  If new instruments (surveys, etc.) are going to be developed, you’ll need to describe that process, too. Think through the whole process from developing or acquiring the instruments through getting the data into your computer for analysis.  Yes, I did say, “into your computer for analysis.”  The days of tallying surveys by hand on paper are over.  Accept it.
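As a minimal sketch of that last step, here is one way survey responses might be pulled into Python for analysis. The column names and responses are hypothetical; in practice the data would come from a real export file rather than an inline string.

```python
import csv
import io

# Hypothetical survey export: one row per respondent.
raw = """respondent_id,grade,feels_safe
1,7,yes
2,8,no
3,7,yes
"""

# In practice you would use open("survey.csv") instead of io.StringIO.
reader = csv.DictReader(io.StringIO(raw))
responses = list(reader)

print(len(responses))            # number of completed surveys
print(responses[0]["feels_safe"])  # first respondent's answer
```

Once the responses are in a structure like this, every analysis that follows (counts, percentages, subgroup breakdowns) is a few lines of code instead of an afternoon of hand tallying.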

Descriptive Statistics – This is a fancy way of saying that you’ll use the data to describe something.  Descriptive statistics include frequency counts, percentages, means, etc. You’ll use descriptive statistics to describe the population you served.  You’ll use them to describe your basic outcome data (survey results, etc.).  Of course, whenever possible, you should disaggregate your descriptive statistics by important subgroups to make sure you are painting an accurate picture. Most of the time, descriptive statistics are all you need for a basic program evaluation, but not always…..
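Those descriptive measures are simple to compute with the standard library alone. The survey scores and the subgroup labels below are made up for illustration.

```python
from collections import Counter
from statistics import mean

# Hypothetical post-survey scores (1-5 scale) tagged by subgroup.
scores = [
    {"group": "grade7", "score": 4},
    {"group": "grade7", "score": 5},
    {"group": "grade8", "score": 3},
    {"group": "grade8", "score": 4},
    {"group": "grade8", "score": 2},
]

# Frequency counts and percentages for each response value.
counts = Counter(r["score"] for r in scores)
percentages = {value: 100 * n / len(scores) for value, n in counts.items()}

# Overall mean, then the same statistic disaggregated by subgroup.
overall_mean = mean(r["score"] for r in scores)
by_group = {}
for r in scores:
    by_group.setdefault(r["group"], []).append(r["score"])
group_means = {g: mean(vals) for g, vals in by_group.items()}

print(overall_mean)  # 3.6
print(group_means)   # the disaggregated means by subgroup
```

Notice how the disaggregation matters: the overall mean of 3.6 hides the fact that one subgroup is averaging well above the other.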

Inferential Statistics – O.k., here’s where we separate the men from the boys….or the women from the girls…or the real evaluators from the pretenders. Inferential statistics are used to help you make judgments about the data beyond what can be said by looking at the descriptive data alone. Inferential statistics help you determine the statistical significance of the changes you see (the likelihood that the changes occurred as a result of your treatment, rather than by chance).  They help you predict things, too. If you ever studied anything beyond descriptive statistics in school, you entered the world of inferential statistics.  It’s a scary place for some, but it’s the only place to go if you really want to show that your program made a difference, and isn’t that what evaluation is all about?
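To make that concrete, here is a sketch of one common inferential test, an independent-samples (Welch’s) t statistic, computed by hand on made-up treatment and comparison scores. In practice an evaluator would more likely use a statistics package, which would also report the p-value needed to judge significance.

```python
from math import sqrt
from statistics import mean, variance

# Hypothetical survey scores for a treatment group and a comparison group.
treatment = [4, 5, 4, 5, 3, 4]
comparison = [3, 3, 4, 2, 3, 3]

def welch_t(a, b):
    # Difference in means scaled by the combined standard error.
    # A larger |t| means the observed difference is less likely
    # to have occurred by chance alone.
    se = sqrt(variance(a) / len(a) + variance(b) / len(b))
    return (mean(a) - mean(b)) / se

t = welch_t(treatment, comparison)
print(round(t, 2))  # roughly 2.91 for these illustrative numbers
```

The descriptive statistics alone would only tell us the treatment group scored higher on average; the t statistic is what lets us argue that the difference probably isn’t random noise.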

If you need a refresher course on research methods, the Research Methods Knowledge Base is a great place to start.

The GrantGoddess.com Program Evaluation Resources page has some links to interesting articles on data collection and analysis, as well as a link to two free webinars we have posted on evaluation basics.


Assessing Results: Are You a Quant or a Qualit?

In this post, Non-profit Consultant Derek Link offers his thoughts on balanced assessment and evaluation:

In the world of social entrepreneurship the use of metrics for assessment of results has sparked an ongoing debate. The lines have been drawn between mathematically inclined folks who like to measure things using quantitative data (called Quants) and those who want to describe the social impact of programs using primarily qualitative data (called Qualits).

I would refer to myself as a hybrid, a Quali-quant. For me, the argument about which type of data is better is meaningless unless the right questions are being asked. Once you know what you want to know (in other words, once you know what will best demonstrate that your mission is accomplished), the kind of data needed to measure it reveals itself.

And the type of data is usually not one to the exclusion of the other. Typically a result is explained best by viewing it through data binoculars, not through a data telescope. I use the example of a child who comes to school on test day. The Quant will want to examine the child’s test score to see whether he has achieved at the expected level, whether he has raised his achievement from previous test administrations, how he compares to his peers, and how his aggregated test scores reflect on the teacher’s ability and the school’s curriculum and instructional program.

The Qualits, on the other hand, will want to modify the interpretation of the test score with qualitative information. Perhaps the child arrives hungry because the family got up late and he never had breakfast. Perhaps the child is sick or was up all night due to family violence. These qualitative factors impact the ability of the child to score well but are difficult or impossible to quantify.

In the end, I believe it is a disservice to the process/program/organization to have an imbalanced approach to assessment of results. Start off by asking the right questions.

—————-

For more resources to help you with the evaluation of your programs, read some of the articles on our FREE Evaluation Resources page or view some of our free  recorded webinars on program evaluation. For an even higher level of support, become a member of GrantGoddess.com.