Impact Measurement – Part Two of Three

 A post by James V. Toscano

There is considerable buzz in the nonprofit world about performance, outcomes, results, and other measures of impact. Foundations and corporate funders now want to know what results their grant dollars produce. Individual donors are told that they, too, should be very interested in what their gift dollars accomplish.

In Part One, we raised questions about the use of epidemiological variables and the need for standardized measures the field can agree on. In Part Three, we will explore in some depth the empirical testing needed to truly determine causality.

Here, let us examine some middle-ground measures: not exactly scientific, but better than random selection. We'll look first at process measures, then at reputational measures.

Charity Navigator as Example

In the first category are those various tests, some empirical, some textual, that measure process and performance against a predetermined standard.

One of the better-known process measurement groups, one that relied solely on financial variables until recently, is Charity Navigator, which does an excellent job on the purely financial aspects of a nonprofit. They have now added accountability and transparency measures to the mix, giving a deeper, better evaluation. They also realize they need to look at what happens when all of these financial factors enter the operational mix, so they are preparing to add a third section on results.

One cry heard for a long time in the trenches is "What about us?" from the actual recipients of whatever it is the nonprofit does. Charity Navigator intends to build feedback mechanisms into its process that nonprofits may use to obtain a fuller picture of how their work is received. This is a positive advance.

The interesting conundrum that will pop up concerns the relationship of results to selected financial, accountability, transparency, and a host of other variables. Will they track in the same direction, or will they diverge? Regression analysis, notoriously weak, will probably be the first attempt, and it will fail.

So all of these good people will probably not be able to demonstrate empirically the causal effect of one set of variables on another. Part Three may begin to offer some directions.

The Expert Panel

Everyone is familiar with the annual listings of the "best" doctors in local magazines, along with a number of other lists that seem to attract attention from the media and the public.

Now we are seeing a more rigorous methodology used to pick the "best" nonprofits in a specific category. For example, one of the better of these groups, Philanthropedia, now owned by GuideStar, assembles experts on a particular field of nonprofit work: foundation professionals, researchers, and senior nonprofit staff. These experts average 8 to 20 years of experience and are asked to fill out a very detailed questionnaire on a specific area of nonprofit activity, naming the best organizations in that area.

For an informed observer, these lists are generally accurate: they name industry leaders with good reputations that are, in fact, doing a good job. So the lists are useful for current choices, given the little empirical information we presently have.

However, this methodology has specific limitations: it usually chooses established organizations with good track records. But what about the fledgling group with a better idea? Where does it go for funding when these "lists" dominate?

We also don’t know if these top organizations are really comparable, given the potential variation in their make-up, their clients, their staffs, and many other factors.

Are they comparable on the Charity Navigator variables, such as finances, overhead, and accountability? Do these matter?

It would be interesting to determine whether those high on one list, say four Charity Navigator stars, are consistently high on a Philanthropedia top ten. Hopefully so, because we would then know something we don’t presently know.

And that’s the point. We have some trustworthy data, though not a lot. In Part One, health care and education were singled out as the fields furthest along in measuring true causality. In Part Three, we will put it all together and propose a systematic methodology used in business and based on the new book, Uncontrolled, by Jim Manzi.

Copyright 2012 The Good Counsel, division of Toscano Advisors, LLC. May be duplicated with citation.

