Charity Navigator Meets Reality

A Post from James V. Toscano

Notwithstanding its self-promotion, “Charity Navigator is America’s largest and most influential charity rater,” and self-aggrandizement, “the largest and most utilized charity rating service that exists anywhere,” the New Jersey-based Charity Navigator is realizing the incredible complexity of its task.

At first, they presented themselves as charity raters, although they based their rating stars (1-4, with 4 best) solely on financial variables. They gradually became more sophisticated on this dimension through critical feedback and through their experience examining large charities with at least $500K in public support and $1M in overall budget.

By eliminating the vast number of small charities, and by requiring at least four years of 990 reporting, CN chose to focus on the largest charities, the top 1-2%, with a goal of evaluating 10,000 of the million or so out there. How useful this is has been the subject of an interesting, sometimes heated debate.

Overhead

From the get-go, the financial standards, based on actual numbers and real data, stirred controversy. One of the most contentious is overhead, which comes in many variations and is defined many different ways for purposes of financial reporting.

As chair of the Charities Review Council a few years ago, I saw our own standard, a 70-30 ratio of program spending to overhead, softened to allow for reasonable, higher overhead percentages under certain circumstances. Notwithstanding the damage to the credibility of CRC’s telephone number, 224-7030, it did seem to ease the arbitrariness of the rating system.

And that’s the point of much of the criticism of Charity Navigator’s system, that it is unreasonably arbitrary regardless of the data used.

Accountability and Transparency

Sensitive to the limitations of financial data in evaluating the performance of charitable organizations, to the above criticism, and clearly with an ear to the foundation funding ground, Charity Navigator added a second dimension, Accountability and Transparency, in 2010.

The idea is good: how a charity reports its activity publicly.

Using a charity’s 990 and website, Charity Navigator examines whether information on governance, ethics and finances is easily available to a donor.

While Charity Navigator can’t verify the veracity of the information presented, they try to spotlight “open” organizations. At the time this second variable was introduced, approximately 5,500 of the ultimate goal of 10,000 organizations had been rated on financial information, so doubling back was a large task.

Missing still, however, is comparability using empirical, systematic measurement rather than the arbitrary, more qualitative tools used to report on accountability and transparency. Certainly a numerical score is given; however, the mathematical foundation for that score is neither sophisticated nor reliable.

(This made me wonder about how many stars Charity Navigator would get when the phrases in the first paragraph above go under the Accountability lens.)

The Reality of Results

In January, Charity Navigator announced the third leg of their evaluation stool—Results. According to its website announcing CN 3.0: “…mission-related results are the very reason that charities exist!” As well as the latest focus of major foundation interest!

It sounds great, but it’s going to take a few years to develop the system and to see how CN’s tools work. Rather than wait until 2016, when the project is predicted to be complete for all 10,000 targets, information will be added incrementally.

Because Charity Navigator is still uncertain how 3.0 applies to the universe of charities, they will not allocate any stars until satisfied that the system works.

Many charities do not yet have systems needed to measure “results,” nor are many of the grantors and donors willing to pay for such measurement. So what does Charity Navigator do? They take what they call a “development approach,” that includes “engaging, encouraging and incentivizing” results measurement. Incentivizing?

The Criteria

The first element of data used will be matching what charities say with what they actually do with their funds. A second area is the use of logic models. A third is whether they earn some sort of Good Housekeeping approval, including, I would hope, the Charities Review Council’s seal. A fourth is how feedback from direct recipients of the charity’s services is used. Last, and most important, is whether the charity publishes the evaluations done on its programs.

A very informative Results Reporting Concept Note is available for those who want to see what is proposed in detail. The problem, however, is that all of these measures add up to an arbitrary Charity Navigator standard and not a sector-agreed-upon set of standards. The use of volunteers, even well-trained ones, to do the work makes Charity Navigator’s scoring more subject to doubt.

In a previous posting, I complimented Charity Navigator on its attempts at rating, although I have also been critical of many elements of its system, as in a recent article where I shared my analysis of the rather weak methodology used when the second leg of the stool was added.

The Lack of True Comparability

The major problem with all of this is the lack of true comparability of results. Sure, Charity Navigator issues a score and compares it to those of others doing the same type of work. However, when scores are based on arbitrary, often qualitative judgments and what I consider faulty methodology, they are largely meaningless.

This task undertaken by Charity Navigator needs more widespread, industry-agreed-upon sets of measurements that will yield empirical comparability within the various Charity Navigator subdivisions, whether there are 34 or 50 or more of them.

Until we have comparability, we cannot select high performers.

Until we have comparability and the open sharing of information, we cannot have continuous quality improvement. And yes, without comparability, without information sharing, and without continuous improvement, we really can’t have scalability, if we really desire it. (See one of our many posts on Impact Measurement.)

Now is the time, acknowledging methodological, political and sociological difficulties, to establish comparable standards for our industry. We know it is done all the time in other industries and sectors, so there are models to follow. Rather than have the IRS do it for us, we should thank Charity Navigator and proceed to develop nonprofit industry-wide measurable standards, not just for the top 10,000, but for all.


