Today, few would question the value of assessing organizational impact and its promise of accountability. What was a bold crusade more than a decade ago has now become mainstream: assessment, evaluation, monitoring, and theories of change have become household terms, commonplace—even obligatory—in grant proposals, strategic plans, annual reports, and conference agendas. Robust assessment systems have come to be expected, and can confer competitive advantage.
Yet in some recent conversations, we have begun to hear questions about the effectiveness of these practices. Some sense a risk of measuring for measurement's sake, with little thought about how the information generated will ultimately be used, and by whom. The mere act of inquiry, especially when highly visible, can come to suffice as a signal of innovation and self-scrutiny. In short, these observers see a danger of assessment becoming an end in itself, and in the process missing the point of doing it in the first place: to inform and improve what organizations do.
One factor at play, some suspect, is that for many organizations assessment is not native or organic; it is driven instead by funders, peer pressure, or even internal efforts to justify a predetermined strategy. In their view, one consequence of this pressure to assess is overcompensation, resulting in a vast surplus of information that no one is sure is actually being used. A recent examination by the World Bank, reported in the Washington Post, found that one third of Bank reports had never been downloaded, and another 40 per cent had been downloaded fewer than 100 times.
One answer may lie in another booming trend: big data. The Foundation Center, through its IssueLab, is beginning to explore ways of aggregating and analyzing data from those mountains of reports, to draw out lessons and discern broader trends. Perhaps those lessons will even arrive as tweetable insights.