CAPD Presenting at Alliance for Nonprofit Management's 2018 Conference

We are delighted to announce that we will be presenting on equity and organizational assessment in Hartford, CT, at 11 a.m. on October 12, 2018. We will be co-presenting with our colleagues Mala Nagarajan and Keith Timko.

This session will engage practitioners in a conversation about how organizational assessments may seem race-neutral on the surface, yet tend to privilege dominant cultural standards through the questions they ask, what they count as a best practice or success, and who gets to decide. We'll present innovative assessments that embed a racial equity lens and ask participants to work with assessment questions that center marginalized communities. Participants will work with a case study and sample report to explore in groups how answers to those questions help foster conversations about racial equity. We will provide frameworks, guidance, resources, and research around tools that capacity builders can use in their own work.

Click here for more information about the conference.

News and Notes from CAPD Staff

Sally Leiderman and Stephanie Leiderman recently attended a webinar on "Evaluation as a Tool Towards Equity," sponsored by ABFE and offered by Jara Dean-Coffey of jdcPartnerships, an evaluation and planning organization. ABFE is one of the grantees of the W.K. Kellogg Foundation's Strong Sector Cluster Evaluation, which CAPD recently completed. A recording of the webinar is available online; we recommend you check it out.

Sally Leiderman also recently had an opportunity to serve as one of the guest speakers for a graduate evaluation class jointly offered by the Graduate School of Education and the Wharton School at the University of Pennsylvania. The participants are "educational entrepreneurs," many of whom are developing new products and strategies to advance academic achievement and/or more effective leadership within school systems. Most were working at the intersections of social policy, technology, innovation, and capitalism in very interesting ways.

Not surprisingly, Sally's role was to offer very practical lessons and examples about constructing rigorous, ethical, and useful evaluations of the kinds of initiatives and programs in which the participants are engaged. She also talked with the class more broadly about evaluation as a field and about the participants as potential providers or consumers of evaluation. She engaged with the class on questions including: What is the role of an evaluator in negotiating expectations or indicators of success? Is there such a thing as an "objective" evaluation? How skeptical do you have to be when you review evaluation reports? Is qualitative data "science"? Those questions, of course, led to discussions of privilege and power dynamics in evaluation, multiple ways of making meaning, and ways of sharing findings. In short, it was a lively discussion with many different perspectives offered.

One of the other guest speakers was Peter Murray (http://ssir.org/articles/entry/the_promise_of_lean_experimentation). He talked with the class about how to apply "lean" design thinking to evaluation, particularly during the development phase of a new initiative or program. The essence of "lean" is rapid experimentation (sometimes called fast failure). It involves creating a "minimum viable product" that you can share with a small number of potential end-users to find out whether there is even a market for your idea before investing further. Based on end-user input, you might jettison the idea, refine it, or continue building out your design, testing with end-users at each step.

Murray mentioned that one of the strengths of the approach is that “you can immediately invalidate a lot of things.” He also mentioned that while “lean” strategies have really taken over in the for-profit arena, they are just starting to be used in the nonprofit and social sectors.

The lean approach offers some very interesting ways of helping our partners create, for example, more effective leadership development strategies, community change initiatives, and needle-moving collective impact approaches. Our role could be to help our partners articulate their theories of change as hypotheses to test, help structure or carry out the rapid experiments, and participate in making meaning of the data gathered. We can also be among those thinking about where "lean" makes sense and where it doesn't, so we are all good consumers of this interesting approach.

We’ve recently put some resources about “lean evaluation” and “equity evaluation” on www.racialequitytools.org, along with some new information about evaluation of collective impact. Our October newsletter for RET will feature resources around evaluation with an equity lens. We would love to hear more about your experiences with any of these methods and tools.