Written by Kevin Robbie - Interview with Jen Riley, SmartyGrants Chief Impact Officer | 16th November 2022
We recently had the privilege of interviewing Jen Riley, Chief Impact Officer at Our Community and the person responsible for its SmartyGrants grants management platform. The focus of the interview was on the work of SmartyGrants and its new ‘Outcomes Engine’ feature, which enables grant makers and grant seekers to be more outcomes focused. Here are some of the highlights from the interview:
Can you tell us a little about SmartyGrants and how the application of the Outcomes Engine works to support grant makers and grant seekers to be more outcomes focused?
JR: “Powered by Our Community, SmartyGrants is Australia’s most used online grants administration system, supporting more than 500 funder organisations – including government, philanthropic, corporate and community grant makers – to revolutionise their approach to grant making. To date, the SmartyGrants platform has administered more than $7 billion in grants distributed to more than 10,500 programs and initiatives across Australia.
The Outcomes Engine is an add-on feature that supports funders using the SmartyGrants platform to track, measure and report on the impact of their grants, from the application stage right through to reporting.
It helps grant makers to focus on the importance of the outcomes they seek to achieve through granting. The system helps grant makers tackle social impact measurement by providing the ability to upload an outcomes framework and to select from, and adapt, pre-made questions and reporting templates. What I love about it is that it’s a simple and flexible system that can be used for small grants all the way through to million-dollar grants and, since it sits within an environment that many grant makers are already familiar with, it’s easy to start using. We encourage grant makers to start off by applying outcomes metrics to one grant program and then build up over time to apply them across whole granting portfolios, so they can see the aggregate results and answer the question: did our grants make a difference?”
You recently analysed data from several funders who have been using the Outcomes Engine to collect data from their grantees. What were some of the key findings?
JR: “The analysis focused on the ‘grant seeker outcome’ field, which is designed to collect information during the application phase about prospective grantees’ outcome goals.
Overall, the quality of the outcomes statements from grant seekers was not very good. From the nine grant makers, we reviewed 664 grant seeker outcomes across 19 funding rounds and 613 applications. Most grant seekers wrote activity statements focused on outputs, and only 3% actually wrote outcomes statements that weren’t double or triple barrelled and could be measured. Regardless of the problems people were trying to solve, most of what we saw was all about activities: what people were doing, not what change they wanted to make as a result. There is so much confusion about what an outcome is, which demonstrates there is a huge opportunity to build people’s impact literacy. However, given the community sector has been rewarded for activity-based outputs for the past 30-40 years, it’s not that surprising. This is the system responding to what it has been told to do through years and years of government and philanthropic funding that has been activity based.”
What are the biggest challenges for funders when it comes to understanding the intended outcomes of potential grantee programs?
JR: “Across the grant making community, not enough grant makers are asking the right questions and creating the right environment. There is a real opportunity for grant makers to show leadership and be bold about the intended outcomes of their granting portfolios – what sort of change do they want to see? While grantees are the experts on the ground and we should let them work out how to get to the outcomes, funders can be setting some bold intentions for real social change. Being clear about the desired outcomes of a grant is a great way of demonstrating to the wider community that they are serious about positive social outcomes, and of engaging grant seekers in a dialogue about what works. That’s what we’re doing with the SmartyGrants Outcomes Engine: we are providing opportunities for funders to set outcomes and then invite applicants to share how their work aligns to those outcomes. This is an important step in nudging the giving sector forward to be more outcomes focused.”
What are the biggest challenges for grant seekers when it comes to expressing their intended outcomes?
JR: “The number one issue for our local community sector is resourcing to identify, track, analyse and report their intended outcomes. In international development, it is common practice to be given 10-20% of a funding budget to use for evaluation. We don’t have this arrangement in domestic programs. I do think it is that simple: if local community organisations were given a line in every funding grant to independently evaluate their activities, it would get done. The challenge for grant seekers is that evaluation is expected to be done on the side, when they are already busy, stressed out and responding to communities in crisis.
We know lots of community organisations are not collecting data in rigorous ways because they don’t have the resources to do it. Often we see the same people who deliver programs put in the position of asking questions about the impact of those programs, which leads to issues with bias in the data and compromises the quality of the evidence.
There needs to be a conversation about a 10% rule for monitoring and evaluation. If we want to build the quality and strength of evidence of ‘what works’, then we need organisations to be adequately funded to undertake quality monitoring and evaluation. Once community organisations have the cash, they can invest it in the training, staff, knowledge and skills needed to properly evaluate and measure the value and impact of their work. Imagine if community organisations had a 10% budget across all of their grants – that equals a decent investment. It’s not surprising that in international development agencies, where this is common practice, we find most staff have training in monitoring and evaluation. If we make this system change in the way funding is provided, it will ultimately not only improve how we measure outcomes, it will improve outcomes for the community, because we will know what is working and what is not working, versus driving blind like we do at the moment.”
What else can we do to build a stronger alignment between funding for outcomes and designing for impact?
JR: “I was presenting yesterday to a group of grant makers on outcomes. One of the questions was: how do we get people to report honestly on the outcomes that have been achieved? My response was ‘It comes down to trust’. There is a huge power imbalance between the grant maker and the grant seeker. The only way we’re going to get honest reporting is if we build a culture where it is just as important to share what doesn’t work as what does work. The most progressive funders are running learning circles that enable honest dialogues with grant recipients based on trust, learning and improvement. If grant makers don’t create these sorts of enabling environments, we’ll just see the community sector shift to ‘outcomes-washing’ to tell funders what they want to hear. We need to build the internal capacity and capabilities of grant makers and grant seekers to work together to shift towards more outcomes focused investment.”
To find out more about SmartyGrants and the Outcomes Engine and how your organisation could access the platform visit: https://smartygrants.com.au/outcomes-engine
To book a demonstration of SmartyGrants and the Outcomes Engine module, contact Jen Riley: jennyr@ourcommunity.com.au
To get support developing your organisation’s impact literacy, or to build an outcomes framework that can support outcomes focused grant applications, contact hello@thinkimpact.com.au