Nick Cain and Sam Caplan

Foundations building for foundations: How the Patrick J. McGovern Foundation develops AI tech

Nick Cain lays out a path for foundations looking to support and develop technology for the nonprofit sector.

36:47 MIN

Nick Cain explores how foundations can step up to fill the gap between the technology nonprofits have and what they need.

 

Description:

This episode of Impact Audio features Nick Cain, VP of strategy and innovation at the Patrick J. McGovern Foundation. He shares how the foundation develops and supports new technology for the nonprofit sector.

He digs into:

  • Real-world examples of AI shaping social impact outcomes 

  • How the Foundation builds the technology their grantees need

  • The story behind Grant Guardian, a financial due diligence tool for foundations

Guests:


Nick Cain

Nick Cain has 15 years of experience working at the intersection of technology, philanthropy, and the nonprofit sector. As Vice President of Strategy & Innovation at the Patrick J. McGovern Foundation, Nick oversees all of the Foundation’s programmatic efforts to advance a human-centered technological future. He leads a team of grantmakers, technologists, and program leads who advance AI- and data-driven solutions to global challenges through a $60M hypothesis-driven grants portfolio, AI product development, nonprofit capacity building, and storytelling. Before he joined PJMF, Nick was a Principal and Climate Lead at Google.org, and helped build and scale a tech nonprofit that provided innovative education finance solutions for students in low- and middle-income countries.


Sam Caplan

Sam Caplan is the Vice President of Social Impact at Submittable. Inspired by the amazing work performed by grantmakers of all stripes, at Submittable, Sam strives to help them achieve their missions through better, more effective software. Sam has served as founder of New Spark Strategy, Chief Information Officer at the Walton Family Foundation, and director of technology at the Walmart Foundation. He consults, advises, and writes prolifically on social impact technology, strategy, and innovation. Sam recently published a series of whitepapers with the Technology Association of Grantmakers titled “The Strategic Role of Technology in Philanthropy.”

Episode notes:

Transcript:

This transcript was automatically generated.

Foundations and nonprofits have developed a reputation for being a bit behind the technology curve.

A lot of work over the past decade in the nonprofit sector has been to catch up to the systems that already exist in the for-profit world. I'm a big proponent of closing this gap, but one major challenge is that sometimes it feels like we're pulling technology built for corporations into a philanthropic space, and it's not always a seamless fit.

Right now, we're in the midst of a pivotal moment, thanks to artificial intelligence. It's becoming ubiquitous. But for AI to be useful for foundations and nonprofits, we've learned that we can't sit back and wait for others to create the right solutions.

We need foundations that have the capacity and know how to take on the mantle of designing and developing the AI tools that fit this work.

Luckily, some foundations have already stepped up. The Patrick J. McGovern Foundation is one funder that's putting its resources toward building the technology we need.

And they're leading the way for the whole sector.

Welcome to Impact Audio. I'm Sam Caplan, vice president of social impact at Submittable. Today, I'm joined by Nick Cain, vice president of strategy and innovation at the Patrick J. McGovern Foundation. He leads the foundation's efforts to develop solutions that empower grantmakers and grant seekers.

He lays out why this work is necessary and what makes it possible.

Nick Cain, from the Patrick J. McGovern Foundation. I am totally stoked to have you as a guest on Impact Audio. Welcome aboard, and I'm excited to have a great conversation with you.

Thanks so much, Sam. We appreciate the invitation, and, really looking forward to the chat as well.

I think what has really defined the Patrick J. McGovern Foundation, at least over the last few years, has been your involvement and engagement in artificial intelligence, both in terms of funding and solutions. From my perspective, you're really leading the philanthropic sector in terms of AI. So catch us up a little about the Patrick J. McGovern Foundation's involvement with artificial intelligence.

Like, what led to that?

It's a great question, and I appreciate the kind words. We certainly have been at it for a while, and have been studiously thinking about ways to build, the language we like to use is, a more human-centered, tech-enabled future. Right? And you asked where that comes from.

Where does our interest in that topic come from? The origin story of our work is certainly our namesake, Patrick J. McGovern, who was a technology entrepreneur and cared deeply about technology as a tool to improve and advance the human experience. He was passionate about neuroscience, technology, and technological innovation.

And when the foundation was created, you know, we set out to really think about ways that a philanthropy could ensure that technology does fulfill that promise. Right. And we continue to believe that technology, sort of unintervened or unaltered, does not necessarily by itself make that happen. Right? We as institutions and civil society need to be empowered and have a voice in that process. And we, as a philanthropy, can certainly use our resources to provide agency for the organizations that we support: to demonstrate the way these tools can be used to advance the public good, but also to help them, if they're not doing so today, learn more about the technology's use, how it may be affecting the communities in which they operate, and where their voice can really make a difference in shaping the future of these technologies.

Yeah. And let's just go a little bit deeper into artificial intelligence. What is it about AI specifically that has really shaped your theory of change? It feels like as an organization, you have really coalesced around artificial intelligence as a technology that has the potential to shape society in ways no other technology has been able to. Tell me: how does AI align with your theory of change, and why do you have this intense focus on AI?

A great question. And I think it's important to start from a fundamental values and purpose question. What we think about at our organization is how we are building a future that enhances human dignity and human agency, and a world that is more just and more equitable. In this moment, we see technology, as I mentioned, as a tool that can advance those aims.

And in this moment, the central conversation that sits at the intersection of those two things we care about is certainly AI. As you alluded to, it is a technology that is shaping our lives in countless ways. It's always important to note that this has been true for many years, not just in the couple of years since the world started paying a bit more attention to generative AI. Machine learning models have been shaping the technology products and consumer products that we interact with.

They've been shaping the processes that may determine how we get a loan or whether we receive benefits from the government, and all kinds of other lower- and higher-stakes interactions that we have with services and resources in the world. AI has been a part of that. And so in many ways, it's a gateway into a conversation for us about equity, about dignity, about closing gaps in access.

Another way to think about it: the Internet created one digital divide, and we're certainly at risk of reinforcing, if not expanding, that digital divide in the AI moment that we're in. And so all of those things come together and coalesce into a strategy that I'm happy to talk about more specifically from a grantmaking perspective, along with some of the other things that we do. But that really drives the moment, and our interest in being such an active participant in it.

It's really interesting to me. I'm glad that you mentioned the digital divide. When I look at the way that philanthropy and the nonprofit sector are approaching artificial intelligence, it feels like there has been this very extensive conversation around equity and fairness and bringing people along and ensuring that we're not leaving marginalized communities behind. I would love to hear about that, whether it's through your grantmaking or just the way that your organization is approaching artificial intelligence, responsible use, fairness, and equity.

Do you have thoughts on how we ensure that we do a better job this go-around with an emerging technology? And I realize it's not really emerging, but GenAI is still kind of emerging. How do we make sure that we're serving everybody out there and bringing along all of these communities, so that they have the opportunity to impact their own future, rather than be impacted by the way that corporate America or others that have all the funding may dictate that AI will play out in their communities?

I really like that framing. Just to pick up on what you said there at the end, it certainly resonates with us, and there's a version of that statement that we often use in conversation at PJMF as well, which is this notion of technology built by us, not just for us. I just think that's a really powerful statement. To answer your question about how we think about it, I would say it's a big question that you've posed.

Right? And I think that there are so many different interventions for each portion of society, for each player in the space, be that the government, technology companies, or philanthropy and civil society, as you've just mentioned. All have a role. Where we see a really well-aligned opportunity for philanthropy to drive impact is in a couple of places. First is building broad-based literacy about AI.

And very specifically, what that means is not necessarily turning everyone into an AI engineer, or insisting that everyone must adopt AI in every context, in the classroom, at every moment of their learning journey, etcetera. It's simply asking what the interventions are, whether that's curriculum that ends up in the classroom or vocational training programs as people leave school and enter the workforce. There are so many contexts in which people begin to prepare for how they're gonna interact with technology in so many parts of their lives, professionally and personally, and build what we call a broad-based understanding of the tools and how they work.

Can we make it so that most people understand that there's data sitting under this tool, that the data was used to train a model, and that the model sits inside a product that is then crafting and shaping an experience they're having, one that may shape their understanding of a topic or something they need to learn? Critically, an example I always like to give is someone who's training to become a nurse and may soon be asked to work in a hospital environment, where they realize they're interacting with a tool that's making an AI-based recommendation on a next step to pursue, be it something as mundane as a piece of paperwork that needs to be filled out, or potentially a course of treatment.

And do those individuals have the agency to say: what's happening here? How do I wanna interact with this situation? And how can my voice be used to make sure that this is happening in a way that is ethical and responsible? So that's one piece.

The second is making sure that the datasets used to train many of the models influencing some of the hottest LLM-based generative AI products we're interacting with are representative of communities' interests and experiences.

We do a lot of grantmaking that is about helping organizations create, share, and try to scale access to representative datasets.

We talk a lot about building new power through data. There are a lot of contexts, whether it's a climate risk prediction model or a tool that uses AI for translation, where we wanna make sure that long-tail, underrepresented languages are included in the training set to make those tools actually useful, so that a community's experience is represented in the tool. The tool then becomes something that is actually useful to them, and that creates greater parity and equity in the ways these tools may or may not be enhancing our lived experience.

So that's the second. And the third big one is simply making sure that we can consistently demonstrate the public-benefit use cases of some of this technology.

And I'm sure we may talk a little bit more about that in this conversation. But it means demonstrating that AI is a tool whose predictive power can help make sense of vast amounts of data, and in this case, I'm speaking just to machine learning. Right? The idea that a machine learning model might be able to help make a bunch of really important decisions about climate risk, about health care, about disaster preparedness, and the list goes on, in a way that humans alone are less equipped to do, and demonstrating that that can be done ethically, responsibly, and in a way that drives real value for communities. All three of those things, I think, need to happen to get the outcome that you described.

Yeah. We'll drill down into the specific grantmaking that PJMF is doing. Before we do that, I still wanna have a broader view of what's happening across philanthropy. When I look at funding AI and generating AI solutions, there's Google.org, which you mentioned. They're doing a pretty tremendous job of working with tech-focused nonprofits to develop AI-based solutions for their interventions.

There's organizations out there like Fast Forward. I know we're both friends with Shannon Farley, who's doing great work out there. There's your organization. But when I look across philanthropy in general, it feels like the sector is a little bit slow in acknowledging that this is a critical area that requires thought and funding and sort of rethinking our own theory of change.

Like, Chantal Forster, she's working with the Annenberg Foundation. I think there's a handful of people and a handful of organizations that are engaged in funding AI and building AI solutions. Do you feel like we still have a long way to go, or are you seeing more of an uptick in terms of organizations and philanthropy beginning to focus on AI?

This is a great topic, Sam, and one I'm personally very passionate and excited to talk a little bit about.

I think your observations are, well, let me just give you my sense of things. I think you're largely correct. An important thing to say at the outset is that this is for very good reason, right? My sense has been that certainly some institutions view it with sort of trepidation and uncertainty.

What is the appropriate role for philanthropy with respect to rapidly changing technology? How should it be incorporated into their work? Should they engage at all? In some cases, that comes from a quite well-founded, I don't necessarily wanna say fear, but certainly a sense of solidarity, perhaps, with communities they work with that may have been harmed in the past by technological innovation, and a sense of duty to say: we need to be thoughtful and judicious in how we think about this. And I think that's a really valid and important consideration.

However, unsurprisingly, given the work that we do and certainly my personal background, we believe that philanthropy should not stay on the sidelines. Philanthropies instead have an incredible opportunity, and in fact a responsibility, both to themselves as institutions and, through their work and the resources they can deliver to their grant partners and grantees, to fill a space that needs to be filled with voices that say: this is the vision that we have, we who are closest to community, who have been innovating in so many ways to deliver needed services to communities on the ground.

This is our vision for what these tools can and should be for us. And that's not gonna happen without a lot of engagement and, candidly, dollars, right, that need to flow to nonprofits for them to begin to engage with these tools and begin building in a meaningful way. Put most simply, we'd like to see more money flowing to builders of AI tools in the nonprofit sector.

I'm happy to talk a little bit about how I think that can happen. We've certainly engaged one on one with boards, foundation presidents, and program officers across the different levels of institutions that are interested in learning a little bit more about this, and we're very excited to continue to do that. We've also started to dip our toe into doing that at a bit greater scale. We hosted a large event in partnership with some great partners at Will.org, the GitLab Foundation, and Fast Forward, as you just mentioned, that we called Fund.ai, and a hundred-plus organizations showed up for two days to get training on how to think about diligencing and actually evaluating proposals where nonprofits are seeking resources to do this work. Simply the excitement, and the fact that we had such high attendance for that event, is in and of itself a really positive indicator of where the sector may be headed.

That's awesome. A hundred organizations, okay, well, that makes me feel a little bit better. So let's narrow it down a little bit here. I saw just today that your foundation announced that it has granted $73.5 million in 2024 to ensure that AI becomes a force for good.

And that funding went to 144 different organizations working across fields like health, climate, education, and human rights. I love all of that. Can you give us an example or two of a grant or a project that was funded that was especially interesting for you?

I can. Thanks for calling that out. We're, of course, proud as we always are at the end of the year to put a little shine on the amazing partners that we're able to support, and to collectively reflect on what they're all achieving in the world. I'll share two examples.

One is a builder. We just got talking about what it looks like for nonprofits to actually build tech. And the other is an organization doing incredible work shaping what I call the enabling environment that has to exist for AI to be adopted and to deliver social good. So the first is an organization called Climate Policy Radar.

We have a focus on AI solutions to advance the fight against climate change. What Climate Policy Radar has built is essentially a comparative analysis tool that uses machine learning to help policymakers and researchers understand and contextualize how various pieces of climate regulation or rulemaking may or may not be relevant in other markets, geographies, or contexts, to quickly search and understand how a given piece of policy or rulemaking may or may not have impacted actual outcomes in a given place.

And you can imagine: taking the hundreds and thousands of pages of documentation produced as climate policy has developed over the last couple of decades, and surfacing the relationships between the language that exists in those documents and outcomes that exist in the world, is very much a perfect machine learning task. Right? They've done the hard work to build a knowledge graph, to understand the relationship between terminology and language that exists in one policy, say, something that happened in Finland, and whether it may or may not be related to a state-level law in California, etcetera, and built this incredibly robust tool that's used at the highest level of climate policy negotiations at COP, in the UN context, and elsewhere.

Really, really excited about the work that they've been doing. So that's a builder: an organization that was founded and sort of said, we're gonna be tech native, we're gonna hire technologists to do this work. We also support UNESCO and their incredible work to drive ethical adoption of AI around the world.

They've designed something called an AI readiness assessment methodology that's been adopted by sixty countries. It's a set of questions that policymakers can engage with, essentially a research exercise that assesses their own country's strengths and potential opportunities, and their ability to use the mechanisms at their disposal, whether that's regulation, rulemaking, or the platform of the government, to shape the ethical creation, design, and deployment of AI in their countries.

The scale of this work, I think, is really what's most exciting. Right? Sixty countries and growing. And it's really, really important.

Right? When we talk about an enabling environment, we have a whole portfolio of organizations that are tackling various pieces of this. We can build the tech, but for it to scale and be useful, and for people to feel empowered to actually want to engage with it, you need the right policies, protections, and stakeholders who care. And we've been really, really thrilled about the work they've done to advance that.

I mean, clearly, there's no shortage of really interesting problems that AI and machine learning can tackle. And it's really critical for organizations like yours to provide the funding to really dynamic, tech-enabled nonprofits who are willing to do the hard work, be the boots on the ground, and try to build out these new and innovative solutions. But I think where things get really interesting for me is that the McGovern Foundation is actually building solutions as well. And you have been working on one that's called Grant Guardian. I would love to hear a little bit about the solution, but maybe as a bit of a preamble: how did you, as an organization, determine that there was a need for Grant Guardian, and what compelled you to dip your toe into actually building an AI-enabled solution?

Thanks for asking. Yeah. We are very excited about Grant Guardian, and I'll certainly speak to it. Maybe just to take a half step back. For listeners to understand, we have what we refer to as a products and services team here at the Patrick J. McGovern Foundation. In some ways, you can think of it as a mini tech company that exists inside of a philanthropy, which I think is very exciting.

We have data scientists, back-end software engineers, a front-end engineer and designer, and an engineering director, all of whom have two jobs. One is to build products, and we'll talk a bit about Grant Guardian in a moment. And they're also an incredible internal source of expertise as we evaluate grant proposals and shape a lot of the content that we deliver to nonprofits in the form of capacity building: helping them learn how to define a problem that's appropriate for a tech solution, or how to assess whether they're data-ready themselves as they begin using their own data to potentially train a model or explore another type of AI solution. So they do those two things, and they do them exceptionally well.

And our most recent product, as you alluded to, is called Grant Guardian. Grant Guardian is an AI-enabled financial due diligence tool for foundations and other grantmakers to use when assessing a nonprofit's financial health.

A nonprofit meaning, in this case, a potential grantee, a prospective grantee. And you might think that's kind of an interesting thing to land on, right, as a product for the team to build. A couple of things are really worth noting here. One, we were excited to build something from lived experience. Right? This is a rather new team, and it's actually the second product that we'll be launching.

And this is a real challenge that we've experienced: how do you scalably and replicably take a set of financial due diligence standards and apply them across, as you just mentioned, 144 grantees each year, with a staff that's doing many other things at once, and make sure that you do that in a way that builds a really solid record of that diligence? And so we started to say, hey, how might we be able to improve that process here at PJMF?

And we quickly learned that other foundations run into a similar problem. Right? There certainly wasn't an externally available standard that everyone had agreed to in terms of how to do this.

You know, a lot of team members were just spending time with financial documents and saying, okay, my job is to pick this piece of data out and that piece of data out and throw them into a formula. And once we said, oh, that's what we're doing, we realized AI could probably help us do that. Right? So we set about doing a bit of user research to understand the problem even more robustly, and realized there was really a there there. And that's what we've built in Grant Guardian. It is a tool where any program officer, operations team member, or finance team member can either take a default risk profile, the one that we've defined and use here at PJMF, or build their own.

They can design one that matches their own internal processes for how they assess financial risk.

We then feed the grantee's documents into an LLM, using some prompt engineering that we designed here at the foundation. The LLM automatically extracts the relevant financial variables that match the risk profile the organization has set as its own. And it spits out a transparent score with a written summary of the organization's financial health, based on the direction the risk profile has given the system to follow: a single, easy-to-understand score, and an easily shareable PDF risk report that can be attached to whatever file you may use for grants management, etcetera. So we're really excited about it. It's been in a beta phase for a while, we're getting some really great feedback from our initial users, and we're very excited to see where it goes in the months ahead.
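The rollup Nick describes, configurable indicators that each carry a weight and combine into one transparent score, can be sketched roughly as follows. This is a hypothetical illustration of a weighted risk profile, not Grant Guardian's actual implementation; the indicator names, normalization scheme, and weights are all invented for the example.

```python
from dataclasses import dataclass

@dataclass
class Indicator:
    """One financial indicator extracted from a grantee's documents."""
    name: str
    value: float   # normalized: 0.0 (risky) through 1.0 (healthy)
    weight: float  # set low (or 0) to underweight a low-priority indicator

def risk_score(indicators: list[Indicator]) -> float:
    """Weighted average of normalized indicators, scaled to 0-100."""
    total_weight = sum(i.weight for i in indicators)
    if total_weight == 0:
        raise ValueError("at least one indicator must carry weight")
    weighted = sum(i.value * i.weight for i in indicators)
    return round(100 * weighted / total_weight, 1)

# A hypothetical risk profile: cash runway matters most, audit findings
# are shown but deliberately underweighted, as Nick describes later.
profile = [
    Indicator("months_of_cash", value=0.8, weight=3.0),
    Indicator("revenue_concentration", value=0.4, weight=2.0),
    Indicator("audit_findings", value=1.0, weight=0.5),
]
print(risk_score(profile))  # 67.3
```

Dropping an indicator's weight keeps it visible in the report while shrinking its pull on the final number, which mirrors the "I wanna see it, but it's not material" customization discussed below in the episode.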

So you now have the attention of thousands of program officers around the world who are saying: thank goodness, finally, something to help me with financial analysis of organizations we're considering funding. It's a huge need. I come from a philanthropy background myself, and I've personally been engaged in that due diligence process. I had an opportunity to dig into the tool a little bit, and I love that you can pick and choose the financial metrics and assign various weightings to those metrics, so you can really tailor it to your own organization's way of doing financial analysis of nonprofit organizations.

I'm also kind of curious, though: when it comes to doing this financial due diligence, this is historically a very sensitive area for a lot of nonprofit organizations. Nonprofits often feel like you can't paint with too broad a brush. There are unique things about each organization, and a particular metric for one organization may look very different than the same metric for another type of organization. Is there a role for AI to help normalize or level the playing field a little bit? This is a very data-driven approach, but it would be great to be able to incorporate non-numerical input into this model as well.

That's a great question. And I should also add a very critical insight into our motivation to develop the tool: an awareness that greater standardization of this process actually benefits grant-seeking organizations as well. Some grantmakers may say, hey, here's a form we need you to fill out to put your financial data in, because we don't have the resources to do it ourselves, or, we want you to send us x, y, z document, and then the next grantmaker says, please send us x, y, z and a, b, c documents.

And so the idea is that a grantmaker using Grant Guardian could simply say: just send us what you've got, and we'll upload it on our side. At the most basic level of time saving, I think we're optimistic there's a real shared benefit there for the grant seeker as well. But your question's a good one. And another thing I'll say before answering it, I apologize, is that we've certainly heard some questions about what it means to actually define this risk.

What if we, and you alluded to this in your question, what if we disagree, or don't have an opportunity to, quote, unquote, explain ourselves or the context that surrounds whatever indicator or financial situation may exist?

And I think it's really important for us to explain here at PJMF, and the tool is built with this in mind, that it's certainly not meant to spit out a score that is a binary yes or no alone on any given agenda.

Right.

Right? It's a piece of context amid a broader analysis, which is always how we treat it at our institution.

And certainly, in my experience working with my peers in the sector, that's a shared approach. And there's the built-in customization, the ability to customize. As part of designing a risk profile, you can essentially say: I wanna know this indicator, but I'm actually gonna mark it as low priority. I wanna see it, but it's not super material to my analysis, and I'm actually gonna underweight it in the overall score.

These are the things we give the user the opportunity to define for themselves. And at scale, as the tool grows, there are all kinds of ways it could change the conversation around this part of the grantmaking and grant-seeking process, whether that's creating more public access to these common indicators, helping a nonprofit understand how their financials are perceived by a grantmaker, and so on. There are a lot of places it could go, and once we've built some traction and it's had some time in market, we're excited to see where we end up.
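The customizable weighting Nick describes can be pictured as a simple weighted average over indicators, where marking an indicator "low priority" shrinks its influence on the overall score. This is a hypothetical sketch for illustration only; the indicator names, values, and weights are invented, and Grant Guardian's actual scoring logic is not public.

```python
# Hypothetical sketch of per-indicator risk weighting, in the spirit of
# the customization Nick describes. Not Grant Guardian's real implementation.

# Each indicator has a risk value normalized to 0-1 (higher = riskier)
# and a priority weight the grantmaker sets when designing a risk profile.
indicators = {
    "months_of_cash_on_hand": {"risk": 0.7, "weight": 1.0},
    "revenue_concentration":  {"risk": 0.4, "weight": 1.0},
    "audit_findings":         {"risk": 0.2, "weight": 0.25},  # marked low priority
}

def overall_risk(indicators: dict) -> float:
    """Weighted average of indicator risks. An underweighted indicator
    still appears in the report but moves the overall score less."""
    total_weight = sum(i["weight"] for i in indicators.values())
    weighted_sum = sum(i["risk"] * i["weight"] for i in indicators.values())
    return weighted_sum / total_weight

print(round(overall_risk(indicators), 3))
```

The point of the design, as described in the conversation, is that the score is a summary input to human review, not a yes/no gate: the report still surfaces every indicator, and the weights only control how much each one moves the headline number.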

So to me, the real value of incorporating AI in this process is that it generates the report, and a human doesn't have to spend all of that time producing it. We can use technology to generate the report, and the real value is that once it's generated, I can get a human in front of it to review it, interpret it, and ask the right questions. Right? It ultimately becomes a tool that helps people do their work in a more dynamic, creative, innovative, effective way.

Right? That's how I'm thinking about AI at this stage of its development: I don't want AI to make a decision about a grant. I don't want it to review its own output and say we're going to make a grant to organization A because it has these financial metrics. The real value is being able to say, here's a useful report, and now let Nick take a look at it as a program officer, right, and make the decision ultimately.

Totally agree. Couldn't agree more. And that's a representative analysis that could be applied to so many use cases of AI. Right? This is a tool to aid human decision making in many contexts, and to focus the human's time on the work that is truly value-add, the work that brings their expertise and contextual understanding into the conversation.

And, yeah, in this case, it saves them the time of finding cash and cash equivalents on page 46 of the financial statements.

For sure. Alright. Well, I can't wait to see how Grant Guardian continues to develop and to watch its adoption across the sector. I want to go back for just a minute, though, because the thing that is most wild about all of this, Nick, is that your organization has a product and development team, which to me is absolutely unheard of for foundations. It's such an innovation in and of itself to be able to say: not only are we making very thoughtful grants to organizations that can leverage this funding to develop amazing AI solutions, but we also have all of this internal knowledge and creativity.

Right? And we ourselves can begin to build products for the nonprofit sector. It's that realization of nonprofits building their own solutions and controlling their own destiny. So what's next on the horizon for your product and development team?

Really sticking the landing on Grant Guardian is our focus right now. And I would also say the team is really excited to be giving their time to our partners in the form of the consultation work we do. I alluded to this earlier, but in addition to helping design and define workshops we offer to cohorts of nonprofits, we also do direct consultation, where an organization may, either through our grant pipeline or through other means, get in touch with us and say: here's a particular data science question we're grappling with; or we've started to implement AI in the following way and we're not getting the results we're looking for; or we think we need to hire somebody to take on this challenge.

We're not really sure; could we talk about that? Or, increasingly, as we've started to expand our offerings a bit: we're planning to design a new user interface for this merged database we need to build to enable X, Y, Z.

We'd love a bit of a design consultation; can we hop in the queue? These are all questions our team is prepared to support and answer.

And it's been a real pleasure, as always, to make those connections and to have someone say: oh my gosh, with two hours of time, we feel like we've got a step change on the path forward here. We haven't solved everything, but oh my gosh, how exciting, and this will carry us going forward. So that's where the team is really focused right now.

Yeah. And it makes perfect sense. And I love the idea of building solutions that can help individual organizations do their work more effectively, more creatively, better. Are you beginning to think about philanthropy writ large, in terms of the data?

I have so many conversations with so many brilliant and creative people, and we always come to the same conclusion: oh, if only there were some way to take all of our philanthropy data, combine it into some sort of data commons, and then point artificial intelligence and machine learning at it, we might finally be able to answer some of the biggest, most intractable questions we have about the work that we do. Like, are we really changing the system? When we combine the data from these seventy-five foundations who are all doing similar types of funding, can we actually begin to see the impact of the collective work out there?

I'm super curious: is this an aspiration for the McGovern Foundation, at any point in the future, to really get involved in this sort of big-data analysis of philanthropy?

Let me say this: I share your enthusiasm for that particular AI use case. Right? If we had that dataset, it would be a really, really exciting opportunity, kind of a perfect use case, to pull insights out using machine learning and/or generative AI. But I guess what I would say is that the data needs to come first.

And I'm not sure we have that right now.

Do we have greater transparency around where folks are funding, how those decisions are being made, how they're tied to strategy, and the specific goals and hypotheses that organizations have? I think that's one critical piece, and I don't necessarily see enough of it in the sector. All of those would be important to be able to get to the outcome you described, but I would love to be optimistic that we could get there someday.

Well, all I can say is godspeed. You're doing amazing work; you're living the dream in terms of building products focused on AI for our own sector. I'm super excited about everything happening at the Patrick J. McGovern Foundation. So, Nick Cain, thank you so much for being my guest today and having this really engaging conversation.

It was such a pleasure, Sam. Again, thank you for the invitation. We're really excited to be here. Thank you.

I'm hopeful that the Patrick J. McGovern Foundation's work will inspire more funders to prioritize tech innovation, because they're setting a terrific example of how to create tools that empower funders and ease the burden on nonprofits.

If you're looking for more info about the Foundation's work, check out the episode notes.

As we get more of this development within the philanthropic sector, there's an opportunity for the nonprofit world to set the standards for what ethical, human-centered innovation looks like. Then maybe the for-profit world will be the one trying to keep up. Thanks for tuning in to Impact Audio, produced by your friends at Submittable.

Until next time.

Additional Resources

Season 4, Episode 17 | 19:11 min
How to use partnerships + transparency to address big social problems
Guests: Fred Tan and Sam Caplan

Season 4, Episode 16 | 43:48 min
How GivingTuesday went from grassroots movement to data powerhouse
Guests: Woodrow Rosenbaum and Sam Caplan

Season 4, Episode 13 | 29:14 min
The AI imperative: How tech nonprofits are leading the way
Guests: Shannon Farley and Sam Caplan
