AI for the greater good

We're building thoughtful AI tools for social impact, so that we can build a better future, together.
Join the Conversation

What’s possible with AI right now?

Right now, AI can save you time. There is a lot of busywork on all sides of social impact programs that we'll speed up or eliminate with AI tools that:
Help applicants, reviewers, and administrators extract accurate information from PDFs.
Help employees find new nonprofits to support around causes meaningful to them.
Help applicants fill in information quickly, and help reviewers summarize applications.

The future of social impact is vitally human

The problems social impact people work to solve are human problems that require human solutions. To understand where AI tools fit within this very human work, we must first understand what the people doing this work want to achieve.
I want to be a strong advocate

Grantees want to spend more time passionately advocating for and actively addressing their cause. This starts with reducing the amount of time they spend on rote, basic questions for each and every application.

What we're thinking about:
How can we reduce grantee burden?
How can we make sure automations prioritize equity?
"People in the nonprofit space are so pressed for time. The biggest benefit of AI in organizations that really can't afford the staffing levels of a corporate [organization] is in those basic, menial tasks."
A grantee
I want to make the best decisions I can

Reviewers want to remove human error without removing human agency. This starts with smarter automated workflows and accurate document parsing.

What we're thinking about:
How can we work to eliminate bias?
How can we further centralize communication?
"I would have my AI looking for key aspects and data points that would do some filtering for me."
A grant manager
I want to be a good steward

Grant administrators want to make sure funds go to where they’re needed most. This starts with the strictest security measures and accessibility tools like translations and helpdesk documentation.

What we're thinking about:
How can we continue to ensure accessibility?
What would it mean to be both efficient and careful?
"Where are the areas that maybe aren't getting as many resources? And then potentially that could be a gap to fill [with] a new type of program."
A grant administrator
I want to do good as a part of my job

CSR program managers and their employees want to do good without logistics getting in the way. This starts with connecting the dots between every stakeholder through automations.

What we're thinking about:
How can we give employees more agency?
How can we make it simpler to participate?
"I don’t want our employee volunteers to have to think about anything besides actually volunteering."
An employee volunteer event organizer

Ethical AI starts with firm principles

Five principles direct our AI work. These principles help us make sure we’re developing tools that are as responsible as they are transformative.

Empowering

Help people make better decisions and achieve more while preserving human agency.

Accountable

Experts will continuously monitor, evaluate, and improve our AI tools.

Transparent

Make users aware of when, where, and why they’re interacting with AI.

Equitable

Identify, mitigate, and correct for bias and engage stakeholders in model development.

Private & Secure

Implement the strictest measures to ensure all data is private, secure, and never shared.

Learn More About Our AI Principles


AI tools built with you, for you

AI is powerful. And with any such technology, we need to be thoughtful about how we use and develop it.
To that end, we’re working with you to build these tools responsibly. Here are some of the insights we’re working with as a part of our research.
I need a B.S. meter
"AI might make it challenging – from a funding point of view – to understand if a robot wrote it, or the individual did. [I need] a B.S.-meter, candidly. I want to know if the applicant is speaking from the heart."
Funds need to go to the right applicant
"One impact could be that an applicant who isn't able to do the work gets funding. Those funds could've been used for another applicant or another project. To me, I look at it as the stewardship of the funds we're entrusted with."
We need space for nuance
"If people are writing things that have the essence of something but aren't using the certain words, you can lose applicants because they didn't fit inside of that box of using the right terms. And they may be describing the same thing in a more eloquent way."
Accuracy is all-important
"For hiring, a lot of times good people don't make it through applicant tracking systems because there's a weird fault in their PDF. The format of their resume – which looks really beautiful – doesn't come through properly. I would worry if AI was making funding decisions [in the same way]."
We need to keep what’s human
"I don't want AI to defuse the passion that comes from individuals applying for funding. With great power comes great responsibility, and I want to make sure it's used for good."

Partnering with the best to create the best

Submittable is proud to be a Microsoft Tech for Social Impact prioritized partner. We're also using Azure OpenAI services to create industry-leading AI tools built upon the most advanced infrastructure.

Learn More

Learn with us

We're actively writing, researching, and thinking deeply about how AI affects the social impact sector and the world at large.
A More Personalized Way to Discover Causes that Matter: Introducing Discover Causes, Powered by AI

Learn More
Introducing Submittable’s Responsible AI Principles

Meet the principles guiding Submittable’s development of AI-powered tools to accelerate social impact.

Learn More
AI for Foundations: A Primer for Pragmatists

Ground your understanding of AI with practical advice from experts in social impact and data science.

Watch Webinar
Dispatch: What You Missed at the Microsoft Global Nonprofit Leaders Summit

Go behind the scenes of the exclusive event to learn how top social impact experts think AI will redefine their work.

Listen Now
FOMO vs skepticism: AI strategy for grantmakers who feel both

Jean Westrick offers advice for grantmakers who are curious about artificial intelligence and want to embrace it responsibly.

Listen Now
Beth Kanter on AI & the One Question That Will Help You Make Sense of It

Beth Kanter, nonprofit trainer and author, and Sam Caplan explore the nuances of how nonprofits can start using AI today in a responsible way.

Listen Now
AI Responsibility Principles with Submittable

Anne and Sam join host Steve Boland to talk about five specific principles that Submittable is deploying as it makes Artificial Intelligence (AI) tools available in their platform – both for nonprofits and those organizations that support charities.

Learn More
What grantmakers and grantseekers actually think about AI

Read the article by Sam Caplan

Learn More
The AI Revolution Has Arrived

Sam Caplan's retrospective of Microsoft’s inaugural Global Nonprofit Leaders Summit, where the focus was on AI.

Learn More