Towards Responsible AI Adoption for Nonprofits

TL;DR - Nonprofits have to reckon with technology, and especially AI, if they want to solve their fundraising and staffing problems. Governance, not technology, is the key to unlocking AI's potential while preserving jobs and income for people and expanding the sector's ability to meet the demand for its services.
One of the things that really jumped out at me in the 2025 Salesforce Nonprofit Trends Report is that, for the first time in four years, fundraising is the paramount concern, while the next four challenges all relate to managing workload and staff. And yet, when asked about their willingness to deploy AI agents, 64% of nonprofits showed only moderate or less interest, with a full 38% of the survey essentially saying no thanks.
So nonprofits need to raise more money, they struggle to find, manage, and retain the staff they need to do that, and yet they reject AI tools that could help. I get it: I don't like, nor necessarily trust, most off-the-shelf AI either.
Technology generally, and AI specifically, are problematic for many reasons when it comes to their impact on people. If we use technology and AI to become more efficient, that could mean a woman loses her job. More often, I'd argue, it means the work the AI takes on becomes devalued and salaries for the role fall, leaving people underemployed.
Indeed, research published by the Centre for Economic Policy Research in October 2024 found that “Because AI is designed to substitute for non-routine tasks, it has the potential to exert downward pressure on the wages of highly skilled workers and the skills premium more broadly.” In non-capitalist language, that means AI makes high-value work less valuable, and it reduces income inequality not by lifting people up but by pushing them down.
For many, if not most, people in the nonprofit sector that is the end of the AI evaluation process. We are not willing to contribute to the further “enshittification” (it's a real word, I promise) of our and our coworkers' lives through reduced salaries and cut jobs, and we're right to resist and demand better.
While we may feel better about ourselves for rejecting AI on those macroeconomic grounds, and on more prosaic ones like the poor quality of off-the-shelf generative AI output, we are still left facing the problem. The sector has to raise more money to meet the increased demand for its services while struggling to keep people employed and productive in the mission. We have to find a different path: a way to mitigate the devaluation of human work and preserve jobs, not in spite of AI but with it helping in that purpose.
The good news is the path exists. Multiple researchers have shown that generative AI can augment human expertise rather than supplant it. You can find a good summary of that work in the article “How AI can become pro-worker,” also from the Centre for Economic Policy Research. But how do we travel that road?
The answer, unsurprisingly, is not more technology - it's good governance. As a sector we need to set and enforce standards and practices that safeguard people and ensure the gains in productivity and reductions in costs benefit those we serve and those who do the work. Guardrails, not roadblocks, are the key to successful deployment of any innovation or technology.
Nonprofits should focus on giving their people tools, not replacing people with tools. They should resist external-facing autonomous AI (AI agents) and instead build internal-facing AI agents that aid, but do not replace, a person. Here's an example.
A small nonprofit needs to raise $200,000 to fill a funding gap created by a shift in federal policy. They could create an external AI agent to reach out to potential donors at scale. It could research each prospect, write its own content, answer questions, and even make the ask for the gift without human interaction. (Salesforce and others offer this functionality out of the box today.)
The nonprofit could then cut a researcher role and a junior fundraiser role at once, so they only have to raise $100,000. Instead, what they should do is create an internal AI agent that serves up detailed prospect research and suggests compelling, personalized engagement strategies to a fundraiser, who can then engage with the donor more meaningfully. The researcher who struggled to compile donor information is now able to spend her time researching the outcomes of the nonprofit's work to craft more compelling stories to share with those potential donors.
The same internal AI agent is also training new fundraisers, reinforcing the organizational culture embedded in its instructions and knowledge base. In the process, AI has made two people more productive at the same cost and allowed a third to take on a new, complex role in support of the mission. They raise the $200,000 and preserve their people in the process.
There are myriad fundraising and programmatic use cases we could consider, all of which can be addressed by deploying AI in ways that reduce costs, boost a nonprofit's ability to raise money, and accelerate its mission. The key isn't using or not using AI; it's doing it correctly. Does it benefit people first and create new value, or does it extract value and result in lost jobs and lower wages?
The problems the nonprofit sector faces aren't going to go away; the sad truth is they are likely to get worse. AI is not going away either, and it is only going to become more capable.
As a sector we have to accept that reality and come together to solve our problems with AI the right way. We can't dictate how the technology is developed to the corporations and wealthy people we depend on for funding, but we can deploy it in the most beneficial ways possible, not just to reduce harm but to prevent it altogether.
Convening organizations like NetHope have made a start in pockets of the nonprofit sector, but those efforts have to spread and scale. Working together, the sector can expand on the simple governance example presented here to build a lasting framework that ensures responsible use of AI.
Ready to stop fixing and start scaling?
Let’s discuss a post-implementation health check and a roadmap for maximizing your Salesforce ROI.
The views expressed in this article are those of the author and do not necessarily reflect the official policy or position of Hikko.


