Trust Over Transactions: What Non-Profits Teach Us About AI Adoption
How to Use AI While Keeping Humanity at the Center
Most of the conversation about AI centers on maximizing revenue and cutting costs in for-profit businesses. But an entire category of organizations is being neglected in the coverage while also delivering meaningful, measurable AI results: non-profits. When non-profits look at AI, the conversation can’t just be about efficiency or cost savings. It has to be about trust, relationships, and mission. That’s why I was curious about the perspective of Scott Rosenkrans, VP of AI Innovation at DonorSearch, on bringing AI to non-profits and invited him to join me on “What’s the BUZZ?”. Here’s what we talked about…
Viewing Trust as the Currency of Non-Profits
In the for-profit world, when customers buy a product, they get something tangible in return, like a phone, a service, or a subscription. With non-profits, the “product” is trust. Donors give money believing the organization will use it responsibly to advance a mission they care about. That trust is fragile.
When AI is introduced into non-profit work, trust must be the guiding principle. For example, one national non-profit replaced its call center with a chatbot, hoping to cut costs and modernize. Within days, the bot was giving harmful advice to people with eating disorders. It may have passed generic ethical guidelines, but it failed in context, and trust was broken. Not only did that organization suffer reputationally, but others in the sector may have been tarnished by association.
That’s why for non-profits, AI innovation must pass a higher test: does it build trust or risk breaking it? Predictive models that help staff focus on the right donors can pass that test. Autonomous fundraisers that impersonate human conversations may not. Trust is the foundation of giving. If you lose it, the mission itself is at risk.
NEW ONLINE COURSE — Mitigate AI Business Risk
Business leaders are under pressure from their boards and competitors to innovate and boost outcomes using AI. But this can quickly lead to starting AI projects without clearly defined, measurable objectives or exit criteria.
Learn how to implement proven risk mitigation strategies for starting, measuring, and managing AI projects. Along the way, get tips and techniques to optimize resourcing for projects that are more likely to succeed.
Prioritizing Relationships Over Transactions
Fundraising has always been about building a meaningful connection between a donor and a cause. AI can either strengthen that connection or erode it. The danger is in allowing AI to turn fundraising into a purely transactional activity. Autonomous fundraising bots, for instance, may be able to “work around the clock” and optimize for dollars raised. But their goal orientation can also make them manipulative, reducing donors to revenue sources rather than partners in impact. The short-term gain of increased transactions comes at the long-term cost of weakened relationships.
» We need to make sure that we're always putting trust and relationships first, and we're not just going for what's a quick win. «
— Scott Rosenkrans
Instead, non-profits should use AI where it supports human work instead of replacing it. Automating reports, processing donations, and flagging potential new supporters are areas where AI can save time, allowing staff to focus on what truly matters: connecting with people. Predictive models can even help identify supporters who are likely to become more generous over time, ensuring fundraisers nurture new relationships rather than just circling back to the same donors. The shift in measurement also matters. If success is defined only as “dollars raised this year,” then transactions will always win. But if metrics include retention, acquisition, and three-year rolling averages, non-profits can incentivize sustained, relationship-driven growth.
Just Because We Can Doesn’t Mean We Should
AI tools are multiplying. Chatbots, virtual assistants, autonomous agents: the possibilities seem endless. But non-profits must constantly ask: just because we can, should we? The non-profit sector faces unique pressures. Staff are overworked and under-resourced, and many are considering leaving altogether. AI is often marketed as a silver bullet: “98% of your time back for $20 a month.” But technology layered onto broken systems won’t fix them; it will only accelerate the dysfunction.
That’s why non-profits need their own frameworks for responsible AI adoption. The guiding principle is sustainability: AI should support long-term mission outcomes, not short-term gains. Practical applications of AI in non-profits work best when they respect this principle. Predictive AI helps staff focus on the right donors. Generative AI can create drafts of communication, but humans must own the relationship. Automation can ease repetitive tasks. But relational work at the heart of the mission remains human.
Summary
Non-profits face a more complex AI challenge than most organizations. Beyond managing efficiency and budgets, they also manage trust and relationships. Trust must guide every AI use case, relationships should always take precedence over transactions, and not every shiny AI tool is worth adopting.
For leaders in the non-profit sector, the path forward is clear. Use AI to relieve staff of menial tasks, strengthen decision-making, and surface hidden opportunities. But keep the relational, trust-based work where it belongs: between people. When AI is used responsibly in this way, non-profits go beyond operational efficiency and strengthen the mission itself.
Equip your team with the knowledge and skills to leverage AI responsibly. Book a consultation or workshop to accelerate your company’s AI adoption.
Listen to this episode on the podcast: Apple Podcasts | Other platforms
Explore related articles
Become an AI Leader
Join my bi-weekly live stream and podcast for leaders and hands-on practitioners. Each episode features a different guest who shares their AI journey and actionable insights. Learn from your peers how you can lead artificial intelligence, generative AI, agentic AI, and automation in business with confidence.
Join us live
September 23 - Jon Reed (Industry Analyst & Co-Founder of diginomica) is back on the show to discuss what’s next with AI agents.
October 07 - Danielle Gifford (Managing Director of AI at PwC) will discuss how hybrid teams of agents and humans can best collaborate.
October 21 - Christian Muehlroth (CEO of ITONICS) will share his perspective on effectively driving radical innovation with AI.
November 04 - Tim Williams will join to talk about evolving Agentic AI identity, security, and trust. [More details to follow on my LinkedIn profile…]
Watch the latest episodes or listen to the podcast
Upcoming events
Join me or say hello at these sessions and appearances over the coming weeks:
September 24-26 - Attending Okta Oktane in Las Vegas, NV.
October 01 - Keynote at AI for Enterprise Architects, Newtown Square, PA.
October 14 (Private event) - Lecture for MBA students at Wharton School of Business in Philadelphia, PA.
October 15 - Fireside chat at TECH360 in Malvern, PA.
October 21 - Webinar: Data Quality—Your Hidden Advantage.
October 23 - Webinar: Smart Sourcing Strategies: How AI Finds Savings Others Miss with Konnect House.
October 30 (Private event) - Roundtable for HR Leaders with Northeastern PA Manufacturing Association in Pottsville, PA.
November 10-12 - Hands-on workshop and track chair at Generative AI Week in Austin, TX.
December 11-12 - Track chair at The AI Summit in New York City, NY.
March 09-11 - Attending Gartner Data & Analytics Summit in Orlando, FL.
April 22-23 - Keynote at More than MFG Expo in Cincinnati, OH.
Follow me on LinkedIn for daily posts about how you can lead AI in business with confidence. Activate notifications (🔔) and never miss an update.
Together, let’s turn hype into outcome. 👍🏻
—Andreas