
For nonprofits rooted in transcendent values and focused on human flourishing, the way we adopt AI must look different from the way it is adopted in the for-profit world.

Dave: Open the pod bay doors, please, HAL…

HAL: I’m sorry, Dave. I’m afraid I can’t do that.

—2001: A Space Odyssey

If there is a single exchange from popular culture that captures our modern uncertainty about technology, it is the tense dialogue between Dave and HAL in 2001: A Space Odyssey. There is something deeply unsettling about hearing a human’s plea for help calmly rejected by a machine, especially one designed to serve him.

How much longer will this exchange remain science fiction? Artificial intelligence is no longer the exclusive domain of tech giants and Silicon Valley startups; it is rapidly making inroads into everyday life, reshaping how we work, how we organize our lives, how we spend our time, and how we interact with others.

This tension between service and autonomy, promise and threat, captures the questions nonprofits now face as AI tools move into everyday work. Already, many nonprofits are experimenting with generative chatbots to answer donor questions, predictive models to identify prospective volunteers, and automated grant-writing tools. Yet all this new technology raises an uncomfortable question: Will algorithms erode the personal bonds at the heart of philanthropic work?

For nonprofits rooted in transcendent values and focused on human flourishing, AI adoption cannot simply follow the for-profit playbook. Where corporations see efficiency gains, nonprofits must ask: How does AI strengthen relationships, deepen trust, and further our mission?

People and culture come first. Technology should never be an end in itself; it should always serve the organization’s mission and its people. Local nonprofits thrive on personal relationships among volunteers, donors, and the people they serve. If we start automating human contact, we risk undermining the very bonds of trust and reciprocity that make civil society unique. Can an algorithm truly pass on the “timeless ideas” that animate our work, or does it flatten them into data points?

A digital transformation is, at its core, a cultural transformation: a deeply human process in which we have the opportunity not only to redesign how we do things through digital technology but also to reshape the conversations about what we do. Our first question should always be: How does this tool expand our mission and deepen human connection?

Rethinking talent. AI has the potential to free staff from repetitive administrative tasks, such as managing databases, processing vendor payments, and scheduling meetings, allowing them to spend more time cultivating relationships, being entrepreneurial, and exercising judgment. However, if AI is adopted chiefly to cut budgets or eliminate entry-level positions, it will hollow out the apprenticeship path by which younger professionals learn an organization’s mission and culture.

Nonprofits should invest in up-skilling staff and redesigning entry-level roles so that long-serving employees can supervise AI-enabled processes while younger workers build relationships and bring fresh ideas to the mission. Rather than eliminating roles, nonprofits can reimagine them—transforming entry-level positions into apprenticeships that cultivate human judgment in an AI-enhanced world. This means budgeting leadership time and organizational resources for training on new technology and resisting the temptation to outsource institutional memory to a chatbot.

Ethics, trust, and transparency. Data-driven systems are only as good as the data that feeds them: however well an AI model is trained to predict outcomes, biased or incomplete information will inevitably lead to errors. Donors and beneficiaries deserve to know how their information is used. When we deploy AI to personalize fundraising campaigns or recommend services, we should clearly explain what data the system uses and how decisions are made. This transparency fosters trust and accountability.

Practical steps for nonprofits:

  1. Start small, experiment, and learn. Pilot AI tools on low-risk internal processes, such as summarizing meeting transcripts or drafting reports, before applying them to donor communications or program delivery.

  2. Create a cross-functional AI task force. Bring together staff, volunteers, donors, and board members to evaluate tools and define success. Ensure that people who understand the mission and community have a say in technology decisions.

  3. Invest in digital literacy. Train employees and volunteers in basic data privacy, algorithmic bias, and critical thinking so that they can use AI tools wisely.

  4. Embed ethical review. Before adopting any AI tool, ask whether it aligns with your values. Seek input from people who could be most affected, such as beneficiaries or volunteers.

  5. Establish governance and accountability. Assign responsibility for monitoring AI use, ensuring ongoing alignment with your mission and values.

In a data-driven society, philanthropic work holds an advantage that few industries possess: we rely on human connections, not tools or machines, to do our work. Used well, AI can free us to focus on building the strong, lasting human relationships that fulfill our mission and expand our impact. The nonprofit sector will thrive in an AI-powered world only if it ensures that technology remains a servant of our mission, never its master.