AI Bias in Philanthropic Grantmaking

An active investigation into how the mechanics of AI systems create and amplify bias in grantmaking decisions, and an effort to develop frameworks for responsible AI adoption grounded in what I'm finding.

What I'm Investigating

I'm looking into how AI-powered tools in grantmaking and social service delivery can inadvertently perpetuate historical inequities. This work is practical and grounded: audits of real tools, conversations with foundation leaders, and analysis of algorithmic systems shaping millions of dollars in funding decisions. As a first-generation founder, I recognize these patterns and am documenting what I discover.

The challenge: When foundations adopt AI tools for efficiency, they often unknowingly amplify historical biases embedded in their data. Community voices (the expertise that should guide funding) get sidelined by automated decision-making.
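
A toy simulation makes that feedback loop concrete. Everything below is invented for illustration (the group labels, starting rates, and the 50/50 retraining blend are assumptions, not measurements); the point is the dynamic: a model retrained on its own funding decisions treats past winners as safer bets, so an initial disparity compounds.

    # Hypothetical numbers throughout; only the dynamic matters.
    # Start from historical approval rates that already favor one group.
    rates = {"established orgs": 0.30, "community-led orgs": 0.20}

    for cycle in range(1, 5):
        total = sum(rates.values())
        # The retrained model learns "fundability" from last cycle's
        # outcomes, so its scores mirror past approval shares.
        learned_share = {g: r / total for g, r in rates.items()}
        # Blend old rates with the learned shares: past winners now look
        # like safer bets, so their approval rate drifts further upward.
        rates = {g: 0.5 * rates[g] + 0.5 * learned_share[g] for g in rates}
        gap = rates["established orgs"] - rates["community-led orgs"]
        print(f"cycle {cycle}: gap = {gap:.3f}")  # widens: 0.150, 0.175, 0.188, ...

Breaking that loop is exactly what the audit and governance questions below are after.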

Core Questions I'm Exploring

  • How do AI recommendation systems in grant management encode historical bias?
  • What happens when "efficiency" metrics override community voice?
  • How can foundations audit their tools for equity impacts before deployment? (A minimal sketch follows this list.)
  • What governance structures protect participatory processes from algorithmic override?
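
To make the audit question concrete: before deployment, export the tool's recommendations with an applicant-group field and compare selection rates across groups. A minimal sketch; the group labels, counts, and export format are all hypothetical, and the 0.8 cutoff is the common "four-fifths" screening heuristic, not a verdict on its own.

    from collections import defaultdict

    def selection_rates(recommendations):
        """recommendations: iterable of (group, recommended) pairs,
        e.g. exported from the tool under review."""
        totals, selected = defaultdict(int), defaultdict(int)
        for group, recommended in recommendations:
            totals[group] += 1
            selected[group] += bool(recommended)
        return {g: selected[g] / totals[g] for g in totals}

    def disparate_impact_flags(rates, threshold=0.8):
        """Flag groups whose selection rate falls below `threshold`
        times the highest group's rate (the four-fifths heuristic)."""
        top = max(rates.values())
        return {g: round(r / top, 2) for g, r in rates.items() if r / top < threshold}

    # Hypothetical export: 100 applicants per group.
    recs = ([("urban, established", True)] * 62 + [("urban, established", False)] * 38
            + [("rural, community-led", True)] * 31 + [("rural, community-led", False)] * 69)

    rates = selection_rates(recs)
    print(rates)                          # {'urban, established': 0.62, 'rural, community-led': 0.31}
    print(disparate_impact_flags(rates))  # {'rural, community-led': 0.5}

A ratio well below the threshold doesn't prove bias by itself, but it is a cheap, repeatable signal that a tool deserves scrutiny before it shapes funding.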

Living Lab

This work is ongoing. Follow along as I share what I'm finding.

Investigation findings · Framework releases · Field insights

What We're Developing

In Progress

The Hidden Costs of Algorithmic Efficiency

Working Paper / Expected 2025

An examination of how AI-driven tools in grantmaking can inadvertently perpetuate historical biases, leading to inequitable resource distribution. Drawing on real examples from foundation tooling, the paper proposes new metrics for evaluating "efficiency" that center community outcomes over speed (a toy version of that contrast is sketched below).

AI Bias · Equitable Metrics · Philanthropy · Analysis
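
The paper itself is unpublished, so the sketch below is not its actual proposal; it only illustrates the kind of contrast at stake, with invented field names, scores, and numbers: a speed framing (dollars moved per staff-hour) versus the same ratio weighted by a community-outcome score.

    def speed_efficiency(grants):
        # The conventional framing: dollars moved per staff-hour.
        return sum(g["dollars"] for g in grants) / sum(g["staff_hours"] for g in grants)

    def outcome_weighted_efficiency(grants):
        # Weight each dollar by a community-outcome score (0 to 1), so
        # fast but poorly targeted funding stops looking efficient.
        weighted = sum(g["dollars"] * g["outcome_score"] for g in grants)
        return weighted / sum(g["staff_hours"] for g in grants)

    grants = [
        {"dollars": 500_000, "staff_hours": 40, "outcome_score": 0.4},  # fast, weak results
        {"dollars": 150_000, "staff_hours": 60, "outcome_score": 0.9},  # slower, strong results
    ]
    print(speed_efficiency(grants))             # 6500.0
    print(outcome_weighted_efficiency(grants))  # 3350.0

Under the speed framing the fast grant dominates; under the outcome framing the headline number falls, surfacing the cost the speed metric hides.
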
In Development

Foundations for Responsible AI: Equity-Centered Technology Adoption

Framework & Toolkit / Expected 2026

A practical, equity-centered framework for foundations and large nonprofits preparing to integrate or audit AI tools. This guide outlines the governance structures, community engagement protocols, equity assessment questions, and ethical guardrails necessary for responsible AI adoption. Includes auditing checklists and implementation roadmaps (one possible checklist shape is sketched below).

AI Governance · Responsible AI · Framework · Implementation Guide
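
The toolkit is still in development, so the shape below is only a guess at how an auditing checklist might be made operational; every category and question here is a placeholder, not the framework's actual content.

    # Placeholder categories and questions; the real checklist is unpublished.
    CHECKLIST = [
        ("governance", "Is a named person accountable for this tool's equity outcomes?"),
        ("governance", "Can community representatives pause deployment?"),
        ("data", "Do the training data encode funding patterns you would not want repeated?"),
        ("engagement", "Were affected communities consulted before procurement?"),
        ("guardrails", "Is there a documented human-override path for automated recommendations?"),
    ]

    def readiness(answers):
        """answers maps each question to True/False; a category is ready
        only if every one of its questions passes."""
        ready = {}
        for category, question in CHECKLIST:
            ready[category] = ready.get(category, True) and answers.get(question, False)
        return ready

    answers = {q: True for _, q in CHECKLIST}
    answers["Were affected communities consulted before procurement?"] = False
    print(readiness(answers))
    # {'governance': True, 'data': True, 'engagement': False, 'guardrails': True}
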
Research Phase

Community Voice or AI Noise? Navigating Participatory Grantmaking in the Age of AI

Case Study Analysis / Expected 2026

A critical reflection on the tension between participatory methods and automated decision-making. This article will explore case studies where community-led processes were either undermined or enhanced by technology, offering principles for maintaining authentic engagement and power-sharing in an age of AI.

Participatory Methods · Community Engagement · Tech Ethics · Case Studies

Collaboration & Partnerships

This is collaborative by design. We're building this with foundations, technologists, researchers, and communities impacted by these systems.

For Foundations

Are you interested in implementing AI or auditing your existing tools for bias? Let's talk about how your organization can lead responsibly and contribute to this work.

Connect

For Researchers

If you're working on AI ethics, participatory methods, or philanthropic innovation, we'd love to hear from you.

Let's Collaborate

For Communities

If your organization has experienced AI bias in grantmaking or funding systems, share your story. Your experience is expertise that shapes this work.

Share Your Story

Stay Updated on This Research

Follow me on LinkedIn for regular updates, working papers, and insights as the research evolves.

Follow on LinkedIn