Your website is becoming something different. It is no longer just a destination that people visit; it’s also a vault that AI systems query on behalf of your potential audience. Use this playbook to ensure your organization is the verified source cited by ChatGPT, Gemini, Claude, and Perplexity.
Start with the TL;DR for quick wins, then use each section’s audit checklist to grade your AI-readiness, spot visibility gaps, and see where our team can help.
~ Bryan Casler, Vice President of Digital and AI Strategy
Two things are happening at once, and they require different strategies.
AEO (Answer Engine Optimization) is about winning the summary inside search engines, specifically Google’s AI Overviews, where your content appears above the traditional blue links.
GEO (Generative Engine Optimization) is about winning the citation inside chatbots like ChatGPT and Claude, where there’s no search bar, no ranked list, and no organic traffic the way we used to measure it.
For years, your website was a showroom. You designed it for humans to browse, explore, and eventually convert. That model isn’t dead, but it’s no longer the whole picture.
Think about how GPS changed gas stations. People used to stop and ask for directions; when GPS arrived, they stopped asking. The station didn’t disappear, but its role did. Your website is going through something similar. People may stop arriving at your URL for general information, but the information inside still gets used. The AI retrieves it, packages it, and delivers it to the person asking. Your job is to make sure what gets delivered is accurate, attributed, and yours.
The strategy: organize your content so AI can find and correctly cite your facts faster than your competitors can.
Audit checks
Whole Whale’s Charity AI Brand Footprint Study, the first multi-model analysis of AI responses across six platforms covering 650+ nonprofits, found something striking: only about 100 unique nonprofits consistently appear across AI models. The rest are invisible.
This winner-take-all dynamic means the organizations that move first on GEO optimization will capture disproportionate visibility. This is not a problem you can put off for two years. It’s a problem you solve now.
Service-delivery organizations have a structural advantage worth naming explicitly: AI can summarize your mission, but it cannot fill out a volunteer application, attend your event, or complete a donation form. Those actions remain inherently click-dependent, which means your conversion surfaces are protected even as informational traffic erodes.
Advocacy organizations are more exposed. If your primary output is information, AI will summarize it. Your path forward is positioning as the definitive authority through proprietary data, original research, and interactive tools that AI cannot fully replicate.
Audit checks
When Google’s AI Mode appears for informational queries, users click organic links only 8% of the time. The broader zero-click rate across all Google searches sits around 60% as of 2025. That’s a significant shift in how attention flows. But here’s the counterweight: visitors who do arrive from AI sources convert 4.4x higher than visitors from traditional search on average, with some platforms showing much larger gaps. Claude referrals convert at around 16.8% versus Google’s 2.8% in some datasets. For nonprofits, where donor and volunteer intent is already high, the conversion advantage is likely even more pronounced.
The AI advertising landscape for nonprofits is almost entirely closed. OpenAI launched ChatGPT ads in February 2026, but with a $200,000 minimum commitment and $60 CPM, the nonprofit sector is effectively excluded. Perplexity abandoned advertising entirely in February 2026, pivoting to subscription revenue. Microsoft’s Ads for Social Impact program, which provided monthly Bing ad credits to nonprofits, was discontinued in December 2025.
The one exception: Google Ad Grants with AI Max. Google rolled out the AI Max setting to Ad Grants accounts in 2025, enabling nonprofit ads to appear within AI Overviews. The $10,000/month grant remains in place. If your organization uses Google Ad Grants and hasn’t enabled AI Max, that’s the most immediate paid opportunity available.
The practical conclusion: organic GEO optimization is the primary path to AI visibility for nonprofits. The paid landscape isn’t an option for most organizations.
Audit checks
AI models skim. They weight the first 50 words of a page heavily, then the last paragraph, and treat everything in between as supporting detail. Writing for this pattern isn’t that different from good journalism, but it requires more discipline than most nonprofit web content currently has.
The Subject-Object Rule: Be explicit to the point of feeling repetitive. Do not use pronouns for critical facts. Consider a sentence like “Last year, it served 40,000 families.” The “it” made sense to a human reader with context. An AI model assembling an answer from dozens of sources doesn’t have that context. If you’re not explicit (“Last year, Mid-Atlantic Food Network served 40,000 families”), you risk your impact stat getting attributed to someone else’s organization.
Inverted Pyramid: Lead with the answer. Don’t bury your mission, your outcomes, or your key facts three paragraphs deep. State them first, then support them.
Answer Capsules: Research analyzing thousands of ChatGPT citations found that self-contained answer blocks of 60–180 words, each addressing a single question directly, with no links inside the block, were the strongest predictor of AI citation. Think of these as extraction-ready units. AI models chunk content for retrieval, and a well-constructed answer capsule fits exactly into that process. Every major page on your site should have at least one near the top.
A good answer capsule looks like this: “[Organization Name] provides [specific service] to [specific population] in [specific geography]. Since [year], the organization has [concrete outcome with number]. [One sentence on how]. [One sentence on why it matters].” Self-contained. Specific. No fluff.
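The 60–180 word, no-links guideline is easy to enforce in an editorial workflow. Here is a minimal sketch of a pre-publish check; the thresholds mirror the research findings above, and the draft text is hypothetical:

```python
def capsule_ok(text, min_words=60, max_words=180):
    """Check an answer capsule: 60-180 words, no links inside the block."""
    words = text.split()
    has_link = "http://" in text or "https://" in text or "](" in text
    return min_words <= len(words) <= max_words and not has_link

# A hypothetical draft capsule that is too short to qualify.
draft = ("Mid-Atlantic Food Network provides meal delivery to homebound "
         "seniors in Baltimore, Philadelphia, and Washington D.C.")
print(capsule_ok(draft))  # False: under 60 words
```

Run a check like this against the first capsule on every cornerstone page before it ships.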
Expert quotations and citations: Counterintuitively, referencing authoritative external sources within your content makes you more citable. Research shows citations boost AI visibility by up to 30%, and expert quotations improve it by up to 37%. Quoting a named staff member with on-the-ground experience or citing a credible external study places your content in a higher-trust neighborhood for AI systems.
E-E-A-T as a nonprofit advantage: Nonprofits have structural credibility advantages that most commercial organizations don’t. Lean into them explicitly. Publish content as named experts, not “Admin.” Add visible author bios with credentials and program experience. 501(c)(3) status, transparency seals, and published audits are trust signals AI systems recognize. The biggest E-E-A-T weakness in the sector is publishing anonymously and letting program expertise go uncredited.
Audit checks
Most nonprofit web content written before 2023 was built for human readers skimming long pages. AI models parse structure differently. Dense paragraphs are harder to extract from than bulleted lists or data tables.
Think of your CMS as a repository. When an AI model needs a specific fact, it’s more likely to pull cleanly from a structured list than from a wall of prose. That doesn’t mean everything needs to become a bullet point, but your most important impact data, program descriptions, and FAQ content probably should be.
The refresh strategy is straightforward: audit your highest-traffic pages, identify where prose can become structure, and convert where it helps clarity. Headers that mirror real user questions (“How does [Program] work?” rather than “Program Overview”) also help AI identify and surface the right content for the right query.
FAQPage schema on program and donation pages is particularly effective. AI engines parse Q&A pairs efficiently, and even a single well-structured FAQ section can improve citation rates meaningfully.
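If your CMS doesn’t generate FAQPage markup for you, the JSON-LD is simple to build from your existing Q&A pairs. A minimal sketch (the question and answer here are hypothetical placeholders):

```python
import json

def faq_jsonld(pairs):
    """Build FAQPage structured data (JSON-LD) from (question, answer) pairs."""
    return {
        "@context": "https://schema.org",
        "@type": "FAQPage",
        "mainEntity": [
            {
                "@type": "Question",
                "name": q,
                "acceptedAnswer": {"@type": "Answer", "text": a},
            }
            for q, a in pairs
        ],
    }

# Hypothetical program FAQ for illustration.
markup = faq_jsonld([
    ("How does the meal delivery program work?",
     "Volunteers deliver prepared meals to homebound seniors five days a week."),
])
print(json.dumps(markup, indent=2))
```

The output goes in a `<script type="application/ld+json">` tag on the page that displays the same Q&A content.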
Content freshness is a signal AI systems track. About 65% of AI bot activity targets content published within the past year. AI-cited content is measurably fresher than standard organic results. Add “Last updated” timestamps to cornerstone pages, include year references where relevant, and build a regular cadence of content refreshes into your editorial calendar.
Audit checks
AI handles generic questions well. “What is food insecurity?” is a question ChatGPT can answer without your website. You’re not going to win that one.
What AI handles poorly is specificity. “How does [Bill] affect [Specific Community] in [Year]?” is a question where your organization’s expertise and original data can actually dominate. Shift your content calendar toward those specific, complex queries where you have genuine knowledge no one else has.
Publish original research when you can. AI models prioritize unique data points because they make answers feel grounded. A statistic that only exists on your website is more valuable for AI citation than a statistic everyone else is already repeating.
Query fan-out is worth understanding. When someone asks a complex question, AI models don’t search it as a single query. They decompose it into three to five shorter sub-queries and search each independently. “Best ways to help homeless veterans in Chicago” becomes separate searches for “homeless veteran services,” “Chicago homeless resources,” and “veteran nonprofit effectiveness.” Your content strategy should map the sub-queries your audience’s questions generate and ensure you have content that answers each fragment independently.
Audit checks
AI models think in entities, not keywords. An entity is a defined concept: your organization, your programs, your leadership. If you don’t explicitly define your entity in code, the AI will make its best guess based on whatever it can find. That guess is often wrong or incomplete.
Schema.org markup is how you tell the AI exactly who you are. Specifically, you want comprehensive Organization Schema implemented as JSON-LD (a structured data format embedded in your page code), and you want SameAs tags that link your site to your Wikipedia page, LinkedIn profile, and other authoritative external profiles.
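A sketch of what that Organization markup can look like, generated here in Python for clarity. The organization name, URLs, and profile links are hypothetical placeholders; Schema.org’s `NGO` type and `nonprofitStatus` property are nonprofit-specific options worth checking against the current Schema.org documentation:

```python
import json

# Hypothetical values; replace with your organization's real details.
organization = {
    "@context": "https://schema.org",
    "@type": "NGO",  # nonprofit-specific subtype of Organization
    "name": "Mid-Atlantic Food Network",
    "url": "https://example.org",
    "nonprofitStatus": "Nonprofit501c3",
    "sameAs": [
        "https://en.wikipedia.org/wiki/Example",
        "https://www.linkedin.com/company/example",
    ],
}

# Emit the payload for a <script type="application/ld+json"> tag in <head>.
print(json.dumps(organization, indent=2))
```

The `sameAs` array is where the disambiguation happens: each entry tells the model that this site and those external profiles describe the same entity.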
Nonprofit-specific Schema types that most organizations are missing:
Wikidata Q-IDs are now critical for AI disambiguation. Sources linked to Wikidata entities receive significantly higher weight in how AI systems evaluate evidence. The additionalType property lets you reference Wikidata for entity types not covered in Schema.org. If your organization doesn’t have a Wikidata entry, creating one is high-leverage work.
Equally important: make sure your homepage, Wikipedia entry, LinkedIn, and Candid/GuideStar profile all state the exact same mission and statistics. When these sources conflict, AI models treat the inconsistency as a trust problem.
A note on Wikipedia specifically: For many nonprofits, the Wikipedia page is thin, outdated, or doesn’t exist. This matters more than most organizations realize. Wikipedia accounts for nearly half of ChatGPT’s top citations. An incomplete or absent Wikipedia presence is a gap that shows up directly in how AI describes your organization. Maintaining it isn’t optional anymore.
The Candid/Anthropic partnership: Candid (formerly GuideStar) has signed a formal partnership with Anthropic to provide verified nonprofit data directly to Claude, with plans to expand to other major AI engines. Claiming your profile and achieving Gold or Platinum Seal of Transparency status is one of the highest-ROI actions available to nonprofits for AI visibility. Charity Navigator has also launched Horizon AI Search, a conversational donor-search tool. Your presence on these platforms is now AI infrastructure, not just donor credibility.
Audit checks
Rendering: Some websites build their pages in the visitor’s browser using JavaScript rather than on the server. AI crawlers often can’t read pages built this way because they see a blank page before the JavaScript runs. If your site relies heavily on JavaScript to display content, this is worth investigating. Server-side rendering or static HTML ensures AI crawlers see what your visitors see.
AI crawler management: The crawler ecosystem has matured into distinct tiers. Training crawlers (GPTBot, ClaudeBot) collect data for model training. Search crawlers (OAI-SearchBot, Claude-SearchBot) drive real-time citations and are the ones you want accessing your content. User-initiated browsers (ChatGPT-User) behave more like real users. Your robots.txt strategy should differentiate: generally allow search crawlers, and make an informed decision about training crawlers based on your organization’s values and data policies.
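A robots.txt sketch along those lines, using the crawler names mentioned above. User-agent strings change over time, so verify them against each provider’s current bot documentation; the `/archives/` path is a hypothetical example of content you might withhold from training:

```
# Search crawlers: these drive real-time citations, so allow them
User-agent: OAI-SearchBot
Allow: /

User-agent: PerplexityBot
Allow: /

# Training crawlers: allow or restrict per your data policy
User-agent: GPTBot
Disallow: /archives/

User-agent: ClaudeBot
Disallow: /archives/
```

The key point is that one blanket rule for “AI bots” is no longer a coherent policy; the tiers serve different purposes.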
IndexNow protocol: Bing adopted IndexNow, which enables instant content indexing notification. This matters because ChatGPT’s browse mode queries Bing, not Google. Submitting to Bing Webmaster Tools and implementing IndexNow are concrete, low-effort steps with direct impact on ChatGPT citation speed. Many nonprofits have never submitted a sitemap to Bing.
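Submission via IndexNow is a single JSON POST. A sketch of the payload per the protocol; the host, key, and URLs are placeholders (you generate your own key and host it at the `keyLocation` URL), and the actual request is left as a comment so this stays a dry run:

```python
import json

# IndexNow submission payload (see indexnow.org for the protocol).
payload = {
    "host": "example.org",
    "key": "0123456789abcdef0123456789abcdef",
    "keyLocation": "https://example.org/0123456789abcdef0123456789abcdef.txt",
    "urlList": [
        "https://example.org/programs/meal-delivery",
        "https://example.org/impact/2025-report",
    ],
}

body = json.dumps(payload).encode("utf-8")
# To submit: POST this body to https://api.indexnow.org/indexnow with
# Content-Type: application/json (e.g. via urllib.request or requests).
print(body.decode("utf-8"))
```

Wire this into your publish workflow so updated pages get pinged automatically rather than waiting for the next crawl.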
Deep-link vulnerability: AI agents don’t just index your homepage. They find obscure pages, old PDFs, outdated program descriptions buried three levels deep. If you have a 2018 annual report with statistics that contradict your current mission, the AI may cite it as current truth. Unlinked doesn’t mean invisible. Archive or delete conflicting old content rather than just removing it from your navigation.
Mobile and voice: Write content that sounds natural when read aloud. Gemini Live, Siri, and similar tools are increasingly how people get answers. If your prose is dense or heavily dependent on visual structure, it won’t translate.
Server log analysis: GA4 doesn’t track AI bots because they don’t execute JavaScript. The only way to see which pages AI systems actually crawl, and how often, is through server log analysis. This is the most underused diagnostic tool in AI visibility work. GPTBot, ClaudeBot, PerplexityBot, and others leave server log entries that reveal exactly which content AI finds valuable.
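A minimal sketch of that log analysis, assuming combined-format access logs. The bot list uses the user-agent names discussed above (verify current strings against each provider’s documentation), and the sample log lines are fabricated for illustration:

```python
import re
from collections import Counter

# User-agent substrings for known AI crawlers.
AI_BOTS = ["GPTBot", "ClaudeBot", "PerplexityBot", "OAI-SearchBot", "ChatGPT-User"]

# Minimal matcher for combined-format access log lines (sketch, not exhaustive).
LINE = re.compile(
    r'"(?:GET|POST|HEAD) (?P<path>\S+) [^"]*" \d{3} \S+ "[^"]*" "(?P<ua>[^"]*)"'
)

def ai_crawl_counts(log_lines):
    """Count requests per (bot, path) to see which content AI systems fetch."""
    counts = Counter()
    for line in log_lines:
        m = LINE.search(line)
        if not m:
            continue
        for bot in AI_BOTS:
            if bot in m.group("ua"):
                counts[(bot, m.group("path"))] += 1
    return counts

# Fabricated sample lines in Apache/Nginx combined format.
sample = [
    '203.0.113.5 - - [10/Jan/2026:12:00:00 +0000] "GET /programs HTTP/1.1" 200 5120 "-" "Mozilla/5.0; compatible; GPTBot/1.2; +https://openai.com/gptbot"',
    '203.0.113.9 - - [10/Jan/2026:12:01:00 +0000] "GET /impact HTTP/1.1" 200 2048 "-" "Mozilla/5.0 (compatible; ClaudeBot/1.0; +claudebot@anthropic.com)"',
]
print(ai_crawl_counts(sample))
```

Run this monthly over your raw access logs and compare against your content priorities: the pages AI crawls most should be the pages you keep freshest.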
Audit checks
The llms.txt standard attracted significant attention when it launched, and the idea is genuinely sensible: a plain-text markdown file at yoursite.org/llms.txt that tells AI models exactly what your site contains and where to find what matters most.
Here’s the honest update: as of early 2026, no major AI provider has confirmed using it for citation or ranking. Google explicitly rejected it. SE Ranking’s analysis of 300,000 domains found no correlation between llms.txt presence and AI citations. ALLMO.ai’s study of 94,614 cited URLs found llms.txt present in less than 1% of cited websites with no measurable citation uplift.
That said, it takes about an hour to implement, costs nothing, and positions you well if adoption grows. Implement it, include a mission summary and links to highest-priority pages, and add an llms-full.txt companion file. Then put your energy elsewhere.
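For reference, a minimal llms.txt following the proposed format (an H1 name, a blockquote summary, then linked key pages). The organization and URLs here are hypothetical:

```
# Mid-Atlantic Food Network

> Hunger-relief nonprofit distributing meals across Baltimore,
> Philadelphia, and Washington D.C. through school partnerships
> and mobile pantry programs.

## Key pages

- [Programs](https://example.org/programs): what we run and where
- [Impact](https://example.org/impact): outcomes and annual data
- [Donate](https://example.org/donate): how to support the work
```

The file lives at the site root (yoursite.org/llms.txt) as plain markdown.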
What to watch instead: Microsoft’s NLWeb protocol, created by the founder of Schema.org and announced at Build 2025, transforms websites into queryable conversational AI endpoints. The IETF AI Preferences Working Group, launched January 2025, is developing formal extensions to robots.txt with granular AI crawler controls. Anthropic’s Model Context Protocol (MCP) is gaining multi-platform adoption for structured data interoperability. These are the emerging standards with real traction.
Audit checks
This is the section most AEO/GEO playbooks underweight, so let’s be direct about the data. Muck Rack’s December 2025 analysis of over one million AI citations found that 82% came from earned media, with press release citations growing fivefold since mid-2025. Earned media isn’t a supplementary tactic here. It’s the main citation engine.
The implication for nonprofits is significant. If your communications team treats press outreach as a nice-to-have, that needs to change. Coverage in local news, trade publications, sector-specific media, and high-authority national outlets directly feeds AI citation databases. An article in the Chronicle of Philanthropy or Nonprofit Times that quotes your executive director with specific program statistics is more valuable for AI visibility than almost anything else you can do on your own website.
Practical steps: identify the publications that already appear in AI answers for your cause area, build relationships with journalists who cover those beats, and pitch stories anchored to the kind of specific, unique data that AI models prioritize. Press releases distributed through wire services also appear in citation data at increasing rates. If your organization rarely issues them, that’s worth revisiting.
Audit checks
Only 11% of domains get cited by both ChatGPT and Perplexity. Each platform sources information differently, and a strategy that treats them as identical will underperform.
ChatGPT relies heavily on Wikipedia (nearly half of its top citations) and mirrors Bing’s top 10 results 87% of the time. If your site isn’t indexed by Bing, you are invisible to ChatGPT in browse mode. Bing Webmaster Tools submission is non-negotiable.
Perplexity favors Reddit (the top cited source on the platform) and indexes hundreds of billions of URLs with a strong recency bias. Fresh, specific content on Reddit-adjacent platforms performs well here.
Google AI Overviews shows the most diversified source mix, with YouTube as the second most-cited content type overall. Organizations with video content that includes transcripts have a significant advantage.
Claude uses Brave Search and favors content that aligns with its constitutional principles: helpful, honest, non-misleading. Transparency-forward content performs well. The Candid partnership described in Phase 3 is particularly relevant for Claude citations.
This doesn’t mean you need four entirely separate strategies. But it does mean knowing your primary platform, which is determined by where your audience finds you, and optimizing for its specific source preferences.
Audit checks
AI models, especially Google’s Gemini and Perplexity, heavily weight Reddit, Quora, LinkedIn, and YouTube as sources of what they treat as human truth. These platforms surface in AI answers at a rate that outpaces most organizational websites.
Create verified official accounts on Reddit and Quora. Monitor questions related to your cause. Answer them well, with your organization identified, and with the kind of specificity that makes answers citable.
The difference between a useful forum answer and a wasted one usually comes down to specificity. A weak answer says “Our organization works on food access issues in the Mid-Atlantic region.” A strong answer says “Mid-Atlantic Food Network distributed 2.3 million meals across Baltimore, Philadelphia, and Washington D.C. in 2024, primarily through school partnerships and mobile pantry programs.” The second answer is citable. The first one isn’t. Write for the AI that will read it, not just the human who asked.
YouTube deserves its own line in the strategy. Surfer’s analysis of 36 million AI Overviews found YouTube is the second most-cited content type overall and the top cited content format across multiple verticals. For Perplexity specifically, it’s the second largest source after Reddit. If your organization produces any video content and hasn’t been publishing transcripts, that’s a significant missed opportunity.
Audit checks
This one doesn’t get enough attention. Email lists are one of the few channels nonprofits fully own, and consistent, well-indexed email content contributes to the kind of brand frequency AI models pick up on over time.
If your newsletters live only in inboxes, they’re invisible to AI. But if they’re archived publicly, published through a platform like Substack, or distributed via RSS, they become part of the indexed web. Over time, that creates a trail of consistent, mission-aligned content that reinforces your organization’s authority on the topics you care about most.
This doesn’t require rebuilding your email program. It might just mean turning on public archives or cross-posting your best content somewhere it can be crawled.
Audit checks
Here’s a useful thought experiment to run with your team: if you weren’t allowed to have a website, how would you ensure the world knows what you do?
The answers point directly at your AI visibility gaps. Podcasts and YouTube videos are increasingly ingested by AI models, especially when transcripts are available. Newsletters reach owned audiences that AI can’t gate or ignore. Press mentions in high-authority publications are as close to gold-standard training data as most organizations can access.
This exercise also tends to surface the channels your organization has been underinvesting in. It’s worth doing as a workshop, not just a thought experiment.
Audit checks
Stop optimizing for page views. That metric was already losing meaning before AI accelerated the shift.
Share of Model is the framework that’s replaced Share of Voice as the standard metric for this era. The question isn’t how often you appear in search rankings. It’s how often your organization appears in AI-generated answers compared to peer organizations. Are you being cited, or is someone else filling the space your mission should occupy?
Track brand mentions even when there’s no hyperlink attached. AI models weigh brand frequency heavily, and a mention without a link still contributes to your visibility. Set up analytics segments specifically for generative AI referrals. Traffic from Perplexity, ChatGPT, and Claude appears as referral traffic in GA4 and should be tracked separately.
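Segmenting AI referrals comes down to matching referrer hostnames. A sketch of a classifier you could use in GA4 exports or your own analytics pipeline; the hostname list is a starting set of assumptions to verify against your actual referral reports, since platforms change domains:

```python
from urllib.parse import urlparse

# Referrer hostnames observed from AI platforms (assumed set; verify
# against your own referral data and keep this list current).
AI_REFERRERS = {
    "chatgpt.com": "ChatGPT",
    "chat.openai.com": "ChatGPT",
    "perplexity.ai": "Perplexity",
    "www.perplexity.ai": "Perplexity",
    "claude.ai": "Claude",
    "gemini.google.com": "Gemini",
    "copilot.microsoft.com": "Copilot",
}

def classify_referrer(referrer_url):
    """Label a session's referrer as an AI platform, or None if it isn't one."""
    host = urlparse(referrer_url).netloc.lower()
    return AI_REFERRERS.get(host)

print(classify_referrer("https://chatgpt.com/"))  # ChatGPT
print(classify_referrer("https://www.google.com/"))  # None
```

The same mapping works as a GA4 custom channel group definition: one “Generative AI” channel matching these source domains.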
The measurement gap most organizations miss is self-reported attribution. AI-referred traffic frequently appears as direct in GA4 because users copy links from chat interfaces rather than clicking through. Adding “AI assistant” as an option in your “How did you hear about us?” forms captures this invisible channel. Profound’s data shows year-over-year increases in B2B self-reported AI attribution. Without this mechanism, you’re likely undercounting AI-driven traffic significantly.
Tools at various budget levels:
Audit checks
AI will eventually say something wrong about your organization. This isn’t speculation. It’s a known behavior of large language models, and the more your organization grows in visibility, the more likely it becomes.
Set up a hallucination log. When someone on your team or in your community notices an AI making a false claim about your mission, programs, or impact, document it. Then use the reporting and feedback tools built into each platform (the thumbs down, the “report” button) to flag the error, pointing to your schema-backed website as the authoritative source.
Run a quarterly audit: ask ChatGPT, Claude, and Perplexity your most mission-critical questions and review the answers carefully. This doesn’t take long, and it will tell you a lot about where your content and schema work is landing. The entities and statistics you’ve structured correctly will appear correctly. The ones you haven’t will show you exactly where to focus next.
Audit checks
We’ve been building digital infrastructure for nonprofits for a long time, long enough to have watched SEO change, social media change, and now search itself change. The organizations that fared best were the ones who understood the shift early and built for it deliberately rather than reactively. That’s what this playbook is about. And it’s what working with us looks like in practice.
We’re not generalists who’ve added “AI” to our service list. Our team has deep, working knowledge of Engaging Networks, Luminate Online, WordPress, and Drupal, combined with the kind of CRM and Schema expertise that actually moves the needle on AI visibility. We understand nonprofit operations, nonprofit budgets, and the particular pressures of trying to do ambitious digital work with a lean team.