How foundations show donors what their money built
Donors expect to see what their money produced — in documented digital form. A practical framework for foundations and nonprofits without an internal CTO.
A mid-size European foundation funds eight programmes per year. Total annual disbursement is around €4M. The programme officers know what's working: they see the field reports, they read the partner updates, they hear the stories. The board sees a quarterly summary deck. The donors who fund the foundation itself receive an annual report.
Then a major donor asks a specific question: "We committed €450K to your women's leadership in research programme. What did the programme actually produce, and how do I see it?"
The programme officer puts together an answer over four days. It involves chasing PDF reports across email inboxes, screenshot evidence from partner websites, two phone calls to verify dates, and a spreadsheet reconstructed from memory. The answer arrives. The donor accepts it. The foundation thinks: we should have a better way to do this.
This is the moment most foundations discover that "programme accountability" is a digital infrastructure problem disguised as a reporting problem.
What donors are actually asking for in 2026
The pattern is consistent across the foundations we've worked with: donor expectations have shifted from "annual report" to "show me what my money produced, on my timeline, in a format I can verify." Three things have changed.
First, donors increasingly want programme-level visibility, not foundation-level summaries. A donor who underwrites one programme wants to see what that specific programme produced. The annual-report aggregation that used to satisfy donors now reads as deflection. They want their programme, not your average.
Second, the audit trail has become part of the deliverable. It's not enough to say "we funded X and X happened." Sophisticated donors want to see the chain: this grant funded this partner, who produced this output, which can be verified by clicking through to this artefact, which is dated and timestamped. "Trust us" is no longer the deliverable.
Third, the time horizon for visibility has compressed. Quarterly reports were standard a few years ago. Now donors want to see programme activity in something close to real time, especially for programmes that involve their public profile. "What did our partnership produce this quarter?" is a question programme officers should be able to answer in five minutes, not five days.
None of this is unreasonable. Most of the foundations facing it agree the donors are right. The challenge is that the foundation's existing infrastructure was built for the previous decade's expectations.
The four documents donors increasingly ask for
In our delivery work for foundations and grant-funded consortia, we've seen the same four documents requested repeatedly. A foundation that produces all four is meeting the new bar. A foundation that produces fewer will have more frustrating donor conversations than necessary.
| Document | What it contains | Update cadence |
|---|---|---|
| Programme dashboard | Current state of the programme: active grants, partner status, key metrics, last update. Not a chart museum — a single page that answers "what's happening right now?" | Real-time or weekly |
| Outputs ledger | A versioned, dated record of every concrete output the programme produced: publications, events, products, datasets, policy submissions. With links to the actual artefacts. | Per-output, on production |
| Decision log | What major decisions were made, why, and by whom. Especially for programmes that pivoted, paused, or reallocated funds mid-cycle. | On decision |
| Donor-facing summary | The story version of the above, written for the donor audience. Not the boilerplate annual-report copy — the specific narrative for this specific programme. | Quarterly or on donor request |
The dashboard, the ledger, and the decision log are infrastructure. The donor-facing summary is communications. Most foundations have some communications capacity but very little infrastructure capacity. The result: every donor request triggers a frantic four-day reconstruction.
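The outputs ledger described above can be sketched as an append-only record. This is a minimal illustration, not a prescribed schema: the field names (`programme`, `kind`, `artefact_url`, and so on) are assumptions you would adapt to your own data layer.

```python
from dataclasses import dataclass, asdict

@dataclass
class OutputRecord:
    """One row in the outputs ledger: a dated, linkable record of a concrete output."""
    programme: str      # which programme produced it
    kind: str           # publication, event, dataset, policy submission, ...
    title: str
    produced_on: str    # ISO date, recorded when the output ships, not at reporting time
    artefact_url: str   # the link a donor can click through to verify
    version: int = 1    # bump on revision; never overwrite an earlier entry

def append_to_ledger(ledger: list, record: OutputRecord) -> list:
    """Append-only: the ledger is extended per output, never edited in place."""
    ledger.append(asdict(record))
    return ledger

# Hypothetical example entry.
ledger: list = []
append_to_ledger(ledger, OutputRecord(
    programme="women-leadership-research",
    kind="publication",
    title="Cohort 3 policy brief",
    produced_on="2026-02-14",
    artefact_url="https://example.org/briefs/cohort-3.pdf",
))
```

The append-only, per-output discipline is what makes the ledger an audit trail rather than a reconstruction: each entry is dated at the moment of production, so the donor-facing chain ("this grant funded this output, verifiable here") is captured once and never rebuilt from memory.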
How to structure a programme dashboard
A programme dashboard is not a Tableau workbook with twelve charts. A useful programme dashboard is a single page that answers four questions in under thirty seconds.
- What is this programme? One sentence. Funder, mission, current cohort, total budget.
- What is the current state? Active partners, in-progress deliverables, budget consumed, time elapsed.
- What did the last quarter produce? A list of the concrete outputs from the previous reporting period, with links.
- What's coming up? Next decision points, next reporting deadlines, next public events.
That's it. A programme dashboard with more than this is usually compensating for the absence of clarity, not adding to it.
The best programme dashboards we've built for foundations look almost minimal at first glance. The minimalism is the point. A dashboard that takes a donor or a board member three minutes to understand has failed. A dashboard they can read in thirty seconds and trust has succeeded.
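The four questions above map directly onto a small data model. The sketch below is illustrative only — the field names and the plain-text rendering are assumptions; a real dashboard would draw these values from the data layer rather than hard-code them.

```python
from dataclasses import dataclass

@dataclass
class ProgrammeDashboard:
    # Hypothetical fields, one group per question the dashboard must answer.
    one_liner: str                  # what is this programme?
    active_partners: int            # current state
    budget_consumed_pct: int        # current state
    last_quarter_outputs: list      # what did the last quarter produce? (titles + links)
    upcoming: list                  # what's coming up?

def render(d: ProgrammeDashboard) -> str:
    """Single-page, thirty-second view: four questions, four sections, nothing else."""
    lines = [
        d.one_liner,
        f"Current state: {d.active_partners} active partners, "
        f"{d.budget_consumed_pct}% of budget consumed",
        "Last quarter produced:",
        *[f"  - {o}" for o in d.last_quarter_outputs],
        "Coming up:",
        *[f"  - {u}" for u in d.upcoming],
    ]
    return "\n".join(lines)

page = render(ProgrammeDashboard(
    one_liner="Women's leadership in research — Funder X, cohort 3, €450K total budget",
    active_partners=6,
    budget_consumed_pct=62,
    last_quarter_outputs=["Policy brief (link)", "Partner workshop (link)"],
    upcoming=["Q3 report due 15 Oct", "Board decision on cohort 4"],
))
```

If a field doesn't answer one of the four questions, it doesn't belong on the page — that constraint is what keeps the dashboard readable in thirty seconds.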
Multi-stakeholder reporting without rebuilding for each audience
Foundations report to multiple audiences: board, donors, programme staff, partners, public. The temptation is to build a different report for each. The result is that staff spend more time formatting than thinking, and the underlying data ends up inconsistent across audiences.
The pattern that works: build the underlying data layer once, then derive audience-specific views.
- Board view: aggregated, year-over-year, includes financial overlay, weights toward strategic decisions.
- Donor view: programme-specific, recent activity, weights toward verifiable outputs.
- Programme staff view: operational, includes partner notes, status flags, action items.
- Partner view: their programme only, focused on what's expected of them next.
- Public view: cleaned, anonymized where appropriate, weights toward storytelling.
If the underlying data is one consistent record, generating these views is mostly a matter of filtering and templating. If the underlying data is five spreadsheets, every view has to be hand-rolled and the inconsistencies multiply.
Most foundations we've worked with discover that the highest-leverage investment is not "build a beautiful dashboard." It's "fix the data layer so that one source of truth feeds all the views."
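The "one data layer, many views" pattern can be shown in a few lines. This is a deliberately simplified sketch: the record fields (`programme`, `type`, `public`) are assumptions standing in for whatever your real data layer captures.

```python
# One consistent record set; every audience view is a filter + projection over it.
records = [
    {"programme": "leadership", "type": "output", "public": True,  "title": "Policy brief"},
    {"programme": "leadership", "type": "note",   "public": False, "title": "Partner status call"},
    {"programme": "climate",    "type": "output", "public": True,  "title": "Dataset v2"},
]

def donor_view(records, programme):
    """Programme-specific, weighted toward verifiable outputs."""
    return [r for r in records if r["programme"] == programme and r["type"] == "output"]

def public_view(records):
    """Cleaned: only records cleared for publication."""
    return [r for r in records if r["public"]]

donor_rows = donor_view(records, "leadership")
public_rows = public_view(records)
```

Because every view is derived from the same records, the donor view and the public view cannot disagree about what was produced — which is exactly the inconsistency that five parallel spreadsheets cannot prevent.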
When to build vs commission vs use existing tools
Foundations often ask whether they should build internal tools, hire an agency to build something custom, or use an existing platform. The honest answer depends on three factors.
Build internally when you have a recurring data-management need that will outlive any specific programme, when you have at least one technical staff member who can maintain the system, and when your programmes share enough common structure that one internal tool serves multiple programmes. Building internally is rarely the right answer for foundations that fund eight different kinds of programmes.
Commission a custom build when you have a specific programme or grant cycle with concrete digital deliverables, when the timeline is finite (3-12 months), and when you want to own the resulting code. A finite, scoped custom build is the most common right answer for foundations. The output is your code, your data, your infrastructure — not a vendor's platform you rent forever.
Use an existing platform when your need maps cleanly to what platforms like Salesforce Nonprofit Cloud, Bonterra, or Submittable already do. This works well for grant management, donor CRM, and application intake. It works poorly for programme dashboards, output ledgers, and donor-facing transparency tools because those are too specific to your programme to fit a vendor's template.
The mistake we see most often: foundations buying a generic platform when they actually needed a custom build, or attempting a custom build when they actually needed to fix their data hygiene first.
The "digital deliverable" mindset for grant-funded programmes
Foundations that disburse grants increasingly add digital-deliverable expectations to the grant terms themselves. The grant funds a programme, and the programme is expected to produce specific digital outputs as part of the deliverable bundle: a public report, a dataset, a documented methodology, a programme website. This shift mirrors what's happening in research funding (Horizon Europe, ERC, MSCA all require digital deliverables now), and the pattern is moving into philanthropic funding too.
For foundations, this creates a parallel problem: not only are donors asking for visibility into what programmes produced, but the programmes themselves are now expected to ship digital outputs that the foundation will need to verify, archive, and (sometimes) host.
The foundations getting ahead of this are doing three things:
- Adding digital-deliverable specifications to grant agreements at the outset, not as an afterthought.
- Budgeting for digital-deliverable production in the grant itself (typically 5-15% of programme budget for grants where digital outputs are a stated goal).
- Building the infrastructure to receive, archive, and surface those deliverables as part of the foundation's own programme accountability layer.
The foundations that don't do this end up retrofitting at the end of each programme, when time pressure is highest and the work costs the most.
Practical first steps for a foundation that has no infrastructure today
If your foundation is in the "we should have a better way to do this" stage and you don't know where to start, the practical first move is not "build a dashboard." It's an audit.
A useful programme-accountability audit answers four questions:
- What data already exists? Across email, shared drives, partner reports, accounting systems, programme officer notes — what raw material is already being captured?
- What questions do donors and the board actually ask? Pull the last twelve months of donor and board questions. Sort by frequency. The top five are your dashboard requirements.
- What's the gap between current data and the questions? In our experience the true gap is usually 30-40% of questions — for the rest, the data exists but is dispersed and unstructured. The problem is shape and accessibility, not collection.
- What's the smallest infrastructure that closes the gap? Often it's a single data layer, a single dashboard, and a single workflow for capturing outputs as they're produced rather than reconstructed at reporting time.
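The second audit question — sort the last twelve months of donor and board questions by frequency — is the one step that is trivially mechanical. A minimal sketch, assuming the questions have been pulled into a list and normalised to a common phrasing:

```python
from collections import Counter

# Hypothetical sample: donor/board questions from the last twelve months,
# normalised so equivalent questions share one phrasing.
questions = [
    "what did our programme produce this quarter",
    "budget consumed to date",
    "what did our programme produce this quarter",
    "next public events",
    "budget consumed to date",
    "what did our programme produce this quarter",
]

# The top five by frequency are the dashboard requirements.
top_five = Counter(questions).most_common(5)
```

The point of doing this mechanically rather than from memory is that programme officers tend to remember the hardest questions, not the most frequent ones — and the dashboard should be built for the frequent ones.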
That audit gives the foundation a concrete proposal: "build this, and 80% of current donor questions become five-minute answers." The proposal is finite, scoped, and verifiable. It is not "we need a digital transformation" — that framing always overshoots.
What this kind of engagement looks like
For foundations and nonprofits that need to close the gap between current programme data and donor expectations, the engagement is typically a 4-8 week project. It produces a programme dashboard, a structured outputs ledger, a documented data-collection workflow, and a handover pack the foundation's existing staff can operate without the engagement team in the room afterwards.
The output is your infrastructure, in your repositories, on your hosting. Not a vendor's platform you rent. Not a custom build that locks you in. A finite engagement with a clear exit, leaving the foundation more accountable to donors and the board on Day 60 than it was on Day 1.
This is what Data-to-Report Sprint and Grant Digital Closeout Pack are built around. If your foundation is staring down a donor question it doesn't have a clean answer for, request a Scope Review — free, no commitment, written assessment within 2 business days.
Related notes
Multi-Site Research Data Governance: Preventing Drift
Multi-site consortia drift in three places: DMP-to-data, between sites, and dashboards-to-reports. A governance framework that survives the project.
FAIR Data Compliance Without a Data Manager
Most research teams promised FAIR-aligned data in the proposal and never built the practice. How to make FAIR compliance real without a dedicated data manager.
From Prototype to Handover: Making Research Software Maintainable
Most research software dies within 18 months of the developer leaving. A small set of engineering practices at the prototype stage prevents it.