
What Your Horizon Europe Project Needs Before Digital Closeout

Pre-closeout checklist for grant-funded research teams: what evaluators review, what slips in the last quarter, and how to recover when you're six weeks from deadline.

Published · 18 November 2025 · 6 min read

It's six weeks before your closeout deadline. The data analysis isn't finished. The digital deliverable promised in the work plan exists in fragments across three PhD students' laptops. The dashboard mockup is still in Figma. The evaluator has already asked when the public-facing artefact will be ready. Your team is still focused on the science — which is exactly where they should be — but the closeout package is not going to assemble itself.

This is the most common pattern we see in grant closeout work. The science was strong; the digital outputs were under-resourced from kickoff. By the time anyone is paying attention, there's no engineering capacity, no time to hire, and no margin to fail.

This post is a pre-closeout checklist for Horizon Europe and equivalent grant-funded research projects. It covers what evaluators actually review, the three deliverables that slip most often, and what to do if you're already inside the danger zone.

What digital closeout actually means

Digital closeout is the bundle of artefacts the project committed to produce that are not the published research itself: the dashboards, datasets, repositories, documentation, demo platforms, and software outputs listed in your Annex 1 work packages. Funders like the European Commission, ERC, MSCA, and national agencies expect these to be finished, documented, and handed over by the formal end of the project — or in some cases, before the final review meeting.

The mistake teams make is treating digital closeout as administrative paperwork. It isn't. For Horizon Europe in particular, the public-facing digital outputs are part of how the project is evaluated. A consortium that delivered great research but a half-finished platform tends to score lower on dissemination, exploitation, and impact than one with mediocre science and a polished, documented, runnable artefact.

The asymmetry matters: research quality is judged by your peers; digital deliverables are judged by your evaluator. They are not the same audience.

The three deliverables that slip

Having delivered closeout work for several EU-funded consortia, we see the same three things slip in almost every project.

1. The data-to-report pipeline

Every grant-funded project produces data. Few of them produce the analytical outputs evaluators expect. The gap is real: cleaning, structuring, and analysing a multi-site dataset, then producing the figures and tables the deliverable demands, is several weeks of focused engineering and statistical work. PhD students do it under duress in the last fortnight; the result is rushed and undocumented.

What evaluators want: cleaned data with documented schemas, reproducible analysis pipelines, and the figures embedded in the report exported from those pipelines — not screenshotted from Excel. Research data reporting that survives the deadline is structured and re-runnable.
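
What that looks like in practice can be as small as a single script. A minimal sketch in Python, with hypothetical paths and column names, assuming pandas and matplotlib are available:

```python
"""run_pipeline.py -- one command: cleaning -> analysis -> figures.
All paths and column names below are illustrative placeholders."""
from pathlib import Path

import matplotlib.pyplot as plt
import pandas as pd

RAW = Path("data/raw/sites_combined.csv")   # hypothetical multi-site export
CLEAN = Path("data/clean/dataset_v1.csv")   # the single canonical version
FIGURES = Path("report/figures")
TABLES = Path("report/tables")

def clean(raw_path: Path, clean_path: Path) -> pd.DataFrame:
    """Apply the documented cleaning rules and write the canonical dataset."""
    df = pd.read_csv(raw_path)
    df = df.dropna(subset=["site_id", "outcome"])  # rules live in code, not in Excel
    clean_path.parent.mkdir(parents=True, exist_ok=True)
    df.to_csv(clean_path, index=False)
    return df

def analyse(df: pd.DataFrame) -> pd.DataFrame:
    """Produce the per-site summary table the deliverable cites."""
    return df.groupby("site_id")["outcome"].agg(["mean", "std", "count"])

def export_figures(summary: pd.DataFrame) -> None:
    """Every figure in the report is exported here -- no screenshots."""
    FIGURES.mkdir(parents=True, exist_ok=True)
    ax = summary["mean"].plot(kind="bar")
    ax.set_ylabel("Mean outcome")
    ax.figure.savefig(FIGURES / "fig1_outcome_by_site.png", dpi=300)
    plt.close(ax.figure)

if __name__ == "__main__":
    summary = analyse(clean(RAW, CLEAN))
    TABLES.mkdir(parents=True, exist_ok=True)
    summary.to_csv(TABLES / "table1_site_summary.csv")
    export_figures(summary)
```

The structure matters less than the property it buys: `python run_pipeline.py` on a fresh checkout regenerates every figure and table the report cites.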

2. The public-facing digital artefact

The "platform we will build" line item in the work plan. By month 30 of a 36-month project, this often exists as: a Figma mockup, a Streamlit prototype on someone's localhost, a Shiny app the postdoc deployed to a free-tier server that's now down, a barely-styled Next.js scaffold from a hackathon. None of these are deliverables.

What evaluators want: a runnable, accessible, documented version of what the work plan promised. A URL that loads, a README that explains what's there, and credentials if it's auth-gated. If your platform isn't reachable from a fresh laptop with the documentation you provided, it isn't delivered.
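
The "fresh laptop" standard is cheap to enforce with an external smoke test run from outside your institution's network. A minimal sketch in Python, assuming the requests library and a hypothetical deliverable URL:

```python
"""smoke_test.py -- verify the public artefact is reachable as delivered.
Run it from a machine outside the institutional VPN."""
import sys

import requests

URL = "https://platform.example-project.eu"  # hypothetical deliverable URL

def main() -> int:
    try:
        resp = requests.get(URL, timeout=10)
    except requests.RequestException as exc:
        print(f"FAIL: {URL} not reachable ({exc})")
        return 1
    if resp.status_code != 200:
        print(f"FAIL: {URL} returned HTTP {resp.status_code}")
        return 1
    print(f"OK: {URL} loads (HTTP 200, {len(resp.content)} bytes)")
    return 0

if __name__ == "__main__":
    sys.exit(main())
```

Run it on a schedule through the review period and the "server that's now down" failure mode announces itself before the evaluator finds it.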

3. The handover documentation

The thing nobody enjoys writing and everyone needs at year three. Without it, the next consortium picking up the work cannot continue. Without it, your institution cannot demonstrate that the deliverable will be maintained. Without it, the FAIR-data alignment commitment you made at proposal time is provably untrue.

What evaluators want: data dictionaries, FAIR-aligned metadata, deployment guides, runbooks, repository READMEs, and a clear statement of what is maintained, by whom, for how long.
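
None of that requires special tooling. A data dictionary, for instance, can be generated straight from the canonical dataset. A minimal sketch in Python, with hypothetical column names and hand-written descriptions:

```python
"""make_data_dictionary.py -- generate a machine-readable data dictionary
from the canonical dataset. Descriptions are hand-written placeholders."""
import json
from pathlib import Path

import pandas as pd

CLEAN = Path("data/clean/dataset_v1.csv")  # hypothetical canonical dataset

# Written by the people who know the data; the script refuses to ship gaps.
DESCRIPTIONS = {
    "site_id": "Recruiting site identifier (see consortium site list)",
    "outcome": "Primary outcome measure, dimensionless score 0-100",
}

def main() -> None:
    df = pd.read_csv(CLEAN)
    entries = [
        {
            "name": col,
            "dtype": str(df[col].dtype),
            "n_missing": int(df[col].isna().sum()),
            "description": DESCRIPTIONS.get(col, "TODO"),
        }
        for col in df.columns
    ]
    Path("docs").mkdir(exist_ok=True)
    Path("docs/data_dictionary.json").write_text(json.dumps(entries, indent=2))
    undocumented = [e["name"] for e in entries if e["description"] == "TODO"]
    if undocumented:
        raise SystemExit(f"Undocumented columns: {undocumented}")

if __name__ == "__main__":
    main()
```

Because the dictionary is regenerated from the data, it can't silently drift out of date the way a hand-maintained Word document does.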

Pre-closeout checklist

Six to ten weeks out, work through this list. If you can't tick a box, that's the gap.

| Check | What "done" looks like |
|---|---|
| Final dataset cleaned and documented | Single canonical version, schema document, data-management plan updated |
| Analysis pipeline reproducible end-to-end | One command runs cleaning → analysis → figures, no manual steps |
| Figures and tables sourced from the pipeline | No screenshots from Excel; every number traces to a script |
| Public artefact deployed at a stable URL | Accessible from outside your institution's VPN, with uptime through review |
| README + setup guide tested by someone outside the team | Fresh-laptop test passes; new team member can run it |
| FAIR-aligned metadata in place | Findable, accessible, interoperable, reusable — not aspirational |
| Final report sections drafted with deliverable references | Each work package output cited with URL or DOI |
| Repository archived to long-term storage | Zenodo, institutional repository, or equivalent |
| Stakeholder handover document complete | Who maintains what, until when, with which contacts |
| Closeout meeting agenda prepared | Demo flow rehearsed; no live coding during the review |

If three or more of these are red, you're in takeover territory. Not catastrophic — but you need help that isn't your team.

When to bring in external help

The signals are usually clear in retrospect and ambiguous in the moment. Some patterns we've seen:

  • Your PhD students are best at research, not engineering — they don't have time to build the platform on top of their thesis writing
  • You tried working with a generic agency but they didn't understand grant context, Annex 1 obligations, or the deliverable format the evaluator expects
  • Your university research-software-engineering (RSE) team is overbooked and can't take on your project before the deadline
  • You have a Streamlit, Shiny, or REDCap prototype that needs to become a maintainable, documented production tool (the usual first step is sketched after this list)
  • The reporting deadline is approaching, project closeout is around the corner, and the digital deliverables aren't ready
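
On the prototype point, the usual first step is mechanical: pull the analysis logic out of the UI script so it can be imported, tested, and documented on its own. A minimal sketch, assuming a hypothetical Streamlit prototype in app.py (all names are illustrative):

```python
"""analysis.py -- logic extracted from the Streamlit prototype (app.py)
so it can be imported, unit-tested, and documented. Names are illustrative."""
import pandas as pd

def load_dataset(path: str) -> pd.DataFrame:
    """Single documented entry point for the data the dashboard shows."""
    df = pd.read_csv(path)
    required = {"site_id", "outcome"}
    missing = required - set(df.columns)
    if missing:
        raise ValueError(f"Dataset missing required columns: {missing}")
    return df

def site_summary(df: pd.DataFrame) -> pd.DataFrame:
    """The aggregation the dashboard renders -- testable without a browser."""
    return df.groupby("site_id")["outcome"].mean().reset_index()

# app.py then shrinks to presentation only, e.g.:
#   import streamlit as st
#   from analysis import load_dataset, site_summary
#   st.dataframe(site_summary(load_dataset("data/clean/dataset_v1.csv")))
```

Everything after that is ordinary engineering: pinned dependencies, a README, and a deployment target that outlives the postdoc's free-tier server.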

If any of those describe your situation, you need a partner who reads grant work plans natively, knows what a closeout package looks like, and can move in 3–6 weeks rather than six months. Generic dev shops will build the wrong thing or take too long. Strategy consultancies will not build at all.

What good closeout takeover looks like

The structure of good closeout work isn't mysterious. It's three phases compressed into the time you have left.

Audit (week 1). What exists, what's documented, what runs. The team you bring in reads the Annex, lists every committed digital output, and flags the gap. No new work yet.

Scope (week 1–2). What can realistically be delivered in the time available — not what would be ideal in a parallel universe with twelve more months. Where the gap is too wide, what's the minimum-viable version that satisfies the evaluator? Document the trade-offs explicitly so the funder isn't surprised at review.

Deliver (weeks 2–6). Build, document, hand over. Each work package output gets a clear "delivered" marker. The handover pack is written as the work happens, not at the end. The public artefact is deployed early and iterated, not deployed at the deadline.

The end state: a deliverable bundle that an external evaluator can run, read, and verify, with documentation that survives your team moving on.

Where Pragma fits

Pragma Digital Labs is a small studio that builds the digital deliverables grant-funded research projects promised. We've delivered closeout work for EU-funded consortia, designed evaluator-ready reporting pipelines, and rebuilt half-finished prototypes into documented production tools. The team reads Annex 1, understands what evaluators expect, and ships in 3–6 weeks rather than six months.

If your closeout deadline is closer than your delivery readiness, that's exactly the engagement we exist for. Our Grant Digital Closeout Pack scopes a takeover in the first week and delivers the bundle the evaluator will review.

Three things to do this week

  1. Open your Annex 1 and list every digital output committed to. Compare to what actually exists.
  2. Pick the most under-developed deliverable. Define the minimum-viable version that the evaluator would accept.
  3. If the gap is wider than your team's remaining capacity allows, request a scope review. The earlier the conversation, the less compressed the delivery.

The deadline is fixed. The deliverable bundle is not — but the time to make it real is short. Better to know now than in the final week.