Research Tool MVP: When to Build, When to Buy, When to Avoid

Your grant promised a research tool. Before you build, decide whether to build at all. Configure, buy, build, or avoid: a decision framework.

Published · 4 December 2025 · 6 min read

Your grant proposal promised a digital tool. Two years in, you have a Figma mockup and a vague sense that REDCap "isn't quite right". The PI is asking when "the platform" will exist. Your team has three months until reporting and exactly zero developers. The question on the table is: do we actually build this thing, or is there a faster way?

The right answer is sometimes "build", sometimes "buy", sometimes "configure", and sometimes "the deliverable as written is wrong, request a scope amendment". This post is a practical decision framework for research tool development in grant-funded contexts — when each path is right, when it isn't, and how to tell the difference.

What "research tool" usually means at proposal time

The phrase covers a wide range of artefacts. In the work plans we read, "research tool" is one of:

  • A data-collection tool for participants or field assistants (surveys, audits, observations, sensor logging)
  • A dashboard for the team or stakeholders to view processed data
  • A workflow tool that supports a specific research operation (CV evaluation, sample tracking, intervention assignment)
  • A public-facing artefact the funder expects (an interactive demo, a citizen-engagement portal)
  • A piece of academic research software that implements a method other researchers will use

Each of these has a different build-vs-buy answer. Lumping them together is the first mistake.

The 4-path decision framework

For each tool concept, ask the same four questions in order. Stop at the first "yes".

Path 1: Configure an existing platform

Is there a mature, configurable platform that does 80%+ of what you need? For data collection: REDCap, KoboToolbox, ODK (Open Data Kit), Qualtrics, LimeSurvey. For workflow: Excel + Power Automate, Airtable, n8n. For dashboards: Looker Studio, Metabase, Grafana.

Pick this path when:

  • The functionality is well-trodden (surveys, basic data review, simple dashboards)
  • Your institution already has a licence or supports the platform
  • You don't need a custom UX or differentiated brand
  • The tool's audience is internal/research-context, not public-facing

The honest math: a configured REDCap project takes 1–3 weeks of effort. A custom-built data-collection tool covering the same ground takes 6–12 weeks of development. Unless you can point to a specific requirement the configured platform can't meet, the configured path is usually right.

Path 2: Buy / subscribe to a domain-specific tool

For some research domains there are specialised commercial or open-source platforms that handle most of the work: clinical trial management (Castor, OpenClinica), bibliometric analysis (Dimensions, OpenAlex), spatial/GIS work (QGIS, ArcGIS), research notebooks and collaboration (LabArchives, Open Science Framework).

Pick this path when:

  • Your domain has a recognised tool that already does what you need
  • The licence cost is bounded (one-time purchase, annual fee within budget)
  • Lock-in risk is acceptable (data export possible, vendor stable)
  • The tool supports your funder's compliance requirements (data residency, FAIR alignment)

The honest math: most research grants underestimate the value of buying. €4–10K of platform licence over a 3-year project is usually cheaper than the developer time and ongoing maintenance of a custom build; at an illustrative €400/day contractor rate, even a modest 6-week build costs roughly €12K before any maintenance.

Path 3: Build a custom MVP

Build only when:

  • The tool is central to the research method (the thing you're measuring is participants' interaction with the tool itself)
  • The required workflow doesn't exist in any commercial or configurable platform
  • The tool needs to outlive the project as a platform other researchers will use
  • The funder explicitly committed to a custom artefact in the work plan

When you build, you're committing to ongoing maintenance, security patching, and continuity planning. Budget for it. Custom research software that nobody maintains becomes a liability inside three years.

Path 4: Avoid (and amend the scope)

The deliverable as written may be wrong. Patterns that suggest the deliverable shouldn't be built:

  • The tool was added to the proposal because "every project needs a digital element" without a clear use case
  • The intended user base is small and reachable through email or a shared document instead
  • The scientific value of the tool is unclear (it's a deliverable for the deliverable's sake)
  • Available platforms cover the use case adequately and the team would be reinventing wheels

In these cases, the right move is a scope amendment with the funder. Most funders will accept a substitution if you propose something equivalent or more impactful. Building a tool nobody needs because the work plan said so is wasted time, wasted budget, and a liability you'll inherit at closeout.
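Compressed to code, the four paths form a strict priority check. A minimal TypeScript sketch, with the gating questions reduced to booleans (the names are ours, not part of the framework):

```typescript
// The four-path decision as a priority-ordered check: stop at the first "yes".
// Field names are illustrative stand-ins for the gating questions above.
type Path = "configure" | "buy" | "build" | "avoid";

interface ToolConcept {
  platformCovers80Percent: boolean; // a mature platform does 80%+ of the job
  domainToolFits: boolean;          // a recognised domain tool fits, cost bounded
  buildJustified: boolean;          // central to method, novel workflow, or funder-committed
}

function decidePath(t: ToolConcept): Path {
  if (t.platformCovers80Percent) return "configure"; // Path 1
  if (t.domainToolFits) return "buy";                // Path 2
  if (t.buildJustified) return "build";              // Path 3
  return "avoid";                                    // Path 4: request a scope amendment
}
```

The point of the code form is the ordering: a "yes" higher up pre-empts everything below it, so "can we configure?" is always asked before "should we build?".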

What separates a good MVP from a doomed build

When path 3 is right and you do build, the difference between "successful research tool" and "abandoned codebase" comes down to a small number of factors.

The team mapped the actual user workflow before scoping the tool

The PI has one workflow in mind. The PhD students who'll use the tool have a different one. The field RAs have a third. If the spec was written without sitting with the actual users, you'll build something nobody operates.

Good practice: 30 minutes with each user persona before scoping. Map what they do today, where the friction is, and what the tool needs to remove. The spec follows from that, not from the proposal text.

The first usable version ships in 3–4 weeks

Not the final version, but the first version users can use. A research tool that lands in users' hands at week 4 has 8 more weeks for iteration than one that lands at week 12. The first version doesn't need to be feature-complete; it needs to be deployed, accessible, and usable for real work.

Research projects evolve. A tool built for 12 weeks and then handed over works only if every assumption made at week 0 was correct. They never all are.

The stack is one your institution can support

Built on Next.js + TypeScript + Postgres? Most institutional RSE teams can support that or hire for it. Built on Elixir + Phoenix + an obscure database? Maintenance is your problem alone. Choose the stack your institution can hire for. Niche stacks lock the tool to its original developer.
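To make "a stack your institution can support" concrete, here is a minimal sketch of the kind of deliberately boring endpoint that stack produces: a Next.js App Router handler reading from Postgres. The table and column names are hypothetical; the point is that any RSE team that knows TypeScript can pick this up.

```typescript
// app/api/observations/route.ts: a deliberately boring read endpoint.
// Assumes a hypothetical Postgres table `observations`.
import { NextResponse } from "next/server";
import { Pool } from "pg";

// One shared connection pool, configured from the environment.
const pool = new Pool({ connectionString: process.env.DATABASE_URL });

// GET /api/observations: return the 100 most recent records as JSON.
export async function GET() {
  const { rows } = await pool.query(
    "SELECT id, site, recorded_at FROM observations ORDER BY recorded_at DESC LIMIT 100"
  );
  return NextResponse.json(rows);
}
```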

Documentation is written as the work happens

A tool with a working README from week 2 has documentation; a tool whose README is rushed at week 12 has documentation theatre. The handover is where research tools live or die — institutional continuity depends on someone other than the developer being able to operate the tool.

A 60-minute scoping exercise you can do this afternoon

Block 60 minutes. Write down:

  1. The deliverable as written in the Annex. Verbatim.
  2. Who will use the tool. Names, roles, n. If you can't list specific users, that's a flag.
  3. What they do today. Without the tool. Walk through one realistic example.
  4. What the tool will change about the workflow. Specifically.
  5. What configurable/commercial platforms exist. Spend 20 minutes on this. If you find one that gets you 80% of the way there, your build-vs-buy answer is probably "configure".
  6. The minimum viable feature set. What's the smallest version of the tool that delivers research value? If you can't articulate it, you're not ready to build.

The output is either a clearer build spec or the discovery that you should be configuring something instead. Both outcomes are wins.
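If it helps to structure the notes, the six answers fit a simple record. An illustrative TypeScript sketch (the field names are ours, not a prescribed format):

```typescript
// The scoping exercise output as a structured record. All names illustrative.
interface ScopingResult {
  deliverableVerbatim: string;             // 1. as written in the Annex
  users: { name: string; role: string }[]; // 2. an empty list is a flag
  currentWorkflow: string;                 // 3. one realistic walkthrough
  workflowChange: string;                  // 4. what the tool specifically changes
  candidatePlatforms: string[];            // 5. any 80% fit points to "configure"
  minimumViableFeatures: string[];         // 6. can't articulate = not ready to build
}
```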

Where Pragma fits

For research projects where the right answer is path 3 (custom MVP), we build research tools in 4–10 weeks: workflow mapping, scoped feature set, responsive web or mobile-first tool, admin logic, full deployment, handover. We've shipped a multi-country mobile assessment app for an EU-funded sports evaluation programme (PROMISE), an AI-assisted CV evaluation pipeline (CoARA), and analysis tools for academic research collaborations.

If the build-vs-buy decision is genuinely "build", that's the engagement we exist for. If it's "configure" or "amend the scope", we'll tell you that in the scope review rather than waste your project budget.

Three things to do this week

  1. Run the 60-minute scoping exercise above. Note which path the answer points to.
  2. If the answer is "configure", spend 30 more minutes evaluating two configurable platforms. Often the gap is smaller than feared.
  3. If the answer is "build", request a scope review. We'll confirm whether the build is genuinely needed and how to get to a working MVP in your remaining timeline.

The biggest win in research tool development is usually deciding not to build the wrong thing. The second biggest is shipping the right thing in 4–10 weeks instead of 6 months.