DX Discovery
You know DX matters. But knowing where friction actually lives is a different problem entirely.
A pragmatic methodology for engineering and platform leaders who want to stop guessing and start making DX investments based on what developers actually need.
Why This Matters
Developer Experience shapes whether engineers adopt the platforms you build. But most platform teams design for capability, not experience. The result: underutilised platforms and frustrated engineers.
The Problem Space
The Challenges of Getting DX Right
Platform teams face a specific set of research and decision-making challenges. These are not generic product management problems — they are distinct to the context of building for developers.
Building on Assumptions
Platform teams are experts in their own tools and forget what it felt like to be a newcomer. Their experience is the worst proxy for their users'.
Loudest Voice Wins
Without structured research, roadmaps are shaped by the most vocal stakeholders — not the most representative developer needs.
Low Adoption Mystery
Teams build platforms, launch them, and watch adoption stagnate — without knowing whether the problem is awareness, onboarding, or fundamental misfit.
Solving Symptoms
Without research, teams fix the bugs developers complain about, missing the deeper workflow issues that are the actual source of friction.
Too Late to Change
Discovery happens after launch, when design decisions are baked in and changing direction is expensive. Research is retrospective when it should be generative.
Fragmented Evidence
Support tickets, Slack messages, and one-off conversations exist but are never synthesised into a coherent picture of developer needs and pain points.
The Solution
How DX Discovery Helps
DX Discovery is not a silver bullet. It is a structured approach to the messiness of understanding developer needs — one that gives platform teams the confidence to make decisions and the evidence to defend them.
Build Confidence
Discovery gives platform teams the evidence base to defend prioritisation decisions. Instead of 'we think developers need this', you have 'here's what 18 developer interviews told us'.
Reduce Product Risk
The biggest risk in platform engineering is building the right thing in the wrong way — or the wrong thing entirely. Discovery de-risks both before significant investment is made.
Enable Strategic Alignment
Research findings give platform teams a shared language with stakeholders. Evidence-based insight is far more persuasive than intuition when negotiating for roadmap space.
The Framework
The DX Discovery Methodology
DX Discovery is a four-phase research methodology, followed by a post-discovery stage that sustains its value. Each phase builds on the previous one, moving from context and landscape through research, synthesis, and into experimentation. The result is an evidence-backed foundation for platform decisions.
- 01 Landscape: Map the developer ecosystem, identify the right scope, and plan the research
- 02 Research: Apply mixed qualitative and quantitative methods to surface developer needs
- 03 Synthesis: Turn raw data into actionable insights, priorities, and testable hypotheses
- 04 Experiment: Frame experiments that validate hypotheses before full product investment
- 05 Post-Discovery: Sustain momentum by making research shareable, sociable, and actionable long-term
Phase 1
Landscape
Before you research, you need to understand the terrain. Phase 1 is about mapping the developer ecosystem: who are the users, what platforms and tools do they interact with, where do the team's assumptions live, and what is in scope for this discovery?
The four flight levels model is a particularly useful lens in this phase. It helps the research team identify which level of the system the discovery question lives at — and whether they are looking at the right level at all.
The Four Flight Levels
- Operational: Day-to-day developer tasks and tools
- Coordination: How teams work together and hand off
- Portfolio: Prioritisation & sequencing across teams
- Strategy: Business & technical direction
Phase 2
Research
Phase 2 is where you gather evidence. DX Discovery uses a mixed-method approach — combining qualitative depth with quantitative breadth. The balance depends on what you know and what you need to know.
Watch Out: The False Consensus Trap
Platform teams are experts in their own tools. That expertise is also a blind spot. Because they understand the system deeply, they've forgotten what it feels like to encounter it as a newcomer. This is why platform teams are the worst proxy for their own users — and why structured research with actual developers is non-negotiable.
Qualitative Methods
- Developer interviews
- Contextual inquiry (observe developers working)
- Usability testing
- Diary studies
- Expert interviews
Quantitative Methods
- Usage analytics and telemetry
- Support ticket analysis
- Developer surveys (NPS, CSAT)
- Funnel and drop-off analysis
- A/B testing of developer flows
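As a concrete illustration of the quantitative side, a funnel and drop-off analysis can be sketched in a few lines. Everything here (the event names, the onboarding steps, and the telemetry records) is invented for the example; real telemetry schemas will differ.

```python
# Minimal sketch of a funnel drop-off analysis over developer telemetry.
# Event names and data are illustrative, not from any real platform.
from collections import Counter

# Ordered onboarding funnel: each developer should pass these steps in order.
FUNNEL = ["docs_visited", "cli_installed", "first_deploy", "second_deploy"]

# Example telemetry: one (developer_id, event) pair per completed step.
events = [
    ("dev1", "docs_visited"), ("dev1", "cli_installed"), ("dev1", "first_deploy"),
    ("dev2", "docs_visited"), ("dev2", "cli_installed"),
    ("dev3", "docs_visited"),
]

def funnel_report(events, funnel):
    """Count developers reaching each step, plus the conversion from the previous step."""
    seen = {}  # developer_id -> set of completed steps
    for dev, event in events:
        seen.setdefault(dev, set()).add(event)
    reached = Counter()
    for dev, steps in seen.items():
        # A developer 'reaches' a step only if all earlier steps were also completed.
        for step in funnel:
            if step in steps:
                reached[step] += 1
            else:
                break
    report, prev = [], None
    for step in funnel:
        count = reached[step]
        rate = count / prev if prev else 1.0
        report.append((step, count, rate))
        prev = count or None  # avoid division by zero when a step is empty
    return report

for step, count, rate in funnel_report(events, FUNNEL):
    print(f"{step:15s} {count:3d} ({rate:.0%} of previous step)")
```

Output like this points the qualitative research at the step with the steepest drop-off, rather than at whichever step generates the loudest complaints.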
Early Exploration Research
When you don't know what you don't know
Use when: the problem space is unclear, the team has strong assumptions to challenge, or you're scoping a new discovery from scratch. Favour qualitative depth and open-ended methods.
Later Exploitation Research
When you need to validate and prioritise
Use when: you have strong hypotheses from qualitative research and need to test them at scale. Favour quantitative methods to validate frequency and impact.
Phase 3
Synthesis
Synthesis is where raw data becomes insight. It is the most cognitively demanding phase — and the most underinvested. Teams rush from research to solutions, skipping the step where the real learning happens.
Raw interview and observation recordings are transcribed and tagged. Tags mark recurring themes, friction points, workarounds, and emotional signals. This is where the data becomes searchable.
Tagged data is grouped into insights — patterns that appear across multiple data sources and participants. An insight is not a quote: it is a claim about developer behaviour or need, supported by evidence.
Insights are evaluated by how frequently they appeared across participants and how much impact they have on developer experience. This creates a prioritisation matrix that informs the product roadmap.
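One way to make the frequency-and-impact evaluation concrete is a simple scoring sketch. The insights, mention counts, and impact scores below are invented for illustration; a real discovery team would weight the two dimensions to fit its own context.

```python
# Illustrative sketch of a frequency x impact prioritisation of insights.
# Insight names and scores are invented for demonstration.

insights = [
    # (insight, participants who mentioned it, total participants, impact 1-5)
    ("CI feedback loop exceeds 20 minutes", 14, 18, 5),
    ("Onboarding docs assume undocumented setup steps", 9, 18, 4),
    ("Secrets rotation requires a manual ticket", 5, 18, 3),
    ("Dark mode missing in internal dashboard", 12, 18, 1),
]

def prioritise(insights):
    """Rank insights by frequency (share of participants) times impact."""
    scored = []
    for name, mentions, total, impact in insights:
        frequency = mentions / total
        scored.append((frequency * impact, name, frequency, impact))
    return sorted(scored, reverse=True)

for score, name, freq, impact in prioritise(insights):
    print(f"{score:4.2f}  {name} (freq {freq:.0%}, impact {impact})")
```

Note what the scoring surfaces: a frequently mentioned but low-impact item (the dashboard theme) ranks below a less-mentioned but high-impact workflow issue, which is exactly the correction a prioritisation matrix is meant to apply.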
For the highest-priority insights, the team generates multiple potential solutions without committing to any. Diverge before you converge. The goal is a solution space, not a solution.
Promising solutions are framed as testable hypotheses: 'We believe that [change] will result in [outcome] for [user]. We will know this is true when [signal].' This connects research to experimentation.
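The template lends itself to a structured record, so every hypothesis carries a measurable signal into the experiment phase rather than living as loose prose. A minimal sketch, with illustrative field values:

```python
# Minimal sketch: hypotheses as structured records, so each one carries a
# measurable validation signal into Phase 4. Example values are invented.
from dataclasses import dataclass

@dataclass
class Hypothesis:
    change: str    # what we would build or alter
    outcome: str   # the effect we expect
    user: str      # who it affects
    signal: str    # the measurable evidence that would validate it

    def statement(self) -> str:
        """Render the hypothesis in the standard discovery template."""
        return (f"We believe that {self.change} will result in {self.outcome} "
                f"for {self.user}. We will know this is true when {self.signal}.")

h = Hypothesis(
    change="a one-command local environment setup",
    outcome="faster first deploys",
    user="newly onboarded backend developers",
    signal="median time-to-first-deploy drops below one day",
)
print(h.statement())
```

Keeping the signal as a required field forces the success criterion to be written down before the experiment starts, which is the discipline Phase 4 depends on.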
Findings are packaged into a format that stakeholders outside the research team can understand and act on — an insight repository, a research report, or a shareable presentation.
The DX Advantage
In DX discoveries, developers are often more articulate about their workflow than typical product users. They can describe the exact moment friction occurs, what they expected, and what they did instead. This specificity makes synthesis richer and insights more actionable.
Phase 4
Experiment
Discovery doesn't end with a report — it ends with a decision about what to test. Phase 4 frames experiments that validate the most important hypotheses before full product investment is made.
Minimum Viable Evidence
The goal of an experiment is to validate or invalidate a hypothesis at minimum cost. Design experiments that answer the key question without building the full solution.
Define Success Before You Start
Every experiment needs a predetermined success criterion. Decide what 'validated' looks like before you begin — not after you see the results.
Timebox Ruthlessly
Experiments without deadlines drift into ongoing projects. Set a fixed window — one or two weeks — after which you commit to a decision: continue, pivot, or stop.
Inputs
- Prioritised insight list from synthesis
- Defined hypotheses with measurable success criteria
- Stakeholder alignment on what to test
- Developer cohort willing to participate
- Baseline measurements for comparison
Outputs
- Validated or invalidated hypotheses
- Evidence-backed product recommendations
- Prioritised backlog items with research provenance
- Documented experiment results for future reference
- Clear go/no-go decisions on proposed solutions
After Discovery
Post-Discovery Considerations
Discovery doesn't end when the synthesis deck is shared. Sustaining the value of research requires deliberate effort after the project closes. These considerations are open-ended by design — the right answer depends on your organisation.
Research that lives only in slides gets forgotten. Consider building a living insight repository — a searchable, maintained database of findings that future discovery rounds and product decisions can draw on.
DX Discovery is not a one-time exercise. Developer needs evolve, platforms change, and new friction emerges. Plan for regular discovery cycles — even lightweight ones — to maintain evidence freshness.
The developers who gave their time to be interviewed should know what happened as a result. Closing the loop builds trust, improves future participation rates, and demonstrates that research leads to action.
Track whether the changes made post-discovery achieved the intended outcomes. This is how you learn whether your research-to-action pipeline is working — and how you improve the methodology over time.
Research findings have value beyond the platform team. Engineering leaders, product managers, and other platform teams often benefit from understanding the developer experience landscape. Find the right communication format for each audience.
A Different Perspective
AI-Powered Discovery
AI is changing both what platform teams discover and how they discover it. Developers are adopting AI coding tools rapidly, and this creates a new class of DX challenge: the experience of AI-assisted development is itself worth studying. Simultaneously, AI tools are becoming useful research accelerators within the discovery process.
The risk with AI-assisted discovery is over-automating the human parts. Transcription and thematic coding can be accelerated with AI. But the interpretation of context, the recognition of what's unsaid, and the judgment about what matters — these still require human insight. Use AI to speed up the mechanics. Preserve the human for the meaning.
A Note on AI Coding Tools
If your developer audience is using AI coding assistants, your platform's DX in that context is a distinct research question. How does your API, SDK, or tool behave in an AI-assisted workflow? Is your documentation designed for both human and AI consumption? These questions are worth adding to your discovery scope.
A Personal Note
Most of what you've read here isn't new. These are well-known discovery practices — adapted, not invented. What's new is the context they're being applied in.
I've worked with a lot of engineering and platform teams. Very few apply these practices consistently when it comes to internal platforms and developer experience. I've been guilty of that myself.
It's not a knowledge problem. The methods are understood. What gets in the way is underestimating how much they actually matter here — or convincing yourself you'll do it properly later, once the real work is done.
There is no later. Research isn't something you sprinkle on top.
I run discoveries with engineering and platform teams.
Jacob Lueg Tiedemann
Product Leader & Product Builder
FAQ
Frequently Asked Questions
About DX Discovery
What is a DX Discovery?
A DX Discovery is a structured research effort designed to reduce uncertainty before you commit. It answers three questions: who your developers are and what they're trying to do, what your business goals and constraints actually require, and whether there's something worth building — and if so, what. The output is confidence, not just findings.
How is a discovery different from a developer survey?
A survey tells you what developers think when you ask them a direct question. Discovery tells you why things are the way they are — the context, the workarounds, the friction that never makes it into a survey response. Surveys measure. Discovery understands. Both are useful. They're not interchangeable.
What is the difference between a discovery and an assessment?
Discovery defines the problem. You're figuring out who your users are, what they're trying to achieve, and whether there's a real opportunity worth pursuing. Assessment evaluates current state against a target — diagnosing gaps, measuring maturity, and producing prioritised recommendations. In practice, they're often combined. The right starting point depends on how much you already know and how much confidence you need before the next decision.
What does a discovery actually produce?
A discovery produces four things: a clear picture of your developer personas and their real friction points, a set of prioritised opportunities specific enough to inform roadmap decisions, validated hypotheses ready for prototyping or testing, and a go/no-go signal you can take into a funding conversation. It's the foundation everything downstream builds on.
Readiness and Conditions
When does it make sense to run a discovery?
Three conditions matter. First, uncertainty is high enough that you can't make a confident investment decision without research. Second, the stakeholders involved have real influence over the areas being researched — out of control means out of scope. Third, there's genuine readiness to act on the findings. Research raises expectations. If there's no appetite or budget to move on what you learn, the discovery will do more harm than good.
We already know what our developers need. Do we still need research?
Possibly not — if you have recent, structured evidence rather than informed intuition. But most engineering leaders who believe they know are drawing on their own experience as developers, feedback from a vocal minority, or assumptions that haven't been tested. Discovery exists to correct for that. If you're confident and right, the research will confirm it quickly. If you're confident and wrong, you'll find out before it costs you.
Can't the platform team rely on its own experience as engineers?
No — and this is where the cost tends to be highest. Platform teams built by engineers, for the engineers they imagine, routinely ship tooling with good intentions and low adoption. The false consensus effect means the team's own experience reads as user insight. It isn't. The developers who struggle most are rarely the ones represented in the room. What you get is a platform optimised for the people who built it — which is usually not the people who need it most. That gap shows up in adoption numbers, mandated usage, and workarounds nobody documented.
What if the organisation isn't ready to act on the findings?
Don't run the discovery. Research raises expectations in the organisation. Developers and teams who participate expect something to change. If there's no mandate, budget, or appetite to act on what you learn, the discovery will surface real problems you can't fix — and that does more damage than not asking in the first place. Scope the research to what you can actually influence, or wait until the conditions are right.
Running the Research
How long does a discovery take?
A well-scoped discovery runs in weeks, not months. The Landscape phase typically takes a few days. Research — five to eight interviews per user group, combined with quantitative data collection — takes two to three weeks. Analysis and synthesis, especially with AI-assisted tooling, can move fast. The bottleneck is almost never the research itself. It's alignment before you start and decision-making after.
How many developers do we need to interview?
Five to eight interviews per distinct user group is enough to surface most recurring themes. Beyond that, you're largely confirming what you already know. The key word is distinct. A platform engineer, a mobile developer, and a backend engineer live in different realities and count as separate cohorts. Quality of interviews matters more than volume.
Is it hard to recruit participants?
No — and this is one of the structural advantages of internal platform research over B2C or B2B. Your users are colleagues. They're reachable, context-aware, and motivated to see things improve. There's no recruiting pipeline or incentive budget. Getting five to eight developers in front of you is a calendar problem, not a research problem. Often the developers you interview can point to solutions already. That's their job.
Who should run the discovery?
Discovery works best as a team effort, not a solo task handed to a product manager or designer. Engineers, platform leads, and business stakeholders all bring different lenses — and the closer the builders are to the research, the fewer assumptions fill the gap later. Combined business and technical expertise on the research team matters significantly for speed and quality of insight. The team should have enough separation from day-to-day platform work to approach findings with curiosity rather than defensiveness.
Post-Discovery
Can discovery continue during delivery?
It can — and in mature teams, it should. Early discovery answers whether you're building the right thing. Discovery during delivery answers whether you're building it right. The two operate at different cadences and with different questions, but they're not mutually exclusive. The organisations that get the most value from research treat it as a continuous practice, not a phase that ends when delivery begins.