- 4 Feb 2026
- Loïc Roux
Make as Insight, Not Execution – Why “Make” Is the Most Misunderstood Step in DMTA
Introduction — The DMTA Make phase is where reality appears
In drug development, when it comes to Design, Make and Test, most conversations focus on Design and Test. Both aspects are crucial, of course: Design is where ideas are shaped, and Test is where biology speaks.
Make tends to sit quietly in the middle.
In practice, this is where assumptions are first challenged and reality first appears.
Make is the point where molecules stop being concepts and start behaving like chemicals. They begin to encounter synthesis constraints, purification limits, analytical ambiguity, scale effects, and eventually CMC expectations. The decisions made – or avoided – at this stage quietly determine what can be measured, what can be manufactured, and what can ever be registered.
Despite this, Make is often treated as a service function rather than a crucial source of insight. It is expected to execute decisions made elsewhere, rather than challenge them. When that happens, complexity is only deferred, not managed. In this article, I will explore the observation that many of the problems that surface later in development were already visible during Make; the challenge is recognising them at the time.
What “Make” Really Means in DMTA
Within a DMTA framework, the Make phase is frequently misunderstood. It is often reduced to a practical step: synthesise the molecule, purify it, and generate enough material to enable testing.
In reality, Make is the phase where a development hypothesis first encounters physical, chemical, and analytical constraints.
At this stage, Make encompasses far more than synthesis alone. It includes early choices around route design, purification strategy, analytical method development, and an early assessment of whether the molecule can be reproducibly made, measured, and ultimately controlled. Although this work sits upstream of GMP manufacturing, it is already inseparable from CMC thinking.
Recent perspectives on developability in oligonucleotides increasingly emphasise that many late-stage failures originate from assumptions made during early synthesis and characterisation, long before formal manufacturing begins (Vinjamuri et al., A Review on Commercial Oligonucleotide Drug Products, 2024). These assumptions often remain unchallenged because everything appeared to be working at small scale, and remain invisible until scale, analytical scrutiny, or regulatory expectations force them into view.
In that sense, Make is not a handoff between Design and Test; it is the first real point at which feasibility, robustness, and controllability can be meaningfully interrogated.
The Questions That Create Answers
For complex programmes, the Make phase is where several critical questions are first encountered, often implicitly rather than deliberately:
- Can the molecule be made reproducibly with acceptable yield and robustness?
- Can it be analytically defined in a way that distinguishes product from closely related variants?
- Do purification and isolation steps scale without altering the product profile?
- Are emerging impurities controllable, or are they structurally intrinsic?
None of these are manufacturing problems in the narrow sense; they are development-defining issues. If they are not explored early, they don’t disappear. They simply re-emerge later, when timelines are compressed.
Quality-by-Design principles formalised in ICH guidance reinforce this idea: control strategies and critical quality attributes cannot be retrofitted efficiently if they are not understood early (ICH Q8(R2), Q11). Make is where that understanding begins.
The Common Failure: Treating Make as Execution
Although the technical challenges are well described in the literature, a pattern shows up repeatedly.
Design advances rapidly, supported by strong hypotheses. Test generates encouraging biological data. The molecule looks promising; momentum builds.
The Make phase is then asked to deliver material that conforms to those decisions, rather than inform them.
When challenges appear – analytical ambiguity, purification fragility, scale sensitivity – they are treated as obstacles to be overcome, rather than as design inputs or signals to be interpreted. Method development becomes reactive. Impurity profiles are tolerated rather than understood. Scale-up considerations are deferred because they are perceived as premature.
The consequence is a gradual accumulation of technical debt:
- Analytical methods struggle to clearly define the product
- Purification schemes become increasingly complex to meet evolving purity expectations
- Batch-to-batch variability grows
- CMC narratives are forced to rationalise decisions that were never explicitly made
What makes this failure mode particularly costly is that it often unfolds slowly. Programmes rarely fail outright at this point; they stall. Timelines stretch, confidence erodes, and resources are consumed addressing problems that could have been surfaced — and often avoided — much earlier.
Importantly, this is not a failure of chemistry, analytics, or biology in isolation. It is a failure of integration. The Make phase is treated as execution rather than as a source of critical information. In DMTA terms, the loop is broken.
Make generates data, but that data does not meaningfully influence Design decisions. Instead of guiding the programme, it is used to justify continuing along a narrowing path.
How Make Should Feed Back Into Design
In a functioning DMTA system, Make does not sit downstream of Design. It sits alongside it, continuously informing which design choices are viable and which are not.
Practically, this means allowing Make to ask difficult questions early, and influence the direction:
- Can this molecule be analytically defined with confidence?
- Are observed impurities structurally intrinsic or process-driven?
- Does purification feasibility constrain linker or attachment choices?
- Will scale-up amplify variability or collapse separation?
When these questions are answered early, they do not slow development. They accelerate it by preventing rework.
In mature DMTA implementations, analytical readiness becomes a design input rather than a downstream deliverable. If a product-related variant cannot be reliably detected or controlled, its presence must influence whether the design is acceptable. Similarly, early scale-up experiments are used not to optimise yield, but to probe robustness and identify failure modes.
Quality-by-Design principles explicitly support this approach. ICH guidance frames control strategy as an outcome of process understanding, not a documentation exercise performed late in development (ICH Q8(R2), Q11).
When Make is allowed to feed back into Design this way, DMTA works as intended; the framework stops being a production pipeline and becomes a learning system.
Conclusion: The Importance of Reframing Make
Make is often described as the bridge between Design and Test. That framing understates its importance. In reality, it is where future feasibility is first exposed.
Make is where complexity stops being abstract and begins to shape what is possible. It is where programmes either adjust deliberately – or continue forward while assumptions lie hidden.
In mature DMTA workflows, Make is regarded not as execution, but as a source of invaluable insight. It informs Design and surfaces complexity early, while there is still time to address it deliberately rather than rationalise later.
The lesson is simple but often overlooked:
Complexity is not the enemy. Unmanaged complexity is.
And DMTA works when Make is allowed to reveal it early.
Discover more
If you want to learn more about CatSci’s DMTA services, head here.
To read more about DMTA, take a look at our resources here.
If you’d like to discuss how CatSci’s integrated services could address your challenges, get in touch here or reach out to me on LinkedIn.