Derek Lowe wonders how to kill off bad drug candidates before companies invest valuable time and money in them
People inside the drug industry disagree on many things, but there’s one statement that you could get everyone to sign: that we need more new drugs. The problem has become especially acute recently: not only are development pipelines a bit light, but we’ve been losing drugs we thought we already had, such as Vioxx. Opinions differ, naturally, on how to realise this goal. Suggestions ranging from ‘replace everyone with robots’ to ‘wait for someone else to discover a drug, and buy their whole company’ have been offered over the years, with varying degrees of success.
Less revolutionary approaches start with looking over the whole process of discovery and development with an eye to its (improvable) bottlenecks. One problem with this is the large number of candidate bottlenecks, many of which have a plausible claim to importance. In general, though, it’s better to make improvements in the earlier stages of the process, given the tremendous expense of later clinical development. It may not be a matter of finding the good molecules, candidates or projects - rather, we need to recognise and kill off the bad ones more quickly, before they consume everyone’s time and money. Merck’s recent halt of its MK-0557 obesity candidate is an example of a protracted (and expensive) development effort that, in retrospect, everyone would rather have stopped years earlier.
From that point of view, one important task that we do quite poorly is estimating human pharmacokinetics. That, in fact, is the whole purpose of almost all Phase I clinical trials - to tiptoe very carefully into human dosing, checking constantly to see if the original assumptions about blood levels and clearance were worth anything. One obvious way to improve things would be to become better at predicting human dosing de novo, using virtual or in vitro models of membrane penetration and transport processes. Great efforts in this area have led to underwhelming results, however. Many promising assays have ended up telling us more about the assays than they’ve told us about humans, and it seems too early in our understanding to hope for computational methods to tell us very much.
Another way to approach the problem would be to find better connections between our readily available animal models and eventual human dosing. We can’t predict rodent blood levels either, but we can at least measure them easily. As things stand, we use these assays to cross off compounds with poor pharmacokinetics, on the assumption that the factors involved are general rather than rodent-specific. That’s a reasonable assumption, but we really don’t have as much data to support it as we should. No doubt some promising compounds that underperform in mice have been prematurely buried, but we’ll never know.
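To make that gap concrete: the workhorse method for extrapolating animal clearance to humans is allometric scaling, which assumes that clearance tracks body weight to roughly the 0.75 power. Here is a minimal sketch of the single-species version - the compound values are hypothetical, and the 0.75 exponent is the conventional default rather than anything compound-specific:

```python
# Single-species allometric scaling: the classic first-pass bridge
# from rodent pharmacokinetics to a human prediction.
# Assumption: clearance scales with body weight to the 0.75 power,
# the conventional default exponent (not compound-specific).

def scale_clearance(cl_animal, bw_animal_kg, bw_human_kg=70.0, exponent=0.75):
    """Predict human clearance (same units as cl_animal) from one species."""
    return cl_animal * (bw_human_kg / bw_animal_kg) ** exponent

# Hypothetical compound: clearance of 15 mL/min measured in a 0.25 kg rat.
cl_human = scale_clearance(15.0, bw_animal_kg=0.25)
print(f"Predicted human clearance: {cl_human:.0f} mL/min")  # roughly 1030 mL/min
```

The arithmetic is trivial, and that is the point: a single exponent cannot capture species differences in metabolic enzymes, transporters or plasma protein binding, which is why schemes of this kind so often mislead.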
A better bridge between the animal and human data is badly needed, but efforts to correlate the existing large data sets haven’t turned up any useful prediction schemes. Perhaps a more brute-force approach is called for, one that advances in tissue engineering might make possible. A rodent with a more or less humanised gut wall sounds like an odd creature, but it could be a tremendously useful one. While there are other factors that affect blood levels, that type of animal would be the place to start.
Later, we could expect increasingly human versions of blood carrier proteins, liver enzymes and so on. The end result may sound a bit monstrous, but I’d argue that it’s much less so than our current system of putting drugs into human volunteers with only a rough idea of what might happen to them.
Derek Lowe is an experienced medicinal chemist in the pharmaceutical industry, working on preclinical drug discovery.