This content is current only at the time of printing. This document was printed on 20 February 2017. A current copy is located at http://apvma.gov.au/node/19411
Transcript for Distinguished Professor Jim Riviere
Impact of Computational Methods on Animal Health Drugs Approvals and Risk Assessment
This presentation was delivered at the APVMA’s science feature session on 15 October 2015. The full video is available on our YouTube channel.
Thank you for having us here and allowing me to present some science to you. I really appreciate Dr Phil Reeves for making me aware of this. I've known Phil for decades and we've managed to run into one another over some of these topics for a long time. What I'm going to try to do, without putting everyone to sleep first thing in the morning, is talk about pharmacokinetics and animal health. What I really want to do is give you an historic perspective of where this comes from, because embedded in a lot of animal drug product approval, regulations and calculation of withdrawal times there's actually a very strong pharmacokinetic basis. But what I really want to open up to you is that the processes you regulate are actually determined by physiology, by species differences and by biochemistry, and the mechanism available to quantitate this is pharmacokinetics. I want to go over some approaches that have been used relatively extensively and are completely embedded, in fact most of what I'm going to be talking about is completely embedded, in the approval of human drugs.
An interesting aspect that I'll touch on in the last section of my talk is, how do you actually move the exploratory science, the kinds of things Karina said, the forward-looking, enquiry-based data, when does that actually become regulatory science? More importantly, when does the cutting edge that was cutting edge, say, 40 years ago finally get recognised in some regulatory approaches as not being cutting edge any longer? I'll give some very good examples of that, because some of the standard kinetic approaches used for determining some very standard regulatory endpoints, like withdrawal times, would be rejected outright in a peer-reviewed journal if that were the way the data were analysed.
I want to give you some background. I'm not going to go into incredible depth on a lot of these, but hopefully I can give you some idea of where these kinds of processes have been applied and what they're being used for. First of all, pharmacokinetics is just a really simple approach of using mathematical models to quantitate the time course of drug absorption and disposition in man and animals. It's nothing magical. It's basically our attempt to quantitate what is actually happening in drug or toxicant disposition in animals and man.
The way I look at pharmacokinetics, it's not doing pharmacokinetics for the sake of doing pharmacokinetics. It's a tool, and it is the key bridge that allows you to go between the administered dose and whether there's efficacy or toxicity. It's a bridge for you to extrapolate between different species. It's a bridge to go from in vitro to in vivo studies. This becomes critically important as we start looking at the kind of science being generated in both in vitro platforms and even in silico and computational platforms: how do we move the results of that data into an in vivo situation?
It's a bridge from pre‑clinical to clinical studies. It's extensively used in pre‑clinical initial dose determination, and it's a critical part of determining the highest dose you would actually test for toxicologic endpoints, but there are a lot of assumptions in those dosage extrapolations that pharmacokinetics is ideally designed to handle.
It's another approach for assessing clinical variability due to populations and disease. In the initial applications of pharmacokinetics people tried to minimise variability. Variability was seen as evil. Statisticians look at high variability as the mark of a study that's not well designed. Well, guess what, variability is part of life, and variability is part of the differences in populations of animals. Humans have it really easy. Their variability is essentially due to weight changes, to age factors, to co-exposure to other agents. Animals have an entirely larger amount of variability. Just think of dogs. We go from Chihuahuas to Great Danes, very big differences in drug-metabolising ability and in how they handle these compounds. In fact, I can make a very clear argument that the types of approaches that were developed, and this is not cutting edge from two or three years ago but from twenty years ago, to handle population variability on the human drug approval side are ideally suited to, and much better applied on, the animal health side, but it forces us to rethink what we think of pharmacokinetics and what we think of endpoints.
The nature of the problem being modelled has for a long time dictated the kind of model used, but as these other approaches have matured, there's a lot of crossover that can occur. Again, I want you to think of this not as everyone here having to become a pharmacokineticist, no, it's a toolbox. Very similar to using this computer: I have absolutely no idea how it's programmed or how anything in it works, and I couldn't build one, but I can't live without it. The same kind of thinking has to be applied to new scientific approaches.
The kind of approach we're actually thinking of is this: no matter what you want to say about drug disposition in animals, this is essentially the plumbing that is going to occur. These are the kinds of processes that are going to dictate what is the administered dose, what is the effect on efficacy, what is the effect on toxicity. The drugs are either going to get injected into the systemic circulation, get absorbed through the GI tract directly into the systemic circulation, or go through the portal circulation to the liver, which means for a lot of compounds they will be metabolised. Right there, a major source of variability in disposition is how effectively our animals metabolise compounds.
Once it's in the systemic circulation it's going to distribute to tissues. So the pharmacokinetic model will define this type of tissue exposure. But you the regulatory agency, or you the clinician, are going to determine whether that site is the site of action. In other words, if I'm looking at something for an antiarrhythmic, the site of action is going to be the cardiac tissue. You're going to decide what is the site of toxicity. In fact, all of these sites, including, say, tissue binding for a residue, could be in the same organ. You could easily be trying to kill liver flukes in the liver, your toxicity could be hepatic toxicity, and the rate-limiting decay of that drug could be in the liver, therefore determining your withdrawal time, but the concentrations could be orders of magnitude different. The pharmacokinetic models that cover that may be very different as a function of concentration.
This is going to determine what you see, okay? No matter how much you want to try and simplify what's going on, if a drug has relatively complex metabolism and tissue binding, it's going to have non-linear kinetics and it's going to have more complex behaviour. The key for us is to come up with something that helps you explain this, eliminate the stuff whose behaviour we really don't care about, and focus on what we actually want to look at. Then a drug is going to have to get excreted by some route: either by the liver, through direct excretion into the bile or by metabolism, or by the kidney. Because these are the primary routes of elimination, guess what? Diseases of the liver or the kidney are going to modify what happens. I'm going to give you some very clear examples. At least in the US, and I will blame everything on the US, I will blame all abnormal regulatory approaches on the USFDA, so I'm not going to get in trouble here along that line and nobody at USFDA will ever hear about this; except, of course, once this goes up on the website they'll have seen and heard all of it. I've had a lot of contact with FDA.
The fascinating aspect, as I'll show you later on, is that in many cases we actually approve drugs and build their kinetic models in perfectly healthy animals and then we go out and use them in diseased animals, so a lot of the initial kinetic work doesn't cover what happens in those diseased animals. I'll show you some interesting problems with that.
The other aspect is that I've taught pharmacology and pharmacokinetics to veterinary students and to medical students. The MDs have it easy, okay, because essentially you're trying to get a dose that is efficacious. Generally that's going to be in a µg kind of dosage range, and this is really just for orders of magnitude, but not so much that it's going to cause toxicity. That's pretty much what you worry about in small animals and what you worry about in people. If it's an antibiotic you have to come up with an optimal dose that actually handles evolutionary resistance and the evolution of higher MICs, as many bacteria have been exposed to these drugs for decades. You might have to modulate the dose along those lines.
In the US we actually have a policy of extra-label drug use called AMDUCA, the Animal Medicinal Drug Use Clarification Act, that allows a veterinarian to go off label. In a small animal situation they can pretty much go off label as much as they want. In a food animal situation they have to assure that there's an adequate withdrawal time. That's where my life for 30 years in the Food Animal Residue Avoidance Databank (FARAD) comes in. We talked about this a little bit yesterday at APVMA. I'm not going to talk about it now, but that's why I've been involved in animal health kinetics for so long.
What that means is you have to give an efficacious dose that's not toxic, but it can't result in residues in an animal that's then going to be consumed. This is something you don't worry about in humans. The pharmacokinetics behind withdrawal times are very different from what happens on the human side.
Why be bothered with any of this? What you really have to realise is that these are simple, artificial mathematical models that quantitate some type of interaction between a drug and an animal's physiology. Every single model has some kind of assumption inherent to it. These differ between modelling approaches. You need to know what those assumptions are when you actually use the data. Models link data to biology, okay? Essentially we have some kind of data we're trying to explain, there are biological processes affecting the kinetics, we come up with a model, and we ask whether we can actually make a prediction. Whatever model we come up with has two determinants: one is the structure of the model and the other is the parameters in that specific model.
What has been applied in veterinary pharmacokinetics are basically very traditional compartmental pharmacokinetic models. I don't want to go into a lot of discussion on this, but essentially we model the body by how rapidly stuff moves around in it. In other words, if we have a simple drug that distributes into the body water and gets excreted by the kidney, essentially our model is this cup. Everything distributes, and, I'm not going to do it, but if I punch a hole in this cup, there's my elimination. That's my model; things are easy.
However, in reality drugs may distribute to different compartments, and then we create models where we homogeneously group things according to their rates. That becomes these multi-compartment models, and this is pretty much what you'd consider pharmacokinetics in animal health. We essentially have rate constants and we have apparent volumes of distribution, that is, how big is my cup? But if I put something in my cup and it likes to stick to the edge of the cup, the calculation is going to say that my volume of distribution, because everything is stuck on the side, is three gallons, but it's not three gallons, it's a cup. So it's an apparent volume that basically gives you the proportion between the dose and the plasma concentration.
Secondly, there are clearances: how fast is this stuff eliminated? This is linked to biology. From a biological perspective we want to know about clearance and volume; we tie those to physiology. From a regulatory perspective you don't actually measure clearance or volume, you measure rate constants and slopes, so it becomes an interesting question of how you actually look at this data. I don't want to go into how these models are constructed. They're built from very simple differential equations. Mathematically they're relatively simple, but really it's only data simulation. These models are not based on actual biology, and the volumes of distribution may not have inherent physiological meaning because they're just proportionality constants.
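To make the cup analogy concrete, here is a minimal one-compartment intravenous bolus model sketched in Python. The drug, dose and parameter values are invented for illustration and are not any particular product's kinetics.

```python
import math

def one_compartment(dose_mg, v_l, cl_l_per_h, t_h):
    """Plasma concentration (mg/L) after an IV bolus, one-compartment model.

    C(t) = (Dose / V) * exp(-(CL / V) * t), where V is the apparent volume
    of distribution and CL the clearance; CL/V is the elimination rate constant.
    """
    k_el = cl_l_per_h / v_l
    return (dose_mg / v_l) * math.exp(-k_el * t_h)

# Hypothetical drug: 500 mg IV bolus, V = 40 L, CL = 5 L/h
c0 = one_compartment(500, 40, 5, 0)   # initial concentration, Dose/V = 12.5 mg/L
half_life = math.log(2) * 40 / 5      # t1/2 = ln(2) * V / CL, about 5.5 h
```

Note how the "slope" a regulator sees (the rate constant CL/V) mixes the two biological quantities, clearance and volume, into one number.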
The history of pharmacokinetics and animal health
Now I'm going to get in a little bit of trouble, because this addresses your question of what is cutting edge science versus what we do today. An Australian professor, Desmond Baggot, working out of the University of Illinois back in 1977, published a textbook, which I saw in Phil's office yesterday, on the principles of drug disposition in domestic animals. The way he used to do pharmacokinetic modelling was with what was called an analogue computer: he created compartment models with electrical analogues for clearance and volume of distribution, so you actually wired it up with capacitors and resistors, and then you saw a plot and tried to see whether your data could match that plot. He did some elegant work back when there were no digital computers and essentially developed many approaches that, as I'm going to show you, are the same type of approaches embedded in a lot of our animal health regulations now. This is four decades old.
A lot of this work could be easily done on analogue computers, and then you could do all of your kinetic models using this thing, which is a slide rule. I have a slide rule in my office, and when graduate students complain that their computers are not fast enough I say, "Here, you can do this." It used to work because people actually knew what slide rules were. Now it's getting to be a problem.
The 1980s saw the introduction of digital computers, which broadened the application of kinetics to animal health issues. Early programs were written in Fortran, data was entered on punch-cards, and if you made a mistake on a punch-card, or if the machine creased it in the process of running it, guess what, nothing happened. You'd have to go back and re-run these over and over and over again. Many current pharmacokinetic applications in veterinary medicine are based on this technology. The tolerance limit test for determining your withdrawal times at USFDA is based on a Fortran program. The official program is written in Fortran, and everyone else has to figure out how to simulate it to get the same results. Now it can be done on an Excel spreadsheet, which is loaded on my smartphone, so in reality the world has really changed, but the regulations haven't.
This shows up, for instance, in withdrawal times: one must get a log-linear decay within the liver or kidney, because this type of approach can't handle anything that's not log-linear. That's fine, but guess what, animals have dispositions that are not log-linear, so we basically have to shift things around in order to come up with something that works.
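As a rough illustration of what a log-linear withdrawal-time calculation involves, here is a simplified Python sketch. It fits a straight line to log-transformed tissue concentrations and pushes the line up by a fixed number of residual standard deviations; the actual regulatory tolerance limit method uses proper statistical tolerance factors, so the k_factor here is only a stand-in, and the data below are hypothetical.

```python
import math

def withdrawal_time(times_h, conc_ppm, tolerance_ppm, k_factor=2.33):
    """Simplified log-linear withdrawal-time sketch.

    Fits ln(concentration) vs time by ordinary least squares, then finds the
    time at which the fitted line plus k_factor residual standard deviations
    falls to ln(tolerance).  k_factor is a stand-in, not a regulatory value.
    """
    n = len(times_h)
    y = [math.log(c) for c in conc_ppm]
    tbar = sum(times_h) / n
    ybar = sum(y) / n
    sxx = sum((t - tbar) ** 2 for t in times_h)
    slope = sum((t - tbar) * (yi - ybar) for t, yi in zip(times_h, y)) / sxx
    intercept = ybar - slope * tbar
    resid_sd = math.sqrt(sum((yi - (intercept + slope * t)) ** 2
                             for t, yi in zip(times_h, y)) / (n - 2))
    # Solve intercept + slope * t + k_factor * resid_sd = ln(tolerance) for t
    return (math.log(tolerance_ppm) - intercept - k_factor * resid_sd) / slope
```

A depletion curve that is not log-linear violates the straight-line fit this whole calculation rests on, which is exactly the problem described above.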
Over the last 20 years digital computers advanced, they shrank, and the internet was born. Now it's an entirely different world. The data can be out there in the cloud, and the computational ability of, again, a smartphone dwarfs most cutting-edge, large-scale mainframe computers of 30 years ago. When you actually look at the power of computing using parallel processing, this kind of data can be run in almost no time at all. Programs such as WinNonlin (now Certara's Phoenix) and SAS allowed pharmacokinetic analyses to be easily run by non-kineticists, which is an interesting problem.
In 2000 we published the first population pharmacokinetic study in vet medicine using another horrendously difficult program, NONMEM. This was in 2000, so this isn't something done just a couple of years ago. In the 1990s population kinetics were already being used in human health; a lot of what I'm presenting to you now consists of pretty well routine studies there. What's happened now is that pharmacokinetics done in individual animals, which is how we tend to do it in animal health, is moving to populations. The approach called pharmacostatistical modelling has taken root. That's where population kinetics comes from.
As I show you what we're going to be talking about, I'm not saying we should use the population kinetic models that have been used for human clinical trial design. No, that structure isn't there, but the tools used in those kinds of approaches can easily be adapted.
Physiologically based pharmacokinetic models have also come into use. These models essentially look at a tissue, the kidney, the liver, as an actual organ with blood flow to it, which allows incorporation of in vitro data; they have been used in toxicology for 30 years. In fact, if you look at NIEHS or EPA in the US, this is how risk assessment is done for numerous compounds, and there's a lot of work on what is a statistically valid model and on identifiability of parameters. It's a very mature modelling approach. As I'll show you, it's ideally suited to interspecies extrapolation and to disease effects in animals.
Pharmacokinetic/pharmacodynamic approaches were spearheaded by Professor Toutain in France and Professor Peter Lees in the UK at probably about the same time. There's a lot of work on PK/PD models, and some of these have actually been implemented for animal health products.
What are the new pharmacokinetic strategies that can be used in animal health?
None of these are new; only the application to vet med could be considered new. One of them is allometric analysis. This has been around since the 1920s or 1930s, and it essentially tries to scale physiological processes between different species. The idea is that most of the processes, if you compare, say, a mouse to a horse, are basically a function of body size and basal metabolic rate, which correlates with surface area, and therefore a milligram-per-kilogram dose is incorrect: it should be milligrams per kilogram raised to some power. Using this kind of data, when it works, as I'll show you in a few seconds, you can extrapolate between mice and horses with no problem at all.
What's really important goes back to what I said about veterinary medicine. I have a dog here, but dogs can range from Chihuahuas to Great Danes to St Bernards. In fact, from a cancer chemotherapy perspective, you will actually use allometric, body-surface-area dosing to scale anticancer drugs within dogs. Why? Because of the toxicology. Because you haven't got a big window of safety to figure out what the effective dose is, and in most cases, as in human medicine, you're managing the toxicity, not avoiding it, in order to get effective anticancer therapy. A milligram-per-kilogram dose would be completely the wrong dose for many indications. You have to use an allometric approach.
Essentially, we tell time in terms of minutes. Harold Boxenbaum came up with a number of different allometric time terms, kallynochrons and apolysichrons and so on, and I'm not going to get into this. Essentially think of it this way. A human minute is one minute. Why? Because we invented clocks and we're the ones who control what we're looking at. In reality, a mouse minute is only 0.13 minutes. This becomes really, really important, as I'll touch upon with another approach in nanotechnology and pharmacokinetic modelling: when you actually start looking at how to extrapolate between mice and humans, the time scale is different in different species.
People have done all kinds of oddball things with this. For instance, they've counted the number of heartbeats over a lifetime, and pretty much all species have the same number of heartbeats; a mouse just gets through them really fast. This approach relates to cardiac output and the rate of blood flow to organs, and that is the determinant of biodistribution.
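The body-weight power law behind all of this can be sketched in a few lines of Python. The reference clearance and species weights below are invented; the 0.75 exponent for clearance and roughly 1.0 for volume of distribution are the typical values the talk mentions.

```python
def allometric_clearance(cl_ref, bw_ref_kg, bw_target_kg, exponent=0.75):
    """Scale clearance between species as CL = a * BW ** exponent.

    The exponent 0.75 is the typical value for clearance; volume of
    distribution usually scales with an exponent near 1.0.
    """
    a = cl_ref / bw_ref_kg ** exponent   # species-independent coefficient
    return a * bw_target_kg ** exponent

# Hypothetical drug: mouse (0.02 kg) clearance of 1.0 mL/min scaled to a 500 kg horse
cl_horse = allometric_clearance(1.0, 0.02, 500.0)
# Because the exponent is below 1, the per-kilogram clearance (and hence the
# mg/kg dose rate) is far lower in the horse than in the mouse, which is why
# straight mg/kg dosing across species is wrong.
```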
If you actually look at a series of studies and do an allometric analysis, you would compare drug disposition in, say, rats, guinea pigs, rabbits, dogs, sheep and horses. This is a log-log scale, so everything pretty well fits on some line. The key is that there is some difference in clearance between, say, a rat and a horse, and it is relatively predictable. Using the FARAD database, which is a huge multispecies, multidrug kinetic database, we did an analysis about 20 years ago on 45 drugs. Now we've done an analysis with 85 drugs covering all the veterinary species and laboratory animal species and, in some cases, people, which we just considered another source of data. Each of the 85 drugs had to have at least four to five species and replicate studies within a species. You can see here, plotted against body weight for ampicillin and chlortetracycline, clearance on the top line and volume of distribution here. We have excellent relationships between these species, which means that if we actually want to extrapolate an ampicillin or oxytetracycline dose across species, it's relatively easy to extrapolate dosages, although you will see an outlier over here with llamas.
When it works, it works. It works for about 35 to 40% of veterinary drugs. It doesn't work for the other 60%. We use this as an indicator: since it doesn't work for 60%, for those drugs you will not be able to use an allometric analysis and you're going to have to use something else. This gets used often just in dose finding and dose ranging.
We did an analysis, and this didn't come out too well in the slide, but there are some animals here that are bold-faced. These are outlier animals. In fact, the classic example in the human area is diazepam, for which we can predict pretty much across our veterinary species, but the human is an outlier. If you go to a human conference and someone talks about allometry, this is called vertical allometry; it's an outlier along that line. Essentially what we can do along this line is come through and find out for which kinds of drugs we actually have different pharmacokinetics that can't easily be extrapolated. The difference between these species is usually something to do with metabolism or with protein binding. If you look at the average coefficient it's about 0.75 to 0.8. That's good as an average, but there's a whole bunch of drugs for which that's not the coefficient for clearance. Usually volume of distribution scales with a coefficient of about 1.0, but not for all drugs, and you can see a number of them are different.
All this gives is a very, very high-level look at comparative pharmacokinetics across species, and it shows that some drugs are easy to scale, some drugs are not, some species are well behaved and some species aren't.
The next approach people use is population pharmacokinetics. This essentially takes a simple pharmacokinetic model of volume of distribution and clearance but adds statistical correlates to it, and tries, from a limited amount of data, to predict the pharmacokinetic parameters as a function of disease, as a function of age, as a function of co-exposure to another drug. This becomes important in toxicology because with co-administration of two drugs, one may alter the metabolism of the other, and these types of models can link that together.
You assess different models by adding in what are called covariates, and you assess goodness of fit by what's called a minimum objective function. We're not going to worry about any of this. We also use another approach: we split a population into two groups, model with a study population, and then use a validation population to see if our model predicts it. This is all routine. It is routinely embedded in the Phoenix WinNonlin programs and routinely used in human drug development in phase I, phase II and phase III trials.
Essentially you come up with a simple pharmacokinetic model with clearance and volume of distribution. You then relate clearance to something you think might actually be important. For a drug that's eliminated by the kidney, maybe you correlate it to serum creatinine. Maybe the volume of distribution is a function of body weight, and therefore our fixed effects are now data that we input. For an animal we input the weight, we input the serum creatinine concentrations, and then we use our drug concentrations.
Then we create a model that gives us intra-individual random effects, what's different within any individual, and inter-individual random effects on these parameters. All this does is allow you to partition the variance in the various statistical models.
What happens is, if you have a drug where body weight is a factor and you look at the predicted versus observed concentrations, you realise you're making a mistake over here. The way you can think of this is with a residual plot. Here's the observed minus the predicted, here's the difference. You look at the difference and plot it against a covariate and, lo and behold, body weight seems to be a covariate. If I now create a model that has body weight as a covariate, my predictions are right on and my residual plot is random. It's this kind of approach that's used to fit these models.
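The residual-versus-covariate diagnostic can be illustrated with simulated data in Python. Everything here is synthetic: clearance is generated so that it truly depends on body weight, and the two models are deliberately simple stand-ins for a real population analysis.

```python
import math
import random

random.seed(1)

# Synthetic population: clearance truly depends on body weight (CL = 0.1 * BW),
# plus about 10% random inter-individual noise.
bw = [random.uniform(10.0, 60.0) for _ in range(50)]
cl_obs = [0.1 * w * random.uniform(0.9, 1.1) for w in bw]

def corr(x, y):
    """Pearson correlation coefficient."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return sum((a - mx) * (b - my) for a, b in zip(x, y)) / (sx * sy)

# Model 1, no covariate: predict every animal with the population mean.
mean_cl = sum(cl_obs) / len(cl_obs)
resid_no_cov = [o - mean_cl for o in cl_obs]

# Model 2, body weight as covariate: fit CL = theta * BW by least squares.
theta = sum(o * w for o, w in zip(cl_obs, bw)) / sum(w * w for w in bw)
resid_cov = [o - theta * w for o, w in zip(cl_obs, bw)]

# Residuals from the no-covariate model trend strongly with body weight;
# adding the covariate removes the trend (the residual plot becomes random).
print(corr(resid_no_cov, bw), corr(resid_cov, bw))
```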
You can look at another aspect: a traditional model would take your plasma concentration, with some variance here from random error, and you'd sample at very specific time points to fit the line. A population model doesn't really care when the time points are collected, so it allows you to collect only a few samples from each individual but from a whole lot of individuals. You then fit the model and everything goes well. Where is this important? When there are two populations. What are the two populations? Fast acetylators and slow acetylators, as in humans. We found some pig farms, you know, a pig farm over here and a pig farm over here, looking at disposition of simple drugs, and you get two entirely different populations. Is that a function of diet, housing, breed of the pig? Who knows?
The key is that in this case you have two populations actually existing out there, and if you collected the data without any covariates you'd have a really lousy model, and someone is going to turn around and say, "We can't use that for a regulatory approval because there's no slope and it's a mess." If you actually have a covariate that explains it, you have two well-predicted models; it's just that there's an additional covariate in there. This is being used all the time on the human side in dose structures and so on to identify specific populations.
To give you an example with flunixin in cattle: another thing we use this for is that the withdrawal time is determined from tissue data, and yet, remember, before the drug can get to the tissue it has to go through the blood. That's the base plasma pharmacokinetic model. Another thing population pharmacokinetic models allow you to do is create a model that can go back into existing data sets in the literature, use all the data that defines disease effects on plasma disposition, link it to what happens within the tissue, and then determine whether there is an effect of, say, a disease process on the withdrawal time.
If you do that, you get a model that ties in two peripheral compartments. Remember, we have a lot of data now: we have intramuscular, subcutaneous and oral administration, and then we have liver data. This becomes linked. We can predict the plasma concentrations, we can predict the liver concentrations, and we can calculate a withdrawal time based on the 95% confidence limit and come up with what that withdrawal interval would be. What's interesting, as I'll show you with flunixin, is that if we assume healthy animals given the drug by the approved route, it's fine. But guess what? If we dose an animal with the disease that's on the label, it doesn't work, because there's a longer half-life. Why did we get into this? Because the US FARAD program and USDA were detecting flunixin violations in dairy and other animals, and on looking into it, the producers were actually using the approved dose level. So we started asking, why could that be happening?
Findings from the flunixin work pretty much showed that in many cases, in treated populations, the withdrawal time was longer. Again, if we use healthy animals we get the same result as the tolerance method. Essentially the variability was due to disease states and different field conditions, but what was nice is that we could link these together in the same tissue. Once we have a link between plasma and a target tissue, our model becomes applicable across those tissues.
There is little debate that disease alters pharmacokinetics. There was a classic study done in 1977 by Nouws showing that, guess what, most residues occurred in culled dairy cows. Why? Because the culled dairy cows were the animals that had actually been treated with drugs, and disease had altered the disposition. For flunixin, work was done a long time ago indicating that NSAIDs bind to inflammatory products, so if you have an inflammatory disease you can essentially alter the drug's disposition because of the products of that disease.
If you actually start looking at what happens with flunixin, penicillin in swine, penicillin in cattle, and at what the withdrawal time could actually be, you find that something like a 20% to 30% change in the elimination rate constant in disease starts pushing it up to the edge of that tolerance limit. Realise that your tolerance limit is created with healthy animals and tries to come up with this 95% confidence. What occurs in disease is that the slope of this line gets shallower and shallower, because the clearance is different, and that pushes out what the potential withdrawal time would be.
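The arithmetic of that shallower slope is easy to sketch. In this hypothetical Python example, the starting tissue concentration, elimination rate constant and tolerance are invented round numbers; the point is only that a 25% drop in the rate constant lengthens the time to reach the tolerance by the same proportion.

```python
import math

def time_to_tolerance(c0_ppm, k_per_day, tolerance_ppm):
    """Days for a mono-exponential tissue depletion to fall to the tolerance."""
    return math.log(c0_ppm / tolerance_ppm) / k_per_day

# Invented numbers: 100 ppm starting residue, tolerance 0.1 ppm.
healthy = time_to_tolerance(100.0, 0.5, 0.1)          # healthy rate constant
diseased = time_to_tolerance(100.0, 0.5 * 0.75, 0.1)  # rate constant down 25%
# diseased / healthy = 1 / 0.75, i.e. a third longer to clear the tolerance
```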
We've done these types of models for, say, penicillin in cattle and swine, for cattle liver and kidney, and for sulphonamides. Here's your model predicting plasma; here are your kidney predictions; here are your liver predictions. This works out pretty well. We are now actually exploring with US FDA the use of this type of approach as a way of getting a label change to extend the withdrawal time purely because of the pharmacokinetic differences. The interesting aspect, as I'll touch on at the end of the talk, is just that people have different ideas as to what statistics are.
We use this as a meta-analysis framework to take a look at what happens in pharmacokinetic modelling. Again, it's fairly well accepted; we've published this work in major pharmaceutical science journals. Again, this is not rocket science. It's just a matter of defining what data needs to be collected to solve the models. The second approach is physiologically based pharmacokinetic models. You can make these models at any level of complexity you want to deal with: very, very simple models, very complex models, or models focused on one specific aspect.
For instance, if we're trying to figure out what's going on with the nose of a cow, we don't really care what's happening in the rest of the cow; we're only trying to get a prediction there, so our models can be focused on collecting data there, and for the rest of the cow we're not going to try to get statistical inference. The models we construct are based on what we want to do with them.
Physiological‑based models use a series of equations defining rate in and rate out of specific organs. Then we connect these models with blood flow to the organs, and then our models pretty much predict what happens across all tissues. Here's a model of oxytetracycline that we recently published in the Journal of Pharmaceutical Sciences that essentially allows us to easily extrapolate between dogs and humans. There are large published datasets to handle this. We know what's going on for the oral absorption here. We essentially have the blood connecting these different organs and then you can make inferences about what happens in these different tissues.
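A minimal flow-limited sketch of that organ-level bookkeeping might look like the following. The flows, volumes and partition coefficients are purely illustrative, not the published oxytetracycline parameters:

```python
# Flow-limited PBPK sketch: each tissue obeys dCt/dt = Q*(Cp - Ct/P)/Vt,
# where Q is blood flow, Vt tissue volume and P the tissue:plasma partition
# coefficient. All parameter values here are illustrative, not fitted.
TISSUES = {              # name: (Q in L/h, Vt in L, P)
    "liver":  (90.0, 1.5, 4.0),
    "kidney": (60.0, 0.3, 3.0),
}

def simulate(c_plasma, k_el=0.2, hours=24.0, dt=0.001):
    """Euler-integrate plasma (first-order elimination) and tissue levels."""
    c_tissue = {name: 0.0 for name in TISSUES}
    for _ in range(int(hours / dt)):
        c_plasma -= k_el * c_plasma * dt                      # plasma elimination
        for name, (q, v, p) in TISSUES.items():
            c_tissue[name] += q * (c_plasma - c_tissue[name] / p) / v * dt
    return c_plasma, c_tissue

cp, ct = simulate(10.0)   # after 24 h, tissues track plasma at roughly P-fold levels
```

Changing species then amounts to swapping in species-specific flows and volumes while the model structure stays fixed, which is what makes the interspecies extrapolation straightforward.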
When you start looking at the predictions we can predict pretty much oral administration, different approaches and routes of administration and these can be cleaned up a lot more if you actually have the data matching what the formulation is. What this also allows you to do is probe, is it really a formulation effect on something? You can do all this without redoing all of these studies. You essentially create the model, build it based on your base aspect of the model, and now you can tweak it by adding in other types of approaches.
Here's the flunixin physiological‑based pharmacokinetic model, which ties things together very similarly, but in this case we have metabolism. Metabolism up here. We have milk clearance. We have the 5‑hydroxy flunixin metabolite produced. Using the same data that we used for the population model, we can create a physiological‑based model. That allows us to now really probe: what's the effect of disease?
In that case we found the big problem was going to be altered clearance in healthy versus mastitic cows. We did a field study in which we looked at healthy cows and at mastitic cows which had the conditions on the label for using flunixin. Eight out of ten of those diseased cows had violative residues, which was easily predicted from this. We've duplicated this study with the Agricultural Research Service in Fargo, North Dakota under FDA funding. The same situation occurs. The problem here is that the withdrawal time is determined in healthy animals but the drug is labelled for diseased animals, and the 95% confidence covers you for mild disease but it doesn't cover you for more severe disease. The reason we got into this whole situation was that violative residues had been reported; the FDA and USDA did milk survey studies and found violative residues, so obviously for this drug there was an issue.
There was work done in collaboration with some Chinese colleagues on cyadox pharmacokinetics. Again, you can look at this model that pretty much predicts both the disposition of the parent drug and the metabolite. We used this for melamine risk assessment. The toxicology focus is that back in 2008, dog food was contaminated with melamine, and where do people normally get rid of their old dog food? They feed it to pigs. In this case it wasn't old dog food, it was melamine‑contaminated dog food. Twenty thousand hogs were fed the contaminated dog food and the FDA and USDA held them. What do we do? We did a very fast population pharmacokinetic study. We actually did a rapid IV study in pigs to get an idea of what the clearance is.
We then created a model, based on existing rat data, that tied together the disposition of where melamine was going. Most of the melamine was ending up in the kidney and urine. That's what causes the disease process. You create a pharmacokinetic model and realise only a 24‑hour withdrawal time was actually needed to clear it out of pigs. That went forward. These pigs had been held in quarantine for a couple of months while this analysis went on. The withdrawal time was 24 hours, so the pigs were let go and marketed with no effect.
It's a useful tool for describing tissue disposition. It allows incorporation of metabolism and variability. It allows data collected from different studies, and in vitro data, to be integrated into a single model. Excellent for interspecies extrapolations.
Now I want to show you where life can get a little complicated. This is a slide I stole from Nancy. She is going to talk about some nanomaterial stuff later but there's a lot of work and I know you've been looking at nanomaterial effects. We've done work on nanomaterial pharmacokinetics and physiological‑based pharmacokinetics and I'm going to give you an insight into where nanoparticles aren't like chemicals.
One, from a disposition perspective, they are fundamentally different. The reason is that one must first distinguish between the properties of the core and the surface. The surface probably determines where the particle goes; the core determines what the particle does. The concept of protein binding is completely different. Why? Because the nanoparticles are bigger than the proteins. What we're looking at, as I'm going to show you, is a very different paradigm that has now been shown in laboratory animal studies, and in very little human data, to actually be a relevant process.
Membrane transport is via endocytosis or some cellular engulfing process, not simple molecular diffusion or simple transport. When a pharmacologist thinks of transport and a transport protein, it doesn't work that way in nano. It's the whole membrane actually invaginating. Finally, we aren't dealing with a single particle. You can have a drug with a defined molecular weight, say 147.67. That doesn't happen with nanoparticles. It's a colloidal aggregate of particles that has a mean size, but there is a distribution of that size. Which means if we want to do pharmacokinetic studies, and we still haven't got this worked out that well, we actually need to model the entire distribution going forward.
This is the biocorona paradigm that essentially, if you look at different mechanisms of uptake, a large amount of work has been shown using native particles looking at cellular uptake and you get a certain rate of uptake. Nancy has shown, I think she has a slide later on, that actually if you coat these particles with different proteins or actually expose them to plasma, guess what, the rate is entirely different and where they go in a cell is entirely different. First of all, that means that probably about 70% or 80% of in vitro nanotoxicology work is wrong. Period. That's been shown by a number of other methods, that we really have to start thinking how do we deal with the corona because essentially the corona is the face of the particle to the biological interface. A lot of work has been done on this over the last four or five years.
From a kinetics perspective, we realised there's another issue, and that is that it takes time for this corona to form. Initially, think of the particles as little velcro balls. They essentially get in there, they bind up all kinds of stuff, and over an hour, two hours, three hours, four hours, for some particles 24 hours, the corona changes as it's exposed to a different biological milieu. If it's inhalational, it's exposed to surfactants. It gets in the blood, it's exposed to plasma proteins. It gets into a tissue, it's exposed to interstitial proteins, and that corona changes. The nature of the particle changes over time. In many cases, within a short duration the opsonin‑type proteins bind to it and it gets engulfed and taken off to the reticuloendothelial cells. The only strategy to avoid this now is to put polyethylene glycol on the outside; those particles hang around a long time and don't get engulfed as fast, but slowly they accumulate a corona and go someplace else.
The time factor is important. What we realised is that if you look at in vitro studies and at mouse studies, where 90% of the work has been done, you essentially have a corona that might form, but it might take an hour or two hours or three hours in vitro. Guess what happens when you give it to a mouse? You don't have a three‑hour circulation time. You have a five‑ to ten‑minute circulation time, so a mature corona doesn't have a chance to form. If you go into a human, a mature corona forms and the targeting moieties that might have worked nicely in vitro and in a mouse don't work at all. This helps to explain the horrendous failure rate in phase I human clinical trials in getting some of this material up. People are starting to recognise this now.
I want to show you, we talked about it, if you look at this, you're essentially looking at a nanoparticle corona evolving over a fixed time scale. Then you get these rate constants to get it to the targeted cell, and if another corona forms it goes to the reticuloendothelial system. Now that you're all experts in allometric modelling, guess what, these are the allometric equations that determine distribution of blood flow to the target cell or reticuloendothelial cell, but this is not an allometric exponent. This is a fixed component based on temperature and stoichiometry: what are the concentrations of these products? Which means for you to extrapolate what happens between species, you need to take this rate difference into account. I don't want to go into too many equations, but you can actually calculate this out pretty effectively. If you want to read it, look at that nanomedicine paper.
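The contrast can be put in numbers. In the sketch below, blood residence time scales allometrically with body weight while the corona maturation time is fixed by chemistry; the coefficients, the W^0.25 exponent for residence time and the one-hour maturation time are all hypothetical, chosen only so the mouse value lands near the five-to-ten-minute circulation time mentioned above:

```python
CORONA_MATURATION_H = 1.0   # h; set by temperature/stoichiometry, NOT body size

def allometric(a, weight_kg, b=0.25):
    """Allometric relation Y = a * W**b; residence times scale roughly as W**0.25
    (blood volume ~ W**1 divided by flow ~ W**0.75)."""
    return a * weight_kg ** b

# Illustrative blood residence times before reticuloendothelial capture.
residence = {species: allometric(0.4, w)
             for species, w in [("mouse", 0.02), ("human", 70.0)]}

# Mouse: captured before a mature corona can form.
# Human: the corona matures in circulation and re-targets the particle.
mature = {s: t >= CORONA_MATURATION_H for s, t in residence.items()}
```

Because one clock scales with body size and the other does not, no single allometric exponent can carry a mouse result across to a human; the two rates have to be modelled separately.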
What happens is, for example, a corona that has a half‑life of something like six hours but the distribution is 1.5 hours versus 24 hours and there's a number of particles approaching the market that have these characteristics. Target cell uptake in the mouse versus reticuloendothelial cell uptakes. Here's the target cell, here's the RES cell, stuff all goes off to a different target versus humans, you get entirely different distribution. This is something for injectable nanomaterials and nanomedicines we have to start thinking of.
We can also handle this with physiological‑based models: we can create models and then look at the reticuloendothelial cell uptake. In this case we don't have to worry about allometry because our blood flows are species dependent. Then we deal with what happens to the corona. We've already learnt from work on silver nanoparticles that the rate of uptake and the kind of phagocytosis for small versus large particles are entirely different. That can be expressed in terms of how they actually access the reticuloendothelial cells. This allows you to make easy extrapolations between species.
Then we did other work, which I do not have the time to go into, trying to figure out whether we can use a mouse model to get to the human model, and the answer is no. What we found is that we can use a low‑dose rat model because we're not saturating the reticuloendothelial cells. This model takes the same structure but changes the blood flow and some tissue characteristics. Relatively easily we can use a pig model for that extrapolation and actually take rat data. Using a mouse model doesn't work. Using the other models works relatively well and makes predictions across species. It requires a lot of data. Part of the problem with the mouse, in addition to its size, is that the sinusoidal structure of its liver is very different from other species'. That really affects nanoparticle disposition.
My conclusions on the kinetics, before I get into another five‑minute dialogue about how to bring this into a regulatory system: pharmacokinetics is a tool. That's it. I'm not selling any specific kinetic model. Right out of the box we usually are wrong; you need to validate against data, but then you learn something about the disposition. It's great for integrating across populations and species. Each modelling approach has its strengths. Pharmacokinetic models evolve much faster than regulations change. This is where the challenge goes to you: you really need to figure out when a method is too old and when it isn't. Computers and software open up this field to practitioners, but they also make it very easy to make mistakes very rapidly.
Gaps for widespread application of modern pharmacokinetics in animal health
Hopefully this doesn't happen in Australia, where the regulator and the industry face in different directions. I know it does in the US sometimes:
scientific and statistical issues, economic issues, philosophical issues and legal regulatory issues
Science evolves. There's no way of changing that. We have different assays, we use different models, we use different breeds of animals. Phil said I should mention this: we went to the Agricultural Research farm in Fargo and walked through this massive hall that was built for dairy cattle, but there were no dairy cattle in it. We asked, "How come there are no dairy cattle in it?" "Well, dairy cattle are too big now. They are essentially too long, and the way this was built, with the trough in the back, their feet would now be inside the trough." If you actually look at udder mass as a per cent of body mass in modern cows, it's entirely different than it was 30 years ago, yet all of our drugs are approved based on cows that don't exist any longer.
There's a different error structure of the data when we go across different models. The target evolves due to microbial sensitivity, different environmental and nutritional standards, and treated animals actually have different diseases.
How do you integrate subatomic, molecular, subcellular, tissue, individual and population models? The error structure at every level is different. We gave a talk at AIBN in Brisbane last week and Mike was there looking at, how do we integrate, say, a molecular‑dynamic model of a nanoparticle using quantum mechanics to a surface interaction looking at binding? It can be done but guess what, the parameters are different, the statistics are different and you need to work that out. At every level you need to deal with that.
The other problem is sociological too, is that each discipline has its own standards of data, its own statistical testing and its own publication outlets. What is okay for a pharmacokineticist is not okay for an in vitro toxicologist, is not okay for an analytical chemist. We have to combine parametric and non‑parametric data integrating omics with observational work. What are false positives? What are false negatives? What are we actually looking at? Non‑linear networks can do this fairly well but these have never been applied in a regulatory environment and the error structures are entirely different.
A good example: if you look at a drug treating a disease and start looking at what the effects could actually be, there is overlap, and there's an interesting study on this published about three or four years ago. You can look at drug effects versus disease effects, and the biomarkers and everything are all connected. You may actually be treating a disease, let's say a renal disease, but then your compound causes nephrotoxicity, like cyclosporine; you can't just have a simple biomarker, it's more complicated. Work that Nancy did on dietary supplements for dog kidneys came up with something similar. As you look at the expression of a lot of these genes, you essentially have responses that aren't one biomarker but two or three biomarkers.
We try to come up and say one enzyme will go up. That might work in some of these dogs, but in other dogs two different genes may be turned on that reflect the same effect. This is common if you look at any toxicological profile now. It's not one marker going on, it's gene networks turning on. How do we define those kinds of biomarkers? That's a challenge.
Sociological and cultural issues. Each discipline has defined normal and acceptable. It's really important to try and figure out what that is. What is important? What is the effect tested versus noise? A lot of the standard regulatory tox kind of testing and very structured disciplines will really be looking at a lot of factors. It may really just be noise and it might be a lot better to focus on what the actual outcome is rather than the entire system.
Adopting new tools takes time. Essentially the garbage‑in garbage‑out model, garbage data with a perfect model gives you garbage results or perfect data with a garbage model also gives you garbage results. If you're writing models and have no idea what's going on, beware the blind driver. You can make a lot of mistakes. The point is that people have to try and work together to do this.
To give you an interesting perspective on this, if you look at experimental design, we are trained to block subjects between treatments. Most studies, especially agricultural studies, are based upon randomised block designs. This assumes we know what the heck to block on. What we learnt from population kinetics is that the factor that's really determining altered disposition in an animal is not something we knew about before we gave these drugs to the animals, so maybe we were actually blocking on the wrong thing.
Disease is nonlinear and multifactorial. Have we simplified too much in the way some of our models are going? Do we need to come up with approaches that give a little bit more variability and flexibility in the responses?
We model for different reasons. For discovery, for prediction, for validation and regulation. I'm going to give you a conceptual idea of this. Here are four data sets:
0.8, 1.2, 1.0
1.8, 2.0, 2.2
2.8, 3.0, 3.2
3.6, 4.0, 4.4
How do you actually do a traditional experimental design? One is you can do an analysis of variance and do a means test:
1 vs 2 vs 3 vs 4
These are statistically different. But I could also do it by trying to fit a regression across this, and in that case I determine, do I have a statistically significant slope? If I have a slope, you essentially have a dose-response relationship. It may fail this test but pass this test. The way the population PBPK modelling would do it is basically to say, my model predicts 1, 2, 3 and 4, and essentially, are 1, 2, 3 and 4 within a confidence interval predicted by the model? It may fail one specific slice of this, but in reality, if my model prediction holds and I have more data to validate it, I have a much more robust model across a much broader range. All of these are acceptable statistics, but your regulation may be written around just one of these approaches.
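Those three readings of the same numbers can be sketched directly on the four data sets above (treating each line as the replicates of one group, in order). The model's predictions and the half-width of its interval below are hypothetical stand-ins for a real model-derived confidence interval:

```python
# The speaker's four data sets, one group per line.
groups = [
    [0.8, 1.2, 1.0],
    [1.8, 2.0, 2.2],
    [2.8, 3.0, 3.2],
    [3.6, 4.0, 4.4],
]

# 1) Means test: compare group means (a full ANOVA would test them all at once).
means = [sum(g) / len(g) for g in groups]            # 1.0, 2.0, 3.0, 4.0

# 2) Regression: fit response vs dose level; a nonzero slope = dose-response.
xs = [i + 1 for i, g in enumerate(groups) for _ in g]
ys = [y for g in groups for y in g]
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))

# 3) Model prediction: do all observations fall inside the model's interval?
predicted = [1.0, 2.0, 3.0, 4.0]    # hypothetical model predictions per group
half_width = 0.5                    # stand-in for a model confidence interval
within = all(abs(y - predicted[i]) <= half_width
             for i, g in enumerate(groups) for y in g)
```

The same twelve numbers pass all three tests here, but the three approaches ask different questions, and a regulation written around only one of them can reject an analysis the others would accept.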
Regulatory procedures are obviously not based solely on science. You have science day today, you have a regulatory day tomorrow. They are structured to provide an open and level playing field, fair access to markets, fraud prevention, societal guarantee of safety and efficacy and a return on investment. All of these are important. This is the structure of a regulatory system, this promotes trade and commerce, this should be done. But as we all know, everything doesn't happen the way it should happen.
The problem I realise now is that international harmonisation aims to define acceptable common denominators and codify them. This is really important to get everybody up to a base level that we know is transparent. The problem is that to do it, it freezes the science not just at the point when the agreement was reached but at what was the status quo five or ten years before the agreement was reached, and then it's fixed. Guess what, science keeps on evolving, we keep on learning how to do this stuff, and it becomes a major problem how to integrate that into regulatory science.
I also think the goals of scientific discovery and regulatory approval are in direct opposition to one another in many cases. Regulations are based on historic precedent, on legal decisions, on treaty decisions, and that's based on science that might have been good 20 or 30 years ago. I'm not knocking using Fortran programs to draw straight lines across tissues. That was state-of-the-art 30, 40 years ago. Things like fixed metabolite ratios, that was a breakthrough in 1970. It's not a breakthrough in 2015, and again, if that data went forward to some journals it would actually be rejected. How do we manage this trade-off in a legal system?
Regulations can't anticipate new science and paradigms, especially when the structure of the data and the endpoints are fundamentally different. I'm drawing extremes here, but in 1970 if somebody showed you a heat map of something you'd have absolutely no idea what was going on. How can a regulation adapt to, say, pharmacogenomic approaches? New analytical methods, new hazards and safety endpoints, new data formats, longer time horizons. It's impossible to take a regulation written 30 years ago, before any of this was known, and actually figure out how to adapt to this. My model of a regulatory system, and you can really get mad at me here, but I've done this and showed FDA and they realise it, is: how do you change it? It's a very structured program that took a lot of work to define what specific tests need to be done. What's interesting is that a lot of this was based on computer and data input that's very old. Cards, discs, keys, this is gone now. Essentially everything is up in the cloud.
What happens when you have new science that is well established within their own disciplines, how do you plug it into that regulatory system? I'm not saying this is bad. I'm saying that's absolutely crucial. It's really crucial to control what's happening across different countries and across different jurisdictions but you also have to realise that the endpoint is evolving, the products are evolving and how do you create a more flexible regulatory system.
Toxicity Testing in the 21st Century, which the National Research Council published in 2007, is an approach that tries to start addressing this. You characterise the chemicals, you characterise the mode of action, you look at dose-response relationships in a relevant model, and you tie this together with physiological‑based modelling and population modelling to see how well you can actually make a prediction. What does this say? It doesn't say laboratory animals are wrong, but it says that a laboratory animal, the mouse, may be a fantastic model for efficacy but an incredibly bad model for disposition. You might have to use a rat. You have to tie the populations together, and it's a continually evolving area.
For chemical risk assessment, in the United States at least and I think in some European countries, 15 years from now this will be the norm. Standard laboratory-based testing, because of animal welfare considerations, is going to disappear, and it's going to be a combination of in silico, in vitro and very focused testing. The linkage between a lot of that is actually physiological‑based kinetics and in vitro models. That's going to happen. The key is just going to be how, in an area like animal health regulation, we keep up with it.
I hope I showed you that pharmacokinetic models are a tool that provides a structured framework to describe drug and xenobiotic disposition. Existing approval regulatory models and statistical tests are based on the state of the art but these could be decades old. How do we take these modern approaches that are mechanism‑, physiological‑, and population‑based versus older reductive approaches and move that into a regulatory environment? It's a challenge. How do you "regulatorize" these approaches? It's something that we need to deal with but I think more importantly, something that as you create new regulations, you need to create and implement and embed that flexibility into the regulations.
I want to thank a lot of people that did all the work. This is the Kansas group. This is various North Carolina groups over 20, 30 years. A whole bunch of people who paid for this, USDA, NIH, FDA, EPA, Kansas and a few pharmaceutical companies. Thank you for your attention.
Errors and omissions excepted; check against delivery.