Welcome to the “People Always, Patients Sometimes” podcast. Our guest is Aiden Flynn, the founder of Exploristics. We worked with Aiden and his team to look deeply into how Spencer, in a simulation study of 6,100 cardiovascular patients, would impact both adherence and engagement. We wanted to determine whether, if adherence increased dramatically as seen with the Spencer platform, we could improve the stroke endpoint and persistence across the medication study. The results were outstanding, and we have a white paper detailing them. Now we’re excited to present this very captivating interview. I know you’ll appreciate Aiden’s insights on “People Always, Patients Sometimes”.
Janet Kennedy (00:49):
Hi, my name is Janet Kennedy and I’m your host for “People Always, Patients Sometimes”, a production of Spencer Health Solutions. We’re speaking with the Managing Director of Exploristics, Aiden Flynn. Exploristics provides analytics, statistics, exploratory data analysis, modeling, and simulation services. There is so much more to cover, so let’s get started. Welcome to “People Always, Patients Sometimes”, Aiden.
Aiden Flynn (01:13):
Thanks, Janet. It’s a pleasure to be here.
Janet Kennedy (01:15):
We’ve actually worked with Exploristics at Spencer Health Solutions. And we’re going to get into that in a little bit, but I’m sure your company does a lot more than the work you did with us. So do you mind taking a moment to tell us a little bit about yourself and about Exploristics?
Aiden Flynn (01:29):
I’ve been working in the pharmaceutical industry for close to 30 years now. I had an academic background. Then I worked at GlaxoSmithKline for 10 years, and then I set up Exploristics 11 years ago. We work with small and large biotech and pharma companies to help them optimize the designs of their clinical trials to make sure they’re generating the right data, to turn that data into the evidence that they need to apply for approval for the drug or to support further investment in development.
Janet Kennedy (02:06):
So are you working with companies in a pre-protocol stage?
Aiden Flynn (02:10):
Ideally, yes. I think one of the bugbears of many statisticians is that they don’t get involved early enough in the protocol development stage. Our preference is to get in as early as possible. In doing that, we can actually influence many aspects of the protocol, not just the statistics section, where we might justify a sample size, for example.
Janet Kennedy (02:36):
So are you working with them to make sure that their drug is actually potentially viable, or are you working with them to make sure that they’re asking the right questions and gathering the right data in order to better prove how the drug is performing?
Aiden Flynn (02:50):
Yeah, it’s both. Ultimately, everybody knows the rate of attrition in clinical development: more than 90% of drugs fail to reach the market having already entered clinical trials. I believe a large part of that is that studies are not appropriately designed, and I believe that statistics and statisticians have a big role to play in that. The way we think, the kind of logical and quantitative approach, can impact lots of aspects of the clinical trial. So yeah, we get involved early to make sure the study is designed appropriately, that we’re measuring the right things, that we’re measuring them in the right way, and that we’re doing the right analysis. We also work at the end of the process as well. We’ll take the data that are generated as part of a clinical trial and do our statistical analysis to demonstrate that the drug works, or otherwise. And actually we complete the feedback loop: anything we get in terms of the results from a clinical trial, we try to feed back into the design process, so that we’re learning from the successes and failures of clinical trials and avoid repeating those failures in future.
Janet Kennedy (04:17):
Part of what you’re doing here is, if you can get involved early enough, you’re not having a situation where they say, “Here’s our data, try to put this square peg in a round hole and prove that we did a good thing.”
Aiden Flynn (04:28):
Yes, that’s exactly why we want to get involved early. I can’t say that that still doesn’t happen. You know, it is common that we will be approached by a customer who will say, “Can you recover something from this wreckage?” And at that point, often there’s really very little you can do.
Janet Kennedy (04:49):
Is that because they’re not gathering enough data, or the right data? What would be the problem there?
Aiden Flynn (04:55):
It’s a whole range of issues. A common one I find is that our customer, the sponsor of the study, is often very optimistic about the effects of their drug on response. They will almost always be too optimistic about that, and therefore they have not designed the study to answer: what happens if your drug isn’t quite as good as you think it’s going to be? When you go into a clinical trial and you start introducing issues such as adherence and persistence, or missing data, or just some noise that is introduced from the operational side of clinical trials, and they haven’t accounted for that in the design process, that adds a lot of variability to the response, and it can lead to a failed study. The other thing I find is that the endpoints many people measure are measured in a way that carries some ambiguity, and they haven’t accounted for that either. So I think there are a few reasons why these studies fail, and we try to work with customers to help them think through the issues and to make sure that they have a plan in place to manage them.
Janet Kennedy (06:11):
So even if a drug doesn’t perform as well, you can probably, if it’s set up appropriately on the front end, get some valuable data that will feed into the next process.
Aiden Flynn (06:23):
Absolutely. I think it’s important that even if a study fails or is ambiguous, we try and learn as much as we can from it. And if the sponsor is in the fortunate position of having enough funding to learn and then look at designing the next study, we will absolutely help them to do that.
Janet Kennedy (06:47):
Well, tell me exactly what a simulation service is.
Aiden Flynn (06:51):
Okay. Well, I can tell you what our own Kerus Cloud platform does. The word simulation is quite broad. The way our Kerus platform works is it builds a large virtual, or in silico, patient population: a patient population that comprises the features you believe to be important in a particular indication. That will include things like what the outcomes or endpoints of interest are, what the patient-level risk factors are that might impact those outcomes, and also how they all relate to each other. So we’re building a kind of complex data set where there are lots of interrelationships. But the point here is, if you can get a good virtual patient population, you can mimic the way a clinical trial would work. And you can ask lots of “what if” questions: “What if I design a study with this particular set of inclusion/exclusion criteria? How should I sample from that population?”
Aiden Flynn (07:59):
“How many samples do I need? Do I sample all at once, or do I take a sample, do some analysis, and then adapt the study? What are the key endpoints that I need to measure? How should I analyze those, and how should I define success for the study?” And what we’ve shown from our simulations is that if you get the right combination of study population, study design, endpoints, analysis approach, and decision criteria, you make a massive difference in the likelihood of success of the study, or its cost, or its duration. And it’s that kind of multi-dimensional optimization that we are bringing to the table.
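The “what if” loop described here can be mimicked in miniature: draw virtual patients under assumed conditions, run the planned analysis, and repeat many times to estimate the likelihood of success. Below is a minimal sketch in Python; the effect sizes, variability, and test used are illustrative assumptions for this example, not anything taken from the Kerus platform.

```python
import numpy as np

rng = np.random.default_rng(42)

def likelihood_of_success(n_per_arm, true_effect, sd, n_sims=2000):
    """Estimate the chance a two-arm trial 'succeeds' (one-sided z-test
    at roughly the 2.5% level) under an assumed effect and variability."""
    successes = 0
    for _ in range(n_sims):
        control = rng.normal(0.0, sd, n_per_arm)
        treated = rng.normal(true_effect, sd, n_per_arm)
        se = np.sqrt(control.var(ddof=1) / n_per_arm +
                     treated.var(ddof=1) / n_per_arm)
        if (treated.mean() - control.mean()) / se > 1.96:
            successes += 1
    return successes / n_sims

# "What if" question: how does sample size change the likelihood of success?
for n in (50, 100, 200):
    print(n, likelihood_of_success(n, true_effect=0.3, sd=1.0))
```

Varying the assumed population, endpoint, or decision threshold in the same loop gives the kind of multi-dimensional comparison the interview describes.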
Janet Kennedy (08:44):
So it’s the seven P’s right? Proper prior planning prevents, et cetera, et cetera.
Aiden Flynn (08:50):
Indeed, indeed. And so, rather than thinking of statisticians as a group of people who will justify a sample size, it’s thinking much more broadly than that and using the available data to help decisions across all of those dimensions.
Janet Kennedy (09:09):
Where does the data come from?
Aiden Flynn (09:11):
That’s a good question, and the data come from a range of sources. For any indication, we will typically trawl through the literature and pull population-level statistics relating to endpoints and risk factors. So we’ll pull that data together. What you typically don’t get from that is the interrelationships between those variables or measurements, so we’ll often supplement it with patient-level data where we can get access to it. What we find with many of our clients is that they will have done previous clinical trials in the space, so we can get access to that patient-level data and work out the interrelationships.
Aiden Flynn (09:55):
If they don’t have their own data, they will often know an investigator, or they’ll have access to a real-world registry where we can get the patient-level data and, again, quantify the strength of those interrelationships. Those are the two primary sources of information. We will also get expert opinion, particularly in relation to what a meaningful treatment effect is in the indication. And this relates to my comment earlier about many sponsors being overly optimistic about how their treatment will perform in a clinical trial. So we’ll take all of those sources of information and integrate them to build this virtual patient population.
Janet Kennedy (10:44):
Now, patients are, well, people, so numbers are great and all this data is fascinating, but people have the potential to throw a monkey wrench into the best formulas. How do you account for that?
Aiden Flynn (10:58):
Yeah, and again, good question. This relates to the ability to ask “what if” questions. You really need to make sure that the study you design is robust against a range of plausible scenarios, and one of those plausible scenarios is absolutely: what happens if something happens with some patients and you have outliers or whatever? You need to make sure that what you’re doing in the study will not be derailed just because you’re getting these discordant measures or outlying values.
Janet Kennedy (11:35):
Because that’s inevitable, right?
Aiden Flynn (11:37):
It is. It’s rare that you have a study where you don’t have something like this, but the truth is, when many statisticians are designing the study, they don’t account for it at all.
Janet Kennedy (11:49):
So they’re assuming that, in a perfect world, they’ll start with a hundred patients and end with a hundred patients?
Aiden Flynn (11:55):
Well, that’s one assumption that is often made. Or there’s maybe a simple extension of that, where they will assume that 10% of the patient population might drop out, so they will just inflate the number of patients recruited into the study by a further 10%. But even in that instance, you are making an assumption that the dropout rate is balanced across the treatment groups, which isn’t necessarily the case, particularly if dropout is related to non-response.
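The adjustment described here can be written in one line. A common rule of thumb (an illustration, not taken from the interview) divides the nominal sample size by the expected completion rate; note that, as the answer above points out, it still silently assumes dropout is balanced across arms and unrelated to response.

```python
import math

def inflate_for_dropout(n_required, dropout_rate):
    """Naive adjustment: recruit enough patients so that roughly
    n_required are expected to complete the study.
    Assumes dropout is balanced across arms and unrelated to response."""
    return math.ceil(n_required / (1.0 - dropout_rate))

print(inflate_for_dropout(100, 0.10))  # 112 recruited for ~100 completers
```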
Janet Kennedy (12:32):
So the challenge here is being able to predict a little better how many patients you would need, how long they would adhere or be persistent with their medication, and how many you’d end up with, so that you still have a valuable data set in order to move forward.
Aiden Flynn (12:49):
Indeed. I think it’s important to start asking: what if this study isn’t perfect? What’s the likely outcome in the event that we have a lack of adherence and we have non-persistent patients? What sort of adjustments do we need to make in order to cover those eventualities?
Janet Kennedy (13:09):
Now I know you did a simulation study for Spencer Health Solutions. Can you describe that a little bit and what really did it show?
Aiden Flynn (13:16):
Yeah, sure. In the work we did with Spencer, we looked at the impact of adherence and persistence on the likelihood of success in clinical trials. We took a case study in stroke and built a virtual patient population, and within that population we started to introduce various rates of non-persistence and non-adherence, and then worked out the likelihood of success given those underlying assumptions. The next step was: what sort of adjustments would we need to make in order to overcome the issues introduced by lack of adherence and persistence, and how much would we need to increase the sample size in order to maintain the same likelihood of success? And you won’t be surprised to know that lack of persistence and adherence can make a massive difference in terms of the number of additional patients needed to maintain the same likelihood of success.
Aiden Flynn (14:24):
In one of the scenarios we looked at in the stroke case, we compared adherence alone, leaving persistence aside. Adherence is complicated because all it’s doing is changing the variability of the response. We compared a study with very good, high adherence, as you might achieve by using something like Spencer, versus a case with very low adherence. To maintain the same level of success, you actually needed to double the size of the study. In the stroke example, that accounted for something like a thousand additional patients recruited into the study, which would cost a lot of money to account for appropriately.
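The mechanism behind that doubling is that poor adherence inflates the variability of the response, and in the standard sample-size formula the required n scales with the square of that variability. A sketch with made-up numbers (the effect size and standard deviations here are illustrative, not figures from the Spencer study):

```python
import math

def n_per_arm(effect, sd, z_alpha=1.96, z_power=0.84):
    """Two-arm sample size for ~80% power at a two-sided 5% level;
    n grows with (sd / effect) squared."""
    return math.ceil(2 * ((z_alpha + z_power) * sd / effect) ** 2)

n_high_adherence = n_per_arm(effect=0.5, sd=1.0)  # nominal variability
n_low_adherence = n_per_arm(effect=0.5, sd=1.4)   # ~2x the variance
print(n_high_adherence, n_low_adherence)  # roughly a doubling of the study
```

Doubling the variance (sd up by a factor of about 1.4) roughly doubles the required sample size, which is the pattern described above.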
Janet Kennedy (15:17):
Oh, that’s amazing. I mean, that’s millions of dollars.
Aiden Flynn (15:20):
Well, you know, depending on the indication and the disease area, it’s $20,000 to $30,000 a patient at times. So yeah, you’re saving millions of dollars just by improving the adherence rate.
Janet Kennedy (15:37):
So it’s true that an ounce of prevention saves a pound of cure!
Aiden Flynn (15:41):
For sure. For sure. That’s what we keep telling people.
Janet Kennedy (15:44):
What’s different about version two of Kerus Cloud?
Aiden Flynn (15:49):
Version one came out 18 months ago, and what we found was that as the simulations were becoming more and more complex, we needed to redevelop the way the software works. It runs on AWS. The way version two works is it has a very clever way of handling the computational power needed to run a set of simulations. It will do an initial check on the complexity of the simulations being requested, and then it will fire up lots of essentially parallel processing units to make sure that all of those simulations are run within minutes, rather than setting them off on a Friday, coming back after the weekend, and hoping they’re finished. That’s one key difference.
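The fan-out pattern described here, sizing up the request and then spreading independent simulations across parallel workers, can be sketched with Python’s standard library. The scenario list and worker function below are hypothetical stand-ins for illustration, not the Kerus internals.

```python
from concurrent.futures import ProcessPoolExecutor

def run_scenario(params):
    """Hypothetical worker: one independent simulation scenario."""
    n, effect = params
    return n, effect, round(n * effect, 6)  # stand-in for a real result

scenarios = [(50, 0.2), (100, 0.3), (200, 0.5)]

if __name__ == "__main__":
    # Independent scenarios share no state, so they can be fanned out
    # across as many worker processes as the workload warrants.
    with ProcessPoolExecutor() as pool:
        results = list(pool.map(run_scenario, scenarios))
    print(results)
```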
Aiden Flynn (16:48):
The other was the kind of interactivity that a user has with Kerus. Because we are presenting back quite a lot of results over a range of different scenarios, we felt that users wanted a lot more interactivity, to be able to drill down into specific aspects of the results, so we made that much more interactive. The look and feel of the software is very different as well. And we’ve added new capabilities around being able to generate more realistic virtual populations. We’ve included subgroups within the population so that we can start to design studies related to precision medicine. It’s an area that I have a lot of interest in, but I felt that, again, we talk a lot about precision medicine, yet progress has been somewhat limited. And I believe a lot of that is down to the limited ability of clinical trial design tools to really account for the requirements of precision medicine studies.
Janet Kennedy (18:01):
Is this something that your clients are able to actually manage all themselves, or does it come with a level of consulting or instruction or analysis from your team?
Aiden Flynn (18:12):
It depends on the customer. If we work with a large pharma company, they will have a large statistics group, and generally speaking they will have the capability in-house to do it themselves. That said, they often ask us to work with them in order to deliver our broader suite of services in addition to the software. If we work with a smaller company, they typically don’t have the in-house resources or skill set to run the software, so we will run it for them.
Janet Kennedy (18:49):
Excellent. Well, you’ve worked with a lot of different companies and you’ve been in the field for a really long time. Clinical trials are in chaos now. From where you are right now, what do you see, and where do you think the industry is going? Is it interested in using this opportunity for long-term innovation, or is it just trying to band-aid the situation and get through it?
Aiden Flynn (19:14):
I think it’s fair to say we see all sorts. The COVID-19 world has created a lot of chaos. A lot of the projects we were working on, non-COVID projects, were delayed or postponed. I think it put some of our client companies into difficulty because they had raised money to reach a certain milestone within a certain period of time, and they may not be able to do that. So they have been reacting to that, working with us to see what we can do given that they might not be able to complete the study, or they’re going to have lots of missing data and missing visits, things like that. We’ve been doing a lot of work in the COVID world as well. We’ve been getting a lot of requests to support COVID studies, and of course, everything needs to be done urgently and needed to be finished yesterday.
Aiden Flynn (20:12):
But what I think is interesting is that the world is looking at the industry right now and saying, okay, what can you come up with? And I think it has forced a change in behaviors. I’ll give you a good example: in a COVID study that I’ve been working on, we’ve gone from a blank piece of paper, a blank protocol, to getting it approved by the MHRA, the regulator here in the UK, and getting it funded, all in a period of six weeks. In a large pharma company, that’s unheard of; a protocol might take many months, if not years, to develop. And I think this urgency has been helpful. It’s shone a spotlight on the importance of the industry at a time like this, and I feel the industry really needs to take this opportunity to react and to deliver something back to the public.
Aiden Flynn (21:14):
What we’re seeing as well is that some of the larger companies, because a lot of their studies are on hold, are taking stock a little bit to ask: okay, what can we do, and what can we develop? They’re looking at solutions like our software as a way to overcome the challenges they face in clinical trials. The industry has talked about innovation for a long time, but it is naturally a conservative industry; it doesn’t change quickly. It will talk about innovation, but I think it’s slow to adopt it. I think COVID has forced them to look at new ways of doing things, and I’m hopeful that some of that will stick as we come out of this, that it will just become the way we do things in the future.
Janet Kennedy (22:10):
Pharma could definitely take a look at the startup process and think about becoming more agile and embracing “fail fast”.
Aiden Flynn (22:22):
Absolutely. You know, it’s interesting that in a large company, the kinds of things that get rewarded, the kinds of behaviors that get rewarded, don’t necessarily encourage the “fail fast” mentality.
Janet Kennedy (22:35):
Well, we’ll just cross our fingers and hope that we really are seeing a sea change in how clinical trials innovate, modify, and embrace new ideas going forward, and that this isn’t just a momentary stasis. Aiden, thank you so much for being here on “People Always, Patients Sometimes.” It was a great pleasure to talk to you, and I am so much more knowledgeable now about what Exploristics does and how you worked with Spencer Health Solutions. It was very enlightening.
Aiden Flynn (23:05):
Thanks for giving me the opportunity to come and talk to you today. It was a pleasure.
Janet Kennedy (23:09):
Thank you for downloading this episode of “People Always, Patients Sometimes” podcast. If you’ve enjoyed our conversation, a review and a rating on iTunes would help us find more listeners. This podcast is a production of Spencer Health Solutions.
Janet M. Kennedy is a healthcare marketing and social media professional. Janet is the Senior Digital Brand Manager for Spencer Health Solutions and hosts multiple podcasts including Get Social Health and People Always, Patients Sometimes. She is a member of the External Advisory Board of the Mayo Clinic Social Media Network.