Precision Neuroscience Reimagined: The importance of transparency in working with RWD

In the latest episode of Precision Neuroscience Reimagined, Simon Pillinger, Head of Governance, Ethics, and Patient Participation at Akrivia Health, joined Tina to discuss how we can all help change the trajectory of research, covering the difference between anonymised and de-identified data, and the importance of transparency.
https://spotifyanchor-web.app.link/e/7OksUF3VPxb

Tina Marshall: Hello, my name is Tina Marshall, and this is Precision Neuroscience Reimagined. I’m joined here today by Simon Pillinger. Today has a slight Akrivia focus because Simon is our Head of Governance, Patient Participation and Involvement, and Ethics. Today we really want to dig into the importance of patients participating in research and how we can really protect patients.

For me, I’m going to be coming at this from a patient’s perspective as I really grill Simon to understand the differences between de-identification and anonymisation, and how we, as a community working in precision neuroscience for life sciences, can all help change the trajectory of research so that we can essentially drive more income into the UK as a whole. Simon, thank you so much for joining me today. I really appreciate you giving up your time for this important discussion.

What first brought you into information governance?

Simon Pillinger: I started working in the NHS in about 2014. I went in and started working in patient experience at Oxford University Hospitals, and then went to work in the Patient Advice and Liaison Team. I was leading that team and also did some work on the formal complaints side of things as well. And then a gap came up for… A lovely chap called Tom Mansfield was leaving, and he and his boss, Lula, put their heads together and thought, “Well, who do we know who might fill this or might be good at this,” and they both came up with me, which is very humbling. I then went into that role, and that was my entry point into information governance. That led me into the preparatory work for the GDPR when it came in, and I’ve been working in health and social care ever since, really.

Tina Marshall: One of the reasons I always really enjoy speaking to Simon is that, for me, of course, my background is quite varied and I work in the commercial world, and information governance always seems as though it’s quite a dry topic. Simon, you’re not that way at all when it comes to information governance. Why are you so passionate about the topic?

Simon Pillinger: So, personal data permeates everything, right? It’s in every aspect of our lives. Although I specialise in information governance, data protection, and bits of law, it’s a nexus, a confluence for every other bit. It means that I get to dip into bits of employment law. It means I get to dip into bits of health research law. It’s a gateway for generalising into a whole load of different areas. It’s really interesting because it helps us understand how our rights are upheld and how we protect them, and for me, that probably stems from a slightly philosophical interest, but it’s looking at how that applies in the real world. Philosophy has tried to do that for a very long time, but this is where philosophy about the rights of individuals meets legal application. That’s just a really fascinating area to work in.

Tina Marshall: I’m really keen to dig into that, actually, looking at it from a patient perspective. For me, as a patient, oh goodness me, I can’t remember, I think it was two or three years ago, some news came out regarding organisations selling patient data and patient data needing to be collected. What was that all about?

Simon Pillinger: Sell is a really interesting word. At a national level, we have something called the Data Access Request Service that allows organisations to request data sets, and that comes with a fee. There are a couple of different things it might refer to, but in terms of when we think about selling something-

Tina Marshall: But what is that data?

Simon Pillinger: NHS England and NHS Digital will collect data sets for various things. There’s a mental health data set that is collected. There’s information around hospital statistics, so how many people are going into A&E, and a lot of these data sets are required to be collected, so NHS trusts provide this data to a national organisation like NHS England, and that helps for a whole load of reasons. The NHS is a collection, almost like a federation of organisations; there is no single NHS organisation that operates every single trust. You have different local NHS trusts covering acutes and physical health, covering community services, covering mental health services, and then there’s also primary care, where you’ll have groups of general practices working together as well as individual general practices. So, it’s a big federation of different organisations.

There needs to be a level of understanding of what’s happening at a national level to enable national policies to be made and to understand where the problems are, because without that it is very difficult to plan for the future, and we’ve seen in the COVID pandemic just how important having that national perspective is. So, that’s the reason these data sets are collected: to help national planning. But the Data Access Request Service is actually by and large used by other public authorities to help with planning.

So, for example, if you are a local authority with responsibility for social care planning, so residential social care, there’s a really useful need for you to know how many patients of a certain age are being discharged so you can fund care home places and care packages. We know it’s cheaper and more cost-effective, and people want to stay in their homes for as long as possible, so they can use that data to help them plan, and that keeps costs down. It’s better for the public purse, and it’s usually better for people as well. There’s a whole load of different ways in which that data can be used for the public good, and that’s both public in the general popular sense, but also for individual benefit as well.

But when we talk about selling data, it’s not the same as if I go into DFS and buy a sofa. It’s not a piece of property that I own like a sofa is. There’s a contract that NHS England has with their providers; licensed is probably a better way of thinking about it. The application process is really rigorous, and the data has to be used for a specific purpose. If you want access to that data, you’ve got to make a really strong use case. Like I say, the NHS publishes lists of what the proposals are, so you can go into the meat of this, really look at why public authorities or any other organisations are looking at this data, and see whether it’s being used for commercial purposes.

Tina Marshall: I see what you’re saying. What kind of data? Let’s just say I go to the hospital for anything, or somebody looks at my medical record. My medical record will obviously have my name, my address, my date of birth, the history of my medications, any diagnoses that I’ve had, and all that personal information. I don’t want random people having access to that personal information.

Simon Pillinger: Everyone who has access to that information will have been through some fairly rigorous training. For example, the staff who are providing you with direct care at a hospital have to go through information governance training every year, along with a whole load of other training around the new systems in the hospital and how to access information safely. There are multifactor authentication systems to ensure that access is only given to people who need to access that information. All NHS organisations are bound by the Caldicott principles. These are principles that were brought in, in about ’97, ’98, following a report by the late Dame Fiona Caldicott, and essentially they’re designed to be rules that clinicians can use to ensure confidential patient information is used in an appropriate way: things like making sure you can justify what you’re accessing, and accessing as little information as necessary. That’s at the hospital level, and then there’s the national level, once this information comes through to a national body.

Tina Marshall: And as I say, I’m fine with the hospital level because that’s going to absolutely impact my direct patient care.

Simon Pillinger: At a national level, you’re not going to have nearly as much information coming through as there is in your patient record. This is all structured information. What we mean by that is it’s not free text; it’s not your doctor writing a note on a piece of paper. It’s things like your diagnosis. It might be information about how long you’ve taken to get to treatment, and that’s important for understanding how quickly patients are being seen.

Tina Marshall: So, this is the wait times that we’re talking about.

Simon Pillinger: Yeah, absolutely. It will also contain information about medications that you’ve been taking, and that’s really useful in terms of understanding which medications are effective, making sure that future patients can be given the best treatments as quickly as possible, so you’re not putting patients on the wrong drug, and speeding up treatment. It will also contain other bits of information. The information is then collected, and the ways it can be accessed by third parties are different. Predominantly, the NHS wants to give out as little personal data and as little directly identifiable data as possible, and actually, if it can, it will give only anonymous or anonymised data.

Tina Marshall: Before we dig into what the NHS gives:

Could you help me understand the difference between de-identified and anonymised?

Tina Marshall: Because that comes up a lot, and there are differences, but they seem to be nuanced differences.

Simon Pillinger: Most people will be sceptical when someone says anonymous or anonymised, and they’re probably right to be a little bit sceptical. Personal data is defined in law, and this definition has been around for a very long time now, as data that relates to a living individual who is either directly or indirectly identifiable from that data. It might be a single piece of data that identifies somebody. So, Simon Pillinger, born in whenever, lives at this address: that’s me, unequivocally me. But it might just be “this person lives in Banbury”, which is where I live but actually relates to an awful lot of people. So, you might need other information to couple with that to relate it to an individual. That’s still personal data. When we talk about de-identified or pseudonymised data, we’re talking about data where you would need other information in order to relate it back to that individual.

Tina Marshall: So what other data could relate it back to an individual? Could it be my address and the fact that I’m female?

Simon Pillinger: Yeah. To give a really good example, if you go into, say, a sexual health clinic, they will often allow patients to work under pseudonyms. You obviously don’t want your name called out in a clinic: “Oh, Simon Pillinger, yeah, this is the gonorrhea clinic.” So, you might have something like, “Can Mickey Mouse come up?” I’m Mickey Mouse, that’s my pseudonym. I know I’m me, they know I’m me, but no one else in the clinic is able to know who I am. So, if we change an NHS number on a record to an artificial identifier, that’s part of it. There’s a whole range of scope depending on what kind of field your data set-

Tina Marshall: And that’s the de-identified part.

Simon Pillinger: Yeah. Think about personal data as a scale. On one end of that scale, you’ve got absolutely personal data: there is no shadow of a doubt that data relates to an individual. On the other end, we have anonymous data: data that could not in a month of Sundays be used to relate to an individual. Along that scale, we have de-identified and pseudonymised data, where it’s still personal data, but it’s much harder to identify anyone. You’d need more resources. You might need a special key in order to do that. So, hashing algorithms might be used to protect that data, but if you’ve got the hashing key and you know all the inputs, you’d sometimes be able to repeat that process, depending on how you’ve done it. And then further along, you’ve got anonymous data, and the Information Commissioner’s Office has started calling this effectively anonymous data. Completely anonymous data might be statistically aggregated information, or information that never related to personal data anyway. If I typed the numbers 4, 5, 6, 7, 9 into my computer, that’s just random. It doesn’t mean anything.
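To make the hashing-key point concrete, here is a minimal Python sketch of pseudonymisation with a keyed hash. Everything in it is illustrative: the key, field names, and NHS number are made up, and real de-identification pipelines do far more than swap a single identifier.

```python
import hashlib
import hmac

# A minimal sketch (not the NHS's or Akrivia's actual process): replacing an
# NHS number with an artificial identifier using a keyed hash (HMAC-SHA256).
SECRET_KEY = b"held-separately-by-the-data-controller"  # illustrative key

def pseudonymise(nhs_number: str) -> str:
    """Derive a stable artificial identifier from an NHS number."""
    return hmac.new(SECRET_KEY, nhs_number.encode(), hashlib.sha256).hexdigest()

record = {"nhs_number": "9990000000", "diagnosis": "F32.1"}  # made-up values
record["patient_id"] = pseudonymise(record.pop("nhs_number"))
print(record)

# Without the key, reversing the hash is impractical. With the key and the
# full list of possible inputs, the mapping can be rebuilt, which is why
# this data is pseudonymised (still personal data), not anonymous.
```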

Tina Marshall: And that’s now effectively anonymous, or is that different?

Simon Pillinger: That’s completely anonymous.

Simon Pillinger: So, whereas effectively anonymous, or anonymised, is information that was personal data which you’ve put through so many processes that it’s effectively anonymous. You would need a disproportionate level of resources, time, and knowledge, and there are a couple of tests in law, in the weeds of the GDPR, that talk about what resources you would need and what the reasonable likelihood is that you’d be able to identify individuals from the data, looking at time, resources, and specialist knowledge. As a general rule of thumb, I talk about the James Bond villain rule: if you need the resources of a James Bond villain to identify an individual from a data set, it’s probably anonymous. If you need nation-state levels of resourcing in order to identify people, it’s probably anonymous. Anonymity doesn’t require impossibility; it requires that identification is beyond reasonable likelihood.

It’s really important that anyone who claims data is anonymous has assessed it against those factors, and that can include things like where it’s held, the environment it’s in, and the processes it’s been through to get to that state. There are instances where people have published papers on data sets that were claimed to be anonymous and have found them identifiable. There tend to be two dominating characteristics in those papers. One, they tend to be published by people whose whole purpose is to find the identifiability, which is not representative of the population at large. And two, they are published data sets. As soon as you publish a data set as row-level data, you increase your likelihood of re-identification exponentially, because everyone can access it, whereas if you’ve got it in, say, a secure data environment and you’re controlling access very assiduously, the chances are obviously much, much lower.

Would my data from the NHS go into a secure data environment?

Simon Pillinger: This is a really, really good question, and we’re still seeing the development of this; we’re still seeing the policies being configured. We have Ben Goldacre’s review of trusted research environments from earlier this year, and this builds on work that’s been going on for a couple of years, really built on what the Office for National Statistics kickstarted back, I think, in the mid-2000s with the Five Safes principles, which we’ll need to come back to because I can’t remember them off the top of my head.

But by and large, this is about making sure that the right people have access to the right data in the right environment and that technical controls can be given in order to make sure that what people say they’re going to do is actually what they’re doing.


Who determines who can access my data? Who decides who that right person is?

Simon Pillinger: In the case of the Data Access Request Service, which is probably one of the better-known routes, they have an entire access protocol system for people to work through. This goes through governance review by people like myself who are specialists in information governance and data protection. It might need oversight from what’s called the Confidentiality Advisory Group. Data protection is based on data protection law, but there’s also the common law duty of confidentiality. Common law is the sort of law that is not created by Acts of Parliament; it is created through judicial precedent, and confidentiality is a relatively well-known and quite old area of common law.

So where you’re using confidential patient information for purposes beyond direct care, or you want to set confidentiality aside, and there are sometimes really good reasons for doing this, the National Cancer Registry being a really good example, then the Confidentiality Advisory Group will advise on whether that is proportionate or not. But the onus is on the applicant seeking access to that data to demonstrate that there is a public benefit, that there is a really good rationale. Data is not just given to anyone who wants it.

Tina Marshall: The NHS has access to my data; they can use it for clinical purposes and they can use it for planning and improvements. Now, I’m a patient.

What’s a pharma company going to do with my data, and why should they have my data?

Simon Pillinger: So, there’s an old joke in data protection that the answer is always “it depends”, but it does depend. If a pharma company is acting as a sponsor to a clinical trial, they are very often what’s called the controller of that data, which basically means that they are determining the means and purposes of how it’s used. They will often engage with recruitment sites. These might be NHS trusts who will act on their behalf, and very often they will help the pharma company recruit patients. These studies go through the Health Research Authority (HRA) and research ethics committees (RECs); there are loads of acronyms in this landscape. The ethics committee will pick a study apart and go, “Yeah, no, you need to change this bit,” or, “Actually, we’re happy with how this is working,” before approving it.

Only at that point do the pharma companies start to engage with those NHS sites. They might not be NHS sites, they might be other healthcare organisations, but part of the patient information will often include consent for linking of records, so that the pharma companies are able to link up the information they’re collecting as part of that trial with the patient’s records. Often that’s linked with safety monitoring: making sure, if you’re giving someone a drug, that they’re not having adverse reactions, or if they are having adverse reactions, that you’re able to monitor, deal with, and report on them. That flows into the responsibilities that pharma companies have to the Medicines and Healthcare products Regulatory Agency (MHRA) in terms of making sure they’re not killing people, which would be bad.

Tina Marshall: That wouldn’t be good at all. I know during COVID there was public concern around the contact-tracing app and about data being shared. What’s being done to mitigate that? I mean, as a patient, I’m just concerned. Could somebody track me?

Simon Pillinger: Actually, the contact-tracing app is a really good example of data protection done well. There’s this concept called privacy by design and default, which was coined by a wonderful lady called Dr. Ann Cavoukian. Whatever you’re making, if it’s using personal data, you don’t make data protection an add-on; you build it into the foundation. And actually, that generally makes it easier to develop the product. It’s really good for business analytics and making sure you’re getting things right, but it also means you’re not bolting it on later. Your product is designed with it at heart.

The contact-tracing app is a really good example of doing this well. The idea is that rather than collecting information centrally, your phone has a conversation: it’s spouting out random bits of text, and it’s also receiving bits of text from other people’s phones. Rather than collating one big data set, you are then able, of your own volition, to share the bits of code your phone has been sending out. Other phones can then check the codes they’ve received against that, and the system goes, “Oh, hold on, these two people have been near each other,” because the phones can measure the length and closeness of the contact. It’s not tracking where you’ve been. It’s not tracking what you’ve done. It’s just measuring one single thing, and that’s your proximity.
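As an illustration of that decentralised idea, here is a minimal Python sketch. The class and names are hypothetical, not the real app’s code, and the actual protocol adds rotating keys, Bluetooth signal-strength measurement, and strict time limits.

```python
import secrets

# Each phone broadcasts short-lived random tokens and remembers the tokens
# it hears; nothing about identity or location is recorded.
class Phone:
    def __init__(self, owner: str):
        self.owner = owner
        self.sent_tokens: list[str] = []     # random tokens this phone broadcast
        self.heard_tokens: set[str] = set()  # tokens received from nearby phones

    def broadcast(self) -> str:
        token = secrets.token_hex(16)  # a meaningless random identifier
        self.sent_tokens.append(token)
        return token

    def hear(self, token: str) -> None:
        self.heard_tokens.add(token)

# Two phones near each other exchange tokens.
alice, bob = Phone("alice"), Phone("bob")
bob.hear(alice.broadcast())
alice.hear(bob.broadcast())

# Alice tests positive and, of her own volition, uploads the tokens her phone
# broadcast. The shared list holds only random strings, nothing identifying.
uploaded_positive_tokens = set(alice.sent_tokens)

# Bob's phone checks locally whether it heard any uploaded token.
if bob.heard_tokens & uploaded_positive_tokens:
    print("This phone was near someone who tested positive; notify its owner.")
```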

Tina Marshall: Okay. So it’s essentially measuring proximity from me to another person who could have tested positive for COVID at the time.

Simon Pillinger: Absolutely.

Tina Marshall: That’s very interesting, and it does debunk some of the concerns the public have around information governance. I do want to touch on the patient participation and involvement work that you do, and really understand, from patients’ perspectives, what industry organisations and the NHS can do to help with clinical trials.

I do think that COVID helped vastly with that. I think there’s a greater understanding now of clinical trials, but we also see a lot of things in the world going back to the way they were before COVID, and arguably the patient participation and involvement in clinical trials is something that we need to keep:

How do we do that? I mean, from your work with your group, what are their thoughts?

Simon Pillinger: I think one of the things we’ve seen is that public understanding of how your medicine gets from the lab to the little bottle you take home from the pharmacy is not particularly good. Before I started working at Akrivia, I was probably in a state where I didn’t want to go and research an entire ecosystem either. I think there’s an onus on organisations working in pharma to do more to educate the public about how medicines are developed, in the same way that I think it’s good for people to understand how the burger on their plate gets from the farmyard through all those processes to the table, because ultimately we’re putting both of those things in our bodies. I think we need to understand that.

Certainly from my own journey into understanding the pharmaceutical industry, having come from the NHS side of things, having worked with patients before and come full circle, it’s been very powerful to actually see all the steps. There’s an awful lot of regulation that organisations have to go through, and the rate of failure is astonishing. I think that’s one of the things that the general public doesn’t necessarily-

Tina Marshall: I think it’s around 90%. I think that there is…

Simon Pillinger: Could you imagine if that was an airline? You wouldn’t get on an airplane with a 10% chance of survival.

Tina Marshall: Yeah. No, and I think the drug failure rate through the clinical trial process is higher than that of actually sending a man to the moon. I think it’s harder than rocket science.

Simon Pillinger: I think there’s a public perception of, oh, pharmaceutical organisations are big fat cats making a load of money out of people’s suffering. But the problem is we only ever see the prices of the drugs that succeed, and the price has to be what it is because it costs money to develop these drugs, and the cost is astronomical.

Tina Marshall: It has to offset the cost of the drugs that failed.

Simon Pillinger: Absolutely. So, why don’t we say that more often? That’s probably the thing. We probably do, but I think we’ve got to get better as an industry at communicating with patients.

Tina Marshall: It’s almost as though we’re scared of telling patients how much it all costs and letting them know that actually there are financials involved in this.

Simon Pillinger: I think so. I do wonder if it’s because we have a National Health Service which is free at the point of use, and we kind of don’t want to face that reality, maybe. We have NICE, the National Institute for Health and Care Excellence, which advises on which drugs are cost-effective, so at one level we do understand it. But I think we have to engage with that very honestly and openly, as individuals, as members of the public, and as people working in the industry, to be able to say, “This costs money. We have to understand at a national level that we have to pay for things.”

Things cost money.

Tina Marshall: Absolutely. And trials are absolutely key. To be able to get even one drug to market for somebody that’s really suffering will make such a huge difference.

Simon Pillinger: Yeah, absolutely, and what we see then is that costs do come down once we’ve got that drug developed. Developing it is in some ways the hard part; then we can start to refine it, and we realise, “Oh, this drug can be used for other purposes as well.” How many drugs are there that we use for more than a single purpose?

Tina Marshall: That’s very true, actually. I think we see an awful lot more off-label use than you would anticipate, which can then lead to a clinical trial.

Patient involvement in clinical trials, as we’ve spoken about, has been phenomenally underestimated, and we know one of the biggest challenges we have in drug development is actually getting patients involved in clinical trials and understanding their importance, without them thinking that they’re selling their soul, their data, or their background to “big pharma”, as it’s called. We need to accelerate this. The life sciences industry as a whole absolutely has to accelerate this because we have to continue to evolve.

One of the things that really worries me and keeps me up at night, particularly at the moment, is the number of people who are suffering, whether through cancer or through mental health conditions like bipolar disorder and schizophrenia. We know that there are lots of really good treatments out there, but we know that many of them are only effective for a short period of time, and then the patient ends up going into a cycle where the medication works, then it doesn’t, then it works, then it doesn’t. So, we have to do better. We have to understand these patients, but to do that, we have to bring patients into clinical trials.

As the life sciences ecosystem, from your experience in working with patients, what can the ecosystem do to help this?

Simon Pillinger: I think one of the key things we are now much, much better positioned to do is be more transparent about how data is used. The reason I say that is that in old-fashioned data dissemination models, where you give someone a data set, you take a lot of effort to pseudonymise it to the point where it’s really, really hard to re-identify, which is kind of the point: it protects privacy. But it means you’re in a position where patients go, “How are you using data for research? What research? We don’t know.” It makes it very hard to track. And actually, from the conversations I’ve had with our PPI group and with other patients over the years, people are generally quite happy for their information to be used for research. Not everyone is.

Tina Marshall: Are they?

Simon Pillinger: Yeah, though not everyone is. That’s why we have a national data opt-out, so people can opt out of their data being used for research and planning. But what I think people want to understand is what research their data is being used for. They want to understand and contribute to that, and data can be a part of that; it doesn’t have to be them giving blood samples to participate. What we’ve seen more recently as well are concepts around consent to contact: hospitals actively building up groups of patients who opt in, rather than relying on implicit consent or on patients simply not having opted out.

Tina Marshall: So, I have a question about all of this.

Why was it so easy to recruit so many participants for the COVID trials, but not for any others? What’s the difference?

Simon Pillinger: COVID had such a high profile as a disease area, and I’m going to be really cynical and say COVID affected everybody, but it particularly affected people who are vulnerable, and as a society, if someone’s really vulnerable, we tend to want to help them. I don’t know a single person who did not have a relative affected by the pandemic in some way. I was working in care homes at the time; they were hammered. So, contributing was such an important thing we could do. Because of the high profile, it almost had a Blitz spirit to it. People wanted to be involved. There was an almost patriotic sense to it.

I think we’ve got to bring that back to clinical trials. We shouldn’t coerce people into this; it shouldn’t be a guilt trip. It should be something we want to contribute to. Personally, that’s part of my ethical framework: my own personal philosophy is that I want to contribute to making the world a better place, and this is a way in which I can do that. But that’s got to go hand in hand with transparency, and with the technologies we have now, we can do this. Coming back to secure data environments, they’re a really good way of doing that.

Tina Marshall: I’m really glad you brought up secure data environments. We’ve talked about all the benefits of using patient data, which is great. It’s used for service improvements. It’s used for planning. It’s used to understand patient history and care, and it’s used for potential drug development. All of that’s good.

How do we stop it from being used for bad? How do we stop the data from falling into the wrong hands, of people who aren’t going to use it in such a positive, powerful way?

Simon Pillinger: We’ve got the technologies now. Akrivia has in many ways been leading this field in developing these secure environments. They have what we sometimes call gateways or airlocks, where there are technical prohibitions to prevent people from taking certain actions, and certain standards are set for data coming out. If you want to take a data set out of an environment, that’s got to be approved by an administrator. If you want to take out statistical, aggregated data to support a paper being published, that’s great. If you want to take out a data set that says Simon Pillinger had his appendix taken out several years ago: no, you can’t have that. It’s about making sure those environments exist, that you can control them, and that you can audit them easily.
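To illustrate the airlock idea, here is a minimal Python sketch of an export check, assuming a simple policy of small-number suppression for aggregates and administrator review for anything row-level. It is a sketch of the concept, not Akrivia’s actual implementation.

```python
# Aggregate outputs pass automated disclosure control with small-number
# suppression; row-level outputs are held for administrator review.
MIN_CELL_COUNT = 10  # assumed suppression threshold; varies by environment

def review_export(output: dict) -> str:
    """Decide whether a requested export may leave the secure environment."""
    if output["type"] == "row_level":
        return "HELD: row-level data requires administrator approval"
    if output["type"] == "aggregate":
        # Suppress small counts that could single out individuals.
        if any(count < MIN_CELL_COUNT for count in output["counts"].values()):
            return "BLOCKED: cells below the suppression threshold"
        return "APPROVED: safe statistical output"
    return "BLOCKED: unrecognised output type"

print(review_export({"type": "aggregate", "counts": {"appendicectomies_2022": 1423}}))
print(review_export({"type": "row_level"}))
```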

What you can also do with those environments is provide all the analytic tools that data scientists and researchers want, whilst being able to see what they’re doing. So, it keeps things secure and it keeps them transparent; you’re able to report on what’s being done. And the great thing about this, if you have the means to identify a patient easily and you build that in, which is part of privacy by design, is that a patient can come and say, “How’s my data being used?” and we can say, “Oh yeah, Simon, your data’s been used to help with understanding,” I’ll take the appendix route, “how quickly people are seen for appendicitis and what route people come in by, and we’re using that to help make sure people are seen faster, to reduce the chance of an appendix bursting, and to look at how long people are waiting relative to other sites, so what is one NHS trust doing that another could do better or replicate.”

So, it’s all about those things. You start to see those use cases, and they’re really powerful because they benefit patients. In the pharmaceutical industry, in fact the whole health research industry, I don’t know a single person who doesn’t want to help patients. Maybe I’m being overly optimistic, but I’ve yet to meet anyone who doesn’t want to help patients. That’s at the core of it, and that’s as true, I think, for people working in R&D teams as it is for people on the front line of the NHS, and being married to a clinician, I feel like I can say that. People want to help. People want to reduce the suffering of individuals, and they want to do that while also upholding their rights. They want to build privacy into the whole process.

With SDEs and federated analysis, we’ve now got the technology where you don’t have to move data between environments. You can use bits of technology called APIs to extract pieces of information or aggregate results without actually moving the data. You can interrogate it. So, you can look at a national group of secure environments and ask a question, and get back how many patients there are on drug X for condition Y, whether that’s treatment-resistant depression, or how many patients have had postpartum haemorrhages if you’re looking at maternity services, because you can start to look at those trends in a way which preserves patient confidentiality.
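Here is a minimal Python sketch of that kind of federated aggregate query, with hypothetical per-site functions standing in for the APIs; no real NHS or Akrivia endpoint is implied. Each site answers the question locally, and only the counts leave the secure environments.

```python
from typing import Callable

# Stand-ins for per-site query APIs: each site computes a count inside its
# own secure environment and returns only that number, never patient records.
def site_a_count(drug: str, condition: str) -> int:
    return 214  # illustrative result computed inside site A

def site_b_count(drug: str, condition: str) -> int:
    return 187  # illustrative result computed inside site B

def federated_count(sites: list[Callable[[str, str], int]],
                    drug: str, condition: str) -> int:
    """Ask every site the same question; only aggregate answers move."""
    return sum(site(drug, condition) for site in sites)

total = federated_count([site_a_count, site_b_count],
                        drug="drug X", condition="treatment-resistant depression")
print(f"Patients on drug X for the condition across all sites: {total}")
```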

I think this is at the heart of what innovation is. If you just scrap the rule book, that’s not innovation. For me, that’s laziness.

Innovation is about doing things inside the rules and protecting people’s rights, and being innovative enough that you can do both. I don’t think it’s an either-or; you can do both. That’s one of the things I really like about Akrivia, actually: we don’t go, “Oh, we can do one or the other.” No, we’ll do both, because that’s the right thing to do.


Tina Marshall: Simon, thank you so much for joining me today. It’s been a pleasure to hear your insights, and I hope everybody else finds it as interesting, because, for me, the key thing that’s come through our discussion is how important transparency is, and how important it is that we all try to treat clinical trials the way we did during COVID. Out of that horrendous time, innovation was created, and there are things we can take from it, learn, and move forward with. It’s also really good to know that my data, where I live and what I do, is not being sold, and that innovation is really the key to allowing all of that to happen.

Simon Pillinger: Absolutely.

Listen to the latest episode of Precision Neuroscience Reimagined and more here: https://spotifyanchor-web.app.link/e/7OksUF3VPxb
