From today's edition of Morning Consult ...
21st Century Pharmacovigilance and the Role of Artificial Intelligence
Artificial intelligence has almost unimaginable potential. Within the next few years it will reshape every area of our lives, including medicine — and pharmacovigilance. Ordinarily, we make sense of the world around us with the help of rules and processes that build up into a system. The world of Big Data is now so vast that we will need artificial intelligence to keep track of it.
With the evolution of digital capacity, more and more data is produced and stored in digital form. The amount of available digital data is growing at a mind-blowing pace, doubling roughly every two years. In 2013 it amounted to 4.4 zettabytes; by 2020 the digital universe — the data we create and copy annually — is projected to reach 44 zettabytes, or 44 trillion gigabytes.
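The arithmetic behind that projection is simple compound doubling. A quick sketch in plain Python, using only the figures above, shows how 4.4 zettabytes in 2013, doubling every two years, lands in the neighborhood of the 44-zettabyte estimate for 2020.

```python
# Project digital data volume under a "doubles every two years" assumption.
start_year, start_zettabytes = 2013, 4.4
doubling_period_years = 2

for year in range(2013, 2021):
    volume = start_zettabytes * 2 ** ((year - start_year) / doubling_period_years)
    print(year, f"{volume:.1f} ZB")

# 2020 comes out near 50 ZB, the same order of magnitude as the 44 ZB projection cited above.
```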
In a world increasingly driven by outcomes reporting and Big Data, more patient-level information from individual consumers is not always synonymous with better information. The good news is that artificial intelligence can help supply what the pharmacovigilance ecosystem lacks today — a coordinated and efficient system for developing actionable evidence on safety and effectiveness.
Today, the absence of these capabilities significantly harms public health: it makes it harder for patients and clinicians to get the meaningful information they need to make informed decisions, perpetuates unnecessarily long delays and gaps in effective and timely safety communications and recall management, hinders the timely development of new and innovative treatment options, and increases the overall cost and inefficiency of the healthcare system.
In considering the role artificial intelligence can play in both the near- and long-term future of outcomes centricity, we need to discuss the concept of Design Thinking, which requires cross-examining the filters used in defining a problem and revisiting the potential opportunities before developing strategies and tactics. Design Thinking demands cross-functional insight into a problem from varied perspectives, along with constant and relentless questioning. In Design Thinking, observation takes center stage. In “The Sciences of the Artificial,” Herbert Simon defined “design” as the “transformation of existing conditions into preferred ones.”
Unlike Critical Thinking, which is a process of analysis, Design Thinking is a creative process built around generating action-oriented ideas. AI can be a revolutionary tool for developing those action-oriented ideas. And, just for the record, “action-oriented” and “pharmacovigilance” are not mutually exclusive terms.
There is so much data to utilize: patient medical history records, treatment data — and, lately, information coming from wearable health trackers and sensors. This huge amount of data must be analyzed not only to give patients who want to be proactive better suggestions about lifestyle, but also to give providers actionable information about how to design healthcare around the needs and habits of patients, and to give regulators not just more data, but better data in context. Can AI usage for adverse event reporting and prediction be far behind?
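To make the adverse event piece concrete, consider disproportionality analysis of spontaneous reports, one of the standard starting points for safety signal detection. The sketch below is a minimal Python illustration of a proportional reporting ratio (PRR); the counts and the threshold comment are hypothetical, chosen only to show the shape of the calculation that AI-driven systems would run continuously and at far larger scale.

```python
def proportional_reporting_ratio(a: int, b: int, c: int, d: int) -> float:
    """PRR from a 2x2 table of spontaneous adverse event reports.

    a: reports with the drug of interest and the event of interest
    b: reports with the drug of interest and any other event
    c: reports with any other drug and the event of interest
    d: reports with any other drug and any other event
    """
    rate_with_drug = a / (a + b)      # event rate among reports mentioning the drug
    rate_without_drug = c / (c + d)   # event rate among all other reports
    return rate_with_drug / rate_without_drug

# Hypothetical counts, for illustration only.
prr = proportional_reporting_ratio(a=30, b=970, c=200, d=49_800)
print(f"PRR = {prr:.1f}")  # ratios well above 2 are commonly flagged for further review
```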
Artificial intelligence is already found in several areas in health care, from data mining electronic health records to helping design treatment plans, from health assistance to medication management.
Artificial intelligence will have a huge impact on genetics and genomics, helping to identify patterns in huge data sets of genetic information and medical records, looking for mutations and linkages to disease. There are companies out there today inventing a new generation of computational technologies that can tell doctors what will happen within a cell when DNA is altered by genetic variation, whether natural or therapeutic. Imagine the predictive capabilities for pharmacovigilance.
But making knowledge actionable requires the application of proven analytical methods and techniques to biomedical data in order to produce reliable conclusions. Until recently, such analysis was done by experts operating in centers that typically restricted access to data. This “walled garden” approach evolved for several reasons: the imperative to protect the privacy and confidentiality of sensitive medical data; concern about the negative consequences that could arise from inappropriate, biased, or incompetent analysis; and the tendency to see data as a competitive asset. Regardless of the specific reason, the result has been the same: widespread and systemic barriers to data sharing.
If we are to reverse these tendencies and foster a new approach to creating evidence, we must bear in mind that this requires a common approach to how data is presented, reported, and analyzed, along with strict methods for ensuring patient privacy and data security.
Rules of engagement must be transparent and developed through a process that builds consensus across the relevant ecosystem and its stakeholders. To ensure support across a diverse ecosystem that often includes competing priorities and incentives, the system’s output must be intended for the public good and be readily accessible to all stakeholders.
For any of this to work, especially in the world of pharmacovigilance, we must view artificial intelligence through the lens of 21st-century interoperability: the idea that different systems used by different groups of people can serve a common purpose because those systems share standards and approaches.
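A minimal sketch of that idea, in Python: two hypothetical source systems that record adverse events under different field names are mapped into one common record so the same downstream analysis can read either. The schema and helper functions here are invented for illustration; real-world interoperability in this space rests on published standards such as HL7 FHIR and ICH E2B rather than ad hoc mappings.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class CommonAdverseEventRecord:
    # Minimal shared schema both source systems map into (illustrative only).
    patient_id: str
    drug_name: str
    event_term: str
    onset_date: date

def from_hospital_ehr(row: dict) -> CommonAdverseEventRecord:
    # A hypothetical EHR export with its own field names.
    return CommonAdverseEventRecord(
        patient_id=row["mrn"],
        drug_name=row["medication"],
        event_term=row["reaction_description"],
        onset_date=date.fromisoformat(row["reaction_date"]),
    )

def from_wearable_feed(row: dict) -> CommonAdverseEventRecord:
    # A hypothetical wearable/app feed naming the same facts differently.
    return CommonAdverseEventRecord(
        patient_id=row["user"],
        drug_name=row["drug"],
        event_term=row["symptom"],
        onset_date=date.fromisoformat(row["observed_on"]),
    )

# Either source now feeds the same analysis code.
records = [
    from_hospital_ehr({"mrn": "A-102", "medication": "DrugX",
                       "reaction_description": "rash", "reaction_date": "2017-03-01"}),
    from_wearable_feed({"user": "u-77", "drug": "DrugX",
                        "symptom": "rash", "observed_on": "2017-03-04"}),
]
print(len(records), "records in the common format")
```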
And as Philip K. Dick wrote, “Reality is that which, when you stop believing in it, doesn’t go away.”
Will our socio-economic “technology gap” lead to a more pronounced “adherence/compliance gap”? It’s an important question. That’s why it’s crucial we remember there is no one-size-fits-all solution. But that mustn’t mean we disregard the reality of the growth and pervasiveness of mobile apps. Let’s face it, when it comes to mobile phones, any gap is rather narrow.
As the American industrialist Walter O’Malley once opined, “The future is just one damned thing after another.” Much depends not just on infrastructure, but also on capabilities, and trust.
The end goal is the same for all stakeholders — ensuring optimal use of resources for health care systems; improving access to value-adding medicines for patients; and appropriate reward for innovation.
As management guru W. Edwards Deming once quipped, “It is not necessary to change. Survival is not mandatory.”
Artificial Intelligence is here. Fasten your seat belts.