In recent years, AI has become a buzzword in life sciences and health care as the technology has matured, producing applications ranging from drug discovery to diagnostics to therapeutics. Most recently, AI has been used in the fight against the rapid spread of COVID-19. On 1 December, members of our Life Sciences and Health Care team from around the world met with Stacy Schultz, General Counsel, Research & Development at UnitedHealth Group, for a discussion on whether new legal strategies are required to harness the opportunities and mitigate the risks associated with AI.
Fabien Roy, partner in the Hogan Lovells Medical Devices and Technology practice in Brussels, explained that the EU has no official definition of artificial intelligence; however, AI is generally understood to refer to systems that display intelligent behavior by analyzing their environment and taking actions – with some degree of autonomy – to achieve specific goals. Meanwhile, in the U.S., the Food and Drug Administration (FDA) broadly defines AI as the science and engineering of making intelligent machines, which can use a range of techniques, including machine learning, explained Jodi K. Scott, partner in the Hogan Lovells Medical Devices and Technology practice in the U.S. From a practical perspective, Schultz grouped health care AI into three buckets: technical data, systemic demand, and financial systems for allocating investment.
The panelists explored a case study in which hypothetical firm “PharmaCo” manufactures pharmaceutical compounds, and “AICo” specializes in computing expertise and machine learning. PharmaCo seeks to use AICo’s AI tools to save time reviewing its proprietary pharmaceutical compound libraries. AICo has a machine learning tool that can create a knowledge map, and it enters a collaboration agreement with PharmaCo for use of that technology. AICo’s tool identifies five compounds in PharmaCo’s library that its AI determines should be safe and effective for treating a rare disease.
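To make the case study concrete, the following is a minimal, purely illustrative Python sketch of what such library screening might look like; the data, descriptors, and generic scikit-learn classifier are invented stand-ins for illustration only, not AICo’s (or anyone’s) actual method.

# Illustrative only: train a classifier on compounds with known activity,
# then rank a proprietary library by predicted activity. All data here
# is synthetic; real screening would use measured molecular descriptors.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Hypothetical descriptor vectors (e.g., fingerprints) for compounds
# with known activity against the rare-disease target.
X_known = rng.normal(size=(500, 64))
y_known = rng.integers(0, 2, size=500)  # 1 = active, 0 = inactive

model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_known, y_known)

# Score the proprietary library and surface the top five candidates,
# mirroring the five compounds AICo's tool identifies in the case study.
X_library = rng.normal(size=(10_000, 64))
scores = model.predict_proba(X_library)[:, 1]  # predicted probability of activity
top_five = np.argsort(scores)[::-1][:5]
print("Top candidate compound indices:", top_five)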
In the hypothetical, PharmaCo looks to patent the output of AICo’s machine learning and wants to know more about AICo’s algorithms and selection process, but AICo does not want to share its strategies. Imogen Ireland, senior associate in the Hogan Lovells Intellectual Property, Media, and Technology practice in London, explained that there are a number of patent law aspects to consider. Firstly, it would be important to work out who the “inventor” is in this scenario and who, therefore, would be entitled to patent the AI’s output. Generally, this is a question of who has made a “contribution that goes to the inventive concept.” However, Ireland explained that where AI is involved, this is not always an easy question to answer. A second patent law issue is whether PharmaCo has access to the data it needs to be able to explain (and therefore patent) its invention; there could be a problem if that data is owned by AICo. Finally, Ireland pointed out that PharmaCo should also have an eye to the future: as the AI had suggested known compounds for new medical uses, PharmaCo needed to think about what patent and regulatory exclusivities it could obtain in this scenario.
Ireland concluded that these issues should be assessed at the outset and, where possible, settled in the parties’ contract. Dr. Matthias Schweiger, partner in the Hogan Lovells Litigation practice in Munich, emphasized the need for the parties to plan for compensation in their initial contracts, noting that agreements often include a clause anticipating how compensation disputes would be resolved. Dr. Schweiger added that parties often want to avoid the public court system for dispute resolution in order to keep their data confidential.
Patrice Navarro, counsel in the Hogan Lovells Corporate & Finance practice in Paris, noted that the EU General Data Protection Regulation (GDPR) would apply in this case, assuming personal data is involved. The GDPR would oblige PharmaCo, as a “data controller,” to know how AICo’s selection process works, and PharmaCo would have to ensure that there is no bias in the reasoning and output process. Navarro said that, as a “processor” under the GDPR, AICo will be limited in how it can use PharmaCo’s data moving forward; yet, Navarro outlined several ways around these restrictions.
Adding to the hypothetical, the panelists considered what would happen if PharmaCo realized that one of the drug compounds found by AICo is similar to one of its existing drugs that had been found to have adverse effects. If PharmaCo wanted to run AI on its large data set from the existing drug, there would be limits on the company’s ability to use that data in ways that differ from the original study’s purpose. Navarro explained that PharmaCo must have informed all of the patients in its clinical trial about how their data could be used going forward; however, presuming it had done so and had put adequate safeguards in place for use of the patient data, PharmaCo could repurpose the original study data. Each EU member state, Navarro cautioned, has its own rules on the use of patient data that should be considered.
From an in-house perspective, Schultz explained that this is not an issue of ownership so much as one of rights and permissions, bringing privacy law into play; thus, for PharmaCo to repurpose the original data, the patients would need to have agreed to the new use. Schultz pointed out that selection bias would also need to be ruled out of the original study data for it to be validly applied to a new data set. Ireland noted that as the pace of technology outstrips the law, this is a question the courts will need to revisit.
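As a purely illustrative sketch of the kind of check Schultz describes, one might begin by testing whether key covariates in the original trial cohort resemble those of the population the data would be repurposed for; the covariate, data, and significance threshold below are invented for illustration, and real re-use would require a proper epidemiological review.

# Illustrative only: compare one covariate (age) between the original
# trial cohort and the intended new population. Both samples are synthetic.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(1)

trial_ages = rng.normal(loc=45, scale=8, size=800)    # hypothetical original cohort
target_ages = rng.normal(loc=60, scale=12, size=800)  # hypothetical new population

stat, p_value = ks_2samp(trial_ages, target_ages)
if p_value < 0.05:
    print(f"Age distributions differ (KS p={p_value:.3g}); "
          "the original data may not transfer without adjustment.")
else:
    print("No detectable shift in this covariate.")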
The panelists next considered how the parties could lawfully collaborate with a hypothetical third party, “AppsRUs,” on a software application for physicians to use with the new drug – one that would help them identify patients who would fare well on the drug, compared to other patients for whom it may be riskier. Scott explained that in the U.S., providing information to a health care provider – whether risk-rating patients, predicting outcomes, or applying rules-based logic – is highly regulated. FDA has a complex scheme that regulates software by its function, carving out some products entirely and extending “regulatory enforcement discretion” to other low-risk AI-based apps: even where such a product meets FDA’s definition of a “medical device,” the agency will not impose stringent review and postmarket compliance requirements.
Before being sold in the EU, this application must be CE marked under the relevant medical device or in vitro diagnostic medical device (IVD) legislation. Roy outlined the CE marking steps, which include assessing whether the software is a medical device or an IVD, preparing the related technical documentation to demonstrate compliance with EU requirements, and preparing a Declaration of Conformity. For applications regulated as companion diagnostics under the IVD Directive, a notified body does not currently need to be involved in the conformity assessment; a self-assessment before affixing the CE mark is sufficient. However, this will no longer be the case once the IVD Regulation applies from May 2022. Accordingly, Roy advises life sciences companies to enter the market as soon as possible, with the aim of generating data to better support their submissions to their notified bodies under the IVD Regulation; such submissions should be made by the second half of 2021 at the latest.
Dr. Schweiger warned that failure to comply with the appropriate regulatory requirements could result in administrative fines or even criminal liability in the EU. Additionally, Scott pointed out that in both the U.S. and the EU, regulators can force a company to recall a product from the market, which can be incredibly painful for a business.
In the U.S., marketing authorization can afford companies protection from civil litigation: a Premarket Approval (PMA) reflects an FDA finding that the product is safe and effective, which gives it pre-emptive effect, Scott explained. Companies can also secure marketing authorization via a 510(k) premarket clearance or the “de novo” pathway, both of which are faster and easier but do not provide civil litigation protection via pre-emption. Dr. Schweiger described how the level of litigation risk depends on the jurisdiction, with some jurisdictions applying strict liability to certain activities of economic operators, and with each hypothetical party – PharmaCo, AICo, and AppsRUs – facing scrutiny from a liability point of view for its role in the panelists’ case study.
Concluding the discussion, the panelists speculated on whether regulatory bodies are prepared for the entry of AI into the market. Ready or not, AI is already here, Scott stated, adding that much work remains to figure out how best to regulate this technology. Ireland agreed, noting that legal tests and thresholds tend to be evaluated from a human’s perspective (as opposed to an “AI’s”), so it could be important to keep humans close to the innovative process. Roy said regulators are not yet ready for AI, given persisting regulatory, privacy, and ethical concerns. Navarro asserted that now is the time for artificial intelligence, because the world needs new tools in medicine – while cautioning that although AI must be regulated, rules that are too harsh can stifle innovation.
Dr. Schweiger stressed the importance of companies involved in AI having risk mitigation strategies in place to safeguard against liability risks, and of keeping an eye on potential future amendments such as the proposed “AI operator’s liability.” Schultz said that while the world is “not really” prepared for “AI,” most of what the market calls AI is not fully realized artificial intelligence that needs to be regulated as such. Nevertheless, Schultz agreed with Scott that AI use cases are growing faster than regulatory bodies’ and courts’ capacity to anticipate and advise on the legal questions they raise.
Coming in early February 2021: Managing supply chain disruptions: Lessons learned in an unstable world