As part of our Consumer Horizons series, we hosted a webinar on the legal implications of artificial intelligence for consumer businesses. Our speakers James Denvil, PJ Kaur, Florian Unseld, and Leo von Gerlach examined the most relevant use cases and their respective challenges, discussed the IP law implications of scraping third-party content or using AI-generated output, and addressed the data privacy implications as well as the most important aspects of AI governance, including the legislation currently emerging around this topic.
Florian set the stage for the presentation and outlined why the legal aspects of AI applications are highly relevant for practically any consumer business, from F&B to luxury goods, household appliances, and retail, to name just a few. One way or another, any business will benefit from a deeper understanding and analysis of its data, whether customer, product, or production related. Yet none of these benefits can be fully realized without mastering the legal intricacies in play.
Leo, by way of example, looked more closely at some typical applications of generative AI models in fashion and retail – how they are being used to create, or help create, items for personal use, advertising campaigns, content that captures customer attention, and the new world of fully individualized e-shopping experiences. In this context, he explained in broad terms how the underlying AI models work, including generative adversarial networks for creating images and footage as well as the so-called transformer models for generating coherent text or dialogue.
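For readers who want a concrete sense of what a transformer-based text generator does in practice, the snippet below is a minimal illustrative sketch, not something presented in the webinar. It assumes the open-source Hugging Face transformers library and the publicly available gpt2 model; the prompt is a hypothetical example of the kind of marketing copy a retailer might draft with such a tool.

```python
# Minimal sketch: generating short marketing-style text with a transformer model.
# Assumes the open-source Hugging Face "transformers" library and the public "gpt2" model.
from transformers import pipeline

# Load a small, general-purpose transformer-based text generator.
generator = pipeline("text-generation", model="gpt2")

# Hypothetical prompt a retailer might use as a starting point for ad copy.
prompt = "Our new line of handcrafted leather bags is"
result = generator(prompt, max_new_tokens=40, num_return_sequences=1)

print(result[0]["generated_text"])
```

Output produced this way is exactly the kind of AI-generated content whose IP and contractual treatment the speakers discuss below.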
PJ gave a snapshot of the many implications that IP law will have for AI applications. The vast amounts of data used to train AI tools will inevitably include works subject to third-party rights. This presents pitfalls (in particular IP infringement risks) for both AI companies and organizations using those AI tools. Until there is specific regulation, how issues of ownership, liability, and risk with respect to AI-generated output are addressed in contracts will be key. Finally, organizations should consider how they can maximize the chances of AI-generated output being protected by IP rights and reduce the potential infringement risks (e.g., by adding intellectual human input to AI output).
James first addressed some potential privacy implications of AI tools, for example the unauthorized disclosure of personal data to third parties providing AI tools, and the challenges privacy notices face in explaining how AI decisions are made and the purposes for which AI operates. He also highlighted some of the important AI governance considerations that arise when organizations deploy AI tools by outsourcing decision making and actions to digital agents. Outsourcing, whether to human or digital agents, carries the risk that decisions and actions may be inaccurate, unfair, unsafe, unreliable, or unlawful.
Motivated, at least in part, by the perceived potential for AI tools to propagate without reasonable controls in place, policymakers are scrutinizing and moving to regulate AI deployments. Organizations should therefore consider establishing AI governance programs that set out principles for AI deployment, identify potential AI risks and reasonable controls, and monitor the performance of AI tools.
For more information in this ever-evolving area, please access our Global artificial intelligence trends guide and join our upcoming 27 July webinar, Generative AI and IP: A New Era of Intellectual Property Issues and Practical Strategies to Address Them.
Authored by Florian Unseld, James Denvil, PJ Kaur and Leo von Gerlach