AI and the Law

by Claudia La Via

Artificial Intelligence is revolutionizing the way we work, interact, and drive innovation. However, integrating AI into business processes also entails legal challenges and risks, ranging from data protection and confidentiality to fair competition law, intellectual property rights, and the EU’s “AI Act”. Moreover, AI has lately become a very powerful tool for professionals, reshaping the legal landscape by providing invaluable support across various roles in law firms and legal departments. Rather than replacing legal professionals, generative AI enhances efficiency, accelerates tasks, and enables lawyers to focus on applying their expertise at its best.

MAG interviewed David Rosenthal, partner at Vischer and one of the leading Swiss experts in the field of data and technology law. “If you as a lawyer don’t know how to use AI features and tools, it can be an issue today. Hence, what matters most is not having the right tool, but an understanding of how to handle the technology itself. In the past you had to learn how to search the Internet; now you need to learn how to prompt. It’s all about literacy”, says Rosenthal.

Rosenthal’s background is atypical: he graduated in law at the University of Basel and worked as a software developer, legal advisor and journalist before joining one of Switzerland’s top law firms, where he built up and managed the data protection practice, among other things. Today his team focuses on helping companies that struggle with innovation and Artificial Intelligence and still face issues around compliance, integration and security. “Technology, especially generative AI, has come to a point where it can help us better handle our day-to-day tasks, from brainstorming to all sorts of translations and transformations of text and other content, searching texts for information, or creating brand-new content.”

How important is AI today for lawyers and their daily tasks?

I believe it will become so normal that in a few years nobody will talk about it anymore, but just use it. The same happened with the Internet and with email. And even though people today say that AI makes us more efficient, you should not believe that it will leave us with less work to do. It will first and above all increase the pace and expectations of how we do our work, much like the Internet and email did.

Are there tools today specifically conceived for the legal market?

There are many companies trying to do big business by selling “AI” tools to the legal market. Two thirds of them I wouldn’t even spend a minute with, because either the quality or capabilities are greatly overstated, or because I simply see no business case for us lawyers. At our firm, we use three types of AI solutions. First, the “common tools”, meaning large language models such as GPT-4 for direct interaction, much like ChatGPT or Copilot, except that we have built our own front end to get better legal protection and lower costs. Second, we use a number of specialized, very focused service providers, such as translation solutions or services of companies that let us generate training videos with our own avatars. And third, we build our own special legal solutions, for example for analyzing standard contract types or drafting risk assessments, taking advantage of what a large language model does best, which is to search and process text in a flexible way. Others have built document search solutions, tools that automatically redact documents, or chatbots that provide a natural language interface to company policies.

How can AI be integrated in the legal practice while respecting privacy, ethics and a highly professional engagement?

AI is just another tool in your toolbox, and it all depends on how well you know how to use it. You are the one holding the hammer, so use it diligently and responsibly. And not everything is a nail. If you manage to trick ChatGPT into producing copyrighted texts despite all the guardrails, then don’t blame OpenAI for that. And if you ask the tool to produce a court decision that serves your case, then don’t be surprised if the tool invents one when that is the statistically best response: that is what the tool has been designed to do. This is why I believe it is so important to understand what AI can and cannot do, even if you don’t understand all the technical details. The Swiss bar rules state that attorneys should not subject themselves to the influence of third parties who are not themselves subject to professional supervision. This applies here, too.

In which way?

Retain control, and check the output. There are practical problems, though. For example, many providers are not really transparent about what they do with your data, or they have poor protections or contracts – even leading companies such as OpenAI or Microsoft. But maturity will increase over time, and this has nothing to do with AI as such. It is just a young market that is developing very fast.

How should AI governance responsibility be handled?

Most of the compliance questions around AI concern traditional legal areas such as data protection, copyright and responsibility for your own acts and omissions. For example, having an AI service approved at a company is a standard data protection task: you have to check the contract for certain requirements, make sure there is adequate security, understand what the provider will do with your data, and inform those affected by your use of AI. Inside a company, people need to be instructed on how to use the tools, how to react in case of problems and to remain in the loop – don’t let AI work without oversight, much like you wouldn’t let a trainee work without supervision. None of this, however, is really new.

How do you help companies?

We advise them to issue a policy on AI that promotes its use, but provides guidance and guardrails. For example, tell people which AI tools they can use with sensitive data, and where they should not do so. Finally, we have had very good experiences running workshops with companies to discuss the areas in which they want to go beyond the strict legal requirements, for example in terms of transparency.

How does risk assessment for AI projects work?

First, you should make sure you have a good set of risk scenarios. This means understanding the various situations where things might go wrong, and trying to take a holistic view. Second, make sure you have a structured approach. Go through each risk scenario, and think of the damage it could cause and how probable that would be. Then consider what you could do to avoid it, and have these measures implemented where they make sense. Third, understand that risk assessments are entirely subjective. There are, of course, ways to increase their quality. The primary goal is not the final assessment or getting the one and only right risk figure. At Vischer we created our generative AI risk assessment tool, called “GAIRA”, to help our clients, but we also made it available as open source for free, and it is widely used today.
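The structured approach Rosenthal describes – enumerate scenarios, estimate damage and probability for each, then decide where measures are needed – can be sketched in a few lines of code. This is a minimal illustration of the generic damage-times-probability heuristic, not how GAIRA itself works; all names, scales and the threshold are assumptions for the example.

```python
# Illustrative sketch of a scenario-based risk triage.
# Scales, names and threshold are assumptions, not GAIRA's actual method.
from dataclasses import dataclass, field

@dataclass
class RiskScenario:
    name: str
    damage: int        # 1 (negligible) .. 5 (severe) - a subjective estimate
    probability: int   # 1 (rare) .. 5 (frequent) - a subjective estimate
    measures: list[str] = field(default_factory=list)

    @property
    def score(self) -> int:
        # Classic risk-matrix heuristic: damage x probability
        return self.damage * self.probability

def triage(scenarios: list[RiskScenario], threshold: int = 9) -> list[RiskScenario]:
    """Return high-scoring scenarios that have no mitigating measures yet."""
    return sorted(
        (s for s in scenarios if s.score >= threshold and not s.measures),
        key=lambda s: s.score,
        reverse=True,
    )

scenarios = [
    RiskScenario("Confidential data pasted into a public chatbot", damage=5, probability=4),
    RiskScenario("Hallucinated case citation in a brief", damage=4, probability=3,
                 measures=["human review of all cited sources"]),
    RiskScenario("Provider trains its models on client prompts", damage=4, probability=2),
]

for s in triage(scenarios):
    print(f"{s.name}: score {s.score} - mitigation still needed")
```

The point of such a sketch is the discipline, not the numbers: as the interview stresses, the scores are subjective, and the value lies in walking through every scenario and deciding, explicitly, where a measure is warranted.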
