
UNVEILING THE IMPACT AND KEY LEGAL CONSIDERATIONS WHEN USING ARTIFICIAL INTELLIGENCE

By Jennifer Wu, TMT Partner, Pinsent Masons

Artificial Intelligence (AI) is the latest buzzword. We often hear about virtual assistants like Siri and Alexa and, more recently, the AI-powered language model ChatGPT, but how much do we actually know about this latest technology? With so many AI options out there, are we able to make informed choices about these AI tools?

AI is special because it enables machines to learn and make decisions in order to perform human-like tasks. These AI tools are usually supported by algorithms and computational models. Large amounts of data are often fed into these models, whether at the training or operational stage, where the data is processed and analysed to uncover patterns or solve problems.

What can AI offer and why does it matter

The impact of AI is far-reaching, with applications across a wide range of industries. In healthcare, for instance, AI can assist in diagnosing diseases and analysing medical images. In finance and commerce, AI algorithms can be used for risk assessment and fraud detection.

AI systems are malleable and can be tailored for different purposes, depending on how the algorithms are written and designed. For businesses, AI could be integrated into different stages of production to perform different functions. With AI’s assistance, businesses are able to automate repetitive tasks, improve decision-making, enhance productivity, and uncover new opportunities.

AI is becoming increasingly sophisticated. The growth of data and advancements in computing power in recent years have fuelled the development of AI algorithms. With this development, authorities globally are contemplating the best approach to regulating AI and how to balance the need for consumer protection against the risk of curbing innovation.

Before AI-specific regulations come into play, businesses should be aware of the legal risks and concerns in data privacy, intellectual property and contractual liability as AI continues to advance.

Part I: Key considerations when using AI

(1) Data privacy

As more companies begin to apply AI to their business operations, data privacy risks start to come to the fore.

Privacy is a significant concern in the AI landscape. Whilst AI adoption is an inevitable part of the future, companies should be mindful of the privacy risks associated with the use of data in AI.

This is because most AI systems learn to simulate human intelligence by relying on vast amounts of data as part of the training process. This data can be gathered from various sources. For example, some data is provided directly by users, such as contact information or purchase history, while other data is collected through behind-the-scenes methods like cookies and tracking technologies. Because of the vast amount of data available online, companies are now able to obtain data from many different sources, whether with or without individuals’ consent or knowledge.

ChatGPT is a form of generative AI: an AI chatbot that uses natural language processing to create human-like conversations, trained using human feedback and reward models through reinforcement learning. Some data protection experts suspect that the data used to train the model was obtained by trawling the internet. Even though the specifics of how ChatGPT was trained have not been disclosed, these experts have warned that it may not be legal to obtain training data this way, raising doubts about data protection and the potential misuse of personal data in AI.

Another common concern with this type of AI is that these systems may learn from user prompts (i.e. users’ questions and instructions) and add such prompts to their database. This is particularly concerning because users are not always aware that the information they provide to the AI language model could potentially be used to answer the questions of another individual enquiring about the same matter. For businesses, the risk here is the potential leakage of confidential or commercially sensitive information.

At present, it is said that language model AIs, including ChatGPT, do not automatically incorporate information from a query into the model for other users. However, the company providing the language model may still have visibility of the information shared by its users in their prompts.

AI developers typically store these prompts because the data can be used as training material to enhance future versions of the language model. However, this poses a risk to businesses and users, as the information divulged in a prompt can potentially be shared with the system developer, provider, or their partners and contractors in the supply chain, who may incorporate the prompts, and the information learned from them, into future iterations of the model.

Before using AI tools on sensitive tasks, users should thoroughly understand the AI system’s terms of use and privacy policy. There is also a need to educate employees within the business on the types of task for which they can use AI and the types for which they should not.

A question may be sensitive because of the data contained in the query, or because of who is asking the question and when. This is especially relevant for business managers who make business decisions every day, because information from multiple queries made under the same login could be aggregated. For example, if a manager asks “how best to fire an employee?” whilst also indicating that the employee was a lawyer and had Covid in early 2020, these prompts may already provide enough information for certain deductions to be made.

Tips for businesses – Businesses should be proactive in reviewing the AI system’s terms of use and privacy policy. When using AI as part of their operations, businesses need to ensure that data collection, processing and storage comply with their internal policies as well as the applicable data protection laws. Businesses that wish to use AI should consider the following:

  • Consent or purpose of data collection: Is the data being collected for a new purpose?

  • Data usage: How will the data be used?

  • Data sharing: Is data shared in isolation or in aggregation with other organisations? Is the data available to the vendor’s researchers or partners?

  • Data security: What are the protections and measures adopted? Is encryption used?

  • Cybersecurity: Is there a cybersecurity protocol and crisis management plan?

  • Data accuracy: How can outdated or inaccurate information be corrected?

  • Anonymisation: Can personal data be anonymised?

There will also be a need to update the website data privacy notices depending on how AI is adopted into the business.

(2) Intellectual property rights

Another concern is the ownership and protection of intellectual property rights in works created with AI’s assistance. This can be a tricky issue because most existing legal frameworks for intellectual property rights are unable to keep up with the rapid development of AI and do not provide clear guidance in this respect.

Apart from language models, there are also AI tools that generate works of art, such as images and music. Similar to how language models operate, AI-generated images and artworks are typically created using generative AI systems trained on huge pre-existing data sets available on the internet. These data sets may contain material protected by existing intellectual property rights, and any unauthorised use may pose potential infringement risks.

Tips for businesses – Businesses should consider the extent of their liability for intellectual property rights infringement, especially copyright infringement, given the way these AI systems gather and use training data and the way they generate “creative” works. Businesses should also consider the ownership and protection of their own intellectual property rights in works created with AI’s assistance.

(3) Contractual liability

AI systems are not perfect, and contractual disputes may arise when they fail to perform as expected. When that happens, the terms of use or any contractual agreements will be relied upon to determine the parties’ roles, responsibilities and liabilities in relation to the AI.

Tips for businesses – Businesses that would like to use or adopt AI systems in their operations should carefully consider issues of accountability and liability in order to prevent or minimise legal liability. When entering into contracts, it is important that the terms are negotiated to ensure a fair(er) balance of risks.

Part II: Takeaway and practical points

When using AI in business, always consider the following steps:

  1. Review the AI’s privacy policy and terms of use to ensure compliance with data protection laws. At the same time, review and update the company’s website data privacy notices. Ensure that the company’s intellectual property rights are protected and any unauthorised usage is addressed promptly.

  2. Maintain human oversight of, and employee training on, AI processes, especially in sensitive areas, to address errors and unexpected outcomes. At the same time, implement internal guidelines and standards for AI applications to ensure fairness, transparency and accountability.

About Pinsent Masons

Pinsent Masons is an international law firm with offices in Asia Pacific, Europe, Africa and the Middle East, serving client demands in our five core sectors, including Technology, Science & Industry. Our Technology and Data team handles commercial, regulatory and dispute matters. Contact Jennifer Wu for more information or to discuss technology or data related matters in Asia Pacific: https://www.pinsentmasons.com/people/jennifer-wu
