Really need good AI rules
Monday 5 June 2023, by cityam
[Re: Let’s be honest, we shouldn’t take those predicting the AI doomsday too seriously, June 1]
The reality is that, if left unchecked, generative AI could prove damaging in entirely unintended ways. Sensible regulation is critical – but policymakers are struggling to know where to start.
Naturally the onus of developing regulation falls on governments and regulatory bodies, but since the business community will be leveraging the monumental benefits of generative AI, it should also shoulder some of the responsibility for developing appropriate guardrails. We advocate two core principles to be observed as soon as possible. The first is to apply large language models (LLMs) only to closed data sets, to ensure the confidentiality of all data. The second is to ensure the development and adoption of generative AI always has a “human in the loop” to maintain accountability and fairness.
These principles are a starting point. But going forward, regulation needs to prioritise the most pressing issues: better documentation of processes to prevent bias and safeguard intellectual property; fair labelling and authentication to identify AI-generated content; and restrictions on AI models to prevent the technology from becoming uncontrollable. By working together, regulators and the business community can ensure that generative AI contributes positively to society while guarding against potential risks.
Rohit Kapoor, Vice Chairman and CEO, EXL