Dear AI, predict what I want better, but don’t use my data!
With organisations leveraging artificial intelligence (AI) to enhance and reshape business operations, Helen Tringham (pictured), Partner at Mills & Reeve, shares her thoughts on the use of data to train AI models.
Most companies are now using some element of AI, whether that’s through targeted advertising, chatbots or, for more tech-savvy companies, generating content to give users a head start.
While some welcome the use of AI and others are more sceptical, expectations of what AI can do are rising all the time.
But how does AI get better? It learns. How does it learn? Usually from being fed (a lot of) data.
Naturally, a tension arises where there is still a lack of trust and understanding around how AI works and how it uses personal data.
Recent headlines around the use of user data to train generative AI by platforms such as LinkedIn and Meta are not surprising.
In recent months, both companies have faced scrutiny over their data practices.
- LinkedIn has been using user data to train its generative AI models (for example, to create first drafts of CVs or messages to recruiters and candidates), which has led to concerns from the public and the Information Commissioner’s Office (ICO) about user consent and transparency. The platform initially auto-enrolled users into this data usage without explicit consent, prompting backlash and calls for more transparent practices. LinkedIn has since updated its user agreement to include an opt-out option, emphasising that user data helps improve its AI services.
- Meta (formerly Facebook) has also been in the spotlight for leveraging user data (including posts, comments, photos and captions) to enhance its AI capabilities, raising questions about the extent of that data usage and the transparency of these practices.
While both companies are moving forward with their plans to use user data for AI training, they are doing so under close scrutiny from the ICO, which aims to ensure that user rights and privacy are adequately protected.
So, what can be learnt from these recent developments?
- Transparency is crucial: Companies must be transparent about how they use user data for AI training. This includes providing clear information about what data is used and for what purpose, together with a straightforward way for users to opt out if they do not wish to participate.
- Regulatory compliance: Adhering to data protection regulations, such as the UK GDPR, is essential. Companies must ensure that their data practices comply with legal requirements to avoid penalties and maintain user trust.
The AI regulation landscape is rapidly evolving, and the EU AI Act has recently introduced a comprehensive legal framework aimed at ensuring AI is used safely and ethically.
While the UK is still developing its own approach to AI regulation and relying on existing law in the meantime (such as the UK GDPR, consumer law and intellectual property law), the EU AI Act is likely to influence UK policies and practices.
Our expert lawyers are well-positioned to provide the necessary legal expertise to help companies navigate the regulatory landscape and leverage AI responsibly. We can help companies ensure that their data practices comply with relevant regulations, stay ahead of legal developments and avoid potential pitfalls.