AI, privacy and work - Risks for businesses to manage

29 Jun 2023
Author: Andrea Twaddle
 

The Office of the Privacy Commissioner has released Guidelines and warned businesses about the potential consequences of using generative artificial intelligence (AI). The pace of change of generative AI is rapid. The Guidelines provide practical advice to businesses about data protection issues associated with its use.   

What is Generative AI? 

Generative AI is a type of AI in which computer algorithms learn from massive data sets, typically ‘scraped’ from the internet, to create content that resembles human-created content, including text, images, processes, graphics, music and code.   

Prominent generative AI tools include OpenAI’s ChatGPT, Microsoft’s Bing search and related products which leverage GPT-4, and Google’s Bard. However, many tailored generative AI tools are available which focus on particular industries or purposes – for example recruitment, healthcare, marketing, education, finance and environmental modelling – and these are constantly being revised. 

What are the benefits of Generative AI? 

There is no question that there are many potential benefits from the use of AI at work, in particular the gains in efficiency and productivity arising from the automation of tasks. Generative AI offers advantages to businesses including easily customising or personalising marketing content; generating ideas, designs or content; writing, checking and optimising computer code; drafting templates for reports or articles; creating virtual assistants; training; analysing data; and streamlining processes. 

What are the risks of Generative AI? 

Generative AI presently has significant issues that require consideration. For example:  

  • Misinformation. The potential to spread misinformation, or malicious or sensitive content, which could cause profound harm to people and businesses. Generative AI lacks fact checking: information is presently generated without verification or a research/reference trail, and algorithms can and do create false content. Overseas, this has led to defamation claims where journalists relied on alleged ‘false and malicious’ accusations fabricated by ChatGPT [2];  
  • Bias. Generative AI draws on all material in its database, including extreme information, leading to bias amplification. Ethical issues clearly follow;
  • Personal information (including image and voice) used without consent. The potential to use another person's voice, audio and image in generated content without their consent. This issue is presently at the heart of global actor strikes and, in New Zealand, may lack protection, given that harm from ‘deep fake’ technology may not be covered by the Harmful Digital Communications Act. Scams overseas have seen businesses make significant financial payments in the belief they were acting on instruction from their client, only to later discover the content was AI generated and that the losses were potentially not recoverable; 
  • Privacy. A lack of protection for AI-generated data analysis which links individual data points from everyday data use. Consider the smart devices used daily (smart phones, smart watches, CCTV, smoke detectors). Many companies collecting this information are based overseas, making access to information and enforcement of any breach difficult. The collation of this information into AI-generated aggregated data analysis can effectively act as a tracking tool or device, yet does not require a warrant as would otherwise be required under the Search and Surveillance Act, thereby creating a potentially unregulated body of personal information which is used and sold for commercial gain; 
  • Copyright infringement. Generative AI does not respect copyright. Information is used without referencing, so if it is repeated, there is a risk of copyright infringement. The European Union has proposed copyright rules for generative AI that would require companies to disclose any copyrighted materials used to develop these tools, in the hope that this will promote transparency and ethical use, and minimise the risk of misuse or of infringement of confidentiality and intellectual property rights. Creatives have raised concern that their work has been used without consent to ‘teach’ AI how to create imagery and other content of a similar style, with no protection or acknowledgment afforded to the underlying creator whose work was used for this purpose; 
  • Lack of data sovereignty for Maori. Some generative AI is focused on languages. There is a real risk that, while the technology could help preserve language, harvesting data without consent risks abuse, distorts indigenous culture and deprives minorities of their rights. The risk that Maori do not have sovereignty over their own data may be seen as a modern form of re-colonisation, given the potential for significant commercial use of, and gain from, that data. Plagiarism, unethical sourcing of data and cultural appropriation are real concerns. 


Advice from the Privacy Commissioner 

The Privacy Commissioner is clear that generative AI’s use of personal information is regulated under the Privacy Act.[1] Accordingly, all agencies using systems that can take the personal information of New Zealanders to create new content are expected to consider the consequences of using generative AI.  

Alongside the general risks arising from generative AI, businesses should consider: 

  • Training data used by generative AI. Generative AI typically requires a user to input information for the AI tool to operate – for example, a sample response used to train the tool to address future requests. Businesses risk confidential, commercially sensitive and personal information being shared without secure or appropriate data and privacy protections. 
  • Information entered into generative AI risks confidentiality. AI does not recognise jurisdictions or copyright, so using information without referencing risks confidentiality breaches and copyright infringement. 
  • Privacy risks. Businesses may not be able to satisfy their obligations under the Privacy Act in relation to access and correction of personal information. 

 

Advice for New Zealand businesses in using generative AI 

The Privacy Commissioner has provided guidance that it expects the following from businesses considering implementing a generative AI tool at work: 

  • Senior leadership should give full consideration to the risks and mitigations of adopting a generative AI tool and explicitly approve its use. 
  • Before adopting a generative AI tool, review whether it is necessary and proportionate, or whether an alternative approach that may provide greater privacy protections would be more appropriate. 
  • Conduct a privacy impact assessment to help identify and mitigate privacy risks. 
  • Be transparent about the use of generative AI in the business. Ensure that employees and customers/clients are informed about how personal information will be used, and how privacy issues will be addressed.   
  • Engage with Maori about the potential impact. Develop procedures in cooperation with tangata whenua. 
  • Develop procedures addressing the accuracy of, and access to, information. 
  • Have humans review outputs created by generative AI before they are used. Where issues arise, develop mitigations, e.g. address biased or inaccurate information generated.  
  • Take care about what data is shared. Confidential information should not be shared without an explicit commitment from the AI provider that it will not be retained or disclosed. If this assurance is not provided, remove confidential and personal information before any data is uploaded (a simple illustrative sketch of this kind of redaction follows below). 
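
By way of illustration only, the following Python sketch shows one simple way a business might strip obvious identifiers (email addresses and phone numbers) from text before it is uploaded to an external generative AI service. The pattern choices and names here are hypothetical examples for this article, not a complete redaction solution; what counts as personal or confidential information should be determined through the business's own privacy impact assessment.

    import re

    # Hypothetical helper: replace common personal identifiers (email
    # addresses and phone-number-like strings) with placeholder tokens
    # before text is sent to an external generative AI service.
    EMAIL_PATTERN = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
    PHONE_PATTERN = re.compile(r"\+?\d[\d\s-]{6,}\d")

    def redact_personal_information(text: str) -> str:
        """Return the text with obvious identifiers replaced by placeholders."""
        text = EMAIL_PATTERN.sub("[REDACTED EMAIL]", text)
        text = PHONE_PATTERN.sub("[REDACTED PHONE]", text)
        return text

    if __name__ == "__main__":
        prompt = "Please draft a reply to Jane (jane.doe@example.co.nz, 07 123 4567)."
        print(redact_personal_information(prompt))
        # -> "Please draft a reply to Jane ([REDACTED EMAIL], [REDACTED PHONE])."

In practice, businesses may prefer established redaction or data loss prevention tooling; the point is simply that identifying details should be removed before information leaves the business's systems.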

Good leadership and design will be important for businesses to ensure that generative AI is used responsibly. 

Used well, AI can help us innovate and revolutionise how we do things. The ability to work alongside AI, with the right training and protections, will be important. However, in the desire to progress with technology, businesses need to take care to ensure that appropriate protections are put in place to address the very real risks arising from generative AI, including significant privacy implications. 

For advice regarding business, employment and privacy related matters, you are welcome to contact our team of specialist lawyers on 07 282 0174 or reception@dtilawyers.co.nz



 

[1] https://www.privacy.org.nz/publications/guidance-resources/generative-artificial-intelligence-15-june-2023-update/

[2] In New Zealand, defamation action arising from generative AI content is untested. There may be questions about whether ChatGPT publishes the information, but the broad approach in cases involving defamation and the internet to date has been that defamation laws apply without amendment to the online environment – meaning a person who participates in or contributes to the publication of a defamatory statement is, on the face of it, liable as a publisher. However, liability might be restricted where the internet publication was entirely passive.

About the Author
Andrea Twaddle
Andrea is an experienced specialist employment lawyer and Director at DTI Lawyers. She advises on contentious and non-contentious employment law issues, including privacy, and health and safety matters. Andrea is AWI-CH qualified, and undertakes complex workplace investigations. She is a member of the national Law Society Employment Law Reform Committee, a former Council Member at the WBOP District Branch of the Law Society, and Coordinator of the WBOP Employment Law Committee. Andrea is a sought-after commentator and speaker on employment law issues at client and industry seminars. She provides specialist, strategic advice to other lawyers, professional advisors and leadership teams. You can contact Andrea at andrea@dtilawyers.co.nz