"Generative AI has the potential to change the anatomy of work, augmenting the capabilities of individual workers by automating some of their individual activities." - McKinsey
The rapid development of generative AI models such as ChatGPT has opened up impressive possibilities in recent months. McKinsey examined 63 possible uses of generative AI across 16 different business areas. Tasks such as content creation, image generation or writing simple software code can be automated. According to McKinsey, such tasks take up 60 to 70 per cent of our working time.
To ensure that we can use this powerful tool responsibly, our experts have developed specific rules for use at Steadforce, which we would like to share with you in this blog post.
The tips and rules described below are intended to provide a framework for action and awareness of how to use tools such as ChatGPT, and do not constitute legal advice.
Data leaks occur when a company's confidential information is made public. This can happen in a number of ways, such as a cyber-attack or vulnerabilities in systems. Similarly, a request to a generative AI model that contains confidential information can result in a data leak.
The AI models are trained on large amounts of data, including user input. If this input contains confidential information, there is a possibility that it could be reflected in generated content and extracted from the model.
If sensitive personal data is leaked in this way, for example, there is a risk of high GDPR fines being imposed.
To minimise the risk of data leaks when using generative AI, you should therefore not include any proprietary, personal or comparably sensitive information in your requests.
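One simple safeguard is to mask obvious identifiers before a prompt leaves your machine. The sketch below is purely illustrative, assuming regex-based masking of e-mail addresses and phone numbers; real redaction needs are much broader (names, addresses, customer IDs, internal project names) and no pattern list can guarantee completeness.

```python
import re

# Illustrative patterns only; real-world redaction needs far more
# coverage than e-mail addresses and phone numbers.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.\w+"),
    "PHONE": re.compile(r"\+?\d[\d /-]{7,}\d"),
}

def mask_sensitive(prompt: str) -> str:
    """Replace each pattern match with a placeholder tag before
    the prompt is sent to an external AI service."""
    for label, pattern in PATTERNS.items():
        prompt = pattern.sub(f"[{label}]", prompt)
    return prompt
```

A quick check: `mask_sensitive("Contact max.mustermann@example.com or +49 89 1234567.")` returns `"Contact [EMAIL] or [PHONE]."` — the placeholders preserve the sentence structure, so the model can still work with the request.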
In addition, you should explicitly opt out of having your entered data used as training material for future models. According to OpenAI, the data is nevertheless stored for a short period of time and may be reviewed by humans in order to rule out misuse.
For ChatGPT, this setting can be changed via the web interface (as of July 2023: Settings -> Data Controls -> Chat History and Training -> Off).
Content generated by generative AI models is not subject to German copyright law, as this only applies to natural persons. However, it cannot be guaranteed that the generated content is free of copyright-protected components. The training data of the AI models is not public, and it is possible that parts of the generated content reproduce copyright-protected material. Generated content should therefore not be reused without review.
Make sure you have the necessary rights and licences before using or sharing generated content.
Generative AI models learn by analysing large amounts of training data and recognising patterns in order to generate new content. However, under some circumstances these models can produce unexpected or incorrect results. The models may even invent facts and sources that appear to correspond to reality. This is known as hallucination.
A manual check by experts should therefore always be carried out to ensure that the generated content is correct.
Generative AI models offer us enormous potential to increase productivity and add value by automating activities.
However, using generative AI models also brings challenges. The capabilities of the models are still being developed further to reduce these limitations.
In order to use the potential of generative AI models responsibly while limiting the drawbacks, it is important for companies to establish clear rules for their use.
By protecting sensitive information and reviewing generated content, we can ensure that generative AI models help us find innovative solutions without crossing legal boundaries and while ensuring accuracy.