ChatGPT poses a security risk to businesses

The new AI models can pose a serious security risk for companies. IDA calls for a set of clear recommendations from the Danish authorities.

Danish companies should, as soon as possible, lay down clear guidelines for employees' use of AI models such as ChatGPT.

Such guidelines should ensure that employees do not inadvertently disclose sensitive personal information or trade secrets to, for example, OpenAI, the company behind ChatGPT, writes Jørn Guldberg, IT security expert at IDA.

He fears that, amid the massive interest in the new AI models, companies will see breaches of their data policies and the processing of confidential information, with potentially serious consequences.

"The new AI models have enormous potential and can make many processes easier in the workplace. But we must learn to use them wisely. It is tempting to get help from ChatGPT to rewrite a strategy plan or to optimize some code, but in doing so you also accept that the information is handed over to OpenAI", explains Jørn Guldberg.

Protect vital information

Recently, the South Korean tech company Samsung experienced three incidents in which employees uploaded sensitive information to ChatGPT, among other things to get help finding errors in the source code for the company's computer chips.

In doing so, OpenAI has de facto gained possession of Samsung's critical trade secrets, which may also be used to train the language model behind ChatGPT.

"Right now the AI models are surrounded by a lot of hype, and many employees probably feel the need to become familiar with the new technology and the possibilities it holds. But it is important to keep your critical sense when you embrace artificial intelligence. You must not share anything that could reveal vital information about your workplace", says Jørn Guldberg.

Powerful technology

The Italian data protection authority recently took the drastic step of banning ChatGPT because it was unclear whether citizens' rights under the GDPR were being respected. The ban has since been lifted, while the European Data Protection Board (EDPB) has set up a task force to investigate whether ChatGPT complies with the rules on the protection of personal data.

In Denmark, a number of higher education institutions have, for the time being, banned the use of ChatGPT in exams.

"Fundamentally, we have to be very aware of what we put into it and what results come out. It is a powerful technology which, if used incorrectly, can pose a huge threat to both companies and citizens. But if we learn to use it properly, it can be a fantastic tool", says Jørn Guldberg.

He encourages Danish workplaces, both large and small, to introduce clear rules on the use of AI language models, so that data security is not breached and confidential information is not shared.

"But the smaller companies in particular need guidance. It should therefore be an urgent task for the authorities to prepare a simple guide for companies that can, from the outset, ensure wise and responsible use of the new technologies", says Jørn Guldberg.