Workplaces lack guidelines for AI

Half of the employees who have used AI assistants like ChatGPT at work have not been given clear guidelines for their use. Deeply problematic, says IDA.

Many employees lack clear and unambiguous guidelines when using generative language models, a new IDA study shows.

AI assistants like ChatGPT and Copilot are making their way into Danish workplaces in earnest.

But two years after ChatGPT saw the light of day, many companies have yet to put clear guidelines in writing for how employees may use the AI-powered assistant tools.

In a new population survey conducted by Analyse Danmark for IDA, 39 percent of those who have used an AI assistant at work say that they have not received any guidelines from their employer at all.

Another 13 percent find the guidelines unclear, while 10 percent do not know whether their workplace has rules for AI tools.

And that is problematic, says Laura Klitgaard, president of IDA.

'Considering how much hype and how many warnings there have been about the new AI tools since ChatGPT came to market two years ago, it is worrying that so many companies have not yet set out clear guidelines for employees. It is absolutely crucial for IT security, business and GDPR compliance alike that these things are under control,' says Laura Klitgaard.

Maintaining a critical sense

In the survey, one in three employees who have tried using AI assistants in a professional context responded that they have access to a paid license at their workplace.

But 60 percent use the free versions at work, even though these are less reliable and allow the provider to use the data and information entered in prompts to train its AI model.

'It is important to maintain a critical sense when working with artificial intelligence, and you must be aware of the risks involved in sharing knowledge and information with free AI tools. If you as a company are investing in AI, there is much greater security and value in a paid licensed model that protects the information you feed it,' says Laura Klitgaard.

Guide for companies

She points out that in January 2024, the Danish Digital Agency published a guide for companies on the responsible use of generative AI.

Companies should at least familiarise themselves with this guide, believes Laura Klitgaard, who emphasises that the whole question of copyright in the material that AI assistants are trained on remains unresolved in the courts.

'There are a number of pitfalls that employers need to address and communicate to employees. It's not rocket science: the basic guidelines are never to feed a generative AI personal information or trade secrets, and to remember that there is always a risk of factually incorrect answers. Therefore, the results from an AI should always be validated by a human,' says Laura Klitgaard.

Most use ChatGPT

According to the IDA survey, which is based on 2,285 respondents, well over half of all Danes have tried using a generative language model, and ChatGPT is by far the most widely used.

A full 93 percent say they have used ChatGPT, while 22 percent have used Microsoft's Copilot, and 13 percent have tried Google's Gemini.

The study also shows that the use of AI tools is increasing, and that AI assistants are primarily used to find inspiration, write texts, prepare presentations and meeting minutes, and write code.

'We can see that a lot of people use AI assistants, and that in many areas they serve as a good sparring partner. But both companies and employees benefit from establishing frameworks for what the tools can and cannot be used for,' says Laura Klitgaard.