Swedish Prime Minister Ulf Kristersson acknowledged in an interview with the Swedish business daily Dagens industri that he regularly consults AI chatbots, including ChatGPT and the French service Le Chat, to get a “second opinion” when making decisions.
“I use it quite often myself,” Kristersson said. “If only to get a second opinion. What have others done? And should we think the complete opposite? Those kinds of questions.”
The admission sparked a wave of criticism, with experts voicing concern about the practice.
Virginia Dignum, professor of responsible artificial intelligence at Umeå University, said: “The more he relies on AI for simple things, the higher the risk of overconfidence in the system. It’s a slippery slope. We must demand guarantees of reliability. We didn’t vote for ChatGPT.”
AI consultant and enthusiast Jakob Ohlsson, in an article for Expressen, called Kristersson’s approach “amateurish,” noting that the prime minister is feeding his political thoughts into a language model he does not understand, owned by a company he does not control, with servers in a country whose democratic future is in question.
Although a spokesperson for Kristersson later stated that no “confidential information” is ever fed into the AI tools, Ohlsson argues that even indirect data can reveal the government’s strategic thinking, and that such data ends up in the hands of American tech giants whose influence already exceeds that of many states.
The use of AI in politics remains contentious. Technology can genuinely help with data analysis, but blind trust in algorithms whose limitations are not understood is dangerous, especially at this level of decision-making.