ChatGPT maker OpenAI is being investigated by US regulators over AI risks.

Iran Press/America: The risks posed by artificially intelligent chatbots are being officially investigated by US regulators for the first time after the Federal Trade Commission launched a wide-ranging probe into ChatGPT maker OpenAI.

In a letter sent to the Microsoft-backed company, the FTC said it would look at whether people have been harmed by the AI chatbot’s creation of false information about them, as well as whether OpenAI has engaged in “unfair or deceptive” privacy and data security practices, Financial Times reported.

Generative AI products are in the crosshairs of regulators around the world, as AI experts and ethicists sound the alarm over the enormous amount of personal data consumed by the technology, as well as its potentially harmful outputs, ranging from misinformation to sexist and racist comments.

In May, the FTC fired a warning shot to the industry, saying it was “focusing intensely on how companies may choose to use AI technology, including new generative AI tools, in ways that can have actual and substantial impact on consumers”.

In its letter, the US regulator asked OpenAI to share internal material ranging from how the group retains user information to steps the company has taken to address the risk of its model producing statements that are “false, misleading or disparaging”.

The FTC declined to comment on the letter, which was first reported by The Washington Post. Writing on Twitter later on Thursday, OpenAI chief executive Sam Altman said it was “very disappointing to see the FTC’s request start with a leak and does not help build trust”.

He added: “It’s super important to us that our technology is safe and pro-consumer, and we are confident we follow the law. Of course we will work with the FTC.”

Lina Khan, the FTC chair, on Thursday morning testified before the House judiciary committee and faced strong criticism from Republican lawmakers over her tough enforcement stance. When asked about the investigation during the hearing, Khan declined to comment on the probe but said the regulator’s broader concerns involved ChatGPT and other AI services “being fed a huge trove of data” while there were “no checks on what type of data is being inserted into these companies”.

She added: “We’ve heard about reports where people’s sensitive information is showing up in response to an inquiry from somebody else. We’ve heard about libel, defamatory statements, flatly untrue things that are emerging. That’s the type of fraud and deception that we’re concerned about.”

Khan was also peppered with questions from lawmakers on her mixed record in court, after the FTC suffered a big defeat this week in its attempt to block Microsoft’s $75bn acquisition of Activision Blizzard. The FTC on Thursday appealed against the decision.

Meanwhile, Republican Jim Jordan, chair of the committee, accused Khan of “harassing” Twitter after the company alleged in a court filing that the FTC had engaged in “irregular and improper” behaviour in implementing a consent order it imposed last year.

Khan did not comment on Twitter’s filing but said all the FTC cares about “is that the company is following the law”.

Experts have been concerned by the huge volume of data being hoovered up by the language models behind ChatGPT. ChatGPT had more than 100mn monthly active users within two months of its launch. Microsoft’s new Bing search engine, also powered by OpenAI technology, was being used by more than 1mn people in 169 countries within two weeks of its release.

Users have reported that ChatGPT has fabricated names, dates and facts, as well as fake links to news websites and references to academic papers, an issue known in the industry as “hallucinations”.

