The Federal Trade Commission has opened an investigation into OpenAI, the artificial intelligence start-up that makes ChatGPT, over whether the chatbot has harmed consumers through its collection of data and its publication of false information about individuals.
In a 20-page letter sent to the San Francisco company this week, the agency said it was also looking into OpenAI’s security practices. The FTC asked OpenAI dozens of questions in its letter, including how the start-up trains its AI models and treats personal data, and said the company should provide the agency with documents and details.
The FTC is examining whether OpenAI “engaged in unfair or deceptive privacy or data security practices or engaged in unfair or deceptive practices relating to risks of harm to consumers”, the letter said. The investigation was reported earlier by The Washington Post and confirmed by a person familiar with the investigation. OpenAI declined to comment.
The FTC investigation poses the first major US regulatory threat to OpenAI, one of the highest-profile AI companies, and signals that the technology may increasingly come under scrutiny as people, businesses and governments use more AI-powered products. The rapidly evolving technology has raised alarms as chatbots, which can generate answers in response to prompts, have the potential to spread disinformation.
Sam Altman, who leads OpenAI, has said the AI industry needs to be regulated. In May, he testified before Congress to invite AI legislation, and he has visited hundreds of lawmakers in a bid to set a policy agenda for the technology.