
AI could be next big systemic risk to financial system, says Gary Gensler, chairman of SEC

The chairman has been studying the potential consequences of artificial intelligence for years

New York Times News Service New York Published 08.08.23, 07:01 AM


Gary Gensler, the chairman of the SEC, has been studying the potential consequences of artificial intelligence for years. The recent proliferation of generative AI tools like ChatGPT has demonstrated that the technology is set to transform business and society.

Gensler outlined some of his biggest concerns in an interview with The New York Times.


AI could be the next big systemic risk to the financial system. In 2020, Gensler co-wrote a paper about deep learning and financial stability. It concluded that just a few AI companies will build the foundational models that underpin the tech tools that lots of businesses will come to rely on, based on how network and platform effects have benefited tech giants in the past.

Gensler expects that the US will most likely end up with two or three foundational AI models. This will deepen interconnections across the economic system and make a financial crash more likely: when one model or data set becomes central, it encourages “herding” behaviour, in which everyone relies on the same information and responds in the same way.

“This technology will be the centre of future crises, future financial crises”, Gensler said. “It has to do with this powerful set of economics around scale and networks.”

The meme stock frenzy driven by social media and the rise of retail trading on apps highlighted the power of nudges and predictive algorithms. But are companies that use AI to study investor behaviour or recommend trades prioritising user interests when they act on that information?

The SEC last month proposed a rule that would require platforms to eliminate conflicts of interest in their technology. “You’re not supposed to put the adviser ahead of the investor, you’re not supposed to put the broker ahead of the investor,” Gensler said. “And so we put out a specific proposal about addressing those conflicts that could be embedded in the models.”

“Investment advisers under the law have a fiduciary duty, a duty of care, and a duty of loyalty to their clients,” Gensler said. “And whether you’re using an algorithm, you have that same duty of care.”

Precisely who is legally liable for AI is a matter of debate among policymakers. But Gensler says it is fair to ask companies to build mechanisms that are safe, and to make clear that anyone who deploys a chatbot is not thereby delegating responsibility to the technology. “There are humans that build the models that set up parameters,” he said.

Spying on staff

Employers should not be allowed to use computers and AI to spy on their workers without their consent, a powerful group of British MPs has said.

The Commons committee raised fears that remote monitoring of staff puts people’s data at risk.

In a report published on Monday it warned that the increased use of robotics in workplaces across Britain could have “negative impacts” on employees. MPs said such monitoring, which could include keeping tabs on those working from home, “should be done only in consultation with, and with the consent of” staff.

New York Times News Service and

The Daily Telegraph, London
