Samsung is barring its employees from using generative AI tools like ChatGPT on its internal networks and company-owned devices because of security risks, Bloomberg News reports. A memo sent to staff describes the move as a temporary one, in place until the company is able to “create a secure environment” for safely using generative AI tools.
According to Bloomberg, the company is concerned that data transmitted to AI platforms is stored on external servers, making it difficult to retrieve and delete.
ChatGPT has become popular across the globe, with people using the tool to summarise reports, but that can also mean sharing sensitive information that OpenAI may have access to. How much privacy users get depends on how they access the service. According to The Verge, if a company uses ChatGPT’s API, conversations with the chatbot are not visible to OpenAI’s support team and are not used to train the company’s models. That is not the case, however, when a person types text into the general web interface using its default settings.
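For illustration, here is a minimal sketch of what “using ChatGPT’s API” means in practice, as opposed to pasting text into the public web interface. It assumes OpenAI’s Python client (the v1-style interface); the model name and prompt are placeholders, and the different data-handling terms The Verge describes are a matter of OpenAI’s policy for API traffic rather than anything visible in the code itself.

```python
from openai import OpenAI

# Illustrative only: model and prompt are placeholders, not from the article.
client = OpenAI()  # reads the OPENAI_API_KEY environment variable

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "user", "content": "Summarise this report: <report text here>"},
    ],
)

print(response.choices[0].message.content)
```

A request sent this way goes through the API rather than the chatgpt.com web app, which is the distinction the reporting turns on.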
And Samsung is not alone in taking this step. JPMorgan, Bank of America, Citigroup, Deutsche Bank, Goldman Sachs and Wells Fargo are keeping a close eye on employees’ use of such tools and have taken steps of their own, while New York City schools have banned ChatGPT over misinformation fears.