Apple is developing its own AI tool and is limiting its employees’ use of ChatGPT and similar AI tools out of fear of leaks.
According to a document reviewed by The Wall Street Journal, Apple has restricted certain employees' use of ChatGPT and other artificial intelligence (AI) tools while the company develops similar technology of its own.
Data sent to developers
Apple is concerned that employees using these AI programs could disclose confidential data, and the document also instructs employees not to use Microsoft's GitHub Copilot.
When large language models such as ChatGPT are used, user inputs are typically sent back to the developer to help improve the model. This could unintentionally expose proprietary or confidential information.
In April, OpenAI, the maker of ChatGPT, introduced an option to turn off chat history, which prevents conversations from being used to train and improve the tool.
More concerns about the technology
Apple is not alone: several organizations have grown wary of the technology as employees use it for tasks ranging from writing emails to creating marketing materials.
The Wall Street Journal reports that JPMorgan Chase and Verizon have blocked the use of ChatGPT, while Amazon has encouraged its engineers to use the company's internal AI tools for coding assistance instead.
Apple is known for guarding its future products and plans with rigorous secrecy. Meanwhile, on Thursday, OpenAI launched the ChatGPT app for Apple's iOS in the United States.
Early adopter but fell behind
According to people familiar with the matter, Apple is working on its own large language models. The company has acquired several AI startups, and its AI efforts are led by John Giannandrea, who was recruited from Google in 2018.
Apple was an early mover in consumer AI with the 2011 launch of its Siri voice assistant but has since fallen behind rivals such as Amazon's Alexa.