Apple has placed restrictions on its employees’ use of generative AI tools like OpenAI’s ChatGPT and GitHub’s Copilot, citing data security concerns. According to an internal communication, Apple believes that “Generative AIs, while powerful, can potentially collect and share confidential data, leading to a breach of our security protocols.”
The Wall Street Journal reports that due to data security concerns, Apple has placed limitations on its employees’ use of generative AI tools like GitHub’s Copilot and OpenAI’s ChatGPT. Apple has made it clear that its employees are not to use such tools for work purposes, even as OpenAI has released an official ChatGPT app for the iPhone, which uses artificial intelligence to answer a wide range of user questions.
The decision is consistent with moves made by other tech and financial giants, underscoring growing worries about confidential data leaking through AI tools.
“The confidential data of our employees and our company are of paramount importance,” an internal Apple memo stated, according to the Journal. “Generative AIs, while powerful, can potentially collect and share confidential data, leading to a breach of our security protocols.”
The concern centers on these AI platforms’ ability to gather and process user data, a vital component of their capacity to learn and improve. For instance, conversations with the Microsoft-backed ChatGPT are, by default, sent back to its developers, who can use them to refine the underlying AI models.
A glitch with ChatGPT earlier this year briefly exposed some users’ chat histories to others. Although the problem was quickly fixed, and OpenAI later added a feature letting users delete their chat history and opt out of having their conversations used to train its AI models, the incident heightened worries about the security of user data.
Apple CEO Tim Cook praised the potential of generative AI during an investor call but acknowledged the difficulties, noting that there are still “issues that need to be sorted” before widespread integration of the technology.
Apple’s restriction is not a unique occurrence. Businesses like JPMorgan Chase and Verizon have limited their employees’ access to these platforms, and Wall Street Journal sources say Amazon has urged its engineers to use its own internal AI tools rather than external ones.
John Giannandrea, who joined Apple from Google in 2018, is rumored to be leading the development of the company’s own AI models. According to a recent report by 9to5Mac, Apple is also rumored to be testing a new technology, code-named “Bobcat,” aimed at enhancing Siri’s natural language capabilities. It is unclear when this will be accessible to the general public.
Read more at 9to5Mac here.
Lucas Nolan is a reporter for Breitbart News covering issues of free speech and online censorship. Follow him on Twitter @LucasNolan