Apple restricts employees from using ChatGPT (report)

Apple CEO Tim Cook.

As AI apps proliferate across the digital world, a growing number of corporations worry that these tools could pose a security risk to their business.

That’s because AI apps typically retain records of their use for future training and development. Businesses fear that if employees use ChatGPT or similar apps at work, company secrets could end up on the servers of the AI apps’ developers.

In recent months, a number of large companies have banned their employees from using ChatGPT and other large language model (LLM) AI apps internally. According to a report by the Wall Street Journal, Apple is now one of these companies.

The WSJ reviewed an internal Apple document that reportedly showed the Cupertino, California-headquartered company has restricted the use of ChatGPT, as well as GitHub Copilot, an AI coding assistant developed by Microsoft-owned GitHub.

Apple is concerned that employees using these apps could result in confidential data ending up on the AI apps’ servers, WSJ reported.

According to a tweet from Mark Gurman, a tech correspondent for Bloomberg News, this ban isn’t brand new; it has been in place for months.

All the same, the WSJ report landed at an awkward moment – a day after ChatGPT developer OpenAI launched a version of the app for Apple’s operating system, iOS.

And in Apple’s case, the concern about its business secrets landing on an AI app developer’s servers may be particularly acute, because the company is reportedly working on developing AI tech of its own.

It’s unknown exactly what that tech may be, but that something is in the works seems highly likely, given that the company has a Senior VP for Machine Learning and AI Strategy – John Giannandrea, whom Apple hired away from Google in 2018.

During an earnings call earlier this month, CEO Tim Cook was vague about the company’s plans for AI technology, and sounded a note of caution regarding its rollout.

“It’s very important to be deliberate and thoughtful on how you approach these things,” Cook said during Apple’s Q2 earnings call on May 4, as quoted at Business Insider.

On its careers page, Apple currently has 87 job listings that mention AI, and more than 600 listings that mention “machine learning.” Those listings suggest that the projects Apple is working on are more than just a refresh to its Siri virtual assistant – which itself was an early foray into the AI field when Apple launched it in 2011.

Among the jobs Apple is hiring for is a Machine Learning Video Engineer, a job that involves developing “ML [machine learning]-based video approaches for current and future Apple products.”

Apple is also hiring a Senior Machine Learning Engineer, Creativity Apps, who will “build state-of-the-art algorithms, partner with our multi-functional teams and deliver end-to-end features to power the next-generation tools for creators.”

There is also a role for a Machine Learning Engineer – Generative AI, who will “design and implement [machine learning] algorithms that process data in different Apple products.”

“It’s very important to be deliberate and thoughtful on how you approach these things.”

Tim Cook, Apple

Apple is far from the only company to prohibit or discourage its employees from using ChatGPT as part of their work. Earlier this month, Bloomberg reported that Samsung banned staff from using generative AI tools such as ChatGPT on the job. The company is reportedly working on its own AI tools.

Prior to that, Amazon prohibited its employees from sharing any proprietary code or confidential information with ChatGPT, after apparently discovering that some ChatGPT responses resembled internal Amazon information.

A number of major banks, including Citigroup, Deutsche Bank, Goldman Sachs, and JPMorgan Chase, have also restricted or prohibited the use of AI chatbots.

Earlier this month, British intelligence agency GCHQ warned that LLM apps like ChatGPT store the queries they are sent, and that this data will “almost certainly be used for developing the LLM service or model at some point. This could mean that the LLM provider (or its partners/contractors) are able to read queries, and may incorporate them in some way into future versions. As such, the terms of use and privacy policy need to be thoroughly understood before asking sensitive questions.”

Some AI chatbot developers are taking these sorts of warnings to heart. Last month, ChatGPT creator OpenAI announced that it would allow users to turn off their chat history in the app.

“Conversations that are started when chat history is disabled won’t be used to train and improve our models, and won’t appear in the history sidebar,” the company said.
