Do you shudder at the thought of your personal information being vacuumed up and fed into AI training sets? If so, Apple's latest move in the iOS App Store will likely be music to your ears.
Apple has just laid down the law for app developers: "You must clearly disclose where personal data will be shared with third parties, including with third-party AI," the guidelines now state. Furthermore, and perhaps more crucially, apps must "obtain explicit permission before doing so." Think of it as a digital consent form, ensuring users are aware of, and in control of, where their data goes.
This updated language, a landmark moment as Apple's first official guidance specifically addressing third-party AI, is embedded within the App Review Guidelines. These aren't mere suggestions; they're the rules of the road for getting your app approved and staying in Apple's ecosystem. The introduction to these guidelines leaves no room for ambiguity: compliance is non-negotiable.
To underscore the point, Apple adds a rather colorful warning later in the guidelines. "We will reject apps for any content or behavior that we believe is over the line," they declare. "What line, you ask? Well, as a Supreme Court Justice once said, 'I'll know it when I see it.' And we think that you will also know it when you cross it." Translation: don't try to be sneaky. Apple's watching.
This update, quietly rolled out last week, is significant because it marks the first time AI has been explicitly mentioned in these guidelines. That's notable because Apple, especially under CEO Tim Cook, has displayed a certain wariness toward AI. The company has been comparatively slow to integrate AI features into Siri, and Cook himself has often favored the term "machine learning" over "AI" in public remarks. Why the hesitation? Some speculate it's a concern over privacy, while others believe Apple is simply taking a more cautious and deliberate approach.
The timing is no accident: the sourcing of data to train powerful AI models has become a legal minefield in Silicon Valley. (Full disclosure: Ziff Davis, the parent company of Mashable, has filed a lawsuit against OpenAI, alleging copyright infringement in the training and operation of its AI systems.) The legality of scraping data from the internet to train AI is being actively debated in courts and boardrooms.
And even Apple, often cast as an AI laggard, isn't immune to these controversies. The company is reportedly planning to use Google's Gemini to power Siri soon, so it is clearly invested in the AI space, even if through partnerships.
Last month, Apple faced two separate lawsuits alleging improper use of copyrighted material for its own AI training. Two neuroscientists and two authors separately claimed that Apple had used data from "shadow libraries," collections of pirated content freely available online. The allegation, in short, is that Apple trained its AI on illegally obtained data.
While Apple's official response is still pending, the legal landscape isn't exactly in its favor. AI giant Anthropic recently settled a similar class-action lawsuit over shadow library usage for a staggering $1.5 billion, a sign that the legal ramifications of using copyrighted material for AI training are very real and very expensive.
Regardless of how those lawsuits play out, Apple can now legitimately assert that it is actively protecting users from having their data fed to third-party AI within its app ecosystem.
There's an important caveat, though. Apple's move only addresses data collection happening within apps downloaded from the App Store. It doesn't prevent websites or other platforms from scraping publicly available data or using data obtained through other means. So while it's a welcome layer of protection, it's not a foolproof shield.
Whether the new policy proves a genuine step toward protecting user privacy or mostly a PR move will come down to enforcement: these guidelines are, after all, the rules for staying in the App Store, and Apple has shown it's willing to reject apps that cross the line. For now, developers who want to feed user data to third-party AI have been put on notice: disclose it, and ask first.