Privacy Implications of AI Agents & Data Processing

Jan 22, 2025
By Yvette Schmitter

As consumers/end users, we can no longer afford to sit idly by and let others run roughshod over our data.

It’s our data and we own it.

So, why are we so willing to let go and let others use our data under the guise of making our lives easier or more productive? That all sounds good, because who wouldn't want to offload mundane, routine tasks so they can get more "real stuff" done? But that helpfulness and increased productivity shouldn't require unfettered access to your data, or allow others to do whatever they see fit with it, from using it to "learn" more about your habits and "needs," to selling it to third parties for profit so they can sell you more "stuff."
 
Then there's the advent of agentic AI, which is going to be here sooner than you think: it's reported that in the coming weeks OpenAI will announce a next-level breakthrough that unleashes PhD-level "super agents" able to perform complex human tasks. A closed-door meeting with US government officials is on the books in Washington for January 30, 2025.
 
Couple that with the deployment of new AI assistant systems like Google's scam call protection and Apple Intelligence (refer to my previous post); both put AI EVERYWHERE on your phone, even in the middle of your private messages. Again, refer to my previous post on how to disable Apple Intelligence on your iPhone.

So, have you really thought about your privacy?  
 
The looming question continues to be this: what are the privacy implications of AI? It keeps me up at night, especially given the recent EU debate over mandatory content-scanning laws that would require machine-learning systems to scan virtually every private message you send. So, what happened to end-to-end encryption, and better yet, what does this mean for its future?

A 101 Primer on End-to-End Encryption

For the past 10 years or so, end-to-end encrypted communication platforms have been all the rage, and we saw them explode. Remember, prior to 2011 most cloud-connected devices uploaded their data in plaintext, and you know the hot mess that ensued. For many, their private data was exposed like laundry hung out on the line, open to everyone from hackers to subpoenas and warrants, or exploited by the platforms themselves. New data-storage approaches started to evolve around 2011, and we saw messaging apps like WhatsApp and Apple's iMessage roll out default end-to-end encryption for private messaging.

How does this work?

Simply put, this technology changed key management to ensure that a server would never see the plaintext of your messages. Cool, right? Not too long afterwards, phone designers started encrypting data stored locally on your phone. The recurring theme for both advancements? Nothing needed to be processed by a server.
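To make that key-management point concrete, here's a toy sketch (deliberately insecure parameters and a toy cipher, for illustration only; real messengers use vetted protocols built on curves like X25519) of how two devices can agree on a shared key the relay server never learns:

```python
import hashlib
import secrets

# Toy Diffie-Hellman parameters -- illustrative only, NOT production crypto.
P = 2**127 - 1   # a toy prime modulus
G = 5            # a toy generator

def keypair():
    """Each device generates its own private/public pair locally."""
    priv = secrets.randbelow(P - 2) + 2
    return priv, pow(G, priv, P)

def shared_key(my_priv, their_pub):
    """Both ends compute the same secret independently; it never travels."""
    secret = pow(their_pub, my_priv, P)
    return hashlib.sha256(str(secret).encode()).digest()

def xor_cipher(key, data):
    """Toy keystream cipher; the same call encrypts and decrypts."""
    stream = hashlib.sha256(key).digest()
    while len(stream) < len(data):
        stream += hashlib.sha256(stream).digest()
    return bytes(a ^ b for a, b in zip(data, stream))

a_priv, a_pub = keypair()   # Alice's device
b_priv, b_pub = keypair()   # Bob's device

# Only a_pub and b_pub ever pass through the server.
k_alice = shared_key(a_priv, b_pub)
k_bob = shared_key(b_priv, a_pub)
assert k_alice == k_bob     # same key on both ends, never sent anywhere

ciphertext = xor_cipher(k_alice, b"meet at noon")  # what the server relays
plaintext = xor_cipher(k_bob, ciphertext)          # readable only on Bob's end
```

The server's role collapses to relaying public values and ciphertext; the keys, and therefore the plaintext, exist only on the two end devices.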
 
So, here’s the rub: for data that requires intense processing, options are limited. Sure, for cloud backups and private messages it’s easy to hide content from servers. But when you want to do serious compute, the typical phone has limited processing power, limited RAM, and limited battery capacity. Case in point: did you see how your battery life took a significant hit if you didn’t disable Apple Intelligence? Even if you wanted to do something like photo processing on the most tricked-out iPhone, it still needs to be plugged in to avoid draining the battery. Not to mention, there’s a cornucopia of variation in phone hardware. There are some tricked-out phones with onboard GPUs and neural engines, but they aren’t cheap; most start around $1,400 and go up from there. Not only are these phones atypical and cost-prohibitive, but the rest of the market varies widely in processing capability.
 
Now, with that primer on the evolution of end-to-end encryption, how does AI come into play? If you weren’t paying attention because the cat memes got you on Instagram, in 2024 a new frontier model was released every 2.5 days. These powerful new models have cutting-edge capabilities, ranging from LLMs that can generate and understand human text to generative models that can produce content of their own from text, images, audio, code, 3D assets, and more.

We are entering uncharted waters, and no one really knows where this can go or how useful these models will actually be to customers, but that hasn’t stopped any and every tech company from finding applications for this tech – including phone and messaging companies. The collective rallying cry, and race to the bottom, is: models are the future. Personally, I believe the onslaught of integrating AI into our daily lives via our mobile phones is like the amuse-bouche. I’m a self-proclaimed foodie, so let me explain. Have you ever been to a restaurant where the chef sends out something called the “amuse-bouche?” It’s a French term for “mouth amuser,” and chefs serve it to diners to enjoy while waiting for their food orders to arrive, as well as a way of saying “welcome” to guests and pampering them with something special. Well, consider this the amuse-bouche before the main entrée to come – the deployment of AI agents.
 
Theoretically, with these new agents you don’t need to bother reading and responding to messages on your phone, because they’ll read your email and text messages and answer them for you. They can order your food, find you the best deals on shopping – basically anticipate your every want or need. And how can they do this? Unfettered, unrestricted access to all your private data, coupled with a considerable amount of compute power to process it.
 
So, let’s go back to the end-to-end encryption and data storage primer. The key there was that nothing needed to be processed by a server. But these agents will require more compute power than most phones have in order to run these powerful models. And that means much of the processing, as well as the data being processed, will have to be offloaded to remote servers.
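The shift is easiest to see in terms of what the server actually receives. A hedged sketch (all field names and payloads here are hypothetical, for contrast only): under end-to-end encryption the server relays opaque ciphertext, while an off-device assistant has to be handed your plaintext to do its job.

```python
import json

# What an end-to-end encrypted relay sees: routing info plus ciphertext.
e2e_relay_sees = {
    "to": "device-key-id-42",        # hypothetical routing field
    "payload": "9f2c41ab8e07d3f1",   # opaque bytes; the server cannot read them
}

# What an off-device AI assistant must see to "read and respond for you":
# your messages, decrypted on-device and re-uploaded as plaintext.
ai_offload_sees = {
    "task": "summarize_and_reply",
    "messages": [
        "Dinner at 7?",
        "Results came back, call me.",
    ],
}

print(json.dumps(ai_offload_sees, indent=2))
```

TLS still protects both payloads in transit; the difference is what the destination can read once they arrive.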
 
Huh? Yep, so what happens to all that privacy? 
 
This is why I implore everyone to be very careful before enabling all of this cool tech to do cool stuff that makes your life easier, because the privacy implications are enormous. All of the slickness promised requires compute that will be done off your device, which means sending considerably more of your private data to systems that will examine and summarize it, producing compact summaries of your LIFE. These systems will eventually know everything about you and your friends, because they will have access to our most intimate and private conversations. We are standing at the precipice of the complete death of privacy, and now is the time to ask the hard questions about these systems. Especially existential questions like: are they really working for us?
  
When it comes to end-to-end encryption, current messaging systems make very specific technical guarantees. By definition, an end-to-end encrypted system is designed to ensure that plaintext message content in transit is not available anywhere except the end devices of the participants, and anyone the participants or their devices choose to share it with. I want to lean in on this because it is super important. If you’re texting 1:1, cool. But what if you start a WhatsApp group chat and someone else in the group turns on some service that uploads (your) received plaintext messages to WhatsApp? Can you still say that guarantee accurately describes how WhatsApp treats your data?

This is where things get even more complicated, because now we are talking about informed consent. Consent is both a moral and a legal concept, and the law is different in every corner of the world.
 
Getting back on my soapbox about being intentional about owning your data and protecting your privacy: some companies will do an admirable job of letting their users know exactly what’s happening, as a way to earn trust. Others, well, will bury your consent inside an inscrutable, lengthy, legalese-riddled “terms of service” document you never read but click “accept” on so you can go about your business. In the EU they’ll probably just invent a new type of cookie banner.
 
We are on an accelerated path to a world where you should expect more and more of your data to be processed by AI, and that processing will most likely be done off your device. Altruistic end-to-end encrypted services will actively inform you when this happens, which gives you the option to opt out. But if it’s everywhere, then our options will probably be limited.
 
I’ve said this and will continue to say it: I am not afraid of technology. I actually geek out over all the new emerging technology coming out. But what worries me and keeps me up at night? These models are not inclusive, and there is a woeful absence of regulation on their ethical development, deployment, and use.

It doesn’t really matter what technical choices we make around privacy. It does not matter if your model is running locally, or if it uses trusted cloud hardware — once a sufficiently powerful general-purpose agent has been deployed on your phone, the only question that remains is who is given access to talk to it.  
 
And the question you should think about is: will it only be you?