The three biggest players in the AI space (OpenAI, Google, and Microsoft) have recently announced new features for their generative AI systems.
Apart from some disappointment among fans expecting GPT-5, most people seem quite impressed with the multimodal capabilities of OpenAI's GPT-4o and Google's Project Astra.
Oddly, nobody seemed to ask the question that crossed my mind as I saw the demos for these two products:
Are people not concerned about the possibility of privacy invasion and corporate surveillance when they give AI tools video and audio access?
Google’s demo was quite telling.
In it, someone walks around an office with the phone's camera on (is everything being recorded?) and points it at various objects. She may be asking specific questions, but the AI captures whatever appears in the frame. This differs from the usual selfie videos posted on social media: here the setting is a workplace that may contain sensitive information.
Can you see the implications here?
According to Google’s privacy policy for its Gemini apps (which will include Astra), some conversations are stored and may be reviewed to improve the service. Google says it won’t sell the data, but stored data can still be hacked or requested by a government. How much do you trust Google?
Meanwhile, Microsoft announced a new Windows 11 feature called Recall, which is “on” by default and regularly takes screenshots of everything on your screen. Privacy experts are already sounding the alarm about the implications.
Tech companies and governments have convinced the public that sharing their information is fine as long as they have nothing to hide (i.e., they aren’t doing anything illegal).
Nothing could be further from the truth.
In this issue, I will explain why you should care about your digital privacy and how AI increases the risk of privacy invasion and surveillance.
When you’re done, you’ll have a different way of thinking about AI and privacy.