In our AI-infused, data-scraping world, the old kindergarten maxim to "Share everything" is no longer sound advice. Kids of all ages need to learn to protect their privacy and security.
Yes, those quizzes are insidious, aren’t they? They’re low-cost social engineering tools for scraping personal data. Identity thieves and scammers don’t need to pay a data broker for personal info that people give up for free in exchange for a bit of ‘fun’.
Karen, I’m happy to see you thinking about this and taking action. It starts with awareness, and perhaps the new generation has a shot at protecting privacy if they are vigilant, but there will need to be a major cultural shift. Lord knows there is plenty of data out there that for decades, most people were enthusiastically sharing about their lives and their children’s lives. I always felt guilty for not posting birthday tributes and photo montages for my kids and husband, but the truth is, I never felt comfortable. But I can’t kid myself: it’s all out there, and even a LinkedIn photo is fair game.
Recently, an AI image generation startup left a massive database of over a million images and videos publicly exposed—most of them “nudified” deepfakes created without consent. There’s a good summary available: https://substack.com/@mrcomputerscience/p-181197188.
The genius of Zuck, Google, and others was normalizing sharing at scale and building a data extraction pipeline to be used at will, not just by Big Tech, but by every startup scrambling to train an AI model and by bad actors armed with increasingly easy-to-use manipulation tools.
Good that you trusted your gut and didn’t share all of those personal images on social media all those years, Celeste. Thanks for sharing that link; I’ll check it out.
Thanks for this; there definitely needs to be more awareness. Most people were already unaware of how to protect their kids' privacy, or their own, on social media (birth announcements with their kid's full name, date, and place of birth, etc.), and now with AI in the picture people have completely checked out… the number of people I've had to unfollow because they've encouraged their followers to enter their kids' photos into ChatGPT to make cute coloring books for Christmas 😞
Oh wow. Yeah, many people don’t seem to realize that uploading a photo to a genAI tool generally means giving them the photo to use as the company wishes — not just for making that one fun thing they want to use it for.
Those viral quiz posts are basically free data collection for training sets. What's wild is how people don't connect the dots between answering "what street did you grow up on" publicly and having those same answers as password recovery questions. The AI angle makes it worse since scrapers can now build pretty complete profiles automatically instead of requiring manual stalking.
💯 Exactly, The AI Architect. I mean, these kinds of social engineering hacks have been around for so many years on earlier social sites like FB. But now, scammers have powerful AI tools to exploit those answers. People are too trusting. How can we help them see past the fun factor to the risk?
Once or twice, years ago, I answered a quiz before I realized it was there to gather data. I still have friends posting their answers. Eeks