Everything Caregivers Need to Know about AI and Deepfakes
Artificial Intelligence (AI) is developing rapidly. It can create efficiency in certain tasks, including content creation and research. It's also creating new, extremely difficult problems related to intellectual property and exploitation, including the exploitation of children.
The Pope, The White House, and a global committee of 18 countries have all pushed for guidelines and mandates for the future of AI. But it’s evolving much faster than policy and government.
One of these terrifying evolutions is deepfake technology, which uses deep learning AI to create fake voices, images, videos, and conversations. The fake content is so realistic that it earned its own name: deepfake, a term first coined on Reddit in 2017.
And deepfakes are quickly getting out of control. For example, Alibaba researchers created a new AI tool that turns static images into dancing videos – called “Animate Anyone.” One of the top comments on the program’s GitHub is, “…can’t wait to use this for porn.”
Or, consider the New Jersey teens who created deepfake nude images of female classmates. We're afraid this is just the beginning. Revenge porn and child sexual abuse material (CSAM) will both be taken to another, sinister level.
The challenge with deepfakes is that there isn't a strong technical solution like a filter to prevent them. There are multiple steps parents can take to mitigate risk, and we share those below. But we also desperately need a regulatory solution to rein in the technology. And governments seem woefully caught off-guard by these advances.
What is a Basic Explanation of AI?
To showcase the power of AI and its ease of use, we asked ChatGPT (one of the most popular AI language models) to write an introduction explaining what AI is. In about 5 seconds, it wrote a complete post. We’ve only included a few excerpts:
“Here’s an introduction to AI:
AI involves the development of algorithms and computational systems capable of performing tasks that typically require human intelligence. AI systems are designed to adapt and improve their performance over time, enabling them to tackle complex challenges with efficiency and accuracy. The ultimate goal is to create machines that can exhibit traits such as reasoning, problem-solving, perception, learning, and language understanding – capabilities traditionally associated with human intelligence.”
This is the power of AI. Within seconds, ChatGPT generated content that sounded polished, researched, and well-structured. It's both amazing and frightening to see just how fast and easy it is to create quality content using AI.
How is AI Impacting Kids?
The initial launch of tools like ChatGPT caused cheating concerns. With a few prompts, kids had ChatGPT generating grade-A papers instead of reading, learning, and writing themselves. Schools initially responded by banning AI-writing tools. But many soon realized that kids do need to learn how to use it responsibly. Other schools took the prudent step of creating an “Ethical Use of AI” policy.
Then AI chatbots started showing up in search engines like Google and Bing, and in social media. The most prominent is Snapchat's "My AI," which many young people have deemed super creepy. While shopping online and interacting with customer service "chat," many of us have now experienced the power of AI responding to our questions.
It was only a matter of time before such powerful technology would be used to fuel exploitation. Although explicit, fake images of celebrities have been around for years, they were clunky and clearly fabricated. But once the technology became scalable and scarily realistic, Pandora's box was opened. Now anyone in a "normal" picture or video can become a victim of extremely realistic fake imagery or video content.
Including fake porn.
Or, consider the story of the mom who received a fake kidnapping call that used her daughter’s AI-generated voice.
We’re now going to explain deepfakes, which include nudifying technology, and virtual girlfriends (yes, you read that correctly).
What are Deepfakes?
I like this definition from techtarget.com: "a type of artificial intelligence used to create convincing image, audio, and video hoaxes." The same source notes that "[deepfakes] are typically used maliciously and intended to spread false information." Deepfakes are a form of AI built on deep learning, which is where the name originates.
A few recent examples of deepfakes:
- A fake Tom Hanks was shown in a full commercial selling dental insurance.
- Videos of fabricated atrocities from the Israel-Hamas conflict are prominent on TikTok.
- Keanu Reeves, Tom Cruise, and Robert Downey, Jr. have fully fabricated social media channels of their likeness.
It's almost impossible to tell the difference at first glance! These social media accounts are run by Metaphysic, a pioneer of deepfake technology.
Deepfakes can easily alter facial expressions, manipulate speech, and even take someone's face and put it onto another person's body. You can start to see how this could be used to cause harm.
There's also an entire class of apps and websites that "nudify" pictures. Just upload any actual photo of a girl (almost all of the technology has been trained on female images – not males) and the app will remove her clothes, showing a version of her naked. This is just another form of deepfake nudity – placing a real person's face on a fake body in an image or video. And the technology is so realistic that it's almost impossible to know it's fake.
A group of over 30 teen girls from a New Jersey high school were the victims of deepfake nudes generated by classmates. One girl stated:
“We’re aware that there are creepy guys out there…but you’d never think one of your classmates would violate you like this.”
What are the Dangers of Deepfake Technology?
Identity theft becomes much easier when someone else can replicate your face! Especially when you consider biometric security like facial recognition.
False information can be created at scale, especially during an election year. The opposition can deepfake a political speech, making any public figure say anything. This means reputations can be ruined instantly in our "quick-to-cancel culture." Consider the true story of NY students who created a false video of their principal with racist and explicit themes.
Fake voices could be used by bad actors to prey on elderly individuals. A Colorado mom received a call from an unknown number and heard her daughter crying, desperately explaining that she’d been kidnapped by men and was being held for ransom (it was all untrue).
Like the girls from New Jersey, students can be sexually harassed and victimized by classmates spreading fake explicit images – which can have a real impact on opportunities for higher education and career paths.
Non-consensual pornography can now easily be created by adults of other adults.
Deepfake and nudify websites change daily, which means most parental controls will always be a step behind in blocking these sites. We tested 43 deepfake, nudify, and virtual girlfriend sites against popular parental controls like Covenant Eyes, CleanBrowsing, Apple's "Limit Adult Websites," Bark, and Google's SafeSearch. Only a couple performed well; the rest did very poorly. All but two of the providers have since corrected the gaps. But new sites are being added daily.
AI-generated child sexual abuse material (AIG-CSAM) can be created by pedophiles at scale, fueling a global CSAM epidemic and crimes like sextortion, where teens can be pressured into sending money to avoid having fake photos circulated (photos so realistic that the teens can't tell they're fake).
On a human level, there are other harms we must consider. For example, thousands of minors in poor countries are labeling images and content to train AI models so that the models become more accurate.
Also, consider the impact of fully customized virtual girlfriends. It’s exactly what it sounds like. You can generate an image or video of any female with whatever features you request. She can text your phone, call you, send you nudes, and carry on a fake, but very convincing virtual relationship.
One more observation: most of the deepfake and AI-generation sites we visited have a Discord server. Discord has a strong male gamer user base, stereotypically an audience that can be a bit more socially awkward and for whom a virtual girlfriend might feel easier than a real relationship. Consider this perverse language used on one website targeting young males:
“Imagine wasting time taking her out on dates, when you can just use Undress AI to get her nudes.”
What can Parents do to Help Prevent Deepfakes?
As we stated in the opening, there aren't many technical solutions that can prevent this issue. Everyone has a smartphone and can take pictures of anyone. But here are 15 ways to address the risk of deepfakes:
- Relationally, let your teen read this post. Have a curious and calm conversation about how AI can benefit humanity. Then talk about the risks and the consequences. You're not waving a disappointed "boomer" finger at AI. But this new version of "the internet," far more powerful than the "broken" version we have today, is advancing faster than our ability to make it do less harm to humans.
- Delay, delay, delay. A smaller digital footprint creates a lower risk of digital harm. Always. #delayistheway
- If your child has an Android phone, in addition to using Family Link's filtering, use Bark's Premium software or Covenant Eyes to detect explicit text and pornography on the screen. Why? Because the URLs (websites) for deepfakes change daily. A "never allow" list that tries to block these sites is good but must be updated constantly (which is untenable).
- If your child has an iPhone, use the "Limit Adult Websites" filter. Due to iOS constraints, no app can fully analyze the screen, but Bark, Canopy, and Covenant Eyes can see some text and some background activity and are essential. Parents will still have to rely heavily on conversation and connection, since few technical solutions work on iPhones. Note: we recently shared the results of our deepfake URL testing with companies, including Apple, and all are addressing the gaps.
- We would like a child’s first taste of social media to be on a Bark Phone because parents will have the best chance of intervention if a child is exploring the websites or is the victim of sextortion.
- For both iPhones and Androids, parents should control all app downloads. Deepfake and face-swap apps are prevalent and age-rated as young as 4+, which is horrible.
- Agree on a code word or unique question you can ask if you receive an unusual phone call from your child. Ensure your child also knows how to handle calls from people they don’t know (tell, block, delete! Show them this video).
- Talk to your tween and teen children about sextortion. Soon! Our post explains everything.
- If your child is on Instagram, TikTok, BeReal, or Snapchat, keep their social media following very small: fewer than 100 people, if possible. Have them imagine handing every photo they post to every one of those people.
- Parents and caregivers must also be very careful with the photos we share online. Consider sharing most photos of your children only with family over text.
- If your child has Instagram, be very careful with what they share in their public profile. Never include their Snapchat profile.
- If your child has Instagram and/or TikTok, make sure their account is private (this should be the default if the correct birthday was used).
- Discuss fake news. Does your child have a few techniques for spotting it (e.g., checking multiple sources)? If something sounds too good to be true, it probably is.
- Remove phones from schools, because they are just one more place where photos can be taken. We've written extensively about this topic.
- Pray. I know not all of our followers are individuals who pray, so this last step won't apply to everyone, but I must mention it for those who do. This is an evil issue. Please treat it as such.
What if I have more questions about deepfakes? How can I stay up to date?
Two actions you can take!
- Subscribe to our tech trends newsletter, the PYE Download. About every 3 weeks, we’ll share what’s new, what the PYE team is up to, and a message from Chris.
- Ask any AI and deepfake-related questions in our private parent community called The Table! It’s not another Facebook group. No ads, no algorithms, no asterisks (and no AI!). Just honest, critical conversations and deep learning! For parents who want to “go slow” together. Become a member today!
*Disclosure: Some of the links in this post are affiliate links, meaning, at no additional cost to you, we will earn a commission if you click through and make a purchase. We constantly test products to make sure we only recommend solutions that we trust with our own families.
Chris McKenna, Founder: A man with never-ending energy when it comes to fighting for the safety and protection of children. Chris practices his internet safety tips on his four amazing children and is regularly featured on news, radio, and podcasts for his research. His 2019 US Senate Judiciary Committee testimony was the catalyst for draft legislation and ongoing discussion that could radically change online child protection laws and earned PYE the NCOSE Dignity Defense Alert Award in 2020. The PYE team has performed over 1,700 presentations at schools, churches, and nonprofits and was featured in the Childhood 2.0 movie. Other loves include running, spreadsheets, nature, and candy.