If you’re on social media, it’s highly likely you’re seeing your friends, celebrities and favorite brands transforming themselves into action figures through ChatGPT prompts.
That’s because, lately, artificial intelligence chatbots like ChatGPT are not just for generating ideas about what you should write ― they can now create realistic doll-like images, too.
Once you upload an image of yourself and tell ChatGPT to make an action figure with accessories based on the photo, the tool generates a plastic-doll version of you that looks like a toy displayed in retail packaging.
While the AI action figure trend first got popular on LinkedIn, it has gone viral across social media platforms. Actor Brooke Shields, for example, recently posted an image of an action figure version of herself on Instagram that came with a needlepoint kit, shampoo and a ticket to Broadway.
People in favor of the trend say, “It’s fun, free, and super easy!” But before you share your own action figure for all to see, you should consider these data privacy risks, experts say.
One potential con? Sharing so many of your interests makes you an easier target for hackers.
The more you share with ChatGPT, the more realistic your action figure “starter pack” becomes — and that can be the biggest immediate privacy risk if you share it on social media.
In my own prompt, I uploaded a photo of myself and asked ChatGPT to “Draw an action figure toy of the person in this photo. The figure should be a full figure and displayed in its original blister pack.” I noted that my action figure “always has an orange cat, a cake and daffodils” to represent my interests in cat ownership, baking and botany.
But these action figure accessories can reveal more about you than you might want to share publicly, said Dave Chronister, the CEO of cybersecurity company Parameter Security.
“The fact that you are showing people, ‘Here are the three or four things I’m most interested in at this point’ and sharing it to the world, that becomes a very big risk, because now people can target you,” he said. “Social engineering attacks today are still the easiest, most popular way for attackers to target you as an employee and you as an individual.”
Tapping into your heightened emotions is how hackers get rational people to stop thinking logically. These cybersecurity attacks are most successful when the bad actor knows what will cause you to get scared or excited, and click on links you should not, Chronister said.
For example, if you share that one of your action figure accessories is a U.S. Open ticket, a hacker would know that an email about tennis tickets could fool you into sharing your banking and personal information. In my own case, if a bad actor tailored their phishing email around orange-cat fostering opportunities, I might be more likely to click than I would on a different scam email.
So maybe you, like me, should think twice about using this trend to share a hobby or interest that is uniquely yours on a large networking platform like LinkedIn, a site job scammers are known to frequent.
The bigger issue might be how normal it has become to share so much of yourself to AI models.
The other potential data risk is that ChatGPT, or any tool that generates images through AI, may store your photo and use it to retrain future models, said Jennifer King, a privacy and data policy fellow at the Stanford University Institute for Human-Centered Artificial Intelligence.
She noted that with OpenAI, the developer of ChatGPT, you must affirmatively choose to opt out and tell the tool to “not train on my content,” so that anything you type or upload into ChatGPT will not be used for future training purposes.
But many people will likely stick to the default of not disabling this feature, because they do not fully understand it’s an option, Chronister said.
Why could it be bad to share your images with OpenAI? The long-term implications of OpenAI training a model on your image are still unknown, and that in itself could be a privacy concern.
OpenAI states on its site: “We don’t use your content to market our services or create advertising profiles of you — we use it to make our models more helpful.” But what kind of future help your images are going toward is not explicitly detailed. “The problem is that you just don’t really know what happens after you share the data,” King said.
Ask yourself whether “you are comfortable helping OpenAI build and monetize these tools. Some people will be fine with this, others not,” King said.
Chronister called the AI doll trend a “slippery slope” because it normalizes sharing your personal information with companies like OpenAI. You may think, “What’s a little more data?” until one day in the near future you find yourself sharing something about yourself that is best kept private, he said.
Thinking about these privacy implications interrupts the fun of seeing yourself as an action figure. But it’s the kind of risk calculus that keeps you safer online.