If you've been scrolling through Instagram lately, you'll find that many of your friends have been posting selfies that are digital avatar versions of themselves thanks to a popular artificial intelligence app.
The social media trend involves users downloading the Lensa app and paying $7.99 (up from $3.99 just days ago) to access its "Magic Avatars" feature, which transforms selfies into digital art in themes ranging from pop art to princesses to anime.
But the wildly popular app, created by photo-editing tech company Prisma Labs, has drawn backlash from critics who believe it can violate users' data privacy and artists' rights.
Some users have also grown concerned about how Lensa and similar photo-altering apps retain their data after they submit their photos.
“I wrote about the dangers of AI-art & facetuning apps like lensa for @WIRED. the implications are far more sinister than you can imagine. https://t.co/j1LQq0tf3z” — 🌬Doctrix Snow (she/her)
According to Prisma’s terms and conditions, users:
“retain all rights in and to your user content.”
At the same time, anyone using the app is consenting to grant the company a sweeping license to the digital artwork created through it.
Prisma's terms further state that users grant the company a:
“Perpetual, revocable, nonexclusive, royalty-free, worldwide, fully-paid, transferable, sub-licensable license to use, reproduce, modify, adapt, translate, create derivative works.”
Separately, Lensa's privacy policy states that it:
"Does not use photos you provide … for any reason other than to apply different stylized filters or effects to them.”
Critics were concerned that the company deletes user images from its cloud servers only after it has used them to train its AI.
“A few months and years later when you all see your faces used in AI art as data sets, I wonder how you will feel” — (sab) 睿燄 • Their Totality
“@LaurynIpsum A lot of these generators are trained on Danbooru. So here's a terrifying thought: what if you put a child's face into it?

If you search "novelai" on pixiv, you'll see an endless sea of photo-realistic CP, generated by a company in Delaware and sold at a profit” — Lauryn Ipsum
The viral app uses Stable Diffusion, a text-to-image model essentially "'trained' to learn patterns through an online database of images called LAION-5B," according to Buzzfeed News.
"Once the training is complete, it no longer pulls from those images but uses the patterns to create more content. It then 'learns' your facial features from the photos you upload and applies them to that art."
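The two-stage process described above can be sketched in a toy Python model. This is purely illustrative, not Lensa's or Stability AI's actual code: every class and method name here is invented, and real diffusion models learn statistical weights rather than lookups, though the data flow is analogous.

```python
# Toy illustration of the two-stage pipeline described above.
# NOT real Stable Diffusion code; all names are invented for illustration.

class ToyAvatarModel:
    def __init__(self):
        self.style_patterns = {}   # "patterns" distilled during training
        self.face_features = None  # learned later from the user's selfies

    def pretrain(self, labeled_images):
        # Stage 1: distill a pattern per style from a large image dataset.
        # After this step, the original images themselves are not needed.
        for style, image in labeled_images:
            self.style_patterns[style] = f"pattern-of-{style}"

    def fine_tune(self, selfies):
        # Stage 2: extract the user's facial features from their uploads.
        self.face_features = f"features({len(selfies)} selfies)"

    def generate(self, style):
        # New art = a learned style pattern + the user's facial features.
        pattern = self.style_patterns.get(style, "generic-pattern")
        return f"{pattern} + {self.face_features}"

model = ToyAvatarModel()
model.pretrain([("anime", "img_001"), ("pop", "img_002")])
model.fine_tune(["selfie_a", "selfie_b"])
print(model.generate("anime"))
```

The sketch mirrors the claim in the quote: once "training" distills the patterns, generation combines those patterns with the user's uploaded faces rather than pulling the original images directly.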
For many artists, the source of that knowledge is the point of contention, as LAION-5B scrapes publicly available images from sites like Google Images, DeviantArt, Getty Images and Pinterest without compensating the artists.
“If you've recently been playing around with the Lensa App to make AI art "magic avatars" please know that these images are created with stolen art through the Stable Diffusion model.” — meg rae
An anonymous digital artist told the media, "This is about theft."
"Artists dislike AI art because the programs are trained unethically using databases of art belonging to artists who have not given their consent."
Many Twitter users voiced their concerns online.
“"the ai is just using millions of images of already-existing art to make its amalgamation it’s not stealing from artists"

SO WHERE DOES THE ART COME FROM????” — Dιɳι Lυɳα ✨✍🏾
“Freelance artist today watching everyone share their Lensa drawings” — Eva Styles
“These are all from posts from friends on my timeline. It didn’t take much searching to find portraits with signature fragments. Most people’s sets have at least one.” — Lauryn Ipsum
“I’m cropping these for privacy reasons/because I’m not trying to call out any one individual. These are all Lensa portraits where the mangled remains of an artist’s signature is still visible. That’s the remains of the signature of one of the multiple artists it stole from.

A 🧵” — Lauryn Ipsum
“I was confused about this too. Y’all, the AI isn’t compiling images to make unique art. It’s stealing art, removing the details, & putting ur face over it. Those are someone’s pre-existing drawings. That AI is an art theft machine.” — Honey Ma
“Hey folks. I think the time for plausible deniability about AI art has passed. Artists have told you that it’s harmful and exploitative, and I still see people going "I have mixed feelings about AI art but… here’s what I fished out of a generator!"” — Spirit Margaret Hall-Owen 🎃💀
“Ppl are really determined to argue w/ artists, who are often themselves hustling to get work/pay bills, that it’s ok for ai companies to steal art/data all so you can pay $8 for unoriginal uglass animated selfies you’ll use as a dp for 2 weeks before hopping on the next fad.” — iris 🌱
Another problem area is the AI's tendency to produce images that sexualize women and that are racially inaccurate, essentially anglicizing people of color.
The CEO of Prisma Labs emphasized that the AI:
“does not have the same level of attention and appreciation for art as a human being.”
Here are some examples of how such apps can perpetuate stereotypes and cause harm in doing so.
“Is it just me or are these AI selfie generator apps perpetuating misogyny? Here’s a few I got just based on my photos of my face.” — Brandee Barker
“Also the fact that it depicts femme bodies completely naked with ZERO prompt is creepy and can absolutely be detrimental to you one day when someone claims the AI art is based on a n*de you sent them idk man” — Aki Simp 🔜 Holmat
“Also the AI art app is clearly racially biased it literally cannot make anyone a shad lighter than wonder bread. Don't use that thing. Don't give it data 😭” — Breana (Williams) Navickas
Cybersecurity expert Andrew Couts, a senior security editor at Wired who oversees privacy, national security and surveillance coverage, told Good Morning America that it's impossible to know what happens to the images uploaded to the app.
"It's impossible to know, without a full audit of the company's back-end systems, how safe or unsafe your pictures may be."
"The company does claim to 'delete' face data after 24 hours and they seem to have good policies in place for their privacy and security practices."
Couts added he wasn't too worried about the photos since most people's faces are already available on their social media pages.
However, he added:
"The main thing I would be concerned about is the behavioral analytics that they're collecting."
"If I were going to use the app, I would make sure to turn on as restrictive privacy settings as possible."
He further encouraged users to tighten privacy restrictions on their smartphones.
"You can change your privacy settings on your phone to make sure that the app isn't collecting as much data as it seems to be able to."
"And you can make sure that you're not sharing images that contain anything more private than just your face."