Does Microsoft Have a DeepFake Problem?
Sales and marketing teams are now adopting deepfake profiles to automate more of their work on platforms such as LinkedIn.
A new report says fake accounts with computer-generated faces are now a fixture on the professional networking website LinkedIn. Two Stanford researchers have found widespread use of fake LinkedIn accounts created with AI-generated profile photos.
Companies on LinkedIn are starting to use deepfake images in their sales and marketing strategies as the proliferation of fake accounts online continues. So what’s going on here?
It turns out the deepfake problem has been growing on LinkedIn for quite some time. NPR says LinkedIn has deleted about 15 million fake accounts so far.
The media companies investigating this issue alone found over 1,000 deepfakes in use by companies on LinkedIn for sales and marketing.
Something just didn’t look right about the profile image, and that’s because the person didn’t really exist.
The B2B Sales Era of Deepfake Adoption
I really like companies like LinkedIn with real KYC policies, where a real identity can be verified. But what happens when this trust point starts to break down? Deepfakes, realistic but AI-generated profiles, were once mostly linked to the world of scammers and hackers, but are now increasingly being used by companies in an effort to increase sales.
I’m conducting a Poll about this here.
I receive a lot of spam in my LinkedIn inbox, mostly from deepfakes, especially from foreign actors out of China. Have you noticed them on LinkedIn?
I’ve met a lot of Shannons in my time. All of us have; we just haven’t really noticed.
Deepfake Creators are the New Social Media Bots
NPR found that many of the LinkedIn profiles seem to have a far more mundane purpose: drumming up sales for companies big and small.
With marketing and sales using deepfakes on LinkedIn, primarily a B2B advertising network, it raises the question of what will be done about it.
In Asia, digital personas are already brand ambassadors, news broadcasters and influencers of various kinds.
Deepfake profiles can be socially engineered to be more attractive. They might just be the next side-gig in the Creator Economy. Deepfake technology has gotten so good at creating human faces that humans no longer know when they are talking to a real person. Even more disturbing, a recent study revealed that humans trust a fake face more than a real one.
Why not just hire less sales and marketing folk and create more Deepfake bots? I can see the appeal for companies pushing their B2B products to other companies.
By using fake profiles, companies can cast a wide net online without beefing up their own sales staff or hitting LinkedIn's limits on messages. Demand for online sales leads exploded during the pandemic as it became hard for sales teams to pitch their products in person.
A.I. Deepfakes Cause Existential Cybersecurity and Reputation Issues
Fake accounts have historically been used by companies like Facebook to inflate the reported number of accounts on their platforms. However, as computer-generated LinkedIn profile photos become more sophisticated and harder to detect, they illustrate how a technology once used to propagate misinformation and harassment online has made its way into the corporate world. This is a major cybersecurity problem for Microsoft and for the future of corporate sales and marketing.
If a company outsources some of their “marketing” activities, anyone could be creating deepfakes for profit. From a business perspective, making social media accounts with computer-generated faces has its advantages: It's cheaper than hiring multiple people to create real accounts, and the images are convincing.
According to NPR, which looked into the matter, when the real people involved were asked about the deepfakes, they said they knew only that “outside marketers” had been hired; they had no idea computer-generated images were being used as a kind of digital cold caller. Just as VPNs obscure traffic, deepfake creators and companies can presumably offer their services in ways that are hard to trace.
Ironically, LinkedIn Learning even has a course on understanding the impact of deepfake videos. LinkedIn, which boasts over 800 million professionals, recently had to axe its entire LinkedIn China segment. With at least 15 million “fake profiles” deleted, the cybersecurity war over our trust is now fully online in 2022.
The process is simple: a bot with an AI-generated profile photo contacts an unsuspecting LinkedIn user and, if the target shows interest, they get passed on to a real salesperson to continue the conversation.
The Rise of the Synthetic Internet
A recent study found faces made by AI have become "indistinguishable" from real faces. People have just a 50% chance of guessing correctly whether a face was created by a computer — no better than flipping a coin. However, account details, name choices and other signals on a LinkedIn account (such as an account history that isn't believable, or an account that is too recent), along with a reverse image search, can help detect a fake. There are red flags obvious to those of us who get bombarded by them each day in our LinkedIn inboxes and connection requests.
"If you ask the average person on the internet, 'Is this a real person or synthetically generated?' they are essentially at chance," said Hany Farid, an expert in digital media forensics at the University of California, Berkeley, who co-authored the study with Sophie J. Nightingale of Lancaster University.
As the Metaverse evolves, A.I.-generated identities and content will become more common than real human content. If we already have trouble distinguishing clickbait, fake news and sentiment amplification designed to hack our attention and algorithms, what happens in a world of sophisticated A.I. cloning of humans? Clearly the synthetic internet provides a lot of opportunities for fraud, phishing and other cybersecurity attacks at the human level.
The Synthetic Internet can create human personas we immediately trust more. You don’t have to go to LinkedIn’s Creator school to behave like a trustworthy human spouting posts about empathy, inclusion and feel-good stories. You could create a synthetic clone and follow their best practices and fool LinkedIn’s own recommendation algo. In theory, Microsoft could itself do this and not ever have to pay real creators.
Their study also found people consider computer-made faces slightly more trustworthy than real ones. Farid suspects that's because the AI sticks to the most average features when creating a face. The corporate-spam problem will be difficult for LinkedIn moderators to handle at the scale it is likely to reach as soon as 2023.
"That face tends to look trustworthy, because it's familiar, right? It looks like somebody we know," he said.
Metrics online in advertising have been abused before, most notably with how Facebook reported video metrics. But what happens when the basic tenets of trust are abused on a trustworthy platform like LinkedIn? Where can we go?
Screen Rant assures us that deepfake tech has positive implications as well: deepfakes are expected to benefit education as multimedia and interactive lessons grow in classrooms as part of the hybrid-learning, post-pandemic world, and they can help breathe new life into history or predict weather patterns with greater accuracy. I'm not really buying that. In Asia, instead of paying a brand ambassador, why not just engineer one your customers will like? That's a much more believable application, and it's already occurring.
Companies are Using Deepfakes to Cheat Microsoft
Using AI to Cut Down on Hiring Costs
Companies use profiles like these to cast a wide net of potential leads without having to use real sales staff and to avoid hitting LinkedIn message limits. In the sales-driven environment of LinkedIn, more leads and more views are better, and the appeal of synthetic AI and deepfakes is too great.
Just as Twitter bot networks were once used to spam and retweet any message, our social platforms and information systems won't be immune to new advances in deepfake technology, which will definitely come in the form of content automation too, with OpenAI's own GPT-4 expected in the next few months.
In this way, I think we are going to see RPA and no-code platforms partially automate all the roles we see on LinkedIn where HR, sales and marketing folk used to run the streets. In the future, it's more likely to be A.I. doing it. That's the synthetic A.I. world that's coming.
Unfortunately this is just the beginning for deepfake technology. There’s nothing illegal about using such images, and now that deepfake images can fool most people who are not actually looking for a flaw, the practice is a cheap way of drumming up business. Nonetheless, LinkedIn, owned by Microsoft Corp., removed the profiles after being informed about them. But I cannot spend too much time each day reporting them.
If you enjoy stories about A.I. you can join my Newsletter AiSupremacy where I cover those types of articles and trends in more depth.
To spot an A.I. deepfake profile on LinkedIn:
Do a reverse image search.
Check the account's history for believability and a valid work history.
If the inbox message doesn't feel real, it likely isn't.
It's also a red flag if you have no mutual connections and they aren't in the same field as you.
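The checklist above can be sketched as a simple heuristic scorer. This is purely illustrative: the `Profile` fields and the thresholds below are my own assumptions for the sketch, not anything LinkedIn actually exposes, and real detection would need far more signal than this.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class Profile:
    # Hypothetical fields for illustration; LinkedIn offers no such public API.
    account_created: date
    employment_history_years: int
    mutual_connections: int
    same_field_as_viewer: bool

def red_flags(profile: Profile, today: date) -> list[str]:
    """Apply the checklist heuristics and return any red flags found."""
    flags = []
    # A very young account is a common sign of a throwaway fake.
    if (today - profile.account_created).days < 180:
        flags.append("account is very recent")
    # No believable work history behind the profile photo.
    if profile.employment_history_years == 0:
        flags.append("no believable work history")
    # Cold outreach with zero shared connections is suspicious.
    if profile.mutual_connections == 0:
        flags.append("no mutual connections")
    # Outreach from someone outside your field is another warning sign.
    if not profile.same_field_as_viewer:
        flags.append("not in your field")
    return flags
```

For example, a brand-new account with no history, no mutual connections and no overlap with your field would trip all four flags, while a long-established colleague's profile would return an empty list. The point of the design is that no single flag is damning; it's the accumulation that should make you skeptical before you reply.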
Advances in Cybersecurity Won’t Keep up with A.I.
I think it's safe to assume that advances in cybersecurity aren't keeping up with advances in artificial intelligence, and this gap will probably only widen. How A.I. scales no-code platforms could also change how businesses find value and automate certain tasks.
Thanks for reading! Have a good week. To unlock paid content go premium.