Michael Saylor has warned of a growing wave of artificial intelligence (AI)-generated deepfake videos featuring him.
Saylor disclosed that his team works to remove approximately 80 such fake videos every day, most of which promote Bitcoin scams.
Deepfake Videos Promoting Bitcoin Scams
Warning There is no risk-free way to double your #bitcoin, and @MicroStrategy doesn’t give away $BTC to those who scan a barcode. My team takes down about 80 fake AI-generated @YouTube videos every day, but the scammers keep launching more. Don’t trust, verify. pic.twitter.com/gqZkQW02Ji

— Michael Saylor (@saylor) January 13, 2024
Saylor reiterated his warning, urging the community, “Don’t trust, verify,” and noting that “the scammers keep launching more.”
This revelation follows a surge in reports from X users who encountered fake AI-generated videos featuring Saylor promising to double viewers’ money.
Fake @saylor ads made with AI on @YouTube promising to double people’s money instantly.
Why is @Google consistently so so bad at stopping scams? pic.twitter.com/aKNfK0iZN9
— Bobby Shell (@iBobbyShell) January 9, 2024
These videos prompt unsuspecting viewers to scan QR codes, directing their Bitcoin to scammer-controlled addresses.
Amid these developments, it was reported that Michael Saylor sold a portion of his MicroStrategy stock, between 3,882 and 5,000 shares per day from January 2 to 10, a move that potentially netted him a profit of nearly $20 million.
Saylor had initially planned to sell a maximum of 5,000 shares daily from January 2, 2024, until April 26, 2024, totaling 400,000 shares valued at nearly $200 million.
Deepfake Threat Escalates
This incident is reminiscent of a similar situation in 2022, when fake videos of Elon Musk surfaced on various platforms promoting cryptocurrency schemes with enticing returns. The rise of deepfake content has become a cause for concern across the crypto industry. Solana co-founder Anatoly Yakovenko also recently fell victim to such manipulative videos.
In an interview with The Verge, Austin Federa, head of strategy at the Solana Foundation, expressed concern over the substantial increase in deepfakes and other AI-generated content.
Jerry Peng, a researcher at 0xScope, added that AI could play a crucial role in creating more realistic deepfakes, posing a significant threat to unsuspecting crypto users.
U.S. law enforcement officials issued a warning on January 9, stating that advances in AI may facilitate hacking, scams, and money laundering by lowering the technical know-how required for such crimes.
However, Rob Joyce, director of cybersecurity at the National Security Agency, argued that AI could also aid authorities in tracking down and combating illegal activities more efficiently.