
Behind the Screens: BitMind’s Co‑Founder on Battling the $200M Deepfake Scam Surge

Q1. What inspired you and your co‑founders to launch BitMind, and how did your backgrounds lead you to focus on AI developer tools and deepfake detection?

The initial inspiration came from the concept of decentralized AI and a desire to build a world free from the pitfalls that plagued social media and other technology sectors. Getting AI right felt existential. We were watching the massive progress generative AI was making while major world events, such as the 2024 elections, were unfolding.

My undergraduate studies in AI, experience at Amazon doing recommendation systems, and crypto experience at NEAR made decentralized AI a very good skill fit. My co-founder Dylan, who was my college roommate and a computer vision engineer, along with our network of other amazing AI engineers provided a strong foundation to explore the unsolved problem of deepfake detection.

Q2. The “Q1 2025 Deepfake Incident Report” recorded 163 deepfake‑related scams and over $200 million in losses between January and April. In your view, what factors have driven such a dramatic rise in these attacks?

The primary factor is the increased accessibility of generative AI tools, which has made it essentially frictionless, both technically and economically, to produce sophisticated deepfakes. This creates an attack vector for scammers that is still relatively new and unexplored, and these AI-driven attacks are much harder to detect than the standard email phishing attacks that have been around for decades.

Q3. Report data shows 41% of incidents target celebrities and politicians, but 34% involve ordinary citizens. Why are private individuals now nearly as attractive to scammers as high‑profile figures?

The internet enables fraud and abuse to scale similarly to social media or news dissemination. No one is immune, and everyone must remain vigilant. Small scams that are widely distributed can be just as profitable for criminals as scamming high-net-worth individuals.

Q4. One scheme netted $25 million by duping a Singapore firm with a CFO deepfake video call. Can you walk us through how this scam likely unfolded and why existing corporate defenses failed?

This was a fascinating case that combined highly sophisticated social engineering with deepfakes. The criminals groomed the victim over an extended period before initiating a panicked call that prompted quick, irrational decisions leading to the massive loss. These operations are not one-off events; they can be extremely sophisticated and prolonged.

Q5. BitMind’s deepfake detection solutions leverage AI—can you explain, at a high level, how your algorithms identify manipulated media and stay ahead of increasingly sophisticated forgeries?

A variety of sophisticated computer vision techniques are used to identify manipulated media, but our models typically employ large Convolutional Neural Networks (CNNs) that analyze different segments of image/video frames to find patterns indicative of either real or AI-generated content. They stay ahead by dynamically competing in an open-source competition on Bittensor that continuously introduces new real and generative data.
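To make the patch-based idea concrete, here is a minimal sketch (not BitMind's actual code): an image is tiled into segments, each segment gets a per-patch "fake" score, and the scores are aggregated into one verdict. The function `fake_score_patch` is a hypothetical stand-in for a trained CNN forward pass.

```python
# Hedged sketch of patch-based deepfake scoring, assuming a per-patch
# classifier exists. fake_score_patch is a placeholder, NOT a real model.

from typing import List, Tuple


def split_into_patches(pixels: List[List[float]], patch: int) -> List[List[List[float]]]:
    """Tile a 2-D grid of pixel values into non-overlapping patch x patch blocks."""
    patches = []
    for r in range(0, len(pixels) - patch + 1, patch):
        for c in range(0, len(pixels[0]) - patch + 1, patch):
            patches.append([row[c:c + patch] for row in pixels[r:r + patch]])
    return patches


def fake_score_patch(block: List[List[float]]) -> float:
    """Placeholder for a CNN forward pass; uses mean intensity so the
    sketch runs end to end."""
    flat = [v for row in block for v in row]
    return sum(flat) / len(flat)


def classify_image(pixels: List[List[float]], patch: int = 2,
                   threshold: float = 0.5) -> Tuple[str, float]:
    """Score every patch, average into a confidence, and threshold it."""
    scores = [fake_score_patch(p) for p in split_into_patches(pixels, patch)]
    confidence = sum(scores) / len(scores)
    label = "ai-generated" if confidence >= threshold else "real"
    return label, confidence
```

In a production system the per-patch scores would come from a trained network, and the aggregation step is what lets localized manipulations (a swapped face in an otherwise real frame) still move the overall verdict.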

Q6. Beyond detection, BitMind offers cutting‑edge AI tools for developers. How do these products complement your fraud‑prevention suite, and what unique value does this end‑to‑end ecosystem deliver?

BitMind has made great progress building on Bittensor and has developed powerful infrastructure that scales our services to over 50k monthly active users. We productized this infrastructure so other Bittensor and AI teams can scale their services effectively.

Q7. Any powerful detection system risks false positives. How do you strike the balance between aggressive screening for deepfakes and minimizing erroneous flags that could erode user trust?

We optimize for being the most accurate detection solution in the world, and we expose APIs that let applications decide how to display results. False positives remain an inherent risk, and it is crucial not to censor content but to empower consumers and product owners with choices. In our own applications we currently display the overall classification and a confidence score to give end users the most useful information.

Q8. Governments and platforms are scrambling to regulate synthetic media. What regulatory or policy measures would you recommend to curb deepfake abuse while preserving legitimate AI innovation?

This remains a hotly debated topic, particularly as AI intersects with copyright and intellectual property law in complex ways. Copyright and IP frameworks, designed for human creators, struggle with AI because of ambiguities around authorship. Governments should regulate AI by applying existing laws to AI outputs (e.g., copyright) and by enforcing existing statutes against defamation, fraud, abuse, and non-consensual imagery to combat deepfake harms like scams. It is too early for sweeping AI-specific regulation: the space is still changing rapidly, and premature rules risk hindering progress and inviting regulatory capture.

Q9. Fighting deepfake fraud is a collective endeavor. Which sectors (financial services, social media platforms, government agencies) has BitMind partnered with, and what joint efforts are proving most effective?

BitMind is used extensively across social media and news platforms. We have begun to explore different partnerships with financial services such as KYC, institutional investment services, and financial fraud services. We have not signed any government contracts yet, as our initial go-to-market strategy focuses on direct-to-consumer approaches.

Q10. Looking ahead, what key features or innovations are on BitMind’s product roadmap to ensure you stay at the forefront of deepfake detection and AI tooling?

We have several exciting developments in the pipeline rolling out in Q3. First, we plan to introduce segmentation features so users can identify specific modified parts of images or videos. We are also launching a native mobile app that allows users to scan content directly from any social media platform using BitMind. In the longer term, we aim to expand into additional classifications and modalities, such as proof-of-human verification and audio detection.

Disclaimer: This article is copyrighted by the original author and does not represent MyToken's views and positions. If you have any questions regarding content or copyright, please contact us (www.mytokencap.com).