    Postmortems Can’t Stop AI-Powered Crypto Fraud

November 4, 2025

    Opinion by: Danor Cohen, co-founder and chief technology officer of Kerberus

    In 2025, crypto risk is a torrent. AI is turbocharging scams. Deepfake pitches, voice clones, synthetic support agents — all of these are no longer fringe tools but frontline weapons. Last year, crypto scams likely hit a record high. Crypto fraud revenues reached at least $9.9 billion, partly driven by generative AI-enabled methods.

    Meanwhile, in 2025, more than $2.17 billion has been stolen — and that’s just in the first half of the year. Personal-wallet compromises now account for nearly 23% of stolen-fund cases.

    Still, the industry essentially responds with the same stale toolkit: audits, blacklists, reimbursement promises, user awareness drives and post-incident write-ups. These are reactive, slow and ill-suited for a threat that evolves at machine speed.

    AI is crypto’s alarm bell. It’s telling us just how vulnerable the current structure is. Unless we shift from patchwork reaction to baked-in resilience, we risk a collapse not in price, but in trust.

    AI has reshaped the battlefield

    Scams involving deepfakes and synthetic identities have stepped from novelty headlines to mainstream tactics. Generative AI is being used to scale lures, clone voices and trick users into sending funds.

The most significant shift isn't simply a matter of scale. It's the speed and personalization of deception. Attackers can now replicate trusted environments or people almost instantly. Defenses must move toward real time just as quickly, not as an optional feature but as a vital part of the infrastructure.

    Outside of the crypto sector, regulators and financial authorities are waking up. The Monetary Authority of Singapore published a deepfake risk advisory to financial institutions, signaling that systemic AI deception is on its radar.

    The threat has evolved; the industry’s security mindset has not.

    Reactive security leaves users as walking targets

Security in crypto has long relied on static defenses: code audits, bug bounties and blocklists. These tools are designed to identify code weaknesses, not behavioral deception.
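To see why static defenses fall short, consider a minimal, hypothetical sketch (not Kerberus's product or any real wallet's code) of a blocklist check. It can only reject addresses that someone has already reported, so an address generated moments ago by an AI-driven scam passes cleanly, and the behavioral context around the transaction is never examined.

```typescript
// Hypothetical static blocklist check. Addresses and names are illustrative only.
const KNOWN_SCAM_ADDRESSES = new Set<string>([
  "0x1111111111111111111111111111111111111111", // placeholder reported addresses
  "0x2222222222222222222222222222222222222222",
]);

function staticCheck(destination: string): "block" | "allow" {
  // Flags only addresses already reported by a previous victim.
  return KNOWN_SCAM_ADDRESSES.has(destination.toLowerCase()) ? "block" : "allow";
}

// A brand-new address minted minutes ago by a scam pipeline sails through,
// even if the user was lured by a cloned "support agent" urging an urgent transfer.
console.log(staticCheck("0x9f3adc0de0000000000000000000000000000000")); // "allow"
```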