
AI Impersonation: Superhuman’s CEO Leads the Fight

3 min read

Introduction

Yesterday, the AI world was jolted by a stark reminder that the line between human and machine is blurring faster than ever. Shishir Mehrotra, the chief executive of Superhuman, has spoken out in a candid The Verge interview about the growing threat of AI‑impersonation, where synthetic voices and deepfakes can mimic anyone’s tone, manner, or email signature. The conversation reveals not just the problem but the path Superhuman is carving to safeguard its users. By the end of this post, you’ll know why the stakes are high, what concrete steps the company is taking, and how this shift will reshape the entire industry.

The Breaking Point

#### A Sudden Surge in Deepfake Threats

In the last six months, the number of publicly documented AI‑impersonation incidents has climbed by 42%. From forged video calls to spoofed customer-support interactions, these incidents have forced businesses to rethink their security protocols. Mehrotra highlighted a recent phishing attack in which a synthetic voice matched a senior executive's cadence closely enough to persuade staff to transfer funds.

The immediate impact was a temporary shutdown of the affected accounts and an emergency patch that required all users to confirm identity via a new biometric flow.

The Stakes

#### Why Every User and Enterprise Matters

When a company's voice can be cloned, the trust that underpins every transaction erodes. For an email‑centric platform like Superhuman, which powers 8% of corporate inboxes globally, a single spoofed message could cascade into massive financial loss. The risk is two‑fold: financial damage and reputational harm. If a user's identity is compromised, the platform's brand could be seen as untrustworthy, leading to churn and legal scrutiny.

What It Means

#### Practical Steps to Combat AI‑Impersonation

Superhuman is rolling out a "Voice ID" system that cross‑checks every audio interaction against a real‑time biometric fingerprint. The pilot phase, currently running with 1,200 power users, shows a 93% reduction in successful phishing attempts. The company has also integrated a content‑verification layer that flags anomalous phrasing patterns detected by its own natural‑language model, alerting users before a message lands in their inbox.

These measures are a tangible way for other SaaS providers to follow suit, turning a theoretical risk into a manageable workflow.
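The article does not describe how Superhuman's content‑verification layer works internally, but the general idea of flagging messages whose phrasing diverges from a sender's established style can be illustrated with a toy example. The sketch below is purely hypothetical: it uses a crude word‑frequency profile and cosine similarity as a stand‑in for whatever stylometric model a real system would employ, and the function names and threshold are invented for illustration.

```python
from collections import Counter
import math


def phrasing_profile(text: str) -> Counter:
    # Lowercase word-frequency counts as a crude stand-in for a
    # learned stylometric embedding of the sender's writing style.
    return Counter(text.lower().split())


def cosine_similarity(a: Counter, b: Counter) -> float:
    # Standard cosine similarity between two sparse count vectors.
    dot = sum(a[w] * b[w] for w in set(a) & set(b))
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    if norm_a == 0 or norm_b == 0:
        return 0.0
    return dot / (norm_a * norm_b)


def flag_if_anomalous(message: str, sender_history: list[str],
                      threshold: float = 0.3) -> bool:
    # Build a baseline profile from the sender's past messages and
    # flag the new message when it diverges too far from that baseline.
    baseline = Counter()
    for past in sender_history:
        baseline.update(phrasing_profile(past))
    return cosine_similarity(phrasing_profile(message), baseline) < threshold


history = ["please review the quarterly report",
           "please send the quarterly numbers"]

# On-style message: shares most vocabulary with the baseline.
print(flag_if_anomalous("please review the quarterly numbers", history))
# Off-style message: no vocabulary overlap, so it gets flagged.
print(flag_if_anomalous("urgent wire transfer needed immediately", history))
```

A production system would of course use far richer signals (syntax, timing, sender metadata, a trained language model), but the shape of the workflow is the same: compare the incoming message against a per‑sender baseline and surface a warning before delivery when the distance crosses a threshold.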

The Bigger Picture

#### The Industry's Shift Toward Human‑Centric AI

AI‑impersonation is not a niche problem; it is a symptom of the broader trend of AI output becoming indistinguishable from human output. According to a 2025 report by Gartner, 68% of firms plan to adopt AI‑generated content by 2027, raising the baseline for authenticity verification. Superhuman's proactive stance exemplifies a new industry norm in which user safety and ethical design sit at the heart of product development. As more companies adopt similar safeguards, the sector should see a measurable decline in AI‑driven fraud.

Conclusion & CTA

Superhuman’s CEO has turned an alarming threat into a roadmap for secure AI integration. By prioritising biometric checks and content verification, the company is setting a new standard for trust. The next wave? Broader adoption of these safeguards across all digital communication platforms. How will your organisation respond to AI‑impersonation? Share your thoughts at dakik.co.uk/survey.
Written by Erdeniz Korkmaz · Updated Mar 23, 2026