Massive Blue and Others Respond as JD Vance and Theo Von Discuss AI Threats

Reading Time: 7 minutes
Published June 24, 2025 7:44 AM PDT


When Vice President JD Vance appeared on comedian Theo Von's "This Past Weekend" podcast on June 7, 2025, their conversation quickly turned to one of the most pressing technological challenges of our time: the rapid proliferation of AI-generated content and its devastating impact on creators, artists, and public figures. What began as a discussion about federal AI legislation evolved into a sobering examination of voice cloning, deepfakes, and the urgent need for protective measures in an era when financial losses from deepfake fraud are projected to climb from $12.3 billion in 2024 to $40 billion by 2027.

The conversation highlighted a critical intersection between policy, technology, and protection that companies like Massive Blue are actively addressing through AI-powered threat detection and content protection platforms.

Federal AI Investment Versus State Protection Tensions

The podcast discussion centered on a contentious provision in federal legislation that would allocate $500 million over the next decade to modernize government systems with AI and automation technologies. However, the bill contained a controversial element that caught both Theo Von and Vance's attention.

"The ban is tucked into a section of the bill that would allocate 500 million over the next 10 years to modernize government systems with the help of AI and automation technologies," Theo Von read during the show. "The ban would not only prevent new state-led regulations of AI, but would also block dozens of states from enforcing preexisting AI regulations and oversight structures."

Vance acknowledged the complexity of the issue, noting tensions between federal efficiency and state-level artist protection. "It feels scary," Vance admitted, before explaining the competing interests at play. "Tennessee obviously has a lot of musicians, and Tennessee wants to protect those musicians from having basically AI steal the production of their artists."

Tennessee has already taken concrete action on this front. On March 21, 2024, Governor Bill Lee signed the Ensuring Likeness Voice and Image Security (ELVIS) Act into law, making Tennessee the first state to specifically protect musicians' voices from unauthorized AI replication. The legislation updates Tennessee's Protection of Personal Rights law to include protections for songwriters, performers, and music industry professionals' voices from the misuse of artificial intelligence.

The act prohibits using AI to clone an artist's voice without consent and can be criminally enforced as a Class A misdemeanor. The bill passed the Tennessee House and Senate with unanimous, bipartisan votes: 93 ayes to 0 noes in the House, and 30 ayes to 0 noes in the Senate. Tennessee's music industry supports more than 61,617 jobs across the state, contributes $5.8 billion to the state's GDP, and fills over 4,500 music venues, making protection of this economic sector a significant priority for the state.

Voice Cloning Threats Target Major Artists

Theo Von and Vance's discussion took on tangible urgency when Vance illustrated the threat with concrete examples that resonate with millions of Americans. "Because one of the big problems with AI is you're going to be able to take somebody's voice and then Taylor Swift's voice or in anybody else's voice, and basically say, oh, okay, well, based on this one song that Robert Plant did 35 years ago, we're going to make a whole new Led Zeppelin song using artificial intelligence, and they want to protect people from that kind of thing happening."

This scenario is far from hypothetical. Consumer Reports reviewed six voice cloning apps and found that four of them have no significant guardrails preventing users from cloning someone's voice without consent. The technology has become remarkably accessible: in some cases, only three seconds of audio is needed to produce a cloned voice that is 85% similar to the original. These are the accessibility gaps that companies like Massive Blue and others focused on threat detection, disruption, and deterrence are working to close through advanced AI-powered protection.

The conversation also touched on the broader implications of AI-generated content, with Theo Von mentioning how "they made a bunch of little babies of all, a lot of podcasters, and now they're doing it with everybody. They got Dang, Aaron Rogers baby, they got left eye from that Milli Vanilli woman or whatever they have in there."

Growing AI Threat Patterns

Despite their casual conversational tone, Theo Von and Vance were addressing a severe and growing problem. Recent data reveals an alarming escalation in AI-powered fraud and content manipulation:

  • Deepfake fraud grew by over 10 times from 2022 to 2023
  • 6% of phishing emails now use AI technology in some form
  • 500,000 voice and video deepfakes were shared on social media in 2024
  • People can only identify deepfakes with 57% accuracy, much lower than the 84% accuracy of top AI detection tools

The entertainment industry faces particular vulnerability. A recent high-profile example occurred in May 2025, when actress Jamie Lee Curtis discovered an AI-generated fake ad using her likeness without consent on Meta's platforms. The fraudulent advertisement repurposed footage from an MSNBC interview Curtis gave during the Los Angeles wildfires, using AI to alter her speech to promote a product she never endorsed. After unsuccessful attempts to reach Meta through proper channels, Curtis publicly called out CEO Mark Zuckerberg on Instagram, prompting the company to remove the ad within hours. Incidents like this show how the detection gap creates opportunities for widespread manipulation and fraud; Curtis noted that such misuse "diminishes my opportunities to actually speak my truth" and compromises her reputation for integrity.

The Human Cost of Digital Deception

Beyond financial losses, AI-generated content threatens the fundamental trust structures that underpin modern media and entertainment. Detection tools are themselves powered by machine-learning models trained on a necessarily limited set of fake audio samples; when asked to flag content generated with a technique absent from that training data, they struggle to produce accurate results.
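That limitation can be made concrete with a deliberately simplified Python sketch. Everything here is hypothetical (the artifact names and threshold are illustrative, not drawn from any real detector): a classifier keyed to the artifact left by one generation technique catches samples from that technique but misses output from an unseen one.

```python
# Toy illustration of the out-of-distribution problem: a detector
# keyed to one generator's artifact misses a newer technique entirely.

def detect_fake(sample: dict) -> bool:
    """Flag a sample as synthetic if it shows the artifact the
    detector was trained on (here, a hypothetical 'spectral_gap')."""
    return sample.get("spectral_gap", 0.0) > 0.5

# Samples from the generation technique covered by the training data ...
seen = [{"spectral_gap": 0.9}, {"spectral_gap": 0.8}]
# ... and from a newer technique that leaves a different trace.
unseen = [{"phase_jitter": 0.9}, {"phase_jitter": 0.8}]

print(sum(detect_fake(s) for s in seen))    # prints 2 (both caught)
print(sum(detect_fake(s) for s in unseen))  # prints 0 (both missed)
```

Real detectors learn far richer features, but the failure mode is the same: accuracy collapses on generation techniques that sit outside the training distribution.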

Real-world consequences can be devastating. Baltimore experienced this firsthand when a school principal received death threats and required security protection after an offensive voice recording went viral on Twitter, appearing to capture him making racist and antisemitic remarks about students and staff in a private conversation. The recording was later determined to be AI-generated, but not before causing significant personal and professional damage. Industry experts continue to document and analyze these emerging threat patterns to better understand the evolving landscape of AI-powered deception.

How Massive Blue and Others Are Fighting AI Threats

Such urgent realities have driven innovation in protective technologies across the industry. The technology's low barrier to entry means AI-generated voices can essentially bypass outdated authentication systems, creating multiple risks, including data breaches, reputational damage, and financial fraud.

Massive Blue, a New York-based company founded in 2023, develops AI-powered threat detection solutions that address the types of concerns raised in the Theo Von-Vance conversation. The company's approach to combating AI-enabled threats illustrates one example of the technological responses emerging in an era where traditional security measures prove insufficient.

Founded by CEO Brian Haley with a focus on combating human trafficking through AI technology, Massive Blue has expanded its mission to include protecting artists and content creators from AI-enabled exploitation. The growing startup ecosystem continues to attract attention from investors and industry experts seeking innovative security solutions.

Massive Blue's Comprehensive Threat Detection Approach

Massive Blue's PRISM platform provides a comprehensive approach to AI threat detection that addresses many of the concerns raised in the Theo Von-Vance conversation. The company's technology can detect manipulated content across multiple modalities, including voice, image, and video deepfakes. According to company materials, PRISM has achieved takedown success rates of 99% on major platforms like Twitter, Instagram, and TikTok, and 100% on platforms that previously ignored all takedown requests, demonstrating significant operational effectiveness in content removal.

The platform's key capabilities include:

  1. AI Fusion Model - Assigns confidence scores to determine the likelihood of infringement and manages large-scale identification of infringement rings
  2. Multi-modal Detection - Identifies voice cloning, deepfake videos, and AI-generated images across platforms
  3. Real-time Monitoring - Continuously scans social media, websites, and online marketplaces for threats
  4. Automated Response - Provides takedown notices and legal workflow automation for rapid content removal
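Massive Blue has not published implementation details for PRISM, but the fusion-and-triage pattern the list describes can be sketched in a few lines of Python. Everything below is an illustrative assumption, not the company's actual model: the modality weights, the threshold, and the field names are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Detection:
    url: str
    voice_score: float   # per-modality detector outputs, 0..1
    image_score: float
    video_score: float

def fused_confidence(d: Detection, weights=(0.4, 0.3, 0.3)) -> float:
    """Combine per-modality scores into one infringement confidence."""
    return (weights[0] * d.voice_score
            + weights[1] * d.image_score
            + weights[2] * d.video_score)

def triage(detections, takedown_threshold=0.8):
    """Route high-confidence hits to automated takedown, the rest to review."""
    takedowns, review = [], []
    for d in detections:
        if fused_confidence(d) >= takedown_threshold:
            takedowns.append(d.url)
        else:
            review.append(d.url)
    return takedowns, review

hits = [
    Detection("https://example.com/clip1", 0.95, 0.90, 0.88),
    Detection("https://example.com/clip2", 0.40, 0.30, 0.20),
]
auto, manual = triage(hits)
print(auto)    # → ['https://example.com/clip1']
print(manual)  # → ['https://example.com/clip2']
```

In a production system the weights would be learned and the threshold tuned against false-takedown risk; the point here is only the shape of the pipeline: score each modality, fuse into one confidence value, then route by that confidence.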

Such capabilities prove particularly relevant given that online media was the most targeted sector in 2023, at 4.27%, followed by Professional Services (3.14%) and Healthcare (2.41%). The company's leadership team, including President Mike McGraw, brings extensive experience in AI and security technologies to these evolving challenges.

The Regulatory Puzzle

The federal legislation discussed by Theo Von and Vance reflects the complex balancing act facing policymakers. While federal coordination and investment in AI capabilities are essential, state-level protections for artists and creators serve vital functions. Tennessee's ELVIS Act serves as a model for this state-level approach, specifically prohibiting the use of AI to clone artists' voices without consent and establishing both civil and criminal penalties for violations.

Vance acknowledged this tension during the podcast: "I don't want California's progressive regulations to control artificial intelligence. I also agree with Marsha and Bill that you want to protect country artists in Nashville from having their crap stolen by AI."

The challenge lies in creating frameworks that foster innovation while protecting individual rights and creative property. After former President Biden's voice was cloned using AI in fake robocalls discouraging voting in the New Hampshire primary, the Federal Communications Commission unanimously outlawed the use of AI-generated voices in scam robocalls.

Companies focusing on ethical AI development and protection, like those whose mission and values prioritize responsible technology deployment, are working to bridge this gap between innovation and safety.

Technology Solutions Meet Growing Challenges

Theo Von and Vance's conversation ultimately highlighted a fundamental truth: the pace of technological advancement has outstripped both regulatory frameworks and public understanding. AI technology presents a double-edged reality where the same advances that enable unprecedented creativity and productivity also create new vectors for fraud, manipulation, and exploitation. The entertainment industry, in particular, faces an existential challenge where the line between authentic and artificial content continues to blur.

However, the solution may lie not in restricting AI development but in deploying equally sophisticated defensive technologies. Companies like Massive Blue are working to demonstrate that AI can be used to combat AI-enabled threats, creating detection systems that adapt alongside the threats they're designed to counter.

Some 69% of enterprises believe that AI is necessary for cybersecurity as threats increase in volume, suggesting that defensive AI applications will become increasingly critical. Emerging security platforms and solutions are being developed to meet these sophisticated threats with equally advanced protective measures.

Whether these defensive technologies prove effective at scale remains to be seen. Theo Von and Vance's conversation marks just the beginning of what promises to be an ongoing national dialogue about the role of AI in society and the measures needed to harness its benefits while mitigating its risks.

Deepfake technology becomes more sophisticated and accessible each day, narrowing the window for implementing effective protective measures. A conversation between a comedian and a vice president may have seemed casual, but it addressed issues that will define the relationship between technology and human creativity for decades to come.

By CEO Today, June 24, 2025
