Travis Schreiber on the Fight Against AI-Generated Reputation Damage

Reading Time: 2 minutes
Published July 29, 2025 6:27 AM PDT


In a world where AI generates your first impression, reputation has become a machine-readable asset, and a fragile one. Travis Schreiber, Director of Operations at Erase and founder of Oswald23, knows this better than most.

At Erase, Schreiber leads one of the most respected teams in online reputation management. But lately, their job has taken on a new layer of complexity: cleaning up after AI misquotes, mislabels, and misrepresents real people and businesses.

“We used to fight Google autocomplete,” says Schreiber. “Now we’re fighting hallucinated AI summaries that invent facts out of Reddit threads.”

When AI Gets It Wrong

AI-generated summaries now sit above search results, in voice assistants, and inside apps that people use to decide who to hire, trust, or ignore. And when those summaries are inaccurate, the damage can be instant and sticky.

Schreiber points to a recent Erase client, a small HVAC business with a name similar to a national chain. “The AI kept blending them together,” he explains. “It cited fake complaints, listed the wrong address, and even misidentified the owner. None of it was true, but that was the top answer.”

These AI mistakes don’t just hurt feelings. They hurt sales, credibility, and even employee recruitment. Worse, AI overviews often discourage further clicks, so users never see the correction, even if it’s sitting right below.

The New Front Line of Reputation

Schreiber says this is a shift from reputation management to “reputation architecture.” It’s no longer just about ranking well in search. It’s about making sure the content AI sees and summarizes is accurate, structured, and recent.

At Erase, that means updating bios across platforms, pushing out high-quality long-form content with structured Q&A, and aggressively monitoring AI-generated results.
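The “structured Q&A” content Schreiber describes is commonly published as schema.org FAQPage markup, which crawlers and AI summarizers can parse unambiguously. Below is a minimal sketch of generating that markup; the business details (name, address, owner) are invented for illustration and are not from the article.

```python
import json

# Hypothetical structured Q&A for a small business, expressed as
# schema.org FAQPage JSON-LD. Every fact here is a placeholder.
faq = {
    "@context": "https://schema.org",
    "@type": "FAQPage",
    "mainEntity": [
        {
            "@type": "Question",
            "name": "Who owns Acme Heating & Cooling?",
            "acceptedAnswer": {
                "@type": "Answer",
                "text": "Acme Heating & Cooling is independently owned by "
                        "Jane Doe and is not affiliated with any national chain.",
            },
        },
        {
            "@type": "Question",
            "name": "Where is Acme Heating & Cooling located?",
            "acceptedAnswer": {
                "@type": "Answer",
                "text": "Our only office is at 123 Example St., Springfield.",
            },
        },
    ],
}

# The serialized JSON would be embedded on the site inside a
# <script type="application/ld+json"> tag.
print(json.dumps(faq, indent=2))
```

The point of the structure is that each question and answer is an explicit, labeled field rather than free text an AI model has to guess its way through.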

“We search our clients on every AI platform with browsing enabled,” Schreiber says. “We look at the summaries, the bios, even the autofill questions. Then we reverse engineer what’s shaping those answers.”

A System That’s Not Ready

The problem is bigger than just one tool. ChatGPT, Gemini, Meta AI, and others all generate answers from whatever they can find. And that includes outdated blogs, forum comments, fake press releases, or even satire.

The regulatory world is struggling to keep up. While the FTC has banned fake testimonials and the EU’s DSA requires transparency, there’s still no accountability when an AI summary misrepresents someone using public (but wrong) data.

According to Schreiber, “It’s not libel. It’s not fraud. It’s just… inaccurate. And nobody’s responsible. That’s what makes it dangerous.”

Winning the Visibility War

So what’s the solution? For Schreiber and his team at Erase, it starts with control. Businesses need to claim their narrative before AI fills in the blanks.

Here’s what they recommend:

  • Audit your online presence regularly, especially on AI-enabled search tools.
  • Update your bios, About pages, and key content to ensure accuracy and consistency.
  • Use natural language in your content so AI models can understand and summarize it correctly.
  • Respond quickly to misinformation, whether it's from a bot or a real person.
  • Avoid shady shortcuts like fake reviews or GPT-spun articles, which often backfire under new FTC rules.

Schreiber puts it simply: “The AI doesn’t care who’s right. It cares who’s visible. So make the right version of you the most visible thing out there.”

By Jacob Mallinder, July 29, 2025
