He’s great, except for the stalking conviction

On Friday the 13th, a friend of mine casually typed his name into ChatGPT and was shocked to find that:

In 2014 [he] was arrested after police said he was allegedly stalking a young girl he had encountered in a neighborhood park. Prosecutors said he followed the girl home and attempted to contact her repeatedly.

The fiction continued, claiming he’d been jailed for 90 days and that the case generated significant media attention because of his previous career as a “respectable but not highly prominent” broadcast journalist (that’s the only part of the response that was true).

My friend’s parting words in his distraught text to me: “Isn’t it nauseating. How can I trust anything I get from AI now?”

I validated his distress – as I would anyone’s after enduring flimsy AI responses – and will now try to answer his question.

How to radically reduce the risk of AI inaccuracies when researching facts

  1. Pay for a better model. My friend was using the $8/month plan, which significantly throttles computing power. If you’re on a $20/month plan, most queries will get more computing power and therefore more reliable responses, especially when you:
  2. Toggle the model to its more accurate mode. ChatGPT calls this “Thinking”; Claude’s is “Think Longer.” I leave this setting on for every query and pay extra for the increased usage.
  3. Always include “cite your work” in your initial prompt. Then make sure you see those reassuring little rectangles embedded in the answer that show you where the info comes from. Once my friend challenged the chatbot for its sources, it immediately backed off and apologized for the inaccuracy.

I can’t guarantee that following these three steps will prevent all AI errors, but I can’t remember the last time I caught AI “lying to me” while following them. In fact, far more often I’ve been blown away by how much more robust the results are than a traditional Google search. That’s especially true when I’m trying to surface an obscure fact fast.

One anecdote sums up why I think it’s still worth the (rapidly diminishing) risk of using AI chatbots for research:

I was trying to determine the reach of a media placement in Katie Couric’s email newsletter, which doesn’t publish its subscriber numbers. In less than 30 seconds, ChatGPT reported she has more than a million subscribers, citing that figure from:

  • A case study on the website of the firm that does the design for Katie Couric Media
  • The personal LinkedIn page for the newsletter’s editorial director (whose name I didn’t know)

So yes, AI can still get it spectacularly wrong. But with the right settings and habits, it gets it remarkably right, surfacing details no Google search would uncover in under a minute.

My friend may need a little more time before he’s ready to hear that. But you don’t have to wait.

P.S. Literally while I’ve been writing this newsletter, Claude Cowork organized my Downloads folder. The advent of AI agents doesn’t erase risks, but it does remind me of the upside of learning these tools well.

This article was originally published on March 18, 2026
