Last week I asked if you agreed with the reader who feels I’m too bullish on AI and PR.
The replies that came in were extremely helpful. I’m not going to change WHAT I say in these emails, but the feedback is helping me evolve HOW I say it.
I’m sharing this next fact only to satisfy curiosity: 80 percent of the replies felt I’m striking the right balance, while 20 percent felt I’m overly optimistic. That skew doesn’t prove me “right” at all – it’s only logical that more people would choose to read the ideas of someone they agree with. Kudos to the 20 percent who stick around anyway!
The weird thing was: those who felt I’m too bullish cited arguments to prove their point . . . arguments I actually agree with!
Depending on your perspective, they either haven’t been paying attention to what I’m actually writing, or I haven’t done a good enough job being precise in my recommendations and predictions.
If you’re dubious about AI, see if these points resonate with you. If you’re already on board, you might share the same blind spots I do about how you communicate with your bosses or teams.
Generative AI is not good enough yet to write media pitches
I’ve made this point repeatedly, but I guess it’s natural for subscribers to think, “Michael writes a lot about pitching, and he writes a lot about AI, so he must think AI is good for pitching.” Not true. If you’re an experienced media pitcher and you ask ChatGPT to write a pitch for you, you’ll find it usually takes longer to fix the draft than it would have taken to write the pitch yourself. If you’re not an experienced media pitcher, you shouldn’t use ChatGPT, because you might not catch where it’s failing you, and it’s almost surely going to fail you.
AI’s first response in any chat is almost never good enough
Those who responded negatively shared arguments that assume other PR pros are shipping the initial output without editing and fact-checking. One took pains to emphasize that she edited and fact-checked after experimenting with AI, as if this were rare. None of the PR pros whose work informs my perspective on AI ship the first response. They have multiple back-and-forths with the bot, then edit the final version themselves. I thought this was so obvious that I guess I haven’t emphasized it enough.
There are many use cases that totally avoid the most commonly cited AI risks
The “negative” emailers (remember, I was ASKING for criticism) also commonly cited risks such as hallucinations, plagiarism and journalists warning against AI-generated content. A safe way to start experimenting with AI is to use it purely as a creativity and strategy partner and never produce any public-facing content.
Here’s a post I wrote about how to safely get your feet wet.
Thanks again for all the feedback, ESPECIALLY from those who don’t agree!
This article was originally published on May 8, 2024