I’ve been able to spot AI-generated content for over a decade. And now that ChatGPT has been so widespread for so long, almost everyone else can spot it too.
Especially if it contains a few chatbot-generated “human touch” tricks.
Look. I helped create this machine-mad-libs-monstrosity over 15 years ago when I co-invented a platform that wrote everything from millions of funny Yahoo Fantasy Football recaps every week to thousands of super-serious Associated Press financial articles every quarter.
I’ve been doing it so long, I can’t unsee it.
Now, you probably already know at least a few of these tricks, which I’m going to lay out from most obvious to least obvious. My goal here is to let you know that these are things that everyone can spot.
Oh, and if you’re like me and you happen to still be brain-thinking and hand-writing everything like a fool, here’s some stuff you’re gonna want to start avoiding.

First, My Take On Using ChatGPT
Go for it.
Yeah, I hate AI slop as much as you do, if not more. But I’m not going to judge someone for wanting to make their communications a little easier to generate.
I’ve already laid out a list of situations when using generative AI to write for you is problematic (e.g., don’t use it to professionally describe an experience you didn’t have), but those scenarios are all outliers. And I’ve already advised that you should not be AI’s editor; instead, use it to spot-check your original thoughts.
I’m also not someone who will correct someone’s grammar or phrasing or even spelling. First of all, my own isn’t perfect, sometimes on purpose.
More importantly, the writer might not be communicating in their first language, they may have a learning disability, or they might find communication stressful in general. Hell, they may not care enough about hurting my feelings in a comment section to take the time to re-read what they wrote before they hit send.
Like I said, no judgment. I’m just here to keep you from getting called out.

ChatGPT Loves Quotes and Bullets and Em Dashes
Let’s start with the most obvious mistakes. This is the only one where I’m going to blame the writer and not the chatbot.
Folks, you’re getting lazy with the copy and paste out of your chatbot and into whatever you’re working on. I’m starting to see a lot of this in emails and in comments on my posts.
First up, sentences or even entire paragraphs that aren’t being quoted from elsewhere but are nonetheless embedded in quotation marks.
“This is not how you communicate an original thought.”
I know Claude does this a lot. It uses quotes for no good reason when making suggestions.
It also loves to shove items into an outline that don’t belong in an outline. Now, here’s where I start to get more upset at the chatbots. I love outlines. I have entire documents that are just 12-page outlines:

- Bulleted items are always a useful crutch for communicating multiple ideas quickly
- But not every idea needs to be broken out into a bulleted list
- And when you do it for no good reason, it’s obvious to the reader
And then the final obvious trick really gets my rage up—I love the em dash.
I think in meta, and I’m constantly using commas, em dashes, and short paragraphs to try to compress my active brain into something the reader can follow along with. Em dashes are for addressing adjacent thoughts—not continuous thoughts, that’s where you use a comma.
Note to my editor: That last sentence is written that way on purpose. It might even be grammatically correct. I don’t even know anymore.

It’s Not Just This, It’s That
Before mobile maps, my Dad used to give me driving directions and always slip in advice like, “If you get to the 7-Eleven, you’ve gone too far.”
And I was always like, “Yo Dad, can you just give me the right directions?”
ChatGPT and the other chatbots love to tell you what things are not, for no other reason than to, ironically, sound less like ChatGPT.
“It’s not just a football game, it’s a battle to determine which team has spent their money the most efficiently.”
“She doesn’t just paint pictures, she infuses the canvas with love.”
“You’re not reading this post, you’re being educated and entertained.”
Whatever. So, quick check: if the left side is unnecessary, or the right side is silly, or both, that’s a pretty solid giveaway.
But, like, three or four times over the last two weeks, I’ve felt compelled to change something I’ve written, not because it was unnecessary or silly, I do plenty of that, but because what I needed to say came off sounding like “not this but that.”
Shoot. Here’s one from yesterday. “I’m not upset about this. This isn’t a rant. It’s just simple economics.”
It’s an easy trap to fall into, and it’s irritating that it’s bumping up against actual technique. But I’m not going to change what I write. To me, that’s the worst part about generative AI: not that humans won’t be able to recognize ChatGPT, but that they will recognize it and they’ll accept it.
There. I think I just did it again.

The Rule of Threes
This is where I really start to get furious. Man, I invented the rule of threes. And also exaggerated claims. Like I also invented the question mark.
The rule of threes is when you use three items to make a point in a sentence, or when you use three bulleted items in a list, or when you use three sentences in a paragraph.
Like that.
I love the rule of threes, especially comedically, because it gives me room for setup, then I can hit a timing beat, then I can subvert the reader’s expectations in a way that might make them laugh.
I’m not taking this out of my toolkit, and you shouldn’t either. Like all of these tricks, it’s what the chatbots use to try to be more human. Just make sure your text isn’t overusing the rule.

No Chaos
A good way to make sure your writing isn’t mistaken for ChatGPT is banana clown hot pot throw pillow.
Look, a lot of people will tell you that a good way to spot ChatGPT is if there is nothing personal in the text (Dad, directions, 7-Eleven) but I hope I’m not the first to tell you that all the chatbots are happy to make up some personal shit without you even asking for it.
But… all human writing has some butterfly effect to it. If there is no personality, no chaos in the writing—and if you read enough you can sense even the slightest hint of randomness that occurs when the synapses are firing in the actual human brain—then you’ve got yourself a chatbot author there, my friend.
In fact, the sterility of the writing is the trick that’s probably the least obvious, because generative AI is great, perfect even, at getting information across efficiently. That’s what it’s good at, and what it most often should be used for, no matter how many “human touches” it can muster up.

It Doesn’t… Say Anything
This column was written to expose the five most obvious “human touch” tricks ChatGPT uses that most humans have mostly figured out. And I wanted to add my own experience and opinion—as someone who stands on both sides of the AI battle line—to give you a sense of why this is important, for both proponents and opponents of AI-assisted communication.
I hope I’ve done that. And I hope I’ve pushed the argument forward.
A bot-written post would have skipped that second part, for the most part. Because it has no experience or opinions. It has chunks and embeddings of what’s already been said.
There are plenty of times in this chaotic and busy world when using ChatGPT to get some communication slapped together makes perfect sense. But unless you’re comfortable with the receiver of said communication hearing “Hey. I put zero thought into this,” then you’re going to want to take some care to at least cover your tracks.
And it’s only going to get harder to do that. This was a difficult text for me to write, because I constantly had to check myself to make sure I wasn’t leaning too hard into my own habits—which can be just as lazy as firing up ChatGPT. And in that, I probably took away a little something from how this column hits you.
AI is an evolution of existing technology and processing, and as I’ve said many times, it’s a lot like spreadsheets. Being totally against generative AI to do communication is like being totally against spreadsheets to do math.
And while I’ll always commit to never use it for a column like this (or any of the other reasons I put forth in the column I linked at the top), I’m not going to judge, or ban, or boycott the technology. That would be shortsighted, and I’m still like 20 years away from telling all technology to eff off.
I just implore you, be human. Always use as much care as you can whenever you communicate. You’ll be better off for it, and so will the humans you’re communicating with.