What content creators get wrong about ChatGPT

By now, you have read enough sweeping claims and incisive takes on generative AI to last a lifetime of hype cycles. But here’s one more, from the perspective of a boots-on-the-ground content strategist who has been using ChatGPT since it first became available. I’m not explaining how it can be used by content creators but how it should be used: not as a generator of finished content coaxed out through clever prompting, but as an idea generator and digital collaborator.

How most people use ChatGPT: Generating content through increasingly precise prompts

Much of the discussion about how best to use ChatGPT for content creation has revolved around crafting better prompts. The notion behind this mindset is that if you feed the right combination of words and terms into the prompt, it will spit out exactly what you need with minimal reworking.

Prompt-crafting is a compelling idea because it has the veneer of being the intuitive solution to the generative-AI-kind-of-sucks problem. If the prompt field is the only way to interface with the AI, it stands to reason that interfacing with it better will make the collaboration better; if we just learn to say the right terms and phrases, then we can, as we say in the biz, “unlock more value” from the tool. Sure, the prompt is a big part of using a large language model, and there are best practices one can employ to get more worthwhile results (don’t skimp on the adjectives!). But the pill we all must swallow is that no combination of magic words will take ChatGPT from mediocre and uninspired to a writer on par with a professional. A content writer intricately crafting baroque, multipart prompts in the hopes of getting a good result is like using gene sequencing to make topiary: it might work, but diving in with a pair of garden shears will save you a headache.

How we should use it: Generating ideas and providing alternative perspectives

Generative AI’s ability to seemingly create on its own has been a smokescreen for its actual utility: externalizing the rote and basic elements of content development to increase the speed and efficiency of creators. ChatGPT is really bad at taking a project from step 2 to step 8, but it’s exceptionally good at going from step 2 to step 3 with minimal investment of time or cognitive load.

What ChatGPT lacks in depth of writing, it more than makes up for in breadth. Need a social post? Why not ask ChatGPT for an idea? Or 10? Or 50? And before you know it, the ideas are there, right in front of you, ready to be toyed with and molded to suit your needs.

Synthesizing five workable posts from a dozen or so mediocre ones is faster and takes less attention than grinding out five bespoke posts yourself, or speed-writing a handful and picking the best.

For example: when I prompted ChatGPT to produce “brief, professional LinkedIn posts” promoting this blog, I received a number of simple but straightforward responses, each focusing on a different angle of the subject matter. No individual result was satisfactory, but combining a handful and doing some light editing was trivial, and, more importantly, fast.

The result: a couple of posts, decent but a little dry, plus some title ideas thrown in as a bonus. Given these results, I’d put together something like: “In my new blog post, I discuss an alternative to the way most people use ChatGPT for content creation. In fixating on crafting the perfect prompt, we miss out on AI’s ability to act as a collaborator in the creative process.”

Even with more substantial projects, ChatGPT can benefit a writer the same way, offering a virtually unlimited source of perspectives and alternatives that exist outside their own mind. Even seeing different ways a sentence could be worded can get the brain going and kick-start a new line of thinking. Inputting “10 different ways of saying X” could be all it takes to get yourself out of a rut.
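If you find yourself asking for variants like this often, it can be worth templating the request. Here’s a minimal sketch of that pattern; the helper name and prompt wording are my own, not from any particular tool or API, and the resulting string would simply be pasted into ChatGPT (or sent through whatever API client you use):

```python
def variant_prompt(phrase: str, n: int = 10) -> str:
    """Build a simple ideation prompt asking for n rewordings of a phrase.

    The exact wording is illustrative; any phrasing that clearly states
    the task and the desired count works just as well.
    """
    return (
        f"Give me {n} different ways of saying the following, "
        f"as a numbered list, varying tone and emphasis:\n\n{phrase}"
    )

# Paste the output into ChatGPT and skim the numbered list for a
# phrasing that sparks something.
print(variant_prompt("Our new feature saves you time"))
```

The point isn’t the code itself but the habit: keep the request short, state the count, and let the volume of alternatives do the work.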

No one who practices a creative endeavor needs to be told how valuable external perspectives can be. And nowadays, the massive post-pandemic push to return workers to the office underscores just how important collaboration is for business. Sure, collaborating with coworkers is a different animal than collaborating with a glorified autocomplete, but the soul of externalized ideation is still there.

It’s safe to say that human collaboration is likely more effective than AI participation, but it isn’t always feasible. Generative AI gives a solitary worker some semblance of collaboration when coordinating with others isn’t a viable option within the constraints of the day’s schedule or the project’s deadline.

Of course, there are risks

It wouldn’t be a generative AI article without a word on the risks and pitfalls. Yes, many are, and will continue, waxing poetic about the core of ingenuity we are excising from our human spirits by incorporating yet another piece of godless technology into the way we live and work. And yes, some skills that were previously important to a content writer may become less valuable as time goes on. But the fact of the matter is that generative AI tools are here, right now, for anyone to use.

The real risks, as I see them, are tangential to the tools’ utility. A new Stanford study made waves by providing empirical data for what some have been saying for a while now: ChatGPT’s responses are getting worse. But, as tech writer John Herrman notes in a recent New York Magazine article, that could just be a sign that the tool is nearing the end of a world-historic tech demo. There’s no clear path forward for any of this, and while it may seem we’re all being drawn along by the inertia of enterprise, it’s crucial to understand these tools now and determine for ourselves how we use them, before it’s determined for us.
