Recently, a writing community put out an internal survey, and two of the questions stood out.
The first was: Do you use AI? If so, how? ... that's a pretty legitimate question for a writers' group.
The second was: Do you use Grammarly Pro? ... which was rather interesting. It got me thinking about my own answers to both questions and, more importantly, about how I'd answer in the negative.
Grammarly Pro¹ is a writing aid, one of a large number of similar writing aids with a lot of features. It can provide prompts and, like many AIs, suggest text for you to varying degrees. I use Grammarly Pro, although I only found out about the prompts feature today, in the process of trying to verify my Grammarly subscription details. For me, it mostly nags me about being wordy, or about fixing typos and duplicated words: I literally use it as a grammar checker that relies on better mechanisms than pure heuristics.
Therefore, my answer to the former question would have to be yes, by definition: Grammarly Pro is "an AI" and if I used nothing else at all, I'd still be using AI.
But that's not the case: I do use other AI tools, even though the definition of "AI" is flexible. In the common jargon today, it means "an LLM" like Claude or ChatGPT or Gemini, but I'd argue that even the simpler, heuristic versions of Grammarly are "AI" - mathematical models that imitate human processes - as are spellcheckers, or autocommit (autocorrect, machine! I meant autocommit!), speech transcription, or nearly any other aid for writing.
It goes further than that: I use a search engine that now embeds AI results to summarize the likely results of my search. I don't always use that summary - I usually find it insufficient to highlight the link I actually want, which tends to sit midway down the first or second page - but it's there, and it can certainly be useful.
This Is Where We Dig In
So the question becomes, for me: how could anyone honestly answer "Do you use AI?" with "no"? Would you be stuck using vim or pico, maybe WordStar? What search engines would you be forced to use? And what advantage would you gain by going to the effort of keeping AI work product out of your work, especially when even casual AI users would run circles around you - and those who leverage AI ethically and wisely run circles around the casual users?
Would you avoid applications written with AI?² There are certainly "tells" for AI-written applications and text: AI text often feels horribly generic and lacks the specific experience markers a human would write, and AI applications tend to have a very rapid release cycle that suggests a human told the machine to just do it somehow, tested manually, found failures, and ran the cycle again.
But a human can write generically - good journalistic text often does look like an AI was trying to be balanced, and a human can undergo rapid release cycles. (If you're one of these humans, please consider using tests instead of releases, but you do you.)
What's the advantage of avoiding AI, beyond knowing that you did it all yourself? And if you were going to avoid AI for real, how would you do it?
The draft of this article used no AI at all, but it was written in Obsidian, where an AI created a graph of interrelated concepts, and Grammarly Pro did indeed highlight quite a few sentences, especially resenting the use of "actually". That was all before the draft got sent to one of the more conversational AIs for consideration. No AI complained about my use of "humourous" - I had to remove that myself - and I manually typed the "autocorrect" joke, a joke my wife and children are all tired of. No spellchecker factored in until this paragraph, where I misspelled a word. When the draft was sent to an "actual AI," it mostly complained that one of the middle paragraphs wandered and that the piece took too long to get to the "dig in" section, which is why that section now has a header.
---
1. This is not intended to be an endorsement of Grammarly Pro; it's not the only product in its space worth using. If you want a writing aid, there's a lot of tooling available, such as ProWritingAid, LanguageTool, and others, all of which say explicitly that they use AI.
2. I saw a comment on a Reddit board today saying that someone would actively avoid a "vibe coded" application because it stole from millions of ethical coders who'd put their code out there for the AIs to train on. The assertion was quite interesting: are we not allowed to learn from each other? If those coders didn't want their code seen, why did they publish it? (This assumes open source code; if the AIs train on closed source, that's obviously an issue.) The zeitgeist against "all AI" is going to be an interesting replay of the Luddite resistance, and we now have to pray that nobody takes inspiration from the Unabomber.