“It has not escaped our notice,” wrote James Watson and Francis Crick in one of the most famous conclusions ever to appear in the scientific literature, “that the specific pairing we have postulated immediately suggests a possible copying mechanism for the genetic material.”
This line is famous precisely for its understated style and calculated restraint: Watson and Crick announce one of the biggest discoveries in biology with typical British reserve.
Now imagine the year is 2025, and the duo turns to ChatGPT:
Prompt:
“Please rewrite this conclusion to be clearer and more direct: ‘It has not escaped our notice that the specific pairing we have postulated immediately suggests a possible copying mechanism for the genetic material.’”
ChatGPT output:
“The base-pairing structure we have described provides a clear mechanism for how DNA might replicate itself, which has significant implications for understanding genetic inheritance.”
ChatGPT’s sentence falls flat. The deliberate understatement—“has not escaped our notice”—is gone, as is the carefully chosen passive construction that creates suspense. The measured tone of Watson and Crick’s original statement, along with the slightly arch academic voice reminiscent of the wood paneling in the Cambridge pub where they first announced their discovery, lets readers discover the importance themselves. Sure, the ChatGPT version is arguably more “readable,” but at the expense of all the authority and drama of the original. It’s like turning a perfectly timed pause in a speech into a simple declarative statement.
Voice—the alchemy of words, tone, syntax, and point of view that distinguishes one writer’s creation from another—does real work in scientific communication. Voice helps readers follow the thread of an argument, sense where certainty ends and speculation begins, and grasp why a writer thinks a finding matters. In this way, voice isn’t just another writing technique but the imprint of the human mind on the page. When artificial intelligence sacrifices voice on the altar of clarity, something essential is lost. At the same time, excessive voice, especially in scientific writing, can get in the way of the clear expression of complicated ideas. No one would argue that scientific literature, as a genre, is a paragon of clarity. So how can scientists leverage AI’s workmanlike prose while preserving enough of their voice to make the writing recognizably their own?
Large language models (LLMs) are, at their core, enormously complicated autocomplete systems tuned to please the maximum number of users. As such, they default by design to middle-of-the-road word choices and unimaginative syntax. But their training data are so extensive that they can be steered in wildly different stylistic directions. Case in point—an early viral post instructing ChatGPT to write a biblical verse in the style of the King James Bible that explains how to remove a peanut butter sandwich from a VCR. As that output showed, taking some time up front to instruct an LLM how to write can pay off later.
Here’s one strategy to preserve your voice when working with AI, which I’ll demonstrate by walking through a little experiment. I gave the AI the following task: Write the first paragraph of a column called “Does AI make scientists worse writers?” that examines whether AI assistance could erode core writing and thinking skills, being sure to reference the introduction of the calculator as an analogy. Then, I asked the AI (Claude 3.5 Sonnet in this case) to do the task twice. In one chat, I left the AI to write in its default style; in a different chat, I carefully specified the writing style I wanted (based on an analysis of this column’s voice; see “How to create custom style instructions in Claude” below for full methods). The difference in voice is striking:

No style guidance
As artificial intelligence tools like ChatGPT become ubiquitous in academic settings, a growing chorus of writing instructors and senior researchers are raising alarm about their impact on scientific communication skills. The parallel to calculators’ effect on mathematical thinking is striking—while these tools undeniably boost productivity, they may also be eroding scientists’ ability to develop and articulate complex ideas independently. Early observations from writing centers at major research universities suggest that manuscripts drafted with AI assistance often lack the logical flow and deep analytical reasoning that characterize strong scientific writing. Yet others argue that AI writing tools, when used thoughtfully, could actually enhance scientists’ communication abilities by serving as collaborative thought partners and freeing up cognitive resources for higher-level synthesis. As the scientific community grapples with this technological shift, the key question becomes not whether to use AI writing assistance, but how to harness it while preserving the essential skills that underpin scientific discourse.
Custom style guidance
The first time I reviewed a manuscript clearly written with AI assistance, the telltale signs jumped out immediately—flawless grammar paired with strangely generic phrasing, technical precision that somehow felt hollow. As both a journal editor and writing instructor, I’ve watched AI tools transform scientific writing from a craft honed through practice into something more automated. Yet the implications run deeper than just smoother prose. When we outsource the writing process, do we also outsource the thinking process that traditionally accompanied it? Just as calculators reshaped how students develop mathematical intuition, AI writing assistants may be fundamentally changing how young scientists learn to structure arguments, synthesize ideas, and develop their scholarly voice.
The first reads like an academic review—comprehensive but detached. While technically sound, it lacks the immediacy that draws readers in. The second version better follows the column’s style by anchoring the discussion in concrete experience: a journal editor encountering AI-written papers. The personal framing (“The first time I reviewed …”) creates a natural way to explore the deeper implications. Rather than cataloging concerns, it leads readers through the thought process: from surface observations about language to questions about cognition. The calculator analogy appears in both versions, but in the second it emerges naturally from the narrative rather than feeling dropped in. The first version is indistinguishable from boilerplate text generated by AI. The second feels distinctly human, or at least a lot closer.
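For readers who want to reproduce this comparison outside the chat interface, the same experiment can be scripted. Below is a minimal sketch using Anthropic’s Python SDK; the model string, token limit, and the abbreviated style description are my assumptions for illustration, not the exact settings behind the versions above.

```python
import anthropic

client = anthropic.Anthropic()  # expects ANTHROPIC_API_KEY in your environment

# Hypothetical style notes distilled from an analysis of past columns;
# substitute a description of your own voice.
STYLE = (
    "Write in first person, anchored in concrete personal experience. "
    "Favor measured understatement over sweeping claims, and let "
    "questions carry the argument rather than flat declarations."
)

TASK = (
    "Write the first paragraph of a column called 'Does AI make "
    "scientists worse writers?' that examines whether AI assistance "
    "could erode core writing and thinking skills, being sure to "
    "reference the introduction of the calculator as an analogy."
)

def draft(style=None):
    """Request one draft, optionally steered by style instructions."""
    kwargs = {
        "model": "claude-3-5-sonnet-20241022",  # assumption: any current Claude model works
        "max_tokens": 600,
        "messages": [{"role": "user", "content": TASK}],
    }
    if style:
        kwargs["system"] = style  # custom style instructions ride along as the system prompt
    return client.messages.create(**kwargs).content[0].text

default_version = draft()      # the model's house style
styled_version = draft(STYLE)  # steered toward your voice
```

Running both calls side by side makes the flattening easy to see; the only difference between the two drafts is the system prompt.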
To train the model in your own voice, consider excerpts of strong passages from your grants and papers, so long as you have permission to input them into the AI system you’re using. A note of caution: Although this example demonstrates how to preserve voice, using simple prompts to generate full paragraphs from scratch (complete with hallucinated experiences of me as a journal editor) isn’t how I recommend using AI. This use runs counter to my core advice to tap AI as a collaborative partner or to intervene in discrete phases of the writing process, such as by suggesting structure or offering feedback on a draft. But the experiment shows how custom instructions can help AI mimic rather than flatten your preferred writing style.

Beyond custom instructions, additional strategies can help preserve voice while working with AI. Rather than using AI as a writer, use it as an analyzer. Have it flag places where clarity could be improved, and ask the AI to explain why—then fix the issue your own way. When you do need AI to generate text, make targeted requests. I’ve found that vague requests such as “improve” or “clarify” often lead to flattening; instead, frame your prompts around maintaining specific elements of style. Ask the AI to “suggest alternative phrasings while maintaining my measured tone” or “help me make this more concise without losing the deliberate understatement.” Finally, when using AI-generated text, systematically revise to restore voice. Reintroduce strategic tentativeness where appropriate (“we suggest” rather than “this shows”), restore personal constructions that serve a purpose (“we reasoned” rather than “it was determined”), and make sure your draft retains the stance and meaning you intended.
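The analyzer approach in particular lends itself to a reusable helper. Here is one possible sketch, again with Anthropic’s Python SDK; the prompt wording and model string are my own assumptions, not a canonical recipe.

```python
import anthropic

client = anthropic.Anthropic()  # expects ANTHROPIC_API_KEY in your environment

# Editor-mode prompt: diagnose and explain, but do not rewrite.
ANALYZER_PROMPT = """You are acting as an editor, not a writer.
Do not rewrite the passage. List the places where clarity could be
improved, and for each one explain why it reads as unclear.
Leave the fixes to me.

Passage:
{draft}
"""

def critique(draft):
    """Return a diagnosis of a draft without generating replacement text."""
    response = client.messages.create(
        model="claude-3-5-sonnet-20241022",  # assumption
        max_tokens=800,
        messages=[{"role": "user", "content": ANALYZER_PROMPT.format(draft=draft)}],
    )
    return response.content[0].text

print(critique("Our results clearly demonstrate a paradigm-shifting advance..."))
```

Because the model explains rather than rewrites, the fixes, and the voice they carry, remain yours.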
Voice in scientific writing isn’t just a matter of literary flourish—it’s a way to transmit the writer’s intention to the reader. When Watson and Crick wrote “it has not escaped our notice,” they weren’t just being stylish; they were using voice to signal the weight of their discovery while maintaining scientific restraint. It might even have been, in today’s parlance, a sort of humblebrag about succeeding where others had failed. In an age when AI makes grammatically perfect prose trivially available, these subtle markers of human judgment may be what ultimately distinguish the most effective papers from the merely competent.