AI-generated, blueprint-like illustration of a classroom.
Tough questions: Determining how best to use AI as a learning tool may ultimately require a fundamental rethinking of pedagogical approaches.
Illustration by Rebecca Horne / Adobe Firefly

Many students want to learn to use artificial intelligence responsibly. But their professors are struggling to meet that need.

Effectively teaching students how to employ AI in their writing assignments requires clear guidelines—and detailed, case-specific examples.

“Can we really not detect AI use? What counts as plagiarism? What should I have said in my syllabus?” This panicked email from a colleague, about a suspected case of artificial-intelligence use by a student, captures the uncertainty many professors face as they grapple with how this technology is disrupting traditional writing assignments in their classrooms. Two years into the generative AI era, data suggest that students are already using AI widely, but professors are still trying to cobble together AI policies with only scattershot guidance from their institutions. Thoughtful adoption is the exception, not the rule—one of the largest AI-focused national surveys of postsecondary instructors to date found that just 14 percent feel confident in their ability to use generative AI in the classroom.

Students are feeling that gap. In a survey conducted at the University of California, Davis late last year, for instance, 70 percent of undergraduates reported wanting more instruction in AI use, yet only 33 percent of instructors had incorporated AI into their courses. I observe a similar pattern in my own surveys of hundreds of workshop participants and students; some 40 to 60 percent report they use AI tools weekly for writing tasks, yet many feel unsure if they are using them effectively. As one of my students put it: “These tools are around and here to stay, so instructors may as well get on board and help us determine how to use them in a reliable and appropriate way.”

A major fear about incorporating AI into the classroom is that its use can interfere with learning, an intuition supported by research examining how technologies shape cognition. Studies have shown that pilots who rely on autopilot struggle during emergencies and that GPS users lose navigation skills. A more recent large-scale randomized controlled trial compared high school math students in Turkey who used unrestricted ChatGPT, AI-based tutoring or only textbooks. Although the AI groups did far better on practice problems (48 percent better for unrestricted AI and 127 percent better for AI tutoring), subsequent testing without AI was telling: The AI-tutored group’s advantage completely disappeared, and the unrestricted ChatGPT group performed 17 percent worse than controls.

The upshot is that AI is probably no different from previous cognitive-assistive technologies: It may enhance short-term performance, but at the cost of longer-term learning—especially when adopted without guidance or guardrails.

Despite these risks, it’s tough to argue for an outright ban. The first reason is pragmatic: Given that AI use is difficult to detect reliably (for now), banning it drives usage underground. The Turkish study shows it’s clearly better to use AI as a tutor than as a cheating tool (Anthropic is banking on this approach with its new Claude for Education product). Second, students need training in how to use these tools effectively in their future careers. The truth is that students may not need to master certain skills that previous generations did, simply because AI makes them obsolete. Unfortunately, we don’t yet know which skills those are, particularly in scientific writing. I’m not saying that future career needs should be the primary driver of curriculum, but they’re worth considering. Professors, who have more expertise, have a responsibility to experiment with AI and guide students on which uses help or harm learning.

Admittedly, this experimentation can take a lot of work. In my graduate-level scientific writing course, I do allow AI use, but with strategic restrictions. I have a course AI policy stated in the syllabus (mine is reprinted below, or see New York University’s FAQ About Teaching and AI for additional sample syllabus statements). The policy explains the course’s approach to AI use, transparency requirements and data privacy protocols. Beyond a course-wide policy, I’ve found that assignment-specific guidance is also important. It helps students navigate AI use at the moment of decision so they don’t have to make guesses from a brief syllabus statement they (maybe) read weeks earlier.

The success of my policy requires buy-in from students—after all, it’s difficult to detect unauthorized uses. (In the spirit of transparency, I also reveal to students how I have used or will use AI while preparing teaching materials or evaluating assignments.) To help students understand how unfettered AI use can impair learning, I use a simple analogy: If you want to lift a heavy barbell, you can either bench press it or use a forklift. But if you want to build physical strength, you must lift the barbell yourself. Your brain is no different. If you use AI to automate the task you’re trying to learn, you won’t learn it. And to extend the metaphor—why would you invest in a gym membership only to drive a forklift around? Students generally appreciate this straightforward discussion about the relationship between effort, skill development and the purpose of education.

As an example, when students write their specific aims pages, I allow AI to check grammar, analyze the structure of other aims pages and evaluate adherence to items on a provided checklist; I prohibit students from using AI to generate research ideas, suggest alternative hypotheses or construct experimental designs (see footnote below for full text), explaining that these are core thinking skills they need to be developing. Though AI may be a worthy co-scientist one day, I believe that to fully take advantage of such tools, students still need to build their own expertise independently—and I don’t have a pedagogical means to do that yet without writing assignments.

When I reflect on my colleague’s panicked email about suspected AI use, I realize their questions—about detection, plagiarism and syllabus language—reflect a deeper anxiety about our changing educational landscape. The immediate impulse is to focus on detection and punishment. But rather than resisting the technology entirely or surrendering to it completely, I’ve found that, at the graduate level at least, students are open to guidance on how these tools affect learning.

Thoughtful guidance should include clearly communicating which thinking skills remain essential in an AI-assisted world and providing assignments that meaningfully integrate these tools rather than pretending they don’t exist. Addressing these issues head-on can inspire a good-faith discussion of how AI can serve, rather than subvert, the fundamental goals of education.

That said, I have to sound a note of concern here, lest these recommendations come off as hopelessly naive. The truth is that I teach motivated graduate students who mostly experienced college without the crutch of generative AI. At other institutions, and with other students, generative AI technologies are wreaking havoc, as students can now graduate from college “essentially illiterate,” as one professor memorably put it in a recent New York Magazine feature story. Different educational environments will require strategies specific to their students’ needs.

Guidance on effective AI use will likely be a short-term solution. Determining how best to use AI as a learning tool may ultimately require a fundamental rethinking of pedagogical approaches; educators and administrators will have to grapple with tough questions about the purpose of higher education in an AI-saturated world, and how universities can create a culture that incentivizes genuine learning over transactional credentialism.

AI poses an immense challenge, but it also offers a chance to reassess what we teach, how we teach it and why it matters in a world where knowledge is rapidly evolving and artificial intelligence is becoming increasingly sophisticated. It won’t be easy, and there will be missteps along the way. But if we approach this moment with an open mind, a commitment to our students and a willingness to adapt, we might just emerge with an education system that’s more resilient, more relevant and better prepared for the future.

Sample course-wide AI policy

The latest generation of AI tools, such as ChatGPT, has dramatically altered the writing landscape. This course acknowledges that reality and will encourage and equip students to use AI tools responsibly and transparently. This course requires disclosure in the form of an “AI use statement” with every assignment—meaning students must explicitly document if, when and how they use AI for assignments. This course will outline acceptable and unacceptable uses of AI in general, and specifically for each assignment. If you do not use AI, you must state that. Examples with additional detail will be posted on Brightspace.

Guiding principles include the fact that learning to write takes some unassisted effort—and likely struggle—by the student, as critical learning only happens through direct practice. On one hand, over-reliance on AI tools may allow you to pass this class in the short term but will interfere with acquiring professionally valuable skills in the long term. On the other hand, AI tools promise to reduce the unnecessary inefficiencies and struggles that plague many writers and that may even contribute to undesirable disparities in science and industry. This course will help students navigate the tricky balance between skill development and the practicalities of academic and professional life in the era of generative AI. Two key practical points:

  • NYU’s internal MCIT-approved AI access point must be used for unpublished work, including thesis proposals. This is to protect NYU intellectual property.
  • In your AI use statement, which must accompany every assignment, you must disclose the model (e.g., ChatGPT-4o or Claude 3.5 Sonnet) and mode of access (public versus NYU).

Sample AI use policy for a specific assignment

Acceptable uses:

  • Grammar and minor clarity improvements
  • Using the “Core Logic” worksheet to ensure your aims page contains all required components
  • Analyzing other aims pages for structural guidance/inspiration

Examples of appropriate AI use:

  • “Compare the structure of these two aims pages. What are the key differences in how they:
    1. Introduce the research problem
    2. Transition between aims
    3. Connect outcomes to the overarching hypothesis”
  • “Review my aims page against this checklist and identify any possible missing or unclear components.”
  • “Check the sentences in this paragraph for grammar, usage and clarity. Do not make unnecessary changes. After you edit the sentences, review this paragraph to ensure your edits didn’t introduce any logical gaps or distort the meaning.”

Unacceptable uses:

  • Generating initial research ideas
  • Writing entire sections/paragraphs
  • Suggesting hypotheses or aims

Examples of inappropriate AI use:

  • “My hypothesis is X. What are some alternative hypotheses I should consider?” [Seems like brainstorming but outsources critical scientific thinking]
  • “How would you structure three aims to connect finding X with method Y?” [Appears organizational but outsources experimental design logic]
  • “Rewrite this paragraph to sound more sophisticated.” [Seems like editing but often results in content changes you may not fully understand]
  • “What controls would be appropriate for this experiment?” [Looks like a methods check but delegates experimental design]
  • “Make my research sound more innovative and impactful.” [Seems like polishing but could introduce claims you can’t fully defend]

Rationale

The development of a specific aims page is a fundamental exercise in scientific thinking and research design. You must be able to work unassisted to identify knowledge gaps, develop hypotheses, design feasible experiments, entertain alternative explanations and anticipate reviewer concerns. Using AI while doing this core intellectual work can compromise the validity of your science and your growth as a researcher. Thus, I would use AI with extreme caution.

Required disclosure

Include a brief statement indicating if/how you used AI. Examples:

“AI use: Used NYU's GPT to check grammar and parallel structure in aims statements.”

“AI use: None”


This column explores the promises and pitfalls of artificial-intelligence tools in writing—when they can make writing better, faster and easier, and how to navigate the minefield of possible dangers.