
AI at Clemson

Academic and Research AI Usage Quick Guide

Artificial intelligence tools can support learning, discovery and innovation across Clemson University. This quick guide outlines key considerations for using AI in academic coursework and research activities, with a focus on academic integrity and responsible scholarly practice.

As a general rule, always follow the requirements outlined in the course syllabus, research sponsor guidelines, journal policies and IRB protocols. When expectations differ, those policies take precedence over this guide. Regardless of the tools used, users remain responsible for the accuracy, integrity and originality of their final work.

For general, non-academic AI use, refer to the Non-Academic AI Usage Quick Guide. For more details, review Clemson’s Generative AI Guidelines and the Data Classification Policy.

Green — Generally Allowed

These uses carry low risk. They are generally allowed when course or research rules permit them and AI use is properly disclosed.

Students

  • Only use AI in ways approved by the instructor and permitted in the syllabus.
  • Be prepared to explain the process (what AI did vs. what you did).
  • Disclose/cite AI use when required.

Examples:

  • Brainstorming an outline if the instructor allows brainstorming tools.
  • Getting feedback on clarity/grammar of your own draft (with required disclosure).
  • Generating practice questions to study (not submitting as graded work).

Instructors

  • Set clear expectations for allowed uses of AI (e.g., outline help).
  • Define prohibited uses of AI (e.g., full draft generation).
  • Outline AI disclosure requirements for students.

Example:

  • Assignment instructions: “AI may be used for outlining and grammar, but not for final answers; students should include a short AI-use statement.”

Researchers

  • Use AI to accelerate workflows when the data is appropriate for the tool and required approvals are in place.
  • Follow disclosure norms required by journals, publishers or sponsors.
  • Maintain records to support reproducibility when AI influences methods/analysis.

Examples:

  • Summarizing published literature.
  • Polishing wording of a manuscript draft (with required disclosure).
  • Drafting a code comment or documentation (no sensitive data).

Yellow — Use with Caution

These uses carry moderate risk and require careful judgment. They may be allowed in some circumstances, but only when they align with course policies, sponsor requirements and research guidelines. Extra steps, including verification, documentation and clear disclosure, are expected.

Students

Drafting content that will be evaluated/graded:

  • Only if explicitly permitted.
  • Must disclose/cite as required.
  • The work must reflect the student's understanding.

Examples:

  • Generating a first-pass explanation of a concept, then rewriting and citing AI use (if allowed).
  • Generating code snippets for an assignment only if the course allows it and students can explain how it works.

Instructors

Any assignment where AI use could blur authorship:

  • Clarify what students may do (outline vs. draft vs. final).
  • Specify what must be disclosed (tool name, how used).

Example:

  • Allowing AI summarization for reading responses but requiring students to attach a reflection describing how they verified it.

Researchers

Unpublished, proprietary or regulated contexts raise risk:

  • Sponsor agreements may restrict sharing or IP disclosure.
  • Human-subjects data requires extreme care and approved environments.
  • Reproducibility requires tracking prompts, tool versions and verification steps when AI meaningfully influences methods or analysis (a simple record-keeping sketch follows the examples below).

Examples:

  • Analyzing research notes that are not public—only in an explicitly approved environment.
  • Drafting parts of a methods section—keep a record of prompts/versions and validate claims.
  • Using AI in human-subjects work—only if IRB/approved processes explicitly support it.
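
One lightweight way to keep the kind of records described above is to log each AI interaction alongside the project files. The sketch below is a minimal, hypothetical illustration, not an official Clemson tool or requirement; the log location, field names and example tool name are assumptions chosen only to show the idea of capturing the prompt, tool version and verification step for later review.

```python
# Minimal illustrative sketch (assumed names, not an official Clemson tool):
# append one record per AI interaction to a JSONL log so prompts, tool
# versions and verification notes can be reconstructed later.
import hashlib
import json
from datetime import datetime, timezone
from pathlib import Path

LOG_PATH = Path("ai_use_log.jsonl")  # hypothetical log file kept with project records


def log_ai_use(tool: str, model_version: str, prompt: str,
               output: str, verification_note: str) -> None:
    """Record what the AI tool was asked, what it returned, and how it was checked."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "tool": tool,
        "model_version": model_version,
        "prompt": prompt,
        # Store a hash of the output so the exact text can be matched later
        # without duplicating long responses in the log.
        "output_sha256": hashlib.sha256(output.encode("utf-8")).hexdigest(),
        "verification_note": verification_note,
    }
    with LOG_PATH.open("a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")


# Example usage (tool name and text are placeholders):
# log_ai_use("ExampleChatTool", "2025-01", "Summarize prior work on topic X",
#            ai_response_text, "Checked all cited papers against the originals.")
```

Whatever form the record takes, the goal is the same: someone reviewing the work later should be able to see where AI was used, which version of the tool produced the output and how the researcher verified it.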

Red — Prohibited

These uses present high risk and are not permitted when they violate course expectations, research policies or sponsor restrictions. Engaging in these activities may constitute academic misconduct or research noncompliance and can result in serious consequences.

Students

  • Submitting AI-generated work as original work when not authorized.
  • Using AI in ways that violate the course syllabus/instructor rules.
  • Failing to disclose AI use when disclosure is required.

Examples:

  • Turning in an AI-written essay/problem set when the syllabus forbids AI drafting.
  • Copy/pasting AI output into a graded assignment without permission or disclosure.
  • Presenting AI-generated code as original work when not allowed.

Instructors

  • Leaving expectations ambiguous in ways that encourage policy violations.
  • Allowing grading practices that conflict with stated integrity requirements.

Example:

  • Providing no guidance on AI use for a major writing assignment, then penalizing students inconsistently.

Researchers

  • Entering unpublished, proprietary, or sponsor-restricted research information into unapproved/public AI tools.
  • Violating sponsor, journal, or IRB/human-subjects constraints.
  • Misrepresenting authorship or failing required disclosure of AI assistance.

Examples:

  • Uploading sponsor-restricted datasets or unpublished results into a public AI chatbot.
  • Using AI with human-subjects data outside explicitly approved environments/processes.
  • Submitting a manuscript without required AI disclosure per journal/publisher rules.

Connect with the AI Initiatives Team

Get connected to stay current on upcoming events, opportunities and more!

Mitch Shue
Provost Fellow
Professor of Practice, School of Computing
Executive Director, AI Research Institute for Science and Engineering
mshue@clemson.edu

Nathan J. McNeese, PhD
Associate Vice President for Technology & Innovation
McQueen Quattlebaum Endowed Professor of Human-Centered Computing
mcneese@clemson.edu