AI at Clemson

Generative AI Guidelines

Purpose

Generative artificial intelligence (AI) refers to systems that can create or transform content such as text, code, images, audio, video and summaries. These tools can support learning, teaching, research, administration, programming and data processing. Clemson University faculty, staff, students and affiliates are responsible for protecting University information and complying with University policies, legal obligations and ethical standards when using AI.

These guidelines provide practical direction for using generative AI safely and responsibly, including guidance that applies to both publicly available AI services and Clemson-approved, contract-protected AI services. The following diagram illustrates how Clemson’s AI Guidelines fit within the broader hierarchy of University policies, departmental guidance and course- or project-specific requirements, with more specific requirements taking precedence as applicable.

For clear, practical guidance, refer to the relevant AI Usage Quick Guide:

  • Academic and Research AI Usage Quick Guide
  • Non-Academic AI Usage Quick Guide

Pyramid infographic illustrating how Clemson University's AI Guidelines fit within the broader hierarchy of:
  1. Course or Project-specific Requirements,
  2. Department or Division Policies and Guidelines
  3. AI Guidelines, and
  4. Clemson University Policies as the foundation of the pyramid.

Scope

These guidelines apply to Clemson University faculty, staff, students, contractors and affiliates when using generative AI tools for University work or coursework, whether on University-owned devices, personal devices or vendor-hosted platforms.

These guidelines complement and work in concert with existing University policies, including those specific to Data Classification, Information Security, Acceptable Use, IT Governance, FERPA, Academic Integrity and Research Misconduct.

Core Principles

  1. Protect Clemson data. Use data classification to decide what can be entered into an AI system.
  2. Prefer Clemson-approved services for University work. Clemson-approved services are reviewed and may include contractual protections and administrative controls.
  3. Assume outputs can be wrong. Verify AI-generated information using authoritative sources.
  4. Be transparent. Disclose and cite AI use when required by your instructor, publisher, sponsor, journal or University policy.
  5. You remain responsible. You are accountable for decisions, work products and outcomes produced with AI assistance.

Definitions and Categories

Data Classification (Clemson)

View the complete Clemson University Data Classification policy.

  • Public: Intended for public release.
  • Internal Use: Non-public Clemson information intended for internal operations.
  • Confidential: Sensitive Clemson information requiring safeguards (e.g., many HR/finance/vendor matters).
  • Restricted: Highest sensitivity (e.g., regulated data such as FERPA education records, PHI; certain security data; other highly sensitive institutional data).

AI Tool Categories

As with data classifications, AI tools fall into categories that help apply these guidelines consistently.

  1. Public AI tool (unapproved/public-facing):
    • Consumer/public AI services where Clemson has not approved the service for institutional data.
    • Data protections may be unclear, user-controlled or subject to change.
  2. Clemson-approved, contract-protected AI service:
    • An AI service reviewed and approved for University use.
    • May include contractual assurances, administrative controls and security safeguards (e.g., encryption, access controls, protected storage, data handling commitments).
    • Examples: ChatGPT Edu, Microsoft Copilot
  3. Contracted service/application with AI features:
    • A service or application that is not used for AI generation on its own, but includes features that leverage AI to enhance the capabilities of the tool.
    • Contractual assurances should restrict the vendor from using customer data to train AI models.
    • Examples: Zoom AI companion, Slate AI
  4. Third-party add-ons, plugins, bots, extensions or connectors:
    • Anything you install or enable that introduces another vendor or sends data outside the primary system (even if the primary system is approved).
    • Often the highest-risk pathway for data leakage.

Data and Tool Matrix

Before using any AI tool, determine the applicable data classification and tool category, and consult this matrix to confirm whether the proposed use is permitted, requires prior approval, or is prohibited. Use this table as a high-level guide. Final determination depends on the tool’s formal University approval status and any applicable contractual, regulatory or sponsor-specific requirements.

 

Data and Tool Matrix

Clemson Data Type | Public AI Tool (Unapproved) | Clemson-Approved, Contract-Protected AI Service | Third-Party Plug-ins
Public            | OK           | OK                                  | Use caution; be aware of what the vendor does with inputted data
Internal Use      | Do not enter | Only if approved for this data type | Generally do not use unless reviewed/approved by CheckIT
Confidential      | Do not enter | Only if approved for this data type | Do not use unless explicitly approved by CheckIT
Restricted        | Do not enter | Do not enter                        | Never use unless explicitly approved (rare)

Key rule: Approval of the main platform does not automatically approve every plugin/bot/connector added to it.
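The matrix above can be sketched as a simple lookup, which may be useful for building internal checkers. The function name, category labels and safe-default behavior below are illustrative assumptions, not an official Clemson API:

```python
# Hypothetical encoding of the Data and Tool Matrix as a lookup table.
# Keys are (data classification, tool category); values are the matrix cells.
MATRIX = {
    ("Public", "public"): "OK",
    ("Public", "approved"): "OK",
    ("Public", "plugin"): "Use caution; be aware of what the vendor does with inputted data",
    ("Internal Use", "public"): "Do not enter",
    ("Internal Use", "approved"): "Only if approved for this data type",
    ("Internal Use", "plugin"): "Generally do not use unless reviewed/approved by CheckIT",
    ("Confidential", "public"): "Do not enter",
    ("Confidential", "approved"): "Only if approved for this data type",
    ("Confidential", "plugin"): "Do not use unless explicitly approved by CheckIT",
    ("Restricted", "public"): "Do not enter",
    ("Restricted", "approved"): "Do not enter",
    ("Restricted", "plugin"): "Never use unless explicitly approved (rare)",
}

def matrix_guidance(classification: str, tool_category: str) -> str:
    """Return the matrix cell for a data classification and tool category."""
    try:
        return MATRIX[(classification, tool_category)]
    except KeyError:
        # Unknown combination: default to the safest answer.
        return "Do not enter; seek guidance via CheckIT"

print(matrix_guidance("Confidential", "public"))  # Do not enter
```

Defaulting unknown combinations to "do not enter" mirrors the checklist guidance later in this document: when anything is unclear, do not enter the data.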

AI Risks

Generative AI introduces risks that vary by tool and configuration.

  1. Data handling risks
    • It is important to understand what the AI service does with inputted data.
    • Retention: prompts, uploads and outputs may be stored for a period of time.
    • Training/model improvement: Some services may use submitted content to improve models. Clemson-approved services may include contractual assurances that user content is not used for training, but this is not universal across tools.
    • Access pathways: Vendor administrators, support staff or security teams may have limited access for troubleshooting or abuse prevention, depending on contracts and settings.
    • Integrations/connectors: Plugins, bots and connectors can transmit data to additional parties and change the risk profile drastically.
  2. Accuracy and reliability risks
    • AI tools can generate incorrect, fabricated or misleading content ("hallucinations"), including citations, calculations and code behavior.
  3. Legal and ethical risks
    • AI outputs may incorporate or resemble copyrighted or proprietary content.
    • AI use may create plagiarism or authorship issues if not disclosed or permitted.
    • Some uses can introduce bias, privacy harms or misuse of sensitive information.

Suggested Checklist Before AI Use

This checklist provides a brief overview of what faculty, staff, students and affiliates should do before entering any information (beyond that which is clearly classified as Public) into an AI tool.

  1. Identify the data classification (Public / Internal Use / Confidential / Restricted).
  2. Identify the tool category (public tool, Clemson-approved service, third-party add-on/bot/connector).
  3. Confirm whether the tool has contractual/administrative protections appropriate to the data, such as:
    • Whether content is used for model training or not (if relevant),
    • Retention/deletion controls (if relevant),
    • Access controls and account management (e.g., Clemson-managed login),
    • Security and compliance commitments (as applicable).
  4. Check whether the tool (and any add-ons) are Clemson-approved for the intended data type and use case.
  5. If any of the information above is unclear: do not enter the data. Seek guidance via CheckIT.

AI Best Practices

  • Enter only public data into unapproved/public AI tools.
  • Minimize data. Share the least information necessary to accomplish the task.
  • Remove identifiers. When possible, de-identify or aggregate information (e.g., remove names, IDs, unique project details).
  • Never paste secrets. Do not include passwords, access tokens, private keys or security-sensitive configurations.
  • Treat AI outputs as untrusted. Verify facts, calculations, citations and claims via authoritative sources.
  • Document important use. For decisions, reports, official communications or compliance-sensitive work, record the tool used, date and what was asked (without storing sensitive prompts inappropriately).
  • Audit outputs, as they may reflect bias or unfair assumptions. Evaluate outputs for potential bias and disparate impact, especially in research, assessment, hiring or other high-stakes contexts.
  • Be transparent. Disclose and cite AI use when required.
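As an illustration of the "remove identifiers" practice above, the following sketch scrubs a few common identifier patterns from text before it is pasted into an AI tool. The patterns (including the student-ID format) are hypothetical examples and will not catch every identifier; human review of the scrubbed text is still required:

```python
import re

# Illustrative regex-based scrubber. Patterns are examples only, not a
# complete or officially sanctioned de-identification method.
PATTERNS = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "[EMAIL]"),  # email addresses
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[SSN]"),      # SSN-like numbers
    (re.compile(r"\bC\d{8}\b"), "[STUDENT_ID]"),          # hypothetical ID format
]

def scrub(text: str) -> str:
    """Replace common identifier patterns with placeholders."""
    for pattern, placeholder in PATTERNS:
        text = pattern.sub(placeholder, text)
    return text

print(scrub("Contact jdoe@clemson.edu about student C12345678."))
# Contact [EMAIL] about student [STUDENT_ID].
```

Scrubbing reduces risk but does not change the data classification on its own; aggregated or contextual details can still identify individuals.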

Third-Party Apps, Bots, Plugins, Extensions and Connectors

Third-party add-ons often create the greatest risk because they can silently move data to new vendors.

Examples (non-exhaustive)

  • Meeting note takers/transcription bots: e.g., Otter.ai bots, Fireflies.ai bots, similar "join my meeting" assistants.
  • Browser extensions: "AI writing assistants" that can read web pages, email or text you type.
  • Productivity-suite add-ons: plugins inside email/docs/chat platforms that send content to external services.
  • Connectors: features that connect an AI tool to cloud storage, LMS content, ticketing systems or knowledge bases.

Rules of thumb

  • Assume a third-party add-on is a new IT solution and may require review/approval by CheckIT before use with non-public Clemson information.
  • Do not enable connectors/plugins for University systems unless they are reviewed and approved for the intended data.
  • If you're invited to "authorize access" to accounts, drive or shared folders: pause and verify approval first.

AI in Meetings and Note Takers

AI features for meetings in Zoom and beyond can capture sensitive information unexpectedly.

Clemson Zoom AI Companion

  • Clemson Zoom AI Companion is approved for Public, Internal Use, Confidential and Restricted data types (such as PHI and FERPA) when used under Clemson’s approved configuration.

Third-party Meeting Bots (not covered)

  • Third-party note takers/bots are NOT covered by Clemson’s Zoom approval and should never be used with sensitive data unless explicitly reviewed and approved.

Host Guidance

  • Enable the Waiting Room feature so you can review who is attempting to join.
  • Require Clemson authentication for meetings that are primarily Clemson participants.
  • If sensitive topics may be discussed, confirm whether recording/transcription is enabled and disable if not appropriate.

Attendee Guidance

  • Ask the host whether recording or AI transcription is enabled.
  • If the meeting involves non-public Clemson information and transcription/recording is enabled without appropriate approval, request it be disabled or leave the meeting.

At-a-Glance Examples

Allowed / Generally Low Risk

These examples are generally allowed and considered low risk when best practices are followed:

  • Entering Public data only into a public AI tool.
  • Using AI as a support tool (brainstorming, outlining, grammar improvement of your own writing).
  • Verifying outputs with trusted sources.
  • Using Clemson-approved AI tools for approved data types.
  • Disclosing/citing AI use when required.

Use With Caution

Slow down and verify accuracy and data classification protections before proceeding:

  • When accuracy matters (policy interpretation, medical/legal guidance, high-stakes decisions).
  • When tool settings/data protections are unclear.
  • When AI features appear in software without clear notice.
  • When a tool offers connectors/plugins to other systems.
  • Drafting content that will be evaluated or graded (only if permitted and properly disclosed).

Prohibited / High Risk

These actions are prohibited or considered high risk:

  • Submitting AI-generated or ghostwritten work as your own for graded assignments when not explicitly permitted.
  • Entering Internal Use, Confidential or Restricted Clemson data into public/unapproved AI tools.
  • Using third-party meeting bots in sensitive meetings.
  • Sharing non-public student, HR, health, contract/grant, security or proprietary research information into unapproved tools.

Academic Work and Academic Integrity

AI can be used in learning, but academic integrity rules still apply.

Students

  • Follow your course syllabus/instructor rules on whether and how AI may be used.
  • Do not submit AI-generated work as your own when it is not authorized.
  • When AI use is allowed, be prepared to:
    • Describe your process, and
    • Cite/disclose the use of AI as required by your instructor.

Instructors and Teaching Staff

  • Make AI expectations explicit in syllabi and assignment instructions (allowed uses, prohibited uses, required disclosure).
  • Consider whether students may:
    • brainstorm/outline,
    • receive feedback on drafts,
    • generate code snippets,
    • or use AI for summarization, and specify what must be disclosed.

Research and Scholarly Work

AI can accelerate research workflows, but risks increase with unpublished, proprietary or regulated data.

  • Do not enter unpublished, proprietary or sponsor-restricted research information into unapproved/public AI tools.
  • Mind contracts and grants: Sponsor agreements may restrict data sharing, publication workflows or IP disclosure.
  • Human subjects/IRB: treat human-subjects data with extreme care; use only explicitly approved environments and processes.
  • Reproducibility: keep records of prompts, versions and verification steps when AI meaningfully influences methods, analysis or conclusions.
  • Authorship and attribution: follow disciplinary norms and journal/publisher requirements for disclosure and citation of AI assistance.

Software Development, Scripting and Automation

When using AI for code:

  • Never paste secrets (API keys, tokens, private keys, credentials).
  • Assume generated code may be insecure or incorrect. Perform code review, testing and security checks.
  • Watch for:
    • vulnerable dependencies, 
    • insecure input handling,
    • prompt-injection risks when building AI-enabled apps,
    • accidental logging of sensitive prompts or outputs.
  • Treat AI-generated scripts as untrusted until validated.
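The "never paste secrets" rule can be partially automated with a pre-flight scan of the prompt before it is sent to any AI service. This is a minimal sketch with illustrative patterns only; a dedicated secret scanner and human judgment should still be applied:

```python
import re

# Hypothetical pre-flight check for secret-like strings in a prompt.
# Patterns are illustrative, not exhaustive.
SECRET_PATTERNS = {
    "AWS access key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "Private key header": re.compile(r"-----BEGIN [A-Z ]*PRIVATE KEY-----"),
    "Generic token assignment": re.compile(
        r"(?i)\b(api[_-]?key|token|password)\s*[:=]\s*\S+"
    ),
}

def find_secrets(prompt: str) -> list[str]:
    """Return the names of any secret patterns found in the prompt."""
    return [name for name, pat in SECRET_PATTERNS.items() if pat.search(prompt)]

hits = find_secrets("api_key = sk-abc123")
if hits:
    print("Refusing to send prompt; possible secrets:", hits)
```

A scan like this catches only known formats; rotating any credential that is accidentally shared is still necessary, as described in the incident steps below.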

If You Accidentally Shared Sensitive Information

  1. Stop immediately. Do not share additional data.
  2. Capture minimal details needed for reporting (what tool, what data type, when).
  3. Report the incident to the Office of Information Security as soon as possible.
  4. Follow any additional guidance from CCIT and the Office of University Compliance and Ethics.

Need Help?

Related University Policies and Guidance

These AI Guidelines are based on and must be read in conjunction with existing Clemson University policies, which remain controlling where applicable.

Clemson University has multiple policies that help protect University data. University employees, students and affiliates must not enter Internal Use, Confidential or Restricted institutional data into publicly available generative AI tools. This includes details like student information, personnel records, confidential University information from contracts or grants, and any proprietary or non-public intellectual property. Make sure that the information you submit is Public and does not contain personally identifiable or sensitive data.

Below are some of the relevant policies and standards that can help ensure that privacy and security are maintained and guide decision making.

Related University Policies and Guidance
Policy and Guidance | Key Points Relevant to AI
Acceptable Use of IT Resources Policy
  • Use of IT Resources must comply with University policies and legal obligations (including licenses and contracts), and all federal and state laws.
  • Specific prohibitions include illegal uploading of copyrighted materials.
  • Mandates reporting of policy violations.
Clemson Undergraduate Academic Integrity AI Guidance
  • Faculty guidance on expectations, assignment design, and academic integrity practices related to AI use in coursework.
Data Classification Policy
  • Provides descriptions of data classification categories.
  • Sets requirement to safeguard data in accordance with the Minimum IT Security Standards based on data classification category.
IT Governance Policy
  • All IT solutions (including generative AI tools and services), whether obtained through procurement, by gift, through research, donation, open source, or other means, must be approved by the IT Governance team before the new IT solution can be used.
FERPA
  • Provides prohibitions around the disclosure of student education records.
Academic Catalog — Undergraduate Academic Integrity
  • Plagiarism includes copying language, structure, or ideas without attribution.
  • Graded works generated by unauthorized AI (not authorized by the instructor/syllabus) or ghostwritten are expressly forbidden.
Research Misconduct Policy
  • Plagiarism includes appropriation of another person’s ideas, processes, results, or words without giving appropriate credit.
Information Security Policy
  • Clemson IT Resources are managed in accordance with applicable policies, procedures, standards, and guidelines (including vendor-hosted cloud environments).
  • University information is classified, stored, protected, and transmitted in accordance with applicable policies, procedures, standards, and guidelines.

External Resources

EDUCAUSE — Artificial Intelligence Resources
  Higher-ed oriented resources and guidance on AI strategy, policy and implementation.

GenAI glossary
  Shared vocabulary for generative AI concepts and security-related terms.

NIST Trustworthy and Responsible AI Resource Center
  Trustworthy AI characteristics and practical guidance aligned with NIST’s AI RMF.

Malwarebytes — AI Security Risks
  Intro-level overview of AI-related cybersecurity risks and common threat patterns.

Telefónica — Overview of EU AI Act risk levels
  Plain-language overview of risk-tier approaches used in AI regulation.

Future of Privacy Forum — AI agents and data protection considerations
  Data protection considerations for “agentic” AI systems and emerging privacy risks.

OWASP — LLM Applications Cybersecurity and Governance Checklist v1.1
  Security and governance checklist for teams deploying LLM-enabled applications.

OWASP — LLM and Generative AI Security Center of Excellence Guide
  Best practices for building LLM/GenAI security programs across an organization.