Artificial Intelligence
Generative Artificial Intelligence (AI) – Guidance on Use and Applicable Policies
Since the emergence of commercially available Large Language Models (LLMs) in 2022, the field of AI has been an ever-changing landscape with significant impacts on higher education. This dynamic environment makes it difficult to encode formal policy around AI, as the next generation of developments could fundamentally alter its capabilities. Even so, it is vital that educational institutions respond meaningfully to AI in order to advance educational opportunities, ensure data security, and maintain academic integrity. This page serves as a central repository of resources on AI at Nevada State, including relevant policies, training materials, and best practices surrounding AI.
Relevant Policies
Topic | Summary | Relevant Policy(ies) |
Academic Standards, Academic Misconduct, & Student Misconduct | Defines misuse of AI in the context of overall academic standards and outlines disciplinary procedures. | |
Data and Privacy | AI services may or may not be secure. Procedures for the proper handling of sensitive information with AI services are defined in the context of overall data security. | |
Purchasing of AI Services | AI services almost always have an associated contract that must be reviewed by the Nevada State Contracts Group. Purchasing AI services follows the procedures outlined in the Purchasing Manual found in the Nevada State Dropbox Public Folders. | Purchasing and Procurement Card Manual |
AI Attribution, AI Faculty Conduct | Proper citation of AI use in research and scholarship is defined, and mechanisms for addressing misconduct are outlined. | AA 19.1: Research Misconduct Policy; NSHE Code Title 2, Ch. 6: Rules & Disciplinary Procedures for Faculty |
AI and Disability | Some AI tools, including speech-to-text and transcription, document readers, and translators, fall under the umbrella of disability accommodations. | DRC 5.1: Disability Accommodations Offered by NSU |
Incident Reporting | Any violations of the above policies or other grievances related to the improper use of AI can be reported on the Nevada State reporting portal. | |
* Currently undergoing a major revision.
Developing an AI Policy for Your Classroom
AI tools such as ChatGPT, Gemini, Grok, Grammarly, and other GenAI technologies can be valuable resources for learning when used appropriately. Instructors should carefully consider the role AI will play in their classroom. Instructors are encouraged to develop a clear syllabus statement addressing AI and to discuss AI use with students early in the course.
Developing an AI syllabus statement
Best practices for establishing a strong AI syllabus statement include:
- Define AI tools clearly
- Clarify permitted and prohibited uses of AI
- Establish expectations for AI attribution
- Discuss academic integrity and clarify consequences
Please reference the template and example AI syllabus statements below:
- Sample Template
- Syllabus Statement 1
- Syllabus Statement 2
- Syllabus Statement 3
Discussing AI tools in class
In addition to going over your course syllabus statement, holding an in-class discussion during the first week of class on AI use can help set the tone for the rest of the course. Establishing expectations up front helps students feel confident that they are using AI appropriately and understand why proper AI use is important. In addition to your syllabus statement, you can:
- Use AI in an icebreaker activity
- Model how AI might be used in your classroom
- Assign a reading or video that aligns with your philosophy on AI use. Examples include:
- Ethan Mollick’s post “Seven Ways of Using AI in Class”
- Derek Muller’s talk “What Everyone Gets Wrong About AI and Learning”
- Passages from Tim Harford’s book “Messy: The Power of Disorder to Transform Our Lives”
Using AI Responsibly
Students
While AI tools can help you be a better learner and speed up tasks, you should take care to acknowledge when and how you use them, as required by your instructor. Misuse of AI is a form of plagiarism and can result in disciplinary action. When AI is used to supplement learning rather than replace it, AI can be a very effective tool. You should always reference the AI policy in your course syllabus before using AI on any coursework, including any required AI attribution. Potential uses of AI in the classroom are shown below:
Possible Use | Tips | Pitfalls |
Homework Help and Tutoring. Stuck on a homework problem? AI can provide feedback on your work and step-by-step solutions. | Always try a problem on your own before asking an AI for help to develop key skills. Be sure to also use office hours and other instructor-provided resources, which may better align with instructor expectations. | Overreliance on AI for homework problems can undermine learning. When exams come around, you may not have internalized essential concepts and skills. |
Note-Taking and Summarization. AI can summarize lectures, readings, and class discussions. | Physically writing or typing notes yourself is a learning best practice, as it helps you make connections with course content. AI note-taking can help when this is not possible, but don’t let it replace your own notes and summaries. | You should be attending lectures, not the AI. While summaries can save time in the short term, you will not master the material if you are not paying attention in class yourself. |
Idea Generation and Structuring for Papers and Projects. AI helps brainstorm ideas, outline arguments, or organize thoughts or writing. | Brainstorm together with AI rather than outsourcing idea generation completely to AI. AI can be very helpful when you don’t know where to start, but you should iteratively refine ideas by giving it feedback, as you would with a human collaborator, to arrive at a final design or solution. | AI outputs can often be bland and uncreative, so take care to be critical of its outputs, especially for creative tasks. AI can be particularly dangerous for novices or amateurs in a field, as they have no point of reference for what is good and what is bad. |
Research Assistance. AI finds relevant sources and generates citations. | Use AI together with other secondary sources, such as Wikipedia references, to identify important sources in a given field. AI can be useful as a first, but not a final, resource for identifying sources. | Current GenAI models are known to have weak search capabilities, often providing surface-level sources or even fabricating sources. Always check all sources provided by an AI. Many research projects will require more niche sources that won’t appear in AI search results, including archival documents and research papers. |
Writing Assistance or Editing. AI helps with grammar, style, and content generation. | AI can be helpful as an on-the-go editor to catch spelling and grammar mistakes or to handle rephrasing tasks. Take care when relying on it to generate content, which is best reserved for routine tasks rather than creative ones. It is best to write together with AI tools, modifying outputs as needed rather than blindly copying and pasting. | Writing is a skill that can take years to develop on your own. Get in the practice of writing yourself. Overly relying on AI for writing tasks can result in bland and trite writing and the loss of your unique voice. |
Language Translation. AI translates text and speech for non-native speakers. | AI can make resources available to you that would otherwise be inaccessible due to language barriers. If you do use translated material, take care to check the accuracy of the translation yourself or with a native speaker. You should include instructions in the prompt on how literal a translation you want or how to handle transcription errors. | AI can struggle with idiomatic phrases and transcription errors. It also has no guarantee of accuracy, often producing text that is not found in the original input. It will not warn you that text is missing or that it is making educated guesses. |
Note that you are responsible for the work you turn in. If content generated by an AI tool is inaccurate or doesn’t meet the criteria of a given assignment, you are held accountable for those mistakes. A good rule of thumb is to carefully check all work generated by an AI, including verifying solutions, checking facts and sources, and editing outputs.
Faculty
AI Attribution
Proper citation of AI varies by discipline:
APA Style
OpenAI. (2023). ChatGPT (Feb 13 version) [Large language model]. https://chat.openai.com
Chicago Style
Text generated by ChatGPT, March 31, 2023, OpenAI, https://chat.openai.com.
MLA Style
“Text of prompt” prompt. ChatGPT, Day Month version, OpenAI, Day Month Year, chat.openai.com
Find further details at Purdue University’s AI Citation Guide. Instructors can also establish their own expectations on proper citation of AI in the classroom. For example, ChatGPT allows users to generate a shareable link to past conversations, which can be submitted with assignments.
Publisher Policies
Many publishers have established their own policies on the use of AI in scholarly publications. Examples include:
- Elsevier: Publishing Ethics
- IEEE Submission Policies
- Nature: Authorship
- PLOS ONE: Ethical Publishing Practice
Be sure to review publisher guidelines on AI use before submission.
Definitions
Artificial Intelligence (AI): A branch of computer science focused on creating systems that can perform tasks typically requiring human intelligence, such as reasoning, perception, and decision-making.
AI Literacy: Understanding how to use, critically evaluate, communicate about and with, and collaborate effectively with generative AI (GenAI) in a manner that is ethical and socially and academically responsible.
Machine Learning (ML): A subset of AI where algorithms learn patterns from data to make predictions or decisions without being explicitly programmed for each specific task.
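The "learning from data, not explicit programming" idea can be sketched in a few lines of plain Python. This is an illustrative toy (no real ML library, and the data and learning rate are invented for the example): the program is never told that y = 2x, but it recovers the slope by repeatedly nudging its guess to reduce prediction error.

```python
# Minimal sketch: "learning" a rule from examples instead of hard-coding it.
# We fit the slope w in y ≈ w * x by nudging w to shrink prediction error.

data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # (x, y) examples, where y = 2x

w = 0.0            # initial guess for the slope
learning_rate = 0.05
for _ in range(200):
    for x, y in data:
        error = w * x - y               # how far off the prediction is
        w -= learning_rate * error * x  # adjust w to reduce the error

print(round(w, 2))  # 2.0 — the pattern was learned from the data
```

Real ML systems follow the same loop at vastly larger scale: millions of parameters instead of one, and far richer data than three points.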
Neural Networks: Computational models inspired by the structure and function of the human brain, consisting of layers of interconnected nodes (neurons) that process input data to generate predictions. More layers allow neural networks to exhibit more abstract behaviors.
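A single layer of such a network can be sketched in plain Python. This is a minimal illustration (the weights and inputs below are invented for the example): each "neuron" computes a weighted sum of its inputs plus a bias, then applies a nonlinearity; stacking such layers yields a deep network.

```python
# Minimal sketch of one neural-network layer: each output neuron is a
# weighted sum of the inputs, plus a bias, passed through a nonlinearity.

def relu(x):
    # A common activation function: negative values become 0.
    return max(0.0, x)

def layer(inputs, weights, biases):
    # weights: one list of per-input weights for each output neuron.
    return [
        relu(sum(w * x for w, x in zip(neuron_weights, inputs)) + b)
        for neuron_weights, b in zip(weights, biases)
    ]

# Two inputs feeding a layer of two neurons.
out = layer(
    inputs=[1.0, 2.0],
    weights=[[0.5, -0.25], [1.0, 1.0]],
    biases=[0.0, -0.5],
)
print(out)  # [0.0, 2.5]
```

In practice the weights and biases are not hand-picked as above; they are learned from data by training procedures like the one sketched under Machine Learning.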
Deep Learning: A subset of machine learning that uses artificial neural networks, particularly deep neural networks with many layers, to model complex patterns and relationships in data.
Generative Artificial Intelligence (GenAI): A type of AI that creates new content, such as text, images, music, or videos, based on patterns learned from existing data. Examples include ChatGPT for text generation and DALL-E for images.
Large Language Model (LLM): A category of deep learning models trained on massive text datasets to understand, generate, and manipulate human language.
Diffusion Models: A class of generative AI models that create images by iteratively refining noise, mimicking the process of reversing a physical diffusion process. Used in AI generation tools like Stable Diffusion and DALL-E.
Agentic AI: AI systems that operate autonomously with the ability to plan, make decisions, and take action in the world to achieve goals, often with minimal human oversight.
Prompt Engineering: The practice of designing and refining prompts to effectively interact with generative AI models like ChatGPT for desired outputs.