Responsible Use of Artificial Intelligence
Policy #3.15
Purpose
Generative Artificial Intelligence (AI) is a rapidly evolving technology with applications across many industries and disciplines. While Hartwick College is committed to encouraging the use of new and emerging technologies, in keeping with its purpose of educating people who will thrive in and contribute to the world of the future, the College also has a responsibility to ensure that users understand the risks and limitations of Artificial Intelligence, and how to use this technology responsibly.
Artificial Intelligence tools, such as ChatGPT (OpenAI) and Gemini (Google), are designed to produce text, images, code, and other content in response to user prompts. Other tools, including Microsoft Copilot, Anthropic’s Claude, and Perplexity AI, offer similar functionality with different features and integrations. Because the industry continues to develop rapidly, with new tools and capabilities emerging frequently, this policy is designed with flexibility in mind and may be revised periodically in response to changes in the technology.
Policy Scope
This policy applies to all Hartwick College students and employees, as well as to vendors and individuals who have a contractual relationship with Hartwick College.
Responsible Office
Office of Academic Affairs
Artificial Intelligence tools can be a source of creativity and can save time on many routine tasks. Provided that the data used and the content generated comply with Hartwick College policies, it is typically acceptable to use Artificial Intelligence to draft text for emails and memos, remarks for presentations, outlines and work plans, job descriptions or advertisements, and the like, and to brainstorm or sketch out initial ideas. Such draft text should be revised to ensure accuracy and to mitigate bias. When not explicitly prohibited, Artificial Intelligence can also be useful in the initial stages of research for summarizing the main ideas or conclusions of internal or external information (e.g., articles, news stories, policies), provided that this information is not personally identifiable, sensitive, or protected information, or data subject to regulatory compliance. Hartwick affirms that AI technologies should enhance access and learning for all users, including through assistive and adaptive tools that support diverse needs. The College will continue to treat accessibility and usability as integral aspects of responsible AI use.
Users are required to disclose the use of Artificial Intelligence when it constitutes a substantial or original contribution to academic work, official reports, or externally facing content. Routine internal communications or materials edited or drafted with AI (e.g., memos, emails, drafts) do not require disclosure, though users remain responsible for their accuracy, appropriateness, and compliance with College policies.
This policy will be reviewed annually and revised as appropriate.
Hartwick students, faculty, and staff are expected to comply with Hartwick College’s Title IX, Bias, Harassment, and Discrimination Policy when using Artificial Intelligence tools. Users of Artificial Intelligence tools are responsible for reviewing AI-generated content for inaccuracies and bias; once work containing AI-generated content is submitted, any factual errors are the user’s responsibility. Users are likewise responsible for any AI-generated content that violates Hartwick College’s Title IX, Bias, Harassment, and Discrimination Policy. The use of Artificial Intelligence to generate malicious, harassing, or threatening content of any kind, or content that misrepresents Hartwick College or any member of the Hartwick College community, is prohibited and will be addressed through the appropriate Hartwick College conduct or disciplinary processes.
Users of Artificial Intelligence should keep in mind that AI tools have a significant environmental impact: the computational resources they require carry a high carbon footprint and consume large amounts of water. At a time when access to clean drinking water is precarious for many populations around the world, the use of AI tools is also a matter of environmental justice.
Hartwick students, faculty, and staff are expected to comply with all Hartwick College policies on information security and appropriate use of technology when using Artificial Intelligence tools. Personally identifiable, sensitive, or protected information, and data subject to regulatory compliance (e.g., FERPA, HIPAA, GLBA), may not be entered into Artificial Intelligence tools. This includes information about prospective, current, and former students; job applicants and current and former employees; Trustees, alumni, and donors; and other persons or parties affiliated with the College. Users of Artificial Intelligence are responsible for adhering to Hartwick College’s policies on the use of copyrighted material and to the Digital Millennium Copyright Act, and are expected to verify that AI-generated content does not infringe copyright.
Hartwick College empowers instructors to establish their own policies on the use of Artificial Intelligence in the classroom, provided that these policies adhere to Hartwick’s Academic Integrity Policy. It is understood that instructors will approach the use of Artificial Intelligence in their classrooms with varying levels of knowledge, skill, and comfort with the technology. Instructors are therefore expected to include explicit policies on the use of Artificial Intelligence in their course syllabi that clearly communicate their expectations to students. For example, if an instructor wishes to prohibit the use of AI-generated content unless it is accompanied by additional sources that verify its accuracy, this must be stated explicitly in the syllabus. Per the Hartwick College Academic Integrity Policy, submitting AI-generated work without advance permission from the instructor is considered plagiarism. Students are responsible for adhering to the instructor’s policies as stated in the syllabus.
Instructors who suspect that part or all of an assignment was generated by AI may not rely solely on AI-detection tools. As with any form of detection software, there is the potential for misidentification and even bias against certain students. Therefore, instructors should use additional methods of analysis to establish a pattern of evidence of plagiarism, including, but not limited to:
- Examining the assignment for unusual vocabulary or sentence structure
- Reviewing multiple drafts of the assignment for unusual patterns of progress
- Asking the student to demonstrate, in response to questions, comprehension of the ideas, methods, and conclusions of the submitted assignment
- Verifying citations, sources, and links for authenticity
- Analyzing the use of information covered in class
These additional methods can best be applied by following the procedures outlined in the Hartwick College Academic Integrity Policy, which state that instructors should first assemble evidence of the suspected questionable academic activity and then make a genuine attempt to meet with the student to discuss the charges.