Artificial Intelligence (AI) Policy Guidance
This policy guidance page is intended to give districts an overview of artificial intelligence, its use in schools, potential policy implications, and important considerations in developing a district-specific policy. This page is for informational purposes only and does not constitute legal advice. Specific questions should be referred to the school district's legal counsel.
ChatGPT and other artificial intelligence (AI) tools are rapidly advancing technologies that are already widely used by teachers and students across the country. AI may have the potential to address some of education's challenges, support innovation in teaching, and improve learning practices. However, the development and use of AI have been fast-paced, and federal law, state law, and policy have not caught up with the widespread use of these technologies. Districts may wonder what steps they can take to address the use of AI in schools while remaining in compliance with relevant federal and state laws. This guidance document provides an overview of AI technologies and recommended approaches districts can take to address AI locally.
Q1: What is Generative AI?
Generative AI refers to a category of artificial intelligence (AI) algorithms that generate new outputs based on the data they have been trained on. Unlike traditional AI systems designed to recognize patterns and make predictions, generative AI creates new content in the form of images, text, audio, and more. Examples of Generative AI include ChatGPT, AlphaCode, GitHub Copilot, and Bard, with more tools and applications in development.
Q2: Are there any federal or state laws governing Generative AI?
No. There is currently no federal law governing the use of AI in this context, and Colorado has not adopted any state-specific laws. However, state and federal privacy laws may apply to AI programs because these programs collect and process substantial user data.
Q3: Will CASB draft a sample policy regarding AI use?
Not presently. Because there are no state or federal laws governing AI, and comprehensive legislation may be forthcoming in the coming months or years, CASB does not intend to release a sample policy at this time and believes it is best for school boards to determine the appropriate use of AI in their districts. For the time being, drafting a new policy or modifying existing policies should be accomplished at the local district level. Any local policy should involve community engagement and address the needs of the community.
Q4: What are potential benefits of Generative AI?
Some potential benefits or uses of Generative AI in the school context include:
- AI tools can automate administrative tasks, such as admissions, attendance, grading, and homework monitoring.
- AI can offer tutoring and studying programs for students.
- AI tools can provide detailed and helpful feedback on students' writing assignments, which could help students improve their writing skills and enhance their overall learning experience without requiring teachers to provide multiple rounds of feedback.
- AI tools can assist with personalized learning by analyzing a student's learning history and providing tailored learning materials based on the student's interests and strengths.
Q5: What are potential concerns regarding Generative AI?
- The ease of using AI may encourage overreliance or cheating among students.
- AI could exacerbate digital equity issues, as students without access to the internet or computers could be at a disadvantage in using AI tools.
- Language barriers and cultural differences may limit the effectiveness of AI chat tools for non-native English speakers, as AI tools are generally most capable and accessible in English.
- Use of AI raises significant privacy concerns because of the large amounts of data that AI systems collect and analyze. If not properly trained on how to use these tools, students or teachers may expose personal information that could be subject to data breaches or misuse.
- Use of AI also raises substantial ethical questions, including whether AI systems should track and analyze student behavior beyond academic performance, and how AI systems should handle student mistakes or misbehavior.
- AI is imperfect, and its output may be incorrect or biased. Generative AI is built by analyzing information and data made available by humans; as a result, bias and inaccuracies are incorporated into the algorithm, whether intentionally or unintentionally.
Q6: What CASB sample policies could be impacted by use of AI in schools?
If a district typically uses CASB sample policies but wishes to modify its policies to address the use of AI, it may wish to review and update the following CASB sample policies.
- JICDA: Code of Conduct includes a definition of scholastic dishonesty that “includes but is not limited to cheating on a test, plagiarism or unauthorized collaboration with another person in preparing written work.” Districts that follow CASB sample policy JICDA and wish to change this definition to address AI use may wish to review and modify JICDA.
- JS*: Student Use of the Internet and Electronic Communication includes significant content relating to internet use, and many parts of the policy, particularly the unauthorized and unacceptable uses section, may warrant review if a district wishes to make changes based on AI.
Q7: What should be considered in my district’s AI Policy?
Limitations of AI. AI software is limited; answers from AI chat software may include discriminatory and/or inaccurate responses. As such, policies should include regular training so that students and staff understand the limitations of AI and how it can be used responsibly. Training should address how AI can be used as a brainstorming tool rather than relied on blindly, and when and how AI-generated answers or products should be verified.
Ethics and Data. AI can collect significant amounts of personal data, particularly when personal data is voluntarily entered through prompts or questions. Personal data may be legally protected, making it inappropriate and unsafe for it to be collected by AI software. For example, a teacher who uses AI to create an IEP could compromise the security of a student's protected health information. As such, any AI policy should include training requirements regarding how staff and students should handle personal data. In conjunction with other data security policies, districts should ensure that students and staff are otherwise protected against data breaches.
Acceptable Use. One of the most transformative changes relating to AI is its use in writing and homework assignments. Students could use AI to write essays, produce book reports, or answer research questions. Although these capabilities may be helpful in certain contexts, they may not always be appropriate. Board policy should outline acceptable use of AI technologies, which relates to academic integrity, discussed below.
Academic Integrity. There are several academic integrity expectations students should follow, including proper citation and referencing of sources, avoiding plagiarism, and avoiding cheating. Students should fully understand the district's expectations for how AI may be used in research and writing, how that use intersects with academic integrity, and the consequences if those expectations are not followed.
To develop a policy on academic integrity as it relates to AI, boards should consider questions such as:
- Is it cheating to use AI on a writing assignment without teacher approval?
- Is it plagiarism to use AI on a writing assignment without crediting the AI?
- Should students cite AI responses in papers or assignments? If so, how should students be instructed to complete citations?
Professional Development. Because the changes associated with AI are new to staff as well as to students and board members, a school board should prioritize professional development and training for staff and teachers. Staff should be included in outside presentations or seminars regarding AI, and the board should consider providing regular training, communication, and open dialogue with staff members.
Reviewing and Updating Policy. Because AI technology is advancing rapidly and will continue to change in future years, boards should establish a process to regularly review and update the AI policy.
Q8: How should my district respond to generative AI?
The answer will depend on your district, as there is no “one size fits all” approach. In developing your response, take the topics above into consideration, engage your community, and examine your existing policies.
Additionally, CASB can direct you to the following resources:
June 2023