

MODERN AMERICAN SCHOOL
International Baccalaureate Diploma Programme (IBDP)
American High School Diploma Program, accredited by Cognia (formerly AdvancED)
Artificial Intelligence (AI) Use Policy
Vision Statement of the Modern American School
The Vision of the Modern American School in Amman is to nurture lifelong learners and global thinkers to become responsible citizens with leadership qualities and universal values while instilling pride in one's cultural identity.
Mission of the Modern American School
The Mission of the Modern American School is to provide learners with an engaging and challenging blended learning environment within a diverse community while focusing on international programs, catering for learners’ well-being, fostering international mindedness, and offering various opportunities and experiences that contribute to learners’ growth.
Mission Statement of the International Baccalaureate
The International Baccalaureate aims to develop inquiring, knowledgeable, and caring young people who, through intercultural understanding and respect, help to create a better and more peaceful world. To this end, the organization collaborates with schools, governments, and international organizations to develop challenging programmes of international education and rigorous assessments. These programmes encourage students across the world to become active, compassionate, and lifelong learners who understand that other people, with their differences, can also be right.
Purpose of the Policy
This policy covers any generative Artificial Intelligence (AI) tool (i.e., a type of AI capable of creating new data or content comparable to that which humans can produce), whether standalone products (e.g., Copilot, ChatGPT) or tools integrated into productivity suites. This policy applies to all data and content creation, including text, artwork, graphics, video, and audio.
The Modern American School is committed to the ethical and responsible use of AI tools. It recognizes the emerging opportunities to reduce teacher workload, develop intellectual capabilities, and prepare learners for future workplace technology use. Although AI tools offer numerous benefits, their content may not always be accurate, safe, or suitable and could result in malpractice.
This policy aims to ensure that staff and learners use AI tools safely and appropriately, and it establishes the requirement for ongoing professional development to promote best practices.
Terminology
For this policy, the following terms are defined as:
• Artificial Intelligence (AI): The International Baccalaureate (IB) defines artificial intelligence (AI) as technology that can perform tasks typically requiring human intelligence, such as visual perception, speech recognition, decision-making, and language translation. The IB emphasizes the importance of using AI tools ethically and in alignment with its principles of academic integrity.
• Generative AI: A category of artificial intelligence algorithms trained on large datasets to produce new content, such as text, code, answers to questions, human-like responses, audio, video, simulations, diagrams, and images, based on human input.
• Misuse of AI: The use of AI by staff or learners to replace professional judgment, bypass independent demonstration of their abilities, or violate data protection, online safety, and IT policies. This includes the inappropriate or harmful application of AI tools that result in adverse outcomes, such as spreading misinformation, breaching privacy, enabling discrimination, or causing harm to individuals or society. Such misuse can be intentional, such as using AI for malicious purposes, or unintentional, arising from flawed design or insufficient oversight.
Using AI Tools Effectively and Appropriately
While generative AI tools can expedite and simplify various writing tasks, they cannot replace the discernment and profound subject expertise of human professionals. Therefore, our education system must ensure that students gain knowledge, develop expertise, and enhance their intellectual abilities.
The school will ensure that AI tools and activities are suitable for the school and are used appropriately to achieve the following aims and opportunities:
▪ To use technology safely and effectively to deliver excellent education that prepares learners for future workplaces.
▪ To reduce workload.
▪ To assist with the production of high-quality documentation.
▪ To develop thinking by checking responses, providing counterarguments, and generating questions.
▪ To support research.
▪ To identify and use appropriate resources and approaches.
▪ To actively involve parents in understanding and supporting the safe use of AI tools, providing regular updates and resources.
▪ To address the ethical implications of AI use, including potential biases and the importance of promoting fairness and inclusivity.
▪ To establish clear procedures for learners, parents, and staff to provide feedback on the use of AI tools and report any concerns.
▪ To include, over time, case studies or examples of successful AI integration to illustrate best practices.
Preventing the Misuse of AI Tools
The school acknowledges that misuse of AI tools can be both accidental and intentional, and that education is crucial in preventing this. The school will consider restricting access to online AI tools on school devices and networks, particularly on devices used for exams and assessments, where necessary.
The school will keep in mind that the content produced by AI tools can be:
▪ Inaccurate
▪ Inappropriate
▪ Biased
▪ Taken out of context and without permission
▪ Out-of-date or unreliable
Generative AI returns results based on the dataset on which it has been trained. For example, a generative AI tool may not have been trained on the English curriculum and may not provide results comparable to those of a human-designed resource developed in the context of our curriculum.
The school agrees with the following principles:
▪ Generative AI tools cannot replace a human expert’s judgment and deep subject knowledge. Learners must acquire their own knowledge, expertise, and intellectual capability.
▪ Learners should not rely on generative AI as the primary step in their thinking.
▪ Learners should not circumvent their learning (e.g., when asked to reflect on a task).
▪ We must teach learners that access to generative AI is not a substitute for knowledge held in long-term memory, and that sense-checking AI results relies on a framework against which to compare them.
▪ AI should not be used to generate personalized feedback to learners on formative and summative assessments. The justification of a mark or grade should be based on human judgment.
▪ Where AI is used to create administrative documents, the staff member who produced them remains responsible for the quality and content. Staff should not assume AI-generated documents match the quality or context of those created by humans.
▪ Any content produced by generative AI tools requires professional judgment to check for appropriateness and accuracy.
▪ Learners and staff should avoid using AI to create documents containing personal data, as the software might store and reuse this information. Any data entered must not be identifiable and will be treated as if released to the internet.
▪ Assessed work must always be the learner’s own writing and must not contain computer-generated text directly or, where academic referencing applies, without citation.
▪ Learners will be made aware of the importance of referencing AI tools correctly when using them to produce work, allowing teachers and assessors to review how AI has been utilized, including the date and name of the AI tools used, as well as the work generated.
▪ For any assessed tasks, learners should retain a brief explanation of how AI has been used, along with a screenshot of any input or response (or other un-editable source).
Identifying the Misuse of AI Tools
Staff will continue to use the skills and observation techniques already in use to ensure that the learner’s work is authentically their own. They will be aware of and look out for potential indicators of AI use, which include, but are not limited to:
▪ Inconsistent style or tone: AI-generated text may exhibit shifts in style, tone, or vocabulary that are unusual for the writer’s typical work.
▪ Lack of personal or contextual knowledge: AI-generated content may lack specific, local, or topical knowledge, showing a general or generic approach rather than detailed, context-aware insights. It will not account for the subject knowledge of the individual using it.
▪ Overuse of specific phrases or terminology: AI tools may rely on repetitive phrases or technical jargon that is inappropriate for the intended audience or level of work.
▪ Missing citations or references: AI-generated content may lack proper citations or references, particularly in academic or research contexts, or may use unverifiable references.
▪ Unusual formatting: AI tools might produce content with abnormal structures, such as excessive concluding statements, repetitive sections, or unusual paragraph use.
▪ Incorrect or confidently wrong information: AI may confidently state incorrect facts or provide misleading information within the otherwise coherent text.
▪ Lack of depth or critical thinking: AI may generate content that lacks the nuanced analysis or depth expected from a human expert, resulting in more generic or surface-level responses.
▪ Absence of visual aids: In assignments or reports where visual aids (graphs, charts, etc.) are usually expected, AI-generated content may lack these elements, even if they would enhance the quality of the work.
▪ Outdated information: AI tools may reference information that is outdated or lacks reference to recent events, reflecting the cutoff date of the AI’s data source.
The teaching staff will employ various assessment methods to evaluate learners’ understanding and ensure they genuinely grasp the subject matter. This may include:
▪ Class discussions, oral presentations, practical demonstrations, written reflections, and project-based assessments.
▪ Using these assessment strategies to verify learners’ comprehension beyond what AI tools can produce, promoting deep learning and authentic engagement.
▪ Allocating time for sufficient portions of assessed work to be completed under direct supervision (where appropriate).
▪ Examining intermediate stages in the production of assessed work to ensure that these are being completed in a planned and timely manner, and that the work submitted represents a natural continuation of earlier stages.
▪ Investigating any work suspected of being generated through misuse of AI tools.
▪ Issuing tasks that are, wherever possible, topical, current, and specific and require creating content that is less likely to be accessible to AI models.
Current AI detectors are not yet reliable enough to conclusively identify AI-generated text. They can produce false positives, inaccurately flag human-written text as AI-generated, and may be biased against non-native English writers. Staff should not rely solely on AI detectors to detect academic misconduct. Where detectors are used, staff must be aware of their limitations, and transparent processes are needed to investigate flagged texts, given the chance of false positives. Assessment redesign should be prioritized over AI detection.
Exams and Assessments
Learners will be informed about the consequences of improper use of AI tools and the school’s stance on plagiarism and academic misconduct. They will also be made aware of the risks of using AI tools to complete exams and assessments, which include:
▪ Submitting work that is incorrect or biased.
▪ Submitting work that provides dangerous and/or harmful answers.
▪ Submitting work that contains fake references.
▪ Failing to accurately reference content generated by AI.
The school will also ensure that parents are informed of the risks associated with using AI tools, what constitutes misuse, and the school’s approach to addressing malpractice. Teachers and other relevant staff members will discuss AI tools and agree on a joint approach to managing learners’ use of AI tools in the school.
▪ Learners will only be permitted to use AI tools to assist with assessments where the conditions of the assessment permit Internet use and where they can demonstrate that the final submission is the product of their independent work and thinking.
▪ The misuse of AI by learners constitutes misconduct and will be subject to the same consequences as plagiarism, as outlined in the academic integrity policy.
Roles and Responsibilities
The School Management Team will be responsible for:
▪ Ensuring their knowledge of AI tool usage in the school is up to date.
▪ Monitoring the use of AI tools within the school and deciding when to obtain a license for a specific AI tool.
▪ Integrating AI tools into relevant school policies, training, guidance, and procedures.
▪ Ensuring that staff receive regular and up-to-date training on the use of AI tools.
▪ Communicating with parents to ensure they are kept informed about how AI tools are being utilized in school and how the school is ensuring their safe and effective use.
▪ Regularly auditing and evaluating the use of AI tools.
▪ Ensuring IT technicians provide technical support in developing and implementing the school’s AI practices, policies, and procedures tailored to its needs.
▪ Ensuring that IT technicians are implementing appropriate security measures.
Staff and teachers will be responsible for:
o The safe and effective use of any AI tool.
o Maintaining and modeling a professional standard of conduct while using AI tools.
o The security of AI tools and the data they use or access.
o Familiarizing themselves with any relevant AI tools the school uses and the associated risks.
o Reporting concerns in line with the school’s reporting procedure.
Learners will be responsible for:
o Complying with the Online Safety and Acceptable Use Agreement, as well as other relevant policies and school rules.
o Not entering personal data into generative AI tools, including email addresses.
o Reporting any concerns to a member of staff.
Parents will be responsible for:
o Participating in discussions and providing feedback on AI initiatives.
o Supervising children’s use of AI tools at home.
o Promoting responsible and ethical use of AI.
Data Protection, Cybersecurity, and Online Safety
The school will follow the procedures in its data protection, cybersecurity, and online safety policies to protect the school community from harmful online content that AI tools could produce.
The school will:
▪ Instruct staff not to enter data classed as personal or special category data into AI tools under any circumstances. Any data entered must not be identifiable and will be considered released to the internet.
▪ Not allow or cause intellectual property, including learners’ work, to be used to train generative AI models without appropriate consent or an exemption from copyright.
▪ Ensure that learners do not access or create harmful or inappropriate content online.
The tables below outline the benefits, risks, and risk mitigation strategies associated with the use of AI tools by learners, staff, and management.
Learners
Benefits:
▪ Personalized Content and Review: AI can help generate customized study materials, summaries, quizzes, and visual aids, enabling students, including those with disabilities, to access and develop tailored resources that meet their specific needs. Additionally, AI can help students organize their thoughts and review content.
▪ Aiding Creativity: Students can harness generative AI as a tool to spark creativity across diverse subjects, including writing, visual arts, and music composition. AI can suggest novel concepts or generate artwork or musical sequences to build upon.
▪ Tutoring: AI technologies have the potential to democratize one-on-one tutoring and support, particularly for students facing financial or geographic constraints. Virtual teaching assistants, powered by AI, can provide round-the-clock support, assist with homework, and supplement classroom instruction.
▪ Critical Thinking and Future Skills: Students who learn about how AI works are better prepared for future careers in a wide range of industries. They can develop computational thinking skills to break down complex problems, analyze data critically, and evaluate the effectiveness of solutions.
Risks:
▪ Plagiarism can occur when learners copy from generative AI tools without proper approval or adequate documentation and submit AI-generated work as their own.
▪ Misinformation can be generated by generative AI tools and disseminated on a large scale, resulting in widespread misconceptions.
▪ Bullying and harassment by using AI tools to manipulate media to impersonate others can have severe consequences for learners’ well-being.
▪ Overreliance on AI models can lead to undercutting the learning process and abandoning human discretion and oversight. Important nuances and context can be overlooked and accepted. People may overly trust AI outputs, especially when AI is seen as having human-like characteristics.
▪ Unequal access to AI tools exacerbates the digital divide between learners who have independent and readily available access at home or on personal devices and those who rely on school or community resources for access.
Risk Mitigation:
▪ In addition to being transparent about when and how AI tools may be used to complete assignments, teachers can restructure assignments to reduce opportunities for plagiarism and decrease the benefit of AI tools. This may include evaluating the artifact development process rather than just the final artifact and requiring personal context, original arguments, or original data collection.
▪ Learners should be taught to critically assess all AI-generated content for potential misinformation or manipulation and to understand the principles of responsible content creation and sharing.
▪ Staff and learners should be taught how to properly cite and acknowledge the use of AI where applicable.
▪ If an assignment permits the use of AI tools, the tools must be made available to all learners, considering that some may already have access to such resources outside of school.
Teachers
Benefits:
▪ Content Development, Enhancement, and Differentiation: AI can assist educators by differentiating curricula, suggesting lesson plans, generating diagrams and charts, and creating customized worksheets tailored to students’ needs and proficiency levels.
▪ Assessment Design and Analysis: In addition to enhancing assessments by automating question creation, providing standardized feedback on common mistakes, and designing adaptive tests based on real-time student performance, AI can conduct diagnostic assessments to identify knowledge or skill gaps, thereby enabling more comprehensive performance evaluations.
▪ Continuous Professional Development: AI can guide educators by recommending teaching and learning strategies tailored to student needs, personalizing professional development to meet teachers’ needs, suggesting collaborative projects between subjects or teachers, and offering simulation-based training scenarios, such as teaching a lesson or managing a parent-teacher conference.
▪ Ethical Decisions: Understanding how AI works, including its ethical implications, can help teachers make informed decisions about the use of AI technologies and support the development of ethical decision-making skills among students.
Risks:
▪ Societal bias is often due to human biases reflected in the data used to train an AI model. Risks include reinforcing stereotypes, recommending inappropriate educational interventions, or making discriminatory evaluations, such as falsely reporting plagiarism among non-native English speakers.
▪ Diminishing student and teacher agency and accountability is possible when AI technologies deprioritize the role of human educators in making educational decisions. While generative AI presents useful assistance to amplify teachers’ capabilities and reduce teacher workload, these technologies should be a supporting tool to augment human judgment, not replace it.
▪ Privacy concerns arise if AI is used to monitor classrooms for accountability purposes, such as analyzing teacher-student interactions or tracking teacher movements, which can infringe on teachers’ privacy rights and create a culture of surveillance.
Risk Mitigation:
▪ Select AI tools that provide an appropriate level of transparency in how they create their output to identify and address bias. Include human evaluation before any decisions informed by AI are made, shared, or acted upon.
▪ Educate users on the potential for bias in AI systems, enabling them to select and utilize these tools more thoughtfully and effectively.
▪ All AI-generated content and suggestions should be reviewed and critically reflected upon by students and staff, thereby keeping “humans in the loop” in areas such as student feedback, grading, and when AI recommends learning interventions. When AI tools generate instructional content, teachers need to verify that this content aligns with the relevant curriculum standards and learning objectives.
▪ Teachers should ultimately be responsible for evaluation, feedback, and grading, as well as determining and assessing the usefulness of AI in supporting their grading work. AI should never be solely responsible for grading.
Management
Benefits:
▪ Operational Efficiency: Staff can utilize tools to support school operations, including scheduling assistance, automated inventory management, enhanced energy savings, and the generation of performance reports.
▪ Data Analysis: AI can extract meaningful insights from vast amounts of educational data by identifying trends in performance, attendance, and engagement, thereby enabling more personalized instruction.
▪ Communications: AI tools can help draft and refine communications within the school community, deploy chatbots for routine inquiries, and provide instant language translation.
▪ Professional Development: AI can tailor professional development programs and content based on staff interests and career stages.
Risks:
▪ Compromising privacy is a risk when AI systems gather sensitive personal data on staff and students, store personal conversations, or track learning patterns and behaviors. This data could be hacked, leaked, or exploited if not adequately secured and anonymized. Surveillance AI raises all of the concerns above, as well as the issue of parental consent, potential biases in the technology, the emotional impact of continuous monitoring, and the possible misuse of collected data.
Risk Mitigation:
▪ Evaluate AI tools for compliance with all relevant policies and regulations, including privacy laws and ethical principles.
▪ AI tools should be required to detail if/how personal information is used to ensure that personal data remains confidential and isn’t misused.
Policy Review
This policy will be reviewed annually and updated to reflect advancements in technology and best practices.
Last update May 2025
Access to AI Use Policy
The policy may be accessed on the school’s website, and a new version is made available whenever the policy is updated. Before applying to the school, learners and their parents or legal guardians are expected to familiarize themselves with the rules stipulated in this policy.
Students and their parents or legal guardians will sign a form to confirm that they have read, understood, and agreed to comply with the academic integrity policy and other school policies and regulations.
Links to Other School Policies
This policy has been developed in conjunction with the school's Assessment Policy, Language Policy, Inclusion Policy, and Admission Policy. For any matters not specified in this document, please refer to the relevant policy.
Appendices
Appendix 1
Letter to Parents and Guardians concerning the use of AI tools
Dear Parents and Guardians,
As emerging technologies, such as artificial intelligence (AI), become increasingly prevalent, our school is proactively developing principles to guide the safe, effective, and responsible use of these tools in student learning. After careful consideration, we have established the following principles:
1. Support Education Goals for All: AI will be thoughtfully used to enhance outcomes for every student.
2. Privacy & Security: The use of AI will align with regulations that protect student data privacy, safety, and accessibility.
3. AI Literacy: Students and teachers will develop skills to evaluate and utilize AI technologies in an ethical and critical manner.
4. Realize Benefits and Address Risks: We will carefully examine the benefits of AI while proactively mitigating associated risks.
5. Academic Integrity: Students will produce original work and adequately credit sources, including AI tools.
6. Maintain Human Agency: AI will provide support, not replace the discretion of educators and students in decision-making. Our staff will set parameters for each class and assignment for when and how AI systems can be used.
7. Continuous Evaluation: We will routinely audit AI use, updating policies and training as needed.
We remind parents and guardians that AI tools may have age restrictions in place. For example, ChatGPT currently requires users to be at least 13 years old and requires parental or legal guardian consent for students between the ages of 13 and 18. The website warns that “ChatGPT may produce output that is not suitable for all audiences or all ages, and educators should be mindful of this when using it with students or in classroom contexts.”
Our goal is to create a learning environment where AI technologies empower rather than replace the human aspects of education. We approach these technologies cautiously to prepare students for a future where they will be ubiquitous. Please reach out with any questions or input on these principles as we navigate this rapidly changing terrain together. We thank you for your support.
Sincerely,
Appendix 2
Letter for teachers and staff
Dear Teachers and Staff,
Artificial intelligence (AI) can transform our schools in exciting ways, but we must also mitigate the risks. Below are a few examples of responsible and prohibited uses of AI. Throughout the remainder of the school year, we will provide ongoing professional development opportunities.
Examples of Responsible Uses of AI
Student Learning:
o Aiding Creativity: Students can harness generative AI to spark creativity across diverse subjects, including writing, visual arts, and music composition.
o Content creation and enhancement: AI can assist in generating personalized study materials, summaries, quizzes, and visual aids, helping students organize their thoughts and content, and facilitating content review.
Teacher Support:
o Assessment Design and Analysis: AI can enhance assessment by creating questions and providing standardized feedback on common mistakes. Teachers will ultimately be responsible for evaluation, feedback, and grading, including determining and assessing the usefulness of AI in supporting their grading work. AI will not be solely responsible for grading.
o Content Differentiation: AI can assist educators by differentiating curricula, suggesting lesson plans, generating diagrams and charts, and customizing independent practice to meet the needs and proficiency levels of individual students.
o The responsible use of AI in the classroom may vary. For example, AI may be suitable only for specific graded assignments. I encourage you to discuss the use of AI with your students.
Examples of Prohibited Uses of AI
Student Learning:
o Bullying/harassment: The use of AI tools to create deepfakes, manipulate media, or impersonate others for bullying, harassment, or any form of intimidation is strictly prohibited. All users are expected to employ these tools solely for educational purposes, upholding values of respect, inclusivity, and academic integrity at all times.
o Plagiarism: Learners and staff should not copy from any source, including generative AI, without prior approval and adequate documentation. Learners should not submit AI-generated work as their original work. Teachers will be clear about when and how AI tools may be used to complete assignments and restructure assignments to reduce opportunities for plagiarism. Existing procedures related to potential violations of our Academic Integrity Policy will remain in effect.
o Bias: AI tools trained on human data will inherently reflect societal biases in the data. Risks include reinforcing stereotypes, recommending inappropriate educational interventions, or making discriminatory evaluations, such as falsely accusing non-native English speakers of plagiarism. Staff and learners will be taught to understand the origin and implications of bias in AI, AI tools will be evaluated for the diversity of their training data and transparency, and humans will review all AI-generated outputs before use.
o Diminishing learner and teacher agency and accountability: AI technologies will not be used to supplant the role of human educators in instructing and nurturing learners. AI is a supporting tool to augment human judgment, not replace it. Teachers and staff must review and critically reflect on all AI-generated content before use.
o Staff and learners are prohibited from entering confidential or personally identifiable information into unauthorized AI tools.
Sincerely,
Appendix 3
Learners Agreement
Artificial intelligence (AI) can help me learn more effectively and is crucial for my future, so I promise to use it responsibly and make informed choices.
1. I will use AI tools responsibly and refrain from using them in a manner that could harm myself or others.
2. I will not share personal or confidential information with an AI tool.
3. I will only use AI to support my learning and will follow my school’s rules and teacher’s instructions on when and how to use AI on an assignment.
4. I will be honest about when I use AI to assist with assignments, and I will not submit work that is entirely generated by an AI as my own.
5. If I use AI, I will review its work for mistakes.
6. I will check with my teacher when unsure about what is acceptable.
Student Signature:


References
International Baccalaureate. "Statement from the IB about ChatGPT and Artificial Intelligence in Assessment and Education." IB, 15 Feb. 2023, https://www.ibo.org/news/news-about-the-ib/statement-from-the-ib-about-chatgpt-and-artificial-intelligence-in-assessment-and-education/.
Łodzikowski, Kacper, Peter W. Foltz, and John T. Behrens. "Generative AI and Its Educational Implications." SCALE Initiative, Stanford University, Jan. 2024, https://scale.stanford.edu/genai/repository/generative-ai-and-its-educational-implications
International Baccalaureate Organization. "Guidance on the Use of Artificial Intelligence Tools." Academic Integrity Policy, 2023, Programme Resource Centre.
University of San Diego. "39 Examples of Artificial Intelligence in Education." https://onlinedegrees.sandiego.edu/artificial-intelligence-education/.
Cassidy, Dara, et al. "Use Scenarios & Practical Examples of AI Use in Education." SCALE Initiative, Stanford University, July 2024, https://scale.stanford.edu/genai/repository/use-scenarios-practical-examples-ai-use-education.
Itransition. "AI in Education: 8 Use Cases & Real-Life Examples." https://www.itransition.com/ai/education.