“I spent two hours rewriting my essay because the AI-checker software kept flagging it as AI-written, even though it’s my 100 percent original work. Can I still submit or should I keep rephrasing?”
That’s what my student asked, concerned about being falsely accused of using generative AI. Her experience mirrored the overwhelming challenges our K-12 staff faced when generative AI exploded in the fall of 2022.
At the Center for Future Global Leaders at Elite Open School, we knew that the outright prohibition of AI wasn’t educational or feasible for our online high school program. Our aim was to find the right balance: setting guardrails that allowed students to build future-ready skills while still mastering fundamental education and critical thinking.
A Common Problem
We weren’t alone in this. Cengage Group’s 2025 research found that almost all higher education instructors (92 percent) and students (83 percent) agree AI literacy is vital for future employment. The positive perception is even higher among K-12 educators and administrators.
However, a 2023 study by the same organization highlighted persistent concerns: academic integrity, bias mitigation, instructor tech fluency, and the accuracy of AI output. Over two years, these concerns have barely budged, suggesting we have yet to find a comfortable solution.
The Way Forward Through Social and Emotional Learning
Our school found a way forward by being transparent. We were honest with our students: This technology is as new to us as it is to them. By opening it up for discussion, we created a safe space for collective exploration and reflection. This approach not only fostered trust but also gave us a practical way to build social and emotional learning competencies into our curriculum.
Together with our students, we moved through a few iterations of the AI policy, but the first and most crucial step was shifting away from a purely punitive approach. The impact was quite obvious and immediate. After we established our co-authored policy, we could hold students accountable and have productive discussions about their AI usage.
As a part of the co-authoring process, we held an initial session to set our expectations as the faculty and drew some boundaries around academic integrity. For example, pasting in fully AI-generated essays as a submission was off the table.
We also listened to our students’ personal uses of generative AI and gave them direction by highlighting AI-usage examples we liked, such as using it as a tool to brainstorm ideas or to explore different topics for a project. Shortly after, we asked our students to submit a policy draft by the end of the week, in light of our discussion.
Shifting the Mindset
Eventually, we settled on a solution that promotes transparency and rewards AI applications that foster skill-based growth, as long as the use is disclosed and justified in a way that upholds academic integrity. Students felt comfortable in the space we provided to interact with AI, receiving feedback on their usage without being penalized while still staying within the boundaries of our academic integrity policies.
This solution added a layer of accountability, since detecting AI-generated work is nearly impossible, even with the best detection software. The reward mechanism also served as extra motivation for disclosure, making the tangible benefit of transparency more attractive than risking a breach of academic integrity.
Additionally, building an environment of trust and transparency opened a channel of communication between teachers and students rather than casting the faculty as the “AI police” in students’ eyes.
How Does Co-Writing AI Policies Foster Social and Emotional Learning in Schools?
Here’s how co-writing an AI policy with students fosters key competencies:
- Responsible Decision-Making: Engaging students in policy creation compels them to think critically about the consequences of their AI use. By debating ethical use, they learn to evaluate risks and benefits, transforming a simple set of rules into a deeply considered framework for digital citizenship.
- Self-Awareness: The process forces students to reflect on their own habits and motivations. When they define what constitutes fair use, they must examine why they use AI: Is it for genuine learning or to avoid work? This self-reflection helps them understand their strengths and weaknesses as learners.
- Self-Management: Co-authoring a policy provides a framework for students to practice self-regulation. When they have a personal stake in the rules, they are more likely to commit to using AI responsibly and managing impulses to take shortcuts. By collectively establishing and committing to guardrails, they build the discipline needed to enforce the policy.
- Social Awareness: Discussing AI policy requires students to consider the impact on others. They must think about how academic integrity affects peers, instructors, and the broader school community. This conversation builds empathy as they recognize their AI use can impact the fairness of the learning environment for everyone.
- Relationship Skills: The entire policy-writing process is a collaborative exercise. Students practice effective communication, negotiation, and cooperation. This isn’t just about creating a document; it’s about building a culture of trust and mutual respect between students and staff, which is essential for a productive learning environment and for maximizing academic integrity.
Putting It Into Action
To make this effort a success, create a safe space for open, transparent, and reflective discussion. Remind students there are no right or wrong answers and that the goal is to let their perspectives shape their interactions with the technology. Remember to keep this a flexible, evolving conversation.
Ultimately, inviting students to the table transforms a moment of technological anxiety into an opportunity for growth. Creating a safe space to discuss policies that concern students’ growth moves us beyond a punitive approach to academic integrity and toward a partnership built on trust, responsibility, and mutual respect. By fostering these essential social-emotional skills, we empower students not only to navigate the complexities of AI today but to become the ethical and thoughtful leaders of tomorrow’s digital world.
Interested in learning more and sharing your thoughts about AI use in schools? Join us at the 2025 Exchange in Minneapolis, Nov. 4-6. Register now!
Resources:
Cengage Group. (2023, August 28). Apprehension of generative AI in higher education overstated, Cengage survey finds. Generative AI in Higher Education Research Findings | GenAI.
Cengage Group. (2025, April 3). AI in education report: New Cengage Group data shows growing GenAI adoption in K12 & higher education. [Press release].
Cengage Group. (2024, October 31). From hesitation to adoption: The growing role of GenAI in the classroom.
Dunham, S., Lee, E., & Persky, A. M. (2020). The psychology of following instructions and its implications. American Journal of Pharmaceutical Education, 84(9), 8031.
McCabe, D. L., Treviño, L. K., & Butterfield, K. D. (2001). Cheating in academic institutions: A decade of research. Ethics & Behavior, 11(3), 219.
The views in this blog are those of the author and do not necessarily reflect the views of CASEL.
Bukle Unaldi Kamel is a researcher, entrepreneur, and educator focused on the intersection of cognitive science and technology. She is also the co-founder of an award-winning ed-tech startup, Hands-on Labs, which makes STEM education more accessible. In her role at CGFL, Elite Open School, she serves as an Academic Program Manager, applying the science of learning to academic partnerships and managing academic programs from K-12 to higher education.