As we navigate the rapidly evolving landscape of artificial intelligence in education, it’s crucial that we ground our approach in sound pedagogical principles and a clear vision for student success. The U.S. Department of Education’s 2023 report on AI in education provides an excellent foundation, but the breakneck pace of AI advancement in 2024 demands that we revisit and refine our understanding.

At the core of any discussion about AI in education must be our commitment to keeping humans – especially educators – at the center of the learning process. As Glickman et al. (2007) remind us, “Knowledge of how teachers can grow as competent adults is the guiding principle for supervisors in finding ways to return wisdom, power, and control to both the individuals and the collective staff in order for them to become true professionals” (p. 51). This principle aligns closely with the Department of Education’s vision of AI as an “electric bike” rather than a “robot vacuum” – enhancing and multiplying human effort rather than replacing it.

However, the sophisticated AI assistants that have emerged in 2024 are blurring the lines between AI and human roles in ways that weren’t fully anticipated. This necessitates a careful reevaluation of how we implement “human-in-the-loop” approaches in educational settings. We must be vigilant in ensuring that AI serves our educational objectives, rather than the other way around.

The potential for AI to enhance personalized learning has grown considerably. While this addresses some of the concerns raised in the 2023 report about deficit-based models and narrow adaptivity, it also intensifies the need for what Glickman et al. (2013) call “developmental supervision” – an approach that values teachers as adult learners and encourages professional growth.

As we embrace these new possibilities, we must also confront the ethical considerations that have become even more pressing. Issues of bias, fairness, and transparency in AI systems demand our attention, as do concerns about data privacy and exploitation. The call for education-specific AI guidelines has gained new urgency, and we must work diligently to develop policies that protect our students while fostering innovation.

In line with the Department of Education’s recommendations, our research focus should remain on developing context-aware AI that can adapt to diverse learning environments. This aligns well with modern learning principles that emphasize social learning and building on students’ existing knowledge and strengths.

Finally, we must redouble our efforts to build trust and transparency around AI in education. As the 2023 report wisely notes, “If every step forward does not include strong elements of trust building, we worry that distrust will distract from innovation serving the public good that AI could help realize.” This trust-building must include robust professional development for educators, ensuring they are well-equipped to navigate this new landscape.

As we move forward, let us remember that our goal is not to see what AI can do, but rather to leverage AI in service of our educational vision. By keeping our focus on student needs, empowering our educators, and thoughtfully integrating AI into our educational systems, we can create learning environments that truly prepare our students for the challenges and opportunities of the 21st century.

References:

Glickman, C. D., Gordon, S. P., & Ross-Gordon, J. M. (2007). The basic guide to supervision and instructional leadership. Pearson.

Glickman, C. D., Gordon, S. P., & Ross-Gordon, J. M. (2013). The basic guide to supervision and instructional leadership. Pearson.

U.S. Department of Education. (2023). Report on Artificial Intelligence in Education.
