By Tammy Brecht Dunbar, M.Ed., STEM
When I assigned my fifth graders a five-paragraph character comparison essay on Stephen and Lucy Hawking’s “George’s Secret Key to the Universe,” one of them teased that he would use AI to write it. “Great idea!” I said. “Let’s try it out right now!” I projected my computer onto the screen, opened an AI platform and typed in the essay prompt. When the AI generated the essay, we read it out loud.
My students were surprised.
“That isn’t what happened!” “George wasn’t like that.” “They got the wrong bad guy!”
Modeling the pitfalls of AI – as well as its benefits – gave my students a new appreciation for how AI should be used. They understood they needed a “wary eye” when looking at anything generated by artificial intelligence.
This is why I’m not afraid of AI, despite watching 2001: A Space Odyssey and hearing the on-board computer HAL’s* chilling message to its human astronaut, “I’m sorry, Dave. I’m afraid I can’t do that.”
That helped spark our overall fear of artificial intelligence. Then IBM’s Deep Blue beat Garry Kasparov in a six-game chess match in 1997, and in 2011, IBM’s Watson supercomputer defeated Jeopardy’s all-time champion Ken Jennings. We all started wondering what computers might do next.
Of course, we don’t worry when AI helps make our lives easier: remembering passwords, translating notices to send home, generating quick worksheets and calculating grades. How great is it to ask your personal listening device to play your favorite show or remind you of the ingredients for your favorite holiday cookies? And how lost would we get if we didn’t have our GPS companion with us to find our next destination?
But what, teachers worry, will we do if our students are able to access and use AI?
Copilot, ChatGPT and other AI platforms can write thesis papers, essays, articles, songs and much more. How can we make sure our students don’t cheat? How will we be able to tell if they do? How can we stop this madness?
As when we first got access to the internet in our classrooms back in the early 2000s, we need to remember not to be afraid of this new world.
We must start by teaching Digital Literacy for AI to our students. They must learn how to question and evaluate this new technology as well as how to harness its power to elevate and empower their studies and themselves.
As the excellent new Guidance for K-12 Public Schools on Human-Centered AI from the Washington Office of the Superintendent of Public Instruction says, “Uses of AI should always start with human inquiry and end with human reflection, human insight and human empowerment.”
Students must be aware of the ethical implications of AI so they will use it responsibly. They need to know about the bias that can be built into artificial intelligence platforms. They need to be taught how to look at AI critically, evaluate what it creates and make an informed decision on whether or not to use it. Our students’ future careers will be infused with AI, so we need to teach them how and when to use it properly.
We need to stop fearing AI because, as Frank Herbert wrote in Dune, “Fear is the mind-killer.” Fear can keep us and our students from reaching our full potential.
My students were able to spot the flaws in that AI essay without even trying because they had read the book and were prepared. The same is true with any technology: preparation is the key.
We teachers must model our own journey in learning AI. When we show our students that we are not afraid to use new technologies and continue the journey as lifelong learners, then they will know that they shouldn’t be afraid, either.
*Take the letters of 2001: A Space Odyssey’s computer HAL and move each letter one ahead in the alphabet. H = I, A = B and L = M. Hmmm. I think Arthur C. Clarke and Stanley Kubrick were a little afraid of computers even in 1968.