Relationships First: A Skeptic’s Look at AI in Schools
Encouraging school board members at FSBA’s “Future Focused” conference to prioritize relationships and ask tough questions about AI
Everyone knows a legendary educator. I am blessed to have known more than a few. Frankie St. James is one of them. She was a visionary who served 14 years as principal of Key Largo School, during which time it was recognized as a National School of Excellence. More importantly, it became what every school should be: a place where students wanted to come each day, teachers wanted to teach, and parents wanted to volunteer. If you ask her the secret to running a great school, she’ll tell you: “Focus on the three Rs — Relationships, Relationships, Relationships.”

Next week I am attending the Annual Joint Conference of the Florida School Boards Association (FSBA) and the Florida Association of District School Superintendents (FADSS). This year’s theme is “FUTURE FOCUSED: Navigating the New Frontier.” There will be panel discussions with Google and Code.org, and multiple breakout sessions on the challenges of implementing AI.
Will any sessions focus on not implementing AI? I doubt it.
The excitement surrounding AI in education confuses me. We are just emerging from a nationwide experiment in virtual learning — the pandemic — which made it abundantly clear that most students thrive with teachers in classrooms, not on screens. We are also in the midst of a student mental health crisis in which digital social media is a major culprit. School districts are being urged to ban cell phones. Parents are begging for less screen time, concerned about its effects on learning, social development, health, and well-being.
So why does anyone believe that more screens — now with AI — will improve K–12 education?
AI will not revolutionize education
Last summer, a Have You Heard podcast episode called “Don’t Buy the AI Hype” introduced me to the work of Ben Riley, a cognitive scientist and founder of Cognitive Resonance. His organization helps people understand how generative AI works, what it can and cannot do, and how to make informed decisions about its use. Its tagline is fitting: “Building Human Knowledge to Halt AI Hype.”
Riley’s presentation at the 2025 ASU+GSV Summit — the largest gathering of ed-tech investment and AI hype in the world, the same event where Secretary of Education Linda McMahon repeatedly referred to AI as “A-1” — was titled “AI Will Not Revolutionize Education.” It is well worth watching. Riley explains what AI actually is and why claims of “transformation” far outpace the evidence. The current research base showing AI improves learning outcomes at scale is weak. He warns that hallucinations make AI a risky tutoring tool and that rapid adoption could widen existing inequities.
Riley emphasizes that the pressure to “keep up” can push districts toward hasty adoption without sound pedagogy, infrastructure, or evidence. Instead of chasing a transformational narrative, he urges schools to prioritize teacher expertise, human relationships, and solid instruction — treating AI as a tool, not a replacement.
[The fear of being left behind appears to be driving FSBA’s “Future Focused” agenda.]
The same Have You Heard podcast reintroduced me to Audrey Watters, a writer who examines the intersection of education, technology, and politics. She notes that for generations, education technology has promised big and delivered little — and AI will be no different. She also highlights the connections between tech billionaires pushing AI and those pushing privatization and dismantling of public education.
Watters’s blog, “Second Breakfast” — particularly the piece “LLM as MLM” — led me to read Emily Bender and Alex Hanna’s book The AI Con. The book teaches readers to recognize AI hype and avoid falling for the con. They write:
“Artificial intelligence, if we’re being frank, is a con, a bill of goods you are being sold to line someone’s pockets. A few major well-placed players are poised to accumulate significant wealth by extracting value from other people’s creative work, personal data, or labor, and replacing quality services with artificial facsimiles.”
In education, the “productivity” pitch translates into fewer teachers, more automation, and algorithm-driven learning — not better outcomes. Public schools are the “quality service” that will be replaced with chatbots and other “artificial facsimiles.”
Attempting to hear “the other side”
Concerned I was only hearing one side, I enrolled in an eight-week certification course at the University of North Florida called “AI for Work and Life.” It promised foundational understanding and hands-on experience with AI. We learned how easy it is to generate low-quality (crappy) videos. One session discussed ethical concerns — privacy, equity, copyright — but deliberately avoided the environmental impacts of data centers, instead imagining that AI might solve those very problems. (They were also upbeat about AI’s ability to solve poverty.) For the record, I am now certified — but not convinced.
I also requested a demonstration of the chatbots being piloted in my district. I was shown how Khan Academy’s Khanmigo can help students prepare for the SAT. Before using it, students must acknowledge that the chatbot “doesn’t always tell you the truth.” I wondered aloud: Why would we hire a tutor that admits it lies? No one wants that.
In September, I attended a mini-conference for school board members and district staff, sponsored largely by ed-tech vendors. Speakers encouraged us to advocate for state and federal funding for the data centers needed to support the “AI revolution.” At one point, we were asked to brainstorm what budget cuts could fund these tools. Fewer books? Larger class sizes?
Instead of asking how to fund AI, here is what we should be asking:
- What are the benefits of not adopting AI in the classroom?
- If money were no object, would we invest in books and teachers — or in AI-driven data collection and ed-tech products?
Meanwhile…
Meanwhile, Governor DeSantis has announced plans to regulate artificial intelligence, citing concern about its rapid expansion and potential impacts, particularly in education. The Florida Citizens Alliance — architects of Florida’s book ban laws — is preparing to push for guardrails including parental opt-ins, strict data controls, and policies requiring districts to contract only with AI providers who embrace a “Western civilization (biblical) worldview.” [Ah… Floriduh…]
At the same time, tech companies are pouring billions into AI chips and data centers while financial analysts warn of an AI bubble. The recent stock market rally has been fueled almost entirely by the biggest tech firms. If the market experiences a correction similar to the dot-com collapse, the consequences could be global.
So what is a Florida school board member to do?
First: stop listening to tech giants/broligarchs insisting that AI will revolutionize education. Remember that education is fundamentally a human endeavor. Focus on connecting students with great teachers. If AI truly is the future, districts can adopt it later — after solid evidence, proven pedagogy, and meaningful guardrails exist.
Policymaking around responsible use of AI will certainly be necessary. But that is different from embracing AI as the next great educational transformation.
Above all: be a skeptic. Ask good questions. Recognize the hype.
As we navigate AI’s expansion, our charge is simple: protect the relationships that make learning possible. Every policy, every pilot, every tool we introduce should answer one question — does this help students and teachers connect more deeply, or does it get in the way? By keeping human connection at the center, we ensure that technology serves education, not the other way around.
ADDENDUM:
Here are some questions I hope will be addressed during the upcoming FSBA AJC:
1. Evidence & Outcomes
- What independent, peer-reviewed evidence demonstrates that AI improves learning outcomes for K–12 students at scale?
- Has any AI tool been shown to outperform high-quality teaching by humans? If not, why invest district funds?
2. Pedagogy & Human Relationships
- How does this AI tool strengthen — not weaken — teacher–student relationships?
- What tasks does AI replace, and which human interactions does it reduce?
- If AI is tutoring or giving feedback, who is responsible for the accuracy, quality, and appropriateness of that instruction?
3. Accuracy, Hallucinations & Safety
- What liability exists if an AI tool gives harmful, biased, or misleading advice? Who is accountable — the district, the vendor, or the teacher?
- Why is using a product that admits it fabricates answers better than hiring an honest human?
4. Privacy, Data & Surveillance
- What student data is being collected, stored, or inferred by the AI system?
- Does any of this data leave the district? Is it used to train commercial models?
- How long is the data retained, and can families opt out of data collection?
5. Costs, Contracts & Sustainability
- What is the true total cost of ownership — including training, infrastructure, data storage, and recurring licensing?
- What budget cuts will be required to fund AI adoption? Books? Teachers? Support staff?
- If the AI bubble bursts or the vendor collapses, what happens to districts locked into multi-year contracts?
- Does adopting this tool require expanding our district’s data-center footprint, electricity usage, or cooling capacity? At what cost?
6. Equity & Unintended Consequences
- How does AI improve equity rather than widen existing gaps between students with and without support at home?
- What populations are most likely to be harmed by algorithmic bias or error?
- Could AI be used to justify larger class sizes or fewer teachers? If so, how will this impact vulnerable students?
7. Screen Time & Development
- How does increasing AI usage align with parent concerns about excessive screen time?
- What evidence suggests more digital tools will improve — rather than undermine — social development and mental health?
8. Environmental Impact
- What is the estimated carbon footprint or water demand associated with the AI tools being promoted for classroom use?
- Is this an ethical use of public school resources during a climate and infrastructure crisis?
9. Hype vs. Reality
- What can this tool actually do today — not what it might do “in the future”?
- How do we avoid making expensive decisions based on fear of being “left behind”?
10. Opportunity Cost — the most important questions of all
- If money were no object, would we rather invest in outstanding teachers and high-quality learning materials, or in AI-driven products?
- What will we lose — pedagogically and socially — by replacing human learning experiences with algorithmic ones?
