Chapter 1 - Introduction

As AI begins to transform classrooms, Chapter 1 outlines what’s really at stake.

4/30/2024

“Education is the most powerful weapon which you can use to change the world.” — Nelson Mandela

In 1951, science fiction author Isaac Asimov wrote a short story imagining technology’s influence on the future of education. In the story, The Fun They Had, a young girl, Margie, recalls her grandfather’s tales about a bygone era when all books were printed on paper. In Asimov’s imagined future, education is conducted efficiently at home under the direction of a personal “mechanical teacher,” to which every student submits homework for instant review and grading. When her mother reminds her to attend “class” with her “teacher,” Margie complies reluctantly and without interest. At the end of the story, she recalls how her grandfather described his school experience: “All the kids from the whole neighborhood came, laughing and shouting in the schoolyard, sitting together in the schoolroom, going home together at the end of the day. They learned the same things so they could help one another on the homework and talk about it” (Asimov, 1957).

Although Asimov wrote this dystopian vision of technology’s effect on education more than 70 years ago, it eerily depicts a future that looks closer than ever as AI is integrated seemingly everywhere. The story predates modern conceptions of AI, and even the internet, yet it still raises important questions about the near-term and long-term role of AI in education and the implications that follow.
One thing is clear: computers and AI will never be able to replace the joy of human interaction. Heeding Asimov’s lesson, we need to emphasize social interaction and not let computers run our educational process.

By its very nature, AI is about knowledge: the manipulation of existing knowledge, the dissemination of knowledge, the management of “what is known,” and, most recently, the actual creation of new knowledge. It follows that AI will have broad implications for nearly every aspect of how we acquire, process, and use knowledge in the future, and, at its core, for the role of our educational system as the purveyor of knowledge.
In the short term, the impact of artificial intelligence (AI) on education may seem somewhat superficial: students might use ChatGPT to write their essays, or teachers might use AI to create lesson plans. These changes are already becoming apparent to students, parents, and educators. Over time, however, the broader consequences of AI’s integration into the educational system will become evident to a much wider audience, with far-reaching implications that could not only change the way teachers teach and students learn but likely affect society as a whole. Employers may notice shifts in workforce skill sets, for better or worse, among students taught with AI. Policymakers may be called upon to adopt regulations and guidelines to accommodate these changes. Furthermore, anyone who engages with media and new information will experience new ways of thinking and conveying information brought about by a new “AI” generation of writers, thinkers, and learners, taught by an educational system that looks and functions entirely differently from what it is today.
In the long term, the unchecked advancement of AI could lead to profound societal transformations with either advantageous or damaging consequences. Such a future could redefine human interactions, alter societal roles, and involve ‘human-AI partnerships’ that totally redefine workplace dynamics. Advancements in AI could even lead to ‘transhumanist’ augmentations that enhance human cognition without the need to actually learn. Understanding today the possible scenarios that may result from AI in the future can help us steer technical development in the right direction, ensuring that technological progress supports not only innovation but also human values and societal well-being. Because education will be profoundly affected, it will be important to ensure that the path forward enhances educational systems rather than undermining them, as AI opens new possibilities, positive and negative, that could completely reshape education as we know it.

With the advent of every great technological change, prominent thinkers have studied how scientific and technical advancements have affected society throughout history, providing helpful context. Heralded by The Wall Street Journal as “The leading academic on the politics of technology,” Langdon Winner offers a profound observation in his book The Whale and the Reactor: “If the experience of modern society shows us anything, ...it is that technologies are not merely aids to human activity, but also powerful forces acting to reshape that activity and its meaning” (Winner, 2020, p. 6). Winner’s observation is a sobering reminder when considering the potential effects of integrating AI into education. It recognizes that AI is not simply a tool for enhancing learning, writing essays more quickly, or making lesson plans more efficiently, but a significant influence capable of redefining the entire educational landscape.

Arguably one of the most influential people in the world of AI, Sam Altman has described his utopian vision for how AI will be incorporated into education. Altman explains that each person will have an individualized “Oxford tutor,” available to teach any subject from any angle as many times as necessary, with instruction tailored to the individual’s learning style. He also claims that these readily accessible AI tutors will eventually be better than any tutor students have access to today. If this truly is what the future holds for education, how will schools work? Will there even be human teachers? What do students need to learn, and who decides?

Already, the burgeoning advancements in AI are mostly driven by large corporations and well-funded ventures focused on profit and ROI. According to Stanford’s 2024 AI Index report, 72.5% of foundation models (large-scale machine learning models) originated from industry, not academia (Perrault et al., 2024). Large corporations such as Microsoft, Google, Meta, and OpenAI currently lead on almost all issues related to AI and are often the ones investing in startups as well, with much of their development remaining largely unregulated until recent state-specific laws were instituted regarding personal privacy (Field & Leswing, 2024; Lerude, 2023). While some of these companies might have good intentions, corporations are primarily profit-driven and, left without oversight, will not always make the humanitarian decision. When Google was founded, its slogan, “Don’t be evil,” was the central tenet of its code of conduct; Google quietly dropped the slogan in April 2018 for unclear reasons (Crofts & van Rijswijk, 2020). Similarly, OpenAI originally launched with a non-profit, altruistic mission but has since moved toward a for-profit structure under Altman so that he and the company could benefit financially from its meteoric popularity, another example of a corporation drifting from its original mission in the pursuit of profit.

Further, the world of education has a higher calling beyond advancing business interests, so concerns about the implications of AI’s development for education merit special attention. To a large degree, the purpose of education is to advance knowledge and prepare each subsequent generation for meaningful work and positive contributions to society. Education therefore deserves careful scrutiny and oversight as AI develops, to ensure its mission can continue to be fulfilled. This is why it is important to first imagine the outer boundaries of a world where AI and education intersect and students, not corporations, come out on top.

In The Whale and the Reactor, Winner concludes that we need to be active participants in technical change. He recounts a visit to his hometown near San Luis Obispo and the $5.5 billion Diablo Canyon nuclear power plant located nearby in a beautiful coastal setting. Winner looks out on the unappealing power plant nestled in a scenic canyon with a beach and ocean backdrop. At that moment he witnesses a gray whale breaching the surface and begins to reflect on the travesty of building a power plant in such a location. Winner adds that the plant was also built near a fault line, making it susceptible to earthquake damage.

Winner’s point is that regardless of what the risk and benefit calculations may have shown, this plant should never have been put in that spot, and that its presence is a tribute primarily to the higher value so frequently placed on profit over rational concerns about nature and our common humanity. He urges us to take an active role in shaping how technology is used rather than sitting in the passenger seat while individual interests and profit sit in the driver’s seat. Published in 1986, as the negative effects of technological promises (such as environmental degradation) were becoming evident, Winner’s ideas are formed by identifying patterns in history and speak to the sometimes unanticipated, long-term consequences of integrating new technology without considering the future. Looking ahead, the new promises, or perils, of AI in education will likely begin to affect the system one way or the other, sooner than we may realize, and for generations to come.

In his New York Times and Wall Street Journal best-selling book, A World Without Email, Cal Newport explains how email negatively impacted productivity and forced workers into a “hyperactive hivemind” (Newport, 2021). Newport identifies a modern example of passive acceptance of new technology, exemplifying Winner’s observations in The Whale and the Reactor: tools like email become entwined with our daily routines without thorough consideration of their negative effects on individuals, or, as Newport says in an interview on PBS, “falling into this type of workflow by default” (Sreenivasan & Newport, 2021). While email has transformed the way we communicate, making connections between people virtually instantaneous, such uncritical adoption can foster environments that do not align with our best interests or productivity, highlighting a broader issue in how we engage with technological advancements.

In all aspects of life, people strive to integrate new technologies, some more revolutionary than others. Each innovation contributes to the fabric of our society, and the field of education is no exception. Whether through instruction, learning, or assessment, educators worldwide are considering ways to integrate new technologies into their classrooms, aiming to ease the burden on teachers and enhance students’ learning capacities. AI is the latest groundbreaking technology.

Currently, AI lacks regulation and standard practices, resulting in an almost ‘wild west’ approach to its incorporation into educational systems. And yet, while the risks associated with AI are acknowledged, its potential benefits are too significant to ignore. Heeding Winner’s warning about new technology will likely necessitate the establishment of new standards and regulatory practices to ensure appropriate use. Time, however, is of the essence: if AI continues to evolve at its current exponential rate, it could soon become unmanageable.

As a student in my final year of college, I’ve noticed that the use of AI as a tool for cheating has emerged as a hotly debated topic in daily conversations. Each professor holds a different stance on AI usage, and the university itself struggles to determine appropriate regulations. While AI’s role in academic dishonesty is widely discussed, it represents just the tip of the iceberg in the broader context of AI’s emergence in education. As the technology develops, it will present profoundly significant opportunities as well as critical and fundamental concerns about how we learn, how material is taught, how our work is assessed, and how all of this will affect the trajectory of society.

The central question of this thesis is: what will be the effects of AI on education, and will they be positive, negative, or both? New technologies often start with the public good in mind but become monetized by corporations with few guardrails to manage their evolution, potentially doing more harm than good.

In this thesis, I explore the promises and perils of integrating artificial intelligence into the entire fabric of education. In Chapter 2, I provide a historical overview of AI, examining its development and specific relevance to educational systems, which sets the stage for understanding AI’s potential transformative impact on learning and teaching. In Chapter 3, I examine the near-term implications that AI’s likely integration into education will have on learning, instruction, and assessment. In Chapter 4, I explore the longer-term transformative potential of AI in totally redefining educational boundaries, first through the lens of an experimental “futuristic” school that has fully embraced technology and AI, and then through a more pessimistic view of how unfettered AI could alter the educational landscape and society itself. In Chapter 5, I offer my final thoughts on the critical role of human motivation in AI-enhanced education, advocating for redefined educational practices that maximize AI’s benefits while emphasizing the essential role of teachers in fostering an engaging and supportive learning environment.

References

Asimov, I. (1957). The fun they had. In Earth is room enough: Science fiction tales of our own planet (pp. 249–254). Doubleday.

Crofts, P., & van Rijswijk, H. (2020). Negotiating ‘evil’: Google, Project Maven, and the corporate form. Globalizations, 17(5), 892–909. https://doi.org/10.1080/14747731.2020.1716920

Field, H., & Leswing, K. (2024, April 1). Generative AI ‘FOMO’ is driving tech heavyweights to invest billions of dollars in startups. CNBC. https://www.cnbc.com/2024/03/30/fomo-drives-tech-heavyweights-to-invest-billions-in-generative-ai-.html

Lerude, B. (2023, November 1). States take the lead on regulating artificial intelligence. Brennan Center for Justice. https://www.brennancenter.org/our-work/research-reports/states-take-lead-regulating-artificial-intelligence

Newport, C. (2021). A world without email: Reimagining work in an age of communication overload. Penguin Business.

Perrault, R., Clark, J., & AI Index Steering Committee. (2024). AI index report 2024: Artificial Intelligence Index. Stanford Institute for Human-Centered Artificial Intelligence. https://aiindex.stanford.edu/report/

Sreenivasan, H., & Newport, C. (2021, August 10). Cal Newport imagines a world without email [Video]. PBS NewsHour. https://www.pbs.org/video/cal-newport-imagines-a-world-without-email-njx3od/

Winner, L. (2020). The whale and the reactor: A search for limits in an age of high technology (2nd ed.). University of Chicago Press.