Chapter 4, Part 3 — The longer-term Future of AI and Education: Four Scenarios
As AI accelerates past human capabilities, what happens to learning, purpose, and progress? This section introduces four possible future scenarios.


According to some researchers, the rise of AI means humanity may be experiencing ‘peak humanity’ in education today. In a recent working paper that reimagines a world dominated by artificial intelligence, researchers Arran Hamilton, Dylan Wiliam, and John Hattie postulate that our current era may be “a time where we have the greatest levels of education, reasoning, rationality, and creativity – spread out amongst the greatest number of us” (Hamilton, Wiliam, & Hattie, 2023). The authors see these achievements as direct outcomes of the widespread adoption of universal education and the influential role of higher education institutions, a trend that could begin to decline if we allow AI to do our thinking for us (Hamilton et al., 2023).
Their concerns center on AI capabilities that already exceed those of the average human in many tasks, and on the possibility of AI advancing far beyond today’s systems toward AGI and ASI (Hamilton et al., 2023). The main question the researchers pose is what happens to the incentive to learn and grow if AI exceeds all of our cognitive capabilities (Hamilton et al., 2023). What if employers turn to less expensive, more intelligent, and more capable AI systems that can replace almost all workers (Hamilton et al., 2023)? The researchers assert that the “grave risk is that we then become de-educated and decoupled from the driving seat to the future” (Hamilton et al., 2023).
Overall, these researchers emphasize that the future is not predetermined and highlight the importance of recognizing potential negative outcomes before we inadvertently find ourselves trapped in dystopian scenarios with no escape. They reference philosopher and author Will MacAskill's concept of our current era as a "hinge point," underscoring its critical significance in shaping the future.
The authors also consider the timeframe in which AGI might arise: some researchers believe it is only a couple of years away, while most expect it before 2040. Either way, the authors argue it is imperative to err on the side of caution and act as if this dystopian future could arrive sooner rather than later. Once again, Langdon Winner’s warning to avoid technological somnambulism and to take an active role in shaping the future of technology offers a useful guide for thought (Winner, 1986).
The authors express strong concerns that AI will surpass human cognitive ability sooner than we may realize. While their paper paints a picture of a potentially unwelcome future, their concerns are echoed by others in the field. A study conducted by the Pew Research Center, in collaboration with Elon University’s Imagining the Internet Center, surveyed a diverse group of experts across various sectors—including government, nonprofits, technology firms, and academia—and found that 37% of these experts expressed more concern than enthusiasm about the changes AI might bring by 2035 (Pew Research Center, 2023). The authors also invoke Nobel Prize-winning economist Gary Becker’s theory of human capital, which frames investment in education not just as a public good but as a vital economic strategy: it develops a workforce whose cognitive abilities let it thrive alongside technological advances in an era when machines have automated most types of “blue collar” work (Hamilton et al., 2023).
With artificial intelligence now threatening to take over the jobs of “white collar” workers as well, the authors outline four possible future scenarios for a society fully driven by AI, each with different implications for the future of education:
AI Advancement Banned: A future where artificial intelligence progress is forcibly halted, leaving society with AI’s existing capabilities (Hamilton et al., 2023).
Human-AI Partnerships: Regulations that force collaboration between humans and AI even when the technology can handle tasks independently. This aims to protect traditional jobs but may lead to artificial workloads (Hamilton et al., 2023).
Transhumanism: To compete with one another, and with AI itself, in a landscape dominated by the technology, individuals opt for “brain chips” or genetic enhancements to boost their abilities (Hamilton et al., 2023).
A World of Leisure Supported by Universal Basic Income (UBI): A society run almost entirely by AI, in which employment is no longer required. A new social structure, funded largely by government, provides a universal basic income so that people can live comfortably without employment or other productive work (Hamilton et al., 2023). While the authors include this scenario in their analysis, I omit it from my thesis because it reads more like an economic model embedded within a future scenario than a future scenario in its own right.
Taken together, the scenarios act as a wake-up call: to consider new technology carefully, to plan and act deliberately, and to regulate when the public interest is at stake, so that we bring about a future of our choosing.
References
Hamilton, A., Wiliam, D., & Hattie, J. (2023). The future of AI in education: 13 things we can do to minimize the damage [Working paper]. Cognition Education. https://cognitioneducation.com/news/ai-in-education/
Pew Research Center. (2023). As AI spreads, experts predict the best and worst changes in digital life by 2035. https://www.pewresearch.org/internet/2023/06/21/as-ai-spreads-experts-predict-the-best-and-worst-changes-in-digital-life-by-2035/
Winner, L. (1986). The whale and the reactor: A search for limits in an age of high technology. University of Chicago Press.