Chapter 5, Part 3 - Policy Recommendations for a Sustainable AI-Integrated Education Future

As AI reshapes education, this section lays out the urgent policies, leadership, and global cooperation needed to protect human purpose, guide innovation, and ensure students stay motivated to learn.


April 13, 2024

Chapter 5, Part 3 — Conclusion

Policy Recommendations for a Sustainable AI-Integrated Education Future

Ideally, we would build a system in which AI not only enhances learning but also promotes critical thinking, creativity, and inquisitiveness. Moving forward, prioritizing these educational values above profit will be crucial to ensuring a future in which AI’s effects on education benefit society as a whole. Achieving this ideal will require collective action toward reimagining a better educational model that leverages technology to boost human potential. That journey begins with proactive engagement in guiding AI’s role in education so that it positively transforms how we learn, teach, assess, and ultimately live our lives.

Unfortunately, most of us have little say in the direction of AI. Short of writing to local members of Congress and calling for specific guardrails that stanch the growth or redirect the trajectory of AI, most of us can control only how we interact with and use AI. The future of AI in education will rely on the government, or some other organizing body, taking an active role in shaping boundaries around AI as it develops, ensuring a future that motivates people to enhance human intelligence and to cultivate human potential and purpose, not take them away (Khan, 2023).

Where the Current Administration Stands

The U.S. Department of Education has recognized the potential risks of unrestricted AI and is already collaborating with a diverse group of stakeholders—including teachers, faculty, support staff, other educators, researchers, policymakers, advocates, funders, technology developers, community members and organizations, and most importantly, learners and their families/caregivers—to develop policies that will shape the future of AI in education (U.S. Department of Education, 2023).

According to Artificial Intelligence and the Future of Teaching and Learning, the Department of Education opposes the idea of “technological determinism” and considers AI a tool that enhances human capabilities, likening it to an electric bike rather than a robot vacuum (U.S. Department of Education, 2023). With an electric bike, the rider remains fully engaged and in control, experiencing less strain while gaining more efficiency through technological assistance. In contrast, robot vacuums operate independently, removing the human from the process entirely.

The Biden Administration has committed to robust frameworks that ensure AI enhances educational outcomes ethically without exacerbating social inequalities (Biden, 2023). President Biden also stresses the importance of protecting American interests against the adverse impacts of AI (Biden, 2023). To bolster this vision, the administration aims to attract global AI talent, creating an environment that fosters innovation aligned with U.S. values and contributing positively to society (Biden, 2023). One planned effort is an “AI toolkit” for educational leaders (Biden, 2023), which will facilitate the implementation of recommendations from the Department of Education’s report on AI and the Future of Teaching and Learning. The toolkit includes measures for proper human oversight of AI decisions, the design of AI systems that bolster trust and safety, alignment with privacy laws and regulations applicable in educational settings, and the establishment of specific guardrails for education (Biden, 2023).

Five Policy Suggestions for AI Guardrails

The Future of AI and Education research paper (referenced in Chapter 4) ends with 13 policy recommendations that the world should consider before global inaction allows AI to evolve past the point of no return (Hamilton et al., 2023). The researchers recognize that these policy recommendations might be controversial and serve more as a “stimulus for further debate” on the topic (Hamilton et al., 2023). In my opinion, the following five are the most important recommendations and the most likely to build a future in which education can continue to flourish.

Policy Recommendation 1: Assume that Artificial General Intelligence (AGI) is already capable of performing complex human tasks better and more cost-effectively, and that even more mature technology could emerge within two years (Hamilton et al., 2023). In other words, we should assume that advanced AI will arrive sooner rather than later, adhering to the more aggressive predictions. I agree with this perspective because AI has the potential for exponential growth, especially if it develops the ability to “self-improve.” This could lead to rapid advancements that might quickly become unmanageable. In my view, adopting a precautionary approach by preparing for an earlier arrival of AGI is wise. It would mitigate the substantial risks of dystopian outcomes discussed in Chapter 4, despite potential downsides such as slowing AI innovation prematurely and the costs associated with rapid regulatory implementation.

Policy Recommendation 2: All companies developing AI technologies should undergo an organizational licensing process before they are allowed to develop and release new systems publicly, similar to the existing regulatory frameworks applied to the pharmaceutical, gun, car, and food industries (Hamilton et al., 2023). This process would permit AI companies to build and experiment with systems within their laboratories and conduct small-scale testing under the supervision of a regulatory body (Hamilton et al., 2023). I believe that a licensing process for EdTech and other companies that develop AI is, on balance, advantageous because it would likely push these companies to research the potential negative effects of their new technologies in educational and other contexts, and thereby encourage more consideration of short- and long-term effects on education as a whole.

Policy Recommendation 3: Implement penalties for breaches of AI regulations, with the aim of fostering a culture of responsibility and accountability within the AI industry and among end-users (Hamilton et al., 2023). I believe that penalties for violations in education and other industries would force developers to adhere to agreed-upon standards that prevent harm, and would act as a deterrent. While perhaps slowing the pace of innovation, this would likely lead to more cautious behavior within the AI community, essentially following the guidance of Langdon Winner and others who advocate taking an active role in setting limits on unbridled technological innovation and including considerations of societal impacts, such as those on education. While a deontological argument would suggest that companies will “do the right thing” on ethical grounds, the high likelihood is that they will not, and therefore strict penalties are required. Google and OpenAI, which abandoned their original altruistic missions as referenced in Chapter 1, are examples.

Policy Recommendation 4: AI systems should provide clearly described explanations for their decisions, particularly in high-stakes applications such as student placement or exams (Hamilton et al., 2023). I believe this would yield overall positive effects by not only enhancing trust but also enabling more effective scrutiny and accountability. It would further mitigate some concerns about AI, such as bias, equity, and algorithmic discrimination, as referenced in Chapter 3. The ability to scrutinize decisions can even lead to improvements in AI systems, as flaws can be identified and corrected, leading to more accurate and fair outcomes. Further, it could help address the opaque nature of current generative AI, where there is little to no traceability of original source content.

Policy Recommendation 5: Create a global regulatory structure for AI that encompasses an international coordinating body capable of preemptively addressing the risks and challenges posed by rapidly advancing AI (Hamilton et al., 2023). Following the model of international agreements such as the 2005 declaration against human cloning, I believe an international coordinating body could curb the potential harms of AI while also facilitating cooperation across borders, thereby ensuring that AI developments benefit humanity globally without exacerbating inequalities or causing harm in education and other industries.

The Need for More AI-Savvy People in Government

While potential policies are worth discussing, it is ultimately up to our government to take an active role in shaping specific guidelines for AI. However, current data shows that less than 1% of all new AI PhDs graduating in North America go into government work. To make the right changes and put up the requisite guardrails for AI's future, our government needs to entice more of the best and brightest into public service in this emerging area. We need people with the right knowledge making sure AI is rolled out in the right direction, and that starts with getting more graduates who specialize in AI into government work. Our government should recognize this as a specific need, offer the appropriate benefits and incentives, and make it a priority (Hamilton et al., 2023).

Conclusion

After spending my senior year researching AI’s potential impact on education, I believe that as we integrate AI into our educational systems, we must be aware of the short- and long-term impact AI will have on learning, teaching, and assessment. New AI technology demands a reimagining of the boundaries that currently define education. We should avoid merely replacing old technologies with slightly more advanced ones that operate essentially within the same constraints. AI offers many powerful applications, and its most beneficial uses for humanity may well lie beyond the traditional boundaries defined by our current educational models.

Following the research for this paper, one of my main takeaways is that agreed-upon international institutions with humanitarian goals will need to be established to set development parameters to guard against the existential threat AI poses. According to the World Bank’s 2023 report on digital trends, “The latest breakthroughs in AI technologies have sparked widespread excitement as well as unease. It is critical for the global community, including low and middle-income countries, to work together to carve out a new development path to prepare for the AI disruption” (World Bank, 2023).

For reasons described in this thesis, a coordinating oversight organization will need to act proactively and knowledgeably to leverage AI’s extraordinary educational opportunity among other pursuits, while at the same time mitigating potential risks. Such policies should cultivate a business-friendly environment that drives innovation. But these policies should also include well-defined guardrails that allow AI to enhance human knowledge responsibly, and especially motivate future generations of students to want to learn.

In the end, I believe that the best weapon humanity has against the many existential threats we face is our own human intelligence, which is built on the foundation of education. Therefore, to help shape our future in the best way possible, focusing on education that is aided by AI, but not run by AI, should remain our priority.

Just as Margie in Asimov’s The Fun They Had wishes for the communal and interactive learning of the past, our approach to the future of AI in education will likely need to blend AI’s extraordinary capabilities with the intrinsic human desire for community and collaboration that motivates all of us to learn.

References

Biden, J. (2023). Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence. The White House. https://www.whitehouse.gov/briefing-room/presidential-actions/2023/10/30/executive-order-on-the-safe-secure-and-trustworthy-development-and-use-of-artificial-intelligence/

Hamilton, J., Hazell, M., Kuppens, T., Nair, A., & Wylie, C. (2023). The future of AI in education: 13 things we can do to minimize the damage. https://drive.google.com/file/d/1IKLkFazTzARc0W7ZgAVD2SuSuIrxz6D8/view

Khan, S. (2023, March). How AI could save (not destroy) education [Video]. TED. https://www.ted.com/talks/sal_khan_how_ai_could_save_not_destroy_education

U.S. Department of Education. (2023). Artificial intelligence and the future of teaching and learning: Insights and recommendations. Office of Educational Technology. https://tech.ed.gov/ai/

World Bank. (2023). Digital progress and trends report 2023. https://www.worldbank.org/en/news/feature/2023/10/23/digital-progress-and-trends-report-2023