
Empowering Children through Algorithmic Justice Education

Published on May 28, 2019
Companies, policy-makers, parents, and educators all have a role in building kid-friendly platforms that protect children when they interact with AI. However, we must also empower children to be conscientious consumers and designers of AI systems. The impact of AI on today’s children will be enormous: the World Economic Forum predicts that AI will create 58 million jobs by 2022 [1]. From an economic justice standpoint, it is essential that we prepare all children, not just those headed to university, with the tools they need to benefit from this technology and navigate an AI-powered society.

Educational inequity remains a key barrier to future opportunities and jobs, where success depends increasingly on intellect, creativity, and the right skills. While AI is already entering the education system to support students, teachers, and school administrators, it is not typically offered as a subject of study until the university level. Just as learning to code has become recognized as a new literacy for the 21st century, students also need to learn about AI, given its growing prevalence across industries, institutions, and society on a global scale.

AI education is also an issue of social justice: it is not enough to teach AI or machine learning as a purely technical topic. Researchers such as Buolamwini [2] and O'Neil [3] have shown that algorithms, though often advertised as “neutral” or “objective” systems, can be biased against women, people of color, and low-income individuals. For this reason, ethics must be taught in conjunction with the technical material [4]. Doing so helps prevent the weaponization of these systems against marginalized groups: it gives students a critical perspective on technology, helps them recognize whom they are building for, and prepares them to build fairer and more compassionate systems in the future [5].

In this paper, we argue that children need to be both ethical consumers and designers of AI systems. First, we present a curriculum designed to: (1) teach middle school students how AI systems work, (2) give them the opportunity to exercise critical thinking and empathy by learning to critique existing AI systems, and (3) equip students with design protocols so that they may build better, kinder, and fairer AI systems in the future. The curriculum takes a constructionist approach [6][7] to teaching AI and ethics, with a particular emphasis on low-cost or “unplugged” activities.

Second, we discuss the results of a pilot in Pittsburgh in which over 200 middle schoolers in grades 5-8 participated in the curriculum during their regular school day, as well as the results of a week-long summer workshop with approximately 30 middle-school-aged students. We show which topics students were able to master and describe the assessments used to measure student learning.

Third, we present materials for teacher training and support and discuss best practices in assisting and preparing educators to teach children about AI and ethics. Finally, we make recommendations about how to incorporate this curriculum into the traditional classroom setting.

About the Authors

Cynthia Breazeal | Dr. Cynthia Breazeal is an Associate Professor at the MIT Media Lab, Director of the Personal Robots Group, and Associate Director of Strategic Initiatives for the Bridge, MIT Quest for Intelligence. Her research focuses on developing the principles, techniques, and technologies for personal robots that are socially intelligent, interact and communicate with people in human-centric terms, work with humans as peers, and learn from people as an apprentice. She has developed some of the world’s most famous robotic creatures, ranging from small hexapod robots, to robotic technologies embedded in familiar everyday artifacts, to highly expressive humanoid robots and robot characters. Her recent work investigates the impact of social robots on helping people of all ages achieve personal goals that contribute to quality of life in domains such as physical performance, learning and education, health, and family communication and play over distance.

She received her B.S. (1989) in Electrical and Computer Engineering from the University of California, Santa Barbara. She did her graduate work at the MIT Artificial Intelligence Lab, and received her M.S. (1993) and Sc.D. (2000) in Electrical Engineering and Computer Science from the Massachusetts Institute of Technology.

Blakeley H. Payne | Blakeley H. Payne is a graduate research assistant at the MIT Media Lab, where she studies the ethics of artificial intelligence. Specifically, she develops educational materials that teach children to be both conscientious consumers and designers of AI systems. Before joining MIT, she completed her undergraduate studies at the University of South Carolina, where she earned a B.S. in Computer Science and Mathematics. While at the University of South Carolina, Blakeley founded several after-school computer science programs for 3rd-5th grade students from low-income families. She has also completed several AI internships at institutions such as Adobe and U.C. Berkeley.

Randi Williams | Randi Williams is a graduate research assistant in the Personal Robots group at the MIT Media Lab. She received her S.M. (2019) in Media Arts and Sciences from MIT and her B.S. (2016) in Computer Engineering from the University of Maryland, Baltimore County. Randi has developed a number of AI education projects, including PopBots, social robot learning companions that teach AI through social interaction, and GenAI, a K-5 AI curriculum. Additionally, she investigates children’s relationships with intelligent agents. Her research intersects human-robot interaction and early childhood education, with a particular focus on engaging students from diverse backgrounds.
