Meet them where they are: A social work informed conceptual framework for youth inclusion in AI violence prevention systems
Artificial intelligence (AI) systems are being created to monitor youth violence. Some youth use social media as a tool for help-seeking, grief processing, and connecting with others; at the same time, social media can be a site of social exclusion, cyberbullying, violence, and other harms. However, because AI technologies often lack an understanding of youth language, jargon, and social context, there are numerous incidents in which they wrongly interpret youths' posts as violent. This creates a circular effect: disproportionate responses from law enforcement and deepened social exclusion, particularly among marginalized youth globally.
In the US, the content of youth’s online conversations has driven school districts to leverage AI to monitor social media in an effort to prevent community and school-based violence. Research from the Brennan Center indicates that over the last five years several companies have been selling AI-powered software that detects signs of violence or other concerning behavior among youth on social media. One example is Chicago Public Schools: the large urban district hired intelligence analysts and purchased social media monitoring services to analyze students’ online conversations. Keyword searches were used to find threats at the program’s target schools, which were predominantly Black and Latino. Students were not made aware of this initiative, and it remains unclear which words or phrases connote “threat.”
In Israel, youth come from a broad and diverse range of social, cultural, and religious backgrounds, including Arabs, Jews, and immigrants from Ethiopia and the former Soviet Union. Because of language barriers and diverse ethnic and racial backgrounds, tensions and hostility have often arisen between these social groups, resulting in the use of social media as a platform for social exclusion and violent expression. Currently, no AI is designed to understand the context and nature of the language used, which limits the chance of effectively preventing violence. Moreover, Israel has no clear policy on how educators and social workers should engage with youth online, which minimizes online interaction.
India has around 645 distinct tribes, and some 19,500 languages or dialects are spoken. Only 22 languages are officially scheduled, and Google recognizes and supports only nine. Currently, caste- and religion-based polarizing expressions on social media in regional languages (Dalits vs. non-Dalits, Hindus vs. non-Hindus) lead to outbreaks of communal violence among youth. Yet no AI technology understands these languages and their slang at any level. Local youth know the purpose, context, and hidden meaning of these expressions, but neither these languages nor these youth are included in AI research. Moreover, India has no policy to ensure safe, inclusive, and participatory virtual spaces for young people. Existing systems do not include the voices, experience, and expertise that young people have regarding their lives online. There is a pressing need to engage local Indigenous youth knowledge to decode text and content and to understand caste-religious expressions on social media. Social exclusion and bias are not only prevalent in the physical environment; they are also highly present on the Internet.
It is critically important that we develop AI systems optimized to identify the promotive ways in which youth use social media and to accurately understand language, particularly language from youth in marginalized communities and in countries with myriad languages and hyper-local contexts. Social workers and local youth are well poised to support AI development that addresses these emerging issues. The core values and principles of social work, such as respect for the individual, dignity, and mutual participation, need to be incorporated to build a more cohesive and inclusive virtual society for all.
Informed by collaborative research and practice with youth in the United States, Israel, and India, we propose a global conceptual framework, leveraging social work methodologies, for the development of AI systems for youth violence prevention that include the expertise and lived experiences of youth. We explore social work intervention models to promote social cohesiveness using AI in social work practice. This paper examines the need for, and importance of, co-creation and collaboration between local youth, social workers, computer scientists, and other stakeholders in youth development, with appropriate ethical standards.
Desmond Upton Patton, PhD, MSW is the founding director of the SAFElab and Associate Professor of Social Work, Sociology and Data Science at Columbia University.
Siva Mathiyazhagan, PhD, MSW is the founding director of Trust for Youth and Child Leadership (TYCL) International and an affiliate researcher at SAFElab, Columbia University.
Aviv Y. Landau, PhD, MSW worked for several years as a social worker in Israel and is a postdoctoral research scientist at SAFElab and the Data Science Institute, Columbia University.