
Why should social work and social policy researchers care about welfare algorithms? 



By Anabelle Ragsag

I am Anabelle Ragsag, a doctoral student at McMaster University’s School of Social Work. My doctoral research aims to join scholarly conversations at the intersection of political economy, the governance of welfare algorithms, and transnational and Third World feminisms. My commitment to this work has been informed by my lived experience as a Filipina immigrant, by my engagements with diasporic Filipinas and racialized women, and by my international development work, which predates my doctoral studies and concerns global poverty patterns and the various antipoverty, social policy, and welfare responses to them. Put another way, my interest in studying welfare algorithms emanates from intertwined personal, political, and epistemological considerations.

For the 2024 Sherman Centre Graduate Residency, I will be pursuing a smaller part of my doctoral thesis. While in this residency, I will look into the AIAAIC Repository (standing for AI, Algorithmic, and Automation Incidents and Controversies) to select documented cases of algorithmic incidents in the welfare sector in Canada.

I envision creating a database or a list from this repository and digging deeper into public-facing web-based documents describing these algorithmic incidents. According to the AIAAIC website, “the Repository is an open resource that details incidents and controversies driven by and relating to artificial intelligence, algorithms, and automation. By collecting, dissecting, and surfacing incidents and issues from a responsible, ethical ‘outside-in’ perspective in an objective and balanced manner, the Repository enables users to identify, examine, and understand the nature, risks, and impacts of AI, algorithms, and automation.”
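To make that case-selection step concrete, here is a minimal sketch in Python of how such a shortlist could be built from an exported copy of the Repository. The file name and the column names (“Sector”, “Country(ies)”) are assumptions for illustration, not the Repository’s confirmed schema; the actual export would need to be inspected first.

```python
# Minimal sketch: filter an exported AIAAIC Repository CSV down to
# welfare-sector incidents located in Canada.
# NOTE: the file name and column names below are assumptions for
# illustration; check the real export (e.g., df.columns) before running.
import pandas as pd

df = pd.read_csv("aiaaic_repository_export.csv")  # hypothetical local export

# Case-insensitive substring matches on the assumed sector and country fields.
mask = (
    df["Sector"].str.contains("welfare|social assistance", case=False, na=False)
    & df["Country(ies)"].str.contains("canada", case=False, na=False)
)
welfare_canada = df[mask]

# Save the shortlist as a working database for deeper document analysis.
welfare_canada.to_csv("welfare_incidents_canada.csv", index=False)
print(f"{len(welfare_canada)} candidate incidents selected")
```

Each shortlisted row could then be paired with the public-facing documents (news reports, audits, tribunal decisions) that describe the incident.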

While this work is primarily oriented toward building my understanding of how algorithms are used in social assistance programs, I also intend to make the resulting data available to researchers in the areas of digital social work, digital welfare states, and technology (in)justices. I also hope to use my time in residency to be part of a supportive digital scholarship community and to explore possibilities for collaborative work with my cohort and residency supervisors.

One way of looking at these algorithmic incidents is through the academic literature on algorithmic harms. The AIAAIC Repository and my proposed research here at the Sherman Centre for Digital Scholarship will allow me to scratch the surface and probe what is understood as welfare algorithms, in what ways and why welfare algorithms are reported as “incidents”, and how they are governed.

How do I understand algorithmic harms? The first step is to understand what an algorithm is. Those who use social media might already be familiar with the term. In social media, algorithms are the rules, signals, and data that govern a platform’s operation. These algorithms determine how content is filtered, ranked, selected, and recommended to users. In some ways, algorithms influence our choices and what we see on social media.
 
It is the same when algorithms are used in social welfare systems, such as in Ontario Works’ determination of who is (in)eligible and of the likelihood of fraud; algorithms influence the outcomes of decisions. Algorithms are mirrors of how the bigger society operates. They can only take the form of the ideologies, values, and biases of the institutions and people who create them. People who are already poor and vulnerable because of wider inequalities in society are made doubly vulnerable by algorithmic decisions that are not transparent and are made top-down. So, in my study, I understand algorithmic harms as “the adverse lived experiences resulting from a[n algorithmic] system’s deployment and operation in the world — occur[ring] through the interplay of technical system components and societal power dynamics” (Shelby et al., 2023).
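To illustrate how such opacity can be baked in, here is a deliberately toy example in Python. It is not how Ontario Works or any real welfare system works; the thresholds and the “fraud signal” are invented purely to show how a hard-coded rule can encode institutional assumptions about applicants rather than facts about them.

```python
# A purely illustrative toy rule, NOT any actual welfare algorithm:
# it shows how eligibility and fraud-risk decisions can be reduced to
# hard-coded thresholds that embed institutional assumptions.
from dataclasses import dataclass

@dataclass
class Applicant:
    monthly_income: float
    address_changes_last_year: int  # often higher for precariously housed people

INCOME_CUTOFF = 733.0  # invented threshold, for illustration only
MOVE_LIMIT = 3         # invented: frequent moves treated as a "fraud signal"

def decide(applicant: Applicant) -> str:
    if applicant.monthly_income > INCOME_CUTOFF:
        return "ineligible"
    if applicant.address_changes_last_year > MOVE_LIMIT:
        # The rule penalizes housing precarity itself, not actual fraud.
        return "flagged for fraud review"
    return "eligible"

print(decide(Applicant(monthly_income=700.0, address_changes_last_year=4)))
# -> "flagged for fraud review": the output looks objective, but the
#    "signal" reflects a bias built in by the rule's designers.
```

A recipient who receives such a flag has no way to see the thresholds that produced it, which is precisely the transparency problem the harms literature describes.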

I agree with the school of thought that algorithmic harms have a social life, meaning that these harms are not bounded by the parameters of the technical system and can “travel through social systems (e.g., judicial decisions, policy recommendations, interpersonal lived experience, etc.)” and impact the world, individuals, communities, and ecosystems (Moss & Metcalf, 2022). This definition departs from what is predominant in the AI ethics and fairness literature, which Ganesh and Moss (2022) observe to favour an understanding of algorithmic harms from the perspective of technology designers and their systems rather than centring those who are harmed.

Why am I interested in welfare algorithm governance as a social policy scholar? There has been explosive growth in the use of algorithmic tools, one of the avatars of new technologies, to automate and assist in the governance and delivery of poverty alleviation and social assistance measures, globally and locally. However, many of the new technology-mediated decisions that impact people, mostly the poor and the vulnerable, have not necessarily been guided by social justice considerations (Constantaras et al., 2023).  

As Zeffiro (2021) noted in her work on the limitations of ethics as a framework in social media research, policy responses around algorithm and AI governance are increasingly shaped by Big Tech in the form of corporate ethics charters and ethics boards, offering self-regulation to shield companies from public scrutiny. Wagner (2018), as cited in Zeffiro (2021), argues that this amounts to “ethics washing”, where ethics are invoked as a means to resist regulation “while little or nothing is done to achieve them” (Zeffiro, 2021, p. 451). It is then imperative to attend to how algorithmic governance is taking form, whether in social media research or in social assistance programming.

In the Netherlands, a childcare benefits scandal in which some 20,000 families were wrongly accused of fraud identified through welfare algorithms forced the entire Dutch government to resign in 2021 (Geiger, 2023). Here in Canada, an alphabet soup of welfare algorithmic platforms has been used in Ontario Works, such as the Consolidated Verification Procedure (CVP), Maintenance Enforcement Computer Assistance (MECA), the Service Delivery Model Technology (SDMT), and the Social Assistance Management Systems (SAMS). Recent studies have repeatedly raised the alarm about increasingly artificial-intelligence-powered welfare surveillance. In Ontario Works, algorithmic decisions are increasingly invasive in regulating the poor, as experienced by Indigenous single mothers and by Black and poor women in Ontario (Abdillahi, 2022; Dobson, 2022; Maki, 2011, 2021).

Addressing poverty, crafting social policy responses, and implementing them all implicate social workers, since the discipline and practice of social work are centred on individual and collective well-being and the power dynamics involved in achieving them. In light of the increasing use of highly automated tools in decision-making and service delivery, digital social work scholar Lauri Goldkind wrote in a commentary that “[s]ocial work has an obligation to enter into the discourse of AI-enhanced-everything to insert our ethical [and social justice] perspective into the development of algorithmic tools and products” (Goldkind, 2021, p. 373). More broadly, in a recent policy brief on democratizing artificial intelligence, Michele Gilman says, “Our society is stronger when people have the democratic right and practical means to influence the AI systems shaping their lives” (Gilman, 2023).

About Anabelle Ragsag

Anabelle Ragsag is a Ph.D. Social Work student interested in understanding the refusal of algorithmic harms and the re-envisioning of algorithmic care by Southeast Asian Canadian mothers on Ontario Works. Anabelle’s background is in politics and policy, labour, and data science, from the University of the Philippines, Carleton University, and the University of Guelph. She grounds her community-building efforts, research, and personal endeavours in creative, collaborative, and liberatory approaches. She loves traveling, thrifting, and, in equal parts, building relationships and spending time alone.

For the 2024 Sherman Centre Graduate Residency, Anabelle pursues a smaller part of her PhD thesis. Algorithmic harms are predominantly understood through the AI ethics literature, which favours the perspective of technology designers and their systems rather than centring those harmed. While AI justice studies are emerging to contest this, little is known about how these algorithmic harms work within social welfare systems. One existing way of knowing where these harms find their way into social welfare systems, even if, Anabelle argues, a technology-centric and incomplete one, is the AIAAIC Repository (standing for AI, Algorithmic, and Automation Incidents and Controversies). From this Repository, Anabelle envisions creating a database and case study descriptions of how algorithmic harms in social welfare systems are currently presented technologically, even as she recognizes the Repository’s limits as a data source.

References

Abdillahi, I. (2022). Black women under state: Surveillance, poverty & the violence of social assistance. ARP Books. 

Constantaras, E., Geiger, G., Braun, J.-C., Mehrotra, D., & Aung, H. (2023, March 6). Inside the suspicion machine. Wired.

Dobson, K. (2022). Living in algorithmic governance: A study in the digital governance of social assistance in Ontario [Carleton University]. https://curve.carleton.ca/e8326d40-4d6a-442d-acad-9f552fcb17f2 

Geiger, G. (2023, March 6). How Denmark’s welfare state became a surveillance nightmare. Wired.

Gilman, M. (2023). Democratizing AI: Principles for meaningful public participation. Data & Society Policy Brief.

Goldkind, L. (2021). Social work and artificial intelligence: Into the matrix. Social Work, 66(4), 372–374.

Maki, K. (2011). Neoliberal deviants and surveillance: Welfare recipients under the watchful eye of Ontario Works. Surveillance & Society, 9(1–2), 47–63.

Maki, K. (2021). Ineligible: Single mothers under surveillance. Fernwood Publishing. 

Moss, E., & Metcalf, J. (2022, March). The social life of algorithmic harms. Data & Society Workshop.

Shelby, R., Rismani, S., Henne, K., Moon, Aj., Rostamzadeh, N., Nicholas, P., Yilla, N., Gallegos, J., Smart, A., Garcia, E., & Virk, G. (2023). Sociotechnical harms of algorithmic systems: Scoping a taxonomy for harm reduction. arXiv.

Wagner, B. (2018). Ethics as an escape from regulation: From ‘ethics washing’ to ‘ethics shopping’? In E. Bayamlioglu, I. Baraliuc, L. A. W. Janssens, & M. Hildebrandt (Eds.), Being profiled: Cogitas ergo sum – 10 years of profiling the European citizen. Amsterdam University Press.

Zeffiro, A. (2021). From data ethics to data justice in/as pedagogy (Dispatch). Studies in Social Justice, 15(3), 450–457.
