Study finds that decision-making algorithms do not improve the lives of youth in the Child Welfare System
Algorithms used by states within the U.S. Child Welfare System (CWS) to make unbiased, evidence-based decisions about child placement and to predict risk of maltreatment miss the bigger picture, finds a research team from Marquette University and the University of Central Florida.
A systematic review and analysis of 50 peer-reviewed publications on algorithms used by CWS shows that they lack a human-centered approach and rely on models that focus mainly on risk assessment to minimize future harm rather than on improving the quality of life of foster children.
Based on the findings, Dr. Shion Guha, assistant professor of computer science at Marquette and the principal investigator of the study, and his team recommend a human-centered approach that actively engages stakeholders, such as foster parents and caseworkers; incorporates more comprehensive, well-studied predictors; and focuses on outcomes that improve the lives of foster children rather than merely mitigating risk.
“There were 437,465 children in the child welfare system in the United States in September 2016—a 10% increase in just four years—and those numbers are expected to keep rising unless significant efforts are made to improve youth outcomes,” says Guha.
The team summarized their findings in “A Human-Centered Review of the Algorithms used within the U.S. Child Welfare System,” which was scheduled to be presented at the 2020 ACM CHI Conference on Human Factors in Computing Systems, which is widely regarded as the best conference in the field of Human-Computer Interaction. The physical conference was cancelled due to the coronavirus outbreak, but the paper will be indexed and archived in the ACM Digital Library. This paper was honored with a Best Paper Honorable Mention award from the Awards Committee—a distinction reserved for the top 5% of submitted papers.
The research team conducted a comprehensive literature review of 50 peer-reviewed publications that outlined algorithms used for decision-making by child welfare services in the United States. The articles were reviewed both qualitatively and quantitatively to answer the following questions:
- What methods have been used to build algorithms in the child welfare system?
- What factors have been shown to be salient in predicting CWS outcomes?
- What outcomes have CWS organizations been predicting?
The study found that CWS computer models for determining child placement and predicting risk of maltreatment perform poorly because regression models tend to omit outliers to improve predictive power. From the study:
“For CWS, cases of severe abuse and neglect are the statistical outliers. Regression models that are designed to predict the most moderate (average) outcomes tend to perform poorly on outliers. In terms of CWS, poor performance on outliers raises several ethical and accountability concerns.”
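The effect the authors describe can be illustrated with a small simulation. The sketch below is purely hypothetical and uses no CWS data or any model from the reviewed papers: it fits ordinary least squares to synthetic "cases" in which a handful of severe outcomes are statistical outliers, and shows that the fitted model's error on those rare cases dwarfs its error on typical ones.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic case features and a mostly linear outcome.
X = rng.normal(0, 1, size=(200, 3))
y = X @ np.array([1.0, 0.5, -0.5]) + rng.normal(0, 0.3, 200)

# Make a few cases "severe" outliers, analogous to rare severe-abuse cases.
severe = rng.choice(200, size=5, replace=False)
y[severe] += 8.0

# Ordinary least squares fits the average trend across all cases.
Xb = np.column_stack([np.ones(len(X)), X])
coef, *_ = np.linalg.lstsq(Xb, y, rcond=None)
pred = Xb @ coef

errors = np.abs(y - pred)
typical = np.delete(np.arange(200), severe)
print(f"mean |error| on typical cases: {errors[typical].mean():.2f}")
print(f"mean |error| on severe cases:  {errors[severe].mean():.2f}")
```

Because least squares is pulled toward the moderate majority, the rare severe cases are the ones the model predicts worst, which is exactly the accountability concern the paper raises.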
There were considerable differences between the predictors currently in use and those found salient in the child-welfare literature. Although racial and ethnic disparities are well recognized in the social sciences, more than half of the papers did not include child or parent demographics in their models.
Despite the noted impact of the caseworker as a primary contact for children and parents, just two papers used variables related to characteristics of the agency, and only one accounted for characteristics of the caseworker.
Just one study accounted for the child's interactions with other people, such as siblings, relatives and the system itself, even though factors such as placement with a sibling, proximity to the child's home and relatives, and characteristics of the agency and caseworker are well studied in the child-welfare literature.
Only four of the reviewed papers accounted for characteristics of the foster parents; between 1985 and 2016, no peer-reviewed, published study accounted for foster parent-related factors at all. Two papers address the preferences of foster parents, one accounts for foster parents' past performance and capabilities, and little research has been done with regard to cultural background.
CWS has traditionally focused on risk assessment rather than positive outcomes: 28 papers targeted risk prediction as their outcome, while just two targeted matching children with foster parents.
“CWS should actively focus on approaches that disrupt the status quo and seek to improve the lives of foster children,” Guha said. “Through human-centered participatory design, we can begin to see a shift in focus to other well-documented outcomes such as Child-Foster Parent Matching. This requires an ongoing engagement with foster parents and foster children to understand their specific values and needs, as well as their cultural and parental expectations.”
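At its core, child–foster-parent matching is a two-sided preference problem. As a purely hypothetical sketch (the paper does not prescribe any particular algorithm, and the names and preference lists here are invented), one classical approach to such problems is Gale-Shapley stable matching:

```python
def stable_match(child_prefs, parent_prefs):
    """Return a child -> parent matching with no blocking pair,
    via the Gale-Shapley deferred-acceptance algorithm."""
    free = list(child_prefs)                      # children not yet matched
    next_choice = {c: 0 for c in child_prefs}     # next parent each child proposes to
    engaged = {}                                  # parent -> child
    rank = {p: {c: i for i, c in enumerate(prefs)}
            for p, prefs in parent_prefs.items()}
    while free:
        child = free.pop(0)
        parent = child_prefs[child][next_choice[child]]
        next_choice[child] += 1
        if parent not in engaged:
            engaged[parent] = child
        elif rank[parent][child] < rank[parent][engaged[parent]]:
            free.append(engaged[parent])          # current match is displaced
            engaged[parent] = child
        else:
            free.append(child)                    # proposal rejected
    return {c: p for p, c in engaged.items()}

# Hypothetical preferences for two children and two foster parents.
child_prefs = {"c1": ["p1", "p2"], "c2": ["p1", "p2"]}
parent_prefs = {"p1": ["c2", "c1"], "p2": ["c1", "c2"]}
print(stable_match(child_prefs, parent_prefs))
```

The human-centered design the authors advocate would be what fills in the preference lists: eliciting values, needs and cultural expectations from foster families and children rather than inferring everything from administrative risk data.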
The research team’s future work will involve connecting directly with CWS agencies and conducting user interviews about the systems and algorithms used within CWS to identify any other algorithms that have been implemented. The goal is to work with CWS stakeholders to understand how different policies, practices and programs create different decision pathways for child placements and the services offered to families.
Additional authors on the paper include Dr. Pamela Wisniewski, assistant professor of computer science at the University of Central Florida, and doctoral students Devansh Saxena of Marquette and Karla Badillo-Urquiola of UCF, who are the first and second authors of the study.