Experts Discuss Pros and Cons of Predictive Risk Tools in Child Welfare Practice


NEW YORK — November 12, 2021 — Child welfare agencies increasingly rely on artificial-intelligence-driven risk assessment tools to help predict which young people are most likely to experience maltreatment, so that investigations and resources can be targeted and harm prevented. Proponents say predictive risk modeling improves child protection services and can reveal racial disparities in how those services are applied, while critics say the algorithms perpetuate and even magnify those very same disparities.

“There must be a rigorous examination of the ways that social service and government agencies use these tools, in partnership with researchers, subject matter experts, practitioners and community members,” NYU McSilver Executive Director Dr. Michael A. Lindsey said as experts from both sides of the discussion presented data and perspectives Wednesday evening. He co-moderated the program with Anne Williams-Isom, James R. Dumpson Endowed Chair in Child Welfare at Fordham University’s Graduate School of Social Service (GSS) and former CEO of the Harlem Children’s Zone. Fordham GSS and NYU McSilver co-hosted the event.

Roughly half of U.S. states have considered predictive analytics tools for their child welfare systems, with jurisdictions in at least 11 states currently using them, according to Anjana Samant, a Senior Staff Attorney with the American Civil Liberties Union (ACLU) Women’s Rights Project.

In New York City, the tools in use include the Severe Harm Predictive Model, which predicts the odds that a child in an open case will be the subject of substantiated allegations of physical or sexual abuse within the next 18 months, and the Service Termination Conference Model, which prioritizes the cases in which the Administration for Children’s Services (ACS) will be directly involved during the meeting held to decide whether a case will be closed.
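Neither model’s internals are described in the discussion, but risk tools of this general class typically estimate such odds from historical administrative data. The sketch below is purely illustrative, with invented feature names and synthetic data rather than any agency’s actual model or inputs.

```python
# Purely illustrative sketch of a predictive risk model of this general class.
# Feature names, data, and outcome below are synthetic; this is not the
# Severe Harm Predictive Model or any agency's production code.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical administrative-data features for 1,000 historical open cases
# (e.g., number of prior reports, age of youngest child, open service referrals).
X = rng.integers(0, 5, size=(1000, 3)).astype(float)

# Hypothetical historical label: whether a substantiated allegation followed
# within 18 months (1) or not (0).
y = rng.integers(0, 2, size=1000)

model = LogisticRegression().fit(X, y)

# For a new open case, the fitted model outputs a probability that can be
# reported as odds or binned into a risk score for caseworkers.
new_case = np.array([[2.0, 1.0, 0.0]])
p = model.predict_proba(new_case)[0, 1]
print(f"predicted probability: {p:.2f}; odds: {p / (1 - p):.2f}")
```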

Samant presented an overview of the ACLU’s report, Family Surveillance by Algorithm, which warns that “any tool built from a jurisdiction’s historical data runs the risk of perpetuating and exacerbating, rather than ameliorating, the biases that are embedded in that data.”

Dr. Rhema Vaithianathan spoke about the development of the Allegheny Family Screening Tool (AFST), which generates a risk score for complaints received through the child maltreatment hotline of the Pennsylvania county that includes Pittsburgh. The New Zealand-based Director of the Centre for Social Data Analytics at Auckland University of Technology said that during the development process her team spoke with hotline call screeners and with families of children who had aged out of the system.

Two things struck her once the system was deployed, said Dr. Vaithianathan: 1) The county was screening many children into the system whom the tool had scored as “low risk,” and for whom investigations did not turn up abuse; 2) “Black children who were scored a low score were more likely to be investigated than white children who had a low score, and white children who scored a high score were more likely to not be investigated than Black children who scored a high score, and these [results] were statistically significant.”

She concluded that the tool has a role to play in uncovering bias in the decisions made by child welfare staffers, presenting an opportunity to address systemic discrimination.
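The kind of post-deployment check she described, comparing how often children in the same score band are investigated across racial groups, can be sketched roughly as follows. The counts here are invented for illustration and are not Allegheny County’s data.

```python
# Illustrative audit of decision-level disparity within one score band.
# The counts are invented; they are not Allegheny County data.
from scipy.stats import chi2_contingency

# Rows: group A, group B; columns: investigated, not investigated,
# among referrals the tool scored as low risk (hypothetical numbers).
low_score_band = [
    [120, 380],  # group A: 24% of low-risk referrals investigated
    [60, 440],   # group B: 12% of low-risk referrals investigated
]

chi2, p_value, dof, expected = chi2_contingency(low_score_band)
print(f"chi-square = {chi2:.1f}, p = {p_value:.4f}")
# A small p-value suggests the gap in screen-in rates at the same risk score
# is unlikely to be chance, pointing to disparity in the human decisions.
```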

JCCA’s CEO Ronald E. Richter, an organizer of the talk, said he has seen first-hand how Black families have always been disproportionately represented among those regulated by child welfare agencies. He also acknowledged skepticism among some of the speakers and audience members that predictive modeling tools can address that inequity.

As a former New York City Family Court judge and Commissioner of New York City’s Administration for Children’s Services (ACS), Richter said, “When we talk about a ‘tool’ that may help reduce disproportionality in family regulation, sincere issues of trust are surfaced, especially from those of us who have witnessed first hand what child welfare looks like on the ground and also those who have been historically judged by the system and feel strongly that many children and families have been misjudged – that the government’s wide net has not really gotten much right.”

J. Khadijah Abdurahman, a Fellow at Columbia University and NYU’s AI Now Institute, is concerned that predictive risk modeling is one more tool for family regulation systems that are responsible for destroying Black families, enabling them to sweep up more families within the wide net that Judge Richter spoke of. “Just like they were separating us at the auction block, just like they were separating us in the slave ship is the way that they separate us on Rikers Island and the way that they are separating us through machine learning and artificial intelligence.”

She also said, “There’s a lot of focus on the algorithm itself and how it’s making decisions, but I think it’s important to understand—because these models rely on administrative data, [they rely] on seeing like a State—and how does The State see people?”

Dr. Emily Putnam-Hornstein, the Co-Director of the Children’s Data Network, who worked with Vaithianathan on the AFST, voiced concern that a key fact was getting lost in the debate over the value of child welfare systems and their use of AI-driven tools: these systems can save lives. “I don’t think that one-third of American children should experience an investigation during childhood and I think there’s been far too little attention paid to the consequences of unnecessary investigations for families, especially low income families and families of color. But, I don’t think this should be called a family regulation system. This is not intended to be punitive. There are children in our communities who need protection….This work is incredibly difficult that we asked our child protection system and our frontline staff to do.”

She added that most of the tools in use to assess risk are outdated and that AI-driven tools can improve outcomes for children and families.

Aaron Horowitz, the ACLU’s Chief Data Scientist and a co-author of its report, questioned whether the tools are in fact improving outcomes. He said the use of AI algorithms in criminal justice for pre-trial release decisions has tended to perpetuate or increase incarceration levels, except in instances where additional wraparound policies were implemented with the express goal of decreasing incarceration.

It is important to know what policies are embedded in a tool’s design, he said, because those policies will shape the decisions made from the data it produces. Otherwise, once a person is in the system, “These tools, by definition, are likely to increase the perception of your risk and therefore increase the odds of your family being [negatively impacted].”
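His point about feedback loops can be illustrated with a deliberately simplified, hypothetical scoring rule: if prior contact with the system is itself a model input, each new contact mechanically raises the next score. The function and weights below are invented for illustration only.

```python
# Deliberately simplified, hypothetical scoring rule illustrating the feedback
# loop: prior system contact as a model input raises every subsequent score.
def risk_score(prior_investigations: int, other_signal: float) -> float:
    """Toy score in (0, 1); the weights are invented for illustration."""
    raw = 0.3 * prior_investigations + other_signal
    return raw / (1.0 + raw)

baseline = risk_score(prior_investigations=0, other_signal=0.5)
print(f"no prior contact: {baseline:.2f}")
for contacts in range(1, 4):
    print(f"after {contacts} prior investigation(s): "
          f"{risk_score(contacts, 0.5):.2f}")
```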

Jennie Feria, Deputy Director for the Los Angeles County Department of Children and Family Services (DCFS), acknowledged that there can be shortcomings to using predictive risk modeling and described the lessons learned by her agency during a pilot implementation of a tool. “While there were promising results, a key was that predictive modeling must—and I emphasize must—be an open source, non-proprietary algorithm in a glass box to ensure there’s transparency.”

She added, “We also learned that working with researchers or academics, instead of the black box proprietary software developed by for-profit companies, is critical for several reasons: To help ensure that the data does not perpetuate racial or ethnic disproportionality; to facilitate dialogue with our community stakeholders; to ensure data is being used safely and ethically; and to ensure DCFS has the capacity to update and validate data and improvements to the model with local data in real time, improving the accuracy of the model.” She said her agency is improving its screening and investigation process, in partnership with their office of equity.

The discussion was the first of a series that will examine predictive risk modeling in child welfare practice.

