Oregon Department of Human Services to End Its Use of Child Abuse Risk Algorithm

The algorithm was based on another used in Allegheny County, Pa., that flagged Black families disproportionately.

Kids' bikes parked outside Sunnyside School. (Sam Gehrke)

The Oregon Department of Human Services will stop using an algorithmic tool that helps social workers decide whether to investigate families for child abuse and neglect, shortly after a study showed a similar tool disproportionately flagged Black families.

The algorithm, first used by Oregon officials in 2018, was inspired by a screening tool developed for child welfare officials in Allegheny County, Pa. But the Pennsylvania algorithm has come under scrutiny: Data collected by a Carnegie Mellon University research team and reviewed by the Associated Press in April indicated that Allegheny’s algorithm flagged a disproportionate number of Black children for “mandatory” neglect investigation in its first years of operation.

In a May 19 email to staff obtained by the AP, Lacey Andersen, deputy director of Oregon DHS, announced that the agency would stop using the algorithm by the end of June. “We are committed to continuous quality improvement and equity,” Andersen wrote.

Oregon’s algorithm, part of its Safety at Screening Tool, is meant to help hotline workers decide whether social workers should investigate reports of child abuse and neglect. Reports fielded by hotline workers go through a screening process in which the algorithm generates a numerical risk score indicating the likelihood that the children involved will end up in foster care or that their treatment will be investigated in the future. Hotline workers use the risk scores as additional data to inform their decisions about state intervention, but they retain discretion over which cases to investigate.
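
The workflow described above can be sketched in code: a predictive model turns a report into a score, and the hotline worker treats that score as one input rather than a mandate. The Python below is a purely hypothetical illustration; the report features, the weights in risk_score, and the 0.7 threshold in screen_in are all invented here and do not reflect Oregon DHS’s actual tool.

```python
# Hypothetical sketch only: the features, weights, and threshold below are
# invented for illustration and do not reflect Oregon DHS's actual
# Safety at Screening Tool.

from dataclasses import dataclass
from typing import Optional

@dataclass
class Report:
    """A hotline report reduced to two made-up features."""
    prior_reports: int        # previous hotline reports involving the family
    prior_foster_care: bool   # whether a child was previously in foster care

def risk_score(report: Report) -> float:
    """Toy stand-in for the predictive model: returns a score in [0, 1]
    meant to represent the estimated likelihood of a future foster-care
    placement or investigation."""
    score = 0.1 + 0.15 * min(report.prior_reports, 5)
    if report.prior_foster_care:
        score += 0.2
    return min(score, 1.0)

def screen_in(report: Report, worker_decision: Optional[bool]) -> bool:
    """The score informs the worker but does not decide the case: a worker
    who has formed a judgment overrides the model's flag either way."""
    model_flag = risk_score(report) >= 0.7  # invented threshold
    return model_flag if worker_decision is None else worker_decision
```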

The algorithm used by the Allegheny Family Screening Tool provides the “procedural basis” for Oregon DHS’s Safety at Screening Tool, according to a DHS report.

Allegheny’s algorithm uses personal data from government data sets, including Medicaid, mental health, and jail and probation records, to calculate numerical risk scores. A study cited by the AP investigation found that, had the tool screened in a comparable rate of calls on its own, it “would have recommended that two-thirds of Black children be investigated, compared with about half of all other children.”

Jake Sunderland, press secretary for the Oregon Department of Human Services, told WW in an email that DHS made adjustments to its algorithm to account for racial bias.

“Knowing that algorithms are at risk of perpetuating racial biases and structural inequities, the ODHS Safety at Screening Tool was developed with an algorithmic ‘fairness correction’ to correct for the biases in the data,” Sunderland wrote in the email to WW.
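
Sunderland’s email does not say what the “fairness correction” consisted of. One common family of corrections in the predictive-risk literature recalibrates scores or decision thresholds per demographic group so that groups are flagged at equal rates. The sketch below illustrates that general idea on simulated data; it is an assumption offered for illustration, not a description of the ODHS method.

```python
# Hypothetical illustration of one common kind of "fairness correction":
# per-group thresholds chosen so every group is flagged at the same rate.
# This is NOT the correction ODHS applied; the article does not detail it.

import numpy as np

def equal_flag_rate_thresholds(scores_by_group: dict[str, np.ndarray],
                               target_rate: float) -> dict[str, float]:
    """For each group, return the score threshold whose flag rate equals
    target_rate, i.e. the (1 - target_rate) quantile of the group's scores."""
    return {group: float(np.quantile(scores, 1.0 - target_rate))
            for group, scores in scores_by_group.items()}

# If biased training data skews one group's scores upward, that group gets a
# higher threshold, so both groups end up flagged at the same 20% rate.
rng = np.random.default_rng(0)
scores_by_group = {
    "group_a": rng.normal(0.60, 0.15, 10_000).clip(0.0, 1.0),  # skewed upward
    "group_b": rng.normal(0.50, 0.15, 10_000).clip(0.0, 1.0),
}
thresholds = equal_flag_rate_thresholds(scores_by_group, target_rate=0.20)
print(thresholds)  # group_a's threshold comes out higher than group_b's
```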

Sunderland said there was no connection between recent media coverage of algorithmic bias and the agency’s decision. He told WW that DHS decided in 2021 to replace its screening processes “with the goal of improving equity, accuracy and consistency.” DHS is discontinuing the Safety at Screening Tool not because its algorithm yielded racially biased results, Sunderland added, but because the tool is incompatible with the agency’s new screening model, Structured Decision Making.

“Because the Structured Decision Making tool uses family specific information, the risk score produced by the Safety at Screening Tool (a predictive analytic tool based on aggregate child welfare data) could not be incorporated into Structured Decision Making,” Sunderland wrote.

DHS will replace the algorithm with the less automated Structured Decision Making model June 30.
