The emergence of algorithm-based health care delivery models came with the promise of objectivity: algorithms are, in theory, free from the biases and errors to which humans are prone. In practice, however, data are not neutral, and these algorithms can perpetuate biases and reinforce existing health disparities.
In a new article in Health Services Research titled “Evaluating a Predictive Model of Avoidable Hospital Events for Race- and Sex-Based Bias,” Hilltop researchers Leigh Goetschius, Fei Han, Ruichen Sun, and Morgan Henderson—along with UMBC researcher Ian Stockwell—assessed a large, productionized predictive model of avoidable hospital (AH) events for bias based on patient race and sex. This model, which Hilltop created and runs, assigns monthly risk scores to all Medicare fee-for-service (FFS) beneficiaries attributed to primary care providers that participate in the Maryland Primary Care Program (MDPCP).
The researchers implemented a simple bias assessment methodology and found no evidence of meaningful race- or sex-based bias in the model. This research demonstrates that simple techniques can be used to assess predictive models for statistical bias and is an example of Hilltop transforming ongoing operational work into meaningful engaged scholarship.
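The article itself details the methodology the researchers used. As a rough, illustrative sketch of the kind of subgroup comparison such an assessment can involve—not the authors’ actual method—the Python snippet below compares a risk model’s calibration and error rates across race and sex groups. The column names, threshold, and synthetic data are hypothetical.

```python
import numpy as np
import pandas as pd

# Hypothetical beneficiary-month data: risk_score is the model's predicted
# probability of an avoidable hospital (AH) event, ah_event is the observed
# outcome, and race/sex are the subgroup attributes being checked.
rng = np.random.default_rng(0)
n = 10_000
df = pd.DataFrame({
    "risk_score": rng.uniform(0, 1, n),
    "ah_event": rng.integers(0, 2, n),
    "race": rng.choice(["Black", "White", "Other"], n),
    "sex": rng.choice(["F", "M"], n),
})

THRESHOLD = 0.5  # hypothetical cutoff for flagging a beneficiary as high risk
df["flagged"] = df["risk_score"] >= THRESHOLD


def subgroup_metrics(frame: pd.DataFrame) -> pd.Series:
    """Calibration and error-rate metrics for one subgroup."""
    events = frame["ah_event"] == 1
    return pd.Series({
        "n": len(frame),
        "mean_predicted": frame["risk_score"].mean(),   # average predicted risk
        "observed_rate": frame["ah_event"].mean(),      # actual AH-event rate
        "false_negative_rate": (~frame.loc[events, "flagged"]).mean(),
        "false_positive_rate": frame.loc[~events, "flagged"].mean(),
    })


# Large gaps between groups in calibration (mean_predicted vs. observed_rate)
# or in error rates would be evidence of statistical bias.
cols = ["risk_score", "ah_event", "flagged"]
for attr in ["race", "sex"]:
    print(df.groupby(attr)[cols].apply(subgroup_metrics), "\n")
```

In this kind of check, a model is considered unbiased when its predictions are similarly well calibrated, and its error rates are similar, across the groups being compared.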
Read more about Hilltop’s work in predictive modeling.