Study Unveils Biases: Machine Learning for Augmented Detection of Perinatal Mood and Anxiety Disorders
A recent study evaluated bias-mitigated predictive models of perinatal mood and anxiety disorders (PMADs), using machine learning to augment screening. The study aimed to mitigate bias in predictive models trained on electronic health record (EHR) data collected from 2020 to 2023 at Cedars-Sinai Medical Center in Los Angeles, California. It included birthing patients aged 14 to 59 years with live birth records who were admitted to the postpartum unit or the maternal-fetal care unit after delivery.
Model Training and Evaluation
Patient-reported race and ethnicity data obtained from EHRs served as the exposure variables. Logistic regression, random forest, and extreme gradient boosting models were trained to predict moderate- to high-risk (positive) screens on the 9-item Patient Health Questionnaire (PHQ-9) and the Edinburgh Postnatal Depression Scale (EPDS). Each model was assessed with and without reweighing the data during preprocessing to evaluate the effect of bias mitigation on model performance.
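To make the modeling setup concrete, here is a minimal sketch in Python using scikit-learn and XGBoost. The file name, the column names ("positive_screen", "race_ethnicity"), and the hyperparameters are illustrative assumptions, not details taken from the paper.

```python
# Sketch of the three model families described above. All file and
# column names and hyperparameters are illustrative assumptions.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from xgboost import XGBClassifier

df = pd.read_csv("pmad_cohort.csv")  # hypothetical EHR extract

# Features are assumed to be already numeric/encoded; the hypothetical
# "race_ethnicity" column is held out of X and used only for the
# fairness assessments sketched later.
X = df.drop(columns=["positive_screen", "race_ethnicity"])
y = df["positive_screen"]  # 1 = moderate- to high-risk PHQ-9/EPDS screen

models = {
    "logistic_regression": LogisticRegression(max_iter=1000),
    "random_forest": RandomForestClassifier(n_estimators=500, random_state=0),
    "xgboost": XGBClassifier(n_estimators=500, eval_metric="logloss"),
}

# sample_weight=None is the baseline; passing reweighed weights (see
# the reweighing sketch below) yields the bias-mitigated variant.
for name, model in models.items():
    model.fit(X, y, sample_weight=None)
```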
Patient Analysis and Model Performance
Among the 19,430 patients in the study, racial and ethnic minority patients were more likely than non-Hispanic White patients to screen positive for PMADs. The models achieved modest performance, with mean AUROCs ranging from 0.602 to 0.635 without reweighing and from 0.602 to 0.622 with reweighing. Baseline models showed disparities in predicting postpartum depression; reweighing reduced the differences in demographic parity and false-negative rates across racial and ethnic groups.
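The AUROC figures above are means over cross-validation folds. As a generic sketch of how such an estimate is computed, reusing X and y from the sketch above (the estimator choice and the fold counts are assumptions):

```python
# Mean AUROC over repeated stratified K-fold cross-validation; the
# 5-split, 10-repeat structure and the estimator are assumptions.
from sklearn.model_selection import RepeatedStratifiedKFold, cross_val_score
from sklearn.linear_model import LogisticRegression

cv = RepeatedStratifiedKFold(n_splits=5, n_repeats=10, random_state=0)
scores = cross_val_score(
    LogisticRegression(max_iter=1000), X, y, scoring="roc_auc", cv=cv
)
print(f"mean AUROC: {scores.mean():.3f} (sd {scores.std():.3f})")
```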
Addressing Health Disparities in Predictive Models
The study highlighted the importance of choosing target variables that are less likely to reflect existing disparities, to avoid widening health disparities in PMAD diagnosis and treatment. Machine learning can augment traditional screening procedures and promote more equitable, routine PMAD screening. The authors emphasized the need for model designs that integrate knowledge of health disparities, both to limit algorithmic bias and to support a nuanced understanding of potential biases in PMAD predictive models.
Model Assessment and Bias Mitigation
The research used supervised classification algorithms, hyperparameter tuning, and repeated K-fold cross-validation to evaluate model performance and bias. Fairness metrics such as demographic parity and false-negative rates were used to assess bias across racial and ethnic groups. Reweighing was applied to reduce bias in the models, although deterministic, frequency-based reweighing can itself introduce new biases against certain groups.
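A minimal sketch of frequency-based reweighing (the Kamiran and Calders scheme, which assigns each group-label cell the weight w(a, y) = P(A=a) * P(Y=y) / P(A=a, Y=y)) and of the two fairness metrics named above, reusing df and y from the earlier sketch. The "race_ethnicity" column name is an assumption.

```python
# Frequency-based (Kamiran-Calders) reweighing plus the two fairness
# metrics named above. Column names are illustrative assumptions.
import numpy as np
import pandas as pd

def reweigh(groups: pd.Series, y: pd.Series) -> pd.Series:
    """w(a, y) = P(A=a) * P(Y=y) / P(A=a, Y=y): each (group, label)
    cell is weighted as if group and label were independent."""
    p_a = groups.value_counts(normalize=True)
    p_y = y.value_counts(normalize=True)
    p_ay = pd.crosstab(groups, y, normalize="all")
    return pd.Series(
        [p_a[a] * p_y[lbl] / p_ay.loc[a, lbl] for a, lbl in zip(groups, y)],
        index=groups.index,
    )

def demographic_parity_gap(y_pred, groups):
    """Largest difference in positive-prediction rates across groups."""
    d = pd.DataFrame({"pred": np.asarray(y_pred), "g": np.asarray(groups)})
    rates = d.groupby("g")["pred"].mean()
    return rates.max() - rates.min()

def fnr_gap(y_true, y_pred, groups):
    """Largest difference in false-negative rates across groups,
    computed over truly positive cases only."""
    d = pd.DataFrame({
        "y": np.asarray(y_true),
        "pred": np.asarray(y_pred),
        "g": np.asarray(groups),
    })
    pos = d[d["y"] == 1]
    fnr = pos.assign(fn=pos["pred"] == 0).groupby("g")["fn"].mean()
    return fnr.max() - fnr.min()

weights = reweigh(df["race_ethnicity"], y)
# Pass to any model from the earlier sketch:
# model.fit(X, y, sample_weight=weights)
```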
Study Findings and Recommendations
The results showed that the models did not perpetuate bias against racial and ethnic minority patients relative to non-Hispanic White patients. The authors recommended further work on optimizing model weights to achieve specific performance and fairness goals; one possible reading is sketched below. Acknowledging that machine learning alone cannot resolve all health disparities, the study framed it as one part of achieving more equitable mental health care, alongside restructured clinical workflows and enhanced mental health services.
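As a hypothetical illustration of that recommendation, not the authors' procedure, one could interpolate between uniform and reweighed sample weights and score each setting on both discrimination and fairness, reusing X, y, df, weights, and demographic_parity_gap from the sketches above.

```python
# Hypothetical illustration of tuning reweighing strength: lam=0 is
# uniform weighting, lam=1 is full reweighing; each setting is scored
# on AUROC and the demographic parity gap.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

X_tr, X_te, y_tr, y_te, g_tr, g_te, w_tr, w_te = train_test_split(
    X, y, df["race_ethnicity"], weights, test_size=0.3, random_state=0
)

for lam in np.linspace(0.0, 1.0, 5):
    w = 1.0 + lam * (w_tr - 1.0)  # blend uniform and reweighed weights
    model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr, sample_weight=w)
    auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
    gap = demographic_parity_gap(model.predict(X_te), g_te)
    print(f"lam={lam:.2f}  AUROC={auc:.3f}  demographic parity gap={gap:.3f}")
```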
Key Points
- The study aimed to evaluate bias-mitigated predictive models of perinatal mood and anxiety disorders (PMADs) using machine learning for augmented screening.
- Data collected from 2020 to 2023 at Cedars-Sinai Medical Center in Los Angeles, California, from birthing patients aged 14 to 59 years with live birth records were used for model training and evaluation.
- Patient-reported race and ethnicity data from electronic health records (EHRs) were utilized as exposure variables to predict moderate to high-risk PMAD screens using logistic regression, random forest, and extreme gradient boosting models.
- Racial and ethnic minority patients were more likely than non-Hispanic White patients to screen positive for PMADs, and the models achieved modest performance in predicting PMAD risk.
- The study underscored the importance of using target variables less likely to reflect existing disparities to prevent widening health disparities in PMAD diagnosis and treatment.
- Fairness metrics and bias-mitigation techniques such as reweighing were employed to minimize bias in the predictive models, with a recommendation to further explore and optimize model weights toward specific performance and fairness goals.
Reference –
Emily Wong et al. (2024). Evaluating Bias-Mitigated Predictive Models of Perinatal Mood and Anxiety Disorders. *JAMA Network Open*, 7. https://doi.org/10.1001/jamanetworkopen.2024.38152