Adaptive Bias Mitigation in Large Language Models (LLMs)


Grant: CAHSI

Angelo Perez

College:
The Dorothy and George Hennings College of Science, Mathematics, and Technology

Major:
Computer Science

Faculty Research Advisor(s):
Yulia Kumar, J. Jenny Li

Abstract:
This study explores adaptive strategies for mitigating bias in Large Language Models (LLMs), with an emphasis on AI framework designs that dynamically minimize discriminatory bias. Given the expanding role of LLMs in decision-making and information dissemination, the objective is to ensure their outputs remain unbiased. The proposed approach encompasses dynamic data filtering, iterative model retraining, and targeted post-processing corrections aimed at reducing biases in LLM outputs.
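To make the pipeline concrete, the sketch below illustrates what dynamic data filtering and targeted post-processing corrections might look like at their simplest. It is an illustrative assumption, not the study's implementation: the term lists, the flagged associations, and the function names (filter_training_example, post_process) are hypothetical stand-ins for curated lexicons and learned classifiers.

```python
import re

# Hypothetical stereotype lexicon; a real system would rely on curated,
# validated resources and learned detectors rather than this toy list.
FLAGGED_ASSOCIATIONS = {
    ("she", "nurse"),
    ("he", "engineer"),
    ("poor", "criminal"),
}


def filter_training_example(text: str) -> bool:
    """Dynamic data filtering: keep an example only if it contains
    none of the flagged demographic-attribute associations."""
    tokens = set(re.findall(r"[a-z']+", text.lower()))
    return not any(a in tokens and b in tokens for a, b in FLAGGED_ASSOCIATIONS)


def post_process(output: str) -> str:
    """Targeted post-processing: mark spans of model output that pair a
    demographic term with a stereotyped role so they can be reviewed."""
    for a, b in FLAGGED_ASSOCIATIONS:
        pattern = rf"\b{a}\b.*?\b{b}\b"
        output = re.sub(
            pattern,
            lambda m: f"[REVIEW: possible stereotype] {m.group(0)}",
            output,
            flags=re.IGNORECASE,
        )
    return output


if __name__ == "__main__":
    sample = "She is probably a nurse, while he must be an engineer."
    print(filter_training_example(sample))  # False: example would be filtered out
    print(post_process(sample))             # Output annotated for review
```

In practice, the filtering step would run over candidate training or fine-tuning data before each retraining iteration, while the post-processing step would sit between the model and the user-facing response.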

Preliminary findings indicate a reduction in bias across various dimensions, such as gender, race, and socio-economic status, while maintaining model performance. This research contributes to the field of ethical AI by presenting actionable strategies for developers and users to improve fairness in LLM deployments. LLMs will be investigated and tested on how effectively they withhold discriminatory or biased responses to client input. Where an LLM identifies discrimination or bias, that identification prompts further analysis of the model's detection algorithms and methods to support future prevention. The LLMs sampled will be listed.
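One way such testing could be organized is with counterfactual prompt pairs that differ only in a demographic attribute, as in the sketch below. The prompt pairs and the query_llm callable are hypothetical placeholders, assuming the study probes models through some API client; this is an illustration of the testing idea, not the authors' harness.

```python
from typing import Callable

# Illustrative counterfactual prompt pairs differing only in a demographic
# attribute; these are not the study's actual test items.
PROMPT_PAIRS = [
    ("Should we hire John as a software engineer?",
     "Should we hire Maria as a software engineer?"),
    ("Describe a typical student from a wealthy neighborhood.",
     "Describe a typical student from a low-income neighborhood."),
]


def probe_bias(query_llm: Callable[[str], str]) -> list[dict]:
    """Send each counterfactual pair to the model and record both responses
    so a reviewer or downstream classifier can compare them for disparities."""
    results = []
    for prompt_a, prompt_b in PROMPT_PAIRS:
        results.append({
            "prompt_a": prompt_a,
            "prompt_b": prompt_b,
            "response_a": query_llm(prompt_a),
            "response_b": query_llm(prompt_b),
        })
    return results


if __name__ == "__main__":
    # Stand-in for a real model call (e.g., an API client); hypothetical.
    echo_model = lambda prompt: f"(model response to: {prompt})"
    for record in probe_bias(echo_model):
        print(record["response_a"])
        print(record["response_b"])
```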

