Mitigating Vulnerabilities in Large Language Models


Grant: LSAMP

Anthony Diaz

College:
The Dorothy and George Hennings College of Science, Mathematics, and Technology

Major:
Computer Science

Faculty Research Advisor(s):
Daehan Kwak, Patricia Morreale

Abstract:
ChatGPT has changed the world. Every aspect of day-to-day life, including work and school, has been made more efficient by the tools developed in the field of AI. Yet for all its capability, ChatGPT has one glaring limitation: it is not open source. A company that wants to tailor the model to its specific use cases, or to protect proprietary data, cannot do so. This restriction has kept government contractors and other organizations that handle sensitive data from using ChatGPT, leaving them at an AI disadvantage relative to private-sector companies that do not handle sensitive data and are therefore unaffected by ChatGPT's closed-source nature. To compete in the future, defense contractors and private-sector firms alike will need to understand AI and be able to use it safely. The objective of this research is to understand the transformer architecture behind ChatGPT and to work with an open-source model of the same architecture. The goal is to identify vulnerabilities and develop solutions, enabling defense contractors and other companies with sensitive data to create their own models and tailor them to their specific needs.
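The transformer architecture the abstract refers to is built around scaled dot-product attention. As a minimal illustrative sketch (toy dimensions and random values, not taken from the poster itself), the core operation can be written in NumPy as:

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Core transformer operation: softmax(Q K^T / sqrt(d_k)) V."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                   # query-key similarity
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)    # row-wise softmax
    return weights @ V                                # weighted sum of values

# Toy example: 3 tokens, embedding dimension 4
rng = np.random.default_rng(0)
Q = rng.standard_normal((3, 4))
K = rng.standard_normal((3, 4))
V = rng.standard_normal((3, 4))
out = scaled_dot_product_attention(Q, K, V)
print(out.shape)  # (3, 4)
```

Open-source models such as GPT-2 or LLaMA-family checkpoints stack many layers of this mechanism, which is what makes them suitable stand-ins for studying the same architecture ChatGPT uses.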

