Debiasing GPT-3 Job Ads





2022-10-10


Today, Conrad Borchers, a Ph.D. student in Human-Computer Interaction, joins us to discuss his work on debiasing large generative language models, particularly GPT-3.

Conrad began with a rundown of his educational journey and the changes in techniques and approaches that have taken place since he began researching sentiment analysis.

Since the advent of attention-based models, the field of NLP has been revolutionized. Conrad, however, spoke about some of the challenges that come with these large attention-based models and discussed how to detect and quantify bias in large NLP models.
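To make the measurement idea concrete, here is a minimal sketch (our illustration, not Conrad's actual method) of one simple way to quantify bias in generated job ads: counting stereotypically masculine- versus feminine-coded words. The word lists and example ad below are hypothetical stand-ins for the validated lexicons used in the job-ad bias literature.

```python
import re
from collections import Counter

# Hypothetical word lists; real studies use validated lexicons of
# masculine- vs. feminine-coded terms from the job-ad bias literature.
MASCULINE_CODED = {"competitive", "dominant", "leader", "ambitious", "decisive"}
FEMININE_CODED = {"supportive", "collaborative", "nurturing", "interpersonal", "loyal"}

def coded_word_counts(ad_text: str) -> dict:
    """Count masculine- vs. feminine-coded words in one job ad."""
    tokens = re.findall(r"[a-z]+", ad_text.lower())
    counts = Counter(tokens)
    return {
        "masculine": sum(counts[w] for w in MASCULINE_CODED),
        "feminine": sum(counts[w] for w in FEMININE_CODED),
    }

# Toy example: a generated ad skewed toward masculine-coded language.
ad = "We seek an ambitious, competitive leader who is decisive under pressure."
print(coded_word_counts(ad))  # {'masculine': 4, 'feminine': 0}
```

Aggregating such counts over many generated ads gives a crude but interpretable bias score that can be compared before and after a debiasing intervention.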

Speaking of the importance of debiasing large language models, Conrad began with two guiding questions. First, what societal issue do we want to improve upon? Second, what is stopping us from achieving it? He cited an example of how language-model bias can affect the way men and women apply for jobs. He also spoke about the difficulty of quantifying how gender and racial bias affect society as a whole.

Conrad discussed the realism of GPT-3's output: the current quality of GPT-3-generated job ads and whether they can soon become indistinguishable from human-written ones. He described the data he collected to build the baseline for his research and the findings that emerged from his analysis. He also spoke about methods for effectively fine-tuning GPT-3 models to reduce bias.
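As a rough sketch of what fine-tuning GPT-3 on a corpus of job ads could look like (an assumption on our part, not Conrad's exact pipeline), the 2022-era OpenAI fine-tuning workflow expected JSONL records of prompt/completion pairs, which were then submitted through the OpenAI CLI. The field contents and example records below are hypothetical.

```python
import json

# Hypothetical training examples: a short structured prompt per ad and a
# debiased (gender-neutral) job ad text as the target completion.
examples = [
    {
        "prompt": "Job title: Software Engineer\nCompany: Acme\nAd:",
        "completion": " We are looking for a collaborative engineer who enjoys "
                      "solving hard problems and mentoring teammates.\n",
    },
    # ... more prompt/completion pairs ...
]

# Write the examples in the JSONL format the 2022-era OpenAI fine-tuning
# endpoint expected (one JSON object per line).
with open("job_ads_train.jsonl", "w") as f:
    for ex in examples:
        f.write(json.dumps(ex) + "\n")

# The file could then be submitted with the OpenAI CLI of that era, e.g.:
#   openai api fine_tunes.create -t job_ads_train.jsonl -m davinci
```

The key design choice is what the completions contain: if they are curated, bias-reduced ads, the fine-tuned model is pulled toward that distribution rather than toward the biased language of its pretraining data.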

Conrad then discussed prompt engineering: what it is and how prompts can potentially be used to de-bias language models. He also discussed social practices that could help accelerate the improvement of generative language models in the future. Wrapping up, he spoke about where his work is headed next. You can follow Conrad's work on his website or follow him on Twitter @conradborchers.
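To illustrate the prompt-engineering idea, here is a minimal sketch (our illustration, not a prompt from the episode) using the 2022-era OpenAI completions endpoint to steer GPT-3 toward gender-neutral job-ad language. The instruction wording and choice of model are assumptions.

```python
import openai  # legacy 2022-era OpenAI Python client

openai.api_key = "YOUR_API_KEY"  # placeholder

# A debiasing instruction is prepended to the task so the model is nudged
# toward inclusive, gender-neutral wording before it generates the ad.
prompt = (
    "Write a job ad for a senior data analyst position. "
    "Use gender-neutral, inclusive language and avoid stereotypically "
    "masculine- or feminine-coded words.\n\nJob ad:"
)

response = openai.Completion.create(
    model="text-davinci-002",  # a GPT-3 model available in 2022
    prompt=prompt,
    max_tokens=200,
    temperature=0.7,
)

print(response["choices"][0]["text"].strip())
```

Unlike fine-tuning, this approach changes no model weights; the debiasing pressure lives entirely in the instruction, which makes it cheap to iterate on but harder to guarantee.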

Conrad Borchers

Conrad is a first-year PhD student at the Human-Computer Interaction Institute (HCII) at Carnegie Mellon University's School of Computer Science. His research interests center on leveraging data science to improve educational processes. They include the study of institutional and teacher use of social media, course workload and course selection in higher education, and the roles of teacher attention and student motivation in the efficacy of educational technologies.


Thanks to our sponsors for their support

Astrato is a modern BI and analytics platform built for the Snowflake Data Cloud: a next-generation live-query data visualization and analytics solution that empowers everyone to make decisions on live data.
https://astrato.io
ClearML is an open-source MLOps solution users love to customize, helping you easily Track, Orchestrate, and Automate ML workflows at scale.
https://clear.ml/