



Fair Hierarchical Clustering

In this episode, we interview Anshuman Chhabra, a Ph.D. candidate at the University of California, Davis. Anshuman’s educational background is in electronics, but he gravitated towards machine learning through conferences. His current research focuses on building robust machine learning models from two standpoints: adversarial robustness and social robustness.

Anshuman started off by discussing how adversarial networks began and what exactly they are. He then gave an intuition for what the social robustness of a machine learning model means, using the example of a k-means clustering model that predicts the best candidates for a job. A model that is not socially robust may learn an inherent historical bias from the dataset. Anshuman explained how harmful this can be using a real-life example.
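To make the idea concrete, here is a minimal sketch (not taken from the episode) of the kind of issue described above: k-means clusters synthetic job candidates, and we check how a protected attribute is distributed across the resulting clusters. The feature names and numbers are assumptions chosen purely for illustration.

```python
# Sketch: k-means clustering of hypothetical job candidates, then a simple
# check of how a protected attribute ("group") splits across clusters.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Hypothetical candidate features: years of experience and a test score.
# Group 0 historically scored lower on the test, encoding a historical bias.
group = rng.integers(0, 2, size=200)
experience = rng.normal(5, 2, size=200)
score = rng.normal(70, 10, size=200) - 8 * (group == 0)
X = np.column_stack([experience, score])

labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)

# "Balance" of each cluster: fraction of members from group 0.
# A socially robust clustering keeps this close to the overall fraction (~0.5).
for c in range(2):
    frac = (group[labels == c] == 0).mean()
    print(f"cluster {c}: fraction from group 0 = {frac:.2f}")
```

In a setup like this, the historically disadvantaged group tends to concentrate in the lower-scoring cluster, which is exactly the kind of inherited bias the episode warns about.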

In addition, Anshuman talked about the possibility of using hierarchical clustering to build a fairer model. He delved deeper into how hierarchical clustering comes in handy for determining the best k to use and for interpreting the results.
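One reason hierarchical clustering helps with choosing k is that a single linkage tree (dendrogram) can be cut at any number of clusters after the fact. Below is a minimal sketch using SciPy; it is a generic illustration, not a method from Anshuman’s paper.

```python
# Sketch: build one hierarchical clustering tree, then cut it at several k.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(1)
# Three synthetic blobs in 2D.
X = np.vstack([rng.normal(loc, 0.5, size=(30, 2)) for loc in ([0, 0], [4, 0], [2, 4])])

Z = linkage(X, method="ward")   # the full dendrogram is built once

# Cut the same tree at different values of k and compare the partitions.
for k in (2, 3, 4):
    labels = fcluster(Z, t=k, criterion="maxclust")
    print(f"k={k}: cluster sizes = {np.bincount(labels)[1:]}")
```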

Anshuman also talked about how to find the best tree from a dendrogram in a hierarchical clustering model, and how to optimize the result to be as fair as possible. He related this to his study, which involved building a model with an in-process approach to fairness. Going forward, Anshuman discussed the datasets he used to test the results of his study. Speaking of model evaluation, he talked about finding a balance between accuracy and fairness, and explained which is more important.
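The accuracy-versus-fairness trade-off can be pictured as scoring candidate partitions by a combined objective. The sketch below is an illustrative stand-in (the cost and imbalance functions, and the weight `lam`, are my assumptions, not the objective from the paper): it cuts one dendrogram at several k and keeps the cut with the best combined score.

```python
# Sketch: choose among dendrogram cuts by trading off clustering cost
# against a simple per-cluster imbalance measure.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

def cost(X, labels):
    """Within-cluster sum of squared distances to each cluster mean."""
    return sum(((X[labels == c] - X[labels == c].mean(axis=0)) ** 2).sum()
               for c in np.unique(labels))

def imbalance(group, labels):
    """Worst deviation of a cluster's group-0 fraction from the overall fraction."""
    overall = (group == 0).mean()
    return max(abs((group[labels == c] == 0).mean() - overall)
               for c in np.unique(labels))

rng = np.random.default_rng(2)
X = rng.normal(size=(200, 2))
group = rng.integers(0, 2, size=200)

Z = linkage(X, method="average")
lam = 100.0  # hypothetical weight controlling the accuracy/fairness trade-off
best = min(
    (fcluster(Z, t=k, criterion="maxclust") for k in range(2, 8)),
    key=lambda labels: cost(X, labels) + lam * imbalance(group, labels),
)
print("chosen number of clusters:", len(np.unique(best)))
```

Raising `lam` pushes the choice toward more balanced (fairer) cuts at some cost in clustering quality, which mirrors the balance between accuracy and fairness discussed in the episode.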

In a bid to build a fair model, however, some features can be intrinsically more important than others. For instance, age is strongly correlated with wealth, which can make a credit card prediction model biased towards age. Anshuman spoke about how to approach problems that have intrinsic social considerations against the backdrop of fairness. He also spoke about whether there is a need for some regulation of machine learning models.
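This is also why simply dropping a sensitive feature often does not remove bias: a remaining feature can act as a proxy when it is strongly correlated with it. The numbers below are synthetic and purely illustrative of the age/wealth example mentioned above.

```python
# Sketch: a proxy feature (wealth) still carries most of the signal of a
# sensitive feature (age) even if age itself is removed from the model.
import numpy as np

rng = np.random.default_rng(3)
age = rng.uniform(20, 70, size=1000)
wealth = 1500 * age + rng.normal(0, 15000, size=1000)  # wealth grows with age

print("corr(age, wealth) =", round(np.corrcoef(age, wealth)[0, 1], 2))
```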

Wrapping up, Anshuman took a brief dive into the workings of the fairness model discussed in his paper and how you can use it to assess the fairness of your own machine learning model. He spoke briefly about his next line of research. You can find him and learn more about his work on his website or on Google Scholar.

Anshuman Chhabra