On the show today, we are joined by Julian Michael, a postdoc at the New York University Center for Data Science. Julian works in the Alignment Research Group, where he studies how to supervise highly capable AI systems so that they behave desirably. Our conversation centered on the NLP Community Metasurvey: a survey aimed at understanding expert opinions on controversial issues in the NLP community.
Julian began by introducing AI alignment and how the field has progressed over the years. He pointed out its relationship to adjacent terms such as AI safety, AI ethics, and responsible AI, and shared some approaches to better aligning AI systems with a given objective function.
Julian then dived into the NLP Community Metasurvey. He shared the motivation behind kick-starting the survey, discussed the demographics of the survey population, how he reached out to respondents, and the challenges he encountered. In addition, he shared some critical decisions he and his team made during the survey design process, and mentioned some of the questions asked in the survey.
Julian discussed exciting results from the survey and touched on the response bias it had. He also highlighted how the NLP community has reacted to the survey results. Wrapping up, he shared future plans for the survey. You can learn more about Julian on his website and follow him on Twitter @_julianmichael_.
I am a postdoc at the NYU Center for Data Science working with Sam Bowman. I earned my PhD from the University of Washington where my advisor was Luke Zettlemoyer.