A Psychopathological Approach to Safety in AGI



2023-05-23

In this episode, we are joined by Vahid Behzadan, an Assistant Professor of Computer Science and Data Science at the University of New Haven. He is also the director of the Secure and Assured Intelligent Learning (SAIL) research group. Vahid joins us to discuss the safety of AGIs from a psychopathological standpoint.

Vahid began by discussing how new AI trends are a force for good but can also be a force for evil. He explained that many failures of AI systems stem from our inability to adequately capture their objectives during training. He also noted that the complexity of the universe forces machine learning engineers to build their models around abstractions.
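
As a rough illustration of this objective-capture problem, here is a toy Python sketch of my own (not something from the episode): a planner's abstract world model omits a variable that the intended objective cares about, so the plan that looks optimal in the abstraction is the wrong one once the full objective is measured.

```python
# Toy sketch (illustrative only): an abstract model of the world omits a
# variable, so a plan that is optimal in the abstraction performs badly
# when judged against the full, intended objective.

# Full world state per plan: (time_to_goal, damage_caused). The intended
# objective penalizes damage, but the abstraction only tracks time.
PLANS = {
    "shortcut_through_garden": {"time": 3, "damage": 5},
    "stay_on_road":            {"time": 5, "damage": 0},
}

def abstract_cost(plan):
    """Cost in the abstracted model: time only."""
    return PLANS[plan]["time"]

def true_cost(plan, damage_weight=10):
    """Intended objective: time plus a heavy penalty for damage."""
    return PLANS[plan]["time"] + damage_weight * PLANS[plan]["damage"]

best_in_abstraction = min(PLANS, key=abstract_cost)
best_in_reality     = min(PLANS, key=true_cost)
print("abstract optimum:", best_in_abstraction)   # shortcut_through_garden
print("true optimum:    ", best_in_reality)       # stay_on_road
```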

Vahid explained why there will always be some unpredictability in AGIs, and gave his take on the possibility of an AGI's unpredictable actions cascading into multiple failures. He then discussed the communication barrier between human and machine agents, noting that if the communication policy of even a reflex agent is complex enough, its behavior becomes more sophisticated. He cited the work of the Foerster Lab for AI Research (FLAIR) on emergent communication for machine learning, and also mentioned the paper on the Emergence of Adversarial Communication in Multi-Agent Reinforcement Learning.
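
To make the emergent-communication idea concrete, here is a minimal, hypothetical tabular sketch (not FLAIR's code or the paper's setup): a speaker sees a target object and sends one of a few symbols, a listener sees only the symbol and guesses the object, and a shared signaling convention tends to emerge from reward alone.

```python
import random

# Minimal emergent-communication sketch: both agents are rewarded when
# the listener's guess matches the speaker's target, and simple tabular
# value updates let a shared "protocol" emerge.

N_OBJECTS, N_SYMBOLS = 3, 3
speaker  = [[0.0] * N_SYMBOLS for _ in range(N_OBJECTS)]   # value of symbol s given object o
listener = [[0.0] * N_OBJECTS for _ in range(N_SYMBOLS)]   # value of guess g given symbol s

def pick(values, eps=0.1):
    """Epsilon-greedy choice over a list of values."""
    if random.random() < eps:
        return random.randrange(len(values))
    return max(range(len(values)), key=lambda i: values[i])

random.seed(0)
for step in range(5000):
    obj = random.randrange(N_OBJECTS)
    sym = pick(speaker[obj])
    guess = pick(listener[sym])
    reward = 1.0 if guess == obj else 0.0
    # Move each agent's value estimate toward the shared reward.
    speaker[obj][sym]    += 0.05 * (reward - speaker[obj][sym])
    listener[sym][guess] += 0.05 * (reward - listener[sym][guess])

# The greedy protocol is often a consistent one-to-one mapping, though
# with unlucky exploration it can get stuck in a partial convention.
protocol = {o: pick(speaker[o], eps=0.0) for o in range(N_OBJECTS)}
print("object -> symbol:", protocol)
```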

Vahid spoke about the potential of training machines to replicate human-level cognitive abilities and discussed two side effects that could emerge from this, citing his paper, A Psychopathological Approach to Safety Engineering in AI and AGI. He also shared how the joint study of psychopathology and AI began, and voiced his reservations about whether Isaac Asimov's three laws of robotics can guarantee the safety of AGIs.

Vahid discussed how modeling large language models such as ChatGPT differs from modeling the behavior of AGIs. Using a boat-racing game as an example, he illustrated how RL agents can behave in unexpected ways in complex environments. Wrapping up, Vahid gave his thoughts on future developments in AGI.
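
The boat-racing anecdote (likely the well-known CoastRunners reward-hacking example) can be captured with a back-of-the-envelope sketch; the numbers below are made up for illustration, but they show why circling respawning targets can out-score actually finishing the race.

```python
# Back-of-the-envelope sketch with made-up numbers: the game's score
# rewards hitting targets that respawn, so over a long episode a policy
# that circles the respawning targets out-scores one that finishes.

HORIZON = 1000          # episode length in time steps
FINISH_BONUS = 100      # one-time reward for completing the course
TARGET_REWARD = 10      # reward per respawning target hit
RESPAWN_DELAY = 20      # steps before a hit target reappears

def finish_policy_score():
    """Drive straight to the finish line and stop scoring afterwards."""
    return FINISH_BONUS

def looping_policy_score():
    """Circle a cluster of respawning targets and never finish."""
    hits = HORIZON // RESPAWN_DELAY   # one hit each time a target respawns
    return hits * TARGET_REWARD

print("finish the race:", finish_policy_score())   # 100
print("loop on targets:", looping_policy_score())  # 500
# Under this score, the 'unusual' looping behavior is the optimal policy,
# even though it defeats the designer's intent of winning the race.
```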

You can learn more about Vahid's line of work on his Google Scholar page or the SAIL research group website. Vahid also recommends Less Wrong for conversations and blog posts about AI safety, and AAAI workshops are another great venue for academics to learn about the field.

Vahid Behzadan

Vahid Behzadan is an Assistant Professor in Computer Science and Data Science. He is also the founder and director of the Secure and Assured Intelligent Learning (SAIL) research group, and mentors the University's hacking team. Dr. Behzadan's research is primarily on the safety and security of artificial intelligence and complex adaptive systems. His pioneering work on the security of deep reinforcement learning is recognized as seminal contributions to this growing field of research. Dr. Behzadan's recent work includes publications on the applications of machine learning to security and safety assessment of complex systems, such as driverless cars, unmanned aerial systems, and smart cities. He has presented numerous invited talks on the security of machine learning, and applications of data science in cybersecurity at venues such as the Open Web Application Security Project (OWSAP) foundation, the Black Hat conference, and UK's Transportation Research Laboratory (TRL). His research on the psychopathological approach to AI safety has been featured in national and international news.