AGI Can Be Safe





We are joined by Koen Holtman, an independent AI researcher focusing on AI safety. Koen is the Founder of Holtman Systems Research, a research company based in the Netherlands.

Koen opened the conversation with his take on the likelihood of an AI apocalypse in the coming years. He discussed the obedience problem with AI models and what a safe form of obedience would look like.

Koen explained the Markov Decision Process (MDP) framework and how it is used to build machine learning models.
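As a rough illustration (a hypothetical example, not taken from the episode): an MDP is defined by states, actions, transition probabilities, and rewards, and value iteration computes the expected discounted return of acting optimally. A minimal sketch over a made-up two-state MDP:

```python
# Toy MDP (hypothetical): P[state][action] -> list of (prob, next_state, reward)
P = {
    0: {"stay": [(1.0, 0, 0.0)], "go": [(0.8, 1, 1.0), (0.2, 0, 0.0)]},
    1: {"stay": [(1.0, 1, 2.0)], "go": [(1.0, 0, 0.0)]},
}
GAMMA = 0.9  # discount factor for future rewards


def value_iteration(P, gamma, iters=200):
    """Repeatedly apply the Bellman optimality update until values settle."""
    V = {s: 0.0 for s in P}
    for _ in range(iters):
        V = {
            s: max(
                sum(p * (r + gamma * V[s2]) for p, s2, r in outcomes)
                for outcomes in P[s].values()
            )
            for s in P
        }
    return V


V = value_iteration(P, GAMMA)
# The optimal policy in each state is the action that maximizes
# the expected discounted return under these values.
```

Here state 1 is the rewarding state (staying there yields 2 per step), so its converged value is 2 / (1 - 0.9) = 20, and state 0's value is slightly lower because reaching state 1 is stochastic.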

Koen spoke about the problem that AGIs may not allow their utility function to be changed after deployment, and shared an alternative approach to solving it. He explained how to safely engineer AGI systems now and in the future, and how to implement safety layers on AI models.

Koen discussed the ultimate goal of a safe AI system and how to check that an AI system is indeed safe. He discussed the intersection between Large Language Models (LLMs) and MDPs, and shared the key ingredients needed to scale current AI implementations. Wrapping up, he discussed plans for his research. To learn more about Koen, follow him on LinkedIn or visit his website.

This conversation was recorded on 5/27/2023, prior to the EU Parliament vote.

More information about the EU AI Act legislative process:

Koen Holtman

Koen Holtman is a Dutch systems architect who has been working as an independent AI/AGI safety researcher since May 2019. Koen defines AGI systems as hypothetical future AI systems that can solve general problems with a human or better level of competence. He has published mathematical work on the design of safety mechanisms that could be built into such hypothetical future AGI systems, if they are ever invented. Koen has a PhD in Software Design from Eindhoven University of Technology, 20 years of experience in industrial R&D, and 10 years of experience in standards creation. In November 2022 he joined the CEN-CENELEC JTC 21 standards committee to work on the European AI risk management standards that will support the EU AI Act.