AI Decision-Making




2018-02-16


Making a decision is a complex task. Today's guest Dongho Kim discusses how he and his team at Prowler have been building a platform, accessible through APIs and a set of pre-made scripts, for autonomous decision making based on probabilistic modeling, reinforcement learning, and game theory. The aim is for an AI system to make decisions as well as a human can.

Today's episode is sponsored by the Mendoza College of Business at Notre Dame, offering an MS in Business Analytics at their downtown Chicago campus.

In multi-agent systems, an agent usually does not have complete information about the preferences and decision-making processes of other agents. In the video gaming world, which is the first area Prowler is tackling, Dongho is curious about why humans make the kinds of decisions they do. This gets into the notion of bounded rationality, a theory originally proposed by the economist and social scientist Herbert A. Simon, which describes a decision-making process in which one attempts to make a decision that is good enough, rather than the best possible decision, because of resource limitations. Models of bounded rationality seek to formalize decision-making under limited information-processing and information-gathering capabilities.
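As a rough illustration (not anything from Prowler's platform), Simon's "good enough" idea is often modeled as satisficing: examine options one at a time and stop at the first one that clears an aspiration level, or when the evaluation budget runs out. The function and example values below are hypothetical.

```python
def satisfice(options, utility, aspiration, budget):
    """Bounded-rational choice (Simon's satisficing): evaluate options
    sequentially and return the first one whose utility meets the
    aspiration level. If the evaluation budget is exhausted first,
    fall back to the best option seen so far."""
    best = None
    for i, option in enumerate(options):
        if i >= budget:
            break  # limited resources: stop gathering information
        u = utility(option)
        if best is None or u > utility(best):
            best = option
        if u >= aspiration:
            return option  # good enough; stop searching
    return best

# Hypothetical example: accept the first offer worth at least 6,
# evaluating at most 4 offers.
offers = [3, 7, 5, 9, 2, 8]
choice = satisfice(offers, utility=lambda x: x, aspiration=6, budget=4)
# stops at 7, even though 9 (the optimum) appears later
```

Note that a fully rational agent would scan all six offers and pick 9; the satisficer trades optimality for a bounded search cost.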

Multi-agent systems can be based on a variety of approaches, many of which get blended together: reinforcement learning, deep learning, and Bayesian methods are all being synthesized. Dongho explains why his team is most interested in methodologies based on Bayesian processes rather than deep learning. His team does, however, like to use reinforcement learning in the video gaming industry. As creators develop more realistic graphics and interfaces, there's a big push to make game agents more human-like. There are challenges in approaching multi-agent problems, though. One big challenge is pinpointing what the actual problem is; another is coordination: how agents share information and communicate.
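For readers unfamiliar with reinforcement learning, the core mechanic is simple enough to sketch. This is a generic tabular Q-learning update (the textbook algorithm, not Prowler's method): nudge the value of a state-action pair toward the observed reward plus the discounted value of the best next action.

```python
def q_learning_step(Q, state, action, reward, next_state, actions,
                    alpha=0.1, gamma=0.9):
    """One tabular Q-learning update.

    Q is a dict mapping (state, action) -> estimated value.
    alpha is the learning rate, gamma the discount factor."""
    # Value of the greedy action in the next state (0.0 if unseen).
    best_next = max(Q.get((next_state, a), 0.0) for a in actions)
    old = Q.get((state, action), 0.0)
    # Move the old estimate toward the bootstrapped target.
    Q[(state, action)] = old + alpha * (reward + gamma * best_next - old)
    return Q

# Hypothetical two-state example: taking action 0 in state 's' yields
# reward 1.0 and lands in state 't'.
Q = {}
q_learning_step(Q, 's', 0, 1.0, 't', actions=[0, 1])
# Q[('s', 0)] is now 0.1: one small step toward the true value
```

A game agent repeats this update over many episodes of play; the learned Q-values then drive its decisions.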

Recent research efforts often take one of two seemingly conflicting perspectives: the decentralized perspective, where each agent has its own controller, and the centralized perspective, where a single larger model controls all agents. In this regard, one line of work revisits the idea of the master-slave architecture, incorporating both perspectives within one framework.