We are excited to be joined by Josh Albrecht, the CTO of Imbue. Imbue is a research company whose mission is to create AI agents that are more robust, safer, and easier to use. He joins us to share a key finding from his work: despite "super-human" performance on benchmarks, current LLMs are unsuited for making decisions about ethics and safety.
Josh started by discussing how the ethics of LLMs can be evaluated, walking through sample questions from the ETHICS dataset used to prompt the language model and comparing the model's answers to human judgments. He also covered the possibility of adversarial inputs and their impact on the model's behavior.
Josh then discussed how training data can enhance a model's ethical behavior, shared his outlook on the future of LLMs, and highlighted essential considerations before LLMs are adopted across the board. He also described ways to evaluate LLMs beyond raw accuracy.
Follow Josh on Twitter @joshalbrecht.