George discusses Google’s Dataset Search exiting its closed beta, and its potential applications for businesses, scholars, and hobbyists.
Alex brings an article about Activation Atlases, and the group discusses its applicability to machine learning interpretability.
Lan leads a discussion of the paper Attention is not Explanation by Sarthak Jain and Byron C. Wallace, which explores the relationship between attention weights and feature importance scores (spoiler in the title).
Kyle shamelessly promotes his blog post on using LIME to explain a simple prediction model trained on Wikipedia data.