Amongst the hardest problems in computer science are challenges around getting machines to understand and generate natural language. This season is an exploration of current research, applications, and thought leaders with something to say about artificial intelligence.
GitHub is many things besides source control. It's a social network, even though not everyone realizes it. It's a vast repository of code. It's a ticketing and project management system. And of course, it has search as well.
The earliest efforts to apply machine learning to natural language tended to convert every token (every word, more or less) into a unique feature. While techniques like stemming may have cut the number of unique tokens down, researchers still faced a high-dimensional problem. The Naive Bayes algorithm was celebrated in NLP applications for its ability to process high-dimensional data efficiently.
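As a rough illustration of that setup, here is a minimal sketch of bag-of-words features feeding a Naive Bayes classifier using scikit-learn; the tiny corpus and its spam/ham labels are hypothetical.

```python
# A minimal sketch of Naive Bayes text classification with scikit-learn;
# the tiny corpus and its labels are hypothetical.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

docs = ["free prize, claim now", "meeting moved to noon",
        "win cash instantly", "lunch at the usual place"]
labels = ["spam", "ham", "spam", "ham"]

# Every unique token becomes one feature, so the matrix is high-dimensional
# and sparse; Naive Bayes handles this cheaply.
model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(docs, labels)
print(model.predict(["claim your free cash now"]))  # -> ['spam']
```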
In a recent paper, Leveraging Discourse Information Effectively for Authorship Attribution, authors Su Wang, Elisa Ferracane, and Raymond J. Mooney describe a deep learning methodology for predicting which of a collection of authors wrote a given document.
Word2vec is an unsupervised machine learning model that captures semantic information from the text it is trained on. The model is based on neural networks. Several large organizations like Google and Facebook have trained word embeddings (the output of word2vec) on large corpora and shared them for others to use.
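To give a feel for how these shared embeddings are consumed, here is a minimal sketch using gensim's downloader to fetch Google's News-trained vectors (a large download on first use); the query words are arbitrary examples.

```python
# A minimal sketch of loading and querying pre-trained word2vec embeddings
# with gensim; the first call downloads the Google News vectors (~1.6 GB).
import gensim.downloader as api

wv = api.load("word2vec-google-news-300")

# Nearest neighbors in the embedding space capture semantic similarity.
print(wv.most_similar("king", topn=3))

# The classic analogy: king - man + woman is close to queen.
print(wv.most_similar(positive=["king", "woman"], negative=["man"], topn=1))
```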
In the first half of this episode, Kyle speaks with Marc-Alexandre Côté and Wendy Tay about TextWorld. TextWorld is an engine that simulates text adventure games. Developers are encouraged to try out their reinforcement learning skills by building agents that can programmatically interact with the generated text adventure games.
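For a sense of the interface, here is a hedged sketch of an agent loop in the style of TextWorld's documentation; the game file name is a placeholder for one generated with the tw-make tool, and the hard-coded command stands in for a real agent's policy.

```python
# A sketch of programmatically playing a TextWorld game; "my_game.ulx" is a
# placeholder for a game generated with the tw-make command-line tool.
import textworld

env = textworld.start("my_game.ulx")   # load a generated game
game_state = env.reset()               # initial observation text

for _ in range(10):                    # cap episode length for this sketch
    command = "go north"               # a real agent would pick a command
    game_state, reward, done = env.step(command)
    if done:
        break
print("score:", reward)
```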
In the second half of this episode, Kyle interviews Kevin Patel about his paper Towards Lower Bounds on Number of Dimensions for Word Embeddings. In this research, they explore the important question of how many hidden nodes to use when creating a word embedding.
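To make the hyperparameter in question concrete, here is a sketch that trains word2vec at several dimensionalities using gensim's vector_size parameter; the toy corpus is hypothetical and far too small to reveal any real trend.

```python
# A sketch of varying embedding dimensionality (gensim's vector_size);
# the toy corpus is hypothetical and far too small for meaningful results.
from gensim.models import Word2Vec

sentences = [["the", "cat", "sat"], ["the", "dog", "ran"],
             ["a", "cat", "ran"]] * 100

for dim in (10, 50, 300):
    model = Word2Vec(sentences, vector_size=dim, window=2, min_count=1, seed=1)
    print(dim, model.wv.most_similar("cat", topn=1))
```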
One of the most challenging NLP tasks is natural language understanding and reasoning. How can we construct algorithms that are able to achieve human-level understanding of text and answer general questions about it?
Kyle interviews Julia Silge about her path into data science, her book Text Mining with R, and some of the ways in which she's used natural language processing in projects both personal and professional.
Machine transcription (the process of converting audio recordings of speech into text) has come a long way in recent years. But how do the errors made during machine transcription compare to the errors made by a human transcriber? Find out in this episode!
Bilingual evaluation understudy (or BLEU) is a metric for evaluating the quality of machine translation, using human translations as examples of acceptably high-quality results. This metric has become a widely used standard in the research literature. But is it a perfect measure of machine translation quality?
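As an illustration, here is a minimal sketch of computing a sentence-level BLEU score with NLTK; the reference and hypothesis sentences are hypothetical.

```python
# A minimal sketch of sentence-level BLEU with NLTK; sentences are made up.
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

references = [["the", "cat", "is", "on", "the", "mat"]]  # human translation(s)
hypothesis = ["the", "cat", "sat", "on", "the", "mat"]   # machine output

# Smoothing avoids zero scores when a higher-order n-gram has no match.
score = sentence_bleu(references, hypothesis,
                      smoothing_function=SmoothingFunction().method1)
print(round(score, 3))
```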
ELMo (Embeddings from Language Models) introduced the idea of deep contextualized word representations. It extends previous ideas like word2vec and GloVe. The ELMo model is a neural network able to map natural language into a vector space. This vector space, out of the box, proved to be incredibly useful in a wide variety of seemingly unrelated NLP tasks like sentiment analysis and named entity recognition.
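As one illustration, here is a hedged sketch using the ElmoEmbedder from the older AllenNLP (0.x) releases, which downloads pre-trained weights on first use; the sentences are arbitrary, and newer AllenNLP versions expose ELMo differently.

```python
# A sketch of contextualized embeddings via AllenNLP 0.x's ElmoEmbedder;
# the example sentences are arbitrary.
from allennlp.commands.elmo import ElmoEmbedder

elmo = ElmoEmbedder()  # downloads the default pre-trained ELMo weights

# One vector per token per layer: shape (3 layers, 6 tokens, 1024 dims).
money = elmo.embed_sentence(["I", "deposited", "cash", "at", "the", "bank"])
print(money.shape)

# Unlike word2vec, "bank" gets a different vector in a different context.
river = elmo.embed_sentence(["we", "sat", "on", "the", "river", "bank"])
```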
Modern messaging technology has facilitated a trend towards highly compact, short messages sent by users who can presume a great amount of context held between the communicating parties. The rules of grammar may be discarded, and visible errors are often a normal part of the conversation.
>>> Good mornink
Yet such short messages are also important for businesses whose users are unlikely to read a large block of text upon completing an order. Similarly, a business might want to offer assistance and effective question-answering solutions in an automated and ideally multilingual way. In this episode, we discuss techniques for designing such solutions.
When users on Twitter post with geographic tags, it creates the opportunity to pose a variety of interesting questions about language, dialects, and location. In this episode, Kyle interviews Bruno Gonçalves about his work studying language in this way.
Kyle and Linh Da discuss the class of approaches called "Named Entity Recognition" or NER. NER algorithms take any string as input and return a list of "entities" - the specific facts and agents mentioned in the text, along with a classification of each entity's type (e.g. person, date, place).
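Here is a minimal sketch of NER in practice using spaCy; it assumes the small English model has been installed (python -m spacy download en_core_web_sm), and the sentence is made up.

```python
# A minimal sketch of named entity recognition with spaCy; assumes
# the en_core_web_sm model has already been downloaded.
import spacy

nlp = spacy.load("en_core_web_sm")
doc = nlp("Satya Nadella spoke at Microsoft in Seattle on May 6, 2019.")

for ent in doc.ents:
    # Each entity comes with a type label, e.g. PERSON, ORG, GPE, DATE.
    print(ent.text, ent.label_)
```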
Priyanka Biswas joins us in this episode to discuss natural language processing for languages that do not have as many resources as those that are more commonly studied, such as English. Successful NLP projects benefit from the availability of resources like large corpora, well-annotated corpora, software libraries, and pre-trained models. For languages that researchers have not paid as much attention to, these tools are not always available.
In 2017, Facebook published a paper called Deal or No Deal? End-to-End Learning for Negotiation Dialogues. In this research, the reinforcement learning agents developed a mechanism of communication (which could be called a language) that enabled them to optimize their scores in the negotiation game. Many media sources reported this as if it were a first step towards Skynet taking over. In this episode, Kyle discusses bargaining agents and the actual results of this research.
Video annotation is an expensive and time-consuming process. As a consequence, the available video datasets are useful but small. The availability of machine-transcribed explainer videos offers a unique opportunity to rapidly develop a useful, if dirty, corpus of videos that are "self-annotating", as hosts explain the actions they are taking on the screen.
Tim Niven joins us this week to discuss his work exploring the limits of what BERT can do on certain natural language tasks, including its behavior under adversarial attacks and its capacity for compositional and systematic learning.
While at MS Build 2019, Kyle sat down with Lance Olson from the Applied AI team to talk about how tools like Cognitive Services and Cognitive Search enable non-data scientists to access relatively advanced NLP tools out of the box, and how more advanced data scientists can focus more of their time on bigger-picture problems.
The modern deep learning approaches to natural language processing are voracious in their demands for large training corpora. Folk wisdom once held that around 100k documents were required for effective training. The availability of broadly trained, general-purpose models like BERT has made it possible to use transfer learning to achieve novel results on much smaller corpora.
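As a rough illustration of that transfer learning recipe, here is a hedged sketch that fine-tunes a pre-trained BERT model for sentiment classification using the Hugging Face transformers library; the two-example dataset is hypothetical and only stands in for a small labeled corpus.

```python
# A sketch of transfer learning from pre-trained BERT with Hugging Face
# transformers; the tiny labeled dataset is hypothetical.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2)  # new head on pre-trained weights

texts = ["great product, works as advertised", "arrived broken and late"]
labels = torch.tensor([1, 0])
batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")

optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)
model.train()
for _ in range(3):  # a few gradient steps stand in for a real training loop
    loss = model(**batch, labels=labels).loss
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
```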
Alex Reeves joins us to discuss some of the challenges around building a serverless, scalable, generic machine learning pipeline. This is a technical deep dive on architecting solutions and a discussion of some of the design choices made.