Welcome to the fourth edition of ML News Monthly – Jan 2021!!
Here are the key happenings this month in the Machine Learning field that I think are worth knowing about. 🕸
1) SWITCH TRANSFORMERS: SCALING TO TRILLION PARAMETER MODELS WITH SIMPLE AND EFFICIENT SPARSITY
Google researchers developed and benchmarked techniques they claim enabled them to train a language model containing more than a trillion parameters. They say their 1.6-trillion-parameter model, which appears to be the largest to date, achieved up to a 4x pre-training speedup over Google's previously largest language model (T5-XXL).
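The "simple sparsity" in the title is top-1 routing: each token is sent to exactly one expert network, so compute stays roughly constant no matter how many experts (and thus parameters) you add. A toy numpy sketch of the idea, with small linear "experts" standing in for the real FFN experts:

```python
import numpy as np

rng = np.random.default_rng(0)
n_tokens, d_model, n_experts = 4, 8, 3

# Toy token representations, a router projection, and one "expert" per slot.
tokens = rng.normal(size=(n_tokens, d_model))
router_w = rng.normal(size=(d_model, n_experts))
experts = rng.normal(size=(n_experts, d_model, d_model))

# Switch routing: softmax over experts, then keep only the top-1 choice,
# so each token touches a single expert's weights.
logits = tokens @ router_w
probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
choice = probs.argmax(axis=1)  # one expert index per token

out = np.empty_like(tokens)
for i, e in enumerate(choice):
    # Scaling by the router probability keeps the router differentiable.
    out[i] = probs[i, e] * (tokens[i] @ experts[e])

print(out.shape)  # (4, 8)
```

This omits everything that makes the real layer work at scale (capacity factors, load-balancing loss, expert parallelism), but it shows why parameter count and per-token compute decouple.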
2) On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?
The past 3 years of work in NLP have been characterized by the development and deployment of ever larger language models, especially for English. BERT, its variants, GPT-2/3, and others, most recently Switch-C, have pushed the boundaries of the possible both through architectural innovations and through sheer size.
In this paper, authors take a step back and ask: How big is too big? What are the possible risks associated with this technology and what paths are available for mitigating those risks?
3) A criticism of "On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?"
The criticism has two parts:
- The paper is attacking the wrong target.
- The paper takes one-sided political views, without presenting them as such and without presenting the alternative views.
4) Machine Learning: The Great Stagnation
5) AIs that read sentences are now catching coronavirus mutations
NLP algorithms are now able to generate protein sequences and predict virus mutations, including key changes that help the coronavirus evade the immune system. The key insight making this possible is that many properties of biological systems can be interpreted in terms of words and sentences.
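The "words and sentences" framing is concrete: before an NLP model can read a protein, the amino-acid string has to be split into token "words", commonly overlapping k-mers. A minimal illustrative sketch (the sequence fragment here is made up):

```python
def kmer_words(sequence, k=3):
    """Split a protein sequence into overlapping k-mer 'words',
    the way a tokenizer splits text before feeding a language model."""
    return [sequence[i:i + k] for i in range(len(sequence) - k + 1)]

# A short, purely illustrative amino-acid fragment.
fragment = "MFVFLVLLPLVSS"
print(kmer_words(fragment))
# ['MFV', 'FVF', 'VFL', ...]
```

Once sequences are tokenized this way, a standard language model can learn which "sentences" (sequences) are grammatical (viable) and which substitutions change the "meaning" (immune escape).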
6) DALL·E: Creating Images from Text
DALL·E is a 12-billion parameter version of GPT-3 trained to generate images from text descriptions, using a dataset of text–image pairs.
7) Machine learning is going real-time
There seems to be little consensus on what real-time ML means, and there hasn't been a lot of in-depth discussion on how it's done in industry. In this post, the author shares what they have learnt after talking to about a dozen companies that are doing it.
8) Let’s review productized GPT-3 together
The authors have created a comprehensive and, more importantly, collaborative map showing what entrepreneurs have built with GPT-3.
9) Finding the Words to Say: Hidden State Visualizations for Language Models
By visualizing the hidden state between a model’s layers, we can get some clues as to the model’s “thought process”.
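The common trick behind such visualizations is to project each intermediate hidden state onto the vocabulary using the model's output embedding matrix, showing the model's provisional "guess" at every layer. A toy numpy sketch with random stand-ins for the real matrices:

```python
import numpy as np

rng = np.random.default_rng(1)
vocab = ["the", "cat", "sat", "mat", "dog"]
d_model = 16

# Stand-ins: an output embedding matrix and one hidden state per layer.
embed = rng.normal(size=(len(vocab), d_model))
hidden_states = rng.normal(size=(4, d_model))  # 4 "layers"

def top_token(h):
    """Project a hidden state onto the vocabulary and return the
    highest-scoring token -- the model's interim 'guess'."""
    logits = embed @ h
    return vocab[int(np.argmax(logits))]

# Watch the provisional prediction evolve layer by layer.
for layer, h in enumerate(hidden_states):
    print(layer, top_token(h))
```

With a real model you would take `hidden_states` from the network itself rather than random vectors; the projection step is the same.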
10) Insightful AI Books To Read in 2021
11) How NLP can make travelling more accessible
IIT-Madras scientists have launched AI4Bharat to boost AI innovation in India. AI4Bharat is a community of engineers, domain experts, policymakers, and academicians collaborating to build AI solutions to solve the problems very specific to India. The ongoing projects from AI4Bharat around NLP include Signboard Translation from Vernacular Languages, Fonts for Indian Scripts, Word embeddings for Indian Languages, and many more.
12) A new open data set for multilingual speech research
Facebook AI is releasing Multilingual LibriSpeech (MLS), a large-scale, open source data set designed to help advance research in automatic speech recognition (ASR). MLS is designed to help the speech research community’s work in languages beyond just English so people around the world can benefit from improvements in a wide range of AI-powered services.
13) BERT for easier NLP/NLU
This article introduces BERT and covers how to use it for NLP/NLU tasks; sentiment classification is presented as a case study, with code.
14) A quick guide to managing machine learning experiments
This article talks about how to organize your machine learning experiments, trials, jobs, and metadata with Amazon SageMaker, and gain some peace of mind.
15) Leveraging language technology for national good: Initiatives by the Indian govt.
16) Studying Catastrophic Forgetting in Neural Ranking Models
In this paper, the authors study to what extent neural ranking models catastrophically forget old knowledge acquired from previously observed domains after acquiring new knowledge, leading to a performance decrease on those domains.
17) Can a Fruit Fly Learn Word Embeddings?
The mushroom body of the fruit fly brain is one of the best studied systems in neuroscience. At its core it consists of a population of Kenyon cells, which receive inputs from multiple sensory modalities. These cells are inhibited by the anterior paired lateral neuron, thus creating a sparse high dimensional representation of the inputs.
In this work, the authors study a mathematical formalization of this network motif and apply it to learning the correlational structure between words and their context in a corpus of unstructured text, a common natural language processing (NLP) task.
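The network motif itself is simple to sketch: a sparse random expansion into many "Kenyon cells", followed by global inhibition that silences all but the top-k most active units, yielding a sparse high-dimensional binary code. A toy numpy version (sizes are illustrative, not the paper's):

```python
import numpy as np

rng = np.random.default_rng(2)
d_in, d_kenyon, k_active = 50, 400, 20

# Sparse random projection: inputs fan out to many Kenyon cells.
projection = (rng.random((d_kenyon, d_in)) < 0.1).astype(float)

def fly_hash(x):
    """Expand the input, then let 'inhibition' silence all but the
    k most active units -- a sparse, high-dimensional binary code."""
    activity = projection @ x
    code = np.zeros(d_kenyon)
    code[np.argsort(activity)[-k_active:]] = 1.0
    return code

x = rng.normal(size=d_in)
code = fly_hash(x)
print(int(code.sum()))  # 20 active units out of 400
```

In the paper this motif is trained so that words appearing in similar contexts end up with overlapping sparse codes, i.e. word embeddings.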
18) DAGA: Data Augmentation with a Generation Approach for Low-resource Tagging Tasks
Data augmentation techniques have been widely used to improve machine learning performance as they facilitate generalization. In this work, the authors propose a novel augmentation method to generate high-quality synthetic data for low-resource tagging tasks with language models trained on the linearized labeled sentences.
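"Linearized labeled sentences" means interleaving tags into the token stream so that an ordinary language model can be trained on them (and then sampled from to produce new labeled data). A minimal sketch of one such linearization, where each non-O tag is placed before its token:

```python
def linearize(tokens, tags):
    """Interleave each non-O tag before its token, producing the flat
    sequence a language model can be trained on and sampled from."""
    out = []
    for tok, tag in zip(tokens, tags):
        if tag != "O":
            out.append(tag)
        out.append(tok)
    return " ".join(out)

tokens = ["John", "lives", "in", "Paris"]
tags = ["B-PER", "O", "O", "B-LOC"]
print(linearize(tokens, tags))
# B-PER John lives in B-LOC Paris
```

Sampling from a model trained on such sequences yields new sentences with tags already attached, which are then de-linearized back into (token, tag) pairs as synthetic training data.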
19) Saying No is An Art: Contextualized Fallback Responses for Unanswerable Dialogue Queries
Despite end-to-end neural systems making significant progress in the last decade for task-oriented as well as chit-chat based dialogue systems, most dialogue systems rely on hybrid approaches which use a combination of rule-based, retrieval, and generative approaches for generating a set of ranked responses. Such dialogue systems need to rely on a fallback mechanism to respond to out-of-domain or novel user queries which are not answerable within the scope of the dialogue system.
The authors make use of rules over dependency parses and a text-to-text transformer fine-tuned on synthetic data of question-response pairs, generating highly relevant, grammatical, as well as diverse questions. They perform automatic and manual evaluations to demonstrate the efficacy of the system.
Courses / Resources
20) 5x Speedup on CI/CD via GitHub Actions' strategy.matrix
This post will show you how to use strategy.matrix and GitHub Packages to significantly reduce the time spent on GitHub workflows. For the authors, these tricks cut their testing time from 40 minutes to 8 minutes!
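The core of the trick is that a `strategy.matrix` block fans a single job definition out into one parallel job per combination. An illustrative workflow fragment (job names, paths, and groups here are hypothetical, not taken from the post):

```yaml
# Illustrative: the matrix below fans "test" out into 3 x 2 = 6 parallel jobs.
name: tests
on: [push]
jobs:
  test:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        python-version: ["3.7", "3.8", "3.9"]
        test-group: [unit, integration]
    steps:
      - uses: actions/checkout@v2
      - uses: actions/setup-python@v2
        with:
          python-version: ${{ matrix.python-version }}
      - run: pip install -r requirements.txt
      - run: pytest tests/${{ matrix.test-group }}
```

Splitting the test suite into groups is what turns a serial 40-minute run into several shorter parallel ones.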
21) [Podcast] Building ML teams and finding ML jobs
22) Gartner: SaaS Will Be Even Bigger Than We Thought in 2022+
Gartner has increased its estimates for global enterprise and IT spend for 2021 and 2022, with Enterprise Software and SaaS the biggest beneficiary, projected to grow a stunning 10.2% in 2022.
23) ZenML
ZenML is an extensible, open-source MLOps framework for creating production-ready machine learning pipelines, in a simple way.
24) GENIE
GENIE is a leaderboard for natural language generation tasks. To provide more accurate assessment of progress, it uses human evaluation of the entries, gathered dynamically using crowdsourcing (Amazon Mechanical Turk).
25) The Big Bad NLP Database
26) CLIP
CLIP (Contrastive Language-Image Pre-Training) is a neural network trained on a variety of (image, text) pairs. It can be instructed in natural language to predict the most relevant text snippet, given an image, without directly optimizing for the task, similarly to the zero-shot capabilities of GPT-2 and GPT-3.
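Mechanically, CLIP-style zero-shot classification is just cosine similarity between an image embedding and caption embeddings in a shared space. A toy numpy sketch with random vectors standing in for the real encoders:

```python
import numpy as np

rng = np.random.default_rng(3)
d = 32  # shared embedding dimension

# Stand-ins for CLIP's text encoder outputs, one per candidate caption.
labels = ["a photo of a cat", "a photo of a dog", "a photo of a car"]
text_emb = rng.normal(size=(len(labels), d))
# Fake "image embedding" constructed to sit near the dog caption.
image_emb = text_emb[1] + 0.1 * rng.normal(size=d)

def normalize(v):
    return v / np.linalg.norm(v, axis=-1, keepdims=True)

# Zero-shot classification: pick the caption whose embedding has the
# highest cosine similarity with the image embedding.
sims = normalize(text_emb) @ normalize(image_emb)
print(labels[int(np.argmax(sims))])  # a photo of a dog
```

With the real model, the text and image embeddings come from CLIP's two encoders; the classification step is exactly this similarity ranking.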
27) TextBox
TextBox is built on Python and PyTorch for reproducing and developing text generation algorithms in a unified, comprehensive, and efficient framework for research purposes. The library includes 16 text generation algorithms, covering two major tasks:
- Unconditional (input-free) Generation
- Sequence-to-Sequence (Seq2Seq) Generation, including Machine Translation and Summarization
28) StrategyQA
StrategyQA is a question-answering benchmark focusing on open-domain questions where the required reasoning steps are implicit in the question and should be inferred using a strategy. StrategyQA includes 2,780 examples, each consisting of a strategy question, its decomposition, and evidence paragraphs.
29) The Pile
The Pile is an 825 GiB diverse, open source language modelling dataset that consists of 22 smaller, high-quality datasets combined together.
30) Dashboarding with JupyterLab 3
That's it!!
Let me know if I missed anything or if there's anything you think should be included in a future post.