ML News Monthly – Mar 2021

Welcome to the sixth edition of ML News Monthly!

Here are the key happenings this month in the Machine Learning field that I think are worth knowing about. 🕸

Turing Award goes to researchers who made programming easier and more powerful

Alfred Aho and Jeffrey Ullman win the 2020 Turing Award for pioneering compiler and algorithm work.

COVID-19 Prognosis via Self-Supervised Representation Learning and Multi-Image Prediction

Anuroop Sriram, Matthew Muckley, and colleagues at Facebook, NYU School of Medicine, and NYU Abu Dhabi developed a system that examines X-ray images to predict which COVID-19 patients are at greatest risk of decline.

Brain2Pix: Fully convolutional naturalistic video reconstruction from brain activity

What does the brain see?

Lynn Le, Luca Ambrogioni, and colleagues at Radboud University and the Max Planck Institute for Human Cognitive and Brain Sciences developed Brain2Pix, a system that reconstructs what people saw from scans of their brain activity.

Could The Simpsons replace its voice actors with AI deepfakes?

Scientists developed a clever way to detect Deepfakes by analyzing light reflections in the eyes

Natural Language Processing Named a Foundational AI Technology, According to the 2021 AI in Healthcare Survey Report

An AI is training counselors to deal with teens in crisis

The Trevor Project, a nonprofit organization that operates a 24-hour hotline for LGBTQ youth, uses a “crisis contact simulator” to train its staff to talk with troubled teenagers.

Away From Silicon Valley, the Military Is the Ideal Customer

As large tech companies have backed away from defense work, startups like Anduril, Shield AI, and Teal are picking up the slack. They’re developing autonomous fliers specifically for military operations.

The AI Index Report: Measuring Trends in Artificial Intelligence, by Stanford

Researchers at the Stanford Institute for Human-Centered Artificial Intelligence compiled AI Index 2021 by analyzing academic research, investment reports, and other data sources.

Can Auditing Eliminate Bias from Algorithms?

Entity Identification Capabilities Extended to Personally Identifiable Information (PII) via NL API

The languages that defy auto-translate


Google Overhauls Ethical AI Team

Marian Croak, an accomplished software engineer and vice president of engineering at Google, will lead a new center of expertise in responsible AI. The move came after the exits of her predecessors Timnit Gebru and Margaret Mitchell.

How Facebook got addicted to spreading misinformation

The company’s AI algorithms gave it an insatiable habit for lies and hate speech. Now the man who built them can’t fix the problem.

Google’s new experimental app can scan and categorize your documents

Google has developed an experimental app called Stack, which scans and categorizes your documents.

Amazon introduces choreographed motions with Alexa Presentation Language 1.6

Amazon has unveiled a new version of its Alexa Presentation Language (APL)—apparently APL-based multimodal apps are seeing more than 3x the number of monthly active users compared with voice-only apps.

Google’s AI reservation service Duplex is now available in 49 states

The Duplex reservation service is now available in 49 US states—apparently Louisiana has local laws that rule it out for the moment.

The science behind semantic search: How AI from Bing is powering Azure Cognitive Search

New Fiat 500 ‘Hey Google’ Edition Brands Car as Google Assistant on Wheels

Fiat has partnered with Google to release a ‘Hey Google’ edition of the Fiat 500 that is deeply integrated with Google Assistant.

Constructing Transformers For Longer Sequences with Sparse Attention Methods



Model Assertions for Monitoring and Improving ML Models

ML models are increasingly deployed in settings with real-world interactions, such as vehicles, but these models can fail in systematic ways. To catch such errors, ML engineering teams monitor and continuously improve their models. The authors propose a new abstraction, model assertions, which adapts the classical idea of program assertions as a way to monitor and improve ML models.
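The idea can be sketched in a few lines of Python. Everything below—the `flickering_assertion` helper and the per-frame prediction format—is a hypothetical illustration of the concept, not the authors' actual API:

```python
# Hypothetical sketch of a "model assertion": a predicate over model
# outputs that flags likely errors at runtime, analogous to a program
# assertion. Here we flag "flickering" in video object detection: a
# class that is detected, vanishes for one frame, then reappears is
# probably a missed detection in the middle frame.

def flickering_assertion(frame_predictions):
    """Return indices of frames where a class present in both the
    previous and next frame is missing -- a likely false negative."""
    flagged = []
    for i in range(1, len(frame_predictions) - 1):
        prev = set(frame_predictions[i - 1])
        cur = set(frame_predictions[i])
        nxt = set(frame_predictions[i + 1])
        # Classes present before and after frame i, but missing in it:
        if (prev & nxt) - cur:
            flagged.append(i)
    return flagged

# Per-frame predicted classes from a detector (made-up data):
preds = [["car"], ["car", "person"], ["car"], ["car", "person"], ["car", "person"]]
print(flickering_assertion(preds))  # -> [2]: "person" vanishes for one frame
```

Frames flagged this way can then be routed to a human labeler or used to retrain the model, which is the monitoring-and-improvement loop the abstract describes.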

Graph Neural Networks Including Sparse Interpretability

Explaining Black-Box Algorithms Using Probabilistic Contrastive Counterfactuals

There has been a recent resurgence of interest in explainable artificial intelligence (XAI), which aims to reduce the opaqueness of AI-based decision-making systems so that humans can scrutinize and trust them. The authors propose a principled, causality-based approach for explaining black-box decision-making systems that addresses limitations of existing XAI methods.
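For intuition, a contrastive counterfactual answers "why outcome X, rather than Y?" by exhibiting a minimal change to the input that would flip the decision. Here is a toy sketch of that idea with a made-up linear loan-scoring model; none of the names or numbers come from the paper:

```python
# Toy contrastive counterfactual: find the smallest single-feature
# change that flips a "deny" decision to "approve", answering
# "why was this applicant denied, rather than approved?"

def score(applicant):
    # Hypothetical linear model: approve when score >= 0.
    return 0.1 * applicant["income_k"] - 0.5 * applicant["debts_k"] - 4.0

def counterfactual(applicant, feature, step, max_steps=100):
    """Increase `feature` by `step` until the decision flips;
    return the changed applicant, or None if no flip is found."""
    changed = dict(applicant)
    for _ in range(max_steps):
        if score(changed) >= 0:
            return changed
        changed[feature] += step
    return None

applicant = {"income_k": 30, "debts_k": 2}   # score = -2 -> denied
cf = counterfactual(applicant, "income_k", 5)
print(cf)  # the nearest approved variant along the income axis
```

The paper's contribution is doing this in a principled, probabilistic, causality-aware way over black-box models; the sketch above only conveys the contrastive framing.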

Google Open-Sources AutoML Algorithm Model Search

Finetuning Pretrained Transformers into RNNs

Local Interpretations for Explainable Natural Language Processing: A Survey

Hurdles to Progress in Long-form Question Answering

Understanding Robustness of Transformers for Image Classification

Interpretable Machine Learning: Fundamental Principles and 10 Grand Challenges

Rethinking Spatial Dimensions of Vision Transformers

A Neighbourhood Framework for Resource-Lean Content Flagging

The authors propose a novel, interpretable framework for cross-lingual content flagging that outperforms prior work in both predictive performance and average inference time.

Courses / Resources

How to break a model in 20 days. A tutorial on production model analytics.

How I Would Explain GANs From Scratch to a 5-Year Old: Part 1

Performance comparison: counting words in Python, Go, C++, C, AWK, Forth, and Rust

Can Language Models Be Too Big? 🦜with Emily Bender and Margaret Mitchell

Worldwide Non-pharmaceutical Interventions Tracker for COVID-19 (WNTRAC)

Accessible Multi-Billion Parameter Model Training with PyTorch Lightning + DeepSpeed

Introducing PyTorch Profiler – the new and improved performance tool

Using NLP to extract quick and valuable insights from your customers’ reviews

AutoNLP by Hugging Face

That’s it!

Let me know if I missed anything or if there’s anything you think should be included in a future post.