ML News Monthly – Oct 2021

Welcome to the tenth edition of ML News Monthly!

Here are the key happenings this month in the Machine Learning field that I think are worth knowing about. 🕸


AI Politics

Americans Need a Bill of Rights for an AI-Powered World

The White House Office of Science and Technology Policy is developing principles to guard against powerful technologies—with input from the public. Top advisors to the U.S. president announced a plan to issue rules that would protect U.S. citizens against AI-powered surveillance, discrimination, and other kinds of harm.

Read more: https://www.wired.com/story/opinion-bill-of-rights-artificial-intelligence/

China’s rules to limit the influence of recommendation algorithms: http://www.cac.gov.cn/2021-08/27/c_1631652502874117.htm

Europe’s non-binding ban on law enforcement’s use of face recognition and a moratorium on predictive policing algorithms: https://www.europarl.europa.eu/doceo/document/A-9-2021-0232_EN.html

Stopping Killer Robots

Country Positions on Banning Fully Autonomous Weapons and Retaining Human Control.

Weapons systems that select and engage targets without meaningful human control are unacceptable and need to be prevented. All countries have a duty to protect humanity from this dangerous development by banning fully autonomous weapons.

Read more: https://www.hrw.org/report/2020/08/10/stopping-killer-robots/country-positions-banning-fully-autonomous-weapons-and

AI in the News

Tesla’s latest self-driving update will be available only to drivers who have demonstrated safe driving.

How it works: Drivers can request the software through a button on their car’s dashboard screen. The car then collects data about five factors: forward collision warnings per 1,000 miles, hard braking, aggressive turning, unsafe following, and forced disengagement of self-driving features when the car determines that drivers aren’t paying attention.
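Tesla has not published the exact formula, but the aggregation described above can be sketched as a weighted combination of the five factors. The factor names follow the article; the weights and the scoring function below are illustrative assumptions, not Tesla's actual Safety Score.

```python
# Hypothetical sketch: combining the five per-driver risk factors
# into a single 0-100 score. Weights are illustrative assumptions.

FACTORS = [
    "forward_collision_warnings",  # per 1,000 miles
    "hard_braking",
    "aggressive_turning",
    "unsafe_following",
    "forced_disengagement",
]

# Assumed weights; Tesla's real weighting is not public here.
WEIGHTS = {
    "forward_collision_warnings": 0.25,
    "hard_braking": 0.20,
    "aggressive_turning": 0.20,
    "unsafe_following": 0.20,
    "forced_disengagement": 0.15,
}

def safety_score(rates):
    """rates: dict mapping factor -> normalized risk in [0, 1],
    where 0.0 means no risky events and 1.0 the worst observed rate.
    Returns a score where higher means safer driving."""
    risk = sum(WEIGHTS[f] * rates[f] for f in FACTORS)
    return round(100 * (1 - risk), 1)

perfect_driver = {f: 0.0 for f in FACTORS}
print(safety_score(perfect_driver))  # 100.0
```

A driver with no risky events would score 100 under this sketch, and the score would fall as the weighted event rates rise.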

Read more: https://www.tesla.com/support/safety-score

Problems with huge datasets for large models

Abeba Birhane and colleagues at University College Dublin and the University of Edinburgh audited the LAION-400M dataset, which was released in September and comprises data scraped from the open web. The automated curation left plenty of worrisome examples, including stereotypes and racial slurs, raising concerns that models trained on LAION-400M would inherit its shortcomings.

Dataset: https://laion.ai/laion-400-open-dataset/

Paper: Multimodal datasets: misogyny, pornography, and malignant stereotypes – https://arxiv.org/abs/2110.01963

AI Applications

Amazon’s Guard Bot

Amazon unveiled a robot that patrols users’ homes, scopes out strangers, and warns of perceived dangers. Astro maps users’ homes and uses face recognition to decide whether or not to act on perceived threats such as intruders.

Read more: https://www.aboutamazon.com/news/devices/meet-astro-a-home-robot-unlike-any-other

The completion of Beethoven’s 10th Symphony (using AI) live in Bonn and in free livestream

The Beethoven Orchestra in Bonn performed a mock-up of Beethoven’s Tenth Symphony partly composed by an AI system, the culmination of an 18-month project. Ludwig van Beethoven died before he completed what would have been his tenth and final symphony. A team of computer scientists and music scholars approximated the music that might have been.

Read more: https://www.telekom.com/en/media/media-information/archive/world-premiere-the-completion-of-beethovens-tenth-symphony-637336

Listen to it here: https://www.magenta-musik-360.de/beethoven-10-sinfonie

Deepfake Voices Can Help Trans Gamers

Players of online games can be harassed when their voices don’t match their gender identity. New AI-fueled software may help.

https://www.wired.com/story/deepfake-voices-help-trans-gamers/

AI Research

Detecting the Long-Tail of Unseen Conditions

Models trained using supervised learning struggle to classify inputs that differ substantially from most of their training data. A new method helps them recognize such outliers. Researchers developed Hierarchical Outlier Detection (HOD), a loss function that helps models learn to flag out-of-distribution inputs, even when they don’t match any class label in the training set.
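The hierarchical idea can be sketched roughly as follows: a standard fine-grained cross-entropy over all classes (including labeled outlier classes) plus a coarse inlier-vs-outlier term computed by summing probability mass. This is a minimal illustration of the general technique, not the paper's exact formulation; the class indices, mixing weight, and softmax setup below are assumptions.

```python
import numpy as np

def hod_style_loss(logits, label, outlier_classes, coarse_weight=0.5):
    """Sketch of a hierarchical outlier-detection loss.

    logits: 1-D array of class scores (inlier and outlier classes).
    label: integer index of the true class.
    outlier_classes: list of indices treated as outlier classes.
    coarse_weight: assumed mixing weight for the coarse term.
    """
    # Softmax over all fine-grained classes.
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()

    # Fine-grained loss: ordinary cross-entropy over all classes.
    fine = -np.log(probs[label] + 1e-12)

    # Coarse loss: collapse probabilities into two buckets
    # (inlier vs outlier) and apply cross-entropy at that level.
    p_outlier = probs[outlier_classes].sum()
    is_outlier = label in outlier_classes
    coarse = -np.log((p_outlier if is_outlier else 1.0 - p_outlier) + 1e-12)

    return fine + coarse_weight * coarse

# At test time, the summed outlier probability (p_outlier) can serve
# as an out-of-distribution score for unseen conditions.
```

The coarse term is what pushes the model to separate in-distribution from out-of-distribution mass even when individual outlier classes are rare.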

Paper: https://arxiv.org/abs/2104.03829v1


That’s it!

Let me know if I missed anything or if there’s anything you think should be included in a future post.