Unexpectedly Intriguing!
24 January 2020
Figure being scanned by lasers - Source: Unsplash - David Anderson - https://unsplash.com/photos/FahhGNl16iM

Artificial intelligence, or AI for short, is creeping into all sorts of real-world applications. Want to order cat food from Amazon? Ask Alexa via your smart speaker! Want to take the drudgery out of driving on a long road trip? Have your self-driving car do the driving for you!

Now, if you followed the links in the preceding paragraph, you've not only found stories about how people are using AI-equipped devices today, you've also found some cautionary tales where machine learning hasn't been up to the tasks to which it has been put, and certainly hasn't performed as advertised. Some of these stories are kind of funny. Others have very tragic endings.

That's because many of the most successful AI systems that have been developed to date have limitations that aren't well understood. A recent article in Quanta explores some of those limitations and how researchers are turning toward mathematical physics to address them:

The revolution in artificial intelligence stems in large part from the power of one particular kind of artificial neural network, whose design is inspired by the connected layers of neurons in the mammalian visual cortex. These “convolutional neural networks” (CNNs) have proved surprisingly adept at learning patterns in two-dimensional data — especially in computer vision tasks like recognizing handwritten words and objects in digital images.

But when applied to data sets without a built-in planar geometry — say, models of irregular shapes used in 3D computer animation, or the point clouds generated by self-driving cars to map their surroundings — this powerful machine learning architecture doesn’t work well. Around 2016, a new discipline called geometric deep learning emerged with the goal of lifting CNNs out of flatland.

Now, researchers have delivered, with a new theoretical framework for building neural networks that can learn patterns on any kind of geometric surface. These “gauge-equivariant convolutional neural networks,” or gauge CNNs, developed at the University of Amsterdam and Qualcomm AI Research by Taco Cohen, Maurice Weiler, Berkay Kicanaoglu and Max Welling, can detect patterns not only in 2D arrays of pixels, but also on spheres and asymmetrically curved objects. “This framework is a fairly definitive answer to this problem of deep learning on curved surfaces,” Welling said.

Already, gauge CNNs have greatly outperformed their predecessors in learning patterns in simulated global climate data, which is naturally mapped onto a sphere. The algorithms may also prove useful for improving the vision of drones and autonomous vehicles that see objects in 3D, and for detecting patterns in data gathered from the irregularly curved surfaces of hearts, brains or other organs.
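To make the quoted point about "learning patterns in two-dimensional data" concrete, here is a minimal sketch in Python (our own toy illustration, not the researchers' code) of the planar symmetry that standard CNNs exploit: convolving a shifted image gives the same result as shifting the convolved image, which is why a CNN can spot a pattern no matter where it sits in the frame.

import numpy as np
from scipy.signal import convolve2d

rng = np.random.default_rng(0)
image = rng.random((32, 32))    # a toy 2D "image"
kernel = rng.random((3, 3))     # a toy convolution filter

def conv(img):
    # circular boundary conditions make the translation symmetry exact
    return convolve2d(img, kernel, mode="same", boundary="wrap")

shift_then_filter = conv(np.roll(image, shift=(5, 7), axis=(0, 1)))
filter_then_shift = np.roll(conv(image), shift=(5, 7), axis=(0, 1))

# The two orders agree: planar convolution is translation-equivariant.
assert np.allclose(shift_then_filter, filter_then_shift)

The gauge CNN framework generalizes exactly this commuting property from translations on a flat grid to the symmetries of curved surfaces, where there is no single global coordinate system to translate along.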

That's pretty cool stuff! If gauge theory sounds vaguely familiar, it may be because mathematician Karen Uhlenbeck made the news last year when she was awarded the Abel Prize in mathematics, in part for her work in the field. If you're not already familiar with the concept of gauge invariance, or gauge equivariance as machine learning researchers call it, the following video provides a blissfully short introduction:

If you want to know more about gauge equivariant convolutional networks and how they apply to deep machine learning, Michael Kissner's easy guide is a good place to begin exploring the topic.
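For a rough feel of what "lifting CNNs out of flatland" involves, here is a second toy sketch (again our own illustration, which assumes a simple latitude-longitude grid; actual gauge CNNs handle arbitrary curved meshes with no such global coordinate system). On a sphere, the natural analogue of translation is rotation, and a filter that wraps around the longitude axis commutes with rotations about the polar axis:

import numpy as np

rng = np.random.default_rng(1)
lat, lon = 18, 36
sphere = rng.random((lat, lon))   # a toy scalar field sampled on a sphere
weights = rng.random(5)           # a toy 1D filter applied along longitude

def filter_along_longitude(field):
    # circular convolution of each latitude row, respecting the wraparound
    out = np.empty_like(field)
    for i in range(lat):
        out[i] = np.real(np.fft.ifft(
            np.fft.fft(field[i]) * np.fft.fft(weights, n=lon)))
    return out

def rotate(field, k):
    # rotate the field about the sphere's polar axis by k grid steps
    return np.roll(field, k, axis=1)

# Rotating then filtering matches filtering then rotating:
# the filter is equivariant to azimuthal rotations.
assert np.allclose(filter_along_longitude(rotate(sphere, 9)),
                   rotate(filter_along_longitude(sphere), 9))

This only captures one symmetry of one especially tidy surface; the achievement of gauge CNNs is making the same kind of guarantee hold on irregular shapes, where the filter's orientation (the "gauge") can be chosen arbitrarily at every point.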

Getting back to the main story, the math behind gauge CNNs is showing real promise in the applications to which it has been put:

A gauge CNN would theoretically work on any curved surface of any dimensionality, but Cohen and his co-authors have tested it on global climate data, which necessarily has an underlying 3D spherical structure. They used their gauge-equivariant framework to construct a CNN trained to detect extreme weather patterns, such as tropical cyclones, from climate simulation data. In 2017, government and academic researchers used a standard convolutional network to detect cyclones in the data with 74% accuracy; last year, the gauge CNN detected the cyclones with 97.9% accuracy. (It also outperformed a less general geometric deep learning approach designed in 2018 specifically for spheres — that system was 94% accurate.)

Beyond weather monitoring, gauge CNNs may find use in advancing the AI-vision systems used in self-driving vehicles, where the ability to process what those systems see in three dimensions could improve their safety performance.

The future is seemingly determined to have self-driving cars and other AI-powered devices in it. Having those things work well enough to be unremarkable is the real challenge.

Image credit: David Anderson via Unsplash.
