How HAL Lab uses machine learning to better understand our own brains
Well-known companies such as Google, Amazon and Facebook, as well as many smaller tech companies, are hiring computer scientists with backgrounds in machine learning.
Machine learning, the art of teaching machines from data, has matured considerably in the last few years. Such methods are now behind many advanced data-mining techniques, such as speech recognition on Android phones or image search on Google. Indeed, machine learning is a major technique for analyzing big data, a marriage made in digital heaven.
Within machine learning, there is now a new old kid in town named deep learning. Deep learning mostly refers to good old neural networks that were popular in the late 1980s and early 1990s. Much like today, business journals at the time were raving about the possibilities such methods offered for data mining and forecasting. By the late 1990s, however, progress seemed to have stalled, and machines could not reach the level where they could compete with humans on tasks like object recognition and speech analysis. Methods like causal modeling then took over, and neural networks even got a bad name.
While neural networks lay low over the following two decades, much progress was made in understanding them. It is now understood that the example data and computing power available in the 1990s were simply not enough to reach the regime where these networks could outperform humans.
Deep learning on the rise
The availability of fast graphics processing units (GPUs) has been a big factor in this progress. GPUs are very good at crunching numbers in the matrix operations behind most graphics rendering, and neural networks rely on similar operations. With the help of GPUs, networks can now be trained in three weeks rather than the months it would take on regular workstations.
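To see why the same hardware helps both, here is a minimal NumPy sketch of a single neural-network layer; the sizes and names are purely illustrative, not the lab's code. At its core, the layer is one matrix multiplication, the very operation GPUs accelerate for graphics.

```python
# A minimal sketch (illustrative sizes only): one neural-network layer is
# essentially a matrix multiplication, the same kind of number crunching
# that GPUs perform for graphics rendering.
import numpy as np

rng = np.random.default_rng(0)

inputs = rng.standard_normal((64, 1024))    # a batch of 64 examples, 1024 features each
weights = rng.standard_normal((1024, 512))  # learned connection strengths
biases = np.zeros(512)

# The layer's output: matrix multiply, add bias, apply a nonlinearity.
# Running this multiply across a GPU's many cores is what cut training
# times from months to weeks.
activations = np.maximum(0.0, inputs @ weights + biases)  # ReLU nonlinearity
print(activations.shape)  # (64, 512)
```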
Big data has also been a major factor. Many companies are collecting lots of data but do not yet know how to use it efficiently. The most basic neural-network approaches rely on supervised learning, where labeled data is needed. Companies have invested in producing such data through crowdsourcing and now have databases with millions of pictures labeled with thousands of different categories.
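As a hedged illustration of what supervised learning means, the sketch below fits a simple classifier on a small synthetic labeled dataset with scikit-learn; the dataset and model are stand-ins for illustration, not anything specific to the companies or databases mentioned above.

```python
# A minimal sketch of supervised learning: the model only learns because
# every example comes with a label. The data here is synthetic; in practice,
# large crowd-sourced labelled collections play this role.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# 1,000 labelled examples with 20 features and 2 categories.
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression(max_iter=1000)
model.fit(X_train, y_train)          # learning from (example, label) pairs
print(model.score(X_test, y_test))   # accuracy on examples it has not seen
```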
All of these advances mean that large neural networks can now be built. Not only can these networks be large, they can also have many stages of representation of the data.
These many layers are what deep learning is all about. Deep networks are now winning many data-mining competitions, producing new state-of-the-art results in areas like speech recognition and computer vision.
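The toy NumPy sketch below illustrates what "deep" means: an input is passed through several stacked layers, each building a new representation of the data from the previous one. The layer sizes and the untrained random weights are assumptions chosen only for illustration.

```python
# A minimal sketch of depth: several layers, each producing a new
# representation of the data. Real networks would train these weights.
import numpy as np

rng = np.random.default_rng(1)
layer_sizes = [1024, 512, 256, 64, 10]   # five stages of representation

# Random (untrained) weights for each stage, one matrix per layer.
weights = [rng.standard_normal((n_in, n_out)) * 0.01
           for n_in, n_out in zip(layer_sizes[:-1], layer_sizes[1:])]

x = rng.standard_normal((1, 1024))        # one input example
for w in weights:
    x = np.maximum(0.0, x @ w)            # each layer: multiply, then nonlinearity
print(x.shape)                            # (1, 10), e.g. scores for 10 categories
```

Each pass through the loop is another "layer", and stacking many of them is what distinguishes today's deep networks from the shallow networks of the 1990s.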
"It is certainly an exciting field where new applications are within reach," says Dr. Thomas Trappenberg.
HAL Lab
Dr. Thomas Trappenberg of the Faculty of Computer Science runs the Hierarchical Anticipatory Learning (HAL) Lab. The HAL Lab works in three closely connected areas: computational neuroscience, machine learning and robotics.
"We are most interested in understanding how the brain works, in particular how activities in neurons and the architecture of the brain enable high-level thinking," says Dr. Trappenberg. "A central ingredient in all of this is how humans and animals learn. This brings us to the scientific area of machine learning."
A lot of progress has been made recently in understanding the important principles of learning, and the lab has the added benefit of being able to use these methods for data analysis and data mining.
"We usually make computer simulations to study brains, but we want our research to lead us to building models of how the brain really works," he continues.
"We now think that an even better way to study and evaluate these models is to build artificial agents (robots) to show that they can do high-level tasks like finding objects or planning movements."
HAL Lab projects
Many research projects from the HAL Lab cross over between two of the three research areas (computational neuroscience, machine learning and robotics), combining the strengths of the entire team.
As an example, the lab works with a local company, Mindful Scientific, to apply machine-learning techniques to EEG data in order to evaluate possible brain injuries, combining machine learning with neuroscience.
The lab works with another company, Pleiades, to build a drone (a flying quadcopter) that can follow objects while learning that the appearance of those objects can change, combining robotics and machine learning.
The team is also building a biologically realistic robot arm controller that can move to a target even when the camera input is sometimes interrupted, combining robotics with neuroscience.
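As a rough illustration of that last idea (a hypothetical sketch, not the lab's actual controller), the snippet below keeps nudging an arm toward the last estimated target position, so occasional dropped camera frames do not stop the movement; all names and values here are made up for illustration.

```python
# Hypothetical sketch: steer toward a target even when camera readings drop
# out, by falling back on the most recent estimate of the target's position.
def step_toward_target(arm_pos, target_estimate, camera_reading, gain=0.1):
    """Move a fraction of the way toward the best current guess of the target."""
    if camera_reading is not None:       # camera available: update the estimate
        target_estimate = camera_reading
    # When the camera is interrupted, the stale estimate still guides the arm.
    new_pos = arm_pos + gain * (target_estimate - arm_pos)
    return new_pos, target_estimate

arm, estimate = 0.0, 0.0
readings = [1.0, 1.0, None, None, 1.2, None, 1.2]   # None = interrupted frame
for r in readings:
    arm, estimate = step_toward_target(arm, estimate, r)
    print(round(arm, 3))
```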
Ultimately all three areas are tightly interwoven and the HAL Lab hopes to play an important role in the continued progress of deep learning.