We call them the Network Information Management and Security Group, also known as the NIMS Lab. Tucked away in their headquarters on the 2nd floor of the Goldberg Computer Science Building, Nur Zincir-Heywood and Malcolm Heywood have become a staple of the Faculty of Computer Science.
Their lab is home to twenty brilliant graduate and undergraduate students who dare to imagine systems that work autonomously!
Their work is never complete. With the ever-growing use of mobile devices, tablets, laptops and desktops, network management and security have become major issues all around the world. We sat down with Nur, Malcolm and two of their students, Fariba Haddadi and Stephen Kelly, to learn how they combat these evolving threats.
What type of research do you do in your lab?
In the NIMS Lab, we do research on autonomous systems. These are systems that try to learn and monitor their own behaviour. We study how these systems monitor their behaviour and how they modify themselves based on feedback from the environment. If we can understand the behaviour that we observe in these computers, then we can understand the people who are using them.
One example of an autonomous system would be network management. In this case, the network traffic represents the environment that the agent interacts with. These agents pinpoint suspicious traffic; whether the traffic is actually malicious is left up to a human to decide.
How do you “train” your agents?
In our lab we create test-bed environments where the agents run against each other. For example, we use robotic soccer as a test bed. The idea is that you have a group of agents who do not know how to play soccer but, through interacting with one another, slowly learn, by trial and error, how to build the appropriate skills. They acquire these skills through their interaction with the environment. This raises questions: How transferable are the agents’ skills? How well can we generalize these agents? Will the agents that we train work in another network? For example, if you learn to pass the ball, that’s one transferable skill. Then you can learn something else to complement that skill.
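To make the trial-and-error idea concrete, here is a minimal sketch in Python of an agent learning from environmental feedback. It is not the NIMS Lab’s actual framework: the toy “corridor” task, the Q-learning update and every parameter (alpha, gamma, epsilon) are illustrative assumptions.

```python
# Minimal sketch of trial-and-error learning (not the NIMS Lab's system):
# a tabular Q-learning agent on a toy 5-state "corridor" task.
import random

N_STATES = 5          # states 0..4; reaching state 4 ends the episode
ACTIONS = [-1, +1]    # move left or right

def step(state, action):
    """Toy environment: reward 1.0 only when the goal state is reached."""
    next_state = max(0, min(N_STATES - 1, state + action))
    reward = 1.0 if next_state == N_STATES - 1 else 0.0
    done = next_state == N_STATES - 1
    return next_state, reward, done

# Q-table: estimated long-term reward for each (state, action) pair.
q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, epsilon = 0.1, 0.9, 0.2   # learning rate, discount, exploration

for episode in range(500):
    state, done = 0, False
    while not done:
        # Explore occasionally, otherwise exploit what has been learned so far.
        if random.random() < epsilon:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: q[(state, a)])
        next_state, reward, done = step(state, action)
        # Update the estimate using the feedback the environment provides.
        best_next = max(q[(next_state, a)] for a in ACTIONS)
        q[(state, action)] += alpha * (reward + gamma * best_next - q[(state, action)])
        state = next_state

# After training, the learned policy simply walks toward the goal.
print({s: max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(N_STATES)})
```

The same loop structure (act, observe feedback, adjust) is what “learning through the environment” refers to above, whatever learning algorithm is actually used.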
Another example is training agents to detect malicious URLs. We may not know exactly how harmful these URLs are, but they can be detected. A URL is a sequence of characters, and our agents read it one character at a time. Once they reach the end of the sequence, we ask the agents whether they think the URL is malicious or not.
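The sketch below illustrates only the interface described here, not the lab’s detector: consume a URL one character at a time, keep a running state, and decide at the end of the sequence. The features, weights and threshold are made-up placeholders.

```python
# Illustrative sketch only -- not the NIMS Lab's detector. A real system
# would learn its scoring; the weights and threshold here are assumptions.
def classify_url(url: str, threshold: float = 2.0) -> bool:
    state = {"digits": 0, "specials": 0, "length": 0}
    for ch in url:                       # one character at a time
        state["length"] += 1
        if ch.isdigit():
            state["digits"] += 1
        elif ch in "-_@%~":
            state["specials"] += 1
    # End of sequence: turn the accumulated state into a suspicion score.
    score = 0.4 * state["digits"] + 0.6 * state["specials"] + 0.02 * state["length"]
    return score > threshold             # True = flag for a human to review

print(classify_url("https://example.com/login"))                    # False
print(classify_url("http://ex4mple-l0gin%secure@update~account.biz"))  # True
```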
How would you describe a user’s level of trust when they log onto a WiFi network at a coffee shop?
The average user probably doesn’t think that anyone is going to hack them. They trust what they see. If they see a familiar WiFi name, or a familiar person working in the coffee shop, they trust that it’s safe to use the Internet. The average user is aware that there are certain security systems in place to protect them, but people still make mistakes.
For example, we ran a “Red team vs Blue team game” last year to understand how susceptible users are to an attack. As part of the study, one of our students asked a friend to log onto a computer using the friend’s own password. Without questioning the request, the friend typed in her username and password, which our research student then recorded and stored. This is just one example of the mistakes people tend to make.
What type of insight have you gained in how attackers operate?
Attackers look for computers that are vulnerable. They don’t just attack because they want your credit card number; they also attack because they want to make use of the resources on your computer. Bigger attacks typically aim to bring down some type of larger service. They want to make that service vulnerable so that they can get through its security system. For example, the idea of a botnet attack is to target as many machines as possible, perhaps thousands of them, “zombify” them and then use them for an even bigger attack.
The “Master” of these attacks is very difficult to locate. They’re not using just one computer; they’re using multiple servers. They’re also using multiple versions of their own systems so that they can’t be traced.
Why have these attacks become more frequent in the last few years?
The use of social media has become more prevalent in recent years. Sites such as Facebook and Twitter are completely open to the public, and they’re repositories for personal information. Social media is one of the most common targets for attackers. Users browse a friend’s profile page, click on a link and don’t actually know where it leads. Hackers use social media to pick up on personal characteristics about users, details that would never show up in an email address, and regular users are not aware this may be happening.
What can users do to better protect themselves?
- First and foremost, don’t trust everything you see. Users don’t always look at the URL when they enter a site. They may think they’re using the same webpage they use every day, but they’re not. For example, the URL must always begin with “HTTPS” if you want to transfer any personal information. If it only reads “HTTP,” don’t transfer the information (a short code sketch after this list illustrates the check).
- When it comes to using your credit card on the Internet, make sure you’re using a card that has a low limit. You should also try to limit the number of computers you use to access your online banking.
- When you leave your house, turn off your wireless connection. Only use your wireless network when you need to. Attackers look for any open doors when trying to hack into your system, and if your WiFi is on all day, you’re more susceptible to an attack.
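For readers who write software, here is the tiny check behind the first tip, as a hedged sketch rather than a complete security measure: refuse to submit personal information unless the URL’s scheme is HTTPS. The function name `safe_to_submit` and the example URLs are illustrative.

```python
# Sketch of the "HTTPS only" rule from the first tip above.
from urllib.parse import urlparse

def safe_to_submit(url: str) -> bool:
    """Return True only when the URL uses the HTTPS scheme."""
    return urlparse(url).scheme.lower() == "https"

print(safe_to_submit("https://bank.example.com/login"))  # True
print(safe_to_submit("http://bank.example.com/login"))   # False
```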
What do you hope to achieve in the next five years?
In the future we plan to continue working on our autonomous systems to make them more and more aware of their environment. We want to create a system that helps its user detect problems and resolve them. We want to build a system that lasts. The system will last because it understands its mission and what the user wants it to do.
Why does this type of research interest you?
In our lab, we are interested in discovering “emergent behaviours.” Security is an interesting application domain in which there are many forms of emergent behaviour. Attempting to design systems capable of autonomously monitoring these environments is very exciting for us.