10/19/2023
Q&A
Challenging assumptions makes great computer security
TL;DR: Computer Science professor Gang Wang found a home in computer security, an almost gamified corner of computing where it's you versus the hackers. In a world where things are built first and secured later, he teaches students to break things apart, find issues and reverse their thought processes.
Interviewed by Eleanor Wyllie
How did you become interested in computer science and cybersecurity?
When I was a kid, I was pretty good at math and physics, and I liked them. I picked electronic engineering for my undergraduate degree. It was not a very informed decision – I just thought I’d like it. But I quickly realized that working on circuit boards did not give me much enjoyment.
In my junior year, I took a computer networking class. That got me super excited. The idea that you can transmit network packets from one location to another around the world and exchange information that way was really fascinating. I worked really hard and got full marks. The class made me decide to do more computer science, and during my Ph.D. I started getting into security-related topics.
The shorter answer is that although I knew that I was into science and engineering early on, it took a while to figure out what I was really interested in and really good at. I'm very lucky that throughout undergraduate and graduate school I had mentors and advisors who gave me the flexibility to explore different projects. Now, I feel like computer security is my home, and I have a lot of fun doing what I'm doing.
What's exciting for you about this field?
My main interests are security and privacy, and their intersection with machine learning and human-computer interaction. My work takes a lot of data-driven approaches: I collect real-world data to understand attacker and user behaviors.
As computer scientists or industry practitioners, we make a lot of assumptions about how attackers and users behave and often those assumptions are not correct. The data-driven approach is really trying to challenge these assumptions and validate what is true. The insights from the data can help us to rethink how the system should be designed. It’s a lot of fun.
The other aspect is machine learning. Machine learning is a very useful tool to build defenses, understand human behaviors and help accomplish security tasks. But machine learning is not all good. Deepfakes are an example of how generative models can be used for malicious purposes: they can generate very realistic images, videos and voices that can be used for social engineering attacks, impersonating other people and generating disinformation. For machine learning and security, there are two sides: using it to build useful tools, but also trying to understand the harm and risk it can introduce.
What are your biggest concerns about computer security?
There is a tradition, not a healthy one, that we build something first and secure it later. It's easy to understand why people do this. We’re incentivized to push out a system or feature very quickly. Then we build our security as an afterthought.
We have so many examples of things getting built that are difficult to fix later. We learned this lesson the hard way: internet protocols like SMTP (Simple Mail Transfer Protocol), which still carries the email we use today, were designed without security in mind. As attackers started to exploit those protocols to send phishing emails, people realized that we needed security protocols. But the internet was already there; we couldn’t just stop and restart it, so we had to build extensions and add them to the existing protocols. Network protocols only work when everybody does the same thing, and getting everybody in the world to do the same thing after the internet had expanded was really difficult.
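One widely deployed example of such a bolt-on extension is SPF (Sender Policy Framework), which retrofits sender authentication onto the existing email ecosystem: a domain publishes, in a DNS TXT record, which servers may send mail on its behalf, and receivers check the connecting server against that policy. Below is a minimal Python sketch of the lookup side only; it assumes the third-party dnspython package, and example.com is just a placeholder domain.

```python
# Minimal sketch: looking up a domain's SPF policy, one of the security
# extensions added on top of the existing email protocols.
# Assumes the third-party "dnspython" package (pip install dnspython).
import dns.resolver


def get_spf_policy(domain: str) -> str | None:
    """Return the domain's published SPF policy, or None if it has none."""
    try:
        answers = dns.resolver.resolve(domain, "TXT")
    except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
        return None
    for rdata in answers:
        # TXT record data arrives as one or more byte strings.
        txt = b"".join(rdata.strings).decode("utf-8", errors="replace")
        if txt.lower().startswith("v=spf1"):
            return txt  # e.g. "v=spf1 include:_spf.example.com -all"
    return None


if __name__ == "__main__":
    # "example.com" is a placeholder; a receiving mail server would run
    # this lookup on the domain in a message's envelope sender and then
    # compare the sending server's IP against the returned policy.
    print(get_spf_policy("example.com"))
```

The design itself illustrates the point above: because SMTP could not be stopped and restarted, the authentication data had to live in a system (DNS) that senders and receivers already ran.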
You see similar trends with people building systems using AI: we start with functionality; we build something that works. But security as an afterthought can be very dangerous. That's my biggest concern as new AI technologies start to mature. This is the best time to think about how to make AI systems secure and protect users’ privacy proactively, before things get out of control. The ideal is security by design. We need to think about the problem early on.
Do you encounter any misconceptions about computer security?
Security as an afterthought is pretty common, but there's also “security by obscurity.” The idea is very simple: if you don't tell people about your system, they don't know how to attack you. You hope the attacker doesn't find out your weaknesses, but they always do.
You should assume that your system design is out there for people to look at. A lot of system services are open-source projects, meaning they put their designs, code and protocols out there. Open source is powerful because everybody can help: when a particular service or protocol is scrutinized by a lot of people, vulnerabilities get found and removed. If everything's behind closed doors, nobody can understand how it works.
However, if you hide enough details, it does make the overhead of attacking a bit higher for certain systems, so for companies there's an incentive not to open-source certain products. Still, I'm encouraged that in current security and AI developments, people tend to open source a lot of things. I’m very much in favor of open source. It's complex, and people have different opinions, but this is my stance: the protocol should be open for everybody to challenge the design, and the security should be built in.
What have you accomplished that makes you the proudest?
Something NBA coach Gregg Popovich said really resonated with me: wins and losses will fade away, but relationships stick with you forever. As a faculty member, you build relationships with a lot of students. Years later, they become experts in their fields and start to do great things. Whenever we hear about the new projects they are working on, it’s super exciting. That's probably the proudest moment.
In terms of research, it’s fun when we make a difference in the real world. It could be that we find security issues within existing systems or services, and often those services affect millions of people. Recently, my student reported a problem with Firefox Relay that attackers could exploit to send spoofed emails. Mozilla rewarded the student with a $1,000 bounty. They’re inviting people to find bugs in their systems and rewarding them for reporting responsibly and ethically. So that's what we do. It feels good.
What are your goals for the future?
One goal is to understand the challenges and limitations of using machine learning models in real-world environments. For example, we're working with the National Center for Supercomputing Applications to understand how machine learning can help make security analysis tasks more efficient.
We're continuing to research online deception and attack activities. We have worked on detecting threats to stop them before they reach users. At the same time, we started to focus more on ways to prepare users to handle those threats.
Cyberattacks are getting more and more sophisticated. Some of them potentially use AI, either to select targets or to craft attack payloads. Humans still play a big part, but with AI, they can scale up their campaigns and cause more damage. It's an interesting development, and we're paying close attention to it.
How can people get involved in cybersecurity?
It depends on the stage you’re at. For current undergrads or Master's students, there are a lot of ways to get involved. Start by taking some basic classes on computer networks or computer architecture. You need to know how things work before you can break them. Security is about breaking things apart, trying to find issues and reversing the thought process. Think outside the box. Security is about asking: what will make the system stop working? Security courses introduce the basic concepts of reverse engineering, cryptography, application-level security, network security and so on.
For students who are more senior, including Master's students, the best way to get involved is talking to faculty members and doing some projects. It's really fun for us to involve students in our research projects. They have a lot of energy, and they can help out on different research tasks. It's good experience that builds basic skill sets.
Security is a lot like a game: the attacker is trying to find clever ways to break in, and the defender is trying to stop them. There’s a style of cybersecurity competition called Capture the Flag (CTF) that emulates that process. You compete to see who can find the most vulnerabilities and hack into a system the fastest. There are CTF teams on campus and student organizations (e.g., SIGPwny) that train younger students to perform those hacking activities. Even if you’re a high school student, there are local, state or even national-level CTF events. Join a team, start to compete and have fun.
Do you need to major in computer science to do cybersecurity?
That question can be answered from different angles. If you want to become a security domain expert, basic computer science training is necessary. But it's not necessarily limited to computer science as a department or a degree. You can be a great security expert coming from a computer engineering background or an electrical and computer engineering degree. There's no fundamental difference – our courses are cross-listed.
Security is a very broad discipline interconnected with a lot of other fields, and if you are coming from another discipline, you can make a big impact in computer security. With privacy violations on the internet, legislation often falls behind. What if your personal information is stored and shared without your consent? How do you protect users and help them manage their data properly? A lot of those problems cannot be solved with technical skills alone. Policymakers and lawmakers are a big part of the solution, and people who are not trained as computer scientists can make a big difference as experts from different disciplines work together to address these issues.