Tales of Frogs, Scorpions and AI/ML with Tony Dahbura
About This Episode
This week joining the podcast is Anton (Tony) Dahbura, executive director of the Johns Hopkins University Information Security Institute and co-director of the Johns Hopkins Institute for Assured Autonomy. We take a deep dive into the realm of AI and machine learning technology and its rapidly expanding applications across every aspect of our lives.
We also discuss the criticality of building trust, the implications of bias, the realities of planning for “edge cases” that just can’t be planned for, and the growing sophistication and personalization of AI-leveraged attacks. He also shares details on the CyberCorps: Scholarship for Service program.
[00:35] Machine Learning and Assured Autonomy
Petko: We've got an interesting guest today who's going to help us be more secure and give us some interesting context on geopolitics. Well, I'll let you introduce him actually, Rachael.
Rachael: For our listeners, we're going to talk about one of my favorite topics, and if you've been listening for a while, you'll know what that is. Please welcome to the podcast, Anton Dahbura. I am so excited to have you here.
He is executive director of the Johns Hopkins Information Security Institute, and co-director of the Johns Hopkins Institute for Assured Autonomy. And can I just put in a little bit of a plug there about your work on analytics for baseball teams while we're at it?
Tony: Absolutely. Hi Rachael and Petko, it's a pleasure to be here. I wear many hats, like a lot of people at Johns Hopkins, and have a great time, so sure, feel free.
Rachael: Love it. We'll talk more about that later. But Petko, I know you really wanted to jump into assured autonomy. Hot topic.
Petko: I do. Because I'm afraid if we talk about baseball and Moneyball and everything, it's going to be a seven-hour podcast and our listeners are going to get mad at us. But Tony, I'd love to get your thoughts on what is assured autonomy? Walk me through, what does that mean?
Tony: Sure. Well, as you know, we're using some forms of AI, primarily machine learning in all of its flavors, for all kinds of applications, including transportation, healthcare, and decision making: whether you're eligible for credit, sentencing prisoners, identifying suspects from grainy photographs, doing all kinds of things.
Just really across the gamut, manufacturing, education and so forth. And increasingly we're adding autonomy into our lives, so technology that's doing more and more on its own.
Machine Learning and Assured Autonomy: Building Trust
Tony: In our institute, which is relatively new, we are looking at ways of building trust. And trust is a very loaded word. It includes a lot of things like reliability and security and making sure that there's no bias, that it's ethical, that it works.
If it's a vision system, a navigation system on a car, we want to be sure that if it observes a gray pickup truck against a blue background versus a blue pickup truck against a gray background, it doesn't get totally confused and decide that the pickup truck isn't there.
So there are things that just don't work well, and there's an argument to be made about whether they should work a hundred percent of the time or can work a hundred percent of the time, because these are difficult problems. If they were easy, they would've been solved a long time ago. And also making sure that our social contracts remain in place.
So there are different kinds of bias. There's algorithmic bias and there is societal bias. And so there's a broad spectrum of challenges that need to be addressed in order for us to be able to trust these autonomous systems that are increasingly using AI. That's what our institute is about.
Petko: So what does your typical day look like then? I can only imagine. Testing cars, testing different things, what does it look like?
Tony: Well, it's very broad. And in fact at Hopkins, this is so broad that this is a partnership across the university, primarily between our school of engineering and our Applied Physics Laboratory, which has over 8,000 people doing all kinds of interesting work.
Machine Learning and Assured Autonomy: Development Handling
Tony: We are funding internal research on different kinds of things. Just to name a couple: looking at what happens when someone puts a sticker on a stop sign, and an autonomous vision system navigating a vehicle, just because of a tiny little sticker that you or I might not notice, misinterprets that stop sign as a 55-mile-an-hour speed limit sign. It's pretty easy to do, even if noise is injected into a photograph. What attacks are possible, and how do we mitigate those attacks? So that's one kind of thing.
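For readers who want a concrete picture of the "noise injection" idea Tony alludes to, here is a minimal sketch in the style of the fast gradient sign method. The model, tensor shapes, and epsilon value are illustrative assumptions, not a specific system from the research he describes.

```python
# Minimal FGSM-style sketch: nudge each pixel slightly in the direction that
# increases the classifier's loss, producing an adversarial image.
# `model` is a hypothetical pretrained classifier; `image` is a (C, H, W) tensor.
import torch
import torch.nn.functional as F

def fgsm_perturb(model, image, true_label, epsilon=0.01):
    """Return a copy of `image` with a small adversarial perturbation added."""
    image = image.clone().detach().requires_grad_(True)
    logits = model(image.unsqueeze(0))                    # forward pass, batch of 1
    loss = F.cross_entropy(logits, torch.tensor([true_label]))
    loss.backward()                                       # gradient w.r.t. the pixels
    perturbed = image + epsilon * image.grad.sign()       # step against the model
    return perturbed.clamp(0.0, 1.0).detach()             # keep a valid pixel range
```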
Another thing is looking at how do we handle the development of these software systems operationally. For example, a certain kind of autonomous vehicle that's commercially available is known for sending automatic updates to the vehicles overnight or whenever.
A simple example: you may have a machine learning algorithm that's really good at identifying dogs from photos, but not cats. You retrain it, and now all of a sudden it's really good at cats, but maybe it regressed on dogs. So how do you know? That's kind of important if you have a car where you know that on your route to work in the morning,
it's always navigated this curve just fine. But all of a sudden there's a software update and maybe, just maybe, there's a little bit of a regression that goes on and it may have forgotten how to navigate that curve.
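As a rough sketch of how a team might check for the per-class regressions Tony describes after retraining, something like the following comparison could be run on a labeled holdout set. The `old_model` and `new_model` names and the tolerance are hypothetical placeholders, not part of any system mentioned in the episode.

```python
# Compare per-class accuracy of two model versions and report classes that got worse.
from collections import defaultdict

def per_class_accuracy(model, holdout):
    """`model` is any callable mapping an input to a predicted label."""
    correct, total = defaultdict(int), defaultdict(int)
    for x, label in holdout:
        total[label] += 1
        if model(x) == label:
            correct[label] += 1
    return {c: correct[c] / total[c] for c in total}

def regressions(old_model, new_model, holdout, tolerance=0.02):
    """Classes where the new model is meaningfully worse than the old one."""
    old_acc = per_class_accuracy(old_model, holdout)
    new_acc = per_class_accuracy(new_model, holdout)
    return {c: (old_acc[c], new_acc.get(c, 0.0))
            for c in old_acc
            if new_acc.get(c, 0.0) < old_acc[c] - tolerance}
```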
Those are the kinds of things that we look at. Those are just a couple, just really across the board. Also importantly, looking at how we can augment these machine learning based systems, either with partner subsystems that understand more about physics, or different kinds of rule-based systems to create guardrails, safety zones, those kinds of things.
Machine Learning and Assured Autonomy: Augmentation
Tony: It may be more difficult for a machine learning-based system to figure out the infamous problem we all have as drivers. Something's going across the road. Is it a boulder, or is it an empty bag or box? And we can sort of tell, not that we can always do something about it.
But maybe we can tell by the way it kind of moves and bounces that it's probably harmless, or "Wow, this is something I really need to pay attention to right away." Hard for machine learning, easy for a physics-based algorithm. We all know how balls bounce versus a bowling ball or a boulder, things like that.
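As a purely illustrative sketch of the guardrail idea, here is a rule-based check layered over an ML perception output. The `ml_label` input, the motion features, and the thresholds are all invented for illustration; a real physics-based subsystem would be far more sophisticated.

```python
# Hypothetical guardrail: a simple physics-style rule tempers the ML guess when
# the object's motion looks like light debris rather than a heavy obstacle.
def obstacle_decision(ml_label, bounce_height_m, drift_with_wind_m_s):
    # Rule of thumb: an object that bounces high and drifts with the wind is
    # behaving like an empty bag or box, not a boulder.
    behaves_like_debris = bounce_height_m > 0.3 and drift_with_wind_m_s > 1.0
    if ml_label == "boulder" and behaves_like_debris:
        return "slow_down"   # be cautious, but don't treat it as a hard obstacle
    if ml_label == "boulder":
        return "brake"       # the rule agrees this needs immediate attention
    return "proceed"
```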
More and more people are recognizing that machine learning can't go it alone in all cases. And so we want to embed trust into these systems, to earn trust. We want AI to be successful. But in order to get there, there's a lot of work that needs to be done. It's not magic, despite what people want to believe.
So those are some of the things that we're doing. We have a number of partners. We're building relationships with companies, other universities. Of course at Johns Hopkins, we have a large healthcare component.
We're looking at the best ways to apply machine learning and other techniques in an operating room or an intensive care unit or in a hospital, a patient room, or outside the hospital setting in the field, or in a patient's home, or an elderly person's home.
There are so many potential benefits, and we have to pay a little bit more attention to the pitfalls as well in order for us to succeed as a society using this family of technologies. That's what we're about.
[09:12] Threats to AI Machine Learning
Tony: It's a lot of fun. We touch on topics that we read about in the news every day as well, which is also fun and interesting, and meaningful.
Petko: Tony, you touched on so many different aspects there. You started off with the adversarial way you can attack AI. You talked about quality impacts that vary from version to version. Then you have some AI that's done inside an organization and no one knows why it's working.
You might be watching some video and all of a sudden, it's suggesting things because it has learned what you like. There's good things and bad things. It becomes a time sink. What is the biggest threat you think with the assured autonomy you're looking at?
When you look at AI, is it adversarial? Is it quality control? Is it just the way it's being used, from an ethical or unethical standpoint, potentially? What do you think is the biggest threat that we have right now that we should be paying attention to?
Tony: There are several things. I'm sure I won't be able to touch on all of them. But for me, one of the biggest underlying aspects of machine learning is uncertainty. The way that these systems are designed is kind of statistical.
In a way, you are applying a bunch of inputs and kind of crafting the system so that it can accommodate those inputs and produce sensible outputs. For example, my dog and cat: this is a dog, this is a cat, this is a dog. And then you give it a picture of a hamster, or a dog it's never seen before, and the output is undefined.
It's however the system happened to configure itself. So that's a big problem.
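One simple, and admittedly imperfect, way to deal with the "undefined output" problem Tony describes is to let the system abstain when its confidence is low. The `class_probabilities` function below is a hypothetical stand-in for a trained classifier's softmax output; real out-of-distribution detection is harder, since models can be confidently wrong.

```python
# Hypothetical confidence-thresholding wrapper around a classifier.
def classify_or_abstain(class_probabilities, x, threshold=0.9):
    probs = class_probabilities(x)          # e.g. {"dog": 0.55, "cat": 0.45}
    best_class = max(probs, key=probs.get)
    if probs[best_class] < threshold:
        return None                         # abstain: input may be unlike the training data
    return best_class
```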
Will Machine Learning Be Perfect?
Tony: These applications, like for autonomous vehicles, have huge tails as we like to say. We can train them to go around a block, to go around a parking lot, more structured. We can train it in the summer, we can train it in the winter, in the fall, but we're always going to have edge cases. Always.
So in transportation, I predict that's going to be the story for the next few decades. When we read about mishaps on the highways, it's going to be because somebody, and I'm slapping my forehead here, didn't think to train under this kind of circumstance. Or maybe in some cases, it was so bizarre, such a convergence of situations, that nobody could have foreseen it realistically.
Petko: Do you think it'll ever be perfect? I'm just thinking through the example of the boulder. Is that a bag or a boulder? I struggle even with that sometimes.
Tony: Not in our lifetimes, is a safe prediction. It all comes down to what we expect of machine learning itself and how we can engineer systems around it. Aircraft have been engineered to fly pretty reliably and we hardly think about it. We get on a plane, it works. It works because of a whole lot of things and decades of hard work, research, and investment in all kinds of areas, all working together.
I think that we need to adjust our expectations for AI in a similar way. It works fine for some applications that aren't life critical. But it's going to require much more systems-level engineering for many, many applications, and careful thought about the ethical and societal implications if we don't do that.
Uneven Resources and Misinformation
Tony: I also predict it's going to be very uneven. Not everyone, not every organization is going to have the resources, the time or the sensibility, the knowledge to create an AI-based system that we can trust. We may not know it though, that's the problem.
So that also suggests that it requires a fair amount of regulation and safeguards so that we have a better idea of what to trust. Just like we get into a car, and it's not perfect, but we have some confidence that at least there's a process in place so that we can trust that the car is going to do what it's supposed to do.
So we shouldn't abandon those principles as a matter of fact.
And you mentioned misinformation. And that's a big problem in a couple of ways. I think that hackers are going to be able to, for one thing, craft better and better emails for phishing attacks, for example. I mean it's just going to be ridiculous. You're going to get an email that's so personal and has so much information that you can confirm.
You just subconsciously say, "Yes, this is my Uncle Harry telling me about his vacation and wanting me to click on the links with his vacation photographs or whatever." That's a tough battle.
Just overall, the ways in which social media and other channels can be used together with this technology to misinform us. We have deep fakes. So I can dwell on all the negative things. And it's good to keep them in mind, because we're going to have to navigate that minefield.
[15:59] The Machine Learning Flip Side
Tony: There are people working hard on how to detect deep fakes and how to detect email and documents that have been generated by ChatGPT. So that's good and we need to encourage that, and hopefully funding agencies are getting the message.
I believe they are, and they're hard at work making sure that those technologies for mitigation are also supported so that they can be developed, so that it's not a one-sided arms race. There's a lot to unpack.
Yes, it's complex, but it really is the future we face. That's where we're headed.
Rachael: It's wonderful though. With every advancement there's always kind of the flip side of it. And kind of segueing into my favorite topic of these algorithms that learn you and learn your interest.
We know TikTok is a big hot topic, and I think it has been for the last year or whatever it is. As we were talking about before we got on, I just discovered it over the summertime. I really didn't understand what it did or how it worked.
Just spending a little bit of time on there, it figured out that I like animal videos, particularly things with puppies and kittens, and kind of tortoises for some reason. And it just keeps serving that up to me, and the next thing I know, it's one o'clock in the morning on a Wednesday and I'm like, "Where have the last three hours gone?" And it's genius, but it's also bad. And then you kind of pile onto that.
What Should We Be Concerned About?
Rachael: TikTok in particular, I would love your perspective here. There seem to be a lot of concerns about it being a Chinese-owned app. And I'm sure it's covered in the end user license agreement. I'll be honest: I'm in security, but I didn't read the whole thing. I'm sure it's like 30 pages in two-point type.
I know I pretty much signed my life away. As Joe Q. Public here, should I be that concerned? I know there have been articles you've commented on or been referenced in recently. Members of Congress, for example, that are on the app, maybe not on government devices. There's talk about banning use of the app in government buildings. This is a really huge topic. How do we unpack this, Anton?
What should we be concerned about? And what are we getting a little overworked over?
Tony: The good news is that the government has taken steps to put TikTok under the microscope. That's the first thing. It's kind of like turning on the light in the room: whatever critters are around are going to scatter, hopefully. I think that situation is in hand, but it's a lot of whack-a-mole, frankly, because it's more than apps. There are so many ways in which adversaries can obtain huge amounts of information about us.
Speaking specifically about Americans, I know of one case in particular, because in my security institute we do research on drone vulnerabilities. This made its way out to CNN a couple of years ago in kind of general terms. But in this instance, a commercial drone was sending information. Depending on its GPS location, it was sending videos and photos either to servers in the US or, in some cases, to servers in China.
The Tale of the Frog and the Scorpion
Tony: So you can extrapolate between TikTok, this drone instance, and the fact that the Chinese government has a long tradition of espionage; it's almost like they take pride in it. Most of my interviews were given before the infamous, so-called weather balloon. It's just how they operate.
As I recall during the Obama administration, there was a major agreement that was struck. A big component of it was, "China, no longer will you hack into our IT systems and steal proprietary information." That lasted maybe three weeks, probably less.
I think it's the Indian proverb of the frog and the scorpion. And the scorpion needed a ride across the river and asked the frog for a ride. And the frog said, "Well, I can't give you a ride because if you sting me, it's not going to be good." And the scorpion said, "Well, why would I do that? We would both perish."
So the frog said, "Yes, that makes sense, let's go." They're halfway across the river and, sure enough, the scorpion stings the frog, and the frog says, "Why did you do that? We're both going to die." And he said, "It's my nature."
I think that's what we're dealing with here. I'm glad I'm not in the government's or the diplomats' shoes, because it's a never-ending saga of fighting the nature of the Chinese government to do these kinds of things.
So for me, it's not surprising at all. And I wish that people would pay attention to the fact that this is not only possible, but reality. And people also shouldn't think, "Oh, well, I'm just a college student. Who cares? I'm not doing anything."
Machine Learning: The Good and the Evil
Tony: We don't get paid to sit around night and day to figure out how to put together these little trails of breadcrumbs from people, like thousands of people in the Chinese government do. And if we did that, we would find all kinds of ways that seemingly innocuous bits of information can be used to really jeopardize even national security.
And it's little by little. That's the other thing. It's injury if not death, but certainly injury by a thousand tiny cuts. And that's what our adversaries count on.
In this era of social media and software, there's a much larger conversation about privacy, and what's appropriate to be shared. I can tell you that I have mixed feelings about it, because some of my research involves simulating how COVID spreads in the community.
We make use of mobility data that companies in the industry made available to us and to other researchers during the pandemic. Without that mobility data to see realistically what the movement patterns are, we wouldn't be able to do our research. But that same mobility data can be used for nefarious purposes if people want to.
And I think you can go down the line with just about every technology and you can say it can be used for good or it can be used for evil.
What I don't want to see is that we just throw out the baby with the bath water. I'm concerned that the European Union is on the verge of doing that by just blanket banning certain kinds of technologies like facial recognition.
[24:25] Cyber Awareness and Machine Learning
Tony: The City of Baltimore has banned facial recognition, interestingly. Facial recognition in a crowd can be used for surveillance. It can also be used to identify someone who's in medical distress. How do we balance that?
There's opportunity for technology in a lot of cases to obfuscate the parts that we don't want to be used for nefarious purposes but that could be used for benign purposes. I think people are starting to think about that. It's important. It's a huge conversation.
Privacy's been a conversation for years. It's only becoming more critical that we and policymakers, decision makers, all come to the table around this.
Rachael: That's a tall order. To try to find alignment too on what that right path forward looks like in regulations. How do you scope it so that you're not tipping too far one way or the other?
Tony: I don't have the answer for that.
I do know that we need to talk about it more and make it more of a public issue as well. People need to realize that it impacts them and no one is out of harm's way, in that sense.
In my institute, we're conducting a survey of Maryland residents about cyber awareness. We'll have some results in a few months, but I think I already know what the answer is.
It's hard to grasp just on the security side. I don't think there's an argument there. People still say,
"Well, I don't understand how this would affect me." All you have to do is look sideways, at your bank statement. "You have a bank account? Then you're vulnerable. You have a credit card, you're vulnerable." et cetera. It's just crazy what's going on. But awareness is a huge component that we need to play catch up on.
Privacy and Security Go Hand in Hand
Petko: Do you think even residents are aware that cyber and privacy are related to each other? I feel like they always keep them separate. "Oh, that's someone hacking in the basement and they'll probably get something." But then privacy is, "Oh, I don't want to be recorded," yet they could be.
Someone could’ve hacked into your camera and recorded you.
Tony: Yes, I agree. I think that people don't get it. A controversial example I've been interviewed on is the whole abortion question, and how technology can be used against people where suddenly abortion and some aspects of it are criminal.
A few months or years ago, someone might have said, "I'm not doing anything wrong. Why do I care?" I've always told people, "Well, the rules of the game could change one day. You might find yourself in a situation where something you thought was harmless no longer is. You may just not be in the same position then."
Privacy and security go hand in hand. It's something that everybody needs to be aware of. I might be preaching to the choir and your audience might be extremely well versed on these topics.
If they are, I would encourage them to invest some time in getting the word out and doing some outreach so that more people become aware of this.
Petko: Tony, I'm reminded of a retail store, I think a couple of years ago, where a household ended up getting a letter saying, "Congratulations on a new baby."
It was the daughter that had gone and gotten some tests and bought some stuff at Target, but the combination of the tests and other stuff and chocolate, or whatever the mix was, suggested there was a baby coming.
Detectives, Hackers, and Machine Learning
Tony: It's incredible. We're entertained by detective shows and novels where detectives can put one and one together and come up with pretty amazing deductions. But this is what hackers are doing, and now what machines are doing incredibly well.
I mean, the detectives have nothing on the computers putting all of these pieces of information about our lives together. And some of it's commercial too.
I know this story. Haven't verified it, but years ago,
Walmart figured out through their data collection and data mining that men often came in to buy diapers and beer together. So it allowed them to place the products appropriately in the store. You know.
Petko: I remember that study, it was actually tied to hurricanes. So in Florida, for example, they ordered extra after the hurricane at first.
Tony: Right. And it's important and there's validity to it. And so that's how things go. Diapers and beer is innocuous, seemingly innocent. But you can imagine how that methodology can be extended to all kinds of things.
Back to TikTok. One of the pieces of data I focus on is location data, and it's not only where you work. Most people don't realize how many categories of critical infrastructure have been defined in the United States.
It's not just nuclear power plants and military installations, it's food manufacturing, it's financial banking, it's education, it's biotech, and on and on and on.
Piecing Them Together
Tony: You could be working in one of those critical infrastructure categories and not even be thinking about it. But it's also who you associate with: you could be in the same household with people who have access to critical infrastructure, have friends who do, or go to school somewhere where there are laboratories.
Or you could even be in a coffee shop with someone, on the same Wi-Fi, where someone's figured out how to hop.
For one thing, having all of this location data allows nation-state actors to filter: here are the people we really want to target. They send the list over to the other department, that's the spearphishing department, and they say, "Okay, how are we going to get into Anton Dahbura?
How are we going to get him to unknowingly give us access to the Johns Hopkins IT system?" Then they're off to the races with me and a much smaller number of people that they really want to go after.
Petko: But it's not just location data. I'll give you an example. I've seen certain apps track via Bluetooth what other devices you're around, what other people you're around. If I go to a coffee shop or to my work, my phone, depending on the apps that are on it, will identify,
"Oh look, there are these other Bluetooth devices near me." It just happens that when they look on the other side of the database, "Oh, it's Tony, or it's Rachael," and they notice we went to the same coffee shop every Tuesday or something. Maybe they're related; maybe suggest them as a friend.
Tony: Right. So it's interesting, but it's also disconcerting. Those are the kinds of things that people do to get what they want from us.
[33:23] Continuous Risk Management
Petko: Going back to TikTok, is it just TikTok we should be worried about? Is it because they're Chinese owned? Are there other apps that have similar techniques that we should be worried about as well?
Tony: We know that a whole host of apps collect all kinds of data about us. I think that, hopefully, the government has gotten the message; it's been kind of a wake-up call. We certainly need to figure out how to cut the pipeline of data to China and other countries. That's number one. After that, it gets much more difficult.
That's where individual awareness followed by choice comes into focus. Apple talks about setting all your privacy settings appropriately. I can't even tell you that I do that in all cases. It's kind of a pain and it requires a lot of time. In some cases, I am torn. I'm saying, "Maybe I do want this app to know where I am. It's kind of cool. It helps me in some cases, it's helpful." These are really complex issues that aren't going away anytime soon.
We need to talk about them.
Rachael: It's like this continuous risk management. It's like, "I really want to be able to find my iPhone, and it's been very helpful when I have that setting on, because it's at the Starbucks around the corner where I was 30 minutes ago." With the balance of, "Well, someone else could use that potentially if they were interested in following me."
It's like that's what our lives have become. This is continuous risk balance and how much risk am I willing to take today?
Tony: It should be. The new generation of risk management needs to be informed risk management. After the last three years, we know how that goes.
Ask Questions and Be Responsible
Petko: I do want to point out, in terms of Apple and Android, they started doing a great job when you install apps. They inform you of what the app is asking for. You start wondering, "Wait, why do you have access to my microphone when you're just a ..? Why do you have access to Bluetooth when there's no headset capability or anything?"
Most of us need to start asking those questions and wondering what's the right balance. You can still use the app and just disable those features.
Tony: When those questions come at you, there isn't a lot of information and the timing is awkward. It's like, "I haven't even used the app yet. How do I know whether I want this to be enabled or not?"
Rachael: Exactly. How far back do we go in terms of responsibility too, is the other thing. I worked with the Apple App Store back in the day.
They have a requirements list, but how much are we putting on these app stores to police that things are being developed the right way or with the right interests in mind versus, they're just going to go for everything they can and just try to get it on the store and see what happens.
Tony: It's not a fair game in that case. The sense of ethics, responsibility, it's not infused evenly throughout the whole industry. We can't count on them to have done "the right thing."
You can't put it all on them, if only because they need to be watched. It's an unfortunate reality of life. It's maybe not in their best interest, especially when you have very small companies, startups, tight budgets, tight timelines, a lack of caring, some combination of factors that leads to where we are now.
Change of Thinking
Rachael: Are we slowing down the pace of innovation? If you put in all these extra controls, then this next great amazing technology can't get out into the world. Or do we just say we'll fix it later?
Tony: We are looking to Europe. We're taking the "fix it later" approach; the European Union is saying, "Well, tough luck," and taking a different approach. They're "damn the torpedoes, we're going. We're taking care of this now." I don't know which one's better. They both have pros and cons.
Rachael: I always point to the young generation today, those that have been handed an iPad at age one or two. They've only known technology. They've only known social media, this life of ostensibly living online, where your life is online. Do we see this starting to change generationally with those who have only known life online?
How does that affect us going ahead? For those of us on this call, I remember the days of no call waiting, and the VCR, and, I'm really dating myself, the 8-track tape.
Tony: I think that some of us have a different level of awareness of what's being gathered, what's being used. Not that we completely understand it, but there's awareness. I would venture that in most cases, it's lessened with younger people. That would lead me to say that it should be part of the education of a young person.
It should be taught in school. It's such an integral part of everyone's life right now. For us old dogs, it's harder to learn. For the young ones, it's just a change of thinking, that this is part of our society. It's like social studies or anything else. We need to embrace it, and also help people, arm them with the knowledge they need to manage those risks.
[40:14] CyberCorps: Scholarship for Service Program
Rachael: That actually reminds me, I was reading on your website that you received a $3.66 million grant from the National Science Foundation through the CyberCorps: Scholarship for Service program, which sounds amazing.
Would you mind telling our listeners a little bit more about that? I kind of just sprung it on you. As we talk about the next generation coming up, and the cyber workforce gap that we have, training up that next group is critical.
Tony: Thank you for mentioning that. We've had for many years a National Science Foundation scholarship called Scholarship for Service, or CyberCorps. It is an amazing scholarship for young people who are interested in careers in cybersecurity.
It's a free ride. It is the most generous scholarship I've ever seen. Not only does it cover your tuition, in our case for our master's program in cybersecurity, but there are other schools across the country for undergraduate education in cybersecurity as well. And it follows you throughout if you want to continue on to grad school.
Beyond tuition and books, it gives you a monthly stipend that's considerable, for your living expenses and even professional expenses. If you want to go to a conference or you need a new laptop, it pays for that as well. Now all you need to do, the service part, is that for the number of years you received the scholarship, you go work for the government.
It's primarily the executive branch, but it's not limited to the executive branch. You are getting paid. It's not like you're working for free. So you're well paid and you pick where you want to go to work. They have job fairs and you go around, and it's your choice.
The Future of Security
Tony: I think it's a wonderful program, and it's also kind of under-publicized. You'd think that thousands and thousands of kids would be going after these scholarships. It's actually not the case. Please get the word out. Let's get some more young people involved in taking advantage of this wonderful scholarship program that also gives you an amazing start to a career in cybersecurity, which is lifetime job security, I can say very safely.
Rachael: Absolutely. I love that they start off in the government too, because the government is in desperate need of security talent as well. Getting folks early and in the government track is critical as well.
Tony: It is, yes. So it's a good program all around.
Rachael: Wow. I feel so bummed. I was born too early to take advantage of it.
Petko: I was thinking the same thing, Rachael. I never had any of that.
Rachael: That's fantastic. I know we're coming up on time, so I do want to be respectful of your time today, Anton. With all of the work that you're doing, this amazing, exciting work, how are you feeling about the future of security? Are you feeling really positive about what's to come? Guardedly cautious? What are your thoughts on the next 5 to 10 years?
Tony: I think that it's going to continue to be a big issue. And it shifts; the threat vector is always shifting. Now we're looking at AI-enabled applications. We really never figured out security for our traditional, legacy systems, and now we're getting AI where we don't even know what's going on inside.
So how do we know if it's doing what it's supposed to do, if the system is doing what it's supposed to do?
Keep The Bad Guys Moving
Tony: That gives me a bit of trepidation, but at the same time, we have lots of brilliant people working on mitigation efforts. And there are people who are really bound and determined to go after the hard targets. But I think that once we figure out how to adequately defend most targets, the bad guys kind of move on. It's like house thieves.
You have a good security system in your home. Most of them are going to just keep going. And that's what we hope for. So the more we invest in security research and really galvanize around the idea that security is a critical threat that we need to address, I think the better off we'll be. And just keep those bad guys moving, keep them on the run.
Rachael: That's exciting. Well, I would love to have you back. All the things that you're working on are so fascinating, and I know our listeners would love to have an update in the future.
Tony: Thank you. Well, I've enjoyed talking to you and would love to come back to discuss anything you'd like at any time, just let me know.
Rachael: Fantastic. So thanks again, Anton. And to all of our listeners, thank you for joining us for another awesome discussion this week.
About Our Guest
Anton (Tony) Dahbura is the executive director of the Johns Hopkins University Information Security Institute, co-director of the Johns Hopkins Institute for Assured Autonomy, and an associate research scientist in computer science. His research focuses on security, fault-tolerant computing, distributed systems, and testing. He received his BSEE, MSEE, and PhD in Electrical Engineering and Computer Science from the Johns Hopkins University in 1981, 1982, and 1984, respectively.