The following is a transcript of the “Are Vulnerability Code Scanners Worth It?” panel - it has been minimally edited for clarity.
Gabriel Friedlander: Good morning, good afternoon, good evening - depending on where you're joining us from. We have a really exciting webinar today with great speakers, and I'll let them introduce themselves in a few minutes. Before that, I just want to say that today's topic is security scanners for code.
First of all, why do we even need this? It's part of our secure code training in general, where we're trying to educate development teams on how to become more secure, what tools to use, and what best practices to follow - and I think this is a big part of the development stack, one of the things they need to look at first.
So first of all I'll let you both introduce yourself and then we're gonna kick it off. So Roeland, do you want to kick it off?
Roeland Delrue: Yes, I'll kick it off. Hi everyone, I'm Roeland. I'm one of the co-founders of Aikido Security.
We're a company based out of Belgium in Europe and we're a scanner product and we have a bunch of scanners and very much focused on the SMB market because we combine a bunch of scanners together. I have a background in product management. Now I'm more on the commercial operational side, but I do have a good understanding of how the product works, obviously.
So, I'm here in that capacity today.
Gabriel Friedlander: And Itzik, he's also Wizer's CTO, but I'll let you introduce yourself, Itzik.
Itzik Spitzen: Yeah. Hi everyone. I'm Itzik. I'm the CTO at Wizer, and my background is as a software developer. For many, many years I've developed many different types of products, and security was always top of mind.
And now with Wizer, we're also working on secure code development and so forth - for our product, but also for our own team. I had to go through this question about scanners myself, so it's a really interesting topic for me to talk about and discuss.
Gabriel Friedlander: Let's start with that.
I want you to take the seat of the R&D manager, which you are, and explain why we even need scanners before we go into how they work. What's the problem we're trying to solve? There are, as you know, all types of scanners. I won't go into them all now, but explain at a high level.
Why do we need them?
Itzik Spitzen: I think it's a great question. First of all, the amount of code that developers can review by themselves is limited. Even if they're very knowledgeable - we go through trainings, we teach our teams secure coding and the OWASP Top 10, and we go as deep as we can - we have special projects to handle and prioritize and so forth.
With all that, the coverage we can have on the code is limited. So we started thinking about how we can expand the range of things that we check at the same time, and also adjust to and align with the rapid change of the code base, because it changes all the time.
We're adding pieces of code. We're changing code, infrastructure, architecture, and configuration. We're adding more products. All of that needs to be considered, and if we had to rely on a manual review process, I don't think people would be able to cover that amount of code. We can't have people do it manually. So we were looking for something that could be incorporated into our process and would constantly check a wide spectrum of things - which we can talk about today - a wide spectrum of things that we want to test.
Gabriel Friedlander: So let's explain a little bit more about the risks that you were mostly concerned about. You're talking about developers being overwhelmed with so many things to look at. I get it. But what raised the flag where you were like, ‘okay, this is a big risk. If this happens we're in trouble.’
Itzik Spitzen: So again, a good question. I think the issue is not just one thing, right? It's the overwhelming number of different types of attacks that could happen. We've seen IDORs. We've seen XSS in our code base, and these things keep popping up - and our developers are really knowledgeable.
They know what it is. When they figure it out it gets fixed very quickly, but you know, it is very hard for them to cover all this space.
So what caught my attention is that pen tests show that no matter what we do, we continue to see those vulnerabilities - vulnerabilities that, if something had been continuously testing our best practices with really broad coverage, we could have caught much earlier.
So that was the trigger, I think. So that type of thing.
Gabriel Friedlander: So you think the pen test reveals that. When we do it again and again, no matter how good we are - even with peer reviews and all of that - there's always something to find. And potentially scanners can reduce that even more.
They won't take over all of that, but that's sort of what prompted it. Okay. So Roeland, you're a scanner vendor in this space. First of all, is that what you're trying to solve? And second, how does it work? How do you cover the entire OWASP Top 10? What areas do you cover?
Roeland Delrue: I think one of the great examples is the Log4j incident. When it happened, the whole world was in a panic, and by the time the pen tester you called arrives, it might be too late. So that's a great example of where a scanner can pick it up. And coincidentally, yesterday something called polyfill.io got hacked - they lost access to the domain, and now 100,000 websites are affected and exposed to a supply chain attack. We immediately jumped on it to see if our scanner could pick it up. We found it did, made some tweaks to be sure, and then just started blasting all of our customers with alerts: 'you have this vulnerability, you need to get rid of it right now.'
We have automatic alerts that go out based on criticality, but when something is this critical, we even take the time and effort to email customers personally to make sure they definitely take that thing out and update it. That's where a scanner - with that continuous scanning - definitely has a lot of value. A pen test is like a posed picture: you dress up and make sure you look nice. But that's maybe once a year, and the world is continuously evolving. The NVD database is updated every six hours with new vulnerabilities.
While you don't have to fix everything right away, you might want to get that alert for that new critical vulnerability that arrives for sure.
Gabriel Friedlander: So you're talking about a specific library, right? A lot of developers may not even know about it - it's not part of their code. They're just using the package.
So even if they do a peer review or anything like that, it won't help, because they don't have time to go through the libraries they use and get an update about something that happened with each one - it's almost impossible, especially when you have nested libraries, right?
Like one library using another library - how do you know? So those scanners go deep, right? They don't go just one level - some of them go three levels, six levels down. I would say that's beyond overwhelming, and you're always missing things. Without a scanner, it's inhuman to even expect anyone to go that deep.
Roeland Delrue: Yeah, for sure. That's entirely true. That's the difference between a bad scanner and a good scanner, right? The good scanner will be able to go through those transitive dependencies and take that context. Bad scanners, or simple scanners, will just do a ‘Find all / Report all’, and then it's up to you to triage them.
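[Editor's note] A toy sketch of what "going through transitive dependencies" means in practice - a scanner that only checks direct dependencies misses a vulnerable package nested several levels down. All package names and the vulnerability list here are invented for illustration:

```python
# Hypothetical dependency tree: {package: [its dependencies]}
DEP_TREE = {
    "my-app": ["web-framework", "logging-lib"],
    "web-framework": ["http-client"],
    "http-client": ["tls-utils"],
    "logging-lib": [],
    "tls-utils": [],
}

KNOWN_VULNERABLE = {"tls-utils"}  # stand-in for a CVE feed lookup

def find_vulnerable(root, tree, path=None):
    """Recursively walk the tree, returning the path to each vulnerable dep."""
    path = (path or []) + [root]
    hits = []
    if root in KNOWN_VULNERABLE:
        hits.append(" -> ".join(path))
    for dep in tree.get(root, []):
        hits.extend(find_vulnerable(dep, tree, path))
    return hits

# The vulnerable package sits three levels below the app itself.
print(find_vulnerable("my-app", DEP_TREE))
```

A "bad scanner" in Roeland's sense would only look at the first level of `DEP_TREE["my-app"]` and report nothing.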
Gabriel Friedlander: I would love to touch on that point, but before that, I want to set expectations. A question for both of you. We were talking about how a pen test is often the eye opener - okay, we're finding a lot of issues, we have to do something about it. But can a scanner replace it?
If I put a scanner in place, do I still need to do pen tests? Either one of you can start.
Itzik Spitzen: I can start, representing the consumer - the person who's using it. I would never think about not doing pen tests because I have a scanner. I think they're supplementary in many cases. Pen testers are human beings with a ton of experience, able to look very deep and find those tricky paths into something that could contain vulnerabilities. There's also common sense and other things that I don't think scanners are yet able to apply at that depth.
So I think pen tests are, as you mentioned, Roeland, a picture in time. It's a snapshot, right? And even that is relatively low resolution, because again, we're talking about people.
However, they're trying to find those very big impact things - they go for the crown jewels, right? They really look at the things that could be very dangerous. So I think they're supplementary. I don't think you can replace them, and I don't think a code scanner replaces anything. It just adds - and it adds something really important, which is, from my perspective, the coverage.
Gabriel Friedlander: Just as an example for the audience: we were talking about IDORs, many of which are just logic bugs - expecting a scanner to understand the logic of your app and figure out that there's an access control issue with a way to bypass it.
That's almost impossible for a scanner to do. They're usually looking for specific patterns or libraries. And IDORs have been some of the worst attacks out there, right, Itzik? We've seen a lot of them, and every time they're different - they don't even have a specific pattern.
Itzik Spitzen: Right, because there are multiple levels. There are very simple ones - you just change the ID and get to another object. That's trivial. But there are also multi-step ones, where you work through several levels to reach an IDOR - a reference to an object that you're not authenticated or authorized to access.
That takes a little more time to find, but really talented pen testers find them. And those findings go into the databases that scanners use in the future, right? But again, just like the pen test is a picture in time, I think the scanners have some database of options they go through - and it's limited, it's finite. It doesn't include all those sophisticated cases. Over time it will include more and more, but the world is moving, and as we said, it's not only our code that's moving; the infrastructure and the libraries are constantly changing too - it's a moving target.
It's really hard.

Gabriel Friedlander: Maybe we'll do another webinar, because I don't want people to think that a pen test is an absolute snapshot in time - it also depends on who the pen testers are, what coverage you gave them, and what their skill set was. That's a whole different conversation - how often you do it.
Do you do bug bounty programs on top of that? Do you bring in a red team? That's a whole other thing. So scanners are just a part of that. So Roeland, can you talk a little bit about this: when I use a scanner, is there one scanner that does it all? When we're choosing scanners, how many are even out there? Do they overlap?
Gabriel Friedlander: How do I even get started when it comes to scanning? If you can, give us an example with your product and also without it. Let's say I don't have any products - can I still do something with scanning? If I don't have money, I don't have budget, what should I do tomorrow?
Roeland Delrue: Yeah, for sure. I would say there are many areas or domains out there, and for each area there's a scanner. For example, you can have a secret scanner, which is very different from an open source dependency scanner - and those are two areas you should care about: secrets and open source dependencies.
Static code analysis for the typical SQL injection and cross-site scripting is another one. Then you probably also want to scan your infrastructure as code if you're using Terraform or Helm charts - that would be another domain. For each of these domains there are scanners out there, and some good free versions as well.
For example, in GitHub you get something called Dependabot, and Dependabot is great at finding your outdated or vulnerable dependencies. That's definitely a great starting point if you're trying to get acquainted and your code base is still relatively small without much legacy. It'll definitely do the job.
As you grow, there's a point where you graduate out of that free tool, but to explore, it's great. Another alternative to consider there would be RenovateBot. The same goes for secret scanning - there's a bunch of dedicated secret scanners out there; TruffleHog is one of them.
They have an open source version, and I think GitGuardian has a free version. We also have a secret scanner in our product. It's based on Gitleaks, which is considered the best-in-class open source secret scanner. So you could go for a free scanner, or you could go for open source.
The free ones typically come with some strings attached or a paywall at some point, whereas open source is free forever - but then you need to do the maintenance of setting it up. Gitleaks is not too bad, though; you could run it as a test to see what comes out of it.
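[Editor's note] To make the secret-scanning idea concrete, here's a minimal sketch of the core mechanism tools like Gitleaks build on: regex rules matched against file contents. Real scanners add entropy checks, many more rules, and git-history scanning; the two patterns below are purely illustrative.

```python
import re

# Rule name -> pattern. The AWS access key ID format (AKIA + 16 chars)
# is a well-known pattern; the generic rule is a made-up illustration.
RULES = {
    "aws-access-key-id": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "generic-api-key": re.compile(r"(?i)api[_-]?key\s*=\s*['\"][A-Za-z0-9]{20,}['\"]"),
}

def scan_text(text):
    """Return (line number, rule name) for every line matching a rule."""
    findings = []
    for lineno, line in enumerate(text.splitlines(), start=1):
        for rule, pattern in RULES.items():
            if pattern.search(line):
                findings.append((lineno, rule))
    return findings

sample = 'aws_key = "AKIAIOSFODNN7EXAMPLE"\napi_key = "abcd1234efgh5678ijkl"\n'
print(scan_text(sample))
```

Both sample secrets are fake documentation values, but the sketch shows why this is cheap to run on every commit.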
Gabriel Friedlander: Can any developer just go and do this? How much work is it, and what skills do I need to maintain it? Again, let's say I have a small team - 10 developers, just getting started, or even fewer. Is that a good time to implement a scanner?
And which scanner do I start with? Is there one specific scanner, or a domain I should start with? Or do I just select all, implement every package I can find, and hope for the best?
Roeland Delrue: Roughly speaking, putting it a little black and white, there are two types of five-developer companies.
One is where you have a CTO who has experience, is knowledgeable, and has gone through these things. That person knows they need something for secrets and for open source dependencies - they might have that list in their head somewhere, or they might've read about it or educated themselves.
The second type of customer, again black and white, is the one that doesn't do anything until they're forced. The force typically comes from a customer, because they're trying to sell a FinTech product, or a health tech product to a hospital or a bank, and then they get the security questionnaire.
The security questionnaire is typically the driver of which scanners they look for: what do you use for secret scanning? How do you manage your dependencies? And so on. If you want to check those boxes, you'll find yourself exploring these scanners.
If you do this enough, you'll probably go for ISO or SOC 2, because it accelerates some of these checkboxes. You'll probably crystallize around eight, nine, ten different scanners in total, because those are the ones that keep coming back - and those are the ones that we at Aikido try to productize and put into one product.
Gabriel Friedlander: Let's say I do that. I take those nine scanners. I took the time, I took the effort, I installed them, I checked the boxes. What should I expect to happen next? Is it going to be a bunch of alerts? How overwhelming is it? How much overload will it put on my team, and how do I deal with the output they deliver?
Roeland Delrue: If you're going for a compliance standard like ISO 27001 or SOC 2 Type 2, it might become a burden, because those standards will force you to have an SLA, and the SLA will mandate that, say, a critical vuln needs to be fixed within two days and a high vuln needs to be tackled within seven days. So now you're on the clock, right?
And now it's not ad hoc, so it can start becoming an overload, and that's when you start to triage and make sure things get planned and fixed. That's typically the moment where you're like, okay, maybe we should buy a product that can help us do that. Because at a certain point that's the challenge - the amount of false positives, the lack of context, and the wasted developer time.
And that's why there are products like ours in the market to help you combat that.
Gabriel Friedlander: Who gets those alerts? Is it the security officer or the CTO, who then distributes them to the developers? Or do the developers get them individually when compiling their code or during the build, and have to decide - and be trained about, like you said, severity?
If it's this, do this. If it's that, do that. Like, how does it work? Who's in charge?
Roeland Delrue: I've seen it in many different shapes and forms. I've seen the Slack channel where everything gets pushed into one channel with webhooks, using some Zapier glue to get it all in there. Other people will try to get it into Jira as tickets.
Other organizations do it scanner by scanner. What I mean is, for example, Dependabot can open up PRs, so those PRs will be sitting in your GitHub. Then, for example, if they're all running on AWS, you have Security Hub there, and it might only be the cloud engineer or the CTO who looks in that particular place.
Some people set up a CI gate, where the pipeline is set to fail if there's a critical SAST finding or a critical dependency vulnerability. At that point you're forcing a workflow on your developers - and there's everything in between, or any combination of those things.
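[Editor's note] The CI-gate idea can be sketched in a few lines: a pipeline step that returns a non-zero exit code (failing the build) when the scan report contains critical findings. The report shape here is invented; any real scanner's JSON output will differ.

```python
def gate(findings, fail_on=("critical",)):
    """Return a non-zero exit code if any finding meets the blocking threshold."""
    blocking = [f for f in findings if f["severity"] in fail_on]
    for f in blocking:
        print(f"BLOCKING: [{f['severity']}] {f['title']}")
    return 1 if blocking else 0

# Invented scan report for illustration.
report = [
    {"severity": "low", "title": "Outdated dev dependency"},
    {"severity": "critical", "title": "SQL injection in /search handler"},
]

exit_code = gate(report)
print("exit code:", exit_code)  # a real pipeline step would sys.exit(exit_code)
```

Tightening or loosening `fail_on` is exactly the regulated-industry vs. ship-fast trade-off Roeland describes next.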
That's what's typically referred to as 'shift left,' because you shift left in the software development life cycle. Then it's up to you to decide: do you start in the IDE already? Do you scan the branches, the pipeline, the repos? Where do you put the hooks? It also typically depends on your industry.
When I'm talking with health tech companies, they'll say, 'this needs to be blocked if someone pushes a risky license' - that's more about IP risk. If they push an LGPL or GPLv3 dependency, that build is going to be blocked. They don't care that it blocks the development process, because they're in such a regulated space - versus other, smaller startups, for example in gaming, that don't deal with sensitive PII.
They'll be like, okay, we'll fix that later. We don't want to block the developer in their process. Let's get it shipped. And then let's worry about it later.
Gabriel Friedlander: So what do people usually see in terms of time spent on managing this? For developers, I mean - should I expect 10%, 15%? How much longer will it take to release a build?
Now that I have to deal with this, how much time will it take for my developers?
Roeland Delrue: I would say 5% to 10% maybe.
Gabriel Friedlander: Okay, cool.
Roeland Delrue: There's also ways that you can bring that to 20% or 30%.
Nobody wants that, but you could find yourself in that situation. It depends on legacy, right? If your company has been around for 10 years and you started coding back in 2014, that's a different landscape than starting fresh and vanilla right now.
Gabriel Friedlander: One more question, and then we'll move to a different topic. In terms of quality of the alerts - we talked about false positives, but sometimes I don't know if this even counts as a false positive: I'm using a library, it has a CVE, but I'm not using that functionality. What if I have to spend time investigating, over and over again, something that exists in the library but is totally unrelated to me because we're not using that functionality?
What do I do there? Because that can take a lot of time, especially if I'm using a lot of libraries.
Roeland Delrue: I think - and this is not to plug our product - we're one of the few products that do this. You would need a product to manage that, because you cannot do it directly in your IDE or your Git platform. So it would require a product like ours.
And I guess that's what we do, right? We do the reachability analysis. We go through all the transitive dependencies. We'll identify the dev dependencies to see whether something actually goes to production, yes or no. On top of that, you can configure your repos in Aikido: you can tell it whether a repo is a crown jewel that's internet-connected, or the other way around - an old repo that's not internet-connected - and that will influence the scoring as well.
Before we created this company, we were users of many scanners ourselves, and we found there was a lack of productivity features - by productivity I mean snoozing things, ignoring things. When you ignore something, for example, we give you the option to ignore just the issue, or ignore it across all repos, or just in one repo, or just on a path within a repo.
So we give you that level of granularity to deal with all the small frustrations - when you're in that situation where you want to ignore something, but only for this type or only in that scope, a product like ours allows you to do that.
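[Editor's note] For a feel of what reachability analysis involves, here's a toy version using Python's standard-library `ast` module: a CVE in a library only matters if the flagged function is actually called. The module and function names below are hypothetical, and real reachability engines also follow imports, aliases, and whole call chains.

```python
import ast

VULN_CALL = ("yaml", "unsafe_load")  # pretend this specific call has a CVE

def is_reachable(source, vuln=VULN_CALL):
    """Return True if the source code contains a call to module.func."""
    module, func = vuln
    for node in ast.walk(ast.parse(source)):
        if (isinstance(node, ast.Call)
                and isinstance(node.func, ast.Attribute)
                and node.func.attr == func
                and isinstance(node.func.value, ast.Name)
                and node.func.value.id == module):
            return True
    return False

uses_it = "import yaml\ndata = yaml.unsafe_load(blob)\n"
doesnt = "import yaml\ndata = yaml.safe_load(blob)\n"
print(is_reachable(uses_it), is_reachable(doesnt))
```

The second snippet imports the "vulnerable" library but never calls the flagged function - the kind of finding Gabriel's question suggests should be deprioritized.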
Gabriel Friedlander: Which basically says: look guys, nothing is really free, right? If you use something for free, then you have to put in the hours - you have to put in the resources. I think that's true for most products out there. Take security awareness, for example: thinking you'll just push a button and get a security culture.
I think that's an overstretch, right? You still have to have someone who actually cares, takes care of it, does the follow-ups, and all of those things. So from what I understand - we've talked a lot about some of the issues - your product deals with many of those things. When you say productize, it basically makes it simple to install, consolidates the alerts, looks at reachability, all of those things. And it's not just your product; I guess there are other products in the market that do the same. But that's the idea of free versus paid: totally free, it's up to you to do all of that; paid, some of the automation has been done for you.
Roeland Delrue: That's the typical trade-off. Although we obviously need to keep on innovating - Dependabot is not going to get worse over time, and GitHub is very interested in taking that in-house. Nobody wants you to have to go out and buy another tool, right? And you don't want to buy another tool, but sometimes you have to, or there's no other option. Look at their Copilot, their Dependabot, and some of their security features.
We as a pure security product company cannot become lazy and think we'll be able to charge for certain functionality indefinitely. More and more is becoming free, so our added value needs to become more advanced, more ahead of its time, more innovative, for sure.
Gabriel Friedlander: Do you think AI plays a role in the future of scanners?
Roeland Delrue: AI - or rather, LLMs - will definitely play a role in the future of static code analysis, because these large language models are really good with languages, and not only human languages but also programming languages.
In their creation, they've been trained on a ton of code, so they're really good at spotting human errors. With static code analysis, there's definitely a lot of opportunity for LLMs to identify issues that a classic SAST rule could not always uncover.
They're also really good at validating, because SAST is notoriously bad in terms of signal-to-noise ratio. There are tons of false positives, and an LLM will be able to triage them. An LLM is also able to suggest remediations, because it's smart enough to do that. That is something we're working on right now.
And I'm sure we're not the only ones, but LLMs and static code analysis naturally go hand in hand. For other areas it's going to be less - I mean, they can always enhance things, but I think static code analysis is definitely where you'll see a lot of improvement.
Itzik Spitzen: I actually find DAST to be potentially the most interesting area for AI and LLMs.
And here's why: when you're trying to do a black-box test, you're trying to tackle APIs and so forth. You need some kind of intelligence - artificial or real, human - to find those paths and those ideas that are a bit more sophisticated than just logical gates.
It requires some experience, and a database of situations and combinations and the ways you compound them. So I definitely think AI could help there as well.
Roeland Delrue: Yeah, that's a good remark. I agree with that one.
Gabriel Friedlander: I actually wanted to ask that, and Itzik, you sort of answered it, but for the audience, can you explain the difference between dynamic analysis and static analysis?
Itzik Spitzen: Absolutely. Obviously Roeland can do an even better job than me, but I can tell you how I think about it as a customer, as a consumer of that product.
I think of SAST as reading the code base - trying to find patterns, trying to find dependencies. Dependency scanning is a different thing, as we know, but from my perspective it's part of the static side, because it's included there and you can find those elements.

You can find practices, you can find areas with potential usage of things that could match, and so on. That's static code analysis - it actually goes through the code. DAST, on the other hand, runs against the domains themselves, the URLs. It actually tries to attack them - it's almost like a pen tester.
A white-box pen tester would open up the code base and start from there. But black box, which is typically the more common one, is where you give them the APIs, you give them the app, and they start from there - they start recording things, figuring out how things behave, and then trying to match that, change that, alter that, and find vulnerabilities.
That's exactly what DAST is doing - that's what I expect it to do, from my perspective. And I'll add - I know you haven't asked this - that from our experience, we couldn't really find a really good DAST tool out there. I think it's just a little premature.
DAST is a great idea, but I don't think it has materialized to the extent that SAST has. SAST is providing really intelligent, deep insights, while DAST is maybe just scratching the surface. If you put a pen tester on it, they're typically going to find way more things quite easily than DAST.
But again, I would love to hear Roeland's vendor perspective on that, because it seemed to me like DAST is still really premature.
Roeland Delrue: I've noticed when I'm talking with customers that with SAST, there's typically not a lot of confusion around what it is and what it means - versus DAST.
Some people come to me and they ask for automated pen testing. Others come to me and ask about surface monitoring. Other people ask about DAST. And when you ask more questions and you're probing, like sometimes they mean the same thing. Sometimes they don't. Sometimes the customer doesn't even know.
Sometimes the compliance standard is not clear either. For example, HITRUST is a regulation in health tech, and HITRUST will say you need a SAST and a DAST. So the buyer sets out on the journey of finding a DAST, but they're not always able to articulate what the DAST actually needs to do.
So yeah, it's definitely a term that has been claimed very often, but it's indeed not solidified. It's also one where we're still continuing development, because we feel there are many more things we can do there.
Gabriel Friedlander: So do you feel a lot of people are just slapping 'DAST' on their products as a marketing term, just because regulations require it? Like, 'oh, we also do that.'
Roeland Delrue: Yeah, I think so. SAST has been around the longest - that's the static one. Then the dynamic one came along, and now there's this wordplay between SAST and DAST.
A lot of vendors will claim DAST just to be able to check the box. But I think it's almost up to the customer to decide for themselves what they expect from a DAST - otherwise a vendor will educate you in the way that suits them best. Or ask a knowledgeable auditor who really knows what DAST means in your context, your business, your regulation, and can recommend what a good DAST should look like.

And you might then find a DAST that doesn't even claim to be a DAST but is surface monitoring, and it allows you to achieve the things you're looking to achieve.
Gabriel Friedlander: It sounds like the expectation currently is to check the box for that specific thing. But from what I'm hearing you both say: for now, focus more on static analysis, because it's more mature.
It will probably bring you better results relative to what it claims to do. The dynamic side is in its early stages - very promising, hopefully, with AI coming in and all of that. Maybe you want to already be there and evolve with it.
Roeland Delrue: I think the term has maybe not settled yet, but there are good DAST tools out there.
For example, ZAP, which used to be sponsored by OWASP, will double-check all of your headers - like whether your CSP header is set, which is just good practice. If you can run ZAP - some will call it DAST, some will call it something else - it's a good thing to do. Even if the DAST category hasn't settled yet, it's a good idea to double-check all your headers.
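[Editor's note] The kind of header check described here can be sketched as a simple function over a dict of response headers, with no network involved; the required list below is illustrative, not exhaustive.

```python
# Security headers a checker might require; real tools like ZAP
# inspect many more, plus their values.
REQUIRED_HEADERS = [
    "Content-Security-Policy",
    "Strict-Transport-Security",
    "X-Content-Type-Options",
]

def missing_security_headers(headers):
    """Return the required headers absent from a response (case-insensitive)."""
    present = {h.lower() for h in headers}
    return [h for h in REQUIRED_HEADERS if h.lower() not in present]

response_headers = {
    "Content-Type": "text/html",
    "Strict-Transport-Security": "max-age=31536000",
}
print(missing_security_headers(response_headers))
```

In practice you'd feed this the headers of a live HTTP response, which is exactly the "runtime" aspect Itzik highlights next.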
Itzik Spitzen: I think of them as supplementary as well, right?
There are multiple domains, multiple categories of things - dependencies is one of them, secrets is another - and each category has its place. I think DAST starts with the fact that it's runtime, right? You run it on the real thing.
It can do a lot of things - like you mentioned, the headers, the domains, the encryption - everything is checked in the real world. It's not just 'here's the code base that will run when I deploy it'; no, this is the actual product, and you're testing it. So I think it definitely has a ton of potential, and there are probably a lot of good tools out there.
But it feels like, again, it hasn't matured into something that I can understand exactly the boundaries of what it is. What is it covering for me? And what can I expect from it?
Gabriel Friedlander: So, moving to another topic around this. How do I know if what I've implemented is working? Like, it's producing good results.
What are the KPIs that I should look at? How do I know that I'm making progress from scan to scan, or from year to year, or month to month? I want to feel that I'm improving. I've installed the tool, and now I'm here as the manager. I want this dashboard.
I want to know that it's getting better. What do I look at?
Itzik Spitzen: There are two questions there, actually. I intentionally jumped in before Roeland replied, because I think of it as a customer, and there are two things. One, how do I know that the scanner that I picked is a good tool, right? And that it's doing its job properly.
And secondly, what should I start measuring once I start using it? Those are two questions - okay, with some overlap, right? But here's what we looked for - again, that's from my perspective, for our team.
What we looked for is, first of all, false positives. We don't want to be overwhelmed with a lot of information that someone needs to triage and work on and so forth. The more distilled, concentrated, and crisp the information is, the better for us.
So that was one of the things we looked at: false positives. And related to that, actionable results. Like, okay, I know that there is something going wrong here, but what do I do with that? Actionable results are another thing.
And then manageability - prioritizing and so forth, being able to actually cover more space but see everything in one place. Those are the things that we were looking for.
For us, a good tool is a tool that does all of that, and does it well. We compared a bunch of tools and said, this one is doing a better job because it found more things, but also presented them very crisply and clearly - so there's a way to measure that.
And to the second question, what are the KPIs once we start using it?
Again, we only just started, so we're still getting used to it. But the KPIs we're expecting to track are: false positives - that's an overlap - but also, how frequently are we looking at it? What's the speed of remediation? Are we finding things that maybe others found?
Like, we have some bug bounty testers who come to us and say, oh yeah, we found this on your app. Did we find the same thing with the tool itself? So over time we're going to find out more, and we'll probably be able to optimize that system.
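Two of the KPIs Itzik mentions - false-positive rate and speed of remediation - can be computed from scanner findings in a straightforward way. Here is a minimal sketch, assuming a simple record format of my own invention (the field names and functions are hypothetical, not any product's API):

```python
# Illustrative sketch: computing two scanner KPIs from a list of findings.
from datetime import date

# Hypothetical finding records: when opened, when closed (if closed),
# and whether triage marked the finding a false positive.
findings = [
    {"opened": date(2024, 5, 1), "closed": date(2024, 5, 4), "false_positive": False},
    {"opened": date(2024, 5, 2), "closed": date(2024, 5, 2), "false_positive": True},
    {"opened": date(2024, 5, 3), "closed": date(2024, 5, 10), "false_positive": False},
]

def false_positive_rate(findings):
    """Fraction of findings triaged as noise - lower is better."""
    return sum(f["false_positive"] for f in findings) / len(findings)

def mean_days_to_remediate(findings):
    """Average days a real, fixed finding stayed open - lower is better."""
    real = [f for f in findings if not f["false_positive"] and f["closed"]]
    return sum((f["closed"] - f["opened"]).days for f in real) / len(real)

print(false_positive_rate(findings))     # -> 0.333... (1 of 3 findings was noise)
print(mean_days_to_remediate(findings))  # -> 5.0 (average days open: (3 + 7) / 2)
```

Tracking these two numbers from month to month gives exactly the "am I improving?" dashboard view the question asks about.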
Gabriel Friedlander: So, Roeland - before I give you the opportunity to answer that as well.
From what I'm hearing, it's not only the tool we're using, it's also how it's impacting what we're doing, how it's impacting other areas of our business. So for example, like you said: how many bug bounty findings are coming in, how much time it's taking, what's the quality of our code now versus before.
So there's an impact that these tools make that we want to look at as a result. It's not just isolated to the tool - it's also things we hope will get better in general, right?
Itzik Spitzen: Yep, absolutely.
Gabriel Friedlander: Okay, Roeland, the floor is yours. You're a vendor that does this - how do you tell your customers, look at this, year over year, you should renew?
Roeland Delrue: Look, to see whether things are getting better, our trend-over-time report is one of the most looked at, because there you can clearly see if you're going up or down. That would typically be a quantitative way of measuring it; the qualitative way is how often your developers complain about false positives.
So if they complain less than before and they're happy with the security tasks they've been assigned, that's typically a good way of measuring whether the scanners do a good job or not. And then objectively: if something like yesterday's polyfill.io, or Log4j, happens.
And you go into your scanner and you see that your scanner didn't pick it up - that's a good litmus test of whether your scanner is of great quality. You can also run a second scanner in the background that maybe doesn't create tickets for your engineers; if you want to compare, you can always do that.
I think the pen test - although we're a scanner company, we also do pentests, and we have a bug bounty program as well - I also agree that they're complementary. Your pentest is a great check-in point, especially if you can use the same party every time, you're very happy with that party, and they do a great job in a very creative way.
And if they're having a harder time finding something real - cool. That's more qualitative, but it's a good feeler of, okay, we're in good shape. And then lastly - I'm not sure if it's really a testament to good scanners - but the ease with which you get through security reviews by your customers: if that becomes easier and easier with time, either your setup is getting better or you're just getting better at pretending or checking boxes.
But it's definitely like, if you're breezing through security procurement, then you're probably doing some things right.
Gabriel Friedlander: So, in terms of false positives, do developers have a way to impact that? Like, do they report, oh, this was false, and does the system or scanner sort of learn? Like, is there any machine learning, AI, all of those buzzwords in the system that are trying to do that, or it's just rules?
Okay, this is false positive rule, now it will never happen again.
Roeland Delrue: Right now in our system, it's still rules, and you have the ability to report it to us. Then we get a snippet of it so that we can inspect it and see if we can improve our engines and rule sets. I think there will be vendors out there that already claim AI and machine learning, but in reality, there are not that many that actually do it.
Plus, you don't want your code being sent to OpenAI. So if you want to do it properly, that means being able to run your own LLM on your own servers, having it trained, and all of that. I think the big wave is yet to come. There's probably a lot brewing, and maybe some alphas and betas running left and right, but true AI - the learning mechanism on false positives - is still yet to come there, I feel.
Gabriel Friedlander: So a question for both of you. We're a vendor that does secure code training. How does all of this connect - because we've been talking about helping developers find issues - do you see people write better code?
It's almost like I think I know what the answer is: if you write better code, there are fewer issues in general. But the question is, when are the teachable moments supposed to happen? And what are the tools that help you become better at writing secure code? Can those two things work together?
Can one feed the other? That's sort of what I'm interested in.
Roeland Delrue: Yeah, for sure, right? Like, training is the ultimate shift left. Shift left is all about catching it before it becomes real, or making sure it never happens, or prevention is the better cure or how did they say that?
Or prevention is better than the cure. So yeah, training is definitely super legit, right? It prevents it at its source. But people retire and new graduates come into play, so it's always going to be a never-ending education. But yeah, secure coding is definitely super important.
I think for ISO and also SOC 2, it's one of the requirements; for PCI, there are specific trainings and courses that zoom in on specific aspects. And even if an issue still slipped through the cracks, then you have to go and fix it - and then you have to understand how to fix it and why it was vulnerable.
So even at that point, you need some knowledge, right, or some experience: how do I now go and fix this? A good training right there and then - or a great training in the past - can help you resolve it quicker. As a scanner, we can say: it's here, it's vulnerable because of this, and you need to fix it because it's very dangerous. We try to give a human-readable explanation and some pointers on how to fix it. But the human still actually has to go and do it. So there's definitely a gap there to be filled.
Gabriel Friedlander: Itzik, do you want to say something?
Itzik Spitzen: Yeah. I think it is really important for developers looking at this - even if something tells them, oh, here's a problem that we found, here's why it's dangerous, and so forth - it's going to be, you know, a sentence or two. They're not going to sit there and read documents, right?
Maybe if they're curious, they can search for it, but most developers will rely on their experience. And from their experience, they need to understand two different things. One is the impact: how is it being exploited and what can happen? If they understand that very well, they'll say, oh, damn.
I really need to fix this. Because they understand it firsthand - they have experience, they understand how it plays out, and they understand why it's dangerous. And then in terms of remediation, just like Roeland mentioned, you need to fix it. Sometimes the tool will tell you that you need to fix it, but it's not really actionable until you really understand what you need to do.
Do I need to replace the dependency? Maybe that's easy. But sometimes there's a specific way to handle it - a way to sanitize inputs, certain things I need to do in order to remediate it. That comes with education; it can't be expected from the tool alone, because there's also the context of the different things you're trying to build.
So I need to understand very well how to remediate, because there have been so many cases where we as a team got pen testing done and it gave us a report with a bunch of things. We fixed them and went back for a retest, and the pen testers came back and said: yes, you fixed it, but now you have this.
So if you don't really deeply understand what you're handling and what the vulnerability really is, the scope of it, you're gonna fall again and again and again because you're not fixing the right thing or you're not fixing it enough or you don't understand the core issue and the core cause of the issue.
Gabriel Friedlander: Especially when many times you have a few vulnerabilities, each of which on its own is very low severity - they can hardly have an impact - but together they become a high-severity vulnerability. And those are things that are very, very important. For example, if you just expose the company ID, you're like, okay, so what? It's a company ID - what can they do with it?
Well, there's another API that if you supply it with a company ID, you can hack that company, right? So there are many things that you sort of need to understand. And you can't just rely on the severity as the one dimension. You can't just say, high is, you know, high I need to fix, low I don't need to fix, you know?
You have to understand the context of where things are running and what somebody can do with them, and only then can you really start fixing things. Because if you just go through low and high and disregard low for a later time - I've seen that over and over again - two lows equal a high.
But one thing that I do tell customers, at least, is that if your training is good, you should expect to see fewer issues on your scanner. So the scanner's findings can be a KPI that decreases as a result of training - which is a great way to know that both work, that both the training works and your scanners work, because they basically relate to one another.
So that's something that usually, you know, when we go to customers and they have scanners, that's sort of what we talk about.
So I think we're almost at the top of the hour. So any last thoughts? I have so many more questions, but that will open up probably another 15, 20 minutes of discussion.
So any last thoughts?
Itzik Spitzen: I have one thought that I think is really important, at least from my perspective. Once you make the decision to engage with some tool and run it, it is really important to put a process around it as well.
Because it's not enough to just buy it and say, oh, now I'm protected. There needs to be a process. People need to understand what their role is. You need to train your team: what this tool is, how to use it, how to look at things, what questions they should be asking, what feedback is expected in order to optimize the process, et cetera.
Who's looking at it? When? Is it part of the CI/CD? Likely, yes - but how do we run it? And there needs to be a process for when we find a bunch of things: how do we triage and prioritize? How do we fix them? Who fixes what? Even if we're not committed to an SLA externally, we need to put an SLA in place for ourselves to understand what's going on.
What is being fixed, how, and by whom. So I think this is really important. It's not enough to think about a scanner in the abstract. The scanner is part of the process; if it's not incorporated into the process, it's not useful. It's just another tool that does nothing if you don't look at it.
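The SLA idea Itzik describes can be sketched as a simple check - for example, a CI/CD step that flags findings that have been open longer than their severity allows. This is an illustration only; the SLA thresholds, function names, and finding fields are all hypothetical, not any specific product's feature.

```python
# Illustrative sketch: flag open findings that have exceeded a per-severity SLA.
from datetime import date

# Hypothetical remediation SLAs, in days, per severity level.
SLA_DAYS = {"critical": 7, "high": 30, "medium": 90, "low": 180}

def sla_breaches(open_findings, today):
    """Return (id, severity, age_in_days) for each finding past its SLA."""
    breaches = []
    for f in open_findings:
        age = (today - f["opened"]).days
        if age > SLA_DAYS[f["severity"]]:
            breaches.append((f["id"], f["severity"], age))
    return breaches

open_findings = [
    {"id": "VULN-1", "severity": "critical", "opened": date(2024, 6, 1)},
    {"id": "VULN-2", "severity": "low", "opened": date(2024, 6, 1)},
]
print(sla_breaches(open_findings, today=date(2024, 6, 20)))
# -> [('VULN-1', 'critical', 19)]  (19 days old, past the 7-day critical SLA;
#     VULN-2 is still within its 180-day low SLA)
```

A pipeline could fail the build, or open a ticket, whenever this list is non-empty - which makes the "who fixes what, and by when" question part of the process rather than an afterthought.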
Gabriel Friedlander: Roeland, last thoughts? And also, if you want, talk a little bit about the company itself - I think we sort of got it, there are a lot of things we spoke about that your tool solves, but feel free to spend a few minutes explaining how it works for the audience, and how to get in contact with you if anybody wants to.
Roeland Delrue: I'll start with the final thought, which is around really thinking well about what you're trying to protect. I often compare it with a house. You can buy a surveillance system for your house, but what's your house worth? What if somebody gets in? Of all the houses in your neighborhood, are you the one most likely to be attacked - are you an attractive target? If your house is a cheap house and you don't have many valuable things lying around, then maybe you shouldn't spend that much time and effort trying to find the best scanners and spending money on all these expensive systems. But if your house happens to be a bank, and people come in and out a couple of times a day, and there's a bunch of money lying around and people with access to that money -
now you might want to care about how that works. At that point you should start to think: okay - to take the example of the bank - where are my gold bars lying, who has access to them, and what's the worst thing that can happen? Reverse engineer from that risk and build your strategy around it. Because you can put scanners on everything, but at the end of the day, it's your gold bars that you don't want taken away.
If you're a health tech company, the worst thing that can happen is that somebody steals the health data of the patients and like that could bankrupt your company. So that's where I would focus. And that's how I would build out my program and do my spending if you will.
Gabriel Friedlander: I would also remind people that a lot of software companies are part of the supply chain.
So if your product is installed or used in an operating room, hacking your software can result in injuries - physical injuries. People need to remember that it's not just about their money; it's also about their customers. And if their customers get hacked because of you, that's a big problem for you as well.
But definitely, like you said, take a risk approach. Ask yourself what's the worst that can happen because you can't defend everything the same way. It's just almost practically impossible to look everywhere. You want to look at your crown jewels more than you look at other stuff. So start there, look for the scanners.
Roeland Delrue: Then, on Aikido: me and my co-founders used to work in software companies ourselves, mostly in the SMB space. And we were using a bunch of different scanners for the different areas - secrets, open source dependencies, infrastructure as code, static code analysis, DAST.
And we were using a bunch of different tools and we spent a lot of time vetting them, like for each of those areas, you can talk to four different vendors and then they will drag you into demos and qualifications and then present you with hefty quotes. And then we went through all of that and then we were spending a lot of money, but then we were finding ourselves with tons of false positives and alerts and notifications.
So we hired a security engineer to triage and help with all of those things. But then that person left the company, because it's not such a rewarding job to do. So at that point we were left with that challenge. And that was the kicker to start Aikido, because there simply wasn't a great SMB product out there.
There are only expensive CISO products out there, for the majority at least. So we combined all of those areas that we cared about in the past - and found consensus with our customers that you should care about them - and we have a scanner for each. And we specialize a lot in taking the false positives out, making results actionable, making them human-readable.
I think the reason our company is doing well is our PLG (product-led growth) approach. You can just try it out - you want to see it to believe it, or, you know, the proof of the pudding is in the eating. So we like it very much that people just sign up and try out the free version, which is quite generous, actually.
And then, if and when they feel like they need to up their game, we have some paid plans that allow you to go further and add some more customization. But that's, in a nutshell, what we do and who we are.
Gabriel Friedlander: I love that approach. That's very similar to what we do. We started with a free security awareness training version, which still exists, and if you want a little bit more, then we have a paid plan. So how do people contact you? Do they go to the website and just fill out a form, basically? Do they need to sign up, or can they just go into the product and play around with it immediately? Or do they need to talk to a sales guy?
Roeland Delrue: They can just go to the website - aikido.dev. There's no need to talk to salespeople at any point in time. I think it's on the website, like no need to talk to sales, but obviously if you want to talk to somebody and get a demo, we're obviously there, but we're more of the philosophy of joining your buying cycle instead of enforcing our sales cycle on you.
So either way, you can try it out - sign up for free. It takes two minutes, literally. There's no credit card needed; we've made it really easy to sign up. You can also book a demo there, or you can email me directly - it's just Roeland at aikido.dev - and you'll end up with me.
I'll be happy to accommodate you.
Gabriel Friedlander: Cool. And it sounds like if people have follow-up questions to this webinar, you'll be happy to answer.
Roeland Delrue: Sure. Yeah. My pleasure.
Gabriel Friedlander: Great. Okay, guys, thank you very much. I've learned a lot, and I hope the audience did too - this was a great discussion.