
Just a small announcement, which I will share with everyone, and then the presentation of Svetlomir Baleski, who is up next. After it we will have a short coffee break, outside to the left, so you can rest a little from all this information you have heard. Just a few more words. Regarding what Minister Bozhanov asked of everyone present: if any of you want to participate in the specialized group that will eventually respond to cyber incidents, that will help the state administration respond to cyber incidents, you can send a very brief summary, a short description of what you can offer and, of course, a confirmation of your interest, to contact@bsites.bg. These will be passed on to Minister Bozhanov without any filtering, I promise you, so that he can put together a task force with which to carry out this activity. I give the floor to Mr. Baleski.

Hello, I assume we can hear each other. I will now give a short presentation about DevSecOps, the famous buzzwords and so on. We will mainly talk about buzzwords; I apologize to the people who are watching this, especially the person we spoke with earlier, but today we will keep it a little more informative. We won't talk about anything complicated, absolutely nothing that is not already known, but it is nice to clear up a lot of the big buzzwords, and that is the idea of the presentation. Ah, ok, great. Now, hello again, my name is Svetlomir, but you can call me Svet; most people do, especially the English-speaking ones,
because the full name is difficult otherwise. If you need me for something, we will leave it for after the presentation, and then I will come to you. As I said, I have been working in IT for about 8 years, and in security for 4 of them. At the moment I work as a DevSecOps engineer in an Israeli company called Natural Intelligence, where I am responsible for some of the things we will talk about in this presentation.

First, let's talk about DevOps. There is no bigger buzzword; I doubt there is anyone in the room who does not know what DevOps is or has not heard of it. Extremely modern, it works very well, and so on and so on. But I have to cover it anyway. DevOps is a philosophy, more than anything: it is not an exact science, and it is exactly what brings software development and operations together. It is very nice: there are feedback loops, everything is automated, and everything happens through the famous buzzwords. As we said, in this talk we will mainly go through those buzzwords; pure DevOps engineers know more about this than I do, and they build the solutions. In short: what is continuous integration, what is continuous delivery, what is a pipeline? Continuous integration: every time there is a change in the code and it is pushed to the repo, everything is automatically built and tested. If all the builds and tests pass, continuous delivery takes it to the testing and staging environments, and then we go to the production environments. Nothing that is not done almost everywhere today. What is a pipeline? The set of practices that developers and ops people implement together to build, test and deploy quickly, easily, conveniently, without any problems.

This is all very nice, but there is no security here. Nowhere is it mentioned what security is, where exactly we put it, or what exactly we do inside. Now, what is DevSecOps? Or at least one of the understandings of DevSecOps, because in the few years I have been doing this I have heard a lot of other variants of what DevSecOps is and how to put security into DevOps. We integrate security into every phase of the software development lifecycle, from the initial design to testing.
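As a toy illustration of the CI/CD flow just described (the stage names and checks here are made up for the sketch, not any specific CI product):

```python
# Toy sketch of a CI pipeline: every change is built and tested
# automatically, and only if all stages pass does it move on toward
# production. Stage names and the "change" dict are illustrative.

def run_pipeline(change, stages):
    """Run each stage in order; stop at the first failure."""
    for name, stage in stages:
        if not stage(change):
            return f"failed at {name}"
    return "ready for production"

stages = [
    ("build", lambda c: c.get("compiles", False)),
    ("unit tests", lambda c: c.get("tests_pass", False)),
    ("deploy to staging", lambda c: True),
    ("acceptance tests", lambda c: c.get("acceptance_pass", False)),
]

print(run_pipeline({"compiles": True, "tests_pass": True,
                    "acceptance_pass": True}, stages))
# A change that fails unit tests never reaches staging:
print(run_pipeline({"compiles": True, "tests_pass": False}, stages))
```

The point of the sketch is only the gating: a failing early stage short-circuits everything after it.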
One of the main ideas, and I think we will have another talk on this a little later today, is that everything should be shifted as far left as possible. What does this mean? It means that the security problems we find in the development phase are solved very easily and quickly. The problems we find after things are already in production, when a pentester or, as we say, a bug bounty hunter comes to you and says "hey, you have this problem", solving those usually means you may already be hacked. It means a lot of time, a lot of nerves, downtime, people working around the clock. It is great fun when some bug bounty hunter in America, where it is 5 o'clock, sends you an email at 10 o'clock on a Friday evening saying "hey, you are hackable here", and you gather the whole team at 10 in the evening, people pound on the keyboards without knowing exactly what happened, and you wrap up on Sunday morning, go home and straight to bed, right? Very pleasant, and extremely expensive. That is why one of the ideas of DevSecOps and of modern application security is to find the problems as early as possible. Found early, the fixes are orders of magnitude cheaper and the risk is close to nonexistent.

Now, what are the main challenges that come from DevOps as a whole? With waterfall there were releases: we had time to test everything in theory, we released once a month, once a year. I have seen releases that happen once every two years. With DevOps, in theory, such things don't happen: everything goes through continuous integration and continuous deployment, and changes can be pushed, built, tested, sent to staging and later to production. In 2020, and unfortunately there is no newer data because Amazon has not published any, they deployed around 23,000 times a day, which comes out to a deploy every few seconds, and personally I found that hard to believe at first. There are so many changes that
we can't keep up with them using the traditional methods. There is no release moment, there is nothing to gate on. This is one of the main problems we have here. We also don't have a clear design phase. There is no moment when the solution architect, or the software architect, or fifty architects sit down, hash it out, something comes out at the end, the teams start coding, and after 3, 4, 5 months we have a release. In waterfall, during that time the design has been passed around: people have looked at it, done a threat analysis, thought about it, and along the way spotted most of the problems. Here we are chasing a moving target.

This is the next problem: fail early, fail fast. Everything has to be very modern and throwaway. That is all very nice, but if this code really is throwaway, why should we test it for security? The chance that this thing ends up in the bin is huge, so what are we doing our work for? We test it, we review it, we pay licenses for the test tools to integrate things, and then it gets thrown away at some point. And the other main thing that comes from DevOps, not always a problem but something we have to deal with, is microservices. All the small microservices that talk to each other: that by itself is not such a problem. The problem is that, because all the microservices talk to each other in one way or another, the attack surface, that is, the set of things that can be attacked, becomes much, much bigger. So these are the main challenges. They are not problems as such; they are things that come from the shift in technology. The technologies are changing, we are starting to do new things, and we need to address them. As I mentioned, we don't have a laser pointer here, but we will manage: this is how things look in waterfall, as on the previous slide. Requirements analysis, design, coding, testing, hardening, and finally we go to production. I have
seen projects where this takes years; I have been involved in such projects for years. People do very complicated things and they can't let them go, so they have tried to move to DevOps models, and the DevOps models grew out of this. And how do we actually fit security in? The idea of so-called Continuous Security is exactly like Continuous Delivery: there is security everywhere, in every phase. We start with security; when we work, we work with security in mind and with security practices, which we will talk about today. These things are considered from the very beginning. Because, as you can see, up there in the classic waterfall, security only appears around hardening and testing, past the middle of the road, when many of the things are already done. Really, once the developers have written all the code, and if you have been in this situation, I have: the developer is told "the tests failed on this, I confirm it needs to be fixed". The person goes to fix it, and fixing it takes two days, because a colleague wrote it three months ago. He has to orient himself first; he has found the problem, but it turns out it should not be fixed in his code, because it was written by Pesho, and Pesho no longer works on the project, or in the company, and left no documentation, and nobody has any idea what to do. It takes a person two days to fix something that could have been fixed in half an hour if it had been discovered in time.

This is actually the idea of shift left. One thing is that when issues are fixed before production, all the security issues are caught, and you can fix them before a real problem occurs. And second, fixing them really is faster, which means less time, less nerves, less money, because the time of these people is money. And releases slip, or rather the promised functionality slips, because we learned about the problems very, very late. So this is the main idea of the so-called shift left, which is another buzzword here.

"Nice, this sounds great, but how do we do it?" We all agree on the idea. I have put some ideas here; they are not exhaustive, just some ideas, because time is limited and we won't be able to cover everything today. People teach whole courses on this topic, people much better at it than me, who have been doing it for a long time. But today we will cover the main things. Security by default: frameworks, templates, everything you can use and reuse, especially as developers; if you can reuse something that is already secure, use it. Because all
these frameworks, templates and so on are created by people who have been doing this job for years on end, and the chance that they are better at it is much greater. Don't try to reinvent the wheel. I have seen things literally sent back from manual security code review: you have written the encryption yourself, you have implemented the algorithm yourself. That takes a lot of time, and on top of that nobody is sure it is secure. This is one of the ideas of secure by default: to avoid exactly such things.

Security as self-service. I am sorry the text is hard to see on the white background, but the point is the same: the security engineer should not be a blocker. In the classic waterfall, in the other processes we have seen, the security engineers are, unfortunately, a bit of a blocker: "block here until I find time to look at it, you won't do anything, because it may not be secure". Unfortunately, in the 21st century things don't work that way. We, and by "we" I mean DevSecOps, security engineers, the security people, are the ones who have to work with the development teams so that the job gets done as easily as possible. There is a small gap to bridge: developers are very technical people, and most of the ones I have met are genuinely interested in security and actually know what they are doing. We just need to find a common language with them. I don't know how many of you have been in this situation, but I have: projects where security is so badly incorporated, waterfall-style projects, that we stop everything and wait for a security engineer to say that everything is secure, and 20 people sit waiting for one security engineer. In DevSecOps, in DevOps, that just doesn't work. That is why security as a self-service: we don't block the developers, we provide them tools, we make things as easy as possible for them, because in the end our goal is a common one.

Infrastructure as code. I assume everyone knows what it is: you write some templates, whether it's Terraform, whether it's
CloudFormation or whatever you work with, and this gets applied to the corresponding cloud. Since here we are talking about application security and DevSecOps: infrastructure as code should pass the same checks as the code the developers write. There are tools for this purpose too; today we won't talk about specific tool brands, but if anyone is interested we can talk afterwards, you have my email, or we can just discuss the ideas from the presentation. The point is that infrastructure as code, first, also passes through our security tests, and second, it helps us set things up securely, without someone having to come by and configure things manually. Security groups: will this thing be open to the whole world? No, it won't be open to the world. You write your infrastructure as code, pass it through security testing, and you know you are much better protected, apart from it being faster.

Small incremental changes. This is something that comes from DevOps, from agile. When things ship as one big release with a lot of pieces tangled together, the chance that a big security vulnerability gets in without anyone catching it is actually higher. When the changes are small, the chance that something serious slips in unnoticed, say something with a CVSS score of 10, is much lower.
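The infrastructure-as-code checks mentioned above can be sketched roughly like this; the resource format is a simplified, hypothetical stand-in for parsed Terraform or CloudFormation, not any real tool's API:

```python
# Minimal sketch of an automated IaC security check: scan parsed
# infrastructure definitions for security groups open to the whole
# world (0.0.0.0/0 ingress). Field names are illustrative.

def find_open_security_groups(resources):
    """Return names of security groups that allow 0.0.0.0/0 ingress."""
    findings = []
    for res in resources:
        if res.get("type") != "security_group":
            continue
        for rule in res.get("ingress", []):
            if "0.0.0.0/0" in rule.get("cidr_blocks", []):
                findings.append(res["name"])
                break
    return findings

resources = [
    {"type": "security_group", "name": "web",
     "ingress": [{"port": 443, "cidr_blocks": ["0.0.0.0/0"]}]},
    {"type": "security_group", "name": "db",
     "ingress": [{"port": 5432, "cidr_blocks": ["10.0.0.0/16"]}]},
]
print(find_open_security_groups(resources))  # ['web']
```

A check like this can run in the pipeline and fail the build before anything world-open ever reaches the cloud.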
High speed of delivery. Many of the things we have change day by day; the picture is moving faster and faster. This is not something we add; it comes ready-made with DevOps and agile. There is the so-called honeymoon period: potential attackers, in order to do something against your infrastructure or your application, need time. They need to see how it all works, and that takes time. So the high speed of delivery in DevOps actually works in our favor. On the one hand it is hard, because we need to incorporate security processes and practices at the speed of DevOps; on the other hand it is an advantage for us, because attackers need time too. So it is partly an advantage and partly a burden, but either way it is something we have to get used to.

Now we will talk about how security is implemented in continuous integration and continuous delivery, or how things actually happen; I wrote only continuous delivery on the slide, but that is another question. Can you see the picture I drew? Not very well? Ok, sorry. I don't think there is a way to zoom in, so we will talk it through. Do we have a laser pointer? No? Ok, I will try to explain it, and you will get the presentation afterwards. This is not something complicated or new for most people. First, what happens pre-commit. This is while the developer is actually writing code, before the code is
committed. What security measures can we apply here? Threat modeling. Do you know what threat modeling is? Okay, a lot of people do, great; I won't go into detail, there is a separate slide for threat modeling. Threat modeling is a way to think like an attacker: how, in theory, architecturally, your app could be hacked, to put it most simply. These things happen iteratively, and since the changes, as we said before, are small, in practice this process does not take as long as it takes in waterfall. Analyzing a design that spans 20 diagrams is completely different from saying "come here, we are changing the database connection" or "come here, we are adding a new view"; those are relatively simple questions, handled iteratively, and there is no need to reinvent the wheel every time. Common sense.

Next, SAST and Software Composition Analysis integrated into the IDE. We will have a separate slide on what these other buzzwords mean, but on the whole these are static scans, and they can run at the IDE level: the IDE says "please don't send raw queries to the database". A big part of the problems can be caught there, in the pre-commit phase. Security code reviews can also be done here, when you are not sure. They can happen at a higher level too, once the code is already pushed, but it is good for developers to have access to the security engineers, to know who can help. In practice: a colleague and I were once doing something new, and someone had to read the RFC and figure out what needed to be written. Developers are technical people, sometimes people underestimate them, but they are actually no less technical than security engineers, sometimes much more. Together with that colleague we went through the RFC precisely, so that he could implement it in the software. This happens while writing, with security engineers or with so-called security champions. It is nice to have those in the teams. If you have more than 3-4 teams, it is very good to have at least one person who understands security, usually a senior developer who can help with security at the level of the other developers. It works great when this is an ordinary day-to-day developer, just a senior one with a desire to do it; there are a lot of people like that, and it works very well.

After that, we have committed. What are the other things we can add? Up to here, things were more or less manual. From here on, the code is committed and everything needs to run automatically from the continuous integration side, the Jenkins server or whatever you use there.
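One of the pre-commit checks mentioned earlier, catching hardcoded credentials before the code ever reaches the repo, might look roughly like this minimal sketch (the patterns are illustrative; real tools use far richer rule sets):

```python
# Hedged sketch of a pre-commit secret scan: look through staged
# source lines for hardcoded credentials. A real hook would read the
# staged files from git; here we just scan a list of lines.
import re

SECRET_PATTERNS = [
    re.compile(r"(password|passwd|secret|api_key)\s*=\s*['\"][^'\"]+['\"]",
               re.IGNORECASE),
]

def scan_for_secrets(lines):
    """Return (line number, line) for every suspicious line."""
    findings = []
    for lineno, line in enumerate(lines, start=1):
        for pattern in SECRET_PATTERNS:
            if pattern.search(line):
                findings.append((lineno, line.strip()))
    return findings

code = [
    "db_host = 'db.internal'",
    "password = 'hunter2'",   # this one should be flagged
]
print(scan_for_secrets(code))
```

Wired into a git pre-commit hook, a non-empty result would reject the commit, which is exactly the "catch it before it is committed" idea.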
Software composition analysis and SAST: as we said, there are separate slides for these; they are the automated static tools. Then security unit tests, if such exist; some of these things can be tested automatically. Let me note something that I will keep repeating: the idea of this whole process is to create secure software, not to replace penetration testers. The fact that we have such processes does not mean that we do not have a penetration tester. But if the penetration tester comes to you and says "here I found SQL injection" or "I managed to log in as admin", you have totally wasted the penetration tester's time. That time costs money and can be used for much more meaningful things than issues you really should be able to catch yourself. The idea of what I am showing you is to catch not 100% of the things, but all the low-hanging fruit, everything that can be caught automatically, without putting a human on the job. Because in the end the penetration testers are paid for their time, and the invoices are not pleasant to sign.

Then we move on; I don't know how well you can see. The first square was pre-commit, the second one is continuous integration. By this point the code has been built into a binary, and we are in the acceptance testing phase.
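The security unit tests mentioned in the continuous integration stage above are ordinary unit tests whose assertions are about security behaviour. A minimal sketch, with a hypothetical access-control function standing in for real application code:

```python
# Sketch of "security unit tests": regular unit tests asserting
# security properties. get_account is a made-up function for the
# example, not any real API.

def get_account(user, account_id):
    """Toy access check: users may only read their own account."""
    if user.get("account_id") != account_id and not user.get("is_admin"):
        raise PermissionError("access denied")
    return {"id": account_id}

def test_user_cannot_read_other_accounts():
    try:
        get_account({"account_id": 1}, account_id=2)
    except PermissionError:
        return True
    return False

def test_admin_can_read_any_account():
    return get_account({"account_id": 1, "is_admin": True}, 2)["id"] == 2

print(test_user_cannot_read_other_accounts(),
      test_admin_can_read_any_account())
```

Tests like these run in the normal CI test stage, so an access-control regression fails the build like any other bug.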
What can happen here? What do we do? We will have another slide about what DAST is, maybe I should have brought it forward, but in short this is dynamic testing. Here we run targeted DAST, because DAST is a very slow process: most full DAST tools, the ones that really mimic a penetration tester, run slowly, but targeted DAST can be added automatically. Then security smoke tests, and IAST tests; I will talk about IAST again on a separate slide, but here we can add IAST checks. This is the phase in which the automation QA is already exercising the application. Manual pen tests, fuzzing and more detailed SAST can also be done in this phase, but these things are done out of band. What does that mean? Out of band means they are not part of the pipeline: you don't make the pipeline wait for a test that can take days, or for a manual pen test, just to say "fine, carry on". These things are run on their own schedule; they are manual processes, but they are incorporated again at this stage, after we have something more complete. Really, the more expensive, time-consuming tests are done at the end; the cheaper ones come first, so we can fail quickly. You don't start with a manual pen test on something that has not yet passed the other phases, so those things are pulled towards the end. And in production we have the standard runtime defenses: web application firewalls, RASP, that is, runtime application self-protection, and there is a separate slide for it because it is a less known buzzword, and bug bounty. All of this happens when things are in production.

Actually, how do I... ok, wrong button. So far we have talked about what to put where; the presentation will be available if something is not clear. I repeat, the idea is to set a baseline and talk about basic things, really nothing complicated. So, the well-known tools, SAST and DAST. SAST is static application security testing: essentially a linter that passes through your code and tells you "there is a problem here, there is no problem there". These are the most used tools, really, and the easiest to use. You can put them in Jenkins or in other tools you use.
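To make the SAST idea concrete, here is a very small, hypothetical sketch of a static check: it walks the code without executing it and flags SQL built by string formatting. Real SAST tools are vastly more thorough:

```python
# Toy SAST check: statically walk a Python AST (no execution) and
# flag lines where a string containing SQL keywords is built with
# % formatting or + concatenation, a classic injection smell.
import ast

def find_sql_string_building(source):
    """Return line numbers where SQL-looking strings are assembled."""
    findings = []
    tree = ast.parse(source)
    for node in ast.walk(tree):
        if isinstance(node, ast.BinOp) and isinstance(node.op, (ast.Add, ast.Mod)):
            left = node.left
            if isinstance(left, ast.Constant) and isinstance(left.value, str):
                if any(kw in left.value.upper()
                       for kw in ("SELECT", "INSERT", "DELETE")):
                    findings.append(node.lineno)
    return findings

sample = '''
query = "SELECT * FROM users WHERE name = '%s'" % user_input
safe = "hello " + name
'''
print(find_sql_string_building(sample))  # [2]
```

Note how the finding is a concrete line number: that directness (and the false positives that come with it) is exactly the SAST trade-off described above.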
You can put it in the SCM: in Git, in GitHub, in GitLab. GitLab has its own built-in system that runs the checks. You can use external tools, internal tools, everything. This is the thing that is done most often. It really just goes through your code: when it sees a raw SQL query it says "here there is a problem"; when it sees, for example, hardcoded credentials: problem here, problem here. This is the easiest tool to use, the easiest to work with. As I told you, there are lighter options for the IDE that check as you type, and beyond those checks you can run the full scans. Usually the commercial products ship both with IDE options and with versions you can put in continuous integration or directly in GitHub, or in Git, whatever you use. But they give the most false positives, really. They are fast, comparatively cheap, easy to work with, prone to many false positives, and they give you very direct information: on this line, in this code, there is a problem, go fix it. They can't solve complex problems: it is harder for them to catch things that span too much code, and they require some tuning.

DAST is dynamic application security testing. It mimics a penetration tester: it tries to do what a human would do, of course with much less success. For DAST the application must be built and deployed, to test or staging. Don't use it against production; there is a big chance it will ruin a lot of things. It is mainly used on staging and testing. There are practically no false positives, by definition: as long as it is hitting the real application, if it finds something, it is usually not a false positive but a real problem. The problem is that the result from DAST is usually "hey, when you poke the application like this, this thing happens"; there is no pointer to code, nothing, so the developers have to go in and find it in the code themselves. But yes, very nice tools, relatively expensive, hard to incorporate into the CI/CD pipeline; they are often run once a week, once a month, depending on how you schedule them. There are lighter options: free tools like ZAP, if someone has worked with ZAP, can be set up quickly, in 15 minutes, to run on every build. But setting up a commercial DAST tool in 15 minutes just won't happen; those things often take days. And these checks don't necessarily need dedicated tools. For example: I try to log in, with admin/admin, to the application that was deployed 10 seconds ago, and if that doesn't fail the build, that is a dynamic test too.
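That admin/admin check can be sketched like this; the login function here is a stand-in for a real HTTP request against the freshly deployed application:

```python
# Sketch of a tiny dynamic smoke test: right after a deploy, try a
# list of default credentials against the running application. The
# login callable stands in for a real HTTP login request.

DEFAULT_CREDENTIALS = [("admin", "admin"), ("admin", "password"),
                       ("root", "root")]

def default_credentials_smoke_test(login):
    """Return the (user, password) pair that worked, or None."""
    for user, password in DEFAULT_CREDENTIALS:
        if login(user, password):
            return (user, password)   # smoke test failed: defaults work
    return None                       # no default credentials accepted

# Stand-in for a freshly deployed app that shipped with admin/admin:
accounts = {"admin": "admin"}
print(default_credentials_smoke_test(lambda u, p: accounts.get(u) == p))
```

Wired into the pipeline, a non-None result fails the deploy, which is the whole point: the check is dynamic (it runs against the live app) yet needs no dedicated DAST product.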
Whether you wrote it by hand or took it from a ready-made tool, the end result is the same. So some of the simpler checks may not need their own tools at all; it is just good to know what you are doing.

Software Composition Analysis. This is the most fashionable thing right now: it scans your dependencies for vulnerabilities. How many of you were up at night when Spring4Shell came out? I personally was up that night. We had Software Composition Analysis, which showed us which versions of Spring we had, and a lot of things were much easier. The real idea of Software Composition Analysis is to look for known vulnerabilities in the dependencies. Unlike the other tools, it does not discover anything by itself: it pulls the list of dependencies and checks it directly against its database. "Here you use log4j version such-and-such; you have a potential problem, fix it." The more modern commercial tools can open pull requests for you, which is extremely convenient; we won't dwell on particular tools. Again, this can be incorporated in many places, including in Git. GitHub even has built-in features, and you can use third-party plugins; in practice it is a great place to incorporate these checks. Or in the IDE: again, these version checks can be run in the IDE as well. Usually the IDE-level checks cover the direct dependencies; the more complete tools also check the transitive dependencies recursively. It is quite easy, and again, these things have to happen as early as possible. It is much easier for one of your developers to bump the version of log4j, or whatever the library was called, while developing, than when the thing is already in production, or even in testing. For this reason everything is shifted to the left, and this is one of the ways to achieve it. The paid tools, if you spend the money, can open pull requests for you.
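A minimal sketch of what Software Composition Analysis does under the hood; the vulnerability database here is a tiny hardcoded stand-in (real tools query advisory feeds), though the two advisory IDs shown are the real Log4Shell and Spring4Shell ones:

```python
# Minimal SCA sketch: take the list of pinned dependencies and check
# each (package, version) pair against a known-vulnerabilities
# database. The database below is an illustrative stand-in.

KNOWN_VULNERABLE = {
    ("log4j-core", "2.14.1"): "CVE-2021-44228",   # Log4Shell
    ("spring-beans", "5.3.17"): "CVE-2022-22965", # Spring4Shell
}

def scan_dependencies(dependencies):
    """Return (package, version, advisory) for every known-bad pin."""
    return [
        (pkg, ver, KNOWN_VULNERABLE[(pkg, ver)])
        for pkg, ver in dependencies
        if (pkg, ver) in KNOWN_VULNERABLE
    ]

deps = [("log4j-core", "2.14.1"), ("guava", "31.1")]
print(scan_dependencies(deps))
```

Note that nothing here inspects your code: exactly as described, the tool only matches the dependency list against a database, which is why it is cheap enough to run on every commit.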
They can do even more, and in practice you often only need to accept the pull requests. If everything is set up correctly you will not have problems, and it is a very good practice to use one of these, whether open source or commercial, depending on the budget you have.

IAST and RASP. Now, I mentioned this already, and I am sorry about the font, it may be a little hard to see. IAST is one of the best ideas in theory. Personally I have only worked with IAST in trials, not in full production use. It is one of the newest things, the newest of the new, although it has been on the market for 3-4-5 years, and I have not seen an alternative to it. How does IAST work? In practice it is used either in combination with dynamic testing or instead of dynamic testing. It works inside your application: it follows the traffic between the various components of the application and looks for known attack patterns inside that traffic. The result is fewer false positives while catching almost everything. If a vendor claims they catch 100% of everything, take it with a grain of salt; but in theory, and in practice, IAST should be able to catch the most problems, and there are very well working systems. There are problems too, though, several of them. This thing needs to be incorporated into your application, inside it. Every application is instrumented; the IAST or RASP agents, and the two work identically in this respect, sit inside your application. That is individual for every application, and not all frameworks are supported: Java and Spring surely are, but non-standard frameworks and languages are not. It works very well, but it requires intervention to instrument. Not that it is difficult; usually you have to add something to a Dockerfile or the like. It is not very hard, but it requires much more intervention than the other tools. Again, what does it do? It looks at, for example, the frontend, the backend, the database, all the layers and the traffic between them, and matches known patterns. Traffic has to be generated, though: IAST does not generate traffic by itself, someone needs to drive traffic through the application. That will be either your automation QA with their tests, or even the manual QA, or you can use a DAST tool for the purpose and automate it completely. These are probably the most expensive solutions I have seen. Many people claim that they work well; comparing how tools perform, good, bad, what they find, is a topic for another lecture, and I am not sure I have an answer for myself on how to compare two tools, let alone two types of tools, which is why I have shown you all of them.

There is also the option, instead of IAST, to use the so-called RASP. This is the same thing, but for real-time protection: if it sees traffic that, say, would run a DROP TABLE against the database, it stops it and raises an alarm. You have to judge for yourselves whether these things fit your case. I repeat, the more you understand the problems, the better; everything else is detail. I wrote out positives and negatives, though I wrote them late at night, so do check the presentation. The positives, again: few false positives, many true positives, and the problems are relatively easy to localize, especially compared to DAST. Once an app is instrumented, there are vendor options covering both; there was at least one vendor that offered both of them together. You instrument once, which is the most time-consuming part, and then both work.
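The RASP behaviour described above (watch what the application actually executes and block known attack patterns, such as a smuggled DROP TABLE) can be illustrated with this toy guard; real agents instrument the runtime rather than wrapping functions by hand:

```python
# Toy RASP illustration: a guard sits between the application and the
# database call, inspects each query at runtime, and blocks known
# attack patterns instead of merely reporting them.
import re

ATTACK_PATTERNS = [re.compile(r";\s*DROP\s+TABLE", re.IGNORECASE)]

class BlockedByRasp(Exception):
    pass

def rasp_guard(execute_query):
    """Wrap a query-executing function with a runtime pattern check."""
    def guarded(query):
        for pattern in ATTACK_PATTERNS:
            if pattern.search(query):
                raise BlockedByRasp(f"blocked suspicious query: {query!r}")
        return execute_query(query)
    return guarded

@rasp_guard
def run_query(query):
    # Stand-in for a real database call.
    return f"ran: {query}"

print(run_query("SELECT * FROM users"))
try:
    run_query("SELECT 1; DROP TABLE users")
except BlockedByRasp as exc:
    print(exc)
```

The key difference from IAST is visible in the sketch: the guard does not just log the finding, it refuses to execute the call.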
We are not going to talk about specific vendors here. The negatives: limited technology coverage, it has to be used together with other tests, and the integration is not straightforward.

Fuzzing; we will only mention fuzzing. Unlike DAST tools and IAST, you can't really integrate fuzzing into the pipeline at all. It is a manual process: a fuzzer is launched and it throws pseudo-random input at the application until something breaks. As we noted, this happens out of band. Do not expect to put this in your Jenkins server or wherever it may be; it is not part of the CI/CD process, and manual review of the results is necessary. As I tried to say at the beginning, automated tests do not mean that manual work goes away. It does not mean that the people who deal with penetration testing, fuzzing, security code review and so on are useless, or that they need to be replaced. But the expectation is that when these people find something, it will be a real problem, and a problem that could not have been caught at the
beginning. Because Fuzzing does things that, as we said, should be caught, but are not. Threat modeling. I have included threat modeling here because this is one of the things that many people who are not interested in classic application security do not know what threat modeling is. Threat modeling is Everything from the interview to the classic Amovit in waterfall, since we said that the changes are small, so usually there are answers to 3-4 questions. What can we do, what can go wrong, what can be done and we do a good job. Full model for big waterfall things, big achievements from the other half of the world, it can take months to be built. And these analyses are a bit and a lot architectural, and the answers are architectural.
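The per-change questions described above can be sketched as a tiny checklist. This is only an illustrative sketch, not a real tool; the names (`Change`, `quick_threat_review`) and the trigger flags are my assumptions:

```python
# Minimal sketch of a rapid, iterative threat-model checklist.
# All names and flags here are illustrative assumptions, not a real tool.

from dataclasses import dataclass

@dataclass
class Change:
    description: str
    touches_data_flow: bool = False   # e.g. new database, new external call
    touches_crypto: bool = False      # e.g. swapped encryption algorithm
    touches_auth: bool = False        # e.g. new login path, new role

def quick_threat_review(change: Change) -> list[str]:
    """Return follow-up questions; an empty list means 'no new risk seen'."""
    questions = []
    if change.touches_data_flow:
        questions.append("What can go wrong with the new data flow?")
    if change.touches_crypto:
        questions.append("What exactly is the encryption algorithm used for?")
    if change.touches_auth:
        questions.append("Who can now do what they could not do before?")
    return questions

# A small change with no security-relevant surface needs no deep review:
print(quick_threat_review(Change("Rename a CSS class")))            # []
# Swapping the encryption algorithm triggers a deeper look:
print(quick_threat_review(Change("Switch AES mode", touches_crypto=True)))
```

The point is the iteration: each small change answers a few cheap questions, and only the changes that trip a flag get the slow, architectural analysis.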
In DevOps we use rapid, iterative threat modeling. I say it again: the changes are small, and every change can be considered on its own. The question is whether this change adds new problems or not. Say we change the database from MySQL to MSSQL, or vice versa. Does that change anything? Yes or no. We change the encryption algorithm. Is that a problem? Should we look deeper? What exactly is the encryption algorithm used for? This is what threat modeling answers. These are just examples, and depending on how big the changes are, these questions take different amounts of time. There are tools that do this for you. They are mostly at the infrastructure level, but they incorporate various things. I have seen them, but I have not used them in real projects. If you use well-known components, if you use AWS, they can tell you that something is new, that you have not modeled it yourself yet. If that works for you, use it. Now, we're not going to talk about vendor products here, and I haven't seen such tools in open source. Or at least I haven't... I'm sorry, can you please go back in the presentation? I just forgot to show something. The last slide. I've listed a few interesting things that you can pick up and read. Among the links I've added a book, and links to demos, which are not mine, I just found them while browsing around, but I think we don't have time; everyone can open them and have a look. NoSweep has made a good example of how to build security pipelines with Jenkins, using SonarQube for SAST and DependencyCheck for Software Composition Analysis. You can download and run them yourself; there is even a Dockerfile version. You can get it running in about half an hour and see for yourself. Right now we have neither the time nor a good server at hand, so we won't show it live. Yes, I recommend this book very much. Jim Bird has other good books. The
next one is also good. Is anything unclear? Do you have any questions? Tell me. S-BOM is Software Bill of Materials, 100%. The idea of Software Composition Analysis is to take the SBOM. The SBOM comes in different forms, it differs across languages, frameworks and technologies, but what does Software Composition Analysis do? It goes through the SBOM and checks it against a database. You use version 3.4 of some library; in 3.4 there is an XSS vulnerability. This lands on the developer, and the developer fixes it; the more modern commercial tools even go and automatically open a pull request. In practice it is an automatic analysis of this SBOM. When you say SBOM it sounds like a simple bill of materials, but there is a catch: hidden dependencies. This library depends on that library, which depends on another library, and so on until you reach the bottom of the tree. These things should also be kept in mind when using such tools. But SBOM is exactly that, and Software Composition Analysis is exactly what it does: it takes the SBOM and checks it against vulnerability databases. Now, all the vendors I have personally worked with, and I have participated in vendor selection, claim that they have their own databases; I don't know how true that is. I'm saying again that these types of technology are very clear, and it is very clear what they are used for. If you have to argue whether this vendor, that vendor, or some open source solution works better, I wish you success; the differences are very small.
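The SBOM walk described above, including the hidden transitive dependencies, can be sketched roughly like this. The package names and the vulnerability database are invented for illustration; real SCA tools resolve the tree from lockfiles and query real advisory feeds:

```python
# Sketch of Software Composition Analysis over an SBOM, including
# transitive ("hidden") dependencies. The vulnerability database and
# package data below are invented for illustration.

# SBOM as a dependency graph: (package, version) -> direct dependencies.
sbom = {
    ("webapp", "1.0"): [("template-lib", "3.4"), ("http-client", "2.1")],
    ("template-lib", "3.4"): [("string-utils", "0.9")],
    ("http-client", "2.1"): [],
    ("string-utils", "0.9"): [],
}

# Toy vulnerability database: (package, version) -> advisory.
vuln_db = {
    ("template-lib", "3.4"): "XSS in template rendering",
}

def scan(root, graph, db):
    """Walk the whole dependency tree and collect known advisories."""
    findings, seen, stack = [], set(), [root]
    while stack:
        pkg = stack.pop()
        if pkg in seen:
            continue
        seen.add(pkg)
        if pkg in db:
            findings.append((pkg, db[pkg]))
        stack.extend(graph.get(pkg, []))
    return findings

for (name, version), advisory in scan(("webapp", "1.0"), sbom, vuln_db):
    print(f"{name} {version}: {advisory}")
# -> template-lib 3.4: XSS in template rendering
```

Note that the vulnerable `template-lib` is a direct dependency here, but the same walk would find an advisory three levels down, which is exactly the hidden-dependency point from the talk.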