
Hi everyone, my name is Krzysztof Chudek. I'd like to tell you how and why to prepare a company for introducing a SIEM tool. I want to keep it fairly high-level; I don't intend to refer to specific tools much, except in passing. Why? Because for four years now I've been implementing SIEM, analyzing incidents, and developing SIEM content, mainly ArcSight in my case, in many companies and government organizations. Can you hear me? Can you speak louder? Hello, hello. Okay, I hope you can hear me now. As I said, I'd like to talk about this a bit high-level. Why? Because across the dozen or so implementations I've been through, practically no company knew what it was doing. I mean, it didn't know anything: what it wanted, what its goal was, what it expected. And it doesn't work that way. Okay, so as background: I worked as a programmer, then I worked in security for four years, mainly with ArcSight. Now I work for Hewlett Packard as an implementer and consultant for SIEMs and ArcSight, and at the same time as an independent consultant. Sorry.
I can't be sure everyone knows the basic concepts, so I'll briefly explain what SIEM is, what log management is, what a SOC is. So let's get to it. In my own words: the point is that some companies try to buy a SIEM, which is a very expensive solution, when in fact they don't need it. That's why I want to run this talk as a list of questions that should be asked before making any decisions and before a company sets out to implement something.

So what is log management? Managing logs, as the name says. It should let us collect events from all the devices and systems we have in one place, where we can view and search them, and where the events are protected from loss and tampering. The main advantage, which should be there but doesn't have to be, is normalization of logs into one format, so that the timestamp is always in the same place, the IP address in the same place. I say it doesn't always have to be like that because Splunk, for example, does it automatically, theoretically, with its parsers, so sometimes things come out differently and land in different fields, and everyone is happy with that. What else can log management give us? We can estimate how many logs we have, what their quality is, whether they contribute anything or are just a mass of lines that mean nothing to us, and what their usefulness is in the context of our goal.

Gartner's definition describes SIEM as a technology that supports threat detection and provides mechanisms for responding to threats based on data collected in real time. In my opinion that's not quite true, because it isn't real time but rather pseudo-real-time: we get the events only once they manage to reach us, and that can take quite a while. So it's not entirely real time, but we do correlate it with historical data. The collected information should come from various sources, so that the correlation, and the overall view we get, is as broad as possible. A key element of any solution of this type is the ability to analyze and correlate events over the collected data. Thanks to this it's possible to conduct an in-depth analysis of each incident, and to create detailed reports.

So in practice, SIEM is about collecting data; encrypting and compressing it so it doesn't take up too much space; timestamping it so we know when an event took place — it's best to have several timestamps: when it happened, when it reached us, when it ended, because it depends on the event. Parsing logs, i.e. standardizing and normalizing them into one format. Adding metadata and prioritization is also something a SIEM does: when an event comes in, the SIEM enriches it with a lot of additional data. It takes into account our network model, which we define, so it adds details about the host the event came from — its category, whether it's a web server, whether a critical application runs on it, whether the server is in the DMZ, and whatever else we come up with in our model. You can also feed reports from vulnerability scans into the SIEM, so as metadata you can note that a server is vulnerable to this and that, and build the appropriate priority for the event on that basis. Indexing in the database, i.e. search speed. Decision processes and correlations, i.e. the rules we have in the SIEM — what we define as a scenario. A SIEM lets us run analysis. Is that the same as log management or Splunk? I personally don't consider Splunk a SIEM. Maybe there will be voices against that, but in my opinion it isn't one. Well, I know that in the latest version there is finally something; in the previous one there wasn't. We also get visualization and reporting. All of this can help us detect things.

I decided to present SOC as four points. What is a SOC? It's tools, like our SIEM, or even just log management, which we work on. Processes and procedures are
an important part of a SOC: how we should behave when something happens. People, because without them — without someone to watch it and act on it — it would all be nothing. And knowledge: about what happened before, how our organization works, what is normal and what is abnormal, what was an incident in the past and has happened again; expert knowledge about the technologies we use, so we can interpret what an event means and what counts as an incident. In my opinion, that's the essence.

Okay, now to the substance. As I said, my goal is to present the questions a company should ask itself. Most of the time, especially with ArcSight, the buyers are government organizations or big corporations, because it costs millions of euros. In their case the decision wasn't always well thought out. You know, sometimes in a corporation you have to spend some budget, fulfill a plan, a manager gets a promotion, but they don't really know what's going on. That's why I think security departments should try to talk to their directors and prepare for the implementation. I've collected the main questions. What do we want — what is the business need, why are we doing this? I'll answer that one later. What do we have and how does it work — inventory. Unfortunately, even banks don't always know what devices they have, what they're used for, where they're located and what's happening with them. What are we afraid of — monitoring scenarios. What do we want to monitor? You can come up with hundreds of scenarios and implement them, but what about the cost? The developers will build them, you'll pay a lot of money, but what does it give us, and does it serve our goal? Okay, the system has been implemented — what next? This is also a big problem, and I'll come back to it in a moment. Something is blinking in our SIEM — incident analysis: what are we going to do with it? And again the same thing: those who have worked with a SIEM know that you need to tune and filter the rules, otherwise you'll be flooded with the same alerts over and over.
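As a hedged sketch of what I mean by tuning: suppressing repeats of the same alert so the first line isn't buried. The key, window and field names here are made-up illustrations; real SIEMs do this with their own aggregation rules.

```python
from datetime import datetime, timedelta

class AlertSuppressor:
    """Drop repeats of the same (rule, host) alert within a time window.

    A toy stand-in for SIEM aggregation; the window length and the
    (rule, host) key are illustrative choices, not anything tool-specific.
    """
    def __init__(self, window_minutes=60):
        self.window = timedelta(minutes=window_minutes)
        self.last_seen = {}

    def should_escalate(self, rule_name, host, ts):
        key = (rule_name, host)
        prev = self.last_seen.get(key)
        self.last_seen[key] = ts
        # Escalate only the first occurrence per window.
        return prev is None or ts - prev > self.window

s = AlertSuppressor(window_minutes=60)
t0 = datetime(2017, 1, 1, 12, 0)
print(s.should_escalate("brute-force", "srv01", t0))                         # True: first occurrence
print(s.should_escalate("brute-force", "srv01", t0 + timedelta(minutes=5)))  # False: repeat, suppressed
print(s.should_escalate("brute-force", "srv02", t0 + timedelta(minutes=5)))  # True: different host
```

The point isn't this particular logic but that without some deduplication like it, the second line never sees a filtered, meaningful stream.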
I can give an example. I was once in a SOC where you had to have the skill of clicking as fast as possible, because there were thousands of alerts, and to meet the KPI you had to click, click, click — and they clicked everything away. It was a big SOC, and they were good at it. Often the first line is staffed in India, but... I'm not a racist, I'm just saying.

Okay, so: the business need. Why do companies do this? For example, because they want a certificate — PCI or some other certification where the requirement is to have a SIEM system and, basically, to monitor events. Or they have regular audits and want to come out of them better. I've also heard that when auditors ask about SIEM and hear, for example, ArcSight, they don't check it at all, they just give a green light. So in firms, a really big share of the motivation is audits and compliance — that's why the SIEM is needed. Another reason is PR: they don't want bad PR, they want to detect something, or they've already been punished for something, or they failed to detect something. EU law is also changing now — I don't want to say too much because I haven't gone into the details — but everyone will have to disclose an incident that happened to them, immediately, or be punished. And to avoid punishment, they have to show they had some systems in place and at least found out after the fact: they didn't manage to prevent it, but they tried. Then there's securing critical business data, in cases where they work on government projects, NATO and other super-secret things, and they want to protect themselves. And — the next point — they want to have control over what's going on with them. Because, just like log management, a SIEM gives a real benefit when the company implements it consciously: it turns out they learn a lot, because now they know what devices they have, what systems, what their network model looks like. Suddenly everything starts to become clearer. They have to think about user management and access control, because rules will be built on all of this, so it has to be arranged sensibly. It's also worth matching the criticality of the SIEM system to the business need. I've seen different things. For some, the SIEM was supposed to be the heart and center of security; it was very important, it had to have high availability, disaster recovery and so on. Others implemented it as an extra: it sits somewhere on the side, the security department works as before, but the SIEM is there — you can look at it and do something.

The next question: are we actually going to follow what's going on and react? Because, as I said, it's not always that obvious. I saw an implementation in a bank, done three years earlier. We were asked to evaluate it, and it turned out that actually nothing... I mean, maybe someone was looking at it, but nobody was doing anything about it. It was home-grown, they put in a lot of effort, but it achieved nothing. It's very common that companies set something up and then just leave it.

Inventory: the devices and systems in the company. Another anecdote: in one bank we were implementing, among other things, Active Directory monitoring; workstations were to be plugged in, plus mobile devices and critical applications. We asked for a list of network devices along with their function — which are border devices, which internal, which IPS, and so on. The company was not able to answer. It provided about 400 log feeds it couldn't name: they have the IPs, but they don't know what each device is, what it's for, or where it sits. It's not entirely their fault — companies keep connecting things, splitting, changing; people leave and arrive. But we should know what we have, what version it runs, how it performs. The implementers will also ask how many events each source produces, which matters because we need to size the system somehow. It's also good to know how to deliver logs from a given device or database, because it can turn out you can't get access, or can't ship them. Or we want to onboard workstations, but there's no central place they send logs to, and the employees keep moving around — so how do we get logs from them? What is the criticality of our systems, devices, applications? Which are the most important, which are critical, and what is our own assessment scale? There should also be a risk analysis, so we know what risk each system is exposed to; then decisions can be made about the scenarios, among other things. Audit settings.
That's something everyone is afraid of, because if it works, better not to touch it, and if we change audit policy across the whole organization, we don't know what will happen. Of course we need to test it. But for monitoring to make sense, we need to feed it logs that actually carry the important data and tell us about what we want to be told about. So: are we ready for a change of audit settings? Do we have any audit-settings policy — all Windows servers get one, Active Directory servers of course get another? Do we have it standardized or not, and do we want to? If we want to onboard applications, it would be useful to know what business processes run in the application: what it's for, what happens in it, who is involved, what kinds of users there are, how the application itself is used, which modules are used — maybe some modules are exposed to the internet — so it's worth knowing all this. And can anyone in the company tell us what our network looks like: what the subnets are, how it's divided, whether there's any logic to it, whether we can draw a map of it all? Because this network model, this map... No, usually they don't know; we're the ones who end up telling them. And such an implementation may simply not work — I mean, what's the point, why spend that money? They say corporations think differently, but...

What do we want to monitor, and what are we afraid of? I think there are two approaches; I've seen both in action, and you can mix them to get a third, hybrid one. The first approach: we have logs, and we look at what can be monitored in them. I saw exactly that — all the logs were shipped, people sat and analyzed: "Okay, we have such-and-such logs, maybe we should monitor user creation," and so on. They came at it from that side; I'll show the downsides on the next slide. The second approach is more sensible: the company first thinks about what it is actually afraid of, what the biggest risk to the business is — hacker attacks, fraud, money, whatever. Then, of course, we don't have to do it ourselves: we know what we're afraid of, so we hand it to an implementation partner or someone to develop scenarios for the things we're afraid of. In my opinion that's more sensible. Another cool thing is threat modeling. Everyone probably knows risk analysis, but threat modeling means modeling the threats to a specific application. There are nice programs for this, in which you can draw diagrams, find weak points and build a model that really opens your eyes. A great source of things to monitor are penetration tests, if the company runs them — and, in fact, vulnerability scans too.
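To make the pentest-driven approach concrete, here's a hedged sketch of a trivial detection you might derive from watching test traffic: flagging requests whose User-Agent contains a known attack-tool string. The log format and the signature list are illustrative assumptions, not any tool's built-in rules.

```python
import re

# Illustrative tool fingerprints; a real list would come from your own
# pentest and vulnerability-scan traffic, not this hardcoded sample.
TOOL_SIGNATURES = re.compile(r"sqlmap|nikto|nessus|metasploit", re.IGNORECASE)

def flag_scanner_requests(access_log_lines):
    """Yield lines whose User-Agent field matches a known tool string.

    Assumes combined-log-format-style lines where the User-Agent is the
    last quoted field; adjust the parsing to your own format.
    """
    for line in access_log_lines:
        quoted = re.findall(r'"([^"]*)"', line)
        if quoted and TOOL_SIGNATURES.search(quoted[-1]):
            yield line

sample = [
    '10.0.0.5 - - [01/Jan/2017:12:00:00] "GET /login HTTP/1.1" 200 512 "-" "Mozilla/5.0"',
    '10.0.0.9 - - [01/Jan/2017:12:01:00] "GET /item?id=1%27 HTTP/1.1" 500 0 "-" "sqlmap/1.0"',
]
hits = list(flag_scanner_requests(sample))
print(len(hits))  # 1 — only the sqlmap request is flagged
```

Run the pentest or scan, confirm the rule fires on that traffic, and you've validated both the rule and the detection process in one go.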
In general, if there are internal testers, or penetration tests being ordered anyway, the ideal is cooperation. Testers test our internal systems, of course without informing the analysts, and the analysts try to detect it. The analysts stay aware that there may be internal attacks, and they try to catch them. It works great for checking the process, for checking whether our rules work — because if the testers did something and we didn't detect it, they'll tell us what they did. We can then analyze the logs from that system and try to find a pattern that lets us detect this type of attack. In my opinion this is a perfect way to develop monitoring rules. Not everyone has their own teams, but it's a good approach. You can use vulnerability scans, which are automatic, the same way: the scanner detected something, so we can see in the logs what it looked like. Also, from my experience as an analyst: most of the attacks on banks that actually got anywhere were very unsophisticated. Nobody bothered to change the Metasploit user agent to something sensible; they left "Metasploit" written there. Or the hostname was "Kali". Or "sqlmap" in the user agent. Trivial things, but really, attackers don't always change them. So if we had a penetration test or a scan, we could easily notice, say, Nessus in the logs, and learn from it.

I took a picture from ArcSight, because I wanted to say one more thing about these scenarios: just as there were two approaches there, there are two here. Most SIEM deployments look like this: agents or connectors pull logs from the source devices and send them to log management — in ArcSight it's called Logger. The Logger is faster than the correlation engine because it doesn't do anything special, it just collects. Then there's the correlation engine with the main console — that's ESM. You can either send everything to the ESM, which then has to do a lot of work and whose performance can be mediocre, so we'd need many of these ESMs, it costs money, and so on. Or — maybe more sensibly, and this is what I try to promote — we do want all the logs, so our connectors collect everything and send everything to the Loggers, which cope better, can hold more data and don't cost as much; but onward we forward only the security-relevant logs. And believe me, in the case of an application that can be 1% of the logs, because the rest are logs the programmers made for themselves, and they give the analysts nothing: they say a function was invoked, but not by whom, with what parameters, and so on. It's better to think about whether we want to collect everything and, if so, whether we really have to forward everything. Maybe we forward just a part, or choose scenarios and send only what's needed for them; and if an analyst wonders whether something is an incident, he logs in to our log management. I've also seen companies that used ArcSight but had, as their log management layer, various big-data systems like Hadoop, and analysts could log in there and search everything. So the SIEM was not used for the full analysis.

What happens after implementation? Who will use the system? Again, an example from another bank: who will use it? It was a very advanced implementation, and nobody knew the answer. There were ideas that maybe the anti-fraud team would look at something, maybe the security department would — but nobody had an answer. And this ties back to the goal, because the teams that will take part should get to use it, and it's good for them to put in their input: what they expect, what they want. After implementation it may be a bit too late. Who will keep the system alive? As with every application, there must be administrators who check whether it works, what the performance is, what resources it consumes, and so on. Will we train personnel, hire new people? Here's another interesting case: a company decided on a particular SIEM, implemented it, got quite far — and then it turned out there was nobody available in Poland who knew that technology. Of course, if they had offered a lot of money someone would have been found, but they had limited resources and gave up. They withdrew from that SIEM and switched to another one that does have specialists on our market. So it's worth paying attention to this. We can outsource it, but I'll talk about that later. It's also good if the people who are supposed to work with the system sit in on the implementation and watch how it works — at least I'm all for it — because otherwise they'll be left alone afterwards and won't be able to handle it.

Okay, now we have to analyze incidents. What should we have, and how do we get there? It would be useful to write down procedures and processes, so that even a first-line analyst can handle things — and if the first line is in India, it usually works from runbooks anyway: they call, write an email and so on, doing only the activities from the card. So it's good to plan it. The first line will be hit with a large volume; the second line should receive a filtered, more important subset. I'll say more about that. We have to describe it somehow. And if we detect a real incident, what will we do — start running and screaming? It's good to think about it in advance. If it's at night, who should they call, who should they verify information with — so that, say, some network people are available on call. Whatever it is, we have to plan it, because when it happens it will be too late. Analysts: it's great when they have experience, and I don't mean only experience in incident analysis, but experience with the technology itself. Besides, you know, security people very often come from the administrator side, and in incident analysis the best analysts are the ones who know the system from the ground up, because they know how to interpret all the events and know what is normal and what is not. It also takes time in a given company, because each company has its own character, its own work model: for some, logging in as root would be an incident; for others it's the norm — services run as root, and so on. So it's worth settling in. We should also work out the reaction time, the possible ways of reacting, and the decision-making authority of analysts: what can they do on their own? Sometimes they have to escalate, and sometimes they can decide themselves to shut a server down or isolate it — which also depends on how experienced they are. Will they work shifts; will anyone be able to watch at night? Will we change things in the company to eliminate the causes of incidents, or will we keep getting the same incidents in a circle? And actually: do we want to build a SOC, or do we just want to look at it occasionally?

Here's a model I work with, which I think works well, though of course at scale. The first line is the less experienced people, who filter incidents, closing those that are meaningless or repeated. Whatever looks more like a real incident goes to the second line, where more experienced people can analyze it better. And the third line is something that's fashionable now: threat hunting.
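A hedged sketch of the simplest kind of hunting query: flag a login when the (user, source address) pair hasn't been seen before in the lookback window. The table and column names are made up for illustration — any SQL store over your login events would do.

```python
import sqlite3
from datetime import datetime, timedelta

# Toy login-event store; the schema is illustrative only.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE logins (user TEXT, src_ip TEXT, ts TEXT)")
now = datetime(2017, 6, 1)
db.executemany("INSERT INTO logins VALUES (?, ?, ?)", [
    ("admin", "10.0.0.5", (now - timedelta(days=30)).isoformat()),
    ("admin", "10.0.0.5", (now - timedelta(days=1)).isoformat()),
    ("admin", "203.0.113.7", now.isoformat()),  # pair never seen before
])

# "First seen" anomaly: (user, src_ip) pairs whose earliest occurrence
# in the 6-month window is newer than one day ago, i.e. brand new.
cutoff = (now - timedelta(days=180)).isoformat()
recent = (now - timedelta(days=1)).isoformat()
rows = db.execute("""
    SELECT user, src_ip, MIN(ts) AS first_seen
    FROM logins
    WHERE ts >= ?
    GROUP BY user, src_ip
    HAVING MIN(ts) > ?
""", (cutoff, recent)).fetchall()
print(rows)  # [('admin', '203.0.113.7', '2017-06-01T00:00:00')]
```

That's all an anomaly is here: something that happened differently than usual, expressed as a query a hunter can run without waiting for a rule to fire.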
These are people not driven by the rules that generate alerts; their task is to look through everything, and they can build very cool things like anomaly detection based on simple SQL queries. What is an anomaly? An anomaly means that something happened differently than usual, and you can describe that with a SQL query: logging in to this server from a different address — I mean from an address that hasn't appeared in the last six months but appeared now. There's our anomaly; we can describe such things with queries. And sometimes there is something like a CIRT, a Computer Incident Response Team, who are supposed to make the decisions if the earlier lines can't: they react with "let's isolate this server" and so on, or "let's contact the relevant authorities". It's also great when there's a security engineering function — administrators who do hardening and have deep knowledge of the systems or network devices — so that an analyst can call and ask whether something is normal, whether it's supposed to be like that, or ask them to investigate more precisely on the system itself, because analysts don't have to, and maybe shouldn't, have access to it. I also think it's good to report to the board, so they can look at some nice charts and graphs and see what's going on: how quickly we closed incidents, how many we handled.

The development of our tooling — it's not very visible, but obviously security is a continuous process, everyone knows that. To develop the SIEM, it's very useful to have penetration tests against our infrastructure and applications; we'll have continuous monitoring that detects incidents, and we'll be able to adjust things or write a better rule. Business involvement is very important: find a good way to get them to tell us what we should be afraid of. In the case of bank transactions, say, some scenario like: I know that in critical banking nobody should really be using some function called X, so it should be monitored. Normally, people in security don't know this. We don't know the banking apps: what functions they use, what users can and can't do, and so on. So it's good if the business tells us. And vulnerability scans, as I mentioned before.

I've touched on this already, but it's a continuous process: filtering out false positives so they aren't there all the time — not to improve people's quick-clicking skills, but because things that keep recurring only dull the analysts' alertness. If, say, three analysts get a thousand alerts a day, they won't find the one that is a real incident; they'll be busy clicking, and they'll click that one away too. So it's good to filter. We add new rules as an answer to our threats — not to what's written on the internet, but to what we actually see, what fits our infrastructure and our work model. You can often buy a set of rules from an implementation company, and then you have them and it's great, but they don't answer what's happening in your environment. For example, a scenario like: a user is created, logs in, and is deleted, all within 24 hours. It's a nice scenario, but it depends on how the company works. If it's a bank with managed identity and something like that happens in production, fine. But some people create test accounts in production, test, delete them, and so on — then this scenario doesn't make much sense.

We have to... this isn't my laptop, so the slide isn't here; that was the previous one. In a SIEM, just as in log management, you have to keep working on the parsers, because logs unfortunately change: with a new software version the log format often changes, or new things appear, and we have to update our parsing so that the information that interests us still lands in the same fields. That's a lot of work. And learning from mistakes. These are the questions I've been asking. I think it would be good to discuss them with management whenever they get the idea to implement some SIEM. Only a few percent of companies were able to answer them. It even happened that a company couldn't order the hardware. It was a big corporation; everything was fine, the consultants' time ran out, and it turned out to be something as silly as no space in the server room — and there won't be any for four months, so the equipment can't be racked. That project dragged on because there was no space in the server room. So it's good to take care of all these things. Consultants, or whatever company you hire, will of course tell you some of it. We make some initial product selection based on how the vendors sell it, but it's also worth asking about licensing, because they come up with all sorts of models. Sometimes licensing is based
on the number of EPS — events per second — flowing into the system. Sometimes on how many gigabytes we will store. Sometimes they charge per component instance, sometimes not. Sometimes the product is free, but to make it actually work you need to buy a lot of extras. Architecture: usually someone will design it for us, but it's good to know ourselves whether we want HA, backups, disaster recovery and so on, and in what model. The network model, categorization and prioritization, as I mentioned, are good to prepare; consultants usually help a bit, but they don't know how your network works. You can also consider whether you want your own SOC, or whether to hire an external company. Nowadays, you know, a lot of the leading companies have their own SOCs scattered around the world that can monitor you, but they very often offer a standard set of rules, and anything custom for you costs a lot of money. And whether to have your own SIEM, or outsource it completely and ship logs to the cloud — that can be done too.

I also wanted to mention logs. What is the problem with logs? That each supplier has its own idea and its own format, and that logs are treated not as functionality but as a tool for programmers. I've talked about this at Security Case Study — or Security Use Case, I don't remember what the conference was called — all about these logs. Here I just want to say that this is a very big problem, because I think — and this is my opinion — that the log format, especially in applications, should be written into the contract, and it should not be allowed to change. A change here is like a change in functionality; it is not something that belongs to the delivery company, which is a position very often taken, especially by large Polish software houses. When you ask for a log change, it turns out that logs are "not functionality" — "they belong to us, we debug our own application with them, so we won't adjust them for you." So for applications it's worth making sure it's in the contract: logs are functionality, the format should not change without consultation, and it should contain what we need.

I've onboarded a lot of applications in my life, and it is very hard to find good-quality logs that carry the basic information an analyst needs. So: a timestamp — when did it happen; where did it happen; information about the source — user, IP address, host name; information about the target, which can be the application itself or something further — again the user (the login: from which user to which user), IP address, host name; the type of action and its result. It's also nice to have a session identifier, so we can tie the session to the user, and the parameters with which something was called.

To onboard such a data source, the steps look like this. Perform a quantitative analysis: we need to know what types of events we have, how many of them there are, what their distribution looks like, how many logins we have, and so on. That's good to know because it lets us determine what the application actually logs — find the unique event types. Then run tests, because with a banking application it always turns out that nobody knows which events correspond to making a transfer or opening a deposit, or whatever. So it's good to run tests: for example, a tester clicks through the application, we write down the times when each action was performed, and then we look in the logs for which events appeared — or whether any appeared at all. Then a quality analysis: we evaluate whether the logs meet the requirements I described above.

To close the application topic, the worst case I've experienced: an application producing 500 GB of logs per day, 10,000 EPS. After analysis it turned out that only about 2 GB per day, maybe 40 EPS, were of any use. It was a critical online-banking application, so that was bad. What can an analyst do with such logs, when the basic information isn't there? It really is worth analyzing and dealing with the logs from your applications — whether they make sense, and whether they can be changed. Okay, that's it. Questions?

First, on targeted attacks against SOCs: there was an action some time ago when all the banks in Poland were warned they would be attacked. A date was given; it was sent out by some government agency with a magic acronym. So when something like that is announced, everyone has to stand by and wait for the attack. Nobody attacked us. So, yeah. Generally, SOCs do get tips from various agencies, but I don't think that's the job of the SOC itself — it's more the bank's concern, not the SOC's. I mean, the pentester group is the one responsible for those processes, not the SOC. It's worth noting, though, that most attacks — especially the successful ones — involve internal employees: either they're disgruntled, or they're the cause of the problem, i.e. they clicked the wrong link. The second question... this is also my opinion, based on the logs I've seen: most software companies don't deal with security in their products, and most of these issues are completely irrelevant to them. Or, by the way, certificates expire — that's a popular problem.
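The quantitative and quality analysis described in the talk can be sketched roughly like this: count event types, and check each parsed event for the basic fields an analyst needs. The field names are my own illustrative choice, not any standard schema.

```python
from collections import Counter

# Basic fields an analyst needs, per the talk; names are illustrative.
REQUIRED_FIELDS = ("timestamp", "src_user", "src_ip", "action", "result")

def assess_log_quality(events):
    """Return (event-type counts, number of events missing required fields)."""
    type_counts = Counter(e.get("action", "<unknown>") for e in events)
    incomplete = sum(
        1 for e in events
        if any(not e.get(f) for f in REQUIRED_FIELDS)
    )
    return type_counts, incomplete

events = [
    {"timestamp": "2017-06-01T12:00:00", "src_user": "jdoe",
     "src_ip": "10.0.0.5", "action": "login", "result": "success"},
    {"timestamp": "2017-06-01T12:01:00", "action": "transfer"},  # no user/ip/result
]
counts, incomplete = assess_log_quality(events)
print(incomplete)  # 1 — the transfer event is unusable for an analyst
```

Run over a day of real logs, the ratio of complete to incomplete events is exactly the "2 GB useful out of 500 GB" number from the anecdote.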