
Everything You Always Wanted to Know About Linux Logging

BSides Tampa · 2021 · 46:15 · Published 2021-04
Category: Technical
Team: Blue
Style: Talk
About this talk
Kevin Kaminski: Everything You Always Wanted to Know About Linux Logging (Like Why It's So Bad and What To Do About It)

For blue teams, visibility is everything, and logging operating system activity is essential to a good defense. Linux and *nix operating systems usually run many important servers in an organization's environment; however, they are often low on the logging priority list. Even when they are logging, the logs are unstructured, lack details, and sometimes miss activity altogether. Analysts performing investigations often come up empty-handed when analyzing Linux servers and may be left wondering how such a critical, enterprise-level technology can lack decent logging while even Windows XP Home Edition's logging is so much more robust.

WEB: https://www.bsidestampa.net
DISCORD: https://discord.gg/FhdkSNa24P
TWITTER: https://twitter.com/bsidestampa
MERCH: https://bsides-tampa.launchcart.store/

About BSides Tampa: BSides Tampa is an information technology security conference hosted by the Tampa Bay Chapter of (ISC)², a registered 501(c)(3) non-profit organization. The purpose of BSides Tampa is to provide an open platform for information security industry professionals to collaborate, exchange ideas, and develop long-standing relationships with others in the community. The BSides Tampa IT Security Conference took place virtually on March 27th, 2021.
Transcript

Hey, we have had some fantastic talks today, and we are wrapping up with another great talk by Kevin Kaminski. Kevin, nice to see you, how are you? Kevin Kaminski is a threat intel engineer at ReliaQuest. He has worked in the security operations space for seven years and has worn many different hats throughout the SOC. He has experience in analysis on many different logging platforms and specializes in threat detection research and content development. Kevin works with companies to optimize their logging infrastructure and platforms to improve their security posture in response to threats, and today Kevin is here with his talk, Everything You Always Wanted to Know About Linux Logging (Like Why It's So Bad and What To Do

About It). So with that being said, Kevin, thank you for joining us, and I will let you take it away. All right, thanks, appreciate it. Let me share out here... hmm, having problems sharing now. Is the share button working through Chrome? It looks like Chris from v7 says to share through Chrome if you can, so I'll try to hop on there; it was working when I tested last time, but...

All right, I guess I'll be back on Chrome then. Okay, sounds good, we'll look forward to seeing you. Sorry.

So for those of us that might have just joined recently, we have Kevin Kaminski here. Kevin, I don't know if you can share now... looks like you can, so I'm going to go ahead and let you do your presentation. All right, thanks everyone, sorry for the rough start there. So welcome to Everything You Wanted to Know About Linux Logs; my name is Kevin. Thanks to BSides and everyone for putting on this event; I'm really happy I got the opportunity to do this this year. Let's get started with it. So why did I come up with this talk

about Linux logging? It really started with my experiences as an analyst and doing content development, building rules out for different companies and having to work with Linux logs. Linux hosts are usually running critical applications in customer environments, but for some reason, when building out content or doing analysis, they never really seemed to get a lot of good visibility into the endpoints. It got to the point where whenever you see a Linux host in your analysis, you go in with some very low expectations of actually finding anything. So here are some examples of different cases that I've come

across, just so you know what I mean. You're in the SOC, a security alert comes in, you search on the IP, and you see some Linux logs come up. You see a log like this and think, okay, a group was removed. But who did it? Is there any other information about this group? It's just a name. Where is the rest of the context for this log? Windows gives me all this information when I find those logs, so why can't I find this information here? Or you have something like this, where

somebody modified the cron jobs, and it just says somebody replaced the line. What line is it? I look at Windows scheduled tasks and get almost too much information, so why is there not any information in these logs? Or it might just be that there's straight up no activity you're looking for: you get the IP address of the box, you search on it, and you just see a wall of cron jobs running and really nothing going on. So you're left asking, why is there no activity? I know somebody's on this box, so why can't I

see anything? I actually did a talk last year on logging failures and went over a Linux scenario, and unfortunately, the default Linux logging is not really set up to catch a lot of this really detailed information. So during this talk I'm going to take you through the journey I went through to figure out what is going on with Linux logging, how it can be improved, and some of the questions that I had. I'd also like to say that this is coming from someone who's not a Linux admin; I'm not certified, so if anything's wrong or too simplified, I'm sure

people will let me know. Some of the questions I had going into this were: what does syslog even mean? How do you actually create syslogs; how do programs actually do that? Why is the format, honestly, so bad? What does "sending syslog" mean? And if it's not that great, what are the alternatives we can look to? So in the beginning we had syslog, but what is it? Is it a server? You might hear the term "syslog server", the place you send all the logs to; is it that? Is it a

protocol? Because you've got port 514 and you're "sending syslog", like a network protocol. Is it a format? Because people talk about things being sent "in syslog". Is it a service, like a daemon running on the system? The answer to all of that is yes. If you actually go into the syslog RFC, it defines these different layers of what syslog is. It can be the content, referring to what's actually in the message. It can be the application, so you have the programs generating syslog, or the daemons actually storing and routing it. Or you have the transport, actually

sending it over the network, and applications receiving it and processing it from there. So it's actually all of it, which kind of just makes it more confusing. In terms of its history, it was created back in 1981, originally for the BSD distro, and was widely adopted into all kinds of technologies, not just Unix systems but also networking equipment and lots of other things. The first daemon was created in 1986, and then it was only standardized in 2001, so it took about 20 years to actually standardize it. And in that RFC, I think it's pretty funny, in the intro they get really philosophical about how

syslog is the natural evolution of communication since the beginning of time, or something like that; it's pretty funny to look into. Then finally, in 2009, there's a revised standard, which we'll dive into in a sec too. So that's what it is. How does syslog actually get generated from a program? It all starts with the function calls that begin the logging. You have openlog, which actually isn't required, but you pass it the name of the program, some options that I'm not really going to go into, and then the facility. This might look a little bit

familiar; it's really a tag for the log that labels what type of application is sending it, which is going to be more important a little bit later. So openlog prepares the connection and customizes it a bit, and then you have the actual syslog function, which is going to send the message to syslog. This one is required; if you don't use openlog, it just fills in default values. The first argument is the priority, and this should look pretty familiar: these are the default levels that you see all over the place for logging, like emergency, info, debug. So you send it what the

priority of the log is, and then you just put whatever you want in the message. It's an open string; you can put some formatted arguments on the end, but really it's a wide-open argument there. Looking at an actual example program, this is the adduser program, and you can see it's calling syslog there, setting it as info, and that's the actual message that you normally see in the Linux logs, "add user to group", just with those username and group variables filled in. So I was asking myself: who determines what goes into that string?

And really, it's just whoever wrote it and whatever they felt like putting in the log. That's kind of one of the issues with how Linux is decentralized across all these different packages and services: there's no real standard for this logging, and people can just put in whatever they want; even though there are RFCs, there's no real enforcement. Getting into the actual format, you have what's called the BSD or old format, which is the original one. It was documented in RFC 3164, and it wasn't standardized, it was documented: they were kind of just observing syslog in the wild, what people were

doing with it, and then documented it, like they were some safari researchers or something. So it's not actually a standard; it's just what people most commonly were using. This is what you'd normally see, and it starts with the priority value, which is the facility value from earlier times eight, plus the severity value, using their IDs; so in case you ever wanted to know what that was, you're welcome. Then you've got the timestamp, which doesn't have a year, which I never noticed before, but that seems like a pretty obvious shortcoming. And then you have the tag, which is usually the process name, from the function calls of the program that's actually logging it.

Sometimes it has a process ID in the brackets, sometimes it doesn't, and then you just have this unstructured string where you can put anything you want. But there was a new version that got standardized, so what's different about this one? There was an actual standard put out, not just the observations, and this is what it looks like. If you've ever seen it before, you can usually recognize it by those three dashes in the middle; that's the new format. It honestly doesn't change too much. You have the version, which is basically always 1, at the beginning; the timestamp is a full-blown timestamp this

time; the process name is the same; it's got a separate field for the process ID; you also have some extra identifiers, like for TCP or firewall connection stuff; and then you just have the same unstructured message. So it doesn't really change that much. But if you look closely, boom, you've got structured data in here. There's a field for actual structured data, and there's an example where you can put key-value pairs in it: look, you've got the event ID field and the event source, the stuff you're normally used to seeing in other logs, like Windows. This is so great!
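A made-up RFC 5424 message with the structured-data field actually used might look like this (the SD-ID `exampleSDID@32473` is the RFC's own example; the host, PID, and values are invented):

```
<38>1 2021-03-27T14:03:01.123Z host1 sshd 4321 ID47 [exampleSDID@32473 eventSource="sshd" eventID="1011"] Accepted password for alice
```

Version `1`, a full timestamp, separate hostname/app/PID/msgid fields, then the bracketed key-value pairs, then the free-text message.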

So you're probably asking: why have you never seen any structured data before? And then you realize it's because nobody ever bothered to use it. The people who make the programs just kept doing the same thing and didn't deal with it. It could have been really cool, but nobody decided to adopt it, so... it's really great. Putting it all together, how does syslog actually get sent and stored in different places? It all starts here with the program. The program generates a log message, then calls the syslog function and writes it out to /dev/log. Now /dev/log is not a file,

it's a socket, so if you try to cat it you're not going to see anything; you actually need to connect to it to pull the messages from it. So you've got all these programs dumping messages into /dev/log, and then you have to read it. That's what the syslog daemons are for: you've got syslogd, rsyslog, and syslog-ng. syslogd was the first one, but it's pretty much been replaced by the other two, which have more advanced features, and you might be more familiar with those. So those are sitting there, and they're going to read the log message out of the socket, and then they have to

determine what to do with it. They have the syslog configuration file, which, if we open it up, has the different configurations for what to do with each message. This is where the facility and the priority come back into play. The first part of the line is something like auth.warning, meaning the auth facility at the warning priority level, and then it tells you what to do with it: here, we're going to write it out to /var/log/auth.log. You can go through and filter on those different messages, use wildcards and multiple selectors, and determine where each one is going to go.
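A hedged sketch of those selector lines, in classic syslog.conf style (the paths are illustrative):

```conf
# facility.priority    action
auth.warning           /var/log/auth.log   # auth facility, warning and above
cron.*                 /var/log/cron.log   # everything cron emits
*.emerg                *                   # broadcast emergencies to all users
```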

And finally, you can also send logs to remote servers. So you can write the messages out to any locations you want, or send them to any servers, and some daemons have other features where you can do different things with them or send them to different destinations. So that's really the full pipeline of how syslog gets to where it goes, starting with the program. This is considered the traditional method of collecting syslog: the /dev/log socket, with syslog daemons pulling messages out and sending them. But there are also some other options out there. You have systemd, which some people might be familiar with; that's kind of the suite

of management services that you can use to, say, restart services or turn them off and on. One of the things in the suite is systemd-journald, which is basically going to read the logs the same way from /dev/log, but for the services it's managing, it can also log the standard out and standard error from each service, so you get some more context into what's actually going on inside the service. Now, all of that is not written out to /var/log; it's stored in this file called the journal, which is a binary file that you can read with the journalctl utility.

And this is what one of the logs looks like. It looks really nice: there are all these fields in here, everything's a key-value pair, everything matches up. This is for an SSH login, so you can see the line there, but all this additional context is given. It's really nice, so why doesn't everybody just use this thing? Moving on to the pros for it: like I said, you have the more organized file, it's easier to search, and it logs more things. But one of the problems is you can only read the logs using that journalctl command. You can't cat it out; it's a

binary file, so that's the only way to actually get anything out of there. You also can't filter the logs before they get stored; it doesn't have rules like the syslog daemons, where the stuff goes into the socket and the daemon only pulls out what it needs. Everything goes into the journal, so it can get pretty big and actually take up a lot of space if you just leave it running. It also can't send anything by itself, so you need a separate service called journal-upload to send it, or the more modern syslog daemons can actually read from the journal themselves and send it; but then you're installing

two syslog programs, which can be kind of redundant. And then the SIEMs: because it's a whole different log format and you have to connect to the journal to get it, SIEMs sometimes require completely different connectors to handle it, which is a whole different process from the regular, easy "just shoot syslog at a server". So it's a little step forward on the logging level, but it has some more drawbacks. Then you also have other programs that fall under "supervisors": you have daemontools, there's something called s6; there are a lot of these different Linux packages, and what they do is supervise

every service, along with the logging, by having a separate process for each service that just collects logs from that service. So it attaches to all these different services running and collects the standard out and standard error from those services, and you can also still get the /dev/log syslog, so you're not missing that. It's kind of nice because you can customize the logging process for each service to pull different things, so it's a little more flexible per service; like if for sshd you want to get more info than from other ones. It's also more redundant: with syslog daemons, if that process goes

down, syslog logging stops, but here each service has its own logger, so it's a lot more resilient if anything goes down, and you can also get the additional logging. And if you go on the s6 website, the guy has his own pros and cons for why you should pick it. He basically goes on this rant where he's just like, syslog sucks, and I'm not being judgmental, just stating the obvious: use s6, don't even use syslog, it does nothing special, don't even use it. And even from reading the other supervisor websites, they all have a similar sentiment; there's a lot of hate for syslog, and they kind

of go on these little rants, which I found funny. Cons for this, though: because you have so many different processes running, standardization might be kind of hard. If you're customizing your logging for each individual service, it might get to be a little too much, and you also have to manage all those different logging processes for all the different services. And again, it doesn't have any sending capability, so you're going to have to use another syslog daemon or something else to get the logs out of it. So overall, for syslog: it's been used for a long time, but it's got some flaws. The big one here is that the

scope is really limited to the services' logs. You're not getting people running commands, you're not getting files being modified, you're not getting processes starting; it's really just whatever gets logged out of those different service programs, like the authentication ones or cron, and whatever they decide to log. The activity is pretty inconsistent, because again, people are writing these different tools and services and there's no real standard on what should be logged, so a lot of times activity is going to be missed. Again, it's up to the developers, and they might leave out critical information that you would find really useful. It's really unstructured; sometimes

it's just really long sentences in the syslog, so it makes it really hard to actually parse. And then the documentation is also not that great for what actually gets generated. Windows has really good documentation: here are all the event IDs, here's the log format, here's everything. For the different syslog applications there's really nothing there; I mean, I was going into the code to find what the messages were. So overall, it's not a really good state that syslog is in by default, and from what we see, most companies out there are just using

the default syslog, so there are lots of gaps out there. But what can actually be done about that? The answer is enhanced logging, where we go to some additional tools: using auditd, using osquery, or, ideally, doing some EDR detection with tools like Carbon Black or CrowdStrike and getting some additional visibility that way. So I'm going to go into each one and see what it offers, the pros and cons of each, and what everybody ideally should be using. So auditd is part of the Linux auditing system that's actually built into the Linux kernel, and for a lot of distros auditd is

already installed, so that makes it pretty easy right off the bat. It can log process activity, file activity, and some user commands, so it's pretty good. It does everything based on syscalls to the kernel; that's how it's going to be logging all these different activities. Now, syscalls are not the same thing as shell commands: if you type in ls or something like that, it's probably not going to show up as an actual syscall that needs to go to the kernel. These are some examples of commands that do have associated syscalls: you can see kill, making directories, getting the time; things like that will actually be called

back to the kernel, where they can be logged. The big one here is execve, which is what is used to start a new process, so that's really going to be the core of monitoring for new processes with auditd. So overall, how does auditd work? Again, it works by intercepting syscalls as they go to the kernel. You have an application that wants to start a new process, and the call is going to go through these three filters, the user, task, and exit filters, and they basically have criteria where, if the syscall matches a rule, then it's going to get logged

back to auditd. So it matches against the rule list in the filter, and then it sends the details of that syscall back to auditd. So how do you define the rules that go into these filters to intercept the syscalls? It's all defined in the audit.rules file inside auditd. One example: if you want to monitor a file, you just drop a line like this into the file and you're good to go. You start with -w to put a watch on the file; we're looking at /etc/passwd, which is the file path; then these are the different actions that you can monitor for, like read, write,

execute, or just changing attributes of the file, and then you can give it this helpful little tag, which will label the log with what you were trying to monitor for. This is an example of what one of those logs would look like out of auditd. You can see that even though we're monitoring files, it's still type SYSCALL at the beginning, because it's still intercepting syscalls, but you get some useful information out of it: you get the user IDs, you get what program was trying to access the file. You'll notice that the key is what the tag was, and nowhere else in the log does it say what file was being modified.
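As a sketch, the watch rules being described might be dropped into audit.rules like this (the key names are made up):

```conf
# -w = watch this path, -p = permissions to log, -k = tag for the log
-w /etc/passwd -p wa -k passwd-watch    # writes and attribute changes
-w /etc/shadow -p rwa -k shadow-access  # also log reads
```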

That's why you kind of want to put it in the tag, so you can at least know what was being modified there. So overall, that's not too bad for file monitoring. For process monitoring, this is the line you would put in the rules file. It starts with the action, which is "always" or "never"; you just always want to log things, which makes sense. Then the next part is the filter that you're going to actually put the rule in, most times the exit filter, for when the syscall is exiting. Then you put in the name of the syscall that you want to intercept and get the

details from. So here, if we want to monitor for process creation, we're going to look at the execve syscall; but if you want to make a rule looking for calls when people check the time or make directories, you can put that name there. Then it also has an optional file path, where we can look for specific files that we want to monitor when they're executing; so here, every time sshd executes, we're going to get a log for that. So you might be wondering about this rule: it's just looking at sshd, but what if I just want to monitor all the processes that are running,

like you can in other tools, or with Windows and 4688 or something; I just want to see when anything runs, so let me just get rid of this path filter. The problem with that is you are going to get destroyed with an insane amount of logs from all kinds of processes. There are so many things running that it's infeasible to just blanket-throw that out there; your system and network will just be consumed with these logs, so that's not the best idea. What you want to do is what was on the previous slide: filter it down to the specific services and files you want to monitor for execution.
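A hedged sketch of such a targeted rule (the path and key name are illustrative); the second line is the equivalent file-watch shorthand using the execute permission:

```conf
# syscall form: log every execve of a specific binary, on the exit filter
-a always,exit -F arch=b64 -S execve -F path=/usr/sbin/sshd -k sshd-exec
# equivalent file watch on the execute permission
-w /usr/sbin/sshd -p x -k sshd-exec
```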

Unfortunately, there are so many services and so many things out there that you can't make a rule for every single one, so you're going to have to pick some key ones that you want to monitor. And since you're essentially just looking for a file executing, you can actually rewrite this as a file watch to make it a bit easier, using that x permission to look for when the file executes. Looking at the logs, though, even with this, the logs aren't really the best either. You'll notice it's split into two: you have one event with the syscall, which has the user and the process,

and then split off from that are the actual parameters and arguments that were put into that syscall. So it makes it difficult to do investigations where you just want to look at: this process fired, what were the command-line arguments that were put in? It's separate; it's linked with an ID, but it can be hard to work with for investigations, and especially for building rules that have to correlate two events together. And then, further, if you look at the EXECVE event, the command line is split into different arguments as separate fields, so you're not actually getting the full command line

that somebody put into a program and ran; it's breaking it apart. So it's actually even harder to build content around that if you're trying to look for specific patterns or regexes for specific commands being run. And then another thing is that it's also only going to give you the arguments for the syscall. So let's say a user, or an attacker, is on a Linux box and they put in a huge chain of piped commands, maybe making some named pipes or something like that. It's only going to log when a syscall is actually called from that, and it's only going to log the arguments for that individual syscall. So if you

pipe five commands together and they all use syscalls, you're going to get five separate logs, with a little chunk of the command line in each one. So it's kind of hard to see what people are really doing on the box at the command line. Reviewing auditd: it's good because you can get visibility around processes and file activity that you can't get with syslog. It's pretty easy to configure, you're just dropping those rules in there. It's free, it's usually installed by default, you get real-time monitoring from directly intercepting those syscalls, and the file format is pretty structured, so it's easy to parse out

and build rules around. The problems: if you have a ton of rules, since every syscall actually has to pass through those filters, it can slow down the performance of the system; put too many rules in there and it can degrade performance. Again, it only logs when a syscall is used, not just when any command is run, but you can't log all the syscalls, it's way too noisy; you have to look at specific files, and if the files actually change, like the file path or anything else about them, then the rule's not going to be useful anymore.

The command lines are separated out from the syscall event, the arguments are broken out, and it doesn't actually show the entire shell command. So overall, it gives us some more visibility, but there are still some issues when it comes to investigations and integrations, and there are still some gaps. So next let's talk about osquery. osquery is another great tool; it was made by Facebook and released open source in 2014, so don't worry, there are no ads or tracking or anything. It's actually a pretty awesome tool, and what it does is expose system info as a database; it's called osquery because you're

basically querying the operating system for this information, so it's a different way of doing the logging. It uses SQLite syntax, with a utility you can use to just run queries against the system. Say you wanted to get the uptime of the system: you can query the uptime table in osquery and it'll spit back the results. Or if you want to get the bash history for all the users, you can query the users table and join that with the shell history table, to get who the user is and what their activity was. So it's pretty cool.
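The two queries just described might look roughly like this in an interactive osquery session (the column choices are illustrative):

```sql
-- system uptime, pulled dynamically when queried
SELECT total_seconds FROM uptime;

-- per-user shell history, joining users to their history entries
SELECT u.username, s.time, s.command
FROM shell_history s JOIN users u USING (uid);
```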

As for how the architecture is laid out: in osquery you have your SQLite engine on top, and it's going to get data from two different places, virtual tables and event tables. Virtual tables aren't actually real physical tables; when you query a virtual table, it basically runs a command on the back end that pulls the data back for you dynamically. One thing it can do is call different functions: when you run a query against that uptime table, it actually just runs a function in the background to pull the current uptime for you, so it's not actually storing it anywhere. And there are all kinds of other

APIs and other files that osquery can dynamically pull the latest data from; it's not really querying a database, but it looks like it is. So when you query for the shell history, it's actually just reading the bash history file; it's not in a database and it's not constantly logging that. Then on the other side you have event tables, which do use a database on the back end to store information over time. One of the data inputs is the syscall monitoring; it's the same framework that auditd uses, so you get all the same syscall information going into these tables, and the same type of file-watch capability,

and then you can also tap into different sockets, so you can go back to the /dev/log socket and read traditional syslog into an event table; you kind of get everything all in one. Then you also have the osqueryd daemon, which uses the SQLite engine to actually log things out: it's basically going to run these queries and then write out the results to the /var/log/osquery files. That's how the logging actually works. It's kind of like Splunk, where you need to schedule the queries to run and pull data, and then only the results you want get logged out. So in terms of what the queries are and how

to schedule them, you have this query config file, which holds all the scheduled queries. it's all JSON: you go to the schedule key and just drop in a new entry with the query you want to run and the interval you want to run it on, and it will write those results out to the log file. for file monitoring, you can just do a join of the users table with the file_events table, but you have to configure the file watches first by putting the files you want to watch under the file_paths key in the same file; otherwise the whole file_events table will just be blank.
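the config being described looks roughly like this — schedule and file_paths are real osquery config keys, but the query name, interval, and watched paths here are made-up examples:

```json
{
  "schedule": {
    "etc_file_events": {
      "query": "SELECT * FROM file_events;",
      "interval": 300
    }
  },
  "file_paths": {
    "etc": ["/etc/%%"]
  }
}
```

the `%%` pattern tells osquery to watch the directory recursively; without an entry under file_paths, the file_events table stays empty.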

for process monitoring, similarly, you have to configure things a little bit beforehand, but the process_events table is where all the syscall information is going to be. then for command history, again you can just look at the shell_history table, which is pulling from the bash history file. one catch: bash doesn't log timestamps in history by default, so you want to set the HISTTIMEFORMAT variable in order to get those timestamps; otherwise you're not going to know when anything actually happened in the logs.
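a minimal sketch of enabling those history timestamps — the exact format string is just an example, and the `501` entry number below is simulated, not real history output:

```shell
# bash only records timestamps in ~/.bash_history once HISTTIMEFORMAT is set;
# the value also controls how `history` displays them
export HISTTIMEFORMAT='%F %T '

# simulate what a `history` entry would then look like for a command run now
printf '  501  %sls -la /etc\n' "$(date +'%F %T ')"
```

set it in ~/.bashrc (or /etc/profile for all users) so it survives new sessions.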

and looking at some of the logs overall, they're not too bad. for a file modification you have the action of what happened, the user that did it, the file that got modified, a bunch of ids, lots of other information, a file hash. it's pretty good; it gives a lot of detailed information. then for process monitoring, you can see it's actually one log, not two of them like auditd: it does that join for you, so you can get the command line and the file and the user and the syscall all in one event. it's really nice, a lot better than auditd. and finally you have the command history, which is basically just reading the bash history file, so it just kind of puts

into columns whatever is in the bash history file, but still pretty nice. so, osquery in review: it can log process, file, and connection activity and a lot more stuff i didn't really touch on, like a lot of the other operating system data you can get out of it; it can plug into all sorts of different sockets and APIs and things. it's a really robust tool, it's pretty easy to install, it's free and open source, and you can really filter down to exactly what you want written out to the log, so you don't have to blow up the log file with everything

that's happening. you can specifically query the syscall tables and pluck out the syscalls you want; you don't have to just log everything like with auditd. and the logs are really structured and provide lots of detailed information, so it's really good for building content and doing analysis. the downsides to osquery: again, you've got to watch your queries, because they can degrade performance if they're really inefficient or running too frequently. you kind of have to manage it like you would with Splunk, making sure you don't have too many of them all running at the same time or pulling a ton of data. and the queries are not

real time, so depending on how frequently you're running them, you might miss out on some actionable time. the other thing is you're going to have overhead developing your own queries. osquery comes with a lot of default packs that can detect lots of different activity and auto-log things for you, but if you want anything specific, or you want to tune things down, you're going to have to go in and write or tune your own queries, so that's just some more time and work to put into it. and since the process monitoring uses the linux auditing framework, it has all the same problems auditd does with processes, where the

syscalls aren't actually going to account for all the activity on the machine; it's only going to log when those syscalls are generated. and the final big one is the bash history file, which is where you get all the commands from: that file is really unreliable. if you just wipe the file, nothing comes back, and there are lots of different ways to make sure your activity is never logged in that file, so it's not really an accurate read of what's going on on the machine. so we finally come to the last one, EDR, which is really the ultimate solution if you want to get lots of visibility on

your endpoints. in terms of what they are, these are really enterprise-grade technologies where you deploy agents to all your endpoints to collect all the local information; i think Carbon Black even uses osquery to collect some of that information. you get lots of granular information from all those endpoints, and it all gets sent back to a nice interface where you can see all the process trees, all the command lines for each process; there's tons of data, and you can search it. it's a really robust tool for knowing exactly what is executing and going on on the machine. so EDR can log all the same stuff that

auditd and osquery can in terms of file and process activity, but it goes a step beyond that as well. you get all the processes, not just the syscalls: all the commands the user is executing, all the processes running. you get the full process command line, so it's not going to be chunked up into different arguments. it can log the network connections the different processes are making, any modules they're loading, memory getting accessed. it's really a complete tool for getting complete visibility on the endpoint; it does a lot of good stuff. it also has built-in alerting capabilities, so you can build

custom rules to detect different types of processes or different types of activity, which is really nice: you don't have to forward all of the raw process logs to your SIEM and just blow out your license; you can manage everything inside the EDR tool and then send only the relevant activity on to the SIEM. it also has built-in threat intelligence, so it'll automatically flag things like known-bad file hashes or connections to malicious ips. and you can automate lots of actions out of it: remotely isolate hosts, delete files or stop them from executing, even remotely log into the boxes and do investigations that way as well. so EDR is really the ultimate

solution for this, but then why doesn't everybody have one? those were all the pros, which i've already talked about; the problem is that it's really expensive. auditd and osquery are free tools, syslog is free; EDR can be pretty costly, so not everybody can actually afford to use it, although it really is a good tool and recommended if your enterprise is big enough. you also bring on additional overhead to manage all the servers and agents now, so you're going to have somebody basically working just to make sure all these agents are working properly, tuning them down, and maintaining the uptime

for all of those and rolling out new updates, so it introduces a lot of new work. then there's overhead developing queries: a lot of the default rules and alerts that come packaged leave lots of gaps. i mean, on the endpoint you can build hundreds of rules and still have gaps, but you're going to have to do a lot of work to fill those in yourself by developing your own queries, if you really want good visibility and you're not forwarding all the logs to the SIEM. also, the code isn't open source, and from using lots of different EDR tools, a lot of them have bugs; there are lots of different problems with them.

new updates might come out that introduce new issues or break features, and there's really nothing you can do about it; you've just got to open a vendor ticket and wait for them to help you out. and finally, the query languages these tools have can be a bit limiting: sometimes you might not be able to do certain things, like use regex patterns or look at parents, or parents of parents, of processes, which can make some of the content not even possible and make investigations a little harder and more time consuming. but overall, EDR is a really good solution for endpoint logging.
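as a point of comparison, that kind of parent-of-parent lookup is easy to express in a full SQL dialect like osquery's — a sketch using the real processes table (pid, name, parent columns), though note this self-joins live process state rather than historical events:

```sql
-- each process with its parent and grandparent, via self-joins
SELECT p.pid, p.name,
       pp.name  AS parent,
       ppp.name AS grandparent
FROM processes p
JOIN processes pp  ON pp.pid  = p.parent
JOIN processes ppp ON ppp.pid = pp.parent;
```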

so, in conclusion: linux logging is really important for any organization, especially if you're running linux. the native syslog is pretty good in terms of just having any visibility — sorry, in terms of being the first step — but it leaves some pretty huge gaps around what's actually going on on the endpoints, and the alternatives to the traditional syslog daemon aren't necessarily that much better; you really need enhanced logging to get the full visibility there. auditd or osquery can give you more visibility into process and file activity, and then EDR is really the best option if you want to see everything going on, but it can be

pretty pricey. so overall, what i'd recommend for the best coverage is really having both the native syslog and some enhanced logging, because the thing with the EDR tools is they're not going to get the regular syslog logs, the ones from the actual applications and services running — adding users, deleting groups, things like that. that stuff can be useful, and the process and file monitoring is also useful, so having both of them is probably the best solution there. as usual, i broke down a chart laying out all the different technologies i've been talking about against the different attributes, so

you guys can just take a look at that; it'll be in the slides. and yeah, that's the end of my talk, so again, thanks for having me. i hope you learned a lot about linux logging and what you can do about it.

kevin, thank you so much, this was a fantastic talk, very informative. i'm going to open it up so that we can have our room monitor go over the questions that were asked.

do you want to go through the questions?

yep. i'm sorry, did you hear the question or should i repeat it?

oh, i didn't hear anything.

with android being linux based, are there any known tools or methods used to collect their system logs? that's the first question.

i haven't looked into logging android. because it's linux based, i'd imagine there are probably some similarities there, but i haven't looked into doing that with android.

okay. the second question: because of sudo, is it difficult to know which user ran a command or syscall?

yeah, it can kind of obfuscate that a little bit. the sudo logs themselves are actually pretty good

because they do give you the command that was run. i'm not sure if they give you the actual user that was doing the sudo-ing, but yeah, if you're switching around users and doing things as root, it can definitely obfuscate that a bit. you'd probably need EDR tools to really trace back the history of that process tree and figure out where it came from, which isn't something you'd get from the other tools.

okay, thanks, those are the only questions. anybody else have any questions? please put them in the chat or the question and answer section. are you monitoring the chat session at all?

yeah, i'm looking at the q&a that are

coming in. looks like a couple more popped in just now.

are there any Kali tools that do this? that was a follow-up, i guess.

i guess, is that referring to doing logging in a different way? i mean, i was actually using Kali for some of this, just installing the tools on it. i don't know what else it'd be referring to.

okay, we have another follow-up regarding EDRs: would you stay with what your employer already has, or go with other technology that integrates better? i think that's how they wrote it.

yeah — oh yeah, okay, you're reading it.

yeah, i mean, i think EDR is always the best option; i don't know if you're saying they're trying to switch to one of the non-EDR tools. as for changing EDR products, EDR products all kind of have very similar functionality. okay

okay, like, staying in the same suite from the same vendor, or going outside of it.

honestly, across lots of different customers, we see people with a huge mix of tools. from a logging and detection perspective, really go with whichever one has the best features; there's not really a problem mixing the technologies. from a detection perspective it might be a little bit easier if they have some synergy and can kind of work together, or it's a little bit easier to maintain for the folks actually doing the engineering work, but that's probably the only overhead from

a detection side that we see, because we work with all sorts of mixed suites of tools. so you really don't have to stay with one vendor if there's a better option from a different one.

and then going back, there's a question about Kali tools for log analysis.

i'm not too sure; i haven't experimented too much with the log analysis tools in Kali, so

i think that's it.

thank you, kevin, for that very good presentation.

all right, thanks everyone.

perfect, thank you so much, kevin, we really appreciate this very informative talk. and with that being said, i think this session is complete, if no one has any more questions. all right, thanks again. thank you.