
that type of thing, so hopefully our decks integrate pretty well with one another. A little bit of an introduction: Brad and I are just two bad men from Chattanooga, and that's actually not completely just a movie reference. Russell, you can't answer this question: which one of these two men actually is from Chattanooga? Who can take a guess? Samuel Jackson? That's right. Blue team, that's right.
So I tried to connect it a little bit there. I think Brad's more of the longer-hair guy here. Anyway, by way of introduction: Brad, you want to go ahead and introduce yourself? Yeah, sure. I've been in IT since the mid '90s, with a security focus for the last six or seven years. I'm an active documentation, testing, and marketing contributor to Security Onion, which you've hopefully just heard about. I was also involved in helping technically edit Richard Bejtlich's new book, The Practice of Network Security Monitoring, which really is a must-read if you're interested in network security monitoring. I put together the Security Onion for Splunk apps a couple of years ago, and I
haven't updated them in a while, but they're still quite functional. Currently I work for Mandiant. My resume is not nearly as interesting. I will say this: I have spent probably half as much time in IT, and half as much time in IT security, as Brad has, and I have maybe one-fourth as much gray hair in my beard. The beard is brand new for Brad, by the way, which is why we pointed it out; it looks great. Anyway, a little background: I'm a former IT director and former security consultant. I actually was red team for a while before I joined Mandiant, which was blue team, so I've actually converted. That's right. Personally, I have three boys at the house, four and under. My middle one actually turns two today, and my youngest is less than a month old, so you can add "expert diaper changer" to that resume. This is actually the first time I've left the house since my youngest was born, so my wife really deserves all the credit on that one; she's the one who put the most work into this presentation, and is going to pay for it. Anyway, I'm now with Mandiant, and I'm an ISSA board member in Chattanooga as we try to get that chapter rolling and attend these types of conferences. I'm inspired
by them, so hopefully we can do something like this one day in Chattanooga. Anyway, what's the agenda for today? We're going to start with buzzwords (we'll go with "big data"), and then we're going to start with what's in Security Onion; we'll talk about why that's important and some of the information there. We'll take a little look at Splunk versus ELSA, since that seems to be a common question, especially when you're dealing with Security Onion. Brad is going to talk about his Splunk app a bit, we're going to take a look at ELSA and hopefully prime you for Martin's talk coming up, and we'll look a little at the architecture overview. And I think you'll see that the architecture really can't be divorced from the visualizations and what you can do with the product; they're pretty intimately related, and it's not by accident or filler that we're talking about the architecture. Then I'm going to get into integrating conditional data into ELSA in Security Onion, and finally we've got some dashboards and some new material we want to show you. So, as we get started: Security Onion makes a lot of data. What have we got already? We've got Snort, Suricata, we've
got Bro, and we've got OSSEC. As we're going to point out, we can funnel a lot of this into ELSA, and you're going to see the power of what ELSA can do with that data. But it doesn't stop there, because I think SecOps needs a lot more data. What's inherent in Security Onion now is fantastic, but as you move forward I think you're going to want to get a lot more data in there, and you're going to see the power of correlation. I personally think the best feature of ELSA is the subsearch: the ability to look across datasets and see how everything relates in your network. We're going to talk a little bit about the integration work and why that's important. Anything that spews syslog is a good candidate for getting into ELSA; Windows logs are also a very good option for that type of correlation, and so are firewalls. Do any of you run pfSense? I have a small blog post on how to integrate a pfSense parser with ELSA; very slick, it basically just uses the existing firewall access-deny and firewall-connection classes built into ELSA. Hopefully when we're done with this talk we'll have given you two things: one, an idea of what data already exists inside Security Onion, what you get out of the box; and two, the prompt to say, okay, I need to look at my network and figure out what other data might be useful to me, primarily from an IR perspective, but also as part of a general overall security process. So that's our introduction, and I'm going to let Brad, who is the expert in all things Splunk and ELSA, take over now. Okay, sure. We talk about Splunk and ELSA together because they
both are similar: they index raw log events, they parse fields, they give you Google-style searching. Those capabilities are there in both, with a few distinctions where Splunk has some capabilities ELSA may not. But really, if you flip to the next slide, it boils down to three things: money, because Splunk costs a lot of it; speed, because ELSA will give you sub-second searches in most cases; and looks, because Splunk's prettier. That said, we'll show you some things as we go along where we're trying to pretty up ELSA a little. For the record, ELSA stands for Enterprise Log Search and Archive, if that wasn't covered in Doug's talk.
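Since both tools boil down to "index raw events, parse fields, answer Google-style searches," here is a toy Python sketch of that core loop. The sample events and the deliberately naive parser are invented for illustration; this resembles neither product's actual engine.

```python
import re

# Two invented syslog-style raw events, the kind either tool would index.
EVENTS = [
    "Jan 10 10:01:02 sensor1 snort: [1:2001] ET POLICY outbound 10.0.0.5 -> 8.8.8.8",
    "Jan 10 10:01:03 sensor1 bro_http: 10.0.0.5 GET cnn.com /index.html",
]

def parse(raw):
    # Pull a "program" field out of the syslog header, patterndb-style.
    m = re.search(r"\s(\w+): ", raw)
    return {"program": m.group(1) if m else "unknown", "raw": raw}

def search(term, events):
    # Google-style searching: a bare term matches anywhere in the raw event.
    return [parse(e) for e in events if term in e]

hits = search("cnn.com", EVENTS)
print(len(hits), hits[0]["program"])  # 1 bro_http
```

The real systems do this at scale with inverted indexes (Sphinx, in ELSA's case), which is where the sub-second search times come from.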
(That answers the earlier trivia question, so now they know.) Okay, if you could flip to the next slide. So why do a Splunk app for Security Onion? Why take something that's free and stick something that costs money on top of it? The main value I think most people will get out of the Splunk app is learning the data you're collecting with an SO instance. For home installs and small-office installs, the free version of Splunk gives you 500 MB a day you can index, which is more than adequate for small environments. As you'll see as we go through, we've got some screenshots, and if we have time at the end we'll do a demo, of just how powerful Splunk can be as an entry gateway into Security Onion data, and specifically the Bro data, because that's really where the wealth of value is. So what are some of the things you can do with the Splunk app, and then take from it and carry into ELSA? First, learn the logs: understand what the Bro SMTP log is doing, what the Bro DNS log is doing, what the Bro conn log is doing. Then follow the uid: Bro tags every event it logs with a uid, and that uid carries through the various log formats. For example, if you visit cnn.com via HTTP, you're going to generate a Bro conn log event with a uid, and that uid is going to match the corresponding request event in the Bro http log, alongside the related DNS activity; so through that uid you can take a session and see exactly where the different Bro logs had detections. Then take the next step, which is understanding how all of these log events relate across the toolsets: not just how Bro relates to itself, but how you can leverage Snort and Suricata, or OSSEC, and, as Chris is going to demonstrate a little later, how you can bring other datasets and toolsets in to really give you some great visibility. The last thing I think is key is separating normal from anomalous: get a sense of what the normal baseline is, and the anomalous will stick out like a sore thumb. This is a quick screenshot of the overview dashboard. I hate pie charts, I really just want to use them as targets, but these basically give you a sense of the data being collected. You can see along the bottom we've got Sguil events, so we're getting some Snort/Suricata data, and what Splunk allows you to do is run a search and then compound on it.
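That uid-stitching idea can be sketched in a few lines. The records below are invented and carry far fewer columns than real Bro logs, but the grouping is the same: every log line's second field is the Bro-assigned uid, and grouping on it reassembles the session.

```python
from collections import defaultdict

# Two toy tab-separated Bro records sharing a uid (field 2). The values
# are made up, and real Bro logs have many more columns than this.
conn_log = ["1389349200.5\tCab4R1x\t10.0.0.5\t157.166.226.26\ttcp\thttp"]
http_log = ["1389349200.6\tCab4R1x\t10.0.0.5\t157.166.226.26\tGET\tcnn.com\t/"]

def by_uid(*named_logs):
    # Group every record from every log by its uid, which is what lets
    # you pivot from one Bro log type to the others for the same session.
    sessions = defaultdict(list)
    for name, log in named_logs:
        for line in log:
            fields = line.split("\t")
            sessions[fields[1]].append(name)
    return sessions

sessions = by_uid(("conn", conn_log), ("http", http_log))
print(sessions["Cab4R1x"])  # ['conn', 'http']
```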
So not only do you have what the event is; you have the count of events, the unique sources in that count, the unique destinations, and GeoIP lookups, and you can also group by severity and so on. It can give you a really good sense, when an alert fires, of exactly how dramatic it may have been and how many sources or destinations were involved. This takes it a step further: it's what we were looking at a moment ago, and all of these are clickable, so you can drill down and start seeing more information, such as all the Bro events that were part of the conversation that triggered the alert up above. You can also break out by responder, destination IP, source IP, a lot of different ways, and we'll get into that a bit if we demo. Lastly, you can click through to the actual raw event data, and one of the more powerful features is the little drop-down arrows over there. (There's a bug in Chromium on SO where if you click on the white part it redirects you to the top of the page, but if you click on the gray part you get the drop-down.) From there you can do things like an incident-response search for the source or destination IP, or Robtex lookups. And if you've got CIF integration (I've got details on my blog on how to incorporate CIF, the Collective Intelligence Framework, into Splunk), you can do lookups against that data to see if this is a known bad, and whether there are confidence or severity ratings associated with it from public sources. Those are the links for the Security Onion for Splunk app and the Security Onion server/sensor add-on, which you can deploy onto Security Onion boxes to have them report back to a central Splunk server if you've got a sort of distributed
environment. We'll get to a demo hopefully in a little bit, but that's an overview of how you can leverage that app to learn more about the data you're collecting. Moving on, we need to talk about ELSA architecture a little. The ELSA version currently in SO is old. As of February or March, Martin, who was super busy, did some major re-architecting: whereas before ELSA required exposing the Sphinx and MySQL ports and using more raw communication, now there's a full web API integrated, where you can use SSL to communicate between sensors and to query against log nodes.
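As a rough illustration of what talking to that web API could look like, here is a sketch that builds an authenticated HTTPS query without sending it. The endpoint path and the epoch-plus-SHA-512 header scheme here are assumptions for illustration, not ELSA's documented interface; check the project docs and your elsa_web.conf peer settings for the real values.

```python
import hashlib
import time
import urllib.parse

def build_query(host, user, apikey, query):
    # Hash the current epoch together with the shared API key so the key
    # itself never travels over the wire (hypothetical auth scheme).
    epoch = str(int(time.time()))
    digest = hashlib.sha512((epoch + apikey).encode()).hexdigest()
    headers = {"Authorization": f"ApiKey {user}:{epoch}:{digest}"}
    # Hypothetical query endpoint, reached over SSL.
    url = "https://%s/elsa/API/query?%s" % (
        host, urllib.parse.urlencode({"q": query}))
    return url, headers

url, headers = build_query("192.168.0.10", "secops", "001", "class=BRO_HTTP")
print(url)
```

The point is the shape of the thing: peers authenticate with a username and API key from the config files we'll look at in a moment, and all query traffic can ride over SSL instead of exposed Sphinx and MySQL ports.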
So flip to the next slide. This is a prototype, if you will, of where ELSA may fit within the new Security Onion image once this gets incorporated, which we're targeting within the next month or so. (I think that's something worth raffling off.) In a traditional environment you may have, say, three Security Onion sensors reporting into a server. With normal ELSA, the way it's deployed today, all your log events are stored on each of those SO sensors, and you have the UI on the master, which then queries against those nodes for event searches. What the new API architecture gives you is the ability to index logs locally, or to forward them all to either your SO master or a standalone ELSA box. Where that becomes beneficial: for example, when I was last doing an SO deploy, part of our objective was working out what we needed to retain and where to keep it. On the sensors, we figured that if something went wrong and we lost drives, okay, we lost five days' worth of pcap data; we can recover from that pretty quickly. But part of the concern was how to retain the ELSA events, that raw event data we really need for retention and legal requirements, as well as for historical search. What this gives you is more options in how you distribute and deploy your environment, and more control over where those events live and get searched. These boxes can be peers, or forwarders, or both. We were talking with Martin a little while ago, and he suggested that a really interesting way of architecting this type of deployment is to do your indexing on your sensors and your archival on the server: you get faster search responses when searching against indexes, which covers your sensor data, while on the server you get compression and a lot more capacity to store vast amounts of data. Your server also typically isn't going to have the disk costs that you're going to get
with a sensor doing full pcap capture. So it gives you an option for managing that data, meeting retention requirements, and getting a little more control, and I know Doug and Scott are still thinking through how they're going to incorporate this new capability. You can jump to the next slide. That said, let's look a little at what's behind the scenes with ELSA. There are primarily two config files in ELSA that matter, elsa_web.conf and elsa_node.conf, both in /etc, and there are really only a few things in them you need to worry about tweaking if you're going to start customizing. In elsa_web.conf you have your peer relationships. On a standalone ELSA you have an apikeys user and an API key, and that API key can be as long and as gibberish as you want; the bigger, the better. Within that, you define localhost as a peer, so it runs in standalone mode: that's the API key and username used to access the local peer, and when you run a query, that's what's used to communicate locally. Taking it a step further, you can have a master with a peer. In this case you can see we've got
a username "secops" with an API key of 001 on localhost, talking to a peer at 192.168.0.10 with a username of "itops" for the master and its own API key. Flip to the next slide; this gives you an idea of what that might look like architecturally. In this case you've got an ELSA master for SecOps pointing at an ELSA peer that's collecting Security Onion data, and that SecOps master also has access to an ELSA master for IT operations. IT operations can't query the SecOps data, but the other direction does work. You can have peer data defined within it, and as long as the master being queried has access to peers, it follows them upstream, so that serves as your access control. In elsa_node.conf there are really three things that matter. The first two are the archive and log limits. You can define the archive limit as a percentage, in this case 33, and that percentage is the percent of the log size limit to devote to archiving. Then the log size limit, which you can define by byte count or as a percentage, basically tells ELSA: this is how much of the disk I want you to use for either indexed or archived logs.
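The way those two knobs carve up disk can be shown with a little arithmetic. The 500 GB budget below is a hypothetical number, not a recommendation:

```python
def carve(log_size_limit_bytes, archive_percentage):
    # log_size_limit is the total disk ELSA may use; the archive
    # percentage is the share of that devoted to (compressed) archive
    # rather than to the searchable index.
    archive = int(log_size_limit_bytes * archive_percentage / 100)
    index = log_size_limit_bytes - archive
    return index, archive

index, archive = carve(500 * 10**9, 33)  # hypothetical 500 GB budget
print(index, archive)  # 335000000000 165000000000
```

With the 33 percent setting from the slide, a 500 GB budget would split into roughly 335 GB of index and 165 GB of archive.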
The third option worth looking at in elsa_node.conf is the forwarding configuration. The way that works is you configure an ELSA instance as a forwarder: any events that come into that box via syslog or other sources get parsed, get compressed, and then get forwarded up to a defined ELSA peer, which is a storage node (or may itself be another forwarder sending things further up). It gives you a way to collect events where, in most cases, you've got wide-area network links and may not have the capacity to send raw syslog data across the pipe; the compression really helps you optimize that. If you flip to the next slide you'll see the different forwarding configuration options available. The method is basically how or where you want to forward events, and there are several: there's SCP, there's raw CP, "url" is the SSL-based method, and lastly there's an option called "ops". What that does is let you configure a separate ELSA instance (or the same one, if you want) to receive web.log and node.log, ELSA's two primary internal log files, so you've got central management for troubleshooting
and investigating. Before I pass it back to Chris, just to give you an idea of how data flows through ELSA: events come in either via syslog or from a forwarder via SSL, zipped up and pre-formatted. Once they hit ELSA, if it's syslog it goes through a patterndb, which handles parsing and extracting fields; if it's pre-formatted, it just gets extracted. The resulting files get dropped as raw text into the ELSA buffers, at which point, if you have both archiving and indexing enabled, copies go to both. The data that goes into the index database then gets hit with Sphinx search, which indexes the data and really allows for fast searching. On Sphinx, briefly (I'm not sure whether Martin's going to cover this): Sphinx stores data in temporary indexes and permanent indexes. Temporary indexes are in RAM; permanent indexes are on disk. There's a subtle intricacy when searching in ELSA: if you don't define any keyword search terms when you run a query, it typically only goes against the temporary indexes. So if you go into ELSA and search class=SNORT, it's only searching the temporary indexes; if you search "tcp class=SNORT", that hits your permanent indexes as well as the temporary ones, so it gives
you a bit more of a dataset to go against. I think that's all on this, so I'll pass it off to Chris to talk about getting some data in. To prime that discussion, I want to draw a little distinction between event data and condition data. Event data is what's typically sent over syslog: it's an action by an asset, something doing something, and it's usually associated with the time it occurred. For example: a user logged on at this time, a user logged off, a virus was found, a packet was blocked. Something happened at a particular time. There's usually other stuff describing that action, common fields like source and destination IPs and source and destination ports, that really characterizes the event. And that's extremely valuable; it's the thrust of what we want when doing IR or any blue-team work: we want to know what happened. But I want to talk briefly about what I mean by condition data. A condition, the way I'm looking at it, is the state of the asset. It's not something that happened; it's what that thing was, the state of that asset, perhaps at the time a particular event occurred. Instead of a time at which an action took place, it's a snapshot of state at a particular point in time. For example: the box was running this firmware version when this event happened. And there's other interesting stuff that describes the state of an asset, such as configuration data: this was the IOS dump, this was the firewall configuration, these were the twenty-some-odd users on the box at the time, these were the processes that were running. So, where I'm going with this: there's so much good work done on event data analysis that I want to talk instead about conditional, or configuration, data. Here's a real quick sample workflow. I see something bad happen in SO and say, uh oh, let's dig deeper into that. There are usually two things I want to see: other events that happened around that same time, to fill in context, and other behavior from the involved assets around that same time. That's a dramatically simplified view, but it's the gist of what's going on.
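Those two pivots, nearby events plus the asset's last known state, can be sketched over a toy event store like this. All of the records below are invented:

```python
from datetime import datetime, timedelta

# Event data: actions at a point in time.
events = [
    {"time": datetime(2014, 1, 10, 10, 0), "src": "10.0.0.5", "msg": "snort alert"},
    {"time": datetime(2014, 1, 10, 10, 2), "src": "10.0.0.5", "msg": "bro_http cnn.com"},
    {"time": datetime(2014, 1, 10, 14, 0), "src": "10.0.0.9", "msg": "ssh login"},
]
# Condition data: state snapshots, not actions.
snapshots = [
    {"time": datetime(2014, 1, 10, 9, 0), "host": "10.0.0.5", "java": "1.6.0_33"},
]

def context(alert, window_minutes=5):
    # Pivot 1: other events within a window around the alert.
    lo = alert["time"] - timedelta(minutes=window_minutes)
    hi = alert["time"] + timedelta(minutes=window_minutes)
    nearby = [e for e in events if lo <= e["time"] <= hi and e is not alert]
    # Pivot 2: most recent condition snapshot for the asset involved.
    state = max((s for s in snapshots
                 if s["host"] == alert["src"] and s["time"] <= alert["time"]),
                key=lambda s: s["time"], default=None)
    return nearby, state

nearby, state = context(events[0])
print(len(nearby), state["java"])  # 1 1.6.0_33
```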
What I was looking at is this: I think it would be helpful to also know more about the conditions of the asset at the time the event happened. Can you tell me more about what this box was like when that particular event occurred, or what its configuration was? That's the thesis here, and the type of data I want to talk about getting into ELSA. There are all kinds of things it might be good to know: what processes were running, what ports were open, what services were listening, what OS was on the box, what known software was installed on it, perhaps even any known vulnerabilities, if this is, say, a Windows workstation. All of those might be useful, and it would be useful to pull them up quickly while you're going through this process. So the questions I'm attempting to answer are: one, where can I find this information, and two, more importantly, how do I get this data into ELSA for easy correlation? Now, what's already there? We already have products that are basically integrated (I think there's still some efficiency to be gained in how it shows up), but some of the
things we're putting in, and I know Scott is going to add this, is the Bro data: known software, known certs, known hosts. Basically, Bro listens and says, ah, I've seen a new piece of software, and tags it with a type, say HTTP browser. These are the kinds of things it can identify from network traffic. Known SSL certs are powerful too; there's a lot you can do if you know what SSL certs are flying across your network. And there's a running list of known hosts. All that data shows up now. What also might be interesting is putting vulnerability assessment data in there, and port scanners, that type of thing. What if we could get Nmap or Nikto or Nessus or OpenVAS in there? What would that add to the picture in terms of correlation? What I have done is create a sample script; it's out on my GitHub account. For those four individual VA and port scanners from the previous slide, we're essentially taking the vulnerability assessment XML data, flattening it, converting it to syslog, and sending it to ELSA, which puts it in the SQL
database. So that's essentially how that happens. The flattening process is actually interesting, because again, putting a state or condition in there with a time attached matters, and so does separating things out: if you look at the average Windows system, there might be thousands of different states or conditions, and you have to be smart about that. You don't put them all in the same log line, but you do want to make sure they all get in. That's essentially the idea behind getting vulnerability assessment data in there. Let's take a look at this; forgive me for the quality. The script's pretty simple: it's Python, you give it the path of the report (report.nessus), you tell it the report type, you give it the ELSA IP address, and that's basically it. I haven't shown it on the slide, but I actually have flags in the Python script: I think if you do a -x it will create the XML for you and print it to an output file, so you can copy it into your patterndb or your merged.xml, and if you do a -s it will create the SQL, so you can do a MySQL dump right
into it. Doing it this way, the schema gets updated for you, bridging the gap between the schema that's already built into SO and what you're adding; you can use that as an opportunity there. This is just a sample of it; I've just done a CVE here. This is some of the Nessus class: you've got the source IP and destination IP, the service, the risk factor, that type of thing. And this goes above and beyond how vulnerability assessment data has traditionally been used. I used to do pen testing and vulnerability assessments, and the name of the game was producing pretty charts, right? You've got pie charts, table charts, high risk, low risk, all that kind of garbage to impress management; it's 300 pages, and it goes in the drawer, and traditionally the guy who gets it is too overworked to act on it anyway. Same type of concept here with Nikto: Python, you put an XML report in, the one I've just listed there, you give it the ELSA IP, and that's basically it. You get a bit of a description, and I've tried to put in the Nikto OID, or the OSVDB ID from
Nikto if that's available, so you can do some cross-correlation; it's just pretty simple and pipe-delimited. Now let's actually put this all together. This is a sample search I've done up here. Suppose you just want to do a plain search: class=NESSUS riskfactor=critical. Quick, show me, from the last scan I did, all the critical vulnerabilities on my network. Maybe that's helpful to you, and I think it's partially helpful, giving you a picture of what's living on your network, but I don't think it's extraordinarily helpful. Taking it a step further, here's what I've done with this particular ELSA query. I've used the subsearch transform in ELSA, and what it's saying is exactly this: show me all IP addresses that requested a resource with "passwd" in it, where the server they communicated with had a vulnerability rated high and the type of that vulnerability was web application. This might be a directory traversal kind of situation. So that's what we're looking at here: class=OPENVAS host-type findings with risk factor high, grouped by destination IP, piped to a subsearch looking for BRO_HTTP where the URI contains "passwd", and then grouped by the source IPs. That will get you that type of list.
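The subsearch's two-phase logic is easy to mimic over toy data. Field names below loosely mirror the query just described, and every record is made up:

```python
# Phase 1 input: VA findings (condition data).
vulns = [
    {"dstip": "10.0.0.80", "riskfactor": "high", "type": "web application"},
    {"dstip": "10.0.0.81", "riskfactor": "low",  "type": "web application"},
]
# Phase 2 input: Bro HTTP events (event data).
bro_http = [
    {"srcip": "172.16.1.9", "dstip": "10.0.0.80", "uri": "/etc/passwd"},
    {"srcip": "172.16.1.7", "dstip": "10.0.0.81", "uri": "/index.html"},
]

# Phase 1: the outer query, grouped by destination IP.
risky = {v["dstip"] for v in vulns if v["riskfactor"] == "high"}

# Phase 2: the subsearch, filtered by phase 1's grouped results:
# who requested a passwd resource from one of those risky hosts?
suspects = sorted({r["srcip"] for r in bro_http
                   if r["dstip"] in risky and "passwd" in r["uri"]})
print(suspects)  # ['172.16.1.9']
```

That is the whole trick: one query's grouped results become a filter on a second query, which is what lets condition data (the VA findings) sharpen event data (the HTTP requests).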
And you can imagine a similar kind of thing with Nikto if you're doing web application scanning: it will actually tell you, here's the directory you might have an issue with, this is where we found an insecure Apache directory. You could look for that, pipe it in, and then look at your traffic, if Bro HTTP is examining it, to see whether there are any actual requests for that particular resource. So those are some of the possibilities. Let me take it one more step. Look at this one: tell me all the sites visited that had a country code, captured from whois, that is not US, and where the client had a user-agent string containing Java and a critically rated Java vulnerability as discovered by Nessus. Again: start with Nessus, look for Java, risk factor critical, group by source IP; pipe that into a subsearch of BRO_HTTP where the user agent contains "java" (so it's a vulnerable Java, and I'm carrying the source IPs from the previous search into it), grouped by destination IP; pipe that to whois; and then filter the whois data to take out "US" from the country code. Okay, that's the kind of thing ELSA gives you; it gives you an idea of what you can do with conditional data, and why that might be important in actually doing your analysis. One last thing here. I have another script for where process data is important; again, snapshots: what processes are running at a particular time, what does the box look like. I basically have a simple WMI script. I know there are some inherent limitations with WMI, and I might look into other ways of collecting process information, but all I'm doing is taking WMI data, converting it to syslog, and sending it to ELSA.
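The convert-to-syslog step might look roughly like this. The "wmi_procs" program name, the priority value, and every field value below are invented for illustration, and the actual UDP send is left as a comment:

```python
from datetime import datetime

def to_syslog(proc, srcip):
    # Flatten one process snapshot into a single key=value syslog line,
    # so each condition record stands alone and stays parseable.
    fields = (f"os={proc['os']} pid={proc['pid']} ppid={proc['ppid']} "
              f"name={proc['name']} created={proc['created']} srcip={srcip}")
    return f"<134>{datetime.now():%b %d %H:%M:%S} wmi_procs: {fields}"

proc = {"os": "WinXP", "pid": 2044, "ppid": 1480,
        "name": "cmd.exe", "created": "20140110100502"}
line = to_syslog(proc, "10.0.0.5")
print(line.split("wmi_procs: ")[1])

# To actually ship it to an ELSA syslog listener (placeholder address):
#   import socket
#   socket.socket(socket.AF_INET, socket.SOCK_DGRAM).sendto(
#       line.encode(), ("<elsa-ip>", 514))
```

One line per process per snapshot keeps the "state at a point in time" semantics intact, which matters once you start pivoting on it later.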
Let's see if that's valuable. Here's some of the information I'm collecting: the operating system, the process ID, the parent process ID, the name, the creation time of the process, and the source IP where I got it from. And then finally, here are the currently executing Java processes on a box, depending on how often you run the script. Actually, here's something interesting: I just did a search for cmd.exe and got C:\WINDOWS\system32\cmd.exe, another system32 cmd.exe, and then finally one under C:\Documents and Settings, on a user's Desktop. Something like that just jumps out at you from the directory alone. I should have done this with a quick group-by on executable path; this stuff really jumps out, you know what I mean? That's hot; it should not be running there. Again, what I've learned from building lots of parsers: number one, familiarize yourself with the existing classes in ELSA (use syslog; select * from devices; select * from fields) and understand what the schema is and what options you have. Just from understanding the schema you can see that ELSA was built for IR, by IR; Martin specifically designed the fields and schema for that process. Definitely reuse instead of building new.
I've made many mistakes where I built new fields when I shouldn't have. And then think about the IR process: how can I link this log type to other log types? That's the big thing; how can I pivot on that data? If it's a source IP here, and the VA data has a source IP somewhere else, how can I pivot back and forth? When you're writing parsers, as we're going to learn this afternoon, think about how to do that, and about what would be important to filter on. Subsearches in ELSA are my absolute favorite function: how can you take that result and filter on it? Again, some of the new content: I'm not going to take credit for all of it, because Scott has done some of it, but I think we've rounded out all of the Bro portions, I think that's all of the Bro logs, so we're in good shape there, and it'll eventually get integrated. And then again, this is the VA integration. I'm going to jump ahead here; if you have any questions on this, my contact information is on the slide. Please let me know if there are issues; there's a lot more that can be done with it, but this is just my first pass. And since the dashboards are running up against the end of our time,
brad, i'm going to let you just kind of talk through some of the dashboards, because he's done a fantastic job with this. yeah, i mean, so elsa also uses google visualizations for dashboarding, um, and it's pretty much as simple as running a query, saying hey, those results look useful, add to dashboard, and you can drop them into a dashboard. and so we're working on trying to replicate some of what i've done with the splunk app for security onion and pull that into elsa, um, to give some of the similar value. uh, one limitation right now is there is no click-through capability, so there's not the ability today to drill down on these events, but thankfully an
elsa contributor recently submitted a patch, um, that martin is hoping to get incorporated here in the next few weeks, that's really going to help with that click-through capability and make these dynamic, make these a place where you can launch further actions from, versus just sort of, okay, i need to take that and go back into elsa and do some searches. and in this case you've just got, you know, some snort or suricata alerts by classification and then by message. this gives you an idea of all the bro sources that have come into the network, and then this very handy little slider bar, so you can really slide it around, zero in on a specific time frame, and get
a sense of what type of activity is occurring where. this is an overview dashboard, which basically checks bro's connection log and gives you a display, geographically, of where you've got connectivity going. um, again, it's an overview, a sense of what's going on in my network. um, this is a breakdown of the events by class. again, these are not necessarily archive searches, so these results aren't going to be for every bit of data that you've ever collected with elsa, but it will be data that you've collected that's indexed and capable of sphinx high-speed searching. so in most cases it's going to give you a really good sense of relative activity over a period of time,
though that may vary depending on how quickly your indexes are rotating. and then, you know, notices by title, from the bro notice log, um, and then some more bro breakdown, which is usually way more chatty. this is a dashboard we started on that's for dns hunting, and the idea here is to try to break down some data. so, you know, in this case you've got class=BRO_DNS grouped by hostname. okay, well, that's a lot of data. well, you can click on the header to reverse your sorting, and you can start looking at the infrequent domains that have been hit. usually you can find some gold nuggets when you're digging in there,
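that group-by-hostname-then-reverse-the-sort hunt is easy to express outside the dashboard too. a sketch of the same idea over a list of looked-up hostnames, where the domains themselves are made-up example data:

```python
from collections import Counter

def infrequent_domains(hostnames, top=5):
    """Group DNS lookups by hostname and return the least-frequent ones.

    Mirrors the class=BRO_DNS grouped-by-hostname dashboard query from
    the talk, with the sort flipped so rare domains float to the top,
    which is usually where the interesting stuff hides.
    """
    counts = Counter(h.lower() for h in hostnames)
    return sorted(counts.items(), key=lambda kv: (kv[1], kv[0]))[:top]

# hypothetical lookup data: one oddball domain among routine traffic
lookups = ["google.com"] * 40 + ["cdn.example.net"] * 10 + ["x9f2.badsite.ru"]
```

sorting ascending by count (then alphabetically as a tiebreaker) puts the one-off domains first, the same effect as clicking the column header to reverse the sort.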
um, and then things like user agents that have been detected, http-wise or smtp. i'd just like to point out, this is from my own network; i'm like one of the seven people that actually owns an apple tv, so you'll see those charts there. but i do like it though, i'm sorry. yeah, and this is one i released actually a few months back, um, on my blog. it's a web monitor; it was kind of designed to give me, as a parent, a sense of what kind of activity is going on in my home network. so it doesn't really have any direct security purpose, i don't think, but it's, again, that sense of
visibility and awareness, that, you know, oh look, http traffic to china, to russia, to ukraine, okay, i've got some things to look at. is that the last one? i think
yeah, and again, just a sense of the different types of data that you're getting from bro. um, you know, bro irc data, smtp data, so you can, you know, tell the mime types you've been seeing coming across in your smtp traffic. um, known services, and this is useful for baselining your network, for identifying normal and making those anomalies stick out. um, and then lastly on this one, the bro weird log. one of the key things to point out, though, is i can't emphasize enough the speed disparity between splunk and elsa. i mean, if you've looked at the splunk app, it does some pretty amazing things with
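the baselining idea with known services boils down to a set difference: record what normally listens where, then flag anything new. a minimal sketch, with the (host, port, proto) tuples being hypothetical example data rather than real bro known_services output:

```python
def new_services(baseline, observed):
    """Compare observed (host, port, proto) tuples against a baseline.

    Anything not already in the baseline is flagged, which is the
    'identify normal so the anomalies stick out' idea from the talk.
    """
    return sorted(set(observed) - set(baseline))

# hypothetical data: a web server that suddenly grows a listener on 4444
baseline = {("10.0.0.10", 80, "tcp"), ("10.0.0.10", 443, "tcp")}
observed = {("10.0.0.10", 80, "tcp"), ("10.0.0.10", 4444, "tcp")}
```

in practice you'd refresh the baseline periodically and review the diff, since legitimate new services appear too; the value is that the diff is small enough to actually look at.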
the queries, but if you do that on a large subset of data, it's going to take a long time. i mean, the tool is really built to learn from; it's not necessarily built to be a production solution. you can take it and tweak it and it can become a production solution, and we did run it as a production solution. but the dns dashboard, we've got almost 20 elsa queries built into that single dashboard, and that single dashboard will load up quicker than one subsearch that you saw earlier in splunk, um, where you're having to do a lot of different transforms to manipulate that data. so it's very, very powerful in terms of
the speed and performance. i think we're up on time, so we'll take one or two questions. yes sir. what's the largest deployment of this you've seen?
was it 50 billion yeah 50 billion events
are you talking about elsa, or... okay, that's a different question, like how many sensors? oh, yeah, a lot. i mean, i've heard of hundreds. um, you know, in my previous organization, we were a company of about 10,000 people. um, we had three egress points where we deployed security onion sensors, and then had a security onion server, um, and the longer-term goal for us was to use those sensors as collection points and ultimately have them feeding elsa data back to that security onion server, because we spec'd out hardware that was comparable for all four boxes, and
you know, the security onion server isn't as disk-intensive as the sensors are. the sensors have got that pcap data, you know, and right now they're storing the elsa data, so there's a lot of data being stored on those sensor boxes. and that was the longer-term goal, to try to keep that pcap data on the edge but bring some of that elsa data back towards the center, where we could have a little more control over retention and things like that. did you guys do your giveaway? no? okay, right here, here's the trivia, it's kind of coming from our presentation here: what are the two
fundamental configuration files in elsa? and... this is down there, right here in the blue shirt: elsa_web.conf and elsa_node.conf.
and we'll be around. um, i mean, we didn't really have time to demo any of this, but you know, if anybody wants to see a little bit of what we've done with both the splunk app as well as the dashboards that are going to be coming to security onion before too long, we'd be happy to show those off. that's it, thank you everybody.