
Manage Your Attack Surface on a Budget

BSides Las Vegas · 2021 · 47:22 · 201 views · Published 2021-08 · Watch on YouTube ↗
About this talk
A panel featuring security engineers from Copart and Apple presents Scout, an open-source tool for identifying and managing external and internal attack surfaces without expensive commercial solutions. The talk covers discovering exposed assets, detecting sensitive data exposure, and practical deployment strategies, including cloud integration and regional compliance considerations.
Original YouTube description:
CG - Manage Your Attack Surface on a Budget - Ms. Brittany D Little, Dileep Gurazada, Joshua Danielson, Anchal Raheja. Common Ground, BSidesLV 2021 (Camp Stay At Home), August 1. Video tag: bslv2021-cg-manage_attack_surface_on_budget-1046461
Transcript [en]

So we all know that the number one thing you're trying to do, if you're trying to defend something, is to know what the heck you're defending. Our next panel is going to present an open-source tool called Scout that helps you understand what your attack surface is, but do so in a way that doesn't require fancy, expensive tools. So we're very excited to have Brittany and Dileep from Copart and Anchal from Apple to talk about the Scout tool and how to manage your attack surface on a budget. Hello everyone, my name is Brittany Little. We are going to talk to you today about how you can manage your attack surface on a budget. I'm just going to go over an agenda of

what we're looking to accomplish today. We're going to give a little introduction of the team. We're going to introduce the first problem we want to talk about, which is exposing assets to the internet, and then talk about our solution to that, Scout external. We're going to introduce the second problem we're looking to solve, which is exposing sensitive data internally, and our solution, Scout internal. We're going to sum up Scout as the homegrown tool we've built and compare it to some of the off-the-shelf tools that are out there today. We're going to look at what's next for our tool and then close with a few tidbits

that we have for you. So I will hand it off to Anchal for introductions. Hello everyone, Anchal here. I have a background in pen testing, cloud security, threat management, and building and automating security tools. Currently I'm working as a security engineer at Apple. And as mentioned, my name is Brittany Little. I focus more on the product management side of things, really focusing on identifying security risks, developing strategic implementation plans for security products, and communicating with key stakeholders. Hi everyone, I'm Dileep Gurazada. My background in security automation and engineering has allowed me to build and deploy solutions for a wide range of use cases at Copart. All right, so now we're going to get into

identifying our problem and really diagramming the problem we're looking to solve, which is exposing assets to the internet. Every company has a data center, headquarters, and satellite sites. When we look at the data center, there are always going to be assets and sites in there that should and shouldn't be exposed to the internet. But what happens if one of these assets or sites is exposed? There are times when attackers are just sitting there trying to find ways into a company's environment, and they're scanning the internet using tools like Shodan to see what companies have left exposed. And if they find these sites, they're really able to gain access into the data center and move laterally,

depending on whatever their motive might be. I'm sure we've seen instances like this. Here is just one example, where hackers actually accidentally exposed their own training videos: IBM researchers found the videos on a virtual private cloud server that was left exposed due to a misconfiguration of security settings. So this is something that we see and have been seeing. We were also looking at the Verizon Data Breach Investigations Report, which you can see here on the left side; it shows a chart about how misconfigurations continue to be one of the top error varieties in Miscellaneous Errors breaches. Again, this is just one way that

assets can be exposed to the internet. And if you look at the chart on the right, this is another issue the Verizon data breach report identified: old internet-facing vulnerabilities. The existence of these old vulnerabilities is just another way assets can get exposed to the internet. So the industry is recognizing that this is a problem, and tools are being built specifically to solve this issue of identifying assets that are exposed to the internet, one being Expanse, which you can see in this article; it's a new technology that Peter Thiel funded. But today we really want to walk you through our solution to this problem and how it might be beneficial to

some of you who are trying to accomplish this on a budget specifically. So now that we've identified the problem, how do we know where we are exposed? How do we define our attack surface? Throughout our presentation we're going to define our entire attack surface, but right now we're going to start with our external attack surface specifically. The way we've defined this is by looking at our external IPs: checking things like our cloud vendors, looking at AWS, looking at our domain registrars like GoDaddy, and other sources within our data center. So now I'm going to hand it over to Anchal, who's going to talk through

our solution to ensure we aren't leaving any sites exposed externally. So, as Brittany explained the problem of exposing assets and managing them, we ended up developing Scout. Scout was developed to detect any accidental asset exposure due to misconfiguration, and to detect it before the bad guys discover it. It's also capable of performing regular automated vulnerability assessments on any of our external assets: the data center, headquarters, or satellite sites. I have broken Scout down into six different stages, starting with footprinting, followed by enumeration, then gathering intelligence, moving to vulnerability assessment, where we establish the baseline, and then finally firing off alerts for the team. Five of the six stages here are

automated right from the start, and the sixth stage automatically becomes automated once we do the first run. Going into footprinting: this is where we configure the tool; it's the initial stage. Here we just connect all our domain registrars, cloud providers, and vendors, such as AWS and GoDaddy, and we also feed in the info that we know about, which could be any of the registered IP addresses your organization might have. Then we move on to the enumeration stage, where Scout will query all the previously connected services, say AWS, GoDaddy, and the rest, and get the list of all the IP addresses, DNS names, and FQDNs from there. This information is going to be used in the intel gathering
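The enumeration step described here can be sketched roughly as follows. This is a minimal illustration, not Scout's actual code: the response shapes mimic boto3's EC2 `describe_addresses` and a generic registrar DNS record list, and all field names are assumptions.

```python
# Sketch of an enumeration stage: flatten cloud/registrar
# responses into one asset inventory. Input shapes are
# illustrative assumptions, not Scout's real schema.

def enumerate_assets(aws_addresses, dns_records):
    """Collect public IPs and FQDNs from provider responses."""
    ips = {
        addr["PublicIp"]
        for addr in aws_addresses.get("Addresses", [])
        if "PublicIp" in addr
    }
    fqdns = {rec["name"].rstrip(".") for rec in dns_records}
    return {"ips": sorted(ips), "fqdns": sorted(fqdns)}

# Stubbed responses; a real run would call the provider APIs.
aws_resp = {"Addresses": [{"PublicIp": "203.0.113.10"},
                          {"PublicIp": "203.0.113.11"}]}
dns_resp = [{"name": "www.example.com."},
            {"name": "vpn.example.com."}]
inventory = enumerate_assets(aws_resp, dns_resp)
print(inventory["ips"])  # ['203.0.113.10', '203.0.113.11']
```

The resulting inventory is what the later intel-gathering and scanning stages would consume.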

stage, which is the next one. In the intel gathering stage, Scout performs a number of queries on Shodan and Censys using the data it has collected from the previous two stages, and then moves on to vulnerability assessment. In vulnerability assessment, Scout performs other things such as port scans, whois details, and banner matching. Once both reports are combined, from the intel gathering stage as well as the vulnerability assessment stage, the data is sent to the baseline stage. The baseline stage is done only the first time: we send the report to the SOC, and what the SOC does is the
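Combining the intel-gathering and vulnerability-assessment reports before baselining might look like the sketch below. The per-IP report shape is an assumption made for illustration.

```python
# Sketch: merge intel-gathering and vuln-assessment findings,
# keyed by IP, into one report for the baseline stage.

def combine_reports(intel, vuln):
    """Merge two {ip: [findings]} reports into one."""
    combined = {}
    for report in (intel, vuln):
        for ip, findings in report.items():
            combined.setdefault(ip, []).extend(findings)
    return combined

intel = {"203.0.113.10": ["shodan: port 80 indexed"]}
vuln = {"203.0.113.10": ["banner: Apache/2.4.41"],
        "203.0.113.11": ["open port: 22"]}
merged = combine_reports(intel, vuln)
print(merged["203.0.113.11"])  # ['open port: 22']
```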

initial analysis and establish a baseline during the first round. Once that's done, all the alerts start firing off. I'll walk you through an example of how one might discover, let's say, an exposed AWS instance hosting an Apache server. In the footprinting stage you connect your AWS account; in the enumeration stage Scout pulls down all the IPs; the service would get caught in the intel gathering phase or the vulnerability assessment phase; and once that goes through the baseline data, it knows this alert needs to be fired off, and you'll get alerted on either Slack, Teams, or your ticketing platform. Now we'll move to the challenges that
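The baseline-then-alert logic is essentially a set difference: only findings absent from the SOC-approved baseline should fire. A minimal sketch, with an assumed (ip, port, service) finding format and printing standing in for the Slack/Teams/ticket integration:

```python
# Sketch of baseline/alerting: alert only on findings that are
# not in the SOC-approved baseline from the first run.

def new_findings(current, baseline):
    """Return findings absent from the approved baseline."""
    return sorted(set(current) - set(baseline))

baseline = {("203.0.113.10", 443, "nginx")}          # SOC-approved
current = {("203.0.113.10", 443, "nginx"),
           ("203.0.113.12", 80, "Apache/2.4.41")}    # latest scan

for finding in new_findings(current, baseline):
    # In Scout this would go to Slack/Teams or a ticket.
    print(f"ALERT: {finding[0]}:{finding[1]} exposes {finding[2]}")
```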

we faced while developing this. The first and biggest one was establishing the baseline, since it's manually time consuming, and here is where our SOC came in to help: they helped us establish the baseline. The second one was adding new vendors. Every time we wanted to add a new vendor, we had to research what kind of useful data we could pull that would enrich our Scout analysis. The third one is the scanning infrastructure we used for Scout. Since Scout performs a number of various scans, we realized it was creating a bottleneck in Scout's performance. We then moved a few tasks to AWS

serverless services, which perform these tasks in parallel without incurring significant costs. And that comprises the external scanning that we do; I'll hand it to Dileep for the internal phase. Thank you. Thanks, Anchal. As Anchal spoke about identifying and blocking internal assets being exposed, it is equally important to identify the proprietary or sensitive data in your systems and protect it. The most common ways of exposing sensitive data involve data being duplicated in unsafe environments. As you can see in the posts attached to this slide, all of these are databases exposed because of insecure configurations, with little or no authentication in place. So now let's draw the problem here. As

a recap, as Brittany and Anchal mentioned, each company has their data center, headquarters, and satellite sites as locations, and your proprietary data will be stored in the data center, which is the more secure location, as you know. But if you look at one of the SDLC processes, most developers require this proprietary data either to enhance an existing application or to debug an existing production issue. During this process the data is transferred from the data center to the developers' personal workstations, and the whole data gets duplicated across the data center, headquarters (your corporate network), and also the satellite sites. So now think about the scenario where

the attacker could be looking for any of these systems, like Anchal mentioned with the external assets. Now you're looking at locations, headquarters and satellite sites, which are not as protected or safe as the data center, and which are exposing this sensitive information. Now let's look at what the Verizon data breach report says. As you can see, misuse of data leading to potential attacks or hacks is common, and it involves categories like privilege abuse, data mishandling, etc.

Now let's define our attack surface within our internal systems. To identify this proprietary or sensitive data, we need to look at our database servers, mostly because these are the systems used by developers to store and retrieve data, which could include production data that is proprietary to your organization. These include servers like MS SQL, Oracle, and MariaDB, and HTTP/HTTPS websites or services that are running. Now let's talk about how Scout internal helps in identifying and preventing these from being exposed. As a recap, the data has been transferred or duplicated from the data center to endpoint workstations, which could be in corporate or satellite sites. Scout is a tool that automatically detects any sensitive information

across these locations, based on the IP ranges you provide to it, so the SOC can basically go through those reports and close them down or secure them, so that no attacker can scan for or find this sensitive information. Now let's talk about how we go about developing Scout internally. Initially, we go through the assets that have data or databases running in them. How can you achieve this? By using your infrastructure scanners like Rapid7, which are already scanning for vulnerabilities within your servers or assets. Based on that, you should be able to identify any assets that are running databases in

them, based on the port number and the services running on them. Once we identify the assets, the next step is to detect the service type and database type. If we know there is, say, an MS SQL service running, we should be able to retrieve the port number and also the typical database type, say MariaDB or Oracle. What this lets us do is create an automated tool, basically a program, to interact with those databases and pull the data. To access these data stores, you need access to those environments as well as to the databases within your corporate environment. Once you access the data stores, you should be able to retrieve the data,
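The service-type and database-type detection step can be sketched as a mapping from scanner-reported ports to likely database engines. The port list below covers common defaults and is an illustrative assumption; a real deployment would confirm the engine via service banners rather than ports alone.

```python
# Sketch: map open ports from an infrastructure scanner to
# likely database engines. Ports alone are a heuristic; real
# detection should confirm with service banners.

DEFAULT_DB_PORTS = {
    1433: "mssql",
    1521: "oracle",
    3306: "mysql/mariadb",
    5432: "postgresql",
    27017: "mongodb",
}

def detect_databases(scan_result):
    """Map {host: [open ports]} to {host: [likely db engines]}."""
    found = {}
    for host, ports in scan_result.items():
        engines = [DEFAULT_DB_PORTS[p] for p in ports
                   if p in DEFAULT_DB_PORTS]
        if engines:
            found[host] = engines
    return found

scan = {"10.0.0.5": [22, 3306], "10.0.0.6": [80, 443]}
print(detect_databases(scan))  # {'10.0.0.5': ['mysql/mariadb']}
```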

and once you have the data, you should be able to scan it for sensitive information using publicly known packages of regular expressions for PII, or you can create your own for use cases that are more proprietary to your organization. Once you identify this sensitive or proprietary information, you should be able to generate alerts and send an email or notify your SOC team to take action. So let's talk about some of the internal challenges I faced while building this tool. As you can see, it involves access to a lot of assets and data, so the deployment of this tool is very essential: understanding where to deploy it and what
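The regex-based scanning step might look like the sketch below. The two patterns (US-style SSN and email) are examples only; as the talk notes, real signatures vary by company and country and need continuous tuning.

```python
# Sketch of the sensitive-data detection step: scan retrieved
# rows with a few illustrative PII regexes. These two patterns
# are examples, not a complete or production-grade signature set.
import re

PII_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def scan_rows(rows):
    """Return (row_index, pii_type) hits for each matching row."""
    hits = []
    for i, row in enumerate(rows):
        for pii_type, pattern in PII_PATTERNS.items():
            if pattern.search(row):
                hits.append((i, pii_type))
    return hits

rows = ["order 4411 shipped", "contact: jane@example.com",
        "ssn on file: 123-45-6789"]
print(scan_rows(rows))  # [(1, 'email'), (2, 'ssn')]
```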

access is needed, at the network level as well as the data level. So placing this tool is very much key as part of deploying it. Once we have the network access to the assets figured out, we also need to ensure that access to the data stores themselves is taken care of. With user workstations handling the data, it's not easy to get access to those data stores, so it requires continuously working with those teams, and some manual effort is involved. For the second point, we have developing the tool to handle new types of databases. Since our tool was developed and deployed at a given point in time, we knew the

data stores that were actively used within our organization, and we were able to create programs that interact with those data stores using Python, access the data, and produce these reports. But with technology advancing and many developers using new technologies, for example NoSQL databases, we have to continuously develop the tool to tackle those problems. The next one is improving the sensitive-data detection signatures. As you might know, PII changes from company to company as well as country to country, and the proprietary data for each company might be different, so creating the signatures and updating them over time is a continuous process, not just a

one-time thing. And then finally, the latency issues. While this tool runs, it is continuously interacting with data stores to identify whether any new data being added could potentially be PII. During this process the database interaction is happening continuously, which was causing latency issues for other apps talking to these data stores. So, working with those data store teams as well as the services teams that interact with the databases, we came up with a preferred timeline to run these tools so as not to disturb the productivity of those teams. So let's take a recap of our attack surface. Right now we have determined

the external and internal pieces, and we have covered all of these. Now, for the solution comparison, let's talk about our homegrown solution and how it compares to the industry solutions or other products out there. For homegrown, the main pro is customization, in the sense that each environment is unique and has its own data, or even infrastructure, that needs to be supported by other tools or your own tool. For example, you have your own domain registrars, like Anchal mentioned, GoDaddy, and also cloud solutions like AWS or GCP or Azure; it varies from company to company. There's also the flexibility of

deploying this tool: for example, Scout internal or external can be deployed within your enterprise or outside it, which some of these solutions cannot do. The con for this tool is basically the management of it, which requires a mid-to-senior-level engineer. As for off-the-shelf products, we have developed this tool for the past year and more, and during our assessment this is what we found: they require low maintenance and less interaction from the teams, so that is a pro. The biggest con is customization, like taking the custom inputs from each company, because each of their infrastructures is different, as I mentioned before.

Now let's talk about what else we can do other than looking at our external and internal assets. As you might know, every company is evolving and using the Internet of Things a lot, for example remote site locations using thermostats, smart fridges, or Alexa for some of their operational purposes. We need to somehow include the Internet of Things too, as it is one of the biggest attack vectors right now, so we need to work on those, and that is upcoming for us. As for what's next: the challenges we are facing are mostly around the customizable things. Sensitive data identification is one thing that has taken much of our time, because passing the data through a lot

of regular expressions and evaluating all of it takes a lot of time. To reduce the time, we would either do parallel processing or remove a lot of the false positives that can come out of the regexes. The next one is developing automated remediation processes. This tool is very customizable and also generates a lot of alerts, so it's not straightforward to automate the remediation, but we have identified a few use cases where we can eliminate the manual work being done. Then let's look at the sensitive data and see what controls we can put around it. In the sense, now that we have identified the sensitive

data within your internal systems, what controls can you put in place to make sure it is not being accessed, internally or externally, by attackers?

On the conclusion side of things: no solution is perfect, in the sense that every solution has its own drawbacks, like the cons I talked about on the previous slide. Each solution might be customizable only to a point; we need to work with their leadership team and develop the roadmap for it, and for the customization side of things, we have to build our own plugins or work with them to create those plugins as part of the solution's roadmap. And the final one is: know your attack surface. This attack surface is very tricky in the sense that it evolves day to day, with the technology evolving and

a lot of departments using technology more than before, the reason being ease of use, and this creates the problem of exposing a lot of new types of infrastructure, for example the IoT we were just talking about, and there could also be AI-based solutions out there that could be exposed later. So keeping track of what is being used within the company is always a starting point for anyone starting to build a solution. And the final thought about this: most of these products are incomplete and very expensive, so it's better to start building your own solution here. Thanks for your time. So I'm curious, how did you all come together to create

this tool, and as an open-source thing, because obviously it's not part of your company's vision. So tell us about the genesis of you all coming together and saying, you know, people need this, let's build it. That's rare: a lot of people say there's a need, but not everyone says let's solve it. How did you decide to come together and do that? You can take it initially, and later they can probably add on as well. So, we also presented, probably in 2017, an open-source tool called GitHub Guardian at BSides Las Vegas, so this is my second time presenting here. Our company, and basically our

security department, has always been looking for ways to create open-source software and see how it benefits the industry. That's how we have the strategy going forward: looking at any of the problems that we solve and how we can better help the community out there. We have seen BSides be a great platform for us so far; we were able to do it before, and then we started working on these projects, we thought it would be a great benefit, and it happened in such a way that we were able to do it again. And are you involved in a number of

other open-source projects that are sort of related, across the security community? Not actively, but I've been part of some of these tools like Semgrep and a couple of other linting tools on the security side. We've not been in active development; we've been more focused on building our own products, we've been stuck in there, but that's our next goal: to move on to that next level and be a part of the community that is actually developing other things as well. Nice. And we'll come back to sort of the future of this project in a little bit, but let's talk about some of the steps.

I think your presentation did a really good job of laying out the various steps. The first step, enumeration: identifying what you have, sort of the core basics. But as you point out, that data is often distributed across a lot of different places. Can you talk through some of the challenges someone is going to face when they think through that process, and maybe what you had to go through to get integrations into other tools? Yeah, sure, I'll take that. So yeah, that was one of the biggest challenges we faced, just finding a starting point, and the one thing we always found was good was

just talking to the accounts department, because they are the ones who pay for all that stuff. That gives us a good starting point: okay, which services do we pay for, and go from there. So if your company has really good IT asset management, your organization should not face that issue; if not, I would suggest going to your accounting department and starting from there, just figuring out what services everyone is using, be it your developers, your security team, or even the legal team. That was a good starting point. And one of the things you talked about was the importance of tracking internet-facing applications.

Of course, today the perimeter is quite a difficult thing to define, and in fact, if you're doing security well, there will be some very important things that you need to track but that ideally aren't internet-facing, or have some amount of protection in front of them. Do you draw a distinction there, or do you think it's important to list everything possible as the maximum set?

Just to refresh that question so we understand: what is the scope of the applications, right? You asked it much better and shorter than I did; hopefully I'm answering the right question then. So our starting point was to identify the external-facing apps. The way we did it was, like you mentioned, going through accounting, and also looking at some of our access logs or DNS logs to identify those applications. One of the other steps we have done at Copart is that we mostly enforced single sign-on, basically integrating with a lot of IdP providers. So for the apps that are not

actively being routed through these solutions, like Okta, that's when we get to know the list of apps we need to assess, which spans different sets of infrastructure, whether it's AWS, or Salesforce, or any other tools that your company uses. In fact, it has also helped in that people reach out to us asking us to set up SSO for applications we don't know of, because that's ease of use for the users. So that's the other way we were able to capture these websites. Nice. And have you had to think about or deal with

the opposite of external-facing, right, the internal-facing assets, where you say this is important because if someone could get inside and hop over, they could do a lot of damage? Obviously a database is a good one, but are there other ways you've started thinking about that, or do you say that's an important issue but you're going to leave it as a separate issue? Actually, we did that too; that's part of our second talk, the internal asset identification. The main goal with these internal assets is to identify those that have our sensitive or proprietary information that might actually leak something out to the internet. So

the goal was to identify this data residing in the systems; that's the part of our second talk, identifying those assets internally. The most common use case we have seen across companies is developers actively accessing production data for various reasons. One I mentioned was solving an existing production issue: they need the actual data to test with, instead of using test data, which may not reproduce the actual problem. For that reason they're insecurely storing this data on their personal workstations and other locations. So we try to solve this:

first come back and say, where is our actual production data residing? Is it only in the data center, or anywhere else too? Then we were able to identify these workstations and started finding a way to secure them; we are in the process of securing them. The goal was to create the right processes for these people to access this data, and also to secure the systems that hold the sensitive or proprietary data. Nice, thank you. So, zooming out a little bit: in your talk you mentioned, hey, there are tools out there that are doing this; you flagged one very prominent funder, and of course we all know that

there are further products; I think a lot of folks who were selling network management tools in the late aughts sort of rebranded themselves as IoT tracking tools, you can manage your network. So it's a rich marketplace. As you were building this, what was your initial vision for the audience, for who would want something like Scout in particular? And then, as this has evolved, tell me how your vision of the user base has evolved as well.

I think the initial PoC, Anchal probably did that against the other solutions out there. The main reason is that no tool is a perfect fit for any company, right? Because you need to know your attack surface whether you're a small, medium, or large sized company. But, and Anchal might expand on this after my answer, the common problem we have seen is the customization to your environment. None of these tools come with your customizations, because you have chosen a different vendor or a different third-party software, or you are a complete SaaS-based shop versus having an on-prem data center

that you deploy your systems in. So all these different customizations are not readily available during your process of assessment, and that was a year or so back. It's a hard problem for any company, because they still have to work on building those plugins and that stuff, unless the vendor is ready to work with you and do it; otherwise you're building half of the solution anyway. So that's our main idea on that one. Interesting. Now, that makes me ask: are there types of companies or organizations that you think are particularly excited, have demonstrated some interest, or that

you think will find this particularly helpful? Probably not the Fortune 100, right? I know you do amazing work, but how would you characterize who should be paying attention? Maybe there are folks in our audience working for megacorps, but we want to make sure they know, when they're talking to their friends, that they should think about this for their needs. I think, let me start with the early-stage startup companies. They keep building this stuff without being aware of it, using technologies that are readily available, and that creates more problems in the

long run, because you don't know your assets. So keeping this tool in mind, and also knowing what it needs: it needs inputs from all the sources our surface, basically our assets, are being accessed from. Taking that into consideration, they can probably start in the early part of their company's growth. That would be one thing. And then also the companies that are currently trying to move to a data center, or move to on-prem or a cloud or something like that, the people that are switching right now. I know the bigger companies have their own processes within their company just

to identify these assets, but for the ones that are still in the process of identifying their assets or doing asset management, this would be a great tool, and a way to see their security posture at the same time. So you have identified your assets; what is the next step? Have you seen the risk score, or what is the risk posture of your assets? That might be the next step for them to look into. Makes sense. Anyone else have further thoughts about who you'd like to see aware of this, to be able to pick it up and integrate it? So one thing you sort of flagged is, hey, you've got to do a little work

here, and this is the trade-off. Anything we should know for someone who says, yeah, this Scout seems pretty cool, I don't want to pay the megacorp, I don't want to pay giant vendor prices? What kind of resources is an organization going to have to line up? And of course, as you pointed out, it depends where they are in their cycle and whether they're doing a massive shift, moving things around. Anything you can offer to help someone plan what it would take to use this Scout model and keep it on a budget?

Anchal, do you want to take it? Can you hear me? Anchal, do you want to take it? Oops, you're still on mute, I think. Oh, sorry. No worries; it's now official that we've had the shared virtual experience of someone forgetting the mute button. Yeah, so the one thing I would recommend is, if you have a cloud provider, deploy it there, because one of the initial bottlenecks we faced once we deployed it on our own infrastructure was the scanning part: scanning takes a lot of time and is mostly limited by the number of network scans you can launch. So that's one thing I would recommend for

for somebody who's starting out with this tool: just deploy it on a cloud instance, so you don't have to worry about scaling the scans or about the scan times. I think, just to add on to that: I have seen within our companies that when you start scanning resources discovered from these different services, like Shodan, there might be some false positives, and when your scanner starts scanning them you can get flagged by the actual scanning vendors. If you have deployed it in AWS, for example, they might reach out and ask whether you are doing something bad to other companies.
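The scan-time bottleneck mentioned a moment ago, wall-clock time limited by how many network scans you can launch at once, is essentially a fan-out problem. A minimal sketch of the idea, assuming a simple worker pool (the helper names and worker count are illustrative, not Scout's actual code):

```python
from concurrent.futures import ThreadPoolExecutor

def batch(targets, size):
    """Split the target list into fixed-size chunks, one per scan job."""
    return [targets[i:i + size] for i in range(0, len(targets), size)]

def scan_all(targets, scan_one, max_workers=8):
    """Fan individual scans out over a worker pool.

    Wall-clock time is roughly (len(targets) / max_workers) * per-scan time,
    which is why a cloud instance that can run more workers helps so much.
    """
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        # map preserves input order, so results line up with targets.
        return list(pool.map(scan_one, targets))

# Example with a stand-in "scan" that just echoes the host:
results = scan_all(["10.0.0.%d" % i for i in range(1, 5)],
                   lambda host: (host, "up"))
```

A real deployment would replace the lambda with an actual port or service scan and tune `max_workers` to whatever scan rate the hosting provider tolerates.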

Things like that happen, so just keep that in mind while you're deploying. I know everybody in the industry might know this already, but these are small things you might face while deploying and running Scout. Also, the integrations we have built are customized to our environments, so it's a matter of identifying yours: AWS is probably the common one, or it could be GCP or Azure. It's about knowing those sources and building those plugins the way Anchal described.
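That per-vendor plugin idea, one integration per place assets live, all feeding a common inventory, can be sketched roughly like this. The class names and asset fields are hypothetical, not Scout's actual interface; a real AWS source would wrap an SDK call such as boto3's EC2 `describe_instances` instead of returning a static list:

```python
class AssetSource:
    """One integration per asset source (AWS, GCP, Azure, CMDB, ...)."""
    def list_assets(self):
        raise NotImplementedError

class StaticSource(AssetSource):
    """Stand-in for a real cloud integration; returns a fixed asset list."""
    def __init__(self, assets):
        self._assets = assets
    def list_assets(self):
        return list(self._assets)

def collect_inventory(sources):
    """Merge assets from every source, de-duplicating on asset id."""
    seen = {}
    for source in sources:
        for asset in source.list_assets():
            seen[asset["id"]] = asset
    return sorted(seen.values(), key=lambda a: a["id"])

inventory = collect_inventory([
    StaticSource([{"id": "i-1", "ip": "10.0.0.1"}]),
    StaticSource([{"id": "i-1", "ip": "10.0.0.1"},
                  {"id": "i-2", "ip": "10.0.0.2"}]),
])
```

The point of the shape is exactly what the panel says: the merge logic is reusable, but each `AssetSource` subclass is the part you have to write for your own environment.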

Some of these components individually can be directly integrated with the tooling you want to use, but some of the components you have to build yourself, like customizing the actual integrations with the cloud vendor or whatever other platform you get your assets from. That's one part to keep in mind: it's not directly consumable, so you need to create your own pieces first. Makes sense, thank you. And thinking about the bigger picture, I'm going to put you in a little bit of a spot here. There have been a couple of interesting talks this weekend about compliance, right? This is the 'I need

security from my own perspective, because risk management is important, but I also often have to demonstrate it to someone else, whether that's my customers or the government.' Have you thought about where this tool could fit, whether Scout could support a compliance model? Or are you just focused on the risk side? I can take this. I think most of these companies have assets across different regions. Think about European countries: they have privacy laws that differ from the United States or the UK; each country has its own laws and its own rules around data management and things like that. So you can't just go in

and access these assets and scan them; there are these laws. What we have done is modularize our tool deployment by country, so that we follow the process and don't accidentally access assets from other regions. We are more controlled in that fashion: not scanning the entire infrastructure at the same time, and making sure we classify assets by country. That's a good question, actually, and it's one thing everybody has to keep in mind.
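The per-country modularization described here amounts to partitioning the inventory and refusing to scan anything outside the regions a given deployment is approved for. A rough sketch, assuming each asset record carries a country field (the field and function names are hypothetical):

```python
def partition_by_country(assets):
    """Group assets by the country whose residency rules apply to them."""
    groups = {}
    for asset in assets:
        groups.setdefault(asset["country"], []).append(asset)
    return groups

def scannable(groups, approved_countries):
    """Keep only the groups this deployment is permitted to scan."""
    return {c: a for c, a in groups.items() if c in approved_countries}

assets = [
    {"id": "i-1", "country": "US"},
    {"id": "i-2", "country": "DE"},
    {"id": "i-3", "country": "US"},
]
# A US-only deployment silently drops the German asset instead of scanning it.
allowed = scannable(partition_by_country(assets), approved_countries={"US"})
```

Filtering before the scanner ever sees a target, rather than after, is what makes the "we can't access it by accident" guarantee the panel describes.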

Privacy is one of those things. Yeah, it's always nice when we can acknowledge that security and privacy are co-linear, and if you help one you help the other, rather than saying they fight each other. Yeah, and in privacy-focused countries it might be a risk where the scanning itself is the risk, so you have to watch for that problem. So, one of the advantages of being the moderator is that I always get to ask about the things I care about, and one of those is the software bill of materials, or SBOM: the idea that I don't just need to know

what the blinking box on my network is; maybe that blinking box is made by a small vendor, so it's not going to have a CPE itself. I need to know what's under the hood, right? I need to know what the components are, and then obviously map them to potential vulnerabilities. Is this something you could see fitting into the Scout model? Yep, I think that's a good question. Right now we are solving that as a separate use case, not with Scout; we still use OWASP-recommended open source software like Dependency-Track and all those. But I think integration

could work. Like Anchal mentioned, for the software that is out there, Scout is just looking at the surface area of these assets: which service is running, is it exposing ports that should or shouldn't be open, say 80 and 443, the common ports for running a web application (80 we wouldn't recommend). But going to the next level of actually scanning those websites, identifying the vulnerabilities they have, then going into the components, like you mentioned, the third-party software they're built from, and giving a risk score would always help, because that would let the security

operations team prioritize which assets to work on up front compared to the others, and also see the reasons why it happened and go back to it. It could solve multiple things, but it's really an aggregation of data; I would see it from that side: aggregating data and giving a posture to those assets. I'm going to just start referring to SBOMs as 'only a data-aggregation problem, how hard can it be?' No, this has been really interesting, and thank you for taking the time to do this. Tell us what you see coming.
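The prioritization idea in that answer, aggregating per-asset findings into a score so the operations team knows what to fix first, might look like this in miniature. The severity weights and the port-based findings below are made-up examples, not Scout's actual scoring:

```python
# Hypothetical severity weights; a real program would tune these.
WEIGHTS = {"critical": 10, "high": 5, "medium": 2, "low": 1}

def findings_for_ports(open_ports, expected=frozenset({443})):
    """Turn raw open-port data into severity-tagged findings."""
    findings = []
    for port in sorted(open_ports):
        if port == 80:
            findings.append(("plaintext HTTP exposed", "medium"))
        elif port not in expected:
            findings.append(("unexpected open port %d" % port, "high"))
    return findings

def risk_score(findings):
    """Sum the weights; crude, but enough to rank assets."""
    return sum(WEIGHTS[severity] for _, severity in findings)

def prioritize(assets):
    """Highest-risk assets first, so the ops team works top-down."""
    return sorted(assets, key=lambda a: risk_score(a["findings"]), reverse=True)

web = {"id": "web-1", "findings": findings_for_ports({80, 443})}
db = {"id": "db-1", "findings": findings_for_ports({443, 5432})}
ranked = prioritize([web, db])
```

Everything above really is "just aggregation of data," which is the panel's point: the hard part is collecting trustworthy inputs, not the arithmetic.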

For the future of Scout: is this something where you hope people will help with the heavy lifting and start to think about their specific use cases? Tell us what you have planned, in all your infinite spare time for this project. Anchal, do you want to take it? Yeah, I'll go for the external one. For the external one I have two things planned out that I want to add. One is adding more cloud vendors, like Azure and GCP, at least the top five vendors that most organizations would be

using. The second, and the biggest one, would be to improve the scanning time. Since we started building it, we have brought it down from about 28 days to about 12 hours, and my goal is to reduce it even more, so organizations can get their report faster and go on to scan more assets quicker. Nice. Is there anything else you want to flag? And Brittany, I can put you on the spot as the horrible cold-calling professor: anything you want to add that we haven't touched on? Because we are just about out of time for our discussion. Not really; the one I was going to hit on was

Anchal's last point: always looking to improve the scanning time and seeing how quickly we can provide that data to the team. Is that the sort of thing you think of as real time, or as a background cron job? What is taking the time, and how does bringing it down by an order of magnitude help? I think it's the concurrency, yeah, probably the concurrency and the size of the network you're scanning. It might even be small for a smaller organization, but it depends on how many assets you have, or how many assets

Scout discovers for your organization. So yeah, that's what we're trying to bring down, but that's the challenge right now. And I think the one last thing I need to add was around IoT devices, which are the new subset of your infrastructure, much more commonly used now than they were a few years back. So it's going to shift more toward that side of things. Just keep that awareness and try to build those integrations, because it's such a new space that there

might be only one or two vendors you need to integrate with, so that might do a lot of the heavy lifting for us, in the sense that it might help a lot of people, because they might still be using that same vendor, compared to the infrastructure we integrated with. Got it. Well, thank you. I'm going to ask one more favor, which is: could one of you post any relevant links in the Discord? There's a GitHub repo that folks should know about, just so anyone who wants to learn more can contact you, as I did not write that down from your video presentation.

But unless there are any very short last words: thank you all so much for taking the time to present, thank you for your hard work building this, and if anyone wants to know more, they should reach out directly to you. Thank you. Thank you for having us. Bye. Thank you, have a great rest of your time at summer camp. We'll talk soon, campers.