
Real Time Vulnerability Alerting

BSidesSF · 2020 · 46:51 · 285 views · Published 2020-03 · Watch on YouTube ↗
Speakers: Amol Sarwate
Category: Technical
Style: Talk
About this talk
Amol Sarwate - Real-Time Vulnerability Alerting by Using Principles from the United States Tsunami Warning Center. Harness public data and apply data analytics principles from the US Tsunami Warning Center to cut through the noise and get real-time alerts only for highly seismic vulnerabilities. Make vulnerability fatigue a thing of the past.
Transcript [en]

Okay, today's talk is Real-Time Vulnerability Analytics by Using Principles from the U.S. Tsunami Warning Center, by Amol Sarwate. He is the head of security research at CloudPassage Inc., where he heads CloudPassage's worldwide security research, responsible for cloud-focused vulnerability and compliance research. He has devoted his career to protecting, securing, and educating the community about security threats. Here is Amol. Thank you. All right, so the title of the talk is already a mouthful: Real-Time Vulnerability Analytics by Using Principles from the U.S. Tsunami Warning Center. As Tatyana mentioned, my name is Amol, and during my day job at CloudPassage we provide solutions for securing and gaining visibility into your

cloud, container, and traditional workloads. Real-time vulnerability analytics is a pretty old topic; people have been trying to get real-time information on vulnerabilities for, I don't know, how many years do you think? I'd say 20 to 25 years, because I remember in the mid-to-late 90s I was working for one of the security vendors, and even 23 to 25 years ago this topic of real-time vulnerability analytics was there. So why is it that after almost 20 to 25 years I am still here talking about real-time vulnerability analytics, with a few people here trying

and wanting to discuss it? I think the main reason is that the vulnerability landscape just changes so rapidly; it does not really matter whether the topic is ten or twenty years old, and I'm pretty sure 10 to 15 years down the line we'll still be talking about vulnerabilities, vulnerability analytics, and how to spot them. So in the next 30 to 40 minutes we will look at the problem definition, then at the newer approach we took this time, then go over some of the design and implementation, and go over a lot of examples, because it's good to

have a design and implementation approach, but when you see a system working with some real examples, that's when it really solidifies what you're trying to say. Then at the end we'll conclude and look at some of the future work for this project. So let's get started. I put this slide up because, really speaking, all of us see a tsunami of vulnerabilities, attacks, and exploits coming our way every day. Some of you must be security practitioners, some of you could be working at security vendors, and in some respect everyone here works somewhere in the security landscape, and we

see this tsunami of vulnerabilities each and every day. This is a slide from cvedetails.com. Everyone here knows what a CVE is, right? Okay, everyone knows what a CVE is: it's essentially a unique identifier given to one vulnerability, and there are many companies, one of them being CVE Details, that track how many vulnerabilities there are. These are the vulnerabilities by year, and on the right-hand side is 2019, last year, where according to them there were about 12,000 unique vulnerabilities. And mind you, these are just unique vulnerabilities, so one of these vulnerabilities can be used by many proofs of concept, by many exploits, by many viruses, malware, and things like

that, so it very quickly gets multiplied by the amount of bad things happening in the industry. And then, at the end of the day, in your organization you have hundreds and thousands of assets, so it again gets multiplied by that number of assets. These are just unique vulnerabilities. Just a show of hands, since this is a small enough room: how do you currently get your vulnerability information? NVD, NIST, CVE Details, vendors? What do you currently rely on, anyone want to volunteer? All of the above? Okay, that's a very good strategy, because getting it from just one source is not really

going to cut it. So the point of this slide is that we have so many unique vulnerabilities coming in, and these are just approximate numbers, because as you know there are some vulnerabilities in the NVD database which are not in this database, which are not in that database, so you are always sort of scrambling to find the truth, and not just the truth but the vulnerability of the day, whatever is of the highest criticality for that day. Now this is a screenshot of the Pacific Tsunami Warning Center, and you might ask: why tsunamis all of a sudden? Because there is a little bit of a correlation or similarity

between vulnerabilities and tsunamis. The root cause of most tsunamis is earthquakes, and earthquakes can occur at any time. There are obviously also landslides, volcanic eruptions, and some other causes, but the root cause of tsunamis is earthquakes, which cannot be predicted. And yet, even though earthquakes cannot be predicted, tsunami warning centers do exist, and they give us very timely information on the tsunamis that are going to hit our shores. So earthquakes can be compared to vulnerabilities: just like earthquakes, you cannot really predict vulnerabilities. I can tell you what the high-profile vulnerability was five minutes ago, but I

cannot predict what's going to be the most trending, high-profile, or seismic vulnerability by the end of this talk. So, similar to earthquakes, you cannot predict vulnerabilities, but you still have tsunami warning centers that give us information in the nick of time about the dangers to come. A thought came to my mind: if you can have tsunami warning centers implemented in real life, why can't we similarly have vulnerability warning centers which take the same cues and are based on similar principles and ideas as a tsunami warning center? So I said okay, let's take a quick two-minute look at how these tsunami warning centers work. What they have

done, essentially, and I promise this will not take more than two minutes, I know you are here for vulnerabilities and not to learn about tsunamis, is they have a bunch of sensors at the bottom of the Pacific Ocean, almost bolted down or held by heavy weights. These sensors at the bottom of the ocean continuously monitor for seismic activity, and (I think it's too far for the laser pointer) they have some sort of bidirectional communication with buoys floating on the water. Now what these buoys do is they

get data about the seismic activity happening at the bottom of the ocean. They also collect additional data nearby, like water temperature, surface temperature, and wind. You have multiples of such sensors and buoys, and the buoys have good power: they can talk with satellites, which then communicate the data to the data centers. So, to summarize: we have sensors on the ocean floor that get data to the buoys; the buoys get the most important seismic data as well as additional data such as wind, currents, temperatures, and so forth; and the buoys have enough power to communicate

with the satellites and get the data to the data center. This is an example of a real-life buoy that you would find somewhere in the Pacific. Once the data comes into the data center, a lot of data analytics is done, not just on the seismic data, because that's the number one cause of tsunamis, but also on historical data: essentially, what had happened when such an earthquake occurred and the temperature was this, and so on. All of that data analytics happens in the data centers, which at the end raise an alarm saying, okay, you could have a tsunami coming in the next

few hours, or something like that. So essentially the question was: can we use some of these ideas, can we model vulnerabilities on earthquakes? We just went over one of these parallels, which is that earthquakes cannot be predicted, and vulnerabilities also cannot be predicted. What we decided was to take these three important components, the sensors at the bottom, the buoys and the communication, and the analytics, and put them in the context of vulnerabilities. So we knew we needed three things: data collection, fast communication, and analytics (a minimal sketch of that three-stage split is below).
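The talk shows no code, so the following is only a hedged sketch of how the three stages could hang together; all names (Observation, collect, store, analyze) are hypothetical:

```python
# Hypothetical sketch of the three-stage pipeline described in the talk:
# collectors ("sensors"), a shared store ("communication"), and analytics.
from dataclasses import dataclass

@dataclass
class Observation:
    cve_id: str      # e.g. "CVE-2020-0601"
    source: str      # e.g. "nvd", "exploit-db", "blog", "vendor", "twitter"
    day: int         # observation day, simplified to an integer

def collect(sources) -> list[Observation]:
    """Stage 1: each 'sensor' pulls raw mentions of CVEs from one source."""
    return [obs for source in sources for obs in source.fetch()]

def store(db: list[Observation], observations: list[Observation]) -> None:
    """Stage 2: ship observations to a central store (Postgres in the talk)."""
    db.extend(observations)

def analyze(db: list[Observation]) -> dict[str, float]:
    """Stage 3: turn raw observations into a per-CVE score (weights come later)."""
    scores: dict[str, float] = {}
    for obs in db:
        scores[obs.cve_id] = scores.get(obs.cve_id, 0.0) + 1.0
    return scores
```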

For data collection, we thought about things we had done in the past. We had worked with honeypots, and those had given us very good results. Everyone knows what a honeypot is, right? I see some hands, okay. Essentially, the advantage of honeypots is that you do get to know about live attacks and threats, which is really good. But there are also some disadvantages, and one of the biggest is that the vulnerability has to be weaponized and actually being used in the wild, either by attackers or by vulnerability scanners from one of the vendors continuously scanning the Internet, for your honeypot to catch it. So it's already a little too late, because at that time a proof of concept for the vulnerability is already

out there, someone has already converted the proof of concept into a weaponized script that really works, or, in the worst case, adversaries are already making use of that vulnerability, and that is why your honeypot was able to catch it. So this time we did not go with honeypots. I'm not saying honeypots are not useful, they are tremendously useful, but for this particular real-time alerting problem we thought honeypots were too slow. Also, the types of vulnerabilities that honeypots can catch are limited: a honeypot can only catch a vulnerability that has a listening port which an attacker can actively send packets to. And in today's world, with CI/CD, and you

know, a lot of you are ops engineers, nowadays you don't even have SSH running, so the usability of honeypots, at least for this project, was not really great. The second thing we had always done, as an industry and me personally in my career, was analysis: reverse engineering and manually tracking these threats. This is also a very effective approach, especially reverse engineering and manual analysis, because you can actually tell what the attacker was thinking when he or she wrote that code or that piece of malware. But again, for the purpose of this project, it just cannot scale, in the sense that you

cannot get real-time vulnerability analysis data from manual reversing. It is a very effective tool for learning the nuts and bolts of a particular malware or exploit, but scaling it is an issue, and timing is an issue. Again, this is not a reverse engineering talk, and it is a fantastic tool that we have used successfully in the past, but for this particular problem we wanted to solve, real-time vulnerability alerting, it just was not going to work. So instead of going with a honeypot, reverse-engineering, manual-tracking type of solution, we said: let's go with something that a lot of

people are doing, which is collecting public information on attacks, exploits, and data leaks via various blogs, via various exploit databases, and even via things like Twitter and LinkedIn, and let's see if we can get better data that way than by going the honeypot route and trying to capture the actual attacks. So we said okay, let's give this a try. There is nothing very specific about the technology we used; these are just tools we were already familiar with. We created this system in AWS, and if you remember, we needed three different components. One is data

collection, so we wrote a bunch of Lambda functions for data collection, and we'll get into the details of what these Lambdas look like and what they do. For storage we used a really old-style PostgreSQL database, because the data we were looking at was not petabyte-scale data. I mean, there are maybe 150,000 or so unique vulnerabilities under the sun, so even if you multiply that by five or ten, you are not going to get a petabyte of data. Our data was relatively small, so a simple SQL database would suffice, and it did (a sketch of what such a minimal schema could look like is below).
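The talk gives no schema, so here is a minimal sketch of the kind of table such a system could use; the talk used PostgreSQL, but sqlite3 keeps the example self-contained, and all column names are invented:

```python
# Minimal, hypothetical schema for storing CVE observations from many sources.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE observation (
    cve_id   TEXT NOT NULL,   -- e.g. 'CVE-2017-5715' (Spectre)
    source   TEXT NOT NULL,   -- 'nvd', 'exploit-db', 'blog', 'vendor', ...
    seen_at  TEXT NOT NULL    -- ISO-8601 timestamp
);
CREATE INDEX idx_obs_cve ON observation (cve_id);
""")
conn.execute(
    "INSERT INTO observation VALUES (?, ?, ?)",
    ("CVE-2017-5715", "nvd", "2018-01-04T10:00:00Z"),
)
# ~150k unique CVEs times a handful of sources stays far below a petabyte,
# which is why a plain SQL database was enough.
print(conn.execute("SELECT COUNT(*) FROM observation").fetchone()[0])
```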

Then we wrote a bunch of data analytics functions which would process this data and give it weights, and so on, which we'll look at in a moment. The design of the system was like this: we had a bunch of data collection Lambdas, and each Lambda function collected data from a particular source. One of the Lambda functions, let's say, collected data from NVD, a very common source that I think most of you would already be using. One of the Lambda functions collected data from one of the exploit databases, free databases like Exploit-DB or some of the proprietary or paid exploit databases. One Lambda function, as I said, went through some of the security

researcher blogs that we have trusted for so many years. Some of the Lambda functions just browsed through vendor advisories and vendor feeds, the information that vendors like Microsoft and Red Hat come up with. And one of the Lambda functions was for Twitter, because whenever there is a high-profile vulnerability, before everyone starts talking about it, there are a few key folks who talk about it and tweet about it, so we said hey, let's not forget that data, let's scrape that data as well. All this data went into our database, which was then fed to the analytics Lambda (a hedged sketch of one collector is below).
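A hedged sketch of what one such data-collection Lambda could look like; the handler shape follows AWS Lambda's Python convention, but the feed URL, source name, and parsing here are simplified placeholders, not the talk's actual code:

```python
# Hypothetical "sensor" Lambda: poll one public source, extract CVE mentions,
# and hand them to the shared store.
import re
import urllib.request

CVE_RE = re.compile(r"CVE-\d{4}-\d{4,}")
FEED_URL = "https://example.com/security-feed"  # placeholder source URL

def handler(event, context):
    with urllib.request.urlopen(FEED_URL, timeout=10) as resp:
        body = resp.read().decode("utf-8", errors="replace")
    observations = [
        {"cve_id": cve, "source": "example-feed"}
        for cve in sorted(set(CVE_RE.findall(body)))
    ]
    # In the real system this would INSERT into the PostgreSQL database.
    return {"count": len(observations), "observations": observations}
```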

Now, the analytics Lambda: what it did was a very simple weight-based algorithm. This is not AI, this is not ML, I know; these days, if you are talking at a security conference and you are not doing AI or ML, it's almost unheard of. But we didn't really need that type of capability at the time, because our data was not like that: for AI and ML you really need a lot of data to learn from, to create your models from, and we just didn't have that type of data. What our analytics function did was simply assign weights to each of these sources and then calculate the

resultant score for that particular vulnerability or CVE. To give you an example, the weight for the Lambda function which got data from NVD was very low, and the reason is that we knew NVD is a data source for all vulnerabilities: just because a vulnerability is on NVD does not really mean that it's a high-priority vulnerability, a tsunami type of vulnerability, or that it can cause a seismic shift in your organization. So the weight given to data sources like NVD was really low. If a vulnerability was found on one of the exploit databases, then the weight for that would be a little higher, because now it's not just on NVD, on one

of your vulnerability databases, but it is also being exploited. If the data analytics did some analysis on the blogs we were looking at, and from that analysis found that a particular blog is talking about an exploit or a possible exploit for that vulnerability, then the weight for that was higher. In that part, some natural language processing and those types of algorithms were used, but initially we also gave them very low weight, because we were not really confident in how well they work. For example, given a blog post,

you are writing a computer program to find out what it is about: is it about a vulnerability, and is the blog post saying that this is a low-profile, medium-profile, or high-profile vulnerability? You can do those types of things, and we have done them, but initially we gave even those a fairly low weight. The highest weight was given to things like how many times you saw that same vulnerability in a given period of time, from different data sources and different places on the internet. That is essentially what this analytics engine did (a minimal sketch of the weighting idea is below), and then as its output the system created
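A hedged sketch of the weight-based scoring described here: each source gets a fixed weight (NVD low, exploit databases higher, blog/NLP signals low at first), and repeated sightings across sources within a time window add up. The exact weights are illustrative guesses, not the talk's real numbers:

```python
# Illustrative per-source weights and a simple windowed sum per CVE.
from collections import defaultdict

SOURCE_WEIGHTS = {
    "nvd": 1.0,          # presence on NVD alone says little about severity
    "exploit-db": 10.0,  # a public exploit is a much stronger signal
    "vendor": 5.0,
    "blog-nlp": 2.0,     # NLP judgments were trusted less initially
    "twitter": 3.0,
}

def score(observations, window_days=7, now_day=0):
    """observations: iterable of (cve_id, source, day). Returns cve -> score."""
    scores = defaultdict(float)
    for cve_id, source, day in observations:
        if now_day - day <= window_days:
            scores[cve_id] += SOURCE_WEIGHTS.get(source, 1.0)
    return dict(scores)

obs = [
    ("CVE-2017-5715", "nvd", 0),
    ("CVE-2017-5715", "exploit-db", 1),
    ("CVE-2017-5715", "twitter", 1),
    ("CVE-2018-0000", "nvd", 0),  # hypothetical low-noise CVE
]
print(score(obs, now_day=1))  # the Spectre-like CVE scores far above the other
```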

some graphs and charts that tell you what it thinks are the really seismic vulnerabilities, the really high-profile vulnerabilities, for that particular hour, that particular day, and so on. We ran this engine continuously and started watching the results; as a matter of fact, the engine has now been running for about two years. Let me give you an example of some of the data we collected in the first month. We started in December of 2017, about two years ago. We started the engine, we said okay, and then there were

some vacations, Christmas, we went home, and when we came back in January, this is the data we saw. On the x-axis you have the days (currently I've plotted the data by day), and on the y-axis you have the vulnerability intelligence quotient, which is just the number the algorithm created for that particular CVE, that particular vulnerability. The dots you see are individual vulnerabilities. In one month, which was a very short time, we saw that this particular Microsoft Malware Protection Engine remote code execution vulnerability came out on top, and the second one was the Palo Alto Networks PAN-OS remote code execution vulnerability

in the management interface. In our day jobs, on a day-to-day basis, we look at these vulnerabilities, create datasets for detecting them, and release those datasets the same day or the next day, as quickly as possible, so our customers can identify these vulnerabilities in their organizations. So we were pretty familiar with the landscape, and when all the engineers looked at the data, we said yes, if we had done this manually we would also have rated these two vulnerabilities on top. Now, vulnerability classification is as much an art as a science, so there could be many other remote code

execution vulnerabilities, many other vulnerabilities with graver impact, but then you have to take into consideration a lot of different aspects of the vulnerability. For example, if there is a vulnerability in the malware protection engine itself, the thing that is supposed to catch malware, and it belongs to a very widely deployed operating system, then you would say yes, I would put that on top. Anyway, that was the first month of data; we were happy with it but not very confident, because it was just an experiment. What we decided was to just keep the engine running and see what data it gives over the course of

time. These are just some of the parameters of the two vulnerabilities I discussed, taken from various open sources: their CVSS scores and things like that. By the way, in the data analytics part we did not use the CVSS sub-metrics for scoring vulnerabilities, and I will discuss why that was the case. Everyone knows about CVSS or has heard of it, right? It's essentially a scoring system where every vulnerability is scored; it is a fantastic system and we'll look at it a little more later. So after a month of data we said:

okay, data is trickling in and everything is fine. But in about the first five days of January 2018, the alarms started to go off. You can see the December data on the right-hand side, and in January the alarms just started going off: we started hitting vulnerabilities scoring five times more than anything we had ever seen. Anyone want to guess what these vulnerabilities could be, in January of 2018? I'm sorry, you said something? EternalBlue, that will come up. These were the Meltdown and Spectre vulnerabilities. If you remember, these were in the Intel CPU microprocessors, and since they were

microprocessor side-channel type attacks, everyone and anyone was affected. Cloud providers were affected even more, because they use Intel processors, and I don't think we need any introduction for these two vulnerabilities. When we looked at this in our meetings, it was all cool; it looked like the system was doing what it was supposed to do, because in the last ten years I could not remember any bigger vulnerability of this magnitude, affecting so many systems, with a really tsunami-type, seismic impact. So we were highly encouraged by the results, because usually we see these low-level

vulnerabilities, and then all of a sudden, with Meltdown and Spectre, the counts went up. When we talked about this internally with our security teams, we soon came to the realization that we needed an easy way to classify this. We cannot just hand them the y-axis, the vulnerability quotient, which could be something like 2,875. If a security person looks at that, they are going to say: what is this, compared to what? How do I know that 100 is good and 2,000 is bad? So we very quickly realized that raw numbers alone are

not going to cut it if we want this to be adopted, even internally. So we came up with a classification system: low, elevated, high, and critical. This classification was not based on a very complex formula or anything; it was simply ratios, and we just fine-tuned those ratios a bit. This is what we came up with at the time: the greens were the low priority, then the yellows, the orange (which is not looking very orange here), and then the red ones. Once we classified them into these different categories, it was just a lot easier for everyone in the company, and for some of the POC customers we had, to understand the data (a minimal sketch of such banding is below).
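A hedged sketch of the ratio-based banding: the talk only says the bands were simple, fine-tuned ratios over the raw quotient, so the cut points and baseline below are invented for illustration:

```python
# Illustrative mapping of the raw vulnerability quotient to the four bands.
def classify(quotient: float, baseline: float = 100.0) -> str:
    """Map a raw vulnerability quotient to low/elevated/high/critical."""
    ratio = quotient / baseline
    if ratio >= 20:
        return "critical"   # red: Meltdown/Spectre territory
    if ratio >= 10:
        return "high"       # orange
    if ratio >= 3:
        return "elevated"   # yellow
    return "low"            # green

for q in (80, 450, 1200, 2875):
    print(q, "->", classify(q))
```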

Now some interesting facts, and we are still in just the first month of this. In that same month, if you look at the CVSS scores, on the left-hand side are the CVSS scores of the various vulnerabilities found in exactly that month. You see that there were about 96 vulnerabilities, at the end there, with a score between nine and ten, the highest-priority vulnerabilities according to CVSS. There were 96 such vulnerabilities that month, but our alerting system flagged four, and that was actually very encouraging, because as security practitioners you would know that essentially the entire industry and

security practitioners are flooded with this tsunami of vulnerabilities where everything has a security rating of 10. If everything has a score of 10, how do you really tell what is a seismic type of vulnerability and what is a vulnerability that just gets a score of 10 because it checks all the boxes: it is remote code execution, it can be accessed over the network, access complexity is low, and things like that? Again, I'm not against CVSS; in fact, we have used CVSS for many years, and at one of my previous employers we were actively involved in creating the CVSS standard, so it is

definitely very important. If you remember the days before CVSS, there was big chaos in the industry, because every vendor would rate a particular vulnerability differently, and CVSS came in and really calmed that down: there was one central authority, MITRE, which gave scores, and they did, and still keep doing, very good work. But one thing you see, just because of the nature of CVSS, is that it is strictly a formula-based system, so it does not take into consideration a lot of things, like whether the vulnerability is currently being exploited in the wild, or what

security researchers think about this vulnerability, or what the likelihood is of this vulnerability becoming big or really getting exploited. Some vendors, like Microsoft on every Patch Tuesday, actually give very good guidance on whether they expect a vulnerability to be exploited in the next 30 days, which is their sort of educated guess. All those things are not present in CVSS, as a result of which, in just this one month of vulnerabilities, you had about 96 vulnerabilities in the highest score bracket, while our system flagged just four, which were actually just these two vulnerabilities, Meltdown

and Spectre; each of them had two CVEs, so that became four. So, let's keep collecting data; in any such project you want to make sure you have good data. In the next six months we collected more data, and nothing as seismic as Spectre and Meltdown happened, which was actually good, because we do not want to alert people, or ourselves, saying hey, every day there is some huge vulnerability. In the next six months we didn't find any vulnerability that could crack that red barrier and go into the league of Spectre and Meltdown, but there were some vulnerabilities in

orange, and I have highlighted one or two. One was the Drupal code execution vulnerability, and another was a pretty big DHCP client script code execution vulnerability. Again, since we look at these vulnerabilities day in and day out, one of the things we did was hold weekly reviews of what the system produced with our engineers, asking: hey, if you had to manually classify this vulnerability, would you really do it this way? We were using that manual analysis from our security engineers to see whether the system was producing correct, or at least expected, results. So that's what happened in six months. Again,

if you had been using CVSS, there were more than a thousand vulnerabilities with a CVSS score of 10, and I'm not even looking at the greens, oranges, and yellows; I'm just looking at the last bar in the CVSS diagram, the vulnerabilities with the highest scores, between nine and ten. There were about a thousand of those, and we were still at four, still Spectre and Meltdown. That again was a very good thing, because if you want, you can go down and look at the nineteen high-priority ones, or you can go down

further and look at the 995 yellow or elevated ones. But most of the challenge the industry is having is prioritization, and I think this actually tells a very good story. So we said okay, let's keep it running: one year of data. In one year we did see some more vulnerabilities that started moving toward the likes of Meltdown or Spectre. One of these, you may remember, was an SSH vulnerability where someone could connect to your SSH server without any credentials; if you don't recall it, the CVEs are at the end of the

presentation, so do look it up. That one got a really high score. Again, we were always doing manual analysis to see if the system was producing correct results, and all the security engineers said: yes, a vulnerability allowing remote login without credentials, that's a big event. At this point we were pretty confident; we expanded to some of our POC customers and started talking about this, and we also kept comparing against CVSS. By now, if you had done your triage based on CVSS, you would have had to go through about 1,500 vulnerabilities which you'd call really, really critical, and

it does not work when you say thousands of vulnerabilities are the most critical vulnerabilities. Nine or ten vulnerabilities, that works: the system we developed produced about ten vulnerabilities in a year that were really critical, and that is manageable. If you say I have 1,500 vulnerabilities with a score between 9 and 10, and then another 2,000 vulnerabilities between 8 and 9, then in real life, and I'm sure you can relate to this, it just doesn't scale. Fast forward: there's BlueKeep. What we saw was the Microsoft BlueKeep RDP vulnerability, which was able to crack into the red zone at about the 1.5-year mark, and then we also had

the WhatsApp vulnerability, where you could send a packet and cause remote code execution in the WhatsApp messenger; that also cracked into the red. So it all looked pretty good. Again we did the comparison with CVSS, same thing. And this is essentially all the data we have collected in the last two years or so. Just a few weeks ago there was a new vulnerability; this is a few weeks old now, so I'm assuming that as security professionals all of you have heard about it. This is the vulnerability that was found by the NSA, who reported it to Microsoft; it was a vulnerability in the way certificates are processed. Well, there are two aspects to it;

let me first tell you why this vulnerability is now the highest-ranking vulnerability in our two-year database. If you look to the left, even the Meltdown and Spectre vulnerabilities are really low compared to this. And secondly, I would like to talk about the change we made to the algorithm that scores this: you see that the Spectre and Meltdown vulnerabilities are now in orange and no longer in red, because now we have a new king, a new red, someone who has taken that top spot. So we will talk about these two things. The core of the vulnerability was in

the elliptic curve algorithm. Essentially, your certificates are signed with various algorithms, RSA or elliptic curve or something like that, and the vulnerability was in the implementation of the elliptic curves. What Windows does is, when it gets a certificate, it stores it in its cache, and when it looks at another certificate it compares the signature to the signature of what it has in the cache. But the implementation flaw was that it did not check what curve was being used: it is not enough to see that the signatures match, you also have to check which curve parameters were used. And it's nothing

against Windows; I can point you to many vulnerabilities like this in every operating system. But even after 25 years, Windows had a flaw where it did not compare the curve parameters, because of which anyone could create a fraudulent certificate for google.com, bankofamerica.com, anything, and when you open that site in your browser it would just appear as a good certificate; essentially, you could be fooled into believing that it is a valid certificate. When you hear about it, you can immediately tell: oh my god, wow, this strikes at the fundamentals of security, TLS and public-key cryptography (a hedged sketch of the flawed check is below).
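A hedged, simplified sketch of this class of bug as the talk describes it: a trust check that matches certificates on the key alone, without also comparing which curve parameters were used. This is an illustration of the logic error, not actual Windows CryptoAPI code, and all names here are invented:

```python
# Toy model of a certificate trust cache with and without the curve check.
from dataclasses import dataclass

@dataclass(frozen=True)
class CertKey:
    public_key: bytes
    curve_params: str  # e.g. "P-384", or attacker-chosen explicit parameters

trusted_cache = {
    CertKey(b"root-key-point", "P-384"),
}

def is_trusted_buggy(cert: CertKey) -> bool:
    # Flawed: compares only the public key, ignoring curve parameters, so a
    # forged cert with the same key point but crafted parameters passes.
    return any(cert.public_key == t.public_key for t in trusted_cache)

def is_trusted_fixed(cert: CertKey) -> bool:
    # Fixed: the full (key, curve parameters) pair must match.
    return cert in trusted_cache

forged = CertKey(b"root-key-point", "attacker-explicit-params")
print(is_trusted_buggy(forged))  # True  - forgery accepted
print(is_trusted_fixed(forged))  # False - forgery rejected
```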

It's obviously not a flaw in the algorithm but in the implementation of the algorithm. So once again, at this point we were very confident that this sort of art-plus-science system we had built was giving us pretty good results, alerting us only on the highest-criticality, highest-priority, really seismic vulnerabilities, and letting us ignore a lot of the low-level noise. Now, the second thing I wanted to talk about is that we changed our algorithm for how vulnerabilities are scored. So now if something scored, let's say, 2,800, which is the same as Meltdown

and Spectre, we would only give that an orange and not a red. We are still debating how to change it. We need to change the scoring because of the way the system works: it collects data from a lot of sources, and as you know there is more and more data being generated about vulnerabilities, so we expect to see higher and higher raw scores going forward. This is the first time in two years that we changed the scoring mechanism. What we are thinking is that instead of doing this in one go, we need to adjust the scoring slowly as we go, because as the system collects more and more data, the raw scores keep creeping higher and higher (a sketch of one way to compensate for that is below).
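The talk says only that the rescoring is being adjusted gradually; the normalization scheme below is an illustrative assumption of one way to keep the bands meaningful as the volume of public vulnerability chatter grows, not the actual fix:

```python
# Normalize each CVE's raw quotient by a rolling baseline of recent scores,
# so growing data volume does not silently inflate every vulnerability to red.
from statistics import median

def normalized_quotient(raw: float, recent_raw_scores: list[float]) -> float:
    """Express a score as a multiple of the recent median score."""
    baseline = median(recent_raw_scores) if recent_raw_scores else 1.0
    return raw / baseline

recent = [40.0, 55.0, 60.0, 75.0, 90.0]    # typical low-level noise
print(normalized_quotient(2800.0, recent))  # a seismic outlier stays an outlier
```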

So that's the second thing I wanted to point out. If anything, just take a screenshot of this slide; I'm sure the slides are available for download as well. If you are a practitioner, go back and make sure you don't have these top-10 bad boys of the last two years, which at least according to the system we built had the highest seismic level; they could really cause a tsunami, a flood of viruses, worms, or really bad things in your organization.

As for our future work: one, we are definitely looking at how to deliver this data to the community, how to make this an open-source project and expose it to the community; that is, I think, the top priority right now as far as the project itself is concerned. We are also thinking about classification: not everyone is interested in every vulnerability. If you are doing, let's say, application security at company ABC, then you may not be interested in a Kubernetes vulnerability or some infrastructure vulnerability; you may have only a dedicated area of focus. That is something we are thinking of adding, so that you can just

subscribe to the vulnerabilities, or types of vulnerabilities, that you would like to see, and the system can give you the same results but filtered based on your preferences. Then, more correlation with targeted attacks, malware, exploits, and things like that; we already have something in place, but we want to expand it. And also some root cause analysis and classification, so that as an industry you can see that, okay, most of my vulnerabilities of this type were caused by this, and then as an industry we can improve on that root cause. As I mentioned earlier, we do use some AI, but

I would say the bulk of the system does not really use ML or AI. There are now so many ready-to-use libraries and services in AWS, GCP, and Azure that it has come to a point where we could theoretically train an algorithm to read tons and tons of blog posts every day, automatically figure out the meaning of each blog post, and arrive at a severity. Then we can compare that with our existing weight-based severity mechanism and get some interesting research out of it, saying: okay, it is now possible for us to write a crawler that can read through all the blogs, all the research articles, all the papers being published, and really extract the meaning of those papers (a toy sketch of that idea is below).
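The talk imagines training an ML model on cloud NLP tooling; as a stand-in, this toy keyword scorer only shows the shape of "read a post, emit a severity" that such a model would slot into, with cue phrases and weights invented for illustration:

```python
# Toy severity-from-text scorer standing in for the future ML/NLP component.
SEVERITY_CUES = {
    "remote code execution": 5, "exploited in the wild": 5,
    "proof of concept": 3, "privilege escalation": 3,
    "denial of service": 2, "patch available": 1,
}

def severity_from_text(post: str) -> int:
    """Sum the weights of every cue phrase found in the post."""
    text = post.lower()
    return sum(w for cue, w in SEVERITY_CUES.items() if cue in text)

post = "Researchers publish proof of concept for remote code execution bug."
print(severity_from_text(post))  # -> 8 (two cues matched: 3 + 5)
```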

That is an interesting thing we would like to do going forward. So that's it for the talk. I think we have maybe one or two minutes, or maybe not; I think we do have a couple of minutes, okay. I apologize, the mic is not set up, so if you have a question just raise your hand high, keep it up, and I'll bring you the microphone.

All right, okay. Thank you so much, Amol. Thank you.