← All talks

Educating Your Guesses: How To Quantify Risk and Uncertainty

BSides Knoxville · 2023 · 37:39 · 178 views · Published 2023-05 · Watch on YouTube ↗
Category: Technical
Style: Talk
About this talk
Most cybersecurity risk assessments rely on qualitative matrices that lead to inaccurate assessments and bad decision-making. This talk demonstrates how to apply quantitative risk analysis and Monte Carlo simulation to model uncertain events, calculate financial risk in dollars, and communicate cyber risk to business stakeholders in terms they understand.
Original YouTube description:
At its core, cybersecurity is all about risk. We need to understand, report, and mitigate our risk. However, the industry adopted methods for analyzing risk lead to inaccurate assessments, invalid math, and ultimately bad decision making and spending. I will show you why, and how to fix it. Asking for budget and justifying spend in cybersecurity departments can be a difficult task due to limited data and high uncertainty of future events. This talk will dive into quantitative risk analysis as it relates to cybersecurity - how to model uncertain events and understand financial risk. Attendees will see a first hand demonstration of how quantitative modeling can be used to communicate risk and understand ROI. Attendees will walk away with the tools needed to present cyber risk as a dollar amount that can be easily understood by other business decision makers at their company.
Transcript [en]

for our next talk here we have Sarah Anstey is that is that right Anstey yeah uh is presenting educating your guesses how to quantify risk and uncertainty cool my name is Sarah and I work for Nova Coast as their director of data analytics so I probably have like a relatively different background than a lot of people at this conference or in the room because my background and training is not actually in cyber security or I.T or anything like that um I'm actually more of a statistician and data scientist by trade um who just so happened to get into cyber security about six years ago now and have been trying to take a lot of statistical methodologies and ways in

the math and data world that we do things into cyber security and kind of applying them to some of the problems and challenges that we have in cyber and I.T um so today I'm going to keep it like pretty casual and stuff but I'm going to be talking about risk and uncertainty and how we can model that and understand that kind of go over some of the ways that it's currently or historically been done in the industry what I think is wrong with them as a statistician and then different models that we can use to actually quantify risk so I think before we really hop into any of that we need to talk about you know

what is risk um and really defining that before we can try to quantify it so risk in my opinion is anything that's unknown um so whether that's a risk in cyber security or even a risk in life it's any time where you don't know what the outcome of something is going to be so think about taking a risk in real life right why is it a risk it's probably because there's some uncertainty around it you don't exactly know how things are going to end up so it's the same in cyber security a lot of the times the different risks we're looking at are the risk of getting breached you know the risk of ransomware or something like that or a

vulnerability being exploited all of those things are things that you know we don't know if they're going to happen to us if they do we don't know when and to what severity so it's really all about uncertainty and unknown which I think begs the question if risk inherently means we don't know what's going to happen and it's all about the unknown how can we possibly measure it and how can we quantify it right it's kind of like it seems counterintuitive that you would be able to do that if inherently it's unknown and so before I get into some of the methodologies of how I think that we can do that and how we can use you know

stats and Mathematics to do it I want to talk a little bit about the ways that it's currently and historically been done in cyber security so everyone is probably pretty familiar with this this is like your typical risk Matrix right so whenever you're rating something on a scale of like low medium and high maybe you're doing one to three or a one to five scale or something this is pretty common in cyber security um so you can see we're looking at things like what's the impact going to be versus the likelihood and you can rate things low medium to High um sometimes I see like a lot of like one to three scores or things like that

this is what we call a qualitative method for understanding uncertainty or risk right so say someone comes to you and they say hey what's the risk of this thing happening and you say medium that would be a qualitative answer it's not quantitative in nature and so these are what's used a lot of the time and the reasoning that I most often hear for why we're rating things on these types of scales is that because it's unknown we don't have much data right we don't have much input data so we can't possibly say with a hundred percent certainty that there's a 12.4% chance of this exploit happening in the next five months you know we don't

know right it's risk we don't have that input data but we think we're relatively well protected so we'll say it's like a low a low chance and people feel more comfortable saying that because they feel like they can't get exact and precise in their measurements but I want to talk a little bit about where this starts to fall apart so let's say I'm you know a data person and I'm trying to understand for our organization what's the risk that we get breached in the next year right really broad question kind of hard to answer and so I've got our CISO and our CTO and I'm gonna go ask them both the same question on a scale of one to five what do you

think the chance that we get breached in the next year is so I go and I ask the CISO and then she says like you know we've made all of these great improvements in cyber security and we're doing so well and we've got all these tools and funding and whatever um we're really doing pretty good there's probably only like a three percent chance we get breached in the next year they're kind of you know thinking in their head so they say that's a one on a scale of one to five and they tell me we think we're at a one okay so then I go to the CTO I ask the same question and maybe the CTO is like

yeah the CISO is an idiot and we have no idea what we're doing and who knows right and they say you know in their head they're like oh there's probably like a 17 maybe 18% chance we get breached in the next year so they'll rate that a one on a scale of one to five just like you could see here and so now both the CISO and the CTO have agreed there's a one on a scale of one to five chance of this risk occurring of getting breached in the next year but all of us in cyber security know there's actually a huge difference between a three percent and you know a 19% chance you get breached in the next

year that's a really really significant difference but because of the scale that I asked them to use when giving that rating you know we're not seeing that difference we're kind of getting rid of precision and so I want to show another example of how these things really start to fall apart so now let's say we've got two risks risk A and risk B risk A we say has a 50% likelihood and a 9 million dollar impact risk B has a 60% likelihood and a 2 million dollar impact so using this risk matrix it's like I just said there's nothing crazy it's a common risk matrix we would rate risk A as a medium and risk B as a high by just

using the scales but if you do a very quick calculation for expected loss which would be impact times likelihood you can see risk a has a 4.5 million dollar expected loss and risk B only has a 1.2 million dollar expected loss so now not only are we like losing some clarity and precision because of the scales that we're using but we're actually understanding our risk worse right we have come to the wrong decision using a risk Matrix like that about risk a versus risk B and at a high level this is a concept we refer to as analysis placebo analysis Placebo basically is just a broad term for any time that the measurement or the scale that you're

using to try to understand data or you know understand something statistically is actually giving you either no measurable information or giving you a worse understanding of your risk just because of the actual methodology you're using to assess it which we can see happen right here right because we're using a qualitative risk matrix approach like this we're actually getting a worse understanding of our risk and at a lower level too just to say that the actual type of analysis placebo at play here is called range compression so basically we're compressing quantitative data into qualitative ranges and then what happens is people try to do mathematical operations on it right so we rate things

a one two or three let's say and then people try to say well we have a one and a two so the average is 1.5 but that's the exact same as saying we've got a low and a medium so we'll call it a low and a half right because you can replace one with low and three with high and so forth anytime you can take those numbers right and just replace them easily with words it's a really good indication that you're probably not using a quantitative scale you could do the same thing on a you know one to five scale you could replace it all with improbable occasional etc so if these are some of the problems

with the way that we're currently understanding risk at a high level in cyber security how can we fix it well this kind of begs another interesting question because if we go back to what I said at the beginning that risk is anything that's inherently unknown then anytime we're trying to measure or understand our risk to an extent it's going to be a guess because we can't know for sure what's going to happen in the future we're always estimating or guessing and it kind of begs the question is it okay to guess right and then we move into this question of are you a good guesser right and is it possible to make yourself a better

guesser so before we get into some of the quantitative methodology I want to run through a little exercise to show everyone you know of whether or not they're inherently good guessers and see if we can learn how to be a better guesser so um I like to think that um you know I can read people pretty well and you know stuff like that and I can just tell everyone in this room is a huge fan of reality TV I can just yeah I can just tell um and so I want to talk about how many seasons of The Bachelor have there been and do not Google it um I love reality television let me explain

um because I know that's going to you know invalidate me if you have never seen The Bachelor or The Bachelorette it's on for two hours every Monday get yourself a bottle of wine drink the whole thing watch the show by the end of it you will feel so much better about your life I mean seriously it's really therapeutic right okay but so how many seasons of The Bachelor have there been why did I pick this question let's think about it a lot of you probably if I'm going by stereotypes and cyber security don't watch reality television or the bachelor right um so you probably don't know off the top of your head what the answer to this

question is but you probably you know maybe you've heard of The Bachelor or maybe someone you know watches it maybe you have like a little bit of information or history with the show right so I want everyone actually real quick so everyone in their head right now I want you to think about what you think the answer to this question is but don't just think of a number think of a 90% confidence interval range so if I can take us all back to college statistics real quick a confidence interval right a 90% confidence interval you're going to give a lower bound and an upper bound to what you think the answer to this question is that you're

90% sure the right answer would fall in that range so about a 10% chance you know it's outside of that but you're 90% confident that it's between these two numbers kind of think about that in your head right now now let's say we're gonna play a game unfortunately b-sides didn't give me the budget to actually give you guys a thousand dollars so hypothetically let's say I was giving you the chance to win a thousand dollars okay I'm gonna give you two options for the way to potentially win the money option number one if the correct answer to that question is within the confidence interval in your head you win a thousand dollars if not you don't

option number two spin the wheel if it lands in green you win a thousand dollars if it doesn't you don't okay so think about instinct which one would you choose right it's not a trick question it's just like instinct which one first popped into your head that you would prefer so now let's all go to menti.com real quick this will be the only time I ask for audience participation but it'll be interesting to see what people answer we'll go to this code right this is going to be a quick live poll um it's not going to make you download an app you don't have to sign in

all right and I'm gonna just pull up Menti and the code's still up there at the top if you need it but what was that instinct which one would you choose I want to see what the audience thinks I'd also just like to point out a room full of cyber security professionals and you all just go to whatever link I tell you I think we might need to reevaluate so would you choose a confidence interval would you spin the wheel or do you have no preference you would do either and it's not a trick question it was your gut it's your instinct right pretty even interesting so about half would choose their confidence interval a

half would spin the wheel most people had a preference okay most people had an instinct okay interesting so I won't leave you guys hanging the correct answer to the question is 27 seasons and I should have said this before this is just The Bachelor it does not include The Bachelorette Bachelor Pad Bachelor in Paradise Bachelor Winter Games okay just The Bachelor 27 seasons it was 26 I had to change my slides this morning because one season just ended but Zach was like the most boring Bachelor of all time it's not worth watching he is terrible terrible not worth it save yourself okay so real quick one more time let's go back to Menti here

hold on things are going crazy present okay was that answer within your original confidence interval yes or no be honest let's vote let's see how we did so was 27 contained within your original confidence interval
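The poll above is really a calibration measurement: count how many of the stated 90% intervals actually contain the true answer. A minimal sketch of that scoring in Python (the interval guesses here are hypothetical stand-ins, not the room's real answers):

```python
# Calibration check for the 90% confidence interval exercise (a sketch):
# each tuple is one hypothetical audience member's (lower, upper) guess
# for the number of seasons of The Bachelor.
guesses = [(20, 30), (5, 15), (10, 25), (25, 40), (1, 50), (15, 22)]
TRUE_ANSWER = 27

# Count how many intervals contain the true answer
hits = sum(lo <= TRUE_ANSWER <= hi for lo, hi in guesses)
coverage = hits / len(guesses)

# A well-calibrated group should cover the truth ~90% of the time;
# much less than that means the group is overconfident.
print(f"coverage: {coverage:.0%}")  # prints "coverage: 50%"
```

With real poll data you would just swap in the collected intervals; the gap between the printed coverage and 90% is the room's overconfidence.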

promise there's a point to all this by the way other than it just being fun it relates back to quantification okay so about 25/75 30/70-ish majority though no that answer wasn't within your original confidence interval so here's what's interesting about these results so about 30% of people had it in their confidence interval about 70% didn't if we were let's say a perfectly calibrated room which means if everyone in this room was a really good guesser we would expect 90% of people to have that answer in their confidence interval and 10% of people not to right because what did I ask you I asked for a 90% confidence interval to the answer to

this question which means 90% of all the intervals given if you gave your true 90% confidence interval should have had the right answer even if you didn't know the answer what you guys really gave on average as an audience was your 30% confidence interval right on average that's actually what you all gave as your confidence intervals in the audience which means that you guys are what 60 percent overconfident in general um it's not just because you guys are in cyber security that you're all overconfident it's actually human nature there have been so many psychological studies done to prove that when people inherently have to estimate something they are almost always incredibly overconfident and this is the

reason that when your boss asks you how long a project is going to take and you say four weeks it ends up taking five months right like how often does that happen it's because we're not inherently good estimators no humans are but the really cool thing is we can actually learn to be better there's a few psychological tricks so let's go back to this let's go back to the wheel inherently if right away your thought was that you wanted to spin the wheel that means that you thought there was a better chance of winning a thousand dollars when spinning the wheel than going with your confidence interval but if you truly gave your 90% confidence

interval you should have had no preference between the two because both in your mind should have represented a 90% chance of winning a thousand dollars so if you wanted to spin the wheel then what you should do is go back and widen your confidence interval right make both ends wider until you have no preference between the two and then if you inherently wanted to choose your confidence interval you should shrink your interval until you have no preference between the two and that's called the equivalent bets method it's one of a bunch of psychological ways that people can actually become better estimators but it basically just proves that whenever you put potential monetary winnings or monetary loss

on the line even if it's fake in this example you can actually train yourself to be a better estimator by putting you know money at risk so just a method that you guys should you know even if even if you don't do it in cyber security even if you're doing it to maybe just estimate how long a project takes try doing that next time um it's a good way to kind of tune your brain a little bit to be a better estimator and like I said there's a bunch of other really cool methodologies to do that but moving on so why is all that relevant right um well it's relevant because now I want to talk through a simple one-to-one

substitution of the risk Matrix that I showed before but one that is statistically valid and accurate for uncertain data and uncertain endpoints so basically what I mean by this is that the methodology I'm going to show it doesn't require any tools it doesn't require any additional input data than you would have if you were just making a risk Matrix um and it will give you know similar types of results but in a more statistically accurate way to assess risk so what we're going to do is called a Monte Carlo simulation a lot of you are probably familiar with it um it's basically just a broad term for a statistical methodology where you're replicating something like 10 000 times

right so it takes input data in the form of ranges confidence intervals I told you it would all loop back around um instead of precise numbers so if you think of like a normal equation right you know x equals five or whatever there's an exact number well a Monte Carlo simulation takes ranges as the input and why is that good it's because we're looking at risk so there's a lot of unknowns and uncertainty right so it's a lot more accurate if we can give a range of things that we think are going to happen instead of pinpointing ourselves into one exact number and then do thousands and thousands of replications and look at the trends and the averages

and things like that so another last time that I'm going to make you do college statistics I promise but if we look at the replications let's say we're doing 10 000 replications well those variables need to be pulled from some type of underlying distribution right like let's say you take a random number in Excel there needs to be some type of distribution that you're pulling a random number from you know maybe the distribution is from one to five and it's all uniformly distributed which means there's the exact same chance of you pulling 1.2 as there is 4.3 or something like that maybe it's based off a normal distribution which you probably all remember from college stats again right
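The difference between these underlying distributions is easy to see by actually sampling them. A quick sketch with Python's standard library (the parameters here are illustrative, not from the talk):

```python
import random

random.seed(42)
N = 10_000

# Three candidate underlying distributions for a Monte Carlo draw:
uniform_draws   = [random.uniform(1, 5) for _ in range(N)]         # every value in [1, 5] equally likely
normal_draws    = [random.gauss(0, 1) for _ in range(N)]           # symmetric bell curve, can go negative
lognormal_draws = [random.lognormvariate(0, 1) for _ in range(N)]  # skewed, never at or below zero

# A log-normal sample is always positive, which matches losses:
# risk can never be negative.
assert min(lognormal_draws) > 0
# A normal sample has no such guarantee.
assert min(normal_draws) < 0

# The long right tail pulls the mean well above the median:
mean = sum(lognormal_draws) / N
median = sorted(lognormal_draws)[N // 2]
print(f"lognormal mean {mean:.2f} vs median {median:.2f}")  # mean > median for a long-tailed distribution
```

The mean-above-median gap is exactly the "most incidents are small, a few are huge" shape described next for cyber losses.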

so your normal distribution it's centered at zero standard deviation of one even on both sides it's in red there pretty simple this models a lot of real world things but actually a lot of things in cyber security don't fall well on this model but if we look at a log normal distribution which is in blue you can see the differences a lot of the density of the curve is closer to zero right is closer to the left hand side and then it has this long tail on the outside the other important thing here to notice is that it never goes below zero so what do these translate to in the real world well one it shows that our

risk can never be negative right as much as we might want there to be a negative five percent chance of getting breached in the next year that can never happen so we don't want to choose you know a distribution that would allow for that and then the other thing we see here like I said is the long tail this represents that when we do get breached right a lot of the times it's not that five million dollar reputation-loss hits-the-news data breach sometimes it's having to remediate a few laptops or fix something or you know do some patching or whatever like I said I'm not as cyber security heavy as you guys

um but it can happen right those five million dollar data breaches can happen we want to account for that but we're saying it's much more rare it's actually a lot less likely than just a typical event that requires a few hours of remediation so now that we've talked about the underlying distributions let's talk about how to actually do a Monte Carlo simulation and how to really switch from a risk Matrix to um quantification the very first step is just to Define your risks which just means what do I want to look at with this methodology the really cool thing is that this is just a really high level framework so I've been using like the risk of getting

breached or the risk of a data breach a lot but you can do this with any risk so like I've done ones for companies some kind of interesting ones we just wrapped up one where they had a bunch of Cisco switches that were going end of life and they wanted to see if they were taking on more risk by keeping the end of life switches versus it was going to be like 1.2 million dollars to replace them all in hardware costs right so are they taking on more than 1.2 million dollars worth of risk because if not it's actually a higher ROI to not replace the end of life switches or I did another one that was really

interesting that looked at um the risk of allowing personal email on company owned devices right how much additional risk was a financial institution taking on for allowing employees to go to their Gmail on a company owned device so you define your risk whatever thing that you want to look at the next thing we have to do is Define a Time range because it doesn't actually make sense to say the risk of getting breached you would have to say something like the risk of getting breached in the next year right you always have to define a Time range for the things that you're looking at happening because we're looking into the future then the next thing we're going to do is

go back to those confidence intervals right so this is where we're going to figure out for that particular risk what are all the different things that can affect it we'll call those your input variables right so if we're looking at the risk of phishing maybe we've run click uh you know phishing scenarios simulations in our environment and we know the average click rate maybe we know how many phishing emails are coming in a day or something like that we take in all these different variables that can affect the risk that we're looking at but again instead of just giving it a number we're going to assign a confidence interval because we don't know for a fact that 50 phishing emails

are going to come in tomorrow but we do know that 90% of the time on a given day there's between 10 and 8 000 phishing emails that come in per day right so we're going to use those confidence intervals and then so I have here repeat with multiple experts what that basically means is if we don't have any data in our environment so say we don't have click data if we're looking at phishing or we don't have some vulnerability data for some reason we can use subject matter expert estimations which would be all of you guys in the audience the people at the company actually working on the cyber security environment every day who have seen

things and can kind of say hey on average this is what I see and you know use your confidence intervals and that's why we need to learn to train ourselves to be better guessers because if we have subject matter experts who are what we call highly calibrated meaning they've gone through tests like the equivalent bets method to know that when they say we have a 90% confidence interval it's actually a 90% confidence interval and not a 30% we can actually get a lot of good information from just subject matter experts even if you don't have hard and fast data and then we're going to run our Monte Carlo simulation and take averages which for this part I'm actually going to just

show you guys quickly how to do this um I'm just going to show it in Excel because I wanted to make it approachable you don't have to buy a software for this or anything you can literally do it in Excel like I said a one-to-one substitution I'm not gonna you know make you get anything new so in this scenario we'll say we're looking at a company in the technology industry with around a thousand users and about 10 million records and they want to see if they should purchase a phishing detection and response software something like a Proofpoint or something right so they just want to determine is it worth the investment because how much risk are we

taking on now and how much risk would we be taking on or how much would we reduce our risk if we did buy and implement this software to a high level of efficacy so this is where we look at all of our input variables right so these are just some example ones but you can see we look at things like well what's the business impact of a data breach which means if a breach happens how much on average does it typically cost the company you might say this is like really hard numbers to get but we actually pull a lot of these from if you think like the Verizon data breach report or the IBM cost of a breach

report you can actually get a lot of this data free online from just looking at the company's segment you know and the company size and then we want to look at things like number of users that click on a phish in a 12-month period again this might come from phishing simulations that you've run and then we also want to see you know questions like if we do get breached how many hours does it typically take to respond remediate and recover if someone does click on a link you know what's the hourly wage of that person doing the remediation because that's cost that we want to factor in you know how much user productivity is

lost during that remediation time and then we want to look at if we were to buy you know let's say it's that proof point or whatever it is what's the estimated reduction in successful phishing attacks right and then how much does it actually cost to buy that software in terms of Licensing and how much to implement and maintain that software working at a high level of efficacy those numbers are pretty easy to get because you know we can usually just pull licensing quotes and things like that so you'll see we have one blank here because I wanted to show a live demonstration of how this would work so right now you see on the right hand side in my

results here everything's you know broken and stuff because we've got one empty input variable and that empty input variable is the number of users that click on a phish in the 12-month period so we're going to go back to Mentimeter one last time because I want us to now put our better guessing into practice here so let me see if I can get this to work hold on I'm sorry it's all zoomed in

hmm

okay it's kind of cutting off so sorry about that but I want you to think about your 90% confidence interval to the number of users that click on a phish in a 12-month period for a thousand person technology company so maybe you just use you know from your company how many people you think and if your company's five thousand you divide it by five but give me a lower bound and an upper bound for a thousand person company how many people within 12 months do you think would click on a phishing email right use that equivalent bets method okay make sure we're well calibrated and give your actual 90% confidence interval and

remember we're very overconfident so you probably want to give a wider range than you would initially think and then I'll take whatever numbers we come up with and we'll plug them into the model and we can kind of see how we get results
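What the spreadsheet does at this point can be sketched in a few lines of code. This is a hedged reconstruction, not the actual model from the talk: the click interval is the audience's 140 to 570, but the cost-per-phish interval, the 80% reduction, and the $50,000 tool cost are made-up stand-ins, and each 90% confidence interval is turned into log-normal parameters by assuming it spans about ±1.645 standard deviations in log space:

```python
import math
import random

def lognormal_from_ci(lo, hi):
    """Turn a 90% confidence interval (lo, hi) into log-normal parameters.
    A 90% interval spans roughly +/-1.645 standard deviations in log space."""
    mu = (math.log(lo) + math.log(hi)) / 2
    sigma = (math.log(hi) - math.log(lo)) / (2 * 1.645)
    return mu, sigma

random.seed(1)
N = 10_000  # Monte Carlo replications

# Illustrative 90% CIs: clicks is the audience's interval; the cost interval
# and the figures below are hypothetical inputs for this sketch.
clicks_mu, clicks_sigma = lognormal_from_ci(140, 570)     # users clicking a phish per year
cost_mu, cost_sigma     = lognormal_from_ci(200, 20_000)  # dollars per successful phish

REDUCTION = 0.80           # assumed cut in successful phishes from the new tool
ANNUAL_TOOL_COST = 50_000  # assumed licensing + upkeep, dollars per year

inherent_losses = []  # annual loss before the tool ("inherent risk")
residual_losses = []  # annual loss after the tool ("residual risk")
for _ in range(N):
    clicks = random.lognormvariate(clicks_mu, clicks_sigma)
    cost_per_click = random.lognormvariate(cost_mu, cost_sigma)
    loss = clicks * cost_per_click
    inherent_losses.append(loss)
    residual_losses.append(loss * (1 - REDUCTION))

inherent = sum(inherent_losses) / N   # average annual cost before
residual = sum(residual_losses) / N   # average annual cost after
roi_multiplier = (inherent - residual) / ANNUAL_TOOL_COST

print(f"inherent ${inherent:,.0f}  residual ${residual:,.0f}  ROI {roi_multiplier:.1f}x")

# One point on the loss exceedance curve: chance of losing $560k or more in a year
p_exceed = sum(loss >= 560_000 for loss in inherent_losses) / N
print(f"P(loss >= $560k) = {p_exceed:.1%}")
```

The same loop gives every output discussed next: averaging the replications yields the before/after annual cost, bucketing `inherent_losses` gives the simulated loss histogram, and sweeping the threshold in the last step traces out the full loss exceedance curve.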

okay so looks like you guys are estimating between what an 11 and 56 percent click rate all right so we'll use 140 and 570 yep all right so we're going to come in here we'll plug in 140 and 570 and here we go we've got some results so let's decompose what you actually get from using this type of model the first thing you're going to get is what we call your inherent risk which is your average annual cost before before just meaning like your baseline so if you were not to buy that product or do whatever initiative you're thinking of how much risk are you currently at due to the specific risk we're looking

at so here we've got a little over 282 000 and we want to get everything in terms of dollar amounts because at the end of the day we need to report this to the CEO or the board or people who don't understand you know what it means to have 10 000 critical vulnerabilities on a Linux host or something like that right they want to understand the ROI of purchasing different things or doing initiatives so that's your Baseline that's how much risk we're looking at right now and then we look at our average annual cost after that's our second output which means if we were to implement this software and have it running at a high

level of efficacy how much risk would we have after how much would it reduce our risk and then from there you can really easily just get an ROI multiplier which basically just looks at how much did it reduce our risk divided by how much did it cost for us to buy and implement this that's how we get an actual ROI calculation and then cyber security can start to be looked at as something that saves money instead of just spending it right the next outputs we get are simulated loss histograms simulated loss histograms just means of our 10 000 different replications that we did you know how many times did it fall into

different buckets you can see again that log normal distribution happening where a lot of the times in the next year you know what is the max by far is like 656 000 which means very rarely is there ever a breach you know that costs you that much but it does go all the way up to like 3.8 million so sometimes you can have that large data breach and then finally we look at what's called our loss exceedance curves our loss exceedance curves basically just show us so again we have inherent versus residual which is what's our risk before versus what's our risk after and it shows us what is the percent chance of losing a certain dollar amount in the

next year or greater so here for example we can see whatever this point is so there's maybe like a five percent chance of us losing 560 000 or more so there's only a five percent chance of the next year due to this risk we would lose more than 560 thousand dollars and then we can compare that to what's called our risk tolerance curve our risk tolerance curve is something that's actually made up by people in your organization who have the ability to say how much risk they're willing to take on so are you willing to have a five percent chance of losing a million dollars in the next year due to a breach and a lot of large companies will

probably say, yeah, we're fine with that. Smaller companies, or a regional hospital, might say no, that would put us out of business. So every company is going to have a totally different risk tolerance, and that's something that usually the CFO, the CEO, maybe the board would help come up with. But these are the outputs of the simulation, and what we get from moving to a more quantitative approach.

So to sum it all up: a lot of the feedback I sometimes get when I present this methodology is, well, you're still just guessing, right? We still don't know for sure what our risk is going to be. But let's go back to that initial definition of risk: it's always going to be a guess, because you can't predict the future. This is just a better guess. It's statistically sound and proven, and it's not going to introduce more uncertainty into your calculations simply because of the methodology you're using to try to understand your risk. I think that in cybersecurity, a lot of the time, people like to think we have these crazy unique challenges that no one has solved before, and maybe sometimes we do, but understanding your risk and uncertainty is not one of them. A lot of you probably know

this, but I'm not that smart, I didn't come up with any of this. It has been used in the insurance industry for 30 years. Think about the insurance industry: they have the same kinds of risk and uncertainty, because they don't know if someone is going to get cancer or get in a car crash or anything like that. How do they decide what deductible to charge someone, or how much of a risk they are to insure? Well, they do know some things. They know family medical history, they know if you're healthy, if you work out, all these kinds of things that can help inform that. We don't know if we're going to get breached in the next year, and we don't know if we're going to get hit with a ransomware attack, but we do know what controls we have in our environment. We know how many people click on a phishing simulation, we know how many critical vulnerabilities we have on the different software we haven't patched, and how willing we are to shorten those patch windows. So we do have some data, and we can use that data to make better guesses with this methodology. That's pretty much all I have. I threw my LinkedIn up there if any of you want to connect; otherwise I'm always happy to talk about data and statistics, and I'll take

any questions as well, if anyone has some and we've got time.

[Audience question] Yeah, so actually, I won't share this one, but there's a book that I got a lot of this presentation material from. It's called How to Measure Anything in Cybersecurity Risk, by Douglas Hubbard and Richard Seiersen. If you Google that book, not only is it a great book that Josh should read, but if you go to their website they've got free Excel downloads that are a template for starting to do some of this stuff. So I would recommend downloading the ones from there, because they have written-in tips and tricks on how to use the Excel files, and they're going to be more accessible than this one would be. But it's also a great book to read.

[Audience question] How do you choose how specific to go with your risk? You have a breach, it could happen any number of ways; how detailed do you go? Yeah, so in general there's this concept called decomposition, where you can start to decompose all these risks. The highest-level one to look at might be the risk of getting breached, but you could decompose that into, well, breached through phishing, or through malware, through these different attack vectors. Decomposition

is an okay thing to do, but it actually starts to give you a worse understanding if you decompose too far. There's this idea that if you try to get really specific on the risks, it's actually harder to get accurate numbers, which is a little counterintuitive: the broader the risk you look at, the more accurate your numbers can actually be. I can say from experience, one thing I've done that I think decomposed too far, and just didn't work, was trying to map this to the MITRE ATT&CK framework. There have been some people who have successfully done that with cyber risk quantification; maybe I'm just not smart enough to get it to work. But when you go that in-depth on different attack chains and vectors, moving from this technique to that one, the methodology kind of falls apart. So in general I like to keep it at a higher level. If you think of the verticals of cybersecurity, maybe phishing is one, vulnerability management and patching might be one, maybe IAM is one; keeping it to those broader strokes typically works better. The other thing I would say is that it works really well if you're looking for a specific ROI number on an investment, so doing it on more of an initiative basis or a product basis.
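[Editor's note: the initiative-level simulation and ROI calculation described in the talk can be sketched as a small Monte Carlo model. All parameters below (event probability, the log-normal loss parameters, control efficacy, product cost) are made-up illustrations, not figures from the talk.]

```python
# Monte Carlo sketch of inherent vs. residual annual risk and ROI.
# Every parameter here is a hypothetical placeholder.
import random

random.seed(42)

N = 10_000             # replications, matching the talk's 10,000
P_EVENT = 0.35         # hypothetical chance of a loss event in a given year
MU, SIGMA = 11.5, 1.0  # hypothetical log-normal loss (median ~ $99k)
EFFICACY = 0.70        # hypothetical: the control prevents 70% of events
COST = 50_000          # hypothetical annual cost of the control

def simulate_year(p_event):
    """One simulated year: maybe an event occurs, with a log-normal loss."""
    if random.random() < p_event:
        return random.lognormvariate(MU, SIGMA)
    return 0.0

inherent = [simulate_year(P_EVENT) for _ in range(N)]
residual = [simulate_year(P_EVENT * (1 - EFFICACY)) for _ in range(N)]

avg_before = sum(inherent) / N   # "average annual cost before" (baseline)
avg_after = sum(residual) / N    # "average annual cost after"
roi = (avg_before - avg_after) / COST  # risk reduction per dollar spent

print(f"inherent risk:  ${avg_before:,.0f}/yr")
print(f"residual risk:  ${avg_after:,.0f}/yr")
print(f"ROI multiplier: {roi:.2f}x")
```

With real calibrated estimates in place of these placeholders, the same structure produces the baseline, residual, and ROI numbers the speaker shows on screen.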

So maybe you're thinking about buying Qualys or something like that; that's a high enough level that it's pretty easy to quantify. But something really specific, like the MITRE ATT&CK framework, is usually going to be decomposed too far. Does that help answer it a bit? Any other questions? Well, if not, thanks so much for listening, happy to connect. [Applause]
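[Editor's note: the loss exceedance curve discussed in the talk is just the empirical survival function of the simulated annual losses. A minimal sketch, using made-up simulated data rather than the talk's inputs:]

```python
# Loss exceedance: P(annual loss >= x), estimated from simulated years.
# The simulated losses here are hypothetical placeholder data.
import random

random.seed(7)
losses = [random.lognormvariate(11.5, 1.0) if random.random() < 0.35 else 0.0
          for _ in range(10_000)]

def exceedance_prob(losses, threshold):
    """Fraction of simulated years whose loss meets or exceeds the threshold."""
    return sum(1 for x in losses if x >= threshold) / len(losses)

# e.g. the chance of losing $560,000 or more in the next year
p = exceedance_prob(losses, 560_000)
print(f"P(loss >= $560k) = {p:.1%}")
```

Evaluating `exceedance_prob` across a range of thresholds, for both the inherent and residual loss samples, traces out the two exceedance curves that get compared against the organization's risk tolerance curve.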
