
all right can everybody hear me all right uh well it's 11 o'clock time to get started thank you for this conversation we're going to be talking a little bit about threat modeling in an agile software development environment so rather than talking about a lot of tools i want to talk about threat modeling from the perspective of trying to advocate for it within a software development team or software development organization that has moved from primarily waterfall to agile tell you a little bit about myself i actually have a degree in marketing so i don't know anything about technology at all got a degree from now university in marketing and found that i really liked
being in technology and so turned that into a paid hobby a couple years after my degree i started my own business called alc technologies and then folded that into a business with one of my partners called itx corp we're actually headquartered in rochester and we're now 250 people strong we have 11 or 12 software development teams and an infrastructure team that helps to manage deployments and security and support for those groups and i'm kind of the chief geek for that part of the organization i have a cissp pursued my mcse back in the day and while i was doing that i got married and raised a family and got involved with the boy scouts and when
i'm not doing technology i'm reading science fiction or swimming or biking or doing something else that gives me time to listen to audiobooks and podcasts and finally i'm always learning and you'll see from this talk one of the things that i've learned is that i'm often wrong and i'm always trying to find a better way to do what i'm doing there's always a way to improve if you look back on how things are going and ask questions and listen to constructive criticism and so what i want to talk about is my journey with our software development teams as kind of the chief geek right as the person who manages the infrastructure and manages security i don't manage any of our developers or any of our development teams i'm actually not allowed to give orders to our developers or our development teams i can set standards those standards are approved by the board and i can audit to those standards to make sure that people are following them but even then right how do you make sure that every developer is writing good code right that's the responsibility of your technical leads and your architects and so at a certain point software development being a design problem and being more of an art form as opposed to being something that's
ruled by tools or automation is an interesting problem to solve and the biggest part of it for us is that my job as the head of security is to do my best to make sure that the software that we deliver to our customers isn't putting them on the front page of tomorrow's paper right so we want to deliver a good quality product so years ago when we were talking about software security and trying to get it into our development teams we started probably the way a lot of you started which was doing software security as a qa step right so we do pen testing after the code is
delivered to our uat space we do vulnerability scanning after the code is delivered to our uat space we do automated checks on our code for code quality and that type of thing and that's not a bad approach right but the challenge is that we know by doing security as a qa step that we're going to release software that has bugs in it right we can only catch the things that we're able to find during that quality assurance step and we know that we're not going to catch everything the other problem is that as a software development firm we're building software for our customers they're paying us for development hours so when we find a
software bug right and that's what we know a security flaw is we have to go and fix the software or if we find that the code isn't to spec or doesn't meet a particular security standard we have to go back and fix it so we're essentially rewriting code and that costs money and that means that either we're eating it which i prefer that we not do because that's our margin or the customer's eating it which they would prefer not to do because they've already paid for it once and they'd rather have us deliver a new feature that's cool and new and exciting for their users so it's not a win-win
for anybody it's essentially a lose-lose when we have to fix bugs especially bugs that are related to security because the customer obviously expects us to deliver secure code the first time around but this is the original model so when i learned about threat modeling i got really excited right threat modeling is an opportunity for us to think about security from a design perspective essentially to shift a lot farther left in our software development process than what we're doing now with software security testing so is everybody here familiar with threat modeling just a quick show of hands so we've got a few great i think there's a talk later in the afternoon where somebody's going to be speaking more deeply about threat modeling but threat modeling is essentially these four questions right what are we doing right what are we building for a software product and then how can it go wrong so what we're essentially doing is starting to think about what are the problems that we might build into the product or how might the product be subverted or how might somebody attack the system and we start asking those questions during the design phase so when architecture delivers an architected product we can start looking
at a product and say where are our risk points and where are our attack points and where are potential attack vectors and then how will we mitigate it right that's our next question once we determine that there's a potential attack vector we can design a mitigation into the product or if we don't design a mitigation we can implement that code in a different way so that there isn't a threat or we can go to the customer and say there's this potential threat right because when we're designing the product those threats don't exist yet they're only potential the customer might say i don't care which is fine too but then the customer's going into the process with their eyes wide open saying no i'm not going to pay for that mitigation or we might recommend well this isn't a big threat you might find that your money is better invested someplace else because we're doing an mvp right now or it's a threat but it's not that big of a deal because there's no pii in the system right so we can have those conversations with the customer as opposed to a customer just assuming that we're building perfect code and in that case we make active choices during the design phase about what we're doing about a potential threat to the system right so you might say in our login screen there's a potential for somebody doing a
denial of service attack against the logins themselves or against the login screen so that nobody can access the system right so that's what can go wrong that's our threat vector and then you can say well what can we do about it there's a couple of things you could code the screen so that as it sees more connections from an individual ip you could start blocking that ip so that it can't access that particular login page that's an active way of coding around it or you could mitigate it up front by implementing cloudflare or some intrusion prevention system in front of it right or again the customer could say well i'm not really worried about it because we're deploying this product internally behind a firewall and so there's a really limited scope in terms of protecting the login page so we're just not going to worry about it at all and we'll leave that on the backlog for the long term and then finally there's this question how did we do in doing something about it right and so in this case you would actually build a test to see if you actually did it so we're able to hand off an active role to the qa team in terms of how do you identify whether or not we built a preventative measure
that prevents somebody from dosing that particular page or that screen in the system right so that's essentially what we think about for threat modeling and the cool thing is that threat modeling is kind of a way to think about stuff in general it's not just a software development model or a security model it's a way to think about what's the happy path and what does it look like when you deviate from the happy path i'll give you an example i have a friend who comes from the banking industry right and in the banking industry in the regulatory space they do this but they don't actually know it
has a name it was really interesting i was talking to him and he's like yeah we were seeing that there's a way that somebody can attack this new financial instrument that the banks are asking regulators to allow and so in order to allow the financial instrument we kind of have to talk about how that instrument might be subverted right so all of a sudden we say what are we doing we're creating a new financial instrument right what can go wrong well as we all know from many recessions there's a lot of ways that you can attack a financial instrument or subvert
it to make it work for you against the people who are consuming the financial instrument right so what can go wrong is all these ways that the financial instrument can be subverted so then in the regulatory space what they do is they work with the banks to find checks and balances to prevent that instrument from being subverted and then finally how did you do they have tests that they can run to see if someone is actively attacking the instrument right so in the financial industry they do it they just don't even have a name for it we're lucky we do we call it threat modeling but we see this
in other places too so even within our software development process our own team thinks about this not just from how can the system be attacked from a security perspective but also where are the potential places where the system might have a performance problem right bad performance is also off of the happy path and so we can identify places where you might have a system bottleneck as opposed to a security threat and then work out mitigations for that and then work out qa steps to determine whether or not you've mitigated it right so we can think about threat modeling from a bigger perspective if we're comfortable with that we can think of it as a way to reason
about how things can go wrong in general and take a more proactive approach to how we prevent those bad things from happening if you're interested on the security side go to threatmodelingmanifesto.org where they actually talk out what threat modeling means especially for the security organization so i was really excited right we can do threat modeling now and the way that we started approaching it was by building what's called a data flow diagram this is something that you can build off of your architectural diagrams or your architectural works this is an example data flow diagram from a product called threat dragon which you can get for free from the owasp organization
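to make that concrete here's a minimal sketch in python — the element names and zone numbers are made up and this is not threat dragon's actual format, it's just the idea of a data flow diagram written down as plain data, where every flow that crosses a trust boundary gets flagged as a candidate risk point

```python
# hypothetical sketch, not threat dragon's real schema: a data flow
# diagram as data, flagging flows that cross a trust boundary
from dataclasses import dataclass

@dataclass(frozen=True)
class Element:
    name: str
    trust_zone: int  # higher number = more trusted (0 = the open internet)

@dataclass(frozen=True)
class Flow:
    label: str
    source: Element
    sink: Element

    def crosses_trust_boundary(self) -> bool:
        # any flow between different zones is a candidate risk point
        return self.source.trust_zone != self.sink.trust_zone

browser = Element("end user browser", trust_zone=0)
app = Element("web application", trust_zone=1)
worker = Element("worker process", trust_zone=2)

flows = [
    Flow("web request", browser, app),
    Flow("web response", app, browser),
    Flow("job message", app, worker),
    Flow("internal heartbeat", worker, worker),
]

# every boundary-crossing flow is a place to ask "what can go wrong"
risk_points = [f.label for f in flows if f.crosses_trust_boundary()]
print(risk_points)  # ['web request', 'web response', 'job message']
```

the point is just that once you've written down who trusts whom the trust boundaries fall out mechanically and each one is a place to start asking the four questions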
and threat dragon's really nice it lets you build these and essentially you've got all your different parts right you've got your application and your worker process those are your systems that process and then you've got your flows of data so for instance an end user using their browser can talk to the application and then there's a web response and within the model for the data flow diagram you can identify with the green lines what's called a trust boundary and that's a space where you identify that data is flowing from a place of lower trust to a place of higher trust or vice versa those trust boundaries tend to be an indicator
of where your risk points are where an attacker might attack the system because at the end of the day it's either your availability that somebody's attacking or the integrity or the privacy of the data usually our customers are a lot more concerned about integrity or privacy than they necessarily are about availability so we speak to the trust boundaries this is actually a really good approach so remember i'm trying to advocate for better security with our developers so what we said was well let's do this model for each of our development teams on the products that they're working on so for new products after they come out of architecture we can start talking
about this and for existing products we would go and retroactively apply the data flow diagram and then apply threat modeling onto the application and we actually got a little bit of traction from this we got a couple of unexpected benefits one of the great things was that a lot of our developers didn't actually know how the whole product worked because you've got this development team where you might have a new developer come on you might have developers leave the team so you've got some natural churn within the team sharing a data flow diagram because it kind of parses out how the system is built in its entirety would sometimes bring to light how the system worked for new developers so it became a training tool for them the other thing it did was it started opening up conversations with our clients about potential threats to the system that had not been anticipated so we started using this and it actually was good for us and our customers because we started building out work in our backlogs work that could be productive so by doing threat modeling what we did was we essentially inserted a new step for quality and it was shifted left right so we've got architecture and the ux design and we've got our feature ideation that occurs at the beginning of the product development we do our threat modeling everything's really exciting we
start building out our backlog of security related work that needs to be done for the application and then in theory you've got your build work and then you've got the rest of your testing in qa right and then your final release right the problem is that this is the waterfall model so who in the room here is still doing waterfall for their product development and who's doing agile right so all of you guys are like that's not how it works right agile is more iterative right and in the case of your design you do have your initial design but the design is more short-term looking towards your
initial mvp rather than a finished product because agile obviously implies that we don't necessarily have a finished product or at least we definitely don't have one right away and in the middle between your deployments you've got change happening all of the time and so this is kind of what i learned there's a breakdown here right we do our initial architecture and design and we do all of our work for threat modeling and then we go away and then all of a sudden you've got a whole mess of change that's occurring and no way to account for new threats that might be arising within the system because you're enhancing your system and so you've still got your qa for your
security testing your automated stuff but the qa team also is unaware of new threats that might be indicated that ought to be getting tested for right so other than exploratory testing which our qa people can still do your qa people are prevented from testing against new user stories that are being created in the system so we're kind of halfway there right we have a data flow diagram that might or might not be up to date and we have a product that has some anticipated threats mitigated and we're mitigating other threats by doing qa at the very end of the process before release but we've got a lot of churn in the product and the product is
slowly drifting away from our initial development so realistically this is kind of what we have for our timeline right we have our engagement and then we have a cycle of sprints and then our final release and unfortunately this is kind of a huge breakdown for us in the traditional model of threat modeling right because oftentimes you may not even have architecture re-architecting the system you definitely don't have your team lead bringing in security on every sprint for us it's because the customer is not paying for that extra fte worth of time to monitor what's going on in security right and for you guys budget is a thing too if you have multiple development teams
security may be sitting in an advisory capacity but they're definitely not embedded in the team like your developers and your qa people are so things might be getting overlooked or not thought of because your team is actually churning out new features as they're being ideated with the customer but agile is valuable it works right and we're not going to be able to stop people from doing agile just because we don't like the way that they're managing security right so in a realistic timeline for our agile we have our sprints and our intermittent releases between our sprints and ideally we should be revisiting our threat model periodically within our sprints to take into account new features and new changes this was kind of our interim solution what we came up with was okay well why don't we re-threat-model every quarter or every couple of sprints to account for new features that are released right so that maybe we can get new stuff into the backlog and again we're developing towards something that's better but it's still not perfect we found that this was still helpful because we were identifying new threats and we were able to have those conversations with our customers but we also found a couple of things one is you can see if we're threat modeling every three or five sprints we're having multiple releases that
potentially have security flaws in those releases or new threats being added to the product it's still not real time and if you're catching it that far out then you're already past the development process and again remember threat modeling is supposed to be a design process the same way you bring a ux person in to design the look and feel of the system we want to be thinking where is our system going to break right that's how threat modeling works what are we building how can it go wrong we should be asking that question during that step right new features are being added in every single sprint and we should be accounting for that in every single sprint so while adding intermittent threat modeling was helpful and it was helping us because we were getting the extra revenue from the extra items in our backlog let's say that while it was good it wasn't great because again it's not a win-win right the customer's essentially getting a delayed reaction and so we said well the problem is that we can't have constant oversight in our agile teams because we don't have a security person assigned to every team so we said what if we were to add some extra training to our team members themselves right so we came up with this idea we call them security champions right this is going to be awesome what we
did was we said okay we're going to have somebody embedded in every team who might be a developer who might be a qa person we don't care we want somebody who's kind of more interested in security than the rest of their teammates so we asked for volunteers and we got volunteers some teams we had two people who volunteered i think our developers know just as much as anybody else that security is important and they were actually engaged like i want to learn more about software security so we did do extra training we said every team needs at least one security champion or if the team didn't have a security champion we said we'd like to at least have builds reviewed by somebody who's a security champion and the charge we gave them was we want you to assist in identifying security issues we did pretty aggressive training we trained them all on the owasp top 10 and the owasp proactive controls if you guys are familiar with that the secure product life cycle is something that we actually asked everybody to review including our team managers and our technical leads but we found a couple of other really good resources we essentially said let's train our developers who are doing this as pen testers and so troy hunt has a really good
course called hack yourself first and we ran them through that and there's also a good resource called hacksplaining.com there's a free version and a paid version it's paid if you're doing the enterprise version where you can get reporting into your central system but hacksplaining is a really good resource too it kind of works them through here's all the different types of hacks and it shows how to mitigate against those attacks as well as how to execute them so they worked through that and then we did a capture the flag which was really fun as part of our training the problem was that we had security champions in place and then when we went back and
measured there weren't any actual new stories being added that were related to security so now we've got security champions in every team but no threats being identified to the system which we thought was weird and we went back and actually sat in on a bunch of stand-ups for our agile teams and what we were finding was nobody was talking about security in our stand-ups the team manager or what we call the product owners right the innovation leads in our case were talking through designing the new feature the developers were there participating in the stand-up but they were definitely not participating from their security perspective they had their developer hats back on and they were participating as developers right so again within our agile process we've got this iteration where we design so we're grooming our backlog we're bringing stories in usually the product owner or the customer representative is saying we want to do this we refine those stories and we release the stories right that's during the design phase and again this is where we had expected that we would start seeing threats either go into design and get flagged as threats or new stories being written into our backlog for review by the client we weren't seeing anything like that at all
so the next thing we said was well let's intentionally add security champions into our refinement process has anybody read the checklist manifesto okay well there's this guy named atul gawande who wrote the checklist manifesto and he wrote from his experience in the healthcare field he worked for the world health organization the who and one of the challenges that the who had about improving the health of the global community is that a lot of the global community has different amounts of financial resources for implementing change and the checklist manifesto reflected on some of the things that he learned about checklisting and the use of a checklist for delivering improved health in the global community essentially one of the things they did was they built a real checklist that everybody now generally uses for pre-surgery conditions it was kind of based on the checklists that airline pilots use when you're starting up a plane and getting ready to taxi out and one of the steps in that checklist was that each of the people would declare their role in the surgery so they'd say you know i'm paul whatever i'm the head surgeon and somebody else would say well i'm rahul i'm the lead nurse or whatever and everybody would declare their role
part of it was to get over some hurdles they saw within the organization where in surgery some people were afraid because it was oftentimes depersonalized they were afraid to make a comment during that process they wouldn't say doctor there's a problem like you forgot the sponge or whatever they would step back because they didn't have a good personal relationship the other part was that the surgeons oftentimes didn't recognize the people in the team as individuals either so they wanted to increase the communication and respect and we wanted to do the same thing so at the beginning of the refinement we actually
are asking people you know if you play a special role we want you to say out loud i'm the security champion or i'm the performance champion or i'm the qa champion and so they shift from wearing the hat of a developer to wearing the hat of security during that refinement and so there's an onus that they take on essentially to be more responsible for that particular problem or that particular role during that refinement session and so that was the first thing we found in the refinement session when people would actively declare that they had a special role they would actually be more active in the process so instead of sitting back and just saying well there's a technical feasibility problem which is usually why your developers are in your refinement they'll say well i can't do that or i'm having a problem with this or whatever now we've got a developer or a qa person who's saying there's a security problem here because of x y or z right so we're actually seeing more active security roles as opposed to more passive ones the other thing we did was we asked them to modify how we were doing the story writing itself so normally you write what's called a user story and within the user story you say this is what's being done these are the expected outcomes right so
essentially you're writing out the algorithm of the story in story form so when user x does action y result z occurs and you also might say these are the limits or boundaries for the qa team right they'll describe how you test it to make sure it works so what we said was let's add abuser stories if you've got people who are sensitive to the language you might want to use the word misuser story as opposed to abuser story which is fine the cool thing is it still rhymes with user story so you've got your user stories and your misuser stories and in this case you're starting to parallel the steps in threat modeling within the confines of refining a single story so you say what are we doing right that's your intended algorithm and then your misuser stories right how can that be subverted to break it and these are your different threats or attack vectors for this story in some cases there is no abuser story sometimes you have a very simple algorithm feature or function that you're writing other times you might find multiple abuser stories so you might have a security attack threat you might have an opportunity for a performance problem say you're designing a new report structure
you might have an opportunity for a normal user to misuse the system which is why we sometimes use misuser as opposed to abuser so for instance if you can select and delete multiple records from a filter somebody might in theory select everything right and you want to identify that that's a potential flaw in the way that you're designing the system which could allow somebody to delete all of your data even though they're authorized to do it they might not necessarily want that outcome right so when you essentially hit this point you're saying how can we stray from the happy path right and this is kind of what we're teaching now how can we stray from the happy path and how can we prevent that from occurring in a way that's graceful right so that's where you start talking about mitigations what can we do about that so ux might be getting involved with internal user potential flaws right so you can have ux design a pop-up that says hey guess what you're about to delete literally all of your records or ten thousand records is this what you meant ux can also do things like if you're running the person through a sequence of screens you definitely don't want your final accept button to be in the same place where your next button is because somebody might just keep clicking right
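that bulk-delete misuser story can be sketched as a guard in a few lines — this is hypothetical python, not our actual product code, and the threshold and names are made up — where deleting more than a cutoff or everything that matched the filter bounces back for explicit confirmation

```python
# hypothetical sketch of the bulk-delete guard, not production code:
# an authorized user can delete records, but deleting more than a
# threshold (or everything that matched the filter) requires an
# explicit confirmation step first
CONFIRM_THRESHOLD = 100

def delete_records(record_ids, total_matching, confirmed=False):
    """return ('deleted', n) or ('needs_confirmation', n)."""
    n = len(record_ids)
    risky = n >= CONFIRM_THRESHOLD or n == total_matching
    if risky and not confirmed:
        # off the happy path: make the user say "yes i really meant that"
        return ("needs_confirmation", n)
    # happy path, or explicitly confirmed: perform the delete
    return ("deleted", n)

# deleting 3 of 10,000 filtered records sails through
print(delete_records([1, 2, 3], total_matching=10_000))   # ('deleted', 3)
# "select all" trips the guard until the user confirms
print(delete_records(list(range(500)), total_matching=500))
print(delete_records(list(range(500)), total_matching=500, confirmed=True))
```

the accept-button-under-the-next-button problem is the same shape detect the off-happy-path action and interrupt it before it completes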
it happens and we need to design for that right that's off of the happy path again so you kind of talk through the mitigations or again if the product owner is there you might have that conversation well this is a low risk event so even though it's a potential threat you can still have the conversation of well we're not going to worry about it we'll just call it out to the customer and let them have the opportunity to override our decision but maybe we're just not going to do anything at all right that's okay too again i've always had this weird quandary right which is that i'm the
security guy but i don't get to boss anybody around especially when we're doing work for our customers right our customers ultimately are the arbiters of their own risk the problem and this is why i've been really struggling to move our company towards moving security into the design phase is that if we don't tell our customer what risks exist then we own it right and if the development team builds something without declaring that there's a threat that they've identified negligence probably isn't the right word but essentially the development team is accepting the risk and carrying that risk instead of the customer accepting and carrying the risk and so when you get to the delivery point and you deliver a broken product if you knew that there was a problem and you didn't tell the customer then essentially you're accepting the risk of that negative outcome right so moving from having my security hat on to having my management hat on i don't want to be in a position where i have to explain to a customer why there's something wrong with their product i would much rather say well this was a threat that we identified for you six months ago sir and you said you didn't want to make an investment in mitigating that threat at that time here's the documentation from when we had that conversation we can fix it for you now but you own the outcome right you're responsible for what happened that's a great conversation to have maybe not for our customer but it's a great position to be in with the product owner right because it prevents you from having to take on that liability right so in a contractor role or in a delivery role it's nice to be able to offset and identify who really ought to own that risk and make sure that they carry it so finally with the abuser stories or the
misuser stories, the last point is that within the story you can actually write out specific tests against the particular threat or threats that were identified, so you're able to catch problems during the testing and acceptance of that one story, as opposed to having to do it during, say, exploratory testing or vulnerability testing after it's in the UA space, the user acceptance space. It gives you a little more granularity in testing for security and lets your QA people do a good job, or in some cases you can automate those tests if your developers are building automated testing. So suddenly we're essentially threat modeling everywhere, and that's the outcome we want: we do want to be doing the threat modeling up front when we have our initial design, but we also want to be threat modeling at each step, and within each sprint, because as each feature is added, as we talked about before, that's when you can do the best job designing a secure product — not identifying problems after the fact and going back to rewrite your code, but actually writing good code at the beginning. And again, this is the other side of the relationship with a customer: the better the quality of the code you deliver from the very beginning, the better the outcome is for us, but also for the customer, because it maximizes their outcome. If I'm a steward of our clients' financial investment, the more I can do to reduce the cost of fixing bugs, the more that customer gets to have additional features or a better visual design, because they can afford money for those other things instead of bug fixes. And that's again how we think about security. So that's what I've got. Are there any questions or comments or thoughts? Yes?
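As a quick aside before the Q&A: the per-story security tests described above might look something like this minimal sketch. Everything here is hypothetical, the abuser story, the `quantity` field, and the `validate_quantity` helper; the point is just that the abuser story ("as an attacker, I submit a bogus quantity to manipulate the order total") becomes an executable assertion that runs alongside the story's acceptance tests.

```python
# Hypothetical abuser story: "As an attacker, I submit a negative or
# absurdly large quantity to manipulate the order total."
# The mitigation is server-side validation; these tests pin it down.

def validate_quantity(raw: str) -> int:
    """Server-side check assumed by the (hypothetical) order endpoint."""
    try:
        qty = int(raw)
    except ValueError:
        raise ValueError("quantity must be an integer")
    if not 1 <= qty <= 100:
        raise ValueError("quantity out of allowed range")
    return qty

def test_rejects_abuser_inputs():
    # Each abuse case from the abuser story should be rejected.
    for bad in ["-1", "0", "101", "1e9", "'; DROP TABLE orders;--"]:
        try:
            validate_quantity(bad)
            assert False, f"accepted abusive input: {bad!r}"
        except ValueError:
            pass  # rejected, as the mitigation requires

def test_accepts_happy_path():
    assert validate_quantity("3") == 3

test_rejects_abuser_inputs()
test_accepts_happy_path()
```

Because the tests live in the same story as the threat, QA (or CI, if the team automates its tests) verifies the mitigation at acceptance time rather than months later in a vulnerability scan.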
Yeah, that's a great question. For those of you who didn't hear, he's asking: once we had identified our security champions, how do we keep them excited about security? A couple of things. First, it's a voluntary role, so the first thing we do is try to identify people who already think security is an interesting part of their work. The next step was training; we do an annual or biannual training, so we made sure they went, and we actually ran a capture-the-flag exercise with some competition, which encouraged the learning. We used OWASP Juice Shop, an intentionally vulnerable shopping-cart application, and the cool thing about it is that as people compete to find the vulnerabilities, they're actually learning how those vulnerabilities are executed and how to remediate them at the same time. So we ran the competition with Juice Shop; that was the next step. Beyond that, we've done two other things. One is that part of the role of being a security champion is to identify the next security champion, so that they participate not only in training their successor but also in getting them excited about it, so at least they're acting like they're still excited. The second thing was that we started a monthly — it's not a lunch-and-learn, because everybody has lunch at different times now — but a monthly get-together where people demo something they found, fixed, or identified as a remediation. And the third part was that we took not just security champions but the general concept of a champion role and rolled it into how people advocate for their own advancement. We have developer level one, developer level two, developer level three, and one of the things we said was that to move from one to two you need to take on one or more champion roles, and if you abandon a champion role the potential is that you could slide back from three to two, or two to one, if you don't maintain those roles. The last thing we're thinking about doing, though I haven't brought it to the head of delivery yet, is CPEs, continuing professional education credits. Tracking CPE work has kept me actively learning, and the response I got from the head of delivery right now is that we're too busy to talk about it. So I'm hoping maybe in 2023 that adding CPEs, as a rigorous way to measure that people are maintaining their champion role, is something we can add, because it keeps them learning new things about their industry. So thanks for the question. Yes?
Okay, so your question was: how are we measuring whether people are adding security as a concern to the stories being written? One of the things the scrum master does is tag stories in Jira as they're delivered, so if there's anything that's an abuser story or security related, they tag the story with a security label as part of building that issue or preparing it for release. That lets us do a quick measure over time of how many stories are being created with security concerns attached. Does that answer your question?
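A rough sketch of that measurement: in practice you'd pull the stories from Jira's REST search endpoint with a JQL filter such as `labels = security`; to keep this self-contained, the function below just counts over already-fetched issue dicts shaped like Jira's search results. The field names (`labels`, `fixVersions`) match Jira's API, but the sample data is fabricated.

```python
from collections import Counter

def security_story_counts(issues):
    """Count stories per release that carry the 'security' label.

    `issues` is assumed to look like Jira search results: each item has
    fields.labels (list of strings) and fields.fixVersions (list of dicts).
    In practice you'd fetch these via Jira's REST API with a JQL query.
    """
    counts = Counter()
    for issue in issues:
        fields = issue.get("fields", {})
        if "security" in fields.get("labels", []):
            for version in fields.get("fixVersions", [{"name": "unreleased"}]):
                counts[version["name"]] += 1
    return counts

# Tiny fabricated sample, just to show the shape:
sample = [
    {"fields": {"labels": ["security"], "fixVersions": [{"name": "1.2"}]}},
    {"fields": {"labels": ["ui"], "fixVersions": [{"name": "1.2"}]}},
    {"fields": {"labels": ["security"], "fixVersions": [{"name": "1.3"}]}},
]
print(security_story_counts(sample))  # Counter({'1.2': 1, '1.3': 1})
```

Trending these counts sprint over sprint gives a cheap signal of whether security is actually making it into the backlog.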
Oh, I see what you're saying. I'm not measuring for that, because it didn't occur to me, and that sounds like an awesome addition. I should talk to our director of architecture about it, because the technical leads report to the director of architecture and they do the code reviews for their teams. Thank you, I'm actually going to capture that; that's a great idea.
How are you engaging with the NIST SSDF today, and how do you foresee the White House initiative to implement that into the SDLC? Are you guys ready for that? Because I think everyone selling software that the government uses is going to be required to... What do you mean? So the White House initiative, they're requiring that agencies purchasing software confirm and verify that it follows the SSDF.
That's interesting. So first, we don't do a lot of work with the government, because I've always felt like the winner of a bid was the loser, right? So, yeah.
yeah yeah
Yeah, so we haven't looked at aligning our policies to the NIST policies yet; we take frameworks one at a time. Historically a lot of our work has been business-to-business, e-commerce driven, so we've been aligning more of our stuff to the PCI framework, which is a little more prescriptive, so it's a lot easier to deal with in terms of checking the boxes. Alternatively, we've got a couple of clients that have their own framework, an outline of requirements we have to meet, so we've been working against that as well. Going against NIST per se is on the backlog for our own certification work. Right now we're working on, oh, what the heck is it called, I just blanked out, it'll come to me, but adding NIST into our compliance controls is probably going to be 2024 or 2025 for us in terms of our maturity level.
Yeah, I think that's a really good point, this concept of trickle-down. I'll give you an example: this year a boatload of our customers suddenly got excited about accessibility. Does anybody know why? Because of all the lawsuits, right? Once you see that kind of thing come through the court system, it can drive compliance requirements down through that trickle-down effect. Now, the challenge we run into is that we've got customers who may not have the sophistication to think about that, so they're paying for features and assuming the rest. With this trickle-down effect, one thing it does is allow us, force us, oftentimes, to have that conversation up front with the customer: ADA compliance is probably something you want to pay for now, because you probably need to have it. Once we can have that conversation about getting it funded, where the customer knows they need to pay for it, then we'll get it in, and we already have the means to apply those controls as we're doing the development.

And sometimes, for anybody doing outward-facing work, a lot of what we call the non-functional requirements are things the customer either thinks they're getting for free, because they're supposed to, or doesn't want to pay for because they have a limited budget. A good example is a really small customer who might only have a $25,000 or $30,000 budget; they aren't going to have the money, in that small a scope, to also be ADA compliant. At that point you're just going to say, well, we'll apply one of these seven templates in a WordPress site, because all of that is kind of cheap, and you'll get the benefit of using those compliant components instead. When we get down to that space it's done for us, and we don't really have to have a conversation about it. At the high end, you see a lot more intentionality about security and performance and so on, and those customers oftentimes have the sophistication to call it out as a requirement already. That middle space is where it's a real challenge. So, any other questions? Yes, again.
This one? Yeah, so this comes out of Threat Dragon, which is an open-source tool for threat modeling. It generates a JSON file, and actually this is really cool. One of the things you can do, like on this point here where the data flows across a trust boundary, is identify a threat within the application that you want to have mitigated, call out within the tool what the mitigation is, and it'll store that threat as either mitigated or unmitigated. The cool thing we did was build a little program that reads that JSON file and automatically submits those threats as new issues into our Jira backlog for that particular product, so we didn't have to worry about manual transcription, which oftentimes either gets forgotten or happens with a delay. And then when threats do get mitigated, it pulls the status back down from the closed issue into the model, so people can actually see how many unmitigated items are in the model. That was really a success for us, just from being able to process that JSON file, and we store the JSON file in Git, so we've got version control on it as well. Any other comments?
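For the curious, the core of a Threat Dragon-to-Jira sync like the one described might look roughly like this sketch. Two caveats: the model schema here follows Threat Dragon v1-style exports (`detail` → `diagrams` → `diagramJson` → `cells` → `threats`) and has changed between releases, so check your version's output; and rather than actually POSTing to Jira's `/rest/api/2/issue` endpoint with credentials, this sketch just builds the create-issue payloads.

```python
def unmitigated_threats(model: dict):
    """Walk a Threat Dragon model and yield threats not yet mitigated.

    Assumes a v1-style schema; in practice you'd load the model with
    json.load() from the file kept under version control in Git.
    """
    for diagram in model.get("detail", {}).get("diagrams", []):
        for cell in diagram.get("diagramJson", {}).get("cells", []):
            for threat in cell.get("threats", []):
                if threat.get("status") != "Mitigated":
                    yield {
                        "summary": threat.get("title", "untitled threat"),
                        "description": threat.get("description", ""),
                    }

def to_jira_payloads(model: dict, project_key: str):
    """Shape each open threat as a Jira 'create issue' payload.

    A real sync would POST each payload to Jira's REST API and later
    pull closed issues back into the model; here we only build them.
    """
    return [
        {
            "fields": {
                "project": {"key": project_key},
                "issuetype": {"name": "Task"},
                "labels": ["security", "threat-model"],
                "summary": t["summary"],
                "description": t["description"],
            }
        }
        for t in unmitigated_threats(model)
    ]

# Fabricated mini-model: one open threat, one already mitigated.
sample = {
    "detail": {"diagrams": [{"diagramJson": {"cells": [{
        "threats": [
            {"title": "Spoofed client identity", "status": "Open",
             "description": "No mutual TLS on this data flow."},
            {"title": "Credentials in logs", "status": "Mitigated"},
        ]}]}}]},
}
print(len(to_jira_payloads(sample, "PROD")))  # 1: only the open threat
```

Keeping the model in Git, as the speaker mentions, means the sync can run on every commit and the diff shows exactly which threats were added or closed.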
Awesome. Well, again, I'll throw my contact information up there if you have any questions. If you're interested in that JSON reader for Jira, by the way, it is open source, so if anybody wants it, just pop me an email. You'll have to do a little bit of work to adjust it for your own API keys and such, but other than that we don't have any problems releasing it. Thank you so much for giving me your attention, and I'll be around for much of the rest of the day. [Applause]