
So today we are going to talk about DevSecOps: how we use it in security, best practices, what it is, whether it's a field, whether there is a career in it in Pakistan or abroad, and everything like that. A little introduction about Hassan: he and I worked together at eBryx for, I think, about a year, and then he went on to Germany. Basically, I will just be asking questions about everything related to security assessment, and he's a kind guy who will answer them.

Getting on to the rules before we start the talk: at any time you can raise your hand, I will unmute you, and you can ask your question, or you can drop your questions in the chat. You can also join our Discord server and post your questions in the cloud security channel; we have a channel dedicated to cloud only. Okay, over to you, you can take it away.

Is the screen visible? Yeah, I can see it. Okay, let's start.

A little introduction about me: I'm a software engineer by education, with a cloud security focus. I've been in the industry for four-plus years now, across many different positions, but my background has always been in security: security analysis, engineering, and incident response throughout my career.
Agile was the "in" thing. Agile is a way of developing software in short sprints, and there are several methodologies within Agile as well.
The way to do this is to remove this communication barrier between development and operations; that was the trigger of how things started.
The actual characteristics which enable a DevOps environment, a DevOps culture, or such a setup are: continuous integration, continuous delivery, microservices (not everything you have has to be on microservices; it's just a newer way of making things even faster), plus monitoring and logging, infrastructure as code, and collaboration. Collaboration keeps coming up again and again, because the whole point is to remove the barriers between the teams.
How do you actually achieve that? Through continuous integration of your code: there are CI pipelines which, in the end, build your code on every merge, for example, or however your company has set it up. Then continuous delivery and continuous deployment. People use the terms interchangeably, but there is one small difference: continuous delivery generally just means delivering the software continuously, whereas in continuous deployment the deployments to production are also automated, with no manual intervention. From the point your developer commits the code, it gets deployed to production without any human intervention. So that is the small difference between continuous deployment and continuous delivery: your code changes continuously and your deliverable changes continuously either way, but with deployment you are actually deploying that piece of code to your production server.

Microservices is a design approach where you build a single application as a set of smaller services: you divide it into smaller pieces, and they talk to each other through a specific interface, usually an HTTP-based API, just to make things easier. Let's say the development division has 20 teams; every team is responsible for a specific part of the product. In the end it's one e-commerce website, but one team might own, say, just the checkout functionality.
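To make the idea concrete, here is a minimal sketch of two cooperating services in Python. The "catalog" and "checkout" services, the port, the SKUs, and the prices are all made up for illustration; the point is only that the checkout logic never imports the catalog's code, it talks to it over an HTTP API.

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

# Hypothetical "catalog" microservice: it alone owns product prices.
CATALOG = {"sku-1": 500, "sku-2": 1250}  # prices in cents

class CatalogHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        sku = self.path.strip("/")
        body = json.dumps({"sku": sku, "price": CATALOG.get(sku)}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep the demo output quiet
        pass

def start_catalog(port):
    """Run the catalog service in a background thread."""
    server = HTTPServer(("127.0.0.1", port), CatalogHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server

# Hypothetical "checkout" microservice logic: it does NOT import the
# catalog's internals; it only calls its HTTP API.
def checkout_total(skus, catalog_port):
    total = 0
    for sku in skus:
        url = f"http://127.0.0.1:{catalog_port}/{sku}"
        with urllib.request.urlopen(url) as resp:
            total += json.load(resp)["price"]
    return total

if __name__ == "__main__":
    server = start_catalog(8901)
    print(checkout_total(["sku-1", "sku-2"], 8901))  # 1750
    server.shutdown()
```

Each team can now change the internals of its own service freely, as long as the HTTP interface stays stable.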
You also make small changes instead of going for bigger changes; in practice, Agile and DevOps work together. And with this setup, when something breaks you can answer: what caused the issue, which service caused the problem, how do you fix it, and who has to fix it? There are so many benefits to it. Taking a big-picture view: infrastructure as code. Infrastructure as code is, I think, the name of the game for the DevOps engineer position at least; it's what most of them spend their time on.
The old way of doing things was that you had servers. On the cloud, for example, you launch your server, say an EC2 instance, or some compute instance on GCP, or something like that, and then you configure it. That was completely manual, even if you were doing it on the cloud. The advantage of working with the cloud is that the cloud gives you an API (AWS, for example), so you can write the whole thing in the form of code. There are several ways of doing it, which we'll look at briefly.

You also get auditability of the infrastructure you create. Again, look at the DevOps situation: say I am the ops guy and Umar is the dev guy. I write my infrastructure code and push it to a repository, and he reviews the pull request: maybe some configurations are not relevant, maybe something should not be deployed there. Before the infrastructure is deployed, you are able to review it, and you can audit it later too. Let's add security to this as well: say security wants to audit our infrastructure configurations. Do we have all the ports open? Do we allow traffic from the internet, or only from specific services? With infrastructure as code these checks are easy: if you have tight control over the configuration, you can just read the code, and the code itself is its own documentation. So you are applying an application-development approach to your infrastructure, using several frameworks and tools.

Monitoring and logging, again, are very crucial for DevOps, because part of the DevOps philosophy is a blameless environment. If something fails, it's not the person who failed; it's the system that failed. Your complete system let it through; maybe you were not doing enough gatekeeping, so the problem got introduced. It's not on the person, it's on the system itself, so you always try to improve the system. But to do that, you always have to monitor for problems, for performance, and for other things: okay, this time we deployed with a different approach, how did it work for us?
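The port-exposure audit described above can be sketched in a few lines of Python. The security-group structure and field names here are illustrative stand-ins for whatever your IaC tool actually emits, not a real Terraform or CloudFormation schema.

```python
# A security-group definition as it might look after parsing an
# infrastructure-as-code template (fields are illustrative only).
SECURITY_GROUP = {
    "name": "web-sg",
    "ingress": [
        {"port": 443, "cidr": "0.0.0.0/0"},    # HTTPS from anywhere: fine
        {"port": 22, "cidr": "0.0.0.0/0"},     # SSH from anywhere: bad
        {"port": 5432, "cidr": "10.0.0.0/8"},  # DB from internal range: fine
    ],
}

SENSITIVE_PORTS = {22, 3389, 3306, 5432}  # SSH, RDP, MySQL, PostgreSQL

def audit_ingress(group):
    """Return findings for sensitive ports exposed to the whole internet."""
    findings = []
    for rule in group["ingress"]:
        if rule["cidr"] == "0.0.0.0/0" and rule["port"] in SENSITIVE_PORTS:
            findings.append(
                f'{group["name"]}: port {rule["port"]} open to the internet'
            )
    return findings

print(audit_ingress(SECURITY_GROUP))
# ['web-sg: port 22 open to the internet']
```

Because the infrastructure is code, a check like this can run as a pull-request job, so the misconfiguration is caught before anything is deployed.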
That is part of DevOps collaboration. Collaboration, as I said: the whole thing is about collaboration in the end. It's about removing those barriers and sharing. If everyone is looking at the same code, and you have these practices implemented so that there is tight control over what code actually gets deployed, the process is already improving, and you are also collaborating right on the code, right where the code sits, before something goes wrong. It's a very preemptive approach to collaboration, which is why it's so necessary.

As a big picture, you can say this is how any change is released in a DevOps environment. The change could be to the infrastructure or to the application itself: for example, after a feature release your team discovered a bug, and now you want to push a fix for that bug. Everything is a code change, and how do you do it? You plan it, you code it, you create a build out of it, it goes through your CI/CD pipelines, and then the whole thing again: you monitor it, you operate it, and you also look at what you can improve in the future.
And then you hit the same old problem. In theory, DevOps was supposed to take care of security as well. If you read the DevOps philosophy, it says software should be delivered fast, iterated rapidly, cutting silos. But in reality that's not always the case. Why? Because security teams work with their own specific tooling: vulnerability scanners, security monitoring with its agents, and so on. Everything security teams usually have is a big silo; security teams usually live in their own room, and if you are the security person, usually either they think you are weird or you think they are weird. This silo is of course a problem for the company itself: ops or dev have not worked with your tooling, and you haven't worked from their angle. So the silo causes similar problems, and the big one is speed. There is also another quite famous dynamic: security always says, "you're introducing a new vulnerability with this new release, with this new code you pushed; we're not allowing it." Once you can say that, maybe twice, but not three times, especially if the developers don't understand what the hell is happening. So this silo brings cultural problems, and eventually security's own goal will not be achieved. Security's goal is not to stop software delivery; the goal is that the company's software should be secure. When we talk about software delivery or infrastructure delivery, security teams tried to stop insecure releases, but they couldn't, because if you are working in a silo you will end up causing more friction than solving problems. So breaking the silo was another trigger, and as I said, DevOps itself says security should be part of it. Let's add security to the usual CI/CD pipelines, and this way we will be improving
transparency: for example, which vulnerability was found, what the reason is, what the impact will be, and so on and so forth. Especially for companies shipping frequent updates: from a security perspective, every commit or every feature release may introduce another vulnerability, because you have not tested it. So how do you break this cycle? To break the cycle you have to get into it, and this is basically what "shifting security left" means; that's the phrase being used for this. You move security to the left in the DevOps cycle. Let me explain what it actually means.
How are we going to do these things, breaking the silos and making security more preemptive than reactive? First, security starts as a contributor to DevOps itself. How do you do that? You try to understand how the usual deployments work, from the application perspective and from the infrastructure perspective, and only once you have understood how software is deployed can you try to improve it security-wise. Then, increase trust and transparency between the three units: ops, dev, and sec. Just adding your tooling to the CI/CD is not enough; if developers are not motivated from the security side, they will not fix vulnerabilities fast enough, and you will not achieve anything. In the end, things boil down to culture: you try to build up a culture which is collaborative. Then, integrate the security ideas into DevOps. Tooling, as you can see, is another point. Traditionally you would run vulnerability scans late, but shifting left means you start doing it as early in the development lifecycle as possible. Bring any security ideas you have to the table, security scanning, compliance as code, whatever, because now you have the cloud and so many automation options. It's an opportunity for security teams to reduce their own workload, in a sense, and make their own lives easier, thanks to this whole culture and movement.

So what does shifting security left mean? Put yourself in the seat of a developer. How does it look? You have Jira or some other ticket management system, and there is already a backlog of tickets: what things we should be
working on, what features need to be added, what bugs need to be fixed, and so on and so forth. As a developer, every day you pick up a ticket and start working on it. Normally these tickets are, for example, user stories: the user sees this page, he clicks this, that happens, and so on, and you try to implement that in your code. Because we're talking about DevOps, you then commit this code into a version control system, Git for example. Once you commit it, depending on how you have configured your CI/CD, a build may get created on a specific branch; someone tests it, maybe there is automation for the testing as well; then you deploy it to the production server, and then your team monitors it. This is the usual flow of things, with DevOps or without it, and the same is true for infrastructure code as well as application code. The stages in a pipeline can differ from company to company, everything can be changed, but this is the usual cycle. So when we say shifting security left, what is meant by that is:
for this piece of code that the developer just committed, try to scan it for security issues as soon as possible; try to detect security problems right from when he is, for example, coding. In the IDE: there are extensions now for Visual Studio Code and for Atom that can scan your code for several classes of issues while you are coding. That is static scanning, basically. Or they can check whether you are using vulnerable libraries or packages, which is dependency analysis. This can be done while the developer is coding, and that is as far left as it gets. Or, for example, take one quite common kind of issue:
because it's the world of APIs, you have plenty of credentials around, and there have been incidents where some developer committed a credential into the code, and because they didn't make a pull request, they just committed directly to master, the credential was already in production. That's one of the problems. From the security perspective, if you had shifted security left, you would have prevented it: at commit time, if you had configured some pre-commit hooks, a hook would run some kind of secret scanner and not let the person commit, or at least show an error like "hey, you're committing some high-entropy string, this looks like a credential." So secret scanning is another part of the DevSecOps toolchain, or whatever you want to call it.

If there are any questions, let's take them now, or should I continue? I think we're good for now; there were some questions, but they were about abbreviations and such, so we're good to go. Okay. So, the same example as before: you get a ticket, you try to develop the feature, you push it, and the code gets deployed, passing through several steps.
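As a sketch of what such a pre-commit secret scanner can do, here is a minimal high-entropy-token detector in Python. The threshold, the token pattern, and the sample diff content are illustrative choices, not the rules of any real tool like detect-secrets.

```python
import math
import re

def shannon_entropy(s):
    """Bits of entropy per character of the string."""
    probs = [s.count(c) / len(s) for c in set(s)]
    return -sum(p * math.log2(p) for p in probs)

# Long token-like runs of base64/identifier characters.
TOKEN = re.compile(r"[A-Za-z0-9+/=_\-]{20,}")

def find_suspected_secrets(text, threshold=4.0):
    """Flag high-entropy tokens that look like credentials."""
    return [t for t in TOKEN.findall(text) if shannon_entropy(t) > threshold]

# A made-up staged diff: one real-looking secret, one readable string.
diff = '''
DB_HOST = "db.internal.example"
AWS_SECRET = "9xK2/fQ7pL0mZ8vRs3TqWy1bN4cD6eGh"
COMMENT = "this_is_just_a_readable_sentence_here"
'''
hits = find_suspected_secrets(diff)
# In a pre-commit hook you would exit non-zero if hits is non-empty,
# blocking the commit until the credential is removed.
print(hits)
```

The readable sentence is long enough to match the pattern but has low entropy, so only the random-looking token is flagged; real scanners combine entropy with known key formats to cut false positives further.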
What can security add at those steps, as part of DevOps, or as part of security within the software development lifecycle? Pre-commit, meaning before you commit: what can you do there? When the feature ticket was created, did anybody threat model it? How could this feature be abused, what could go wrong, and so on and so forth. All those questions are preemptive when you start them right at the requirements stage; of course you need a security mindset for that, thinking of the abuse cases right from the beginning. Then, as I mentioned, IDE security plugins; several of them exist, for infrastructure code and for application code, for almost all modern languages. Then pre-commit hooks, as I mentioned. And then, importantly, peer code reviews. You can always get your code reviewed by your colleagues, and this is done. By the way, this is also the point where, for example, you
commit the code, but not to the main branch, not to the master branch of your repository, but to some feature or development branch, and that will not be merged to master, and hence won't proceed to deployment, unless someone reviews it. This is the usual way of doing things in these DevOps-based companies; in a good DevOps environment there are no exceptions about this, everyone's code has to be reviewed. And then you can be critical of the code, and being critical improves the quality of the code itself, of course, and security issues can be caught this way too.

At commit time: static code analysis. You committed the code and pushed it, so let's say you have SonarQube or some other static code scanner that can run on your code. That is the first kind, traditional static scanning. The second is dependency analysis, or dependency scanning: if you are using vulnerable or outdated packages, that can be caught in this scanning too. Dependency management is exactly that. Container security: if you are using, for example, Docker images, the same applies to Dockerfiles as well, because you write a Dockerfile and then create an image out of it, and even before you create the image, the Dockerfile
can still be scanned for issues; there are tools for this, and we will have a quick look at those tools as well. Then the acceptance stage. At the acceptance stage, for example, the PR was merged; what now? Usually, with infrastructure as code and cloud infrastructure, this depends on how you have set up your CI/CD pipelines. Sometimes I have seen pipelines where infrastructure changes (because we are using infrastructure as code) and application changes live together: you commit to only one repository, and depending on the change, only the relevant jobs will run. For example, say the change was to a Terraform template, some small change to a service, or whatever the change was; that gets deployed. The reason acceptance testing is mentioned here is that we are looking at it from the application angle, and for the application to be deployed, the infrastructure already needs to be there, so the application can get deployed onto it. Then dynamic testing: run SSLyze, Nikto, or some other vulnerability scanner you have against the build itself. Here we are talking about pre-production, for example; usually there are multiple environments, and this may be the testing environment, because besides dynamic security testing, it could also be where your QA testing happens. Your automated jobs run against your HTTP endpoints, so those dynamic security tests can be part of the pipeline as well. And then security acceptance tests: what is the criteria for security? Basically, there is a whole lot of automation you can put at every single stage of your software development lifecycle.

Next, secrets management, and configurations; actually, you can move some of these pieces around. Secrets management is not the same as secret scanning or secrets detection. Now we are talking about this: in production there is an
application deployed that needs to talk to, let's say, some AWS services. How does it talk to AWS services? It uses some credentials. Or, for example, the application needs to use a Docker container, so it will have to pull a Docker image; the image lives in a Docker repository, and how do you authenticate with that? The application needs some credentials. Database credentials, for example; in the end, any kind of credentials your application needs: how are you managing those credentials? Ideally with a secrets management solution: HashiCorp Vault, Secrets Manager from AWS, or some others; there are tons of ways and tons of tools to do those things as well.

Then we talk about security configurations and server hardening; this is a little more infrastructure-oriented. Security configurations, again: are we following the security best practices for where the code is getting deployed? It could be as simple as: do we have the security headers that should always be enabled? Does the operating system have the specific patches, or the specific baseline configurations, that we want to meet? Basically anything that you can put into code; this is also where we get the term compliance as code. So that's the production stage.
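A minimal sketch of the anti-pattern and the fix: instead of baking a credential into the source, the application asks for it at runtime. The environment variable here is just the simplest stand-in for a real secrets-management client such as Vault or AWS Secrets Manager; the variable name and value are made up.

```python
import os

# Anti-pattern: a credential hard-coded in the source ends up in version
# control and in every build artifact.
# DB_PASSWORD = "super-secret-password"   # don't do this

def get_secret(name):
    """Fetch a secret injected at deploy time (by Vault, AWS Secrets
    Manager, the CI/CD system, etc.) instead of baking it into the code.
    Reading an environment variable stands in for a real secrets client."""
    value = os.environ.get(name)
    if value is None:
        raise RuntimeError(f"secret {name!r} was not provided to this process")
    return value

# Simulate the deploy environment injecting the secret:
os.environ["DB_PASSWORD"] = "injected-at-deploy-time"
print(get_secret("DB_PASSWORD"))
```

With this shape, rotating the credential is a deployment-configuration change, and the secret never appears in the repository that your secret scanner watches.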
Next we talk about operations. Operations is when the build has been deployed: what can you do with it now, and what will be happening from here on, both from the ops perspective and from the security perspective? Say something went wrong, an incident. The incident could be an attack on availability, like a DDoS that degraded your web application; now you have an availability-based incident, and you try to mitigate it somehow. Then, again as part of the DevOps principles themselves, you don't blame the people; you blame the whole system, and you try to improve the system by doing blameless postmortems after these incidents: what went wrong, how we could have avoided this, how we can avoid it in the future, and so on. These postmortems are a very good way of maturing your infrastructure and stopping that problem from happening again.

Continuous monitoring, again on both sides: availability monitoring, performance monitoring, security monitoring, any kind of monitoring. You are monitoring because, okay, the software is live, and we know everything went well, that's why our software got deployed; but still, no static or dynamic code scanner is going to find business logic flaws, for example. Things still have to be tested, things still have to be protected; there still needs to be a WAF, there still needs to be a tight perimeter, and around that perimeter your security team is monitoring whatever controls you have placed. And then threat intelligence: what kinds of attacks are you facing, what are the patterns, are there any well-known TTPs, and so on. That's all the security monitoring you can do post-deployment.

Okay, let me summarize and come to what you will practically see in an application security pipeline; we have some examples we can briefly discuss as well. SCA is software composition analysis, basically dependency analysis. For example, you have a Python application
and you're using a requirements.txt file to manage the dependencies, and within those dependencies you are using a package that is itself vulnerable, or is dependent on another package that is vulnerable, so it is basically introducing a vulnerability into your code. And you didn't have to deploy the application to find this out; you can do it right from the beginning. Starting with SCA: in your usual DevOps or CI/CD pipeline that deploys this application, you add a new job, and it runs before the build itself is created (right now we are talking about the CI pipeline). You run some tool that is specific to your language; in this case it's Python, so we would probably use Safety, or a commercial tool like Snyk, or some other dependency analysis tool. This runs as a job in your pipeline, and what it does depends on the maturity of your DevSecOps pipeline: do you want to proceed if there is a medium vulnerability? If there is a low vulnerability? In the beginning, you will probably only be blocking the pipeline, blocking the deployment of that application, for critical or high vulnerabilities, or maybe only critical vulnerabilities.
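The severity gate described above can be sketched like this in Python. The advisory entries, package versions, and severity policy are all invented for illustration; a real SCA job (Safety, Snyk, etc.) would query a maintained vulnerability database instead.

```python
# A toy advisory database mapping (package, vulnerable version) to a
# severity. These entries are made up for illustration only.
ADVISORIES = {
    ("requests", "2.5.0"): "high",
    ("pyyaml", "3.12"): "critical",
    ("click", "7.0"): "low",
}

BLOCKING = {"critical", "high"}  # start strict only on the worst findings

def parse_requirements(text):
    """Parse 'name==version' pins from a requirements.txt."""
    pins = []
    for line in text.splitlines():
        line = line.strip()
        if line and not line.startswith("#") and "==" in line:
            name, version = line.split("==")
            pins.append((name.lower(), version))
    return pins

def sca_gate(requirements_text):
    """Return (findings, should_block) for a CI job to act on."""
    findings = [
        (name, version, ADVISORIES[(name, version)])
        for name, version in parse_requirements(requirements_text)
        if (name, version) in ADVISORIES
    ]
    return findings, any(sev in BLOCKING for _, _, sev in findings)

reqs = "requests==2.5.0\nclick==7.0\nflask==2.0.1\n"
findings, block = sca_gate(reqs)
print(findings)  # [('requests', '2.5.0', 'high'), ('click', '7.0', 'low')]
print(block)     # True: the high finding blocks the pipeline
```

As your confidence in the tooling grows, tightening the policy is a one-line change to `BLOCKING`, which matches the "start small, then block more" approach discussed next.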
These are some of the considerations once you start doing DevSecOps, once you start adding these automated jobs to the pipeline. You cannot just come in, add some tools randomly, and start blocking deployments, because that would kill the whole point of building the culture: it introduces friction, and tools can always have false positives. Maybe not SCA tools; they are usually very accurate, because they are matching specific versions, finding the CVE for them, and blocking or taking whatever action your job tells them to take. But plain static scanning can have false positives, secret scanning can also have false positives in some cases, and so on; the same goes for SAST tools like SonarQube, and DAST definitely has a lot of false positives. So when we start doing these things as part of the development lifecycle, we have to be very careful: you start small, and gradually you improve your tooling and the rules those tools use, tuning out the false positives. Once you have built enough confidence that when a tool finds a medium vulnerability, it definitely is a medium vulnerability, only then do you let those tools start blocking things, blocking vulnerabilities from reaching production.

Here I am trying to cover very specific things I have seen happening in a DevSecOps pipeline, where you are trying to protect the application through its deployment pipeline. A lot of it is basically hardening; that's it, it's hardening, tons of hardening, plus vulnerability scanning. In a better setup you will probably have a good vulnerability management tool as well, and then whenever these jobs run, whenever you are deploying,
these jobs will send the vulnerability data they find, in a machine-readable format, JSON or XML or something else, to this vulnerability management tool. Then you have all the vulnerabilities in a single place, and your security team can keep an eye on them and follow up on the fixes.

What else can you improve in the whole DevOps pipeline, the whole CI/CD pipeline? As we said, besides applications, the cloud itself is a problem when it comes to security if you don't know what you are doing, and most of the time, if you are using the default configurations, you are probably creating new infrastructure security issues without even knowing. When you have infrastructure as code, the infrastructure your application is deployed onto is itself already part of some repository, so apply the same techniques to that as well. In my experience, whenever someone is talking about DevSecOps, most people don't talk about this for some reason: they talk about configuration management, they talk about compliance as code, but somehow infrastructure security is missed, although plenty of other people are talking about it too. In my opinion, when you are talking about security in DevOps, cover everything: if you are doing hardening and you have
opportunities to automate finding security issues, use them; use the same techniques, even if there may be different tools for your infrastructure code.

Configuration management, let me quickly go through it. Configuration management is, for example: for every production server, you want only specific users rolled out to them, and you want those servers to have specific configurations. For example, you want to install a security monitoring tool which has an agent, or you want to install some application performance monitoring tool that has another agent. So you have a baseline for a server, and you create those baselines: this server should have these three or four agents, these tools should have such-and-such configurations, our SSH server configuration should be hardened to this level, so the sshd config should look like this; whatever configuration changes you want to make. Then there are tools to manage them at scale, for every single server: if your server is running in an auto-scaling group and there are 100 servers of the same kind running, all of them should be identical. To ensure and automate that, you have configuration management. Sometimes you will see Ansible lumped in with infrastructure
as code, maybe because it has things it can do there; for example, Ansible also has an AWS module that you can use to provision your infrastructure as well, but that's not what it was made for. Anyhow.

Then we have compliance as code. So you did configuration management, good, but you still want to be sure that you are complying with your baseline configurations after the deployment. Why? Because, let's say you did have Ansible and you deployed your servers with those configurations, but there are still SSH users who can go into a server and change something manually. How do you manage that? That is where compliance as code kicks in. Maybe you have an automated job that runs a tool like Chef InSpec. In Chef InSpec you create a profile which says: my Ubuntu server should look like this, these are the baseline configurations. It works over SSH; you define what that configuration looks like as a profile, you run Chef InSpec against your production server, and it tells you whether the server complies, all in the form of code, of course. And since it is in the form of code, you can run it in the pipeline as well, and you can generate its output in a machine-readable format.
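An InSpec-style check can be sketched in plain Python as: parse the fetched config, compare it to a baseline, report the failed controls. The baseline options and the sample sshd_config are illustrative, not a complete hardening profile.

```python
# An InSpec-style baseline expressed in plain Python: expected
# sshd_config values (illustrative, not a full hardening guide).
BASELINE = {
    "PermitRootLogin": "no",
    "PasswordAuthentication": "no",
    "X11Forwarding": "no",
}

def parse_sshd_config(text):
    """Parse 'Option value' lines, skipping blanks and comments."""
    options = {}
    for line in text.splitlines():
        line = line.strip()
        if line and not line.startswith("#"):
            key, _, value = line.partition(" ")
            options[key] = value.strip()
    return options

def check_compliance(config_text, baseline=BASELINE):
    """Return {option: (expected, actual)} for every failed control."""
    actual = parse_sshd_config(config_text)
    return {
        key: (expected, actual.get(key))
        for key, expected in baseline.items()
        if actual.get(key) != expected
    }

sshd_config = """
# as fetched from the production server, e.g. over SSH
PermitRootLogin no
PasswordAuthentication yes
X11Forwarding no
"""
print(check_compliance(sshd_config))
# {'PasswordAuthentication': ('no', 'yes')}
```

A scheduled pipeline job running a check like this catches the manual drift described above: someone flipped `PasswordAuthentication` by hand, and the report says exactly which control failed.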
That output can also be fed to your vulnerability management system, however you want to configure it.

Then continuous integration and delivery. CI/CD tools are of course a big part of the whole conversation. You define your workflows, you define your pipelines: on commit do this, on push do this, on deployment do that. What are the things you want to do, and in which order? Maybe you want to do secret scanning first and then dependency analysis, so there will be one job for secret scanning and then another job that runs after it in the pipeline. How do you define this pipeline? There are several tools: Jenkins, GitLab CI, CodePipeline, and so on. But then again, there is a security consideration here as well, because you can make mistakes here too: the pipeline itself is a resource (in the cloud, for instance), so you also have to protect the pipeline, and the pipeline security specifics differ per tool; the considerations for Jenkins are different from those for GitLab, and so on.

And then there is artifact management. Artifact management is about: what exactly is your release, and what is the image? In this case, let's say a Docker registry, ECR, JFrog; there are so many tools that can be used to do this. An artifact is, for example, a specific build that you created that got deployed. How do you verify it, and how do you keep track of what is running in production? You do this using artifact management tools, or by tagging your images, sometimes tagging your branches with release names and versions, and so on.

So here is a sample pipeline for application security. At first glance it looks too big, but it's just that there are many jobs, that's all.
you see only AWS services here. It's an open source project called DevSecOps Factory. So Cloud9 is basically an IDE, you push from the IDE; CodeCommit is a git service by AWS that hosts your code. You make the commit, the commit goes in, and now how do some of those jobs we have discussed so far fall into place? Using AWS CodeBuild, or whatever thing that can run bash in the end, you start adding these security toolings in there. You have hadolint, a Dockerfile linter, so your Dockerfiles can be checked with that. Then there is detect-secrets for secret detection. Then there is Snyk; Snyk basically does dependency analysis, it's a commercial tool, but I think it has a free version as well. The same goes for Bandit: this is all SAST by the way, all static scanning, either dependency analysis or, in Bandit's case, static analysis of Python code. Then Trivy is for scanning the images, Docker images basically. And once all these jobs have run successfully the build proceeds; but let's say the job fails at Trivy, it means that the next action probably won't happen, and what would have been deployed never goes out, because the image itself was vulnerable or had problems. So this is how you create these jobs: take all of this security tooling, put it in a pipeline, in specific stages, however it makes sense. Usually the fast ones will be more towards the left, and when we say towards the left, it's earlier in the pipeline, and that is exactly what shifting security to the left is: detect things as soon as possible. Then the CodeDeploy service is for deploying the releases, deploying whatever comes out, and AWS Fargate is a service to host clusters of containers. And then all of these tools generate security findings, so maybe you want to send those findings to Security Hub or some other vulnerability management tool.
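As a rough illustration of how a CodeBuild job can chain the tools named above, here is a hedged sketch of a buildspec. The image name `myapp`, the `src/` path, and the exact flag choices are assumptions for illustration; the tool invocations themselves (`hadolint`, `detect-secrets scan`, `bandit`, `trivy image`) follow those tools' standard CLIs:

```yaml
# buildspec.yml sketch: fail the build early on lint/SAST/image findings
version: 0.2
phases:
  pre_build:
    commands:
      - hadolint Dockerfile                          # Dockerfile linting
      - pip install detect-secrets bandit
      - detect-secrets scan --all-files > secrets-report.json
      - bandit -r src/ -f json -o bandit-report.json # Python SAST
  build:
    commands:
      - docker build -t myapp:latest .
      # non-zero exit on HIGH/CRITICAL findings stops the pipeline here,
      # so the vulnerable image never reaches the deploy action
      - trivy image --exit-code 1 --severity HIGH,CRITICAL myapp:latest
artifacts:
  files:
    - "*-report.json"
```

The JSON reports are kept as build artifacts precisely so they can be shipped to Security Hub or another vulnerability management system, as described above.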
So that's exactly what I'm saying: you don't start with failing stuff in the pipeline, you always start small. At first these tools just generate findings, you put them in a central place, and then your security team looks at them and tries to improve the automation over time, reduce false positives and so on. And yeah, this is how it works. — We have a question, sorry to interrupt. It is regarding supply chain attacks. Let's say there was an evil manager of a project, and he intentionally let some developer, or himself, add some buffer overflow or something like that to the code. How do we detect that in these pipelines, and how do we check our old code for security issues? — Yeah, it depends. For example, the Codecov breach is one of the recent ones; let's talk about that, it's a similar scenario, a supply chain attack.
With docker pull you pull the Trivy image, then you run this image and give it the target. But what if this container needs, let's say, credentials, or whatever else is in the environment it's running in? It was not developed by us, and this is quite common with Docker images. So this is one of the examples: you are using an image or a package created by someone else, and that company got breached. Let's say in Trivy's case someone made changes to that Docker image, and you are just pulling the image every time this pipeline runs, without checking. Whatever that container, or the environment where it ran, had access to will probably leak out to whoever breached that company first. How do you stop it? Artifact management again, or rather: you check the integrity, you always match the hashes of the images. Instead of blindly pulling the latest Trivy image, these companies usually publish known-good, tested images, so when you pull, you always try to match the hash. That's one of those ways, but it really depends on what exactly the scenario and the attack is; you take a look at that and then deal with these issues.
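One hedged sketch of that integrity check: pin the tool image to an immutable digest you verified once, instead of a mutable tag. The job name and `<verified-digest>` are placeholders, not values from the talk:

```yaml
# GitLab CI job sketch: pull scanners by digest, never by :latest,
# so an upstream compromise cannot silently change what runs here.
image_scan:
  script:
    - docker pull aquasec/trivy@sha256:<verified-digest>
    - docker run --rm aquasec/trivy@sha256:<verified-digest> image myapp:latest
```

A `docker pull name@sha256:...` fails outright if the content no longer matches the digest, which is exactly the hash-matching behavior described above.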
— All right. And what about old code? Let's say there was some repository, some tool or feature, left behind by some department, and the manager or the developer has left. How do we scan that? Do we use the same tools, do we just add it directly to our pipeline, what do we do in that case? — Yeah, it depends again. All of these tools, the SAST tools especially, are scanning the code somehow, and most of them will scan the whole repository by default anyway. So for old code, you can be told about it and then run a manual scan. Simple. — All right, okay.
Okay, let's move on. That was an application security pipeline; we focused mainly on the jobs and tooling that help secure the build of the software itself. Now we go to infrastructure security. Again the assumption is that we're using infrastructure as code, in this case CloudFormation, and there are several static scanners for CloudFormation. The input to my infrastructure pipeline will be a CloudFormation template, or a set of templates. Then we have several different static scanners. As I said, there is cfn-lint (it used to be called cfn-python-lint), a Python-based tool that checks CloudFormation and can catch security issues; you can run it in a CLI, and basically whatever you can run in a CLI you can run in these CodeBuild jobs as well, these are bash things. The same goes for detect-secrets, because even CloudFormation stacks have parameters, and someone being lazy could have committed something in there as well, quite possible. Then you add CFRipper, again a CLI tool that scans CloudFormation for security issues. cfn_nag is a similar one, Checkov is a similar one. So you put in what you have, you try to find security issues in there, and take decisions based upon that. From the pipeline perspective, I know it's not too clear in the diagram, but we could have only one job, call it SAST for example, and run all of the tools in it; if even one tool fails, we fail the whole stage and the deployment does not proceed. So this is the same approach applied to infrastructure code. You can do the same thing for Terraform, you could do the same thing with whatever you have.
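A sketch of that "one SAST stage, fail the whole stage" idea in GitLab CI terms. The `templates/` path and job names are assumptions; the key point is that both scanners share one stage, so either one failing blocks the later stages:

```yaml
stages:
  - sast
  - deploy

cfn_lint:
  stage: sast
  image: python:3.11
  script:
    - pip install cfn-lint
    - cfn-lint templates/*.yaml      # non-zero exit on findings fails the job

checkov_cfn:
  stage: sast
  image:
    name: bridgecrew/checkov:latest
    entrypoint: [""]                 # override the image entrypoint for CI use
  script:
    - checkov -d templates/ --framework cloudformation

deploy:
  stage: deploy                      # only runs if every sast job passed
  script:
    - echo "deploy the stack here"
```

The same layout works for Terraform by swapping the framework and the linter.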
Let's move to a quick demo if there are no questions. — I do have a question, if you don't mind. We talked about credentials being committed by mistake into the code, and let's say the build also got deployed to production. How do we actually go passwordless? Let's talk about AWS: I do not want to use any credentials at all, how do we do that? — Okay, so tooling first. The assumption is that you are using pure native AWS services, because then you are able to leverage roles, not credentials. So at least at the AWS API level, whatever API interactions happen will be taken care of by the associated role. Now, obviously, you'll find resources that don't support roles; in those cases it's a different story, but generally this should be doable quite easily in a stack like this. But if you want to go with a mixture of things, like you have GitHub, and then you have something deployed on GCP, something else deployed on AWS or elsewhere, and there is authentication in there, you will always have to authenticate somewhere when we say going totally credential-less or password-less. So if you're doing your secret management very well, for example with HashiCorp Vault, and maybe there are other secret managers that allow the same approach: the actual secret always stays with them, but they generate a temporary credential for you. In the end it's again a credential, but the exposure is quite limited because it's going to expire, so that's another approach you can take. Secondly, there are some ways as well, like using Vault, so that you don't have to store that thing on disk at all; it will only be in the memory of the production server or container. Yeah, you can avoid on-disk storage, but of course then it depends on your tooling; you have to check whether your current tooling allows that, or maybe retool. — Okay, thank you, we can continue.
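For the GitHub-plus-AWS mixture, one widely used credential-less option (not covered in the talk itself) is GitHub Actions OIDC federation into an IAM role, so no long-lived keys are stored anywhere. The account ID, role name, and region below are placeholders:

```yaml
# GitHub Actions sketch: exchange an OIDC token for short-lived AWS creds
jobs:
  deploy:
    runs-on: ubuntu-latest
    permissions:
      id-token: write      # lets the runner request an OIDC token
      contents: read
    steps:
      - uses: actions/checkout@v4
      - uses: aws-actions/configure-aws-credentials@v4
        with:
          role-to-assume: arn:aws:iam::123456789012:role/ci-deploy-role
          aws-region: eu-central-1
      # temporary credentials are now in the job environment; nothing is stored
      - run: aws sts get-caller-identity
```

Like the Vault approach described above, what you get is still a credential, but a short-lived one, so the exposure window is small.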
Okay, all right.
So what is the usual approach a DevSecOps person would take? There is this project, cfngoat, by Bridgecrew, a company, and they have created intentionally vulnerable CloudFormation templates. Let's see how these jobs work against it, and then we'll also take a look at the pipeline. So, CloudFormation 101, quickly: these are YAML templates where you define some parameters, which are basically input values, and then you create some resources in the resources section: some VPC, some route table and so on. I'm not going to deploy the stack, because that costs money; we're only going to do the static scanning examples here. Okay, let's go back to the CLI.
As you can see, we have cfngoat; I added it as a submodule here, and now I will just try to run Checkov against it. By the way, I already have the Docker image for Checkov, so I'm going to just use that. You can see it here, for example: this is the image name, and here is what it does. You can see "passed checks: 0, failed checks: 2"; it failed some checks and is telling us why: "ensure the Lambda function is configured with a dead letter queue", "IAM policies do not allow write access", "IAM policies do not allow data exfiltration". So we probably have a very loose IAM policy, and as we can see, the principal and resource are wildcarded. I'm not going to deploy anything here. So we just ran Checkov and it already generated some findings for us. What else can we do with this? I think
we can also do something like this, and I will explain why. So now we have JSON output. Two things to quickly see here: the first CLI output was useful if I am doing a manual analysis, quickly checking myself, sitting at my laptop, whether my CloudFormation code is vulnerable. But that is not good for a CI/CD pipeline, because there you have to parse the data, and CLI output is harder to parse. So instead of using that, I just used the flag to generate JSON output, because this is easier for me to parse. And why would I want to parse this JSON at all? I may want to collect all these findings and send them to a central place.
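A sketch of how that JSON could be consumed inside a job. It assumes the Checkov container's Python is available for parsing; note the exact JSON shape varies by Checkov version and is a list when several frameworks were scanned:

```yaml
# GitLab CI job sketch: collect Checkov findings as machine-readable JSON
checkov_json:
  image:
    name: bridgecrew/checkov:latest
    entrypoint: [""]
  script:
    # don't hard-fail here; we only want the findings file for now
    - checkov -d . -o json > checkov-results.json || true
    # print the pass/fail summary (dict for one framework, list for several)
    - python3 -c "import json; d = json.load(open('checkov-results.json')); print(d['summary'] if isinstance(d, dict) else [r['summary'] for r in d])"
  artifacts:
    paths: [checkov-results.json]
```

Keeping the file as an artifact is what lets a later job ship those findings to the central vulnerability management place mentioned above.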
So that's why I would like to parse them, or maybe make a decision based upon the job, based upon how many checks are failing, and so on and so forth. But another way to make that decision is just checking the exit code, because, as I mentioned, with most of these tools you are defining your jobs in bash scripts. If everything is fine, the exit code would be zero, and the job could just proceed; but in this case it's one, which means: don't proceed. Now it's up to your CI/CD tool how you take care of this, whether you want to proceed or not on a non-zero exit code, or whether you totally ignore it because, well, you're still very much at the beginning of your DevSecOps maturity. Okay, so this is something that I had already in place. Let's quickly check it: I have this very quick Checkov job,
but what if I also want to do this in a pipeline? GitHub Actions, for example, is one of the ways to create these jobs in an automated fashion. This is a test job, it's nothing, just running some echo statements, printing stuff, but this other one is the actual job. Let's quickly take a look first; this is the last commit, you can see it, and let's just run it anyway. So, quickly: this is the test job and this is the Checkov one. The test job is just print, print, print, and that's why it's successful, fine. Go to the Checkov job, and now we try to understand what is happening here. GitHub Actions is running whatever I defined in that small job section. Instead of writing everything myself, I am referencing another repository. The first reference is for the checkout: GitHub Actions runs this whole job in a container, so "checkout repo" basically means check out the current project's main branch into the runner. Then comes the Checkov action, so there are multiple actions in one job. As you can see, that referenced repository holds the actual code for the action itself; we are just referencing it, passing it some quick configuration, and then it just works. Let's quickly go through the configuration, and we can actually go to the job run to see what happened. Okay, so these are the environment details, the reference is here, and in the end it runs checkov -d in the specific directory we told it to. We have specified the framework as cloudformation, because Checkov works with CloudFormation, with Terraform, and I think some others as well. And in the end it just says the job status is success. So basically we didn't do too much: in the test job we see run, run, run, but here it's just "uses", and with this configuration we run it, and then an empty print statement says the job was successful. Of course we can configure this differently, and that is exactly what we are going to do next. We ran it for cfngoat; Bridgecrew also has more examples for us, let's quickly take a look. cfngoat was the CloudFormation one,
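The workflow just walked through could be expressed roughly like this with Bridgecrew's published action. The `cfngoat` directory matches the demo; the input names (`directory`, `framework`) are assumptions based on the action's usual interface:

```yaml
# .github/workflows/iac-scan.yml sketch
name: iac-scan
on: push
jobs:
  checkov:
    runs-on: ubuntu-latest
    steps:
      # first action: check the current project out into the runner container
      - uses: actions/checkout@v4
      # second action: run Checkov against the vulnerable templates
      - uses: bridgecrewio/checkov-action@master
        with:
          directory: cfngoat
          framework: cloudformation
```

Two actions in one job, exactly as in the demo: one checkout, one scan.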
okay
All right, let's see. Terragoat, the Terraform counterpart. I add it as a git submodule, and we have the directory here; let's just commit the update. My system is obviously configured to work with my repository, and now see: there is a new folder, terragoat. Let's quickly refresh; twenty seconds ago, so we have the new submodule. So we have Terragoat now, and all the nice vulnerable Terraform code is here, for AWS, Azure, GCP and so on, and we will now run Checkov against it. Let's quickly check that we're setting everything up correctly: docker run, with a volume mount of the terragoat playground directory.
okay let's see
Okay, so again: problems. Secrets problems, as you can see: AWS credentials committed, secret keys, administrator passwords and so on and so forth. Checkov checks for a lot of things: secret scanning, Dockerfile scanning, and AWS infrastructure scans, so it's one tool for many things, and it is telling us exactly what misconfiguration is happening. The same thing we can then do with -o json, and that will give us the JSON output instead. So in essence,
it's kind of the same. So let's make a change. Now again, we have an option: we can create another job, or another run statement within the same job. I will create a new job this time, call it checkov-goat, and change the directory that we want to scan, while it's still the same branch and still the same action that we want to reference. Maybe we can look at what frameworks Checkov supports; this time it's definitely not CloudFormation, so let's go with all of them, because Terragoat has tons of things and I want to see what happens. The output format will be JSON; the rest we don't need. Let's review it, commit, and watch it run. Back to it: with every git push that I do, the jobs start to run, and this time there are three, as we expected. Another thing, as you can see: with this approach we can track exactly which commit, or which change in other words, was deployed, and you can also see the code. So with every change — let's say there was an incident, and now we are trying to find out when the problem that caused the incident first appeared in production — you go back through the commits and try to track down things like that. Okay, so it's still running; are we still calling the same Checkov action? Yeah, fine. Okay, all of them have run. Ah, a problem, a CLI problem; I didn't catch it live. Let's try to run it again.
okay nice
And yeah, so this is how the pipelines look. Most of the time, the CI/CD tools that you are going to face if you work in such a job are going to be using YAML, but there are others as well: Jenkins uses its own syntax with Groovy files; GitHub Actions and GitLab CI are YAML-based, so it's very straightforward and clean, and however the code looks is going to be almost exactly what you see in the visual view of things. So this was just a very quick example: we basically ran a SAST, but on infrastructure code, and we generated something out of it, and the jobs were successful and so on. But that was a very short example; let's move on to another one. That one was infrastructure code; here as well I'm not doing an actual deployment, these are still SAST jobs, but I will walk through the code itself: in the jobs I'm actually running commands, and they are only going to be some sort of static scanning. This one is using GitLab CI; I'm more familiar with GitLab CI syntax and CodePipeline than GitHub Actions. In this case I used django.nV, which is a kind of intentionally vulnerable Django application, in Python. So how do we do those things that we talked about, with Django and a Python-based setup? Let me go here.
So, GitHub has a different way: you create the workflows directory and have a YAML file in there. In this case we have the YAML file in the root; this is GitLab's way of doing things. Let's walk quickly through the syntax first, how a job looks. It starts with "image: docker:latest", so we're saying the pipeline, or rather the environment the jobs run in, should use the latest Docker image. And the services: the jobs will run as containers, and each of these services will run as a container within a container, so it's a docker-in-docker approach that we're going to use here. Then I have three or four stages; we can have as many as we want, it entirely depends on your tooling, on what you are trying to do and what you are playing with. In the pipeline diagrams on the internet you will see so many different ways, the stages are different: someone would put secret scanning in, let's say, a SAST stage, or maybe in the build stage, or however you want to name it. But anyhow, just to keep things coherent and simple, most of the jobs here are security jobs; there are no actual build jobs that are building the code or deploying and all that stuff, that is not happening here. And most of the time, not exactly in this one, but most of the time I'm trying to use something Docker-based: the whole job runs in Docker but pulls another Docker image and then runs that within the main container, and the main container comes from here. Let's quickly go through it. The first stage: the django.nV application has both a front end and a back end. The front end part will use retire.js, for the JS dependencies, and this part is for safety; safety is a Python-based dependency scanning tool. Then we have static scanning: both of these jobs have the stage called "sast", but the jobs themselves are named secret_scanning and, in this case, sast; we could call them something different as well, not a problem. But the point is, it always depends, that's what I'm trying to say: you may be running secret scanning first, even before SCA, and so on; it depends on how you do it. TruffleHog is the secret scanning tool, the secret detection tool, that runs over your commits, your recent commits, sees if there was something committed to your code, and gives you output if it finds any secrets.
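A hedged sketch of what such a secret-scanning job can look like with TruffleHog's container. The `filesystem` subcommand and `--fail` flag reflect TruffleHog v3's CLI as I understand it; verify against the version you actually pull:

```yaml
# GitLab CI job sketch: scan the checked-out working tree for secrets
secret_scanning:
  stage: sast
  image:
    name: trufflesecurity/trufflehog:latest
    entrypoint: [""]
  script:
    # --fail makes the job exit non-zero when a secret is found
    - trufflehog filesystem --fail .
  allow_failure: true   # start in report-only mode, as discussed earlier
```

Starting with `allow_failure: true` matches the "start small, don't fail the pipeline at first" advice from earlier in the talk.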
Okay, let's go through the syntax one by one, very quickly. The first stage is going to use a Node image, because to install retire.js we need npm, and that is available in the node image. Then for retire itself, once we install it, we run it: retire with --outputformat json, because again you want things to be machine-readable, and some output path, and so on. And then you have this important one, allow_failure: if the job fails, do you want to move ahead, or do you want to say, hey, the pipeline should fail if this job fails? So let's say we set it to true and see how things go from there; we're saying basically, okay, if it fails, fine, let it fail, but in some of these we can set it to false and then observe the behavior. That was quick. And then artifacts: an artifact is basically any file that was generated. For example, here, as the output of retire.js, we generated this JSON file, and now we are telling GitLab CI: hey, this is one of those artifacts, upload it and keep it for me, I may need it at a later time. "when" basically tells GitLab CI when the job should run; we can have some conditions here, but in this case it's a very straightforward pipeline, so it's always on this job. Same stuff here. I missed one thing: the script tags, what they are doing is just running bash commands, as simple as that, nothing too big here. Here we have something different: I'm using before_script, because, well, I could have run all these commands in the same script tag as well, it depends on you. The before_script pulls a Docker image, then runs that image, outputs the results to a JSON file, and once it's done it removes the image. Why? All of this is going to be happening on a compute instance, in an actual production environment, and what you're doing is continuously pulling so many images that they are going to pile up on disk, so clean them up once the job is done. Same thing here with the artifacts, same story but different tools. There are different tools ahead as well: SAST has Bandit, for example. And as I said, I also have code for dynamic testing in there, running for example Nikto, an SSL scan, nmap, and the ZAP baseline scan, whatever we want to do. But in this case I would have needed a deployed application on an actual server, which I don't have, and that's why I just commented the whole thing out. At the end we have this hardening job, infrastructure hardening: here we're basically running Ansible with some dev-sec hardening roles. I will briefly touch on this as well.
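For reference, the commented-out DAST part mentioned a moment ago could look roughly like the job below, using ZAP's baseline scan. The staging URL is a placeholder, and the image name reflects the older `owasp/zap2docker-stable` naming; newer ZAP images live under the zaproxy organization:

```yaml
# GitLab CI job sketch: passive DAST baseline scan against a deployed app
dast_baseline:
  stage: dast
  image:
    name: owasp/zap2docker-stable
    entrypoint: [""]
  script:
    # -t is the target, -J writes a machine-readable JSON report
    - zap-baseline.py -t https://staging.example.com -J zap-report.json
  allow_failure: true
  artifacts:
    paths: [zap-report.json]
    when: always
```

Unlike the SAST jobs, this one only makes sense once there is a running deployment to point it at, which is exactly why it was commented out in the demo.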
Actually, this is the better view. The dev-sec hardening framework is exactly what it says: you can use this tooling to harden your infrastructure, using a combination of Ansible; you can also use Chef or Puppet or whatever you want. This project defines some hardening baselines. For example, the ansible-collection-hardening: we go there, or to the dev-sec project itself, and we see it has a Windows baseline, a Linux baseline, an SSH hardening baseline, and so many more. There are already baselines created for you for whatever you're probably using, in this repository, and then you can just start using them, and that is exactly what we are doing in this job; we reference that here.
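A minimal sketch of such a hardening playbook using the dev-sec collection. It assumes `ansible-galaxy collection install devsec.hardening` has been run on the machine executing it:

```yaml
# Apply the dev-sec SSH and OS hardening baselines to all inventory hosts
- hosts: all
  become: true
  roles:
    - devsec.hardening.ssh_hardening
    - devsec.hardening.os_hardening
```

Each role carries the community-maintained baseline, so you get a hardened sshd_config and sysctl/OS settings without writing the individual rules yourself.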
There is one thing that you cannot see here, because I just committed it today. Usually, when I reference a file like this, it needs to be accessible within the container, because this whole job is going to run in a container. How do you make that file accessible? You make the file part of the project, and then in the script you change directory to wherever that file is, or if it's in the root you just reference the file directly, and so on. These are some of the things; we could also have had an InSpec profile job here. I mentioned InSpec for compliance as code: before we run the hardening job, we could run an InSpec job that first checks whether the server already complies with my baselines, and if it already complies, we probably wouldn't run the hardening at all. And then you can create a relation between these two jobs, like: this one runs only when the first one has failed, but that first job failing still doesn't fail the pipeline, because that's fine. Something like that. Yeah, we made a change; let's look at the pipeline itself and visualize things here.
Now we go there; it should have triggered a pipeline. Because we had commented out some stages, we can only see the two stages GitLab can find jobs in; in the code I have defined four stages, but two of them don't have any jobs as part of them, and having an empty stage does not make any sense, so GitLab was smart enough to just skip them. Let's see what is happening here.
Okay, this one finished, let's see. Let's break it down very quickly. Preparing the environment: the runner first tries to create that environment, using the Docker executor, fine. In that environment, it gets the source from the git repository, so here it basically pulls the code, and then the job script runs. And what is happening here? This is the SCA front end job. What were we doing? We were running these three commands in a node image: npm install, sure; npm install -g retire, sure; and then it runs retire with JSON output. But if you look, we were only trying to find high severity issues, so those are the flags that we passed; maybe that's intentional in some cases, because, as I said, you don't want to fail things at the start. And even if it finds something, we are still explicitly telling it to exit with code 0, which reads as success, and hence GitLab thinks the job succeeded. But let's look at the JSON file: what does it actually say? Ah, right: it did find issues, medium, medium, medium, medium. Maybe that's also why. So it did find issues, but because of the flags the job was successful and we let it go. Let's see the other jobs; okay, those are failing. Back to the pipeline to see what is happening. Again dependency analysis: it pulls an image, runs that image, puts the file into the oast-results directory, and then removes the image. "No files to upload", so probably some error. In the real world, if we had configured it to fail the pipeline, this syntax error would have killed the pipeline. Now we go to another job in the pipeline, secret scanning; let's see: same problem, no such file or directory, or the argument is wrong, or whatever. So yeah, that's the idea; I'm not going to troubleshoot it right now. That's how the pipeline will look in reality: it will have different stages, like build and then testing and whatever, and within those pipelines you go ahead and add your own jobs, or tasks, or whatever your tool calls them, and then you run them and analyze, hopefully automatically, if things are machine-readable. And based upon that you also make the decision whether you want your pipeline to succeed or fail. In this case we see the pipeline passed, but with an exclamation mark. Why? Let's see the reason: because we have set allow_failure to true in all of the jobs, so we're saying that even if we have found vulnerabilities, just let it go, the pipeline should not be affected. That's what we're trying to do.
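The gating knob being discussed boils down to one line in the job definition. A sketch of the SCA job described above; `node:18` and the report filename are illustrative choices:

```yaml
# GitLab CI job sketch: retire.js SCA with an explicit failure policy
sca_frontend:
  stage: sca
  image: node:18
  script:
    - npm install -g retire
    # retire exits non-zero when issues at or above the threshold are found
    - retire --severity high --outputformat json --outputpath retire-report.json
  artifacts:
    paths: [retire-report.json]
    when: always          # keep the report even when the job fails
  allow_failure: true     # flip to false once the team trusts the signal
```

With `allow_failure: true` a failing job only produces the exclamation-mark warning seen in the demo; setting it to false makes the same finding block the pipeline.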
Yeah, that's actually it; do I have anything else? Let's continue to the maturity model in a moment. So the demo is over; any questions on the demo, or before I move on? — Okay, I don't see questions from the chat, but I do have one: how do we actually share files between the jobs? I don't see any options for that. — You can pass the files; again, it depends on what tool you are using and what the exact syntax is to pass a file, but yeah, you can pass the artifacts; that's why you store them as artifacts, that's the point. — Okay, all right.
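In GitLab CI terms, passing a file from one job to the next looks roughly like this; the two script names are hypothetical placeholders, not tools from the demo:

```yaml
# Producer: generates a report and registers it as an artifact
dependency_scan:
  stage: test
  script:
    - ./run-dependency-scan.sh > sca-report.json   # hypothetical scanner wrapper
  artifacts:
    paths: [sca-report.json]

# Consumer: declares a dependency on the producer, which also
# downloads the producer's artifacts into this job's workspace
upload_findings:
  stage: report
  needs: [dependency_scan]
  script:
    - ./upload-findings.sh sca-report.json         # hypothetical uploader
```

The artifact lives on the GitLab server between jobs, not on any particular runner, which is exactly why this works across multiple runners.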
That kind of makes sense. So, going a little bit out of context, what happens when we have multiple runners and we want to share files between them? Multiple runners, yeah. Let's go back to the code. In the runner context you have an execution environment, so when we specify the artifacts path, whichever runner ran the job will upload this file. This is the producer: it uploads to the GitLab server. The file is not going to stay with the runner, because ideally you should be cleaning up the runners; you don't want cached files to collect over there. So the file will be uploaded as an artifact, and in this terminology you always call the file your code creates an artifact. Then in the next job you declare a dependency on the previous job: I want to use that artifact as an input for the next job. Okay, but in that case it's living somewhere. Yes, and you can also do multiple things. I mean, I think every CI/CD tool does allow that, but if some tool does not, then you just upload it
somewhere yourself, because you want it stored: use a bucket, use something else. Okay, thank you. All right, any other questions? No, I don't see any in the chat for now, so we can continue. Good, okay, let's move forward. We have done all the magic of DevSecOps and we have created various security hardening and vulnerability checks. In essence, what we're trying to do is automated hardening, or automated vulnerability management; that's at least what I see so far. But in reality, if you want to be idealistic, if you want to say that philosophically DevSecOps is about
bringing security close to DevOps, then you can also apply the same thing post-production. So what can be automated post-production? Response can be automated in certain cases. For example on AWS, whenever event X happens, and X can be that someone made a change to a production security group (a production security group being any security group attached to your clusters, your compute instances, EC2, ECS, and so on), you go and revert the change immediately and generate an alert. This is not rocket science to do. And yeah, security could have its own pipeline that is not deploying anything except these automated actions; it could be a Lambda function, for example. You're still creating software. Sure, it's not the main application of your company, your product, but you are applying the same principles, and you are now a producer of software that makes your life easy and makes governance easy. You can also be running compliance as code in an automated fashion, like cron jobs. What else? Yeah, as I said, response actions, and some monitoring stuff where you're not taking a response action but you still want to watch specific events. So yeah, anything post-production that you see fit. The cloud basically gives you tons of opportunities: use them, and do as much automation as possible.
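The revert-and-alert idea can be sketched as the decision logic of such a Lambda-style function. This is a toy sketch: the event shape loosely follows CloudTrail's AuthorizeSecurityGroupIngress records, and the protected-group list, function name, and returned action are all assumptions for illustration. A real handler would go on to call the EC2 API (for example via boto3) to revoke the rule and then publish an alert.

```python
# Sketch: inspect a CloudTrail-style event for a security-group change and
# decide how to revert it. PROTECTED_GROUPS is a hypothetical allow-list of
# production security groups; a real Lambda would call
# ec2.revoke_security_group_ingress(...) and publish an alert (e.g. SNS).

PROTECTED_GROUPS = {"sg-0prod1234"}  # hypothetical production security groups

def decide_remediation(event):
    detail = event["detail"]
    if detail.get("eventName") != "AuthorizeSecurityGroupIngress":
        return None  # not a rule addition; nothing to revert
    params = detail["requestParameters"]
    group_id = params["groupId"]
    if group_id not in PROTECTED_GROUPS:
        return None  # change was not on a production group
    # Hand back the parameters a real handler would pass to the revoke call.
    return {
        "action": "revoke_ingress",
        "group_id": group_id,
        "rules": params["ipPermissions"]["items"],
    }
```

Wiring this up would typically mean an EventBridge rule matching the CloudTrail event, with the Lambda as its target, so the revert happens within seconds of the change.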
Now, once we have all these shiny things set up, how do we measure maturity? For DevOps maturity, Google has a very nice tool; I don't remember its name, but you can google what the Google tool for measuring DevOps success is. You basically answer a few questions (release frequency, frequency of change, number of incidents, and so on) and in the end you get a score for how mature your DevOps setup is. Now, the same thing exists for DevSecOps, but it's a little different, and there is no calculator that I could find to measure it, except this big project, the DevSecOps Maturity Model maintained by OWASP, the OWASP DSOMM. Yeah, feel free to go through it; I will quickly go
through some of these. So, for example, build or deployment: most of the time, level one maturity means having a defined process in place which will be followed, knowing what a healthy DevOps pipeline should look like in the different stages. Let's say in the build stage you have, for example, continuous integration; that's pretty basic. A defined deployment process: how the deployment works and who can do these deployments. For example, only the master branch will be able to deploy to production, only the staging branch should be able to deploy to staging, things like that. Then patch management: a patch policy is defined, automatic pull requests for patches. So this is the way of measuring how many of these you have achieved, and eventually, when you have achieved all of them, you know you are at maturity level four. Generally, for level one and level two, the basic idea is that you don't go ahead and start failing your builds, you don't go ahead and start failing your pipelines. That's, I think, the gist of this whole thing; there may be some things I'm missing out, but that's at least the summary.
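The counting idea, that you reach a level only once every check at that level and below is satisfied, can be sketched as a toy scorer. The checklist items below are invented examples for illustration, not the actual DSOMM content:

```python
# Toy maturity scorer: a level counts as reached only if every check in it
# and in all lower levels is satisfied. Checklist items are invented examples.
LEVELS = {
    1: ["defined build process", "defined deployment process"],
    2: ["patch policy defined", "artifacts are versioned"],
    3: ["automatic PRs for patches"],
    4: ["pipeline fails on critical findings"],
}

def maturity_level(achieved):
    """Return the highest fully satisfied level, given a set of achieved checks."""
    level = 0
    for lvl in sorted(LEVELS):
        if all(check in achieved for check in LEVELS[lvl]):
            level = lvl
        else:
            break  # levels are cumulative: a gap caps the score here
    return level
```

The real DSOMM has many more dimensions (build, deployment, patch management, test intensity, and so on), each scored separately, but the cumulative-checklist principle is the same.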
So yeah, this is a cool project; if you're interested in this, you should definitely check it out. And what else do we have? Yeah, the job market, the last thing. How does a DevSecOps engineering position look? In Pakistan I don't know, I've been out of that market for three years, but in the EU I have seen these positions for the past three years that I have been here; there are positions in the market. The basic skill set: you need to be able to code, at least for automation, in Python and Bash, sometimes Go as well, because Kubernetes is on the horizon, it's written in Go, so Go is also getting in. And you need to be able to develop these pipelines; knowing one or two tools is usually good enough. It always depends on which tool the company uses and on the job position, but since most of them are using YAML, you will be fine if you just understand one. Git, for sure, is crucial, and I think for security people coming from a pure operations background who haven't done any dev, this is the kind of thing that's missing; I wasn't using it often before I did my first DevSecOps job. And also,
infrastructure as code: Ansible, CloudFormation, Terraform, Docker. These are the things your DevSecOps person needs to know, and what they will be doing is what we talked about: putting security stuff, automation stuff, into code and making it part of your daily life. How does it help? You have a pipeline, and the security people are not the only ones looking at that pipeline; everybody is looking at it, so it increases transparency. If something fails, people are looking; if something passes, people are looking; there are both aspects to that. And who is watching? Dev, ops, and sec, so all the tech divisions, all the tech teams, are watching. This kind of improves transparency, and then the speed of fixing things, creating that atmosphere in the company. Yeah, that's pretty much it. If there are any questions, let's have them now, and otherwise you guys can reach out anytime in the cloud security channel on Discord. Okay, so the last question would be (I don't see one in the chat again): what about other resources? Let's say I'm a beginner; how do I get started as a DevSecOps guy, and where do I learn how to create these pipelines,
their syntax and everything? So there is, I think, devsecops.org; yeah, devsecops.org is one of the great places. If you want to get really hands-on, I would definitely recommend you check out this project, AWS DevSecOps Factory, though this is again very specific to AWS. A couple of the architectures that I presented are from here; this is the source, and it's a good project to start with. I mean, just start something: start implementing it with whatever your choices are, don't look only at AWS services. Go for GitLab if you want, mix it with Docker, mix it with something else, depending on what you want to learn. Implement one of these pipelines yourselves; this is a good place to start. Then there is also, as I shared, the DevSec hardening framework: if you want to learn compliance as code or infrastructure hardening, this is one of the things you can do. Generally speaking, if you stick to this slide, these are the skill sets; if you have these, you're good. And it depends: even if you only cover some of these, you will stand out from everyone else. I still don't know Puppet and Chef, I've never worked with them; Ansible is mostly good enough for me. Jenkins I've worked with a little bit, GitLab a little bit more, CodePipeline a lot. Git as usual, Python and Bash; Go, I still can't code in Go. Docker, Terraform, CloudFormation. So yeah, it really depends on what you want to build. Just start, for example, with your own website if you have one, or start with, let's say, DVWA, so you pick up a vulnerable website, and deploy TerraGoat or some vulnerable infrastructure: a vulnerable application on vulnerable infrastructure. Create two pipelines to deploy these, or one, whatever, and then do all of these things as part of the pipeline. I think anyone who builds these two pipelines should be
able to cover DevSecOps quite well. All right, that is cool. So, I don't see any questions; the last thing is just to post these links in the Discord server. I will do that, and I will also link them in the YouTube channel, and I think we are good to go. Thank you, Hassan bhai, for joining; it's been two hours. Thank you very much for your time, I know you are busy most of the time, but thank you very much for joining. And thank you, chat, and guys, whoever of you are still here; we have, I think, around 78 people, and it was 30 at the start, so yeah, it's a good start, right? Thank you everyone, thank you for joining. Inshallah, see you again for the next tech talk. Allah Hafiz, thank you guys.