
Cool, so I'm here today to talk to you about gophers, whales and clouds, and hopefully some things that I played around with that maybe improved my efficiency a little bit. First of all, my name is Glenn, or devalias; you might have seen me on Twitter spamming out everybody else's talks today. I'm a penetration tester at TSS and also a polyglot developer, and I get up to a couple of other things when I'm not using computers, including drinking butter in my coffee.

I wanted to start off by talking about a few trends and buzzwords that I've noticed across the industry and the random blog posts that I read. I was going to go into lots of stats about why you should care about these things, but if you really want that, it's already on the internet. So instead I'm going to talk about just a couple of things that caught my interest and that I wanted to play around with more.

First up: Docker. If you haven't heard of this before, you've probably been living under a rock, but it's a lightweight container virtualisation system. It doesn't give you the same kind of isolation that you'd get from VMs, but it also uses far fewer resources on your system to run. It does share the kernel with your host, so if you get owned in the container then you're probably going to get owned anyway, but for my uses that's not so important. It takes a base operating system image, and if you use something like Alpine that can be as small as five megabytes, and then each program you install adds a new layer on top, which can be shared among all your containers, so it really does that kind of efficiency thing that I like. The default things people use it for are having their build systems all wrapped up nicely, clustering their data in production, and keeping environments the same throughout their build, so that the software they build actually runs. To me, it's more about having my toolkit available in a way that, on any box I run it on, I just know it's there, and it can clean up after itself when I'm done, so I'm not cluttering the box up with dependencies or things like that.
The next thing that I like the idea of, and that seems to be pretty popular these days, is serverless, and functions as a service. If you haven't heard of it before, serverless is kind of the concept of "throw it up in the cloud and make it somebody else's problem", taken to the extreme. The first thing is that it's really cheap, a couple of fractions of a cent to run a function, and if you're not actually running code at the time, you're not paying while it sits there idle. If you want it to scale out, you just hit it with more workload, magic happens, and your service stays online; at least, so goes the theory. Functions as a service is one of these serverless design patterns: you take a traditional monolithic application, break it down into a series of microservices, and then pull out all the functions that you used to build them and make them their own thing. What this allows you to do is get these tiny bits of code that are really easy to understand and modular, and you can update them without (hopefully) breaking the other areas of your application.
You may have heard of some of these serverless things before: Amazon's got Lambda, Google's got their Cloud Functions, and Azure also has Functions, because they're not Amazon and they actually know how to name things. There's a chart that came from a blog I was reading earlier in the year, and it just shows a pile of different things that are happening in this functions-as-a-service space, so there's quite a lot going on at the moment.

The next thing that I really like the idea of is Go, and it's not just because it probably has the cutest logo, though that is part of it. This is a programming language that came out of Google in around 2009, and it's got a bit of a C-ish feel to it, but without all of those annoying things like pointer arithmetic, or managing my own memory, or any of that stuff that just has no place in 2017. It's compiled, and it runs cross-platform, which is really nice. It has static typing, because with all these dynamic runtimes around... just give me some types on my variables, please. It's memory safe, and it's really good at concurrency: none of this playing around with managing all your threads and locking, it just kind of works. Unfortunately it's not a functional programming language, and I do have a bit of love in my heart for Scala, but unlike Scala it doesn't have a heavyweight runtime that it needs to run on, so that's pretty good. It's growing fast, there's a lot of library support out there, and most of all, it's kind of fun.
So, where did this all start? I was hanging out back in Canberra in Australia, talking to a friend, and he sort of convinced me that I should come and do a talk at a local meetup. I was sitting there playing around with ideas, and I just said, yeah, alright, let's do it. I wanted to know what I could spend my time on that people might be interested in hearing about, and I started with a tool that I was already kind of familiar with, but which takes a bit too long to run, in my opinion. If you haven't heard of it, GoBuster is a directory and DNS brute-forcing tool, and it basically allows you to take giant word lists and just smash them up against things really quickly. But if you've got a huge word list, then it's probably going to take a while. So I thought: why don't we take this and run it in Lambda? We get that fun serverless aspect, and maybe we could divide up that word list and run it in parallel, or something like that. Lambda natively doesn't support Go, which is kind of annoying, but you can use one of their supported languages to wrap around it and just run the binary directly, so you can make it work. Hopefully they'll support it at some point, but in the meantime there are a lot of projects out there that can help us. One of those projects is Apex, and it's kind of just a little command-line tool that allows you to easily compile, deploy, and then invoke these Lambda functions, and it supports Go. So the plan was basically: dir busting is too slow, so we'll cut up the word list, send it up to the cloud, run it in parallel, some kind of black magic happens here (I'm not too sure), and then hopefully I profit and get to hack all of those things I found.
I played around for quite a while, and I realised that all of these things don't translate very well into slide decks. There was a lot of time spent hacking on code and reading about too many things; I got trapped in some rabbit holes and went off on tangents that were completely unrelated, although they were all pretty interesting. Thankfully, it seems like I didn't violate any terms of service, because last time I checked my account was still active. At the end of all this I had some code running on Lambda that seemed to do the dir-busting thing, and it was all tied together with some hacky bash scripts. At this point I should probably pray to the demo gods, but I forgot to do that this morning, so here's a little video showing just how boring it can be to watch some lines of text scroll on a screen. You'll see there that it cut the word list up into about 50 different chunks, and this is it getting all of the brute-forcing results back from the Lambda invocations; that ran through in about seven or eight seconds. That wasn't very interesting to look at, so I thought I'd build a chart instead, because what's a presentation without a chart? While you probably can't read all of the numbers on the screen there, the main thing to see in the middle is that running GoBuster with about a hundred threads seemed to give the best performance.
The three middle bars there are splitting the word list into between 20 and 100 Lambda slices, and at best it ran in about 4.8 seconds for those 20,000 words. So, what I learned from that: splitting things up and running them in parallel makes them go fast, who knew? About 50 Lambda slices with a hundred threads was the optimum out of the tests that I ran, at about 4.8 seconds, and there wasn't a lot of difference between twenty and a hundred slices running on Lambda. The next thing I found is that Lambda is actually really cheap: I calculated it out, and about a dollar will give you five and a half days of compute time. The total cost for all of the testing I did was about five cents, and that was for about 17,000 invocations running for about six hours of compute time, so it works out pretty well, particularly when you're on the kind of money that pen testers make. One thing I found really annoying, and that I would do better next time: don't manually collect all your timing data and play with it in Excel. It's just a real pain, it takes a long time, and it's not a good use of your time. There is a GitHub link there with some code on it, and I'll tweet out this slide deck afterwards, so if you want to play around with it yourself, it's online.
After I ran GoBuster on it, I wondered what else I could apply this concept to, and I was thinking of things that take a long time and just aren't really that fun: maybe Nmap UDP scans, or scanning subnets and running those basic checks, or taking screenshots of all of those websites that you find out there. And I have no idea if this would work, because I've never done fuzzing, but if you can fit it in that 300-second window that Lambda gives you, maybe you could run fuzzing payloads across it. There's heaps of cool stuff you could probably do, but the Lambda environment might kind of get in your way: there's that 300-second limit, and it's just hard to get access to all of the things you need to debug while it's running. So I started looking into other things that Amazon provides, and of course I came across their basic cloud server offering, EC2, and auto-scaling groups. This gives you a server in the cloud, and you can spin up multiple copies of it, and yeah, it just seems to work. To get that Lambda feel to it, though, the auto-scaling didn't operate as quickly as I would have liked; there was a few minutes' lag time in spinning things up and tearing them down. And anyway, I wanted to install Docker on it and run things in containers, and that was more setup as well.
So I looked a bit deeper, and it seems they've already got something that builds that. ECS is their container service, and it basically adds that kind of Docker layer on top of the auto-scaling servers. This was nice, but what Lambda lets you do is just feed in events and get the data back out the other side, and while I could build something like that myself, who has time for that? Looking a bit deeper, I came across their Batch service, which lets you define a job type, in this case GoBusting, and then connect a queue to it and just feed the data in and out. That runs on ECS, which runs on EC2 and auto-scaling, so it seems like they've got that abstraction over all those services figured out, and I probably wouldn't have to do that much to make it work. I played around with it a bit, but I didn't go too deeply down that path this time. I did learn that, basically, if you need something, Amazon has probably already built it before you got there, so I kind of read through the billion services they release every two days.
Another area I found, while thinking about how I could get away from those Lambda restrictions, is this project called OpenFaaS, or open functions as a service. It came out around the end of 2016, and it's just been growing super fast: they're pushing releases every day, new features are coming out, and it's kind of just really exciting. It allows you to run those functions as a service, kind of like Lambda, but on whatever hardware you want, and it leverages Docker containers to do so, so it already had some of the things I wanted to play with. It comes with a command-line tool, and essentially you define a little bit of configuration about how your function should look and what container it should run, then build, deploy, and invoke it, just like Lambda. Another nice thing with the command-line tool is that you can actually pipe data in and out of it, so you can run a function almost as if it were a binary on your system. To go from a Docker container to an OpenFaaS function, there are only about four extra lines you have to add to a Dockerfile: reference your image, add the little watchdog binary (which is what will accept the events that you're sending in), tell it what it should run when it gets an event, and then tell the container to start the watchdog when it starts. So it's pretty quick to get going.
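Those four lines look roughly like this. This is my sketch of the OpenFaaS classic-watchdog pattern; the image name, release version, and gobuster flags are placeholder assumptions, not taken from the talk:

```dockerfile
# Start from the existing tool image
FROM yourname/gobuster:latest

# Add the OpenFaaS watchdog binary, which accepts incoming events
ADD https://github.com/openfaas/faas/releases/download/0.6.9/fwatchdog /usr/bin/fwatchdog
RUN chmod +x /usr/bin/fwatchdog

# Tell the watchdog what to run for each event it receives...
ENV fprocess="gobuster -u http://target -w /dev/stdin"

# ...and start the watchdog when the container starts
CMD ["fwatchdog"]
```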
So I was looking at other kind of offensive ways that I could use Docker, and what kind of tooling is already out there that I don't have to build. The first level is just taking existing tools or operating systems that you use and wrapping them up in a container. That would be things like Kali Linux, who have put out Docker containers for their stuff, or just common tools that you'd use day to day: there's GoBuster, Nmap, maybe BeEF and Empire, and basically any tool you've got; if you Google for it, it's probably out there. The next level of using Docker in an offensive way that I saw was projects that take these individual tools and kind of run them together. Brutesubs is a project that does subdomain brute-forcing with a number of the common tools, like GoBuster, recon-ng, altdns, etc., and combines the output from all of them to give you a richer set of information, so that's a cool step up from just running one tool and looking at the results. The next way I saw, and this is going even further into that software engineering space, is actually turning these tools into more of a tool-as-a-service system. Kubebot is a Slack bot that runs on top of some Kubernetes orchestration (which is basically clustering for Docker containers and similar things), and what it lets you do is run a command like "go Nmap this thing"; it'll queue it in the background, run the tool in the cloud, gather those results, store them, and then return the differential results to you, and you just hang out in Slack and drink coffee while this goes on, so that's pretty cool. Because there are so many things out there, like Docker containers for everything, and for any tool you look at there are probably a hundred different containers, there are a couple of rules I use for myself to figure out which ones are worth looking at or using, or whether I should go build it myself.
Firstly, if it's the official container for the project, it's probably a good place to start, because hopefully they'll maintain their own stuff. Hopefully. Next up, how many times has it been starred, and how many people have downloaded it? Because if it's popular, then it's probably okay. A big one for me is whether the Dockerfile is available: this is basically "do I have the source code", can I build it myself, can I see what's built into it and what should or shouldn't be there? And if you need to take it and build your own version, this is really nice, because you can just modify what someone else has done and tweak it to update it, or whatever. Automated builds let you link the Docker build to a GitHub repo, so whenever you push new source the images get updated; if I find something like that, it means it's probably more likely to be up to date, which is good. When it was last updated is kind of self-explanatory. And finally, how big the image is: at least in the early days, but even now, there are quite a lot of Docker images out there that are just hundreds and hundreds of megs, or gigs, for a tool that should only be maybe 10 or 20 meg, and there's just all this extra bloat that doesn't need to be there. So I try to look for things built on Alpine if I can help it, or just the smallest size available that fits my other criteria.
While I was looking into how I could get really small containers, I came across a couple of tricks where you can use Go and some features of Docker to get some really tiny containers. First, Go allows static compilation, so we can get a single binary without any dependencies, and by passing a few extra flags we can remove some redundant debug tables that don't really need to be there when we're running it. Next, take UPX, which is an executable packer that makes binaries smaller while they still run basically the same. Then we leverage Docker's multi-stage builds, which let us define essentially a separate environment for all of the compilation and packing of the tool, and then just copy the compiled binary out at the end into an empty container. I did this with GoBuster, and I ended up with about a one-megabyte container, all up, that you can just run, so it seemed to work out pretty cool.
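A sketch of that multi-stage pattern (my reconstruction, not the talk's exact Dockerfile; the Go version and gobuster import path are assumptions, and upx may need to come from a different package source depending on the base image):

```dockerfile
# Build stage: compile statically, strip debug tables, then pack with UPX
FROM golang:1.9-alpine AS build
RUN apk add --no-cache git upx
RUN go get -d github.com/OJ/gobuster
RUN CGO_ENABLED=0 go build -ldflags="-s -w" \
      -o /gobuster github.com/OJ/gobuster
RUN upx /gobuster

# Final stage: just the packed binary in an otherwise empty image
FROM scratch
COPY --from=build /gobuster /gobuster
ENTRYPOINT ["/gobuster"]
```

The `-s -w` flags drop the symbol and DWARF debug tables, `CGO_ENABLED=0` keeps the binary static, and `FROM scratch` means the final image contains nothing but the tool itself.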
Another cool thing that I found with Go is this project called Cobra, which allows you to build really nice command-line interfaces. It's used by a lot of projects out there, like some of the Docker CLI libraries, and really just heaps, because it's nice to use. One of the things I find delays me a lot when I'm starting a project is that I want it to be pretty, and I want to have just the right tools in place before I start, and then weeks have gone past while I research all the tools, and then it's time for BSides and I haven't written any code. So Cobra is nice: you can basically just clone down the program and build it with Go, then you init a new project, go into that folder, add some subcommands, and run it, and you've got a command line that looks almost as nice as Docker's. I don't know if you can read that very well up there, but yeah, that took about three seconds of commands to generate.

Now, gopherblazer was this cool idea I had to get rid of all my shell scripts and hacky tools here and there, wrap up a lot of the docker run commands, and just make my life easier. If I don't have to think about it and it just works, then that's good for me, and I can get on with doing the fun things, like working out how to make that really crazy cross-site scripting payload run. I wanted to add in connectors that would run Docker, or Lambda, or maybe OpenFaaS functions, and have it all from this single tool.
Currently I don't really have very much there, because starting projects is hard. I did get the name done, so I've solved one of the software engineering problems, followed a lot of rabbit holes, and got a couple of proofs of concept working, and that's kind of where it's at at the moment. There is a repo, and it's only got the proof of concept, but I'm hoping that maybe over Christmas I'll have some time to hack on this a bit more.

So, future directions: actually work on that tool. I want to explore more of the tools that I use day to day and get them running in containers in a nice and simple way, and then go one step further and figure out what the manual steps are that I do with the output from these tools, and how I can automate them and build them into a workflow where I have to do less of the boring, mundane stuff. Also, just reading more about Docker, and ways I could use or abuse it. There's this one project I came across called SONM; it's one of these cryptocurrency ICOs that have raised way too much money with a way-too-terrible-looking white paper, but it basically sounds like "take Docker container, run on miners' computers, profit", and I think there are probably going to be some security concerns in that space, so it could be fun to look at.
Takeaways from this talk: originally I thought I was going to go down this really deep technical route and talk about all these things you should do and why you should care, but when I started writing the slides I found that the things I wanted to put here were more kind of cultural, or attitudinal. The first one is just: be curious. If you see a project out there, or some new tool, play around with it and see what it can do and how you can use it to disrupt the work you do day to day. And I guess: don't get stuck in "this is how we do it, because this is how we've always done it"; if there's a better way, let's do that instead. And finally, if you do play around and learn something, share it: get out there, speak about it, send pull requests, do the open source thing, write a tool and put it on the internet. Let's just get all of that stuff that's trapped in our heads and behind closed doors out there, and bring the whole industry up together. Finally, I just wanted to leave you with this quote that I quite like, about thinking different. I believe that it's the crazy ones, the misfits, the rebels, the ones who just don't believe in the status quo, that get out there and shake things up, rock the boat, and really cause things to be changed. So be one of those people. We've got some time left, so if anyone's got any questions... Thank you.