
Mo Khalilov: Linux Thick Client application - zero day hunting

BSides Calgary · 2021 · 52:00 · 87 views · Published 2021-12 · Watch on YouTube ↗
Category: Technical
Style: Talk

Transcript [en]


So, the topic for today is going to be vulnerability research. First of all, who am I: my name is Mohammed, everybody calls me Mo, but the majority of people at work call me Khalilov, by surname — I'm from Soviet times, and there people go by surnames. I'm the author of 30-plus zero days, ranging from thick client applications to Linux browsers and so on. Currently I'm head of the red teamers, and I've got a small research lab where we concentrate on vulnerability research — hunting for CVEs and so forth. I don't use a lot of social

media accounts; I've got a Twitter account which I check only about once a month, so I'm not going to be reachable there very often, and I maintain a presence on GitHub, where I develop tools and publish them. Now, what's the agenda for today? The agenda is vulnerability research on client applications — specifically open source applications developed in C, C++ and so on. The majority of it is going to be open source and Linux-based, but the same techniques can be applied to Windows-based open source applications. Right now, though, we're

going to look only at a white-box/grey-box approach, where we have access to the source code; you can apply the same techniques black-box as well. On the agenda, we'll look at the common questions around fuzzing and finding vulnerabilities — where to start — then at the life cycle of vulnerability research and development: where it starts, where it ends, and why we need fuzzing. We'll look at the types of fuzzing, because there's a variety of ways to do it, and then try to understand instrumentation —

instrumentation is basically a way of injecting a small set of functions that wraps each function call. We'll look at some of the common languages we try to break and find vulnerabilities in — essentially C and C++, compiled with GCC or Clang. For the fuzzing itself we're mainly going to use American Fuzzy Lop (AFL), so we'll try to understand what AFL is and its layout. Then we'll look at how to find the right targets to fuzz — where we should actually search for the

right application to target to find vulnerabilities and zero days in. We'll look at how to optimize and configure our tool, which is AFL, how to create the environment, and how to create and optimize the corpus — "corpus" might not be clear right now, but it's basically the data we're going to feed into the application. Afterwards we'll look at triaging the bugs: if you find bugs, how you actually trace where the vulnerability occurred in the application you're trying to find

the vulnerability in. We might not be able to write an exploit at this stage, but we'll try to confirm the issue exists, identify what kind of issue it is, and hopefully work out what needs to be done to mitigate that kind of vulnerability. OK, so, common questions: how do you choose a target? The target is basically what you want to assess, what you want to fuzz. A good target is one that parses some sort of content and presents some sort of data back to the user, and it should have quite a lot of functionality to play with — the

bigger the scope — more function calls, more functionality and features — the better for us, because we'll have multiple ways to interact with the application. Those kinds of applications are really good targets, and on top of that, libraries that are old are also good for us, because there's more chance they haven't been patched up. Some years ago there was the Heartbleed vulnerability — basically a memory over-read that exposed information from adjacent memory addresses on the heap. Nobody had really looked at it, but with proper fuzzing, the right timing

and the right approach, that vulnerability was discovered. So one thing I'd say is: try to find targets that are a bit old and have a big scope — lots of function calls and so on. Then, obviously, once we identify the target, we need to work out how to interact with the application — what kind of files it needs to function properly. For example, if it's a pcap parsing application — network packet captures — like tcpdump, Wireshark, or libpcap bindings in Python, we need to understand what kind of files they parse and what format those files need to be in, so we need to actually understand them.
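As a tiny illustration of probing what a parser accepts — here using `grep` as a stand-in target so the sketch runs anywhere; with the real thing you'd run, say, `tcpdump -r sample.pcap` and check the status the same way:

```shell
# Two candidate input files: one the "parser" accepts, one it rejects
printf 'magic-header payload\n' > good.input
printf 'random junk\n'          > bad.input

# A Unix process reports pass/fail through its exit status ($?):
# 0 means success, non-zero means failure.
for f in good.input bad.input; do
  if grep -q 'magic-header' "$f"; then
    echo "parsed ok: $f"
  else
    echo "rejected:  $f"
  fi
done
```

The same loop, pointed at a real parser, quickly tells you which of your sample files the target actually handles.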

It's not always pcap. Sometimes it could be, for example, FFmpeg, which accepts multiple types of media content and converts them to a different format or modifies them — so we need to understand what kind of files we can pass to the target. The next question is: what's the best way to mutate the files and feed them into the fuzzer? There are different types of mutation — there's dumb mutation and there's well-structured mutation. You can feed the application any kind of file and the mutation engine will take care of the rest, because it changes the content, adds bytes,

mutates bytes and applies various algorithms — new mutation algorithms come out every now and then — but having the right, well-structured file and applying the right mutation to it is going to give you results, and bugs, much quicker. That's very important. Now, how do you tell the real crashes and bugs? We'll look at an example, because during fuzzing you're going to have thousands of outputs, and going through all of them is very tedious. The thing is to start automating it — find a way to filter the large amount of sometimes

useless files into something useful, and base your analysis only on that filtered set of bugs — we'll look at that in the example too. Then, sometimes we have complex applications where we need to pull in different libraries, and for those we need to find a way to write a wrapper — a wrapper is basically your own code that calls into the target indirectly. That's something we'll also look at, provided we have enough time; we've only got about 50 minutes, so we'll try to move as fast as possible. So, the life cycle of the research:

it basically starts with target selection, as we discussed. Let's take tcpdump as the example. In the real-life example I'm going to show you a vulnerability I found in Perl, but I'm not going to fuzz Perl right now because it's a big application — it's written in C, and it's an interpreted language. As our example we'll take tcpdump, which obviously uses libpcap as a library. So let's say you've found your target, tcpdump. Then, as reconnaissance, we need to understand how this application operates — it

doesn't have to be tcpdump, it could be any other application, but you need to understand the application you're trying to find the zero day in: how it works, what features and functionality it has, what libraries it calls into. Then you start enumerating what flags it has — sometimes there are optimization features, sometimes there are add-ons that pull in different libraries and aren't enabled by default. So look at the application's help; normally when you run `./configure --help` it tells you what features and functionality are available — go through it and understand it.
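As a quick recon sketch — assuming a Linux box with tcpdump installed, and run from the extracted source tree for the second command; both are illustrative rather than required steps:

```shell
# Which shared libraries does the target call into?
# (ldd prints a binary's dynamic dependencies — libpcap should show up)
ldd "$(command -v tcpdump)"

# What build-time features does the source tree offer?
./configure --help | less
```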

And then you can start enabling them — embedding and including them as part of the compilation process. Next comes interface selection. Now that we understand what modules, features and functionality the target has, we want to narrow things down: limit the function calls and features and say, OK, I want to fuzz this specific function or feature of the application. On tcpdump, you might decide all you care about is the path where it parses a pcap file and prints it back to the user — so let's concentrate on that one. You sometimes need to select specific functions

and run the fuzzing process against just those. Once you've identified what you want to fuzz, you start creating the corpus. The corpus is basically the set of files you're going to feed into the application. It doesn't have to be just one file — it can even be an empty file, and the fuzzing process will generate inputs for you — but until it stumbles on the magic number in the file header, that takes ages. It's better to find already-available files; for network packet captures, for example, Wireshark's site has a lot of sample pcap files you can download.
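A minimal sketch of pruning a downloaded corpus — here just dropping byte-identical duplicates with a checksum; the file names and contents are invented for illustration. In practice AFL's bundled `afl-cmin` does a much smarter, coverage-based minimization:

```shell
# Fake corpus: two identical captures and one distinct one
mkdir -p corpus_raw corpus_min
printf 'PCAP-A' > corpus_raw/a.pcap
printf 'PCAP-A' > corpus_raw/b.pcap   # duplicate of a.pcap
printf 'PCAP-C' > corpus_raw/c.pcap

# Keep one file per unique checksum
seen=""
for f in corpus_raw/*; do
  sum=$(cksum "$f" | cut -d' ' -f1)
  case " $seen " in
    *" $sum "*) ;;                    # already kept an identical file
    *) seen="$seen $sum"; cp "$f" corpus_min/ ;;
  esac
done
ls corpus_min                         # two files survive
```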

Then you try to pick the right files to feed into this application — that's going to be your corpus. Then obviously comes the fuzzing part. All the parts are important, but the fuzzing part is where you're actually going to be stuck for weeks, maybe even longer. The better the performance of the PC or servers you have, the more time you save. In my environment, for example, I've got 12 servers: I mark one of them as the master fuzzer, and the other 11 are what are

currently called slave fuzzers. All of them communicate with the master fuzzer, so you distribute the fuzzing across multiple processes. You can do the same on one PC: if you have 16 CPUs, you can use one CPU as the master fuzzer and the remaining CPUs as slave fuzzers, which talk to the master to exchange the corpus entries that have already proven good and collect them in one place. And let's say after two weeks of fuzzing you come back and the fuzzer says, hey, I found 1,000 crashes. Then you can say, OK —

should I stop, or not? It's up to you, because fuzzing won't stop on its own — it carries on forever. You have to make the judgment call: OK, two weeks have passed, let me stop, look at the crashes and see whether any of them are usable. That's where triaging comes in: going through the thousands of files and identifying which of them actually cause a legitimate, exploitable crash. Once you've identified such a file, you go and work out which segment of the source code you're fuzzing

is causing this bug, and then which line of code in the application is causing the vulnerability — that's your triage. Once you've identified the issue — which line of code, which function is causing the crash, and what kind of vulnerability it exposes — you start writing the report for that specific vulnerability, so you can provide it to the vendor developing the application, or to the right entities: for example, if you want a CVE you can go to MITRE or the other bodies that assign CVEs. Now, the exploitation part comes later; it's a

completely different ballgame. Fuzzing is one thing, but actually exploiting the vulnerability is another — if it's heap exploitation, for example, that's a whole separate process from fuzzing. So we might not go that far today, but we can find the crash and reproduce it outside the fuzzing process. Let me just check everything is working... OK. So, as already highlighted, the reconnaissance stage is about understanding the application's purpose — you need to understand the pass and fail behavior of the application. On Unix systems, when you execute an application, the process always

responds with a different status depending on whether execution succeeded or failed. Once you understand the pass/fail behavior: for example, if you pass a network packet capture into a media application, it's going to say, hey, that's not a file type I handle, and give you an error message — then you might be wasting your time with incorrect input; or maybe it tries to parse it but gives you completely unexpected behavior that's useless to you. Then you need to understand the flags and the functionality — what kind

of features you can actually exercise with the application. For example, tar, the archiving tool on Unix, has many ways to archive files, with different compression algorithms — you need to understand them and decide which ones you want to go and fuzz. One of the best ways is reading through the manual, or the help section of that specific application, whether from the vendor or a third party. Once you understand it, you can start selecting the correct test cases for those parameters. Finding the right corpus — the right files — is very important; during fuzzing it's

going to give you a smart way to fuzz the application. As I said — the screen is quite small, I know, but you get the idea — you can go through `man` or `--help` on Linux, or the README file; understand the flags, understand the features, and what the main purpose of the application is. Based on this you draft your development process for the fuzzing.
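One way to turn that reading into a concrete plan — sketched here against a few made-up lines standing in for real `--help` output — is to grep the flags into a checklist of things to fuzz:

```shell
# Stand-in for `tcpdump --help` / `tar --help` output
cat > help.txt <<'EOF'
  -r FILE    read packets from FILE
  -z         filter compressed output
  -v         verbose output
EOF

# Pull the option letters into a checklist for the fuzzing plan
grep -o '^  -[a-z]' help.txt | tr -d ' ' > flags_to_fuzz.txt
cat flags_to_fuzz.txt
```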

I'm going to skip through some of this. Test cases could be file types, network-based values, user input, or parameters — for FFmpeg it could be a file name; something like Unix sudo will have flags plus user input. So there are different ways to interact with applications, and you need to identify them too. As I said, one of the best approaches is to find the right files — for example, file types downloaded from the

available captures; Wireshark has tons of capture samples you can download and save. Say you've downloaded hundreds of files — we'll look at how to select the right test cases from them later in the slides. Now say the input is something the application accepts interactively — take sudo: I think last year or the year before there was a vulnerability in sudo, and sudo accepts user input. In that kind of test case scenario you don't feed a file in directly — it's not parsing a file, it's taking

the user input and acting on it. In that scenario, you need to identify what inputs the application expects from the user, and you create your test cases as files. For example, test case one could be a username — say john_doe — with a second parameter that's a correct, strong password; test case two could be john_doe with an incorrect password. That way AFL, or any fuzzer, is going to

understand which use cases and which functions it's actually testing in the application. Look at the logic: there's a condition — if the username is correct, OK, move on to checking whether the password is correct; if the password is correct, execute this function. The flow of execution goes through different paths. Now, if the username is correct but the password check fails, it hits the condition that says the password is incorrect, which takes you into a different function — so now we're fuzzing a different function as well. Having different variations of

test cases is important, so that you trigger the right calls and can start fuzzing those specific calls too. Now, how the fuzzing starts: it executes the target against a test case, and if there's no violation of the program — say a stack overflow or a heap overflow — the fuzzer goes back to its queue of mutated test cases, takes a different one, and executes the program again and again until it finds a violation of program execution.
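A sketch of that setup — seed test cases on disk, then the fuzzer loop. The credential pairs are invented for illustration, and the `afl-fuzz` invocations (which run indefinitely, so they're left as comments here) assume AFL is installed and the target was built with afl-gcc:

```shell
# Seed test cases: one file per input variation, so different
# branches (good login, bad password) get exercised
mkdir -p in out
printf 'john_doe\nS3cret-pass!\n' > in/login_ok.txt
printf 'john_doe\nwrong-pass\n'   > in/login_bad.txt

# Single-instance run (@@ is replaced by the mutated input file):
#   afl-fuzz -i in -o out -- ./target @@
# Parallel run on one box — one master, several slaves sharing "out":
#   afl-fuzz -i in -o out -M master  -- ./target @@
#   afl-fuzz -i in -o out -S slave01 -- ./target @@
#   afl-fuzz -i in -o out -S slave02 -- ./target @@
```

The `-M`/`-S` instances sync through the shared output directory, which is the master/slave exchange described above.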

Now, when you execute the fuzzer, what happens is that you're executing the program, but for the program to be executed you need test cases — the test cases you generated play a key role, as the fuzzer creates many thousands of mutated variants of them. The process carries on until it finds a file mutated in just such a way that it crashes the application. Once a violation is detected, the fuzzer considers it a bug, and AFL, for example, places it in a folder called crashes — that's going to be your main target to observe

during the analysis stage. Now, fuzzing isn't always going to find you the really excellent bugs. It's fast, it can be smart and calculated depending on the corpus you select, and it can be very well automated, but it can also miss a very good bug that fuzzing simply can't detect — for those logical bugs you need to analyse the source code and work out how specific functions are triggered, functions that aren't easily reachable by automation. That's one of the

drawbacks of fuzzing, but fuzzing is always a good way to start on software security and make sure the software functions as it should. So what kinds of bugs can fuzzing find? It can find heap overflows, integer overflows, stack overflows — though nowadays you won't always find stack overflows, because a lot of compilers take care of them; that doesn't mean they're not there. A lot of the time heap overflows are there, null dereferences are there, use-after-free vulnerabilities are there; format string bugs exist too, but you won't always find stack overflow or format string issues.
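Before moving on to exploitation: once a run's crashes folder fills up, a first-pass triage loop — the paths are illustrative, and it assumes gdb is available and the target was built with symbols — might just replay each crashing input under the debugger and log where it faulted:

```shell
# Replay every crash AFL saved and record the faulting location.
# ./target and out/crashes/ are illustrative paths from an earlier run.
for f in out/crashes/id:*; do
  echo "=== $f ==="
  gdb -batch -ex run -ex bt --args ./target "$f" 2>&1 | tail -n 20
done > triage.log

# Then skim triage.log and bucket the distinct backtraces —
# each distinct faulting function is a candidate for source-level analysis.
```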

I'm not saying they're not there, but the majority of the time you'll find vulnerabilities in use-after-free, null dereference, race conditions and integer overflow. Now, if after fuzzing you want to go on to the exploitation stage, each of these attack vectors — use-after-free, null dereference and so on — has its own exploitation techniques, so you might as well spend your time understanding them. Stack overflow is good, it's easy to understand, so cover it; heap overflow is excellent too; but if you really want to understand and exploit these vulnerabilities, try to spend more time on use-after-free,

integer overflow and null dereference exploitation. Null dereferences I'm starting to see a lot in kernel exploits — I've found, I think, about four CVE-assigned vulnerabilities on null dereference, and quite a lot of use-after-free vulnerabilities. So, briefly, the types of fuzzing: there's dumb fuzzing and smart fuzzing, and it comes down to how you provide the files. Generating input with no structure is dumb fuzzing; providing the type of file the application expects is smart fuzzing. And there are different mutation algorithms embedded in today's fuzzers; a fuzzer is

going to rotate through different types of mutation with its predefined algorithms. What AFL flags you use during the fuzzing also plays a key role. I've put black-box fuzzing on the slide: black-box fuzzing is when you don't have the source code and can't instrument the application — AFL handles that case with its QEMU mode, virtualization-based fuzzing where you don't need the source; it just mutates the file and fuzzes the application by passing the input in. I'm just going to skip some of this part,

because we don't have a lot of time and I want to show you some demonstrations as well — the slide is basically what I said, just in a more nicely written format. Function coverage: I just gave the example of the input fields, for sudo's username or password. It's basically that whatever file or content you provide to the application, it makes calls to different functions based on the return value from each call — based on the if/else conditions, the jumps, the response from the application. I already told you about AFL, so I'm

just going to skip this part, because I want to concentrate on the practical approach once I jump into the console. I already explained the AFL algorithm: it takes the test case, puts it in the queue, and iterates through the queue; if it finds a corpus entry that crashes the application, it puts it in the crashes folder, but if it finds an interesting file that maybe hung the application without crashing it, it puts it back into the queue, or uses it as another

mutation source for the next test cases. The fuzzing can create thousands of files a second as it iterates, so it's quite heavy on your hard drive — we'll look at optimization soon, but it's basically a lot of reads and writes on your disk. Now a little bit — not a lot — about the different compilers. At the bottom it's assembly language, the low-level interaction with the CPU, but the fuzzing instrumentation is embedded at the function calls, and every time a function call is executed in the application, this

AFL instrumentation sits there before the function call and watches how it responds through the function's return. You know that in assembly, every time you make a call, the return address is saved on the stack (relative to EBP) so execution can return to the previously executing segment of the application. Once the function call has executed, if the return comes back with an invalid value, the instrumented code checks whether it's a correct value or not, and AFL makes its decision based on that. I'll skip the sample code — what GCC output looks like for the same application. In C it's

quite simple, and the same applies to C++, but in assembly it's quite a bit more involved. I don't think a lot of people write assembly anymore, myself included — out of interest I've written some a few times, but it's not something I write a lot — yet at the end of the day, anything that's compiled ends up as assembly. This is what I already told you about instrumentation: it injects a piece of code before each function call, then goes through the return value to determine whether there was a crash

or not. Here — I don't know if I can zoom in — you can see that for each function call, AFL is placing its trampoline: for every call, AFL instruments it and inserts its marker, effectively saying, let's see what the return value is going to be. If you'd compiled the application without instrumentation, those calls wouldn't have been inserted. Now, AFL's look and feel is pretty much always like this: at the top you've got your

target application — the thing you're trying to fuzz — and here are the overall results: how many paths it has found. Total paths is basically how many distinct execution paths the fuzzer has discovered — how many routes through the code it has to interact with — and that count increases a lot as the fuzzer mutates the corpus and finds new function calls to reach. Here we need to pay close attention to unique crashes: it might have found quite a lot of crashes overall, but unique crashes are what we concentrate on. Right now

it has managed to find about 14 unique crashes, but we don't normally stop at 14 — it's better to stop once it's over 100 or 200, and to give it time, at least four or five days, which is what I normally do. Another important thing to pay attention to is execution speed: right now we're running 1,343 executions a second, and that's going to be key when you define the right corpus for the application. It's not only the corpus, not only the files — the load also lands on your CPU and your storage. SSD is much faster; HDD isn't really suitable

for fuzzing, but it's cheaper — and fuzzing, as I said, creates a lot of queue files, so it grows quite big; you might as well have enough space on your drive for that. Here, as I said, it found some 2,000 crashes, but only 14 unique ones. This section is mainly about the mutation: what stage of the arithmetic we're looking at, what kind of bit flips and mutations it's doing — right now it's going through the havoc mutation stage. But I'd mainly suggest you concentrate on the execution speed

and on the count of unique crashes, as well as how many paths it has found. Say you're feeding input files to tcpdump — it's a big application; if you've got a total path count of only one after a whole day, something is wrong with your corpus and you need to revisit it and apply the right one — it should normally grow quite a lot. I think I'm running low on time, so — hunting for the target. How do we normally hunt for a target? I normally search Linux itself for applications: I use apt-cache search with a keyword. For example, if I want to find a

vulnerability in an XML parser or a converter, which is quite easy, I use `apt-cache search converter`, or `apt-cache search media`, or `parser`, and it gives me the list of all the applications that deal with parsing, with compression, or with whatever other keyword I'm interested in. Then I take that specific application or library, find the newest version online from its GitHub repository, and use that as my target. You can also use software download sites and filter down to Linux operating systems and command-line applications; on GitHub you can also filter down to specific applications. Choose the

right application — one with quite a lot of people using it, quite a lot of other applications depending on it — and concentrate on those, because you also want to show the impact of your vulnerability: if you find a vulnerability in an application nobody uses, it might not be very interesting. So find the right application, but one getting the least attention from its developers. The GNU Project has quite a lot of applications, all open source; you can go to the GNU Project and download some of them — for example, GNU tar, which I did, I think, a few

months back — and start fuzzing it. It's easy, because you can create a lot of different archive formats and attempt to fuzz it during the extraction of those files. You can also use Google dorks — for example, to find recent C/C++ applications, or trial versions of applications, and pick the ones that aren't well tested by the developers or the community. Now, AFL installation is quite straightforward. First of all, your Unix system has to be updated; you need to install the essential tooling — build-essential, Clang, GCC and the GNU debugger (gdb) — and then you download

AFL, from its website or from GitHub. Once you've downloaded it, extract it, run make, and then make install. You don't always have to make install, but if you do, the tools will always be available on your PATH on your Linux system. You can also enable the LLVM optimization mode with Clang for applications that support it — like OpenSSL — if you want to go faster. But normally when you go with LLVM you've

got to have rather more RAM; if you're going for a bigger application, I suggest bigger RAM. Once it's installed, one of the key things you need to do is make sure the core pattern is set, so that when applications segfault and crash, the core is dumped to a file rather than only ending up in a log. There are also tools that help you with the debugging process, but this should be enough to get set up for fuzzing. As I said before, for configuration, what I

normally do is go through configure, try to understand the application, see what I can enable and disable, and then use the right flags to compile with the instrumentation. If it's OK, I'm just going to go to the console so I can show you the optimization as well. So — sorry, these are some of my servers; let me try to go to one of them.
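For reference, the install steps just described might look roughly like this — package names are for Debian/Ubuntu, and the repository URL is the commonly used upstream one (AFL++, the maintained fork, builds the same way); adjust for your distro:

```shell
# Prerequisites on a Debian/Ubuntu box
sudo apt-get update
sudo apt-get install -y build-essential clang gcc gdb git

# Fetch and build AFL
git clone https://github.com/google/AFL
cd AFL
make
sudo make install        # optional: puts afl-gcc / afl-fuzz on PATH

# Let crashes produce plain core files instead of going to a
# crash-reporting pipe (afl-fuzz checks this before starting)
echo core | sudo tee /proc/sys/kernel/core_pattern
```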

So, this one — I will come back to it.

Oh — before I run tcpdump, I'm sure there's a configure file, right. What I normally do, as I mentioned: let's say we did download the tcpdump source, and we try to understand what it does. We know that it has got quite a lot of features and functionality, and sometimes, if you want to fuzz it efficiently, you might want to enable some of the critical features which are not well tested. If I want to enable everything, what I do is just use some grepping to take some of these values. I don't know if you can see it or not —
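A minimal sketch of that grepping step, assuming we only want the SMB feature. The help text below is a stand-in for the real `./configure --help` output of tcpdump:

```shell
# stand-in for `./configure --help` on the tcpdump source tree
cat > help.txt <<'EOF'
  --enable-smb            enable possibly-buggy SMB printer
  --enable-instrument-functions    enable compiler instrumentation
  --disable-local-libpcap          don't look for a local libpcap
EOF

# pull out just the switch we want to turn on and save it for later
grep -o -- '--enable-smb' help.txt > flags.txt
cat flags.txt
```

The same `grep` can of course pull out every `--enable-*` line at once if you want everything switched on.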

Maybe — for example, I have got here only SMB.

Let's say we have got 10 features which you want to enable — that's something for you, I'm telling you, for optimization purposes. Let's say I'm going to put here, in flags.txt, --enable-smb. Now, before I actually start doing that and instrumenting it, what I normally do first is define, for the instrumentation, that instead of using the built-in C compiler I'm using afl-gcc, and instead of the C++ compiler I am using afl-g++, which is for your C++. Then I'm enabling some of the CFLAGS, and CFLAGS is basically a flag which says, okay, there is a

compilation — do you want me to optimize it, do you want me to enable the extra debugging? So it doesn't always have to be for AFL; in general you would normally enable this feature for troubleshooting and maintenance purposes. And then I'm going to go with configure, and then — this is command substitution, right — I'm going to say cat flags.txt inside it.
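Put together, the configure call described above looks roughly like this. It is illustrative rather than the speaker's exact command line: flags.txt is the file of `--enable` switches grepped out a moment ago, and the `-ggdb -O0` flags match the debugging setup mentioned later in the talk.

```shell
# instrument tcpdump with the AFL compiler wrappers, with debug info
# enabled so crash addresses can later be mapped back to source lines
CC=afl-gcc CXX=afl-g++ CFLAGS="-ggdb -O0" CXXFLAGS="-ggdb -O0" \
  ./configure $(cat flags.txt)
make
```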

Now, what it does is that instead of typing out ./configure --enable-smb, blah blah blah, and going all the way down here, you can grep out all the --enable or specific features you want to pass, and then execute it this way. And because I did compile this one already, I believe — yeah, compiled already — once you enable this and compile it, it's going to be already here. One of the things that I want to say is that during the compilation of one specific program there is sometimes a library which is needed; in tcpdump there's libpcap, which is

one of the important libraries which tcpdump uses. What I normally do is that I also instrument the libraries, so that it's going to be easier for me to fuzz the libraries at the same time.

I don't think — yeah, I didn't instrument the library right now, but anyway, I did instrument the library before, it's there. And if it's a library, I normally make sure that I also make it accessible via an environment variable for tcpdump.
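For instance — the install prefix here is purely an assumption — after building an instrumented libpcap into its own directory, you make it visible to tcpdump like this:

```shell
# point the dynamic loader at the instrumented libpcap build first,
# so tcpdump picks it up instead of the system copy
export LD_LIBRARY_PATH=/opt/libpcap-afl/lib:$LD_LIBRARY_PATH
echo "$LD_LIBRARY_PATH"
```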

Great. Now, let's say we go to tcpdump — now we're ready to go and try to fuzz this application. Let's see where it is.

Now we're instrumenting tcpdump with the AFL compilers. Now it's working — let it go and instrument this application. It's basically compiling tcpdump with the AFL instrumentation: function calls which detect when a crash occurs and tell me, okay, there's a crash, I want you to record it. Okay, great, now it did work. And here I have got a small file called iperf — this is one of the pcap files which I did download before. Before you fuzz it, one of the things that you need to do is create a directory called input and then a

directory called output. So what happens is that the input directory is going to hold your corpus, and the output directory is where AFL is going to place any file which it flags as a crash.
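The layout and the run described here can be sketched as follows. The seed name iperf.pcap is an assumption standing in for the downloaded capture, and the afl-fuzz lines are shown as comments because they run indefinitely:

```shell
# corpus goes in input/, findings (queue, crashes, hangs) land in output/
mkdir -p input output
printf 'stand-in' > input/iperf.pcap   # placeholder for the real capture

# sanity-check the target outside the fuzzer first:
#   ./tcpdump -r input/iperf.pcap
# then fuzz; @@ is replaced by each mutated corpus file:
#   afl-fuzz -i input -o output -- ./tcpdump -r @@
# with several cores, run one master and the rest as slaves (classic AFL):
#   afl-fuzz -i input -o output -M fuzzer01 -- ./tcpdump -r @@
#   afl-fuzz -i input -o output -S fuzzer02 -- ./tcpdump -r @@   # ...and so on
```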

Now, what we're going to do is this: tcpdump is our subject application, and we're going to run the tcpdump command to make sure the application is working properly, and we're going to give it the file name iperf. Let's make sure the application is working properly — we're not fuzzing anything yet. Okay, it's reading the file, everything is working. Now what we do is try to fuzz it. For that we'll run afl-fuzz; with -i we're going to give it our input directory, where our corpus is, and then with -o we're going to provide the output directory, where we want the output

files to be. And sometimes — I'm going to tell you about it in a bit — instead of providing the actual input folder and file name, we're going to give it the @@ sign. What it means is: in place of the @@ sign, take a file from the input corpus and put it in as the file for the target to go through. Now, as you can see, what it's doing is taking this one specific file and going through multiple types of algorithms — right now it's going through havoc, it's going through splice, applying different mutations and trying to find the bug. Its execution right now is at 723, 730 kind

of executions per second. Until it crashes, it's going to take some time. Now we have seen how the AFL fuzzing is working and executing. I'm sorry, I'm being a bit fast, but let me try to be as fast as possible. Right now I'm executing with only one CPU, but if you have got — for example, in my case I have got 12 CPUs — as I said, you can run AFL in master/slave mode. Let's say you have got multiple CPUs: what you can do is say -M fuzzer1, which is going to be your master fuzzer, right, and then let me put it on the

screen for that one, let it go to a background screen — I don't think I have got that installed at the moment; yeah, I don't have screen installed right now. You can have multiple fuzzers running in parallel: in my case I can run 12 fuzzing instances at the same time, but what I have to do is use the master fuzzer for the first one, and then for the rest I can run slave fuzzers as one, two, three, up to 11, right. So I can go and fuzz on multiple CPUs, so that my fuzzing process is going to be faster and faster. Now, while we have time, I'm

going to show you one of the interesting parts before I run out of time — I want to make sure that I show you this one, so that you're not going to go without actually seeing a proof of concept of anything. So, one of the exploits — what I did, I found it in Perl. Let's say you found the crash: you've got the corpus, you did analyze it and you found what is actually causing the crash. What you do is run the application outside the fuzzer and pass it this crashing corpus, which is your file — in my case AFL generated a file called id: whatever, plus some numeric value — and then you open

your — for example, in my case it was Perl — I did open Perl in gdb and passed this file name into the application, into Perl. And then what gdb did, with the backtrace, is actually tell me: okay, this is calling the libraries — Linux libc calls — and the actual crash happened at a specific memory address. Now you might say, okay, where does this memory address come from? If you remember, when we were compiling we said, okay, enable -ggdb, right — it's a great debugging feature. And with this information, with a specific address, what I do is go and try to see, okay:

this specific address — I take it and I pass it to something called addr2line. What addr2line does is go and check the virtual memory address of this specific application, and tell you what line of code is lying at that address. So here I'm putting in the same address, I'm giving the perl name — my executable — it has got a virtual memory address, and at that specific memory address I want to know what line of code is lying. It says that this memory address falls in the Perl util.c, on line number 402. Now, if you look at it

on line number 402, we realize it's basically a memory corruption — a heap corruption is happening here. Here we are actually saying: okay, we are freeing the memory, but we didn't actually assign anything to this region before — we're freeing the memory without assigning anything. So this is basically called a double-free vulnerability. Now, let's see if I'm right on time — I know this is very little time to go through this, and there are quite a lot of things to talk about in 50 minutes, but I try to do my best. One of the things that I want to show you before I leave is some of the notes, right. As

we're all trying to be, you know, hackers or security researchers, right, one of the things we do is try to automate it. Make sure that when you're fuzzing you're using an SSD, which gives much faster reads and writes; enable the debugging feature, so it's going to help you actually get the virtual memory address; and enable -fsanitize=address (ASan), which tries to find use-after-free and heap corruption vulnerabilities. And then, since we don't have a lot of time — it's going to run for weeks and days — try to actually automate the processing of crashes. Normally, when I kick off my AFL run, I

write a bash script which is going to take the output and put it into my Slack channel — so if there's a crash, I'm going to get a notification on my phone saying, hey, a crash happened, and it's going to go through my Slack API. I also try to manage my disk space with a cron job that erases some of the unnecessary queue files. Okay, so this is quite a fast-paced approach — there are quite a lot of things to talk about, but I wish we had more time.
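A minimal sketch of that crash-notification idea. The directory layout and crash file name are assumptions, and the Slack call is stubbed out with an echo so the script is self-contained:

```shell
CRASH_DIR="output/crashes"
mkdir -p "$CRASH_DIR"

# swap this echo for a real webhook call, e.g.:
#   curl -s -X POST -d "{\"text\":\"$1\"}" "$SLACK_WEBHOOK_URL"
notify() { echo "ALERT: $1"; }

# simulate AFL dropping a crash file
touch "$CRASH_DIR/id:000000,sig:11"

for f in "$CRASH_DIR"/id*; do
  [ -e "$f" ] && notify "crash found: $(basename "$f")"
done

# disk housekeeping via cron (illustrative): prune old queue entries
#   0 * * * * find output/queue -type f -mtime +2 -delete
```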