
Web Application Vulnerability Scanners: An Introduction & Discussion on Their Limitations

BSides Cape Town · 2019 · 42:33 · 1.6K views · Published 2019-12 · Watch on YouTube ↗
About this talk
Web application vulnerability scanners are powerful tools for identifying issues like SQL injection and XSS, but they face significant limitations when dealing with modern architectures, dynamic content, and evolving frameworks. This talk examines the strengths and pitfalls of automated scanning, explores infrastructure and contextual challenges that lead to false positives and false negatives, and discusses how organizations can best contextualize scanner use within their AppSec programs.
Original YouTube description
Title: Web Application Vulnerability Scanners – An Introduction & Discussion on Their Limitations

Abstract: Web application vulnerability scanners are becoming increasingly automated and are facing more difficulties as web technologies change and evolve. As is evident from the October 2015 "TalkTalk hack", where a 16-year-old boy performed an easily exploitable SQL injection attack which resulted in TalkTalk losing £60 million and 157,000 customers having their details stolen, the effects of having insecure web applications can be utterly disastrous. Web application scanning tools are used by penetration testers and security folk alike to help identify vulnerabilities in a given web app. They come in many different forms and some cost a significant sum. Scanners attempt to identify dangerous vulnerabilities like Cross-Site Scripting (XSS) and SQL Injection among many others, and these tools must be constantly improved and enhanced to keep up with the latest malicious attacker techniques as well as contemporary development frameworks. For example, architectural changes and improvements such as anti-CSRF tokens, recursive links and dynamically generated JS URLs have a massive impact on a scanner's ability to effectively identify, crawl, scan and analyse a target web application for vulnerabilities. This presentation details how useful WAVS can be in helping an organisation develop their AppSec program and attempts to highlight the problems that current web application scanners face in dealing with both traditional and contemporary web architectures and technologies. It suggests improvements and identifies pitfalls of using automation without applying intelligence and a contextual view of the target being assessed.

Speaker: Robert Feeney

Speaker Bio: MSc in Cyber Security & Digital Forensics. GWAPT & OSWP. Senior Security Consultant @ DarkMatter, Abu Dhabi, United Arab Emirates. Previous speaker at AppSecUSA, Orlando, Florida, 2017.
Transcript [en]

Ladies and gentlemen, thank you very much for joining me here today. I appreciate it's the very end of the day and it has been a long day, so no doubt there are a few tired faces around the room. As mentioned, I will be speaking about web application vulnerability scanning. It's a topic that continues to evolve because there are always different sorts of applications being developed: new languages, new frameworks, new components and so on. So you know who I am and why I think I might be able to talk about this topic: I'm a senior security consultant in Abu Dhabi for a local Emirati cybersecurity company, and I have five

years of experience, most of which has been in web application testing. I did the SANS GWAPT course, and so I have mostly been focused on testing applications and running scanners against production and non-production environments. I'm going to talk about many things today, mostly focused on the pros and cons of using web application vulnerability scanners: where they have their uses, where their pitfalls are, and things that you may need to know either as individuals who will use the scanning tools or, if you're in charge of your organization's web app security, questions that you should be asking your vendors, your suppliers, and the people who conduct your web application penetration testing and vulnerability scans. What I'm hoping

to give you today is an understanding of how best to contextualize the use of web application vulnerability scanning tools within your organization. So, a little bit of background, and basically this is really what got me interested in this topic: there are a number of different vendors that release yearly reports on web application testing, breaches, incidents and the vulnerabilities that are found. You may have heard of the OWASP Top Ten and all of those sorts of organizations. The main one this year that interested me was the Verizon Data Breach Investigations Report, and while I was reading it, it led me to realize that a lot of the hacks that occur are still happening through

external-facing web applications and networks that face the web. I know there's a common trend of insider threats, and people within the organization can do a lot of damage, but globally we're still seeing a lot of damage being done through web application hacking. Some of the interesting statistics that I saw there: 8 out of 9 industries listed web apps as one of the main areas targeted within the incident or the breach, 52% of breaches featured hacking, so half, and 69% of confirmed breaches were performed by outsiders. The difference between an incident and a breach is that a breach is a confirmed loss of data, or where the

organization has stated publicly that they have lost information. Some notable web hacks have happened in the last couple of years; some of you may be familiar with these, some of you may not. I know there's one website there that I'm not terribly familiar with. The first one, on the top left, is Heartland Payment Systems: that was a SQL injection which caused them to leak 134 million credit cards, and it cost them 140 million between the fees, fines and the cost of investigations. You have TalkTalk, which again was another SQL injection, performed by a gentleman who I believe at the time was only 15 years old, so quite an easy

breach. They were fined 60 million. Then you have Adult Friend Finder, the one I know none of us are familiar with: that was a local file inclusion, 339 million accounts, which interestingly enough included around 80 thousand US military emails and five and a half thousand US government emails. And then finally you have the most recent high-profile one, which was the Equifax hack: 143 million customers affected, and Social Security numbers, credit cards and personally identifiable information all lost through an Apache Struts vulnerability which allowed remote attackers to execute arbitrary commands through a Content-Type HTTP header. Really what I'm trying to get at here is the fact that while there are many different methods to test your web

applications, it can be utterly disastrous if you don't do so properly. There are a number of different methods, as I said: secure software development, manual and automated code review, vulnerability scanning and penetration testing. The last two are usually coupled quite closely together, and many vendors and organizations will offer VAPT as one of the professional services that they list. Now, what we're going to look at is vulnerability scanning, so I'm working off the premise that we perform these vulnerability scans with the web app tools, and then where the pitfalls are, where they fail to meet requirements, that's when the manual penetration testing comes into it. So the first question we have to

answer is: what are WAVS? WAVS meaning web application vulnerability scanners. Simply put, they are tools used to test whether applications contain security vulnerabilities. They can be used internally to test your own applications, and they can be used by consultancy and managed security service vendors who will test the applications that you give them, because maybe you lack the resources, the knowledge or the manpower to do so yourself. They are divided into a couple of different categories: you have DAST and you have SAST. Generally speaking, DAST stands for dynamic application security testing. Essentially it's testing the application while it's running, so you will be testing it from the perspective of a user who is using the application, or a

malicious attacker using it as a normal user. It's black box, so it's outside-in, and they generally come in two forms: they are automated or they're proxy-based. Think of OWASP ZAP, that's a proxy-based tool; then you might have Arachni or Acunetix, which are very automated, point-and-click: you give it the information it needs, you let it do its thing and it generates a set of results. What they do is interact with the raw HTTP requests and fuzz the parameters: you give it a list of vectors and it attempts to interpret the response in the correct way to determine if it has found a vulnerability.
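To make that concrete, here is a minimal sketch (my own, not shown in the talk) of what an automated DAST engine is doing under the hood: it takes a request, substitutes each parameter value with attack vectors, and inspects the responses for tell-tale signs. The target URL, parameter names and error signatures are hypothetical.

```python
# Minimal sketch of DAST-style parameter fuzzing (illustrative only).
# The target URL, parameters and error signatures below are hypothetical.
import requests

TARGET = "https://example.test/search"          # hypothetical endpoint
BASE_PARAMS = {"q": "books", "page": "1"}       # parameters discovered by the crawler
VECTORS = ["'", "<script>alert(1)</script>", "' OR '1'='1"]
SIGNATURES = ["SQL syntax", "<script>alert(1)</script>", "ODBC", "Traceback"]

def fuzz(url, params, vectors):
    findings = []
    for name in params:                         # fuzz one parameter at a time
        for vector in vectors:
            mutated = dict(params, **{name: vector})
            resp = requests.get(url, params=mutated, timeout=10)
            # Very naive response interpretation: look for known error strings
            # or for the payload being reflected back unencoded.
            if any(sig in resp.text for sig in SIGNATURES):
                findings.append((name, vector, resp.status_code))
    return findings

if __name__ == "__main__":
    for finding in fuzz(TARGET, BASE_PARAMS, VECTORS):
        print("possible issue:", finding)
```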

Sorry, I'm working off two laptops here, so I'm scrolling as I go. Then you have SAST, which is static application security testing: your traditional source code review, so you need to have the raw source files to do that. It's considered white box or inside-out, and it's both manual and automated. There are automated tools that will check this, like Checkmarx, and I believe IBM have a version of AppScan that does it as well, and you have people who manually go through the code to see if they can find flaws within it themselves. We'll be looking at the DAST tools in this talk. So I want to ask, or at least I'm trying to explain, why do

we look at them, why are they necessary, why do we need them? I have some statistics prepared so that I can hopefully answer that question. I'm sure most of you have heard of the OWASP Top 10, a very common framework that people work off when they're performing web app tests in order to identify different vulnerabilities; they release lists every three to five years, I believe, on what the most commonly reported vulnerabilities are. These web app scanning tools will attempt to discover many of these vulnerabilities: they will succeed for a lot and they will fail for others. So we have some interesting web application security statistics from a company in Europe called Edgescan, they are a managed

security service that focuses solely on DAST, so they basically perform web app scanning and network scanning for all of their clients. They found that for their externally facing clients, of all of those who are listed, 19.2% had critical or high-risk issues reported within their web applications; internally they had 24, roughly 25%, critical or high. Most of the vulnerabilities that they reported were cross-site scripting, or were caused by components that were vulnerable or contained known vulnerabilities. Can anybody guess what the most vulnerable component or framework was? Got it in one: PHP, our favorite friend. Interestingly enough, they found that the average time to fix a critical or high-risk vulnerability after it was discovered was 69 days. Yes,

and that's actually quite astonishing considering the GDPR implications that are now in place, the massive fines that come along with that, and the ability to basically ruin your business. Especially for start-up businesses: if they get breached immediately, or in the very early stages when they haven't got a mature security program, it can destroy the business overnight. Then you have some statistics from WhiteHat Security, again another managed security service vendor, who operate out of the US. They do DAST and they do SAST as well, but interestingly enough, what I thought was the most critical takeaway from this was that if you look at the average time to fix the vulnerabilities, the difference

between the USA and Europe is massive, and I think that's partially down to the fact that the GDPR regulations are in place. So I was trying to establish why this is continuing to happen. AppSec: everybody seems to be getting more conscious about it, but it's not getting better as quickly as we might expect. That's probably good for some of us because it keeps us in a job, but for blue teams and defenders it's quite a difficult task to undertake. Did you have a question? Yep. Exactly, yeah. Sorry, I know there's an audio issue: the question was about the remediation rate at 50% and at 40%, and

what that means is that the remaining percentage was not fixed by the time the statistic was taken. So, web application vulnerability scanning tools: how do they work? As I mentioned, you generally have two types: you have the proxy version and you have the automated version. The proxy-based one can be considered a man-in-the-middle type tool. It's more customizable, it's very manual, and it's generally used by pen testers an awful lot more than an automated tool would be. The user interacts with the proxy by manually hooking it up from a web browser, so you'll have to install a certificate, of course, if you want to intercept encrypted traffic, and then you manipulate the traffic between the web

browser and the server that processes the traffic. These tools are generally becoming more automated; I know Burp Suite Pro have released a lot of different features that have helped make it more automated. Then you have automated point-and-click tools. The first one that comes to mind for me is Arachni, created by a Greek gentleman, and the idea of it is: you give it a URL, if credentials are necessary you provide it with the credentials, it will crawl the pages, put the pages into a database, inject the attack vectors into the GET and the POST requests, and then it will analyze the responses from the web server in order to determine if there's a vulnerability there.
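As an illustration of the crawl step the speaker describes, here is a small sketch (my own, not from the talk) of a breadth-first crawler that collects same-site links so they can later be fed to the fuzzing stage; the start URL is hypothetical, and real scanners handle far more (forms, JavaScript-generated links, authentication).

```python
# Minimal same-site crawler sketch (illustrative only; start URL is hypothetical).
import requests
from urllib.parse import urljoin, urlparse
from html.parser import HTMLParser
from collections import deque

class LinkParser(HTMLParser):
    def __init__(self):
        super().__init__()
        self.links = []
    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def crawl(start_url, limit=50):
    seen, queue, host = set(), deque([start_url]), urlparse(start_url).netloc
    while queue and len(seen) < limit:
        url = queue.popleft()
        if url in seen:
            continue
        seen.add(url)
        try:
            resp = requests.get(url, timeout=10)
        except requests.RequestException:
            continue
        parser = LinkParser()
        parser.feed(resp.text)
        for href in parser.links:
            absolute = urljoin(url, href)
            if urlparse(absolute).netloc == host:   # stay on the same host
                queue.append(absolute)
    return seen

if __name__ == "__main__":
    for page in crawl("https://example.test/"):
        print(page)
```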

They're highly automated, highly computerized, they usually have strong GUIs, and you don't have a lot of visibility into what they are actually doing compared to the proxy-based tools; I think that's just the nature of the fact that one's more automated and one's more manual. They invoke web browsers: where the proxy-based tool processes the raw HTTP requests, the automated tools invoke web browsers, so they will open up an instance of Internet Explorer and use the application as if it's a user on Internet Explorer. Arachni, for instance, uses PhantomJS as a web browser to perform the crawling, the auditing and the response viewing. So we have some

general considerations that we want to go through before we move on. When you're using the web application scanning tools, you want to make sure that you consider these items so that you can best evaluate the application that's presented in front of you. If you're not the one testing the application, these are definitely things that you need to be asking the vendors or subcontractors that you're using to test the applications. Is the application production or UAT? If it's production, that comes with different caveats. Did the spider or the crawler find all the pages? The automated tools rely heavily on the program's ability to visit all of the

a-hrefs, to visit all of the links, discover all of the pages, all of that sort of stuff. That links indirectly with the accounts that were being used during the testing: are the accounts that you're using while you're testing high privilege, are they low privileged and can they only see 20% of the actual application? Are you using an admin account? That might be dangerous, because you don't want to accidentally delete the entire database if you spider the delete POST request. I've seen it happen, I've done it, baptism by fire. Things like: was the account locked out during testing? So I'm going to present a scenario to you. Imagine you are given a web application, you're using an

automated scanning tool, you give it the URL, you give it the credentials, the scan runs, it tells you it's finished, but when you get the results there are a lot fewer than you expected. Then you try to validate the results, but when you log in, the account is locked out. Can anybody tell me why that might be, or what concerns you should raise from that? That as well, correct, yeah. So tools will produce a list of results that will contain a lot of false positives. You have false positives, where the tool believes it's found an issue but it's not actually an issue; upon manual validation you will determine whether it is or is not an issue. You have false negatives,

which is when the scanning tool misses a real vulnerability that exists within the web application, and you have true positives, when it actually finds a vulnerability that does exist. Also, you cannot be sure that the scan completed, because if the account is locked you don't know at which point during the scan the account was locked. I've seen accounts, sorry, not accounts, I've seen tests where the account has been locked ten minutes into the scan, consistently, over a period of months, without anybody realizing. Upon further investigation, when they checked as to why, it was because the application was a little bit fussy about the parameters that were sent, and what happened was it was locking the

accounts and the scan wasn't finishing. But when they did it the most recent time with a valid scan, they discovered many extra pages that they hadn't discovered before, tested those pages, and there were vulnerabilities in them. So if the account gets locked before the scan finishes, you can't be sure that the results are accurate. The scanning policy and vectors used: when you give it a list of requests or URLs or hrefs, whatever you give it, it will inject into or fuzz the parameters in the raw POST request. The vectors that it uses are very important, because some applications, and again these tools are trying to accommodate everything, they're

trying to be really broad, so a lot of them don't have massive lists; some of them you can actually import lists into. But if you look at the table on the right-hand side, you'll see Burp, or sorry, you'll see OWASP ZAP has some encoded parameters, and that's to accommodate applications that are already doing certain levels of encoding, to try and bypass that. So the vectors that are being used are incredibly important; some may be blocked by a web application firewall. The more different types of vectors you have, the larger your list; you could take something off SecLists, some of the fuzzing lists that they have, and import them into the tool.
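As a rough illustration of extending a scanner's vector list, here is a sketch (mine, not from the talk) that builds a payload set from a SecLists-style wordlist and adds URL-encoded variants of each entry; the file path is hypothetical and the import format of any particular tool will differ.

```python
# Sketch: build an extended XSS/SQLi vector list with encoded variants.
# The wordlist path is hypothetical; real tools each have their own import format.
from urllib.parse import quote

BASE_VECTORS = ["<script>alert(1)</script>", "' OR '1'='1", "\"><img src=x onerror=alert(1)>"]

def load_wordlist(path):
    with open(path, encoding="utf-8", errors="ignore") as fh:
        return [line.strip() for line in fh if line.strip()]

def with_encodings(vectors):
    expanded = []
    for v in vectors:
        expanded.append(v)
        expanded.append(quote(v))          # single URL-encoding
        expanded.append(quote(quote(v)))   # double URL-encoding, for apps that decode twice
    return expanded

if __name__ == "__main__":
    vectors = BASE_VECTORS + load_wordlist("SecLists/Fuzzing/xss-payloads.txt")  # hypothetical path
    for payload in with_encodings(vectors):
        print(payload)
```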

The more vectors you use, the more likely it will be that you will find things like cross-site scripting or SQL injection within your applications. So, the deliverables: what does the tool give you? The tool gives you a list of issues that it believes it has identified. As was mentioned previously by the gentleman, it gives you a list of issues and you have to validate that they're real, so you have false positives, false negatives and true positives, and usually they're able to generate reports as well. There are some constraints and limitations on these tools. What I'm trying to do here is identify where these tools have gaps in

what they can do and where they need to be improved in order to respond to aspects of modern web architecture. So we have some things that make scanning very complicated or very difficult when you're using the web application scanning tools; I call them the usual suspects, the known pitfalls. Infrastructure: basically the connection that you have. If it's hosted in AWS or in Azure or something like that, you may need to get permission from the third-party hosting provider to do the testing, otherwise they'll block the testing. If there's network interference between yourself and the endpoint where the application exists, you may miss

vulnerabilities, because packets are being dropped, requests are being dropped. That can also introduce false positives, especially on time-based attacks: a lot of tools will test for SQL injection using time-based WAITFOR DELAY payloads, and that can introduce issues if the infrastructure isn't up to scratch.
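Purely to illustrate why shaky infrastructure skews time-based checks, here is a sketch (not from the talk) that measures a baseline response time before judging a delay payload; the endpoint, parameter and thresholds are hypothetical.

```python
# Sketch: naive time-based SQL injection check with a latency baseline.
# Endpoint, parameter and thresholds are hypothetical; dropped or slow packets
# can still produce false positives, which is the point being made in the talk.
import time
import statistics
import requests

URL = "https://example.test/item"
PARAM = "id"
DELAY_PAYLOAD = "1'; WAITFOR DELAY '0:0:5'--"   # MSSQL-style delay payload

def timed_get(params):
    start = time.monotonic()
    requests.get(URL, params=params, timeout=30)
    return time.monotonic() - start

def looks_time_based_injectable():
    # Baseline: several benign requests, so one slow network round-trip
    # does not get mistaken for a database-side delay.
    baseline = [timed_get({PARAM: "1"}) for _ in range(5)]
    injected = timed_get({PARAM: DELAY_PAYLOAD})
    return injected > statistics.mean(baseline) + 4.0   # expect roughly the 5-second delay

if __name__ == "__main__":
    print("possible time-based SQLi" if looks_time_based_injectable() else "no delay observed")
```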

Then there are problematic functions, a very catch-all term that I came up with: basically any function that can cause a bit of pain for somebody else who uses the web application. An example, and again it comes back to whether the application is in production or in UAT, would be a blog where users have the ability to comment on a blog post. If you go scanning or fuzzing the comment section, you're going to create thousands of database entries within that one blog post, and this can cause a bit of pain, especially if you're doing it for clients; they may give out to you at some point for overdoing it. Another example would be a Contact Us page: if you're submitting a Contact Us form as a POST request and you want to fuzz it to see if there are any vulnerabilities, you may inadvertently spam the mailbox with thousands of Burp Suite Pro vectors as the body of the email. Change-password functions can be problematic as well, because you can accidentally change the

password of the account. Exactly: the logout button. If you're testing the logout button, you're invalidating your session, so if it's an authenticated application the tests are going to be invalid because you've just clicked logout. These are things that you have to take into consideration when you're performing these scans. I don't know why I have that in capital letters; I think I was shouting at somebody when I did that. What else do we have? Any out-of-scope item that the client requests, or that you know you don't want to test. For example, if we go back to the previous example with the blog post, if your client tells you

that they don't want you to test that, that's fine; you have to listen to them at the end of the day. You should probably test it manually though, as opposed to fuzzing it with the web application vulnerability scanner. A lot of the proxy tools have a Repeater module built in so that you can simply resend the request, and I would advise using that to test those types of pages. Be careful when scanning with any kind of privileged user. Again, the example is if we're logging in as an admin and we can create accounts or delete accounts and they haven't implemented one of those confirmation messages that pops up, "are you

sure you want to delete this account?", you can simply remove a lot of data within the application by doing so, and again it's even worse if it's in production. Okay, so we have some extra known pitfalls. You have sessions: maintaining a session is very important, of course, in authenticated web applications. If the session is invalidated during the scan, everything from that point forward can be considered invalid, because if the session isn't working you're not going to be able to test the parameters, assuming they have their session handling implemented properly and your session actually does get invalidated. The automated tools generally struggle with this because most of them

rely on forms to do this; with the proxy tools you can use macros in Burp Suite to write session checks and to log you back in when that happens. Account lockouts, again, I think I spoke about that already: if your account gets locked out, anything that happens thereafter is invalid. And then you have, generally speaking, application volatility. I know it's sort of a broad term, but how likely is the application to dislike what you're doing so much that it will then invalidate your session? If it's using a WAF, it might see that you're putting some JavaScript code into a parameter, it might not like that, and it might say "this user is trying to hack me, let's kill that session and kick that user out".
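To show the kind of session check the speaker is describing, here is a small sketch (my own, under assumed names) that detects when a scan session has been invalidated and logs back in before continuing; the login URL, credentials and the "logged-out" marker are all hypothetical.

```python
# Sketch of a session check / re-login routine, the idea behind Burp's
# session-handling macros. URLs, credentials and markers are hypothetical.
import requests

LOGIN_URL = "https://example.test/login"
CHECK_URL = "https://example.test/account"
CREDS = {"username": "scanner", "password": "s3cret"}
LOGGED_OUT_MARKER = "Please sign in"     # text that only appears when logged out

def login(session):
    session.post(LOGIN_URL, data=CREDS, timeout=10)

def session_is_valid(session):
    resp = session.get(CHECK_URL, timeout=10)
    return LOGGED_OUT_MARKER not in resp.text

def send_with_session_check(session, method, url, **kwargs):
    # Before each scan request, verify the session and re-login if needed,
    # so results after an unnoticed logout are not silently invalid.
    if not session_is_valid(session):
        login(session)
    return session.request(method, url, timeout=10, **kwargs)

if __name__ == "__main__":
    s = requests.Session()
    login(s)
    resp = send_with_session_check(s, "GET", "https://example.test/profile?id=1")
    print(resp.status_code)
```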

So these are all considerations that you have to be aware of. Then we have the emerging issues, or the new kids on the block, as I like to consider them. Red teams have been finding vulnerabilities, advising blue teams, and recommending fixes for many of the vulnerabilities that have been found over the last decade or so, and they actually have been listening, contrary to what we might believe. So there are changes being implemented now in applications that impact our ability to test the applications and make it more difficult to find vulnerabilities. One of them is CAPTCHAs. Everybody hates CAPTCHAs, they're a pain. If you see them within an application, whether

it's yours or you're testing it for somebody, you have to be aware that they're there and try to work around them while using a web application vulnerability scanner. My general tip when dealing with them is, if possible, try to get the client to give it to you in a QA or UAT environment where you can disable the CAPTCHA and you don't have to worry about it. If that's possible; it may not be. I've seen many times where it is: setting a manual session ID on the back end so that it doesn't get invalidated, or putting a rule in place so that they can do it for you. Otherwise it's going to be a very

manual test, and web application vulnerability scanning might be extremely difficult. You have multi-step logins; I actually just took a picture of this from my bank. It's when they ask for a personal access number or something random that you're supposed to know. Generally speaking this isn't that common, but it is becoming more common. My tip for dealing with this would be to ask if you can set all the digits to the same value so that it doesn't matter what it is; if it's all zeros or all ones, once you put that into the tool it will know that and it will submit them. Some tools allow you to script responses, or to write reactions to certain scenarios: if you're presented with a certain scenario, what's the appropriate reaction? In this case you might be able to write a script that responds with whatever digit is necessary.
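To make that idea concrete, here is a sketch (mine, not from the talk) of a scripted reaction to a "please enter digits N and M of your PIN" prompt, assuming the test account's PIN has been set to all zeros; the prompt format, URL and field names are hypothetical.

```python
# Sketch: scripted response to a multi-step login asking for specific PIN digits.
# Assumes the test account PIN is all zeros; prompt format, URL and field names are hypothetical.
import re
import requests

LOGIN_STEP2_URL = "https://example.test/login/step2"
TEST_PIN = "000000"   # all digits identical, as suggested in the talk

def answer_pin_challenge(session, challenge_html):
    # e.g. "Please enter digits 2 and 5 of your personal access number"
    prompt = re.search(r"enter digits?[^<.]*", challenge_html, re.IGNORECASE)
    positions = [int(n) for n in re.findall(r"\d+", prompt.group(0))] if prompt else []
    answers = {f"digit{i}": TEST_PIN[i - 1] for i in positions}
    return session.post(LOGIN_STEP2_URL, data=answers, timeout=10)

if __name__ == "__main__":
    s = requests.Session()
    page = s.get(LOGIN_STEP2_URL, timeout=10).text
    print(answer_pin_challenge(s, page).status_code)
```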

And then you have CSRF tokens, the bane of my existence, one of the biggest emerging architectural attributes causing pain for web app scanners. There are a couple, three, different types. So what is a CSRF token, first of all? It's a token that's embedded into a web application that the user must submit along with the request for it to be deemed valid. You generally have per-session CSRF tokens, per-request and per-form. A per-session CSRF token is generally treated as a

second session cookie. That's an example POST request there, sorry, let me go back, and it's generally within the headers, so most tools will pick that up absolutely fine because it's just treated like a session ID. Then you have the more complex per-request CSRF token. These tokens are associated with a POST request and they change every time: you'll visit a page, you'll submit the form, and then when you view the response you might get assigned a new token, so they have to be updated every single time. These are extremely difficult for the automated tools to deal with, because sometimes the values are not so clear. I've purposely

given it the name csrf_token in the example POST; in real life a developer can give it whatever name they want, so it's not so clear. With the proxy tools you can automate these very easily, especially with Burp using the macros. Then you have the per-request tokens, sorry, per-form tokens; well, this one I haven't seen a workaround for yet. I guess that's the whole point of them, right, to stop automation, so if anybody knows a way to do this that would be great.
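The macro approach the speaker mentions for per-request tokens essentially re-fetches the form and pulls out the fresh token before every attempt; a rough sketch of that idea (mine, with a hypothetical URL and field names) looks like this:

```python
# Sketch: refresh a per-request CSRF token before each fuzzed submission,
# the same idea as a Burp session-handling macro. URL and field names are hypothetical.
import re
import requests

FORM_URL = "https://example.test/profile/edit"

def get_fresh_token(session):
    page = session.get(FORM_URL, timeout=10).text
    # Assumes the token sits in a hidden input named "csrf_token".
    match = re.search(r'name="csrf_token"\s+value="([^"]+)"', page)
    return match.group(1) if match else None

def submit_with_token(session, payload):
    data = {"display_name": payload, "csrf_token": get_fresh_token(session)}
    return session.post(FORM_URL, data=data, timeout=10)

if __name__ == "__main__":
    s = requests.Session()
    for vector in ["<script>alert(1)</script>", "' OR '1'='1"]:
        resp = submit_with_token(s, vector)
        print(vector, "->", resp.status_code)
```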

Then you have non-standard error messages; these would be for user interaction, or to make the GUI look a little bit better. Generally speaking, if you try to visit a URL and the page doesn't exist, you'll get a 404 error. I have seen some instances where an application will give you a 200 OK web response and then just say on the page, in text, "404 page not found". That confuses the hell out of the scanning tools, because when you try to crawl, or if you're trying to use DirBuster, for example, to discover extra layers of the application that you haven't found before, it will just think everything's real and it will give you a list full of junk. The only real way around this is to review the scope before you start the scan.
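One pragmatic check, and again this is my own sketch rather than anything shown in the talk, is to request a path that definitely should not exist and compare other responses against that "soft 404" fingerprint; the base URL and candidate paths are hypothetical.

```python
# Sketch: detect "soft 404" pages that return 200 OK with a not-found message.
# Base URL is hypothetical; real scanners use fuzzier similarity measures.
import uuid
import requests

BASE = "https://example.test"

def soft_404_fingerprint():
    # A random path should not exist; whatever comes back is the not-found page.
    bogus = f"{BASE}/{uuid.uuid4().hex}"
    resp = requests.get(bogus, timeout=10)
    return resp.status_code, resp.text

def page_really_exists(path):
    status, not_found_body = soft_404_fingerprint()
    resp = requests.get(BASE + path, timeout=10)
    if resp.status_code == 404:
        return False
    # If a 200 response looks just like the not-found page, treat it as missing.
    return not (status == 200 and resp.text.strip() == not_found_body.strip())

if __name__ == "__main__":
    for candidate in ["/admin", "/backup", "/definitely-missing"]:
        print(candidate, page_really_exists(candidate))
```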

Some tools have URL rewrite rules that dynamically generate URLs and deal with this very easily, but generally speaking the only way to make sure you don't fall victim to something like this is to review the scan scope before you actually scan the URLs and the parameters. It's a pain, but it works. Then you have non-standard protocols, different pieces of web architecture that were not so common a few years ago but are definitely getting a lot more common. You have WebSockets and different types of APIs. I know Burp by default handles WebSockets now, and so does ZAP, but interestingly enough Burp Suite Pro have started to update their WebSockets features and you can intercept and replay WebSocket requests, which is quite

cool, because it moves so quickly that it was difficult to do so in the past. One tip for dealing with these is to daisy-chain proxies. If, for example, you are using a tool that cannot import API requests, or it can't parse WSDLs or something like that, and maybe you want to proxy to different web app scanning tools, you can daisy-chain them: if you add an upstream proxy and point it at another testing tool, you might be able to manipulate it and work around it that way. You then have the emerging issues, what I call the other guys; these are other issues that are, generally speaking, happening more and more regularly now.
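For illustration, here is a sketch (not from the talk) of the daisy-chaining idea from the client side: a request built in code, for instance from a parsed WSDL or API spec, is sent through a local intercepting proxy so a second tool can see and fuzz it. The proxy address, endpoint and SOAP body are hypothetical, and certificate verification is disabled only because the proxy presents its own CA.

```python
# Sketch: send hand-built API/SOAP requests through a local intercepting proxy
# (e.g. Burp or ZAP listening on 127.0.0.1:8080) so another tool can scan them.
# Endpoint, body and proxy address are hypothetical.
import requests

PROXIES = {"http": "http://127.0.0.1:8080", "https": "http://127.0.0.1:8080"}
SOAP_ENDPOINT = "https://example.test/ws/OrderService"
SOAP_BODY = """<?xml version="1.0"?>
<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
  <soap:Body><GetOrder><orderId>1</orderId></GetOrder></soap:Body>
</soap:Envelope>"""

resp = requests.post(
    SOAP_ENDPOINT,
    data=SOAP_BODY,
    headers={"Content-Type": "text/xml; charset=utf-8", "SOAPAction": "GetOrder"},
    proxies=PROXIES,
    verify=False,   # the intercepting proxy presents its own certificate
    timeout=15,
)
print(resp.status_code)
```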

The first one is the unique-name check: where the app does not accept more than one entry in the database where the parameter in question has the same name or is a duplicated value. In this example, if you look down at the very bottom, in the body of the request you'll see appointment name, first name and second name. By the way the scanning works, it will inject into each parameter in turn: it will do all of its tests on parameter one, it will move to first name and do all the tests, and then it will move on to second name and do all of its tests. In this case, the

tests for the first parameter will work for the parameter highlighted in bold, appointment name, because the content is changing each time. However, when you move on to the second parameter, which in this case is the first name, and the third parameter, which is the second name, you will get "sorry, an appointment of that name already exists". That means for the first parameter all the tests will be valid, but the second parameter won't be and the third parameter won't be, because the application is stopping you from creating an appointment with that name. So you have to be able to iterate through; in this case, there are some tools that allow you to do this, some not.

This is just something to be aware of for your own testing. You'll see highlighted in red "Board Meeting 1", the first parameter, and then the injected script in the second parameter, so the web application should successfully accept that request and give you the response. When you're testing the third parameter, you then have to give the first one a different name so that it accepts it. So you see it gets very complex very quickly, especially when it comes to different parameters.
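A sketch of how a tester might script around that uniqueness constraint (mine, with a hypothetical endpoint and field names) is to generate a fresh value for the constrained field on every injection attempt:

```python
# Sketch: fuzz one field at a time while keeping the uniqueness-constrained
# "appointment_name" field fresh on every attempt, so the app keeps accepting
# the request. Endpoint and field names are hypothetical.
import itertools
import requests

URL = "https://example.test/appointments"
VECTORS = ["<script>alert(1)</script>", "' OR '1'='1"]
counter = itertools.count(1)

def base_record():
    # New unique name each time, e.g. "Board Meeting 17".
    return {"appointment_name": f"Board Meeting {next(counter)}",
            "first_name": "Alice", "second_name": "Smith"}

def fuzz_field(field):
    results = []
    for vector in VECTORS:
        record = base_record()
        record[field] = vector
        resp = requests.post(URL, data=record, timeout=10)
        results.append((field, vector, resp.status_code))
    return results

if __name__ == "__main__":
    for field in ("first_name", "second_name"):
        for row in fuzz_field(field):
            print(row)
```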

And then finally you have component security. As we previously saw, PHP and Apache are the biggest causes of component-related issues within modern web application scanning. Some scanning tools will do this by default; for some you'll have to install an extra plugin for it. I know Acunetix does it by default; Burp Suite will have, is it the Extender or the BApp Store, where you can add extra checks, I think it's the BApp Store anyway. So for example, if you have a WordPress application you're going to run WPScan: this will check different areas within that application where the plugins it's using might be outdated, so the plugins themselves will be checked. Something to think about when you're doing your scans, or when you're asking your vulnerability assessment vendor, that's the word I was looking for, thank you. Another thing to be aware of: personalize

and contextualize the scans. The default policies generally are not enough: they don't specify or remove certain things like the parameters to include or skip, the actual insertion points in the requests. Of course, in a web request you have the headers and you have the body. Some headers you may not want to test because it causes you to be kicked out or causes your account to be locked; if you have a web application that's very simply set up and you only have one session ID in the cookie and nothing else, and you start testing that, you're going to invalidate your session. So these are things that you have to be aware of.
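As a rough sketch of that kind of personalization (not any particular product's format; all names are hypothetical), a scan policy can be expressed as a small config that whitelists the insertion points to fuzz and blacklists the ones that break the session:

```python
# Sketch: a hand-rolled scan policy marking which insertion points to fuzz
# and which to leave alone. Names are hypothetical, not any tool's real format.
SCAN_POLICY = {
    "fuzz_query_params": ["q", "page", "sort"],
    "fuzz_body_params": ["first_name", "second_name", "comment"],
    "skip_params": ["csrf_token"],                # never mangle the anti-CSRF token
    "skip_headers": ["Cookie", "Authorization"],  # fuzzing these invalidates the session
    "fuzz_headers": ["User-Agent", "Referer"],
    "out_of_scope_paths": ["/logout", "/account/delete", "/contact"],
}

def insertion_points(request_params, request_headers, path):
    """Yield (location, name) pairs the scanner is allowed to fuzz."""
    if any(path.startswith(p) for p in SCAN_POLICY["out_of_scope_paths"]):
        return
    for name in request_params:
        if name not in SCAN_POLICY["skip_params"]:
            yield ("param", name)
    for name in request_headers:
        if name in SCAN_POLICY["fuzz_headers"]:
            yield ("header", name)

if __name__ == "__main__":
    for point in insertion_points({"q": "x", "csrf_token": "abc"},
                                  {"Cookie": "sid=1", "User-Agent": "scanner"},
                                  "/search"):
        print(point)
```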

Crawl depth limits and recursiveness matter for applications that are quite large. If you have an application that's written in multiple languages, you don't want to test English, French and whatever else it may be, so you can remove them to make the scan go a little bit quicker. URL rewrite rules we spoke about; deeming specific folders and parts as out of scope can help speed up the scan and avoid the aforementioned limitations. So that leads on to my final point on this section: generally speaking, out of the OWASP Top Ten, what the web application vulnerability scanning tools are good at testing for are things like injection, XXE, misconfigurations and cross-site scripting, but there are other areas that

they are definitely not, not yet anyway, so good at checking for. One of them is broken authentication, not something you can really check unless you do it manually, although they are trying to build that sort of testing into their scanning tools; deserialization; and of course logging and monitoring can't be checked from the outside in, that's something that you have to check internally. For unauthenticated applications, generally speaking, the automated tools are fine, because once you add authentication, user accounts and user profiles it adds so much more complexity to the apps that they have to be done with the proxy scanning tools. In conclusion, then, my findings were: the default policies are not enough,

testing has to be customized. It needs to be asked of you or your own team who are performing the testing; if not, contact your vendor and see what sort of testing they're doing. Use more than one tool: some tools are really good at finding certain things in specific languages, and the more tools you use the better, although of course that can be an issue depending on your budget or your finances within your organization. The more complex an app, the more suited to a proxy tool it is. And investigate your tools and what they're doing by proxying them and daisy-chaining them: you can create an upstream proxy and you can see exactly what they are doing.

I know in Burp you have the session tracer, which literally shows you every request and response that it sends. My final conclusion, then, is: check the OWASP web application security scanner evaluation criteria project. That's a bit of a mouthful, but it will give you a list of criteria against which you can check the scanning tools you're investigating, and help you decide what's best for you. So that's it. [Applause] Any questions? Go ahead. Yep. Yes, sure, sure.

So the question was, with regards to daisy-chaining, why would you daisy-chain, with regards to SoapUI and SOAP APIs. I've seen tools that cannot parse a WSDL they're given, or they cannot parse the XML of an API, so because they can't do that, you can't build the requests that are necessary for it to work: you have to have the right content length, you have to have the right formatting, the right parameters, all that sort of stuff. So if you can build it in a tool that can parse it and use your scanning tool as a proxy, you can pass it through and then you can test it from there.

It's just in order for you to be able to form the requests in the right way, so that you know your test is valid. Yeah, so if you use SoapUI and you connect it through Burp Suite you can then check the API requests. Now, it just so happens that Burp is very well capable of parsing WSDLs and APIs, so you don't need to do that, but if you want to play around with it you can do it that way. Any other ones? Yep, this gentleman here.

Sure, that's a very good question. The question was: can scanners pick up insecure direct object reference vulnerabilities? They're in the very early stages of being able to do so, because that's business logic testing, right, and that's where the manual penetration testing comes in for many things. Some of the most interesting vulnerabilities I've found during pen tests have been those where you pull back a document, you see an ID, and you're like, hmm, I wonder if I give it this one will it work, and it works. But a scanner can't test that. So generally speaking they can't, but they are beginning to do it, and the reason for that is because it's very specific to

that application; it's not something you can write into a scanning tool. It would be cool if you could, but then that would remove a lot of the need for manual business logic testing. So that's the answer. Yeah, go ahead, yep.

[Music] Not that I've seen, but my knowledge on this will be a little bit outdated, so it could be possible. Sorry, I actually have to repeat the question: the question was, do any of the tools show you each parameter that's being sent in a request? Some of them will give you GET and POST; I'm not sure if they go into so much detail as to give you the types of parameters that are being sent. So the next question is: are there any tools that demonstrate the exploitation of a weakness that might exist in a web app? And I think that gentleman has an answer

for you. Nexpose.

Exactly. Well, if you were at the deepfake talk then you know that seeing is not believing anymore. Yeah, so the gentleman said that Nexpose has a couple of modules you can use to actually exploit, and that would solve that problem. Also, depending on whether you're using a managed security service vendor or not, some of them will actually give you evidence of the exploitation: where they perform the cross-site scripting to validate it, they will put a screenshot or maybe even a video as part of the finding. The tools themselves don't, because the tool doesn't know what it can exploit and what it shouldn't exploit. Like, if you find a

SQL injection, are you going to immediately run sqlmap on it? Because you could cause some serious damage if you do that, right? So that's where the human element comes into it. So yeah, hopefully that answers that. Cheers, thank you very much. [Applause]