
Hello everyone. As Nikhil said, I'm Carrie Hooper; you can call me Hoop or Carrie or whatever, and I'm pleased to be here at BSides San Antonio. Big thanks to the conference organizers; they've been working hard, and a huge shout-out to them, because an event like this takes quite a bit of coordination to set up. I love BSides and I love security conferences. I love that we can teach each other things and at the same time learn from different perspectives, from different people in different roles. So please help me do that: I hope to teach you some things during this presentation, but let's engage in conversation in the breakout room and chat about this stuff. I would love to learn from you as well.

Let me give you some context real quick: why this topic, why HTTP desynchronization attacks? I first learned about this attack in 2019, at DEF CON 27, where I saw James Kettle's talk about HTTP desynchronization. It took me a while to wrap my head around it; it's a complicated concept, and there were a few key details about HTTP that I needed to understand before it fully clicked. Later on I saw many people at work, blue teamers and red teamers alike, who didn't quite get what this was or how to exploit it. So what I wanted to do here is create a zero-to-hero presentation: build up from the building blocks of HTTP's features, introduce the desynchronization attack, and give a few demos at the end.

First off, who am I? As I said before, I'm Carrie Hooper. You can find me on Twitter at @nopantsrootdance, and I'm a pen tester on a red team. I love learning, I love golf, and I love finding bugs, though that doesn't happen as often as it should. And I absolutely hate Zoom meetings; I'm one of the ones who wants to go back to the office and can't wait. I'm also a veteran of the US Army, a combat veteran.

So, overview: what are we going to learn? By the end of this presentation you, the audience, should understand: one, the basics of the HTTP protocol, including a few of its key features, chunked encoding (Transfer-Encoding: chunked) and HTTP pipelining. We'll discuss what HTTP request smuggling is, what a desynchronization attack is, and what a desync between a front end and a back end looks like. And last, we'll have two demos; four demos in total will be available to you. I'll post the link later on, and you can download each demo in GIF format, which might make things a little
bit clearer than watching over Crowdcast; you might get a little more out of them, and you can ask me questions about them later on.

So let's talk about HTTP and introduce it from the ground up. HTTP stands for Hypertext Transfer Protocol; the P stands for protocol. It's just a way of communicating, generally from a client to a server, and that communication happens over TCP, which means it's a stateful connection. Historically, HTTP was engineered just to fetch resources in a human-readable format. By human readable I mean that if you look at an HTTP request in Wireshark, or in netcat, or through a transparent proxy such as Burp Suite or ZAP, you can read the words and more or less understand what it's saying. Generally, HTTP consists of a request ("can I have this thing, can I have mypage.html?") and a response, which in this very simple example is an HTML response. In addition to HTML you might also get CSS or JavaScript, and the browser assembles those into the page you look at on the internet.

There are five major components of an HTTP message: the method, the path, the version, the headers, and the body. The method, which some of y'all might know as the verbs, is something like GET, POST, OPTIONS, or HEAD; there are a number of them, but let's not list them all. Then there's the path, which represents the resource you want: a slash is the root resource, but it could be login.html, admin.php, etc. There's also the version of the protocol, which is self-explanatory. Headers may appear in either requests or responses, and there are many types of them, but all you really need to understand is that headers are essentially metadata about the communication. They shouldn't dictate so much which resources you get as how you get them. And last, there's the body of the HTTP request or response; we'll cover that one later.

Version 0.9 came out circa 1990, and I say circa because it wasn't formally documented in a way everyone accepted; there were different implementations of the protocol. There were no headers, only HTML and text. Features were added by various organizations, but there were extreme interoperability issues: one network's requests were different from another network's servers, and it really didn't work. That created the need for version 1.0. In 1996, RFC 1945 came out, specifying HTTP/1.0 as the standard.
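To make those five components concrete, here's a tiny Python sketch that parses a raw request. This is just an illustration; the host, path, and header values are made up, and real servers parse far more strictly.

```python
# A minimal raw HTTP/1.1 request: first line holds method, path, and
# version; headers follow, one per line; a blank line ends the headers.
# Every line break is a CRLF (\r\n), two bytes.
raw = (
    "GET /login.html HTTP/1.1\r\n"
    "Host: example.com\r\n"
    "User-Agent: demo\r\n"
    "\r\n"
)

def parse_request(text: str):
    # Split the head (first line + headers) from the body at the blank line.
    head, _, body = text.partition("\r\n\r\n")
    lines = head.split("\r\n")
    method, path, version = lines[0].split(" ", 2)
    headers = dict(line.split(": ", 1) for line in lines[1:])
    return method, path, version, headers, body

method, path, version, headers, body = parse_request(raw)
```

For a bodiless GET like this one, the body comes back empty; we'll see later how Content-Length and chunked encoding mark where a body ends.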
For those of you who might not know, RFC stands for Request for Comments, and an RFC can be thought of as a written standard or guideline for the protocol: it dictates what should or shouldn't, must or must not, be done when communicating via HTTP. The standard implemented such features as versioning; it mandated the version within the initial GET request. It also specified how headers should work; these headers might be in requests or responses, and recall that headers are just metadata about the HTTP communication. And last, status codes: this particular example has a 200 OK, but some of you may know of 404 Not Found or 403 Forbidden; there are a number of status codes that mean different things.

Finally we have HTTP/1.1. This was an update to the 1.0 version, released across two RFCs between 1997 and 1999, and it specified features such as the Host header, which is used for virtual host routing; content negotiation; cache control; and connection reuse, the ability to use one TCP connection for multiple HTTP requests. What I'd like to focus on are two features: HTTP pipelining and chunked encoding. Both will be extremely important later on. These are features of HTTP/1.1 that, when combined with certain architectures, can cause a desynchronization.
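As a quick preview of what pipelining looks like on the wire, here's a sketch in Python. The paths and host are invented, and the "splitter" is deliberately naive (it only works for bodiless requests); it's just to show that several requests can sit back-to-back in one TCP stream, and that someone has to decide where each one ends.

```python
# Pipelining: the client writes multiple requests down one connection
# without waiting for responses in between.
def make_get(path: str, host: str) -> bytes:
    return f"GET {path} HTTP/1.1\r\nHost: {host}\r\n\r\n".encode()

# Two requests, back-to-back, as one byte stream.
pipeline = make_get("/a.html", "example.com") + make_get("/b.html", "example.com")

# A bodiless request ends at the blank line, so a (naive) server can
# split the stream on the empty-line separator. Deciding where one
# request ends and the next begins is exactly the parsing job that
# desync attacks abuse.
requests = [r for r in pipeline.split(b"\r\n\r\n") if r]
```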
And some of those desynchronizations can be exploited.

Next, I'd be remiss if I didn't mention HTTPS, HTTP/2, and HTTP/3. These exist; they're not exactly pertinent to this discussion, but I'll cover them quickly. HTTPS is just HTTP wrapped in encryption. Recall that HTTP is a plain-text, human-readable protocol; HTTPS adds a session-layer transport for that clear text. That's why, when you visit an HTTPS site through a transparent proxy such as Burp Suite, you can still see the text unencrypted: the proxy strips the session-layer transport mechanism, which may be SSL or TLS (it should be TLS nowadays), but those are both session transport mechanisms. HTTP/2 is also a thing. It fundamentally changes the way HTTP is transported, and it is not human readable, but it's much faster: it supports data compression and header compression alike, and it doesn't necessarily wait for the client to send all of its requests; if the client requests one thing, the server may send many different responses, even without corresponding requests. That's out of scope for this presentation, as is HTTP/3, which in fact is not yet documented in a finalized RFC; it hasn't left draft status, it's not yet finalized or published,
but HTTP/3 is HTTP over QUIC, which was developed at Google around 2012. You can think of QUIC as TCP 2.0: a replacement for TCP, ideally a more secure TCP. Really cool, but outside the scope of this presentation.

There are a few things we should establish before getting into a topic as complex as HTTP request smuggling, and one of those is HTTP connections. The simplest connection for an HTTP request starts with a TCP handshake; those of you who took a computer science class may recall SYN, SYN-ACK, ACK, which establishes the connection. Then there's an HTTP request, an HTTP response, and then a TCP termination handshake. So that's one TCP connection per HTTP request, and as you can imagine, it's extremely inefficient. HTTP/1.1 therefore introduced TCP connection reuse, a much more efficient use of sockets where one TCP connection can carry many different requests and responses in succession. One thing this also allows for is HTTP pipelining, which I'll show in one of the images in the next few slides. So this first diagram is the least efficient method, short-lived connections: each of those blue and yellow bands on the screen is a separate TCP connection, established and then torn down, with one HTTP request per TCP connection. With persistent connections, or connection reuse, a connection is established, then request, response, request, response, request, response, and then the TCP connection is terminated; much more efficient. But let's go one step further with HTTP pipelining. This makes use of the same TCP connection, but the client doesn't wait for HTTP responses, which adds yet another layer of speed. In this manner a client can say "give me this and that and this," and the server can take its time, parse through those requests, figure out which resources it needs to return, and then return them to the client.

The second thing we must discuss is HTTP architecture. I'll go over this briefly; this is more for those in the audience who may not
be familiar with enterprise-level architecture. Typically there's going to be more than just a browser and a web server, but for the purposes of this presentation, let's think of it as an abstraction with a front end and a back end. On the left-hand side of the diagram (this is PortSwigger's diagram, by the way) there are clients; these can be users, but more importantly, they're using browsers. The browsers make requests, which go through some sort of front-end web mechanism or mechanisms. This might be a firewall, a load balancer, a proxy, or a web server, and that front end brokers the connection to the back end, usually a web server that actually provides the resources. This kind of architecture has many pros: it helps with the availability of the platform overall, and it helps with the speed of the platform overall, especially when the front end caches some resources or responses. We'll keep thinking in terms of this front-end and back-end abstraction for the rest of the presentation.

Last, let's talk about transferring HTTP messages. This is the thing I glossed over when describing the components of an HTTP message: the HTTP
body. We talked about the first line, and we talked about headers, of which you can have none, one, or many; finally, there's the payload, which is called the body. When parsing these requests, the server or HTTP middleware needs to understand where a request begins and where it ends, and they do that in three main ways, which I'll describe here.

The most common way is by looking at Content-Length. Each of the lines in the HTTP request shown is separated not just by a return but by a carriage return plus line feed. These are two separate bytes, a carriage return byte and a line feed byte, 0x0D and 0x0A in hex, or colloquially \r\n. That's important to understand. The Content-Length is given in a header, and it states the number of bytes to expect; there are 27 bytes in this particular example. After the 27th byte, the message ends, and there may be a new request right after it if it's pipelined.

Next, multipart form data. Those of you who have inspected traffic within a transparent proxy such as Burp Suite, or looked at, say, a file-upload capture in Wireshark, may recognize this one. Instead of parameters delimited by ampersands as key-value pairs, in multipart form data the parameters are delimited by some sort of boundary, and this boundary is defined in the Content-Type header. You can see the boundaries on the slide: it starts with the boundary, then there's a value, then another boundary, and to end, it's the boundary followed by two dashes. This is just another way of transmitting data, another way for servers to interpret where a message ends.

And last, probably the least commonly known, and the one that's tripped up a lot of blue teamers and pen testers alike: HTTP message bodies can have an encoding. HTTP/1.1 implemented chunked encoding, Transfer-Encoding: chunked. This is used when the server or client (both can send chunked encoding) doesn't necessarily know in advance how long the message is going to be, so it's treated more like a stream: we keep sending bytes until we reach an end. Each chunk is delimited by a carriage return line feed, and each chunk gives the number of bytes to expect followed by the payload. Look at the diagram: the header, which again is metadata about the request, is Transfer-Encoding: chunked, which says we're doing chunked encoding in this request. Line 5 has a 7, and there are seven bytes in "Mozilla"; line 7 has a 9, and there are nine bytes in "Developer"; and so on and so on, all the way until it reaches zero. The zero-byte chunk ends the message. That zero is extremely important in chunked encoding, and it's also key to how this attack works. And again, chunking can appear in requests or responses.

All right, now what we've all been waiting for: 18 minutes in, and we're finally getting to HTTP smuggling, HTTP splitting, and HTTP desynchronization. What are all of these, and how do they relate?
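Before we dig in, the chunked framing described above can be sketched in a few lines of Python. This is a rough illustration, not any particular server's implementation, using the Mozilla/Developer chunks from the slide.

```python
def encode_chunked(parts):
    # Each chunk: size in hex, CRLF, data, CRLF.
    # A zero-length chunk terminates the message.
    out = b""
    for p in parts:
        out += f"{len(p):x}".encode() + b"\r\n" + p + b"\r\n"
    return out + b"0\r\n\r\n"

def decode_chunked(data: bytes) -> bytes:
    body, rest = b"", data
    while True:
        size_line, rest = rest.split(b"\r\n", 1)
        size = int(size_line, 16)
        if size == 0:              # the zero-byte chunk ends the message
            return body
        body = body + rest[:size]
        rest = rest[size + 2:]     # skip the chunk's trailing CRLF

msg = encode_chunked([b"Mozilla", b"Developer"])
```

"Mozilla" is seven bytes and "Developer" is nine, so the encoded message reads 7, Mozilla, 9, Developer, 0; the zero chunk is what tells a chunked-aware parser to stop.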
You may have read these terms in a blog and thought, "oh no, that's too complicated," or looked into it and had your eyes glaze over. Here we're going to examine what these attacks actually look like, and to do that, let's take a look at the history.

2005 was the first time HTTP request smuggling was documented and published, by Watchfire, a security company, in a white paper. The link is on the slide; it's a short link that redirects to cgisecurity.com. They first released research on how to smuggle an HTTP request, with a certain setup that I'll get into later, and the impact was that they were able to deny service to certain web pages and to poison the web cache, which was a super cool, novel technique, especially for 2005. It kind of ended there; those bugs were patched, and then it went silent for 11 years.

Eleven years later, at DEF CON 24, regilero gave his presentation about HTTP request smuggling, "Hiding Wookiees in HTTP." It was not a popular talk, and I don't think it got the recognition it deserved for completely turning the HTTP protocol on its head. He found vulnerabilities in Golang, Varnish, and a number of other middleware and web servers, but the main thing he was missing was reliable weaponization, and a reliable ability to detect this vulnerability from a black-box perspective. He was infamously quoted as saying "you will not earn bounties for HTTP smuggling."

James Kettle proved him wrong three years later. He goes by albinowax on Twitter; he spoke at Black Hat 2019 and DEF CON 27, and the white paper is linked here. One of James Kettle's novel achievements was coding a custom tool, a Burp Suite extension (or a series of Burp Suite extensions) that could both weaponize and reliably detect this issue, this ability to smuggle HTTP requests and cause a desynchronization. He earned over $80k in bug bounties, and he released all the tools necessary to the public, no paid tools or anything; the fact that he received bounties, I think, was attractive to a lot of the audience.

Two years later brings us to this year. Last month, James Kettle published on Twitter that he has identified some request smuggling vulnerabilities in HTTP/2 itself, which I'm really excited about, and in addition, Emil Lerner also gave a presentation; they kind of had simultaneous
research going in the same direction. I haven't done a whole lot of research into HTTP/2 request smuggling, so I don't intend to cover it in this presentation; just know that it exists, but we won't go into it today.

All right, the attack: what would an attack look like when we're smuggling an HTTP request? This follows Watchfire's example, with a browser on the left side, a web proxy in the center, and a web server on the right-hand side of the slide; again, the same sort of architecture we've abstracted. Consider the following request. All three of these colored texts are sent by the same browser, and to the human eye (remember, HTTP is human readable) this looks like three separate requests: a blue POST request, a purple GET request to poison.html, and a red request down below. However, those who know a bit about the HTTP protocol might have noticed something tricky about this request: there are two Content-Length headers within the blue request. What's that all about? Well, when these appliances parse all of this, with the requests sent one after another, they have to determine where one begins and the other ends, and part of how they do that, as we talked about before, is with Content-Length headers and Transfer-Encoding headers; here we're just dealing with Content-Length headers. What Watchfire found in their research was that the SunONE web proxy was programmed to take the very last Content-Length header presented and go by that; it would just throw out the other one. So from its perspective, the web proxy saw the blue request as an entire request with the purple in its POST body: it counted 44 bytes after that first carriage return line feed, and the Watchfire engineers happened to craft the purple request to be exactly 44 bytes. So it sees request number one as blue plus purple, and then request number two, the red, as an entirely separate HTTP request. There's the diagram: blue, purple, and then red, as HTTP requests going down the pipe.

However, the same vendor's web server interpreted it differently: it honored the first header that was given and threw out the rest. So the web server sees the blue as a Content-Length-zero request, and then sees the purple and the red concatenated together, and because of the way these requests were engineered, that was all seen as one single request. The trick was a bogus header, "Bla", which you'll notice on line 10: the first line of the red request then became a header value, so it didn't really count; the rest of it was headers, and the server saw this as one complete request. So the back end web server sees this completely differently: request number one is the blue, and request number two is the purple and red sandwiched together.

Okay, great, what does this actually get us? The web proxy sees a blue and a red, and the web server sees a blue and a purple. This difference in behavior, between request number two from the proxy's perspective getting login.html and request number two from the server's perspective getting poison.html, causes an issue when there's caching involved, they found. The attacker sends this malicious request asking for login.html, but the back end web server doesn't see a request for login.html; it sees a request for poison.html. If that's a 404, the web proxy returns Not Found, it doesn't exist on the server, but the response is cached. The next time a victim goes to log in to that web page, they get a 404 Not Found, because they're getting what's cached in the web proxy. Pretty cool, right? So what's the actual impact? We see a
denial of service: we're able to deny people logging into this website, or deny service to really any page we can think of. And then there's web cache poisoning, which we also saw was possible.

All right, let's move on to James Kettle's research, because it's one of the coolest. He took this five steps further, and it's quite wonderful research; if you're interested, I'd recommend reading the actual white paper that was submitted to Black Hat. Again, we're smuggling HTTP in the same manner as Watchfire, except that due to changes in how these HTTP messages are handled by the different middleware, the front end and the back end, it causes a desynchronization where one side thinks one request is being sent and the other thinks another request is being sent. They may be mismatched, where one side thinks there are two requests and the other thinks there are three, and sends responses accordingly, and that causes a desynchronization and a lot of unexpected behavior. Let's go find out how that can be exploitable.

So again, we're back to the abstraction: we have client browsers, a front end, and a back end. Those boxes are different web requests, all being shoved down the same pipe using HTTP pipelining, and the front end sends them on to the back end, usually reusing connections for efficiency, but not always. So consider an attacker. The attacker sends a malformed HTTP request, and it gets shoved down the pipe; victims send requests through the front end, and they get shoved down the pipe to the back end. If the back end doesn't parse this exactly like the front end does, it's going to cause issues.

The first example is again a repeat from Watchfire. Why is this called a desync attack? What we're doing in this example is smuggling the "X" into the back end, and that causes a desynchronization. This really isn't request smuggling yet, but this PoC is a way to detect it. Now, you're not going to see this on all implementations; a lot of servers have been patched for this, but there are also a lot that are still vulnerable, and I've seen this in the wild. The front end ends up processing the first Content-Length header and sees all of this, the blue and the orange, as one request. The back end processes the second Content-Length header and sees only the blue. So where does the X go? The X sits in the buffer, kind of in limbo. With the very next request, the green request, that X gets prepended to the victim request, and the result is a 400 Bad Request, because "XPOST" is not a valid verb; it does not comply with the RFC. You know, straight to RFC jail.

Speaking of RFC jail, two Content-Length headers are also disallowed. However, a lot of HTTP appliances are programmed to be much more lenient with this behavior, because they see their job as being a high-availability system: get the information from place A to place B.

All right, enough with the two Content-Length headers. What about Transfer-Encoding, what about chunked encoding; how can that be exploited? James Kettle deemed this the CL.TE
attack, the Content-Length/Transfer-Encoding attack, where both of these headers appear. Remember, they're meant to do the same thing: to show a web server or web appliance where one HTTP request begins and ends. But if both are sent, which one will an appliance choose? Some may only use the first one; some may only use the last one; some may not even be aware of chunked encoding; some may be programmed not to use Content-Length at all. There are a number of different scenarios, but it's really the difference between the two sides that causes the desynchronization and creates an exploitable condition.

So in this example there's a Content-Length header and a Transfer-Encoding header, and there's the zero; remember that in chunked encoding, the zero terminates the message. In this example the front end interprets this as one entire request, because it prioritizes the Content-Length (recall there are carriage return line feeds in between). And when the victim sends their request, it again results in an "XPOST," because the back end is Transfer-Encoding aware, chunked-encoding aware: it keeps the X in limbo (what does it mean? it doesn't know) and prepends it to the very next request, causing an error.

All right, now the converse of this attack, TE.CL, where the front end and the back end kind of reverse roles. In this example the attacker again sends what looks like two requests, and actually smuggles a request inside: the orange request, a POST to /hopefully404. This is a PoC that James Kettle used, because /hopefully404 will probably return a 404 Not Found, since that page probably doesn't exist. So much in the same manner as before, let's count these out. We've got a Content-Length of four, but in this example the front end prioritizes Transfer-Encoding, and remember, a chunked message does not end until it reaches a zero-length chunk with its carriage return line feed. So this entire thing goes through the front end sandwiched as one request, and when it makes it to the back end, the back end doesn't know about chunked encoding; it's just a plain old Apache web server from 2010. It takes that Content-Length of four and terminates right after the "3f"; by the way, 3f is the number of bytes, in hex, of the orange request. So the back end terminates after the 3f, counting the 3, the f, the carriage return, and the line feed for a total of four bytes, and then it prepends that orange request to the beginning of the next victim request. And what happens when the victim gets the response? Regardless of whether they were going to login.php or admin.php or any kind of resource, they're going to get back a 404; they're going to get back the response to /hopefully404. In this way the attacker can control a victim's request; super cool stuff. Yep, any request.

So those are the main variants of the attacks. However, James Kettle came up with additional methods to cause that desynchronization. Many servers might traditionally reject the Content-Length plus Transfer-Encoding combination, but due to lax parsing or handling of these headers he was still able to cause desynchronizations. In each of these scenarios that you see on the screen, he mutated one of the headers:
you know, adding an X, adding extra spaces, adding extra tabs or newlines or carriage returns. All of these succeeded in some way or another across the internet in causing a desynchronization, and some of them ended up being exploitable. The root cause is that these HTTP systems have to be highly available; they're not necessarily designed for security, they're designed for performance and availability.

All right, demo time; this is the moment some of you have probably been waiting for. Feel free to take note of the short link on the screen; I will publish the demos later on, and you can find them in GIF format at www.hooperlabs.xyz (demos.zip). But let's talk through the demo first. Consider this situation: we have an attacker and a victim on the left-hand side, still with our front-end and back-end abstraction, but in this example the front end disallows users from accessing the /admin resource, so we're going to demonstrate a controls bypass. What will happen in the demo? The attacker smuggles two requests, the blue and the orange, as one complete request. This is done by encompassing the orange request inside the Content-Length, where the front end isn't chunked-encoding aware, so the front end treats it all as a single request. But the back end is chunked-encoding aware, and it happens to prioritize chunked encoding, so when it sees that zero followed by carriage return line feed, carriage return line feed, it says, okay, that's the first request. It holds the orange and prepends it to the very next request. Now, if the attacker can be that very next request, the green "victim" request, then the response given to them will be the response from the admin page, because the front end is the one controlling access to the admin page. So, to review: the front end sees two requests, the blue and the green, and the back end sees the blue and the orange, with the green request treated as POST data. In my opinion, a really cool example, but let's see it practically; I'll exit the presentation momentarily and run the demo.

All right, so as I said before, the front end doesn't support chunked encoding. This is a blog, actually created by the PortSwigger team; this is the PortSwigger Web Security Academy, and it's an awesome tool, one of the best tools I've seen for practicing your skills in this, and I
would highly recommend making an account so this is burp suite here on the on the side of the screen there what i'm doing is i'm just crafting a request right now i sent traffic from the browser through burp suite and this is what an http request might look like in real time i'm using the repeater tool and crafting my own post request with a single zero in the payload and i get a 200 okay that's that's a good thing that means it's working next let's go to the admin portal so i go to admin and it says path admin is blocked well we happen to know that this security is being is being um enforced by the front end
it's being enforced by the proxy so how can we smuggle something through to bypass that front end control and make that request into the back end ultimately revealing the admin panel so i put both a content length and a transfer encoding chunk header we include the zero in there because remember the zero is the completion of that chunked encoding and we're just going to smuggle and exit and see if that works so that's good news we got a 400 or 404 an error and notice we're sending the request over and over and over getting different responses this is one of the hallmarks of uh desynchronization when you can send the same requests and you're doing some smuggling behavior
and you get different responses each time there's clearly a desynchronization between the front end and the back end and that's when you need to poke further that's when you need to try to find out if it's actually exploitable so in this case we're going to craft our own request to the admin page and smuggle that request inside of the original request through the front end it's going to make its way to the back end which is chunked and coding aware and then that response to the admin page is going to get pre-pended or that response is going to be getting the very next victim request in this example the victim is going to be the attacker and this will be a way
to bypass controls but what i'm doing here is just using printf and wc within bash to calculate the number of bytes and then i'll send that request and when i send it it worked however it said the admin interface is only available if logged in as an administrator or requested from localhost here's one of the cool features of request smuggling you can put whatever host header you want in the smuggled request and that's what we're going to do here we add localhost in and the web server is not going to have any idea that this was a smuggled request it's not going to have any idea that this request did
not come from actual localhost so i calculate the content length of the payload again and this will allow us to get through the front end we send this request causing the desynchronization a 200 means the next request will have the result of that desync and there we go we get to the admin panel we have full control apparently according to this exercise so when you're looking for this you craft the request in such a way that both the content-length and the transfer-encoding are given and what you're looking for is different responses for
the same input and i think that's one of the really cool features of this exploitation so we bypassed controls we took over the admin panel great let's go to demo number two this is going to demonstrate a session takeover so whereas before we were just bypassing server-side controls this bug is so useful that you could also exploit arbitrary victims what will happen again with the front end and back end abstraction we have a little evil guy in the top left and in this example we have the same architecture as before however we have access to a forum
and that forum allows comments so again we'll send both the content-length and the transfer-encoding and we're going to smuggle that orange request in within the content-length header again the middleware or the front end is not chunked encoding aware it's only going to pay attention to the content-length but the back end knows about chunked encoding and prioritizes it so we're going to use that to cause a desynchronization prepend the orange request to the front of the victim request and cause the victim to post their own http request as a comment the back end is going to take a look at that chunk cut it off after the zero
and then the smuggled request is going to post everything as a comment and remember this includes all the headers the verb the host header everything including the session cookies and as some of you probably know if you have the session cookies you can completely take over an account so let's take a look at that demo all right again we have the web security academy on the right hand side and again this is a blog this time a slightly different web application this front end does not support chunked encoding but the back end prioritizes chunked encoding so we're going to smuggle a request to the back end
resulting in the victim posting a comment in order to do that first we route traffic through the http proxy and fill out a comment form to see what it looks like burp will let us take a look at that in plain text so i send that to repeater and i'm going to change some of the tab names generally i find this useful when testing rename the tabs do your due diligence because when you come back a week or a month later to the same project it's going to be quite difficult to understand where you left off one thing i also like doing is deleting any of the headers that don't really make sense or are just
extra this helps both for pocs and for my own sanity so this is the exact same poc as last time we have a zero and an x and a single content-length and we're getting all 200s that's expected until we add in that transfer-encoding chunked and now we're causing the desynchronization and getting different responses for the same input again this is how you detect desynchronization vulnerabilities manually all right next we actually want to smuggle the post of a comment inside of the smuggled request i'm going to copy and paste this in and delete the extraneous
headers again just to make things simpler otherwise it's a big long request i'm sending my own cookie the attacker's cookie and that will authorize the victim to make that post request note that we also use the same csrf token because it's generally tied to the user session we add an arbitrarily large content-length of 800 and that will be enough room for all of the victim's headers the first line the method the path the version all of this is going to get put in the comment we don't know how long that's going to be and then finally comment equals hacked and then a colon and after
that will be the victim's request so i'm going to send this a few times and because the desynchronization is happening i'll get delays the smuggled request has a content-length of 800 and if the very next request doesn't fit neatly into that 800 byte payload it hangs and that's what we saw so because we're poisoning a victim's request here what we'll be looking for is two 200 statuses in a row that means we sent the desynchronization request and received a 200 and then on the very next request while the victim was being exploited by that desync we sent another smuggled
request and got another 200 we successfully achieved that desynchronization we did not own ourselves and we ended up owning this random user so you can see in the demo we received the user's cookies all of their cookies all of their headers this did not come from my host it came from a bot so complete session takeover with this bug really cool stuff let's talk about impact so targets you cannot target someone directly unless they're the only other person in the world using this web server then maybe you can possibly target things that make automated requests however it's going to be difficult what you're doing is you're causing this
desynchronization but you're not really in control of who this exploits one of my favorite impacts of this is a redirect if you find an open redirect on a website then combined with http request smuggling you can smuggle that open redirect that redirect is prepended to the victim's request and they get arbitrarily redirected which is super cool same thing with cross-site scripting if you have reflected cross-site scripting or self cross-site scripting you can cause the desynchronization and get that smuggled request prepended to a victim's request and in that way you can attack the victim so a lot of these smaller bugs when combined with request smuggling when combined with this exploitable desynchronization this can
severely increase the impact of these bugs and that's one of the things about it i think is really cool you're only restricted by your own creativity as an attacker you can cause denial of service conditions at least for a significant portion of the user base you can take over accounts as we saw and you can also reveal hop-by-hop headers it wasn't completely obvious in that demo however if the front end is stripping off or putting on certain headers that might contain secrets you can also expose those with the same attack that we saw earlier so what tools would i use to exploit this
well as you saw i was doing it manually with burp suite you can do it with both pro or community and community is free by the way and nearly every bit as good as the pro version i would highly recommend playing around with it if you haven't but james kettle released two separate extensions worth talking about one is http request smuggler which serves to detect desynchronization conditions and the other is turbo intruder turbo intruder was his solution to detecting this bug he was able to code into turbo intruder the ability to pipeline requests in a much faster fashion so that
he could be the next request to that desync so that he could detect that desynchronization especially on servers that are getting a lot of traffic both of those are available in burp suite community by the way all right detection how the heck do we detect this well the baseline is inspect requests for rfc compliance the root cause of the issue is that messages are sent in a non-rfc-compliant way they don't follow the rules of the internet and the web appliances then try to interpret them in certain ways and it's the differences in the way that they're interpreted that are actually exploitable you can also detect this with source
code analysis but this will be extremely time intensive it means following the logic and making sure that each appliance parses in a way that's rfc compliant i don't see that as beneficial for many organizations unless you're bug hunting and looking for a specific vulnerability in a specific appliance however there are a number of mitigation techniques number one i would recommend using http/2 for back-end connections you can configure this with most web appliances though there have been desynchronization or http request smuggling vulnerabilities against http/2 reported recently by kettle and others they seem to be less severe and have less impact number two patch if you have an
appliance that is vulnerable to this there's a good chance that there's a cve for it that it's been reported and that there's a patch available so patch your appliances next strictly enforce rfc compliance and this is much easier said than done which is why i want to introduce aws http desync guardian an open source tool released by amazon one of the conference's sponsors by the way that essentially does that for you when it boils down to it it's taking a look at the rfc and then throwing away you know straight to jail any packet that does not comply with the rfc
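as a rough illustration of the idea (this is a toy rule, not desync guardian's actual implementation), the core check boils down to refusing ambiguous framing, for example a message that carries both content-length and transfer-encoding:

```python
# Toy version of the kind of rule an RFC-enforcing gatekeeper applies
# (simplified; AWS's real Desync Guardian classifies many more cases).
# Per RFC 7230 a message carrying both Transfer-Encoding and
# Content-Length is exactly the ambiguity smuggling exploits, so the
# safest move is to refuse it outright.

def rfc_allows(headers: dict[str, str]) -> bool:
    """Return False for requests with ambiguous body framing."""
    names = {name.strip().lower() for name in headers}
    if "transfer-encoding" in names and "content-length" in names:
        return False  # ambiguous framing: straight to jail
    return True

print(rfc_allows({"Content-Length": "6", "Transfer-Encoding": "chunked"}))
```

a request with only one framing header passes while the classic smuggling probe with both gets dropped before it can desynchronize anything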
so let's review this is a lot of information but let's review the main takeaways number one http 1.1 added a ton of features in an attempt to standardize the protocol but the two main features we were interested in were chunked encoding via the transfer-encoding chunked header and http pipelining which allowed much faster transport of http over one tcp connection not waiting for a response before sending additional http requests down the pipeline next we learned about smuggled requests smuggled or crafted requests may cause desynchronization between web components and this desynchronization causes unplanned behavior and in many conditions this behavior is exploitable so some questions for you as we
close out where else may this behavior be found maybe in different protocols could there be a desynchronization in different technologies maybe not http but another protocol or another technology altogether maybe on the operating system what about away from keyboard could this kind of behavior be exploited in real life maybe on a social engineering engagement and last can your organization currently detect these types of attacks maybe more importantly are they even aware of these types of attacks because the first step is knowledge in all of this and that's been my goal to provide
awareness to all of you to give you a little bit of a hands-on demonstration of what it might look like as an attacker and really share this cool bug that was popularized by james kettle in 2019 this concludes my presentation thank you all for coming thank you to the organizers i really appreciate it i will be posting links from the presentation in the discord chat and i'll be available for questions as well thank you all for coming i really appreciate it