
Thank you very much for being here. We are going to start the conference. The topic we are going to talk about is "The story of common security flaws found in mobile banking applications." Let me introduce the people who are going to present it: Gabriel and Prampeet. It's the first time they join us at this B-Sides event, and by tradition, each one has to take a shot of tequila.

Hello, good morning everyone. My name is Gabriel, and this is Prampeet. Thank you for attending the first talk, and for being here. Today we are going to present something we believe is very relevant: mobile applications, because they are gaining relevance in the market every day, in every industry, because of this so-called digital transformation. But it is even more important in banking, because of all the money that moves there. I have not heard of a single bank in Mexico or Latin America, or anywhere in the world, that does not want to have a mobile banking application today. A couple of months ago, ArcSan commissioned a study from an independent consultancy to do pen testing on a number of mobile banking applications. What Prampeet wants to share with you today is some of the methodology, how it was done, so that you can also apply it in your companies or in your different roles and do a good pen test of a mobile application; then to look at some of the findings from executing those tests; and of course to leave you with a message about what to think about when you are developing a mobile application and want to make it secure. Making a mobile app secure is not just having a username and password; it is not just having biometrics. Prampeet, thank you for coming, and the attendees are yours. If you have any questions and want to ask them in Spanish, I can help translate for Prampeet.

All right. Thank you. Thank you everybody for taking the time to come out here and hear us talk about some of
the security flaws that are quite common in many of the apps that are out there. I'm not sure of everything he just said, but one thing I want to make clear up front is that this is not something that ArcSan did on its own. We're not in the business of breaking other people's apps. We funded a study, and it was conducted by an independent third party; wherever there are caveats to that study, I'll talk about them in a second. As for myself, I've been involved with application security for six and a half years or so. The kind of work that we do is more around what's called app shielding. What that means is: if you've released an application, how do you prevent others from hacking it? That involves app shielding as well as something called white-box cryptography, and those are things we can talk about offline. So let me go over what we are going to talk through here. In the next 45 minutes or so, my goal is to talk primarily about the vulnerability research, but before we get to that, I want to cover the methodology: what we did, what we could have done, and what we omitted as a deliberate choice. There we go. So who conducted the study? So
like I said earlier, this was commissioned by ArcSan but conducted independently; we worked closely with a global advisory firm who carried it out. So how did they go about picking applications to reverse engineer, or to statically analyze for things that might be embedded in them? They chose about 30 applications from the Google Play Store. Although it might seem like it's all "financial services," I want to differentiate the different things within financial services as well. There are also a number of niches within pen testing an application: at a higher level there's network pen testing, and then within application pen testing itself you can do either static analysis or dynamic analysis. So there are lots of ways to attack an application, and for this study the focus was only static analysis. There's good reason for that, and I'll get to it in a second. Now, if you look at the financial institutions listed out there: a lot of people imagine that financial institutions are just the banks, but it's actually a very wide industry, made up of different verticals. It does, of course, include retail banking, but it also includes things like stock brokers and cryptocurrency wallets. In fact, a lot of the insurance companies you might be thinking of, whether life, health, or car insurance, also fall under the financial institution category, and things like peer-to-peer payments, which are very common with millennials these days, are covered in there too. So what we did is we went online and figured out the top-100 list for each of these segments on the pie chart. Once those top-100 lists were pulled, we went to the Google Play Store to figure out whether those companies actually had applications listed for public consumption. Then we proceeded to download the applications and began the process of reverse engineering them in order to do the static code analysis. Now, we wanted to be careful with this
study in the sense that we didn't want to focus on just a single market, and by far the biggest market for a lot of these applications is the United States. Which is why, inevitably, about 80% of the applications we selected came from US financial institutions. Of the others, 10% came from the UK and a similar number from the EU. We tried to achieve a certain level of global diversity, because we wanted to see whether certain trends happen only in the US, as opposed to the European Union or the UK. Then you also have to think about who is building these apps. Notice the figures about the number of employees: some of the organizations that put out these applications have 100,000-plus employees, whereas others have around 50. Especially with a lot of the startups, and particularly the crypto companies, what we call the crypto wallets, those are typically very small. So we tried to get a decent distribution of that as well. The last thing is that for this study we focused primarily on Android. Everything we're talking about came from the Google Play Store; none of it is from the Apple App Store. We don't have any relationship with Apple or anything, which is why we're not attacking them or anything
like that. But I don't want you to take that away from this. Don't imagine for a second that just because the study focused on Android apps, iOS apps are any different. Really, they are not any more secure, and the same issues that exist on Android also exist, to a large extent, on the iOS side. What would be different between iOS and Android? Just one thing: when you download an application from the Apple App Store, it's encrypted, so you have to remove the encryption from that application. And that's fairly easy; you need one thing, a jailbroken device. Jailbroken devices have been around ever since Apple put out the first iPhone. They haven't gone away, and Apple hasn't been able to make them go away. So in terms of security, the landscape is fairly even. There are other things beyond that which differentiate Apple and Google, but primarily, think of it as a level playing field for the most part. Now, there is another thing to consider. There are almost 2 billion Android device users out there; that's a lot of users, right? And there are about a billion and a quarter Windows installations out there. So even though it may seem, to many people who live in urban areas, that iOS is quite popular, that's not true: overall, Android is huge by far. That is to say, there are still two billion targets an attacker can go after. And the backends for many of these apps are the same anyway: if you have an iOS app and an Android app, they're still hitting the same backend. Even if your iOS app is completely secure, if the Android app is leaking that information anyway, the backend can be reached in any case. That's another reason the focus was on the Android side. Now, the third part of this equation: on the iOS side there's only the App Store, whereas on the Android side you have multiple stores, right? You
have third-party app stores. These are readily available markets where one can easily repackage an app, or even weaponize an Android app. This sort of weaponization is nothing new. I don't know if you're familiar with PayPal, but PayPal was recently attacked using malware on Android. Basically, somebody downloaded a battery optimization app, that battery optimization app had malware included with it, and using that, the attackers were able to bypass the two-factor authentication that PayPal required from its users. The last thing is, since there are a lot of young people here: to me, the way applications are delivered used to be a mystery. You get an APK, and for a non-security practitioner it's important to know that an APK by itself is no security. Just because it looks packaged, it's really just a zip file. I'm sure many of you have already renamed an APK to .zip and been able to unzip it. There's no protection just because you have an APK: unless you implement something at the binary level, within the binary included inside the APK, these are ultimately just compressed files. So how did we do it? Fairly straightforward. It did not take a long time to reverse engineer any of the apps, really. Let's talk a little bit about what was done to individual
apps. On average it took about eight and a half minutes to go from reverse engineering an app all the way to figuring out what's in there that shouldn't be. The first minute was really just using a tool called APK Extractor, which, ironically, is itself available for download on the Google Play Store. (It's not the tool I personally use; I just use adb pull.) That tool lets you take an APK file off your mobile device so you can work on it offline for your own purposes. There are different tools that can be used, like the ones mentioned on the slide, and for decompiling the apps we settled on something called MobSF. Mind you, as I name these, they are not things we recommend; they're all free, open-source tools, freely available to everybody, and nothing we particularly bless or are involved in in any way. The reason we used MobSF over many of the other, command-line-driven tools is that MobSF gives you a nice little GUI: everything is packaged together neatly in one spot. Finally, it took about four minutes to do the actual code analysis. So think about it: it took us four minutes to find API tokens, API keys, and private certificates, which are often just left in subdirectories within the APKs or in the code of the mobile application. It doesn't take a lot of time. Any script kiddie or even an unsophisticated adversary could easily do this, right? These skills are not very hard to pick up, and the apps are easily available to download as well. And the study authors are not a particularly adversarial bunch; it was just a way to conduct a study, with no malicious intent. So this is all very remedial stuff. There's one last thing I want to mention about APKs: remember, APKs are now a representation of your company's network edge. It used to be that everybody had to be
inside a company to access email and use corporately distributed applications. Well, those days are gone. Now people sit at a Starbucks accessing email and corporate assets using those same APKs. So the network edge is blurred: it doesn't end at the boundaries of the bank you work inside; it pretty much extends to your own mobile device, wherever you are. The attack surface is getting bigger. And just because this study was done using static analysis does not imply that static analysis is the extent of it; it's the first rung, the easiest thing to do. Like I said, we limited ourselves to static code analysis, which means we did not actually try to run the applications. It could easily have been taken further: we could have done dynamic analysis, installing each APK in an emulator, running it, and watching how it behaved. We didn't do any of that, and there's good reason: doing so would have meant hitting the backends of many financial institutions, and we don't want that. We don't want to create a panic for people who are just showing up to work and all of a sudden somebody's saying, "Hey, we were able to send some data to your backend and it looks like it was accepted." That's why we refrained from dynamic analysis. The other thing I want to mention: people often think, "I've released an APK and it's not debuggable." Generally speaking, that's very easy to circumvent, because you can just repackage the APK. It takes two steps: with apktool you decode the APK into a directory structure, modify the AndroidManifest file to make it debuggable, then repackage and re-sign it. Sideloading, again, is very easy. We could have taken it further; we didn't. The last thing on what attack surface was available to us is the notion of network interdiction. You have an app, and there's a backend. We could have installed tools like Burp Suite, configured a proxy and a soft access point, and interposed ourselves between the app and the backend. The mobile app's traffic thinks it's talking to the backend, but the attacker can view the full traffic. That typically happens when there's no certificate validation taking place. I know I'm talking about things like certificates here; those are the underpinnings, so do pay a lot of attention to them. Without that, your APK is going to be prone to things like man-in-the-middle attacks.
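The certificate-validation point can be made concrete. A common mitigation is certificate pinning: the app ships with the SHA-256 fingerprint of its backend's certificate and refuses to talk to anything else, so a Burp-style proxy certificate fails the check. This is a minimal sketch in Python rather than Android code; the host and fingerprint are placeholders, not values from the study.

```python
import hashlib
import ssl


def fingerprint(der_cert: bytes) -> str:
    """SHA-256 fingerprint of a DER-encoded certificate."""
    return hashlib.sha256(der_cert).hexdigest()


def is_pinned(der_cert: bytes, pinned_fingerprint: str) -> bool:
    """Accept the connection only if the presented certificate matches
    the fingerprint baked into the app at build time."""
    return fingerprint(der_cert) == pinned_fingerprint.lower()


def check_server(host: str, pinned_fingerprint: str, port: int = 443) -> bool:
    """Fetch the server's certificate and compare it against the pin.
    (Hypothetical usage -- 'host' would be your own backend.)"""
    pem = ssl.get_server_certificate((host, port))
    der = ssl.PEM_cert_to_DER_cert(pem)
    return is_pinned(der, pinned_fingerprint)
```

On Android itself the same idea is usually expressed declaratively, via a `<pin-set>` in the network security configuration or a library-level pinner, rather than hand-rolled like this.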
So let's shift gears and quickly run through the vulnerabilities, and the kinds of risks we measured the apps against. Remember, if you're doing a research study, you're going to have some goals, and these were ours. There's an organization called OWASP, the Open Web Application Security Project; if you are not familiar with its work, I encourage you to look it up. What they do is publish top-10 lists of vulnerabilities. They used to do that just for the web side; now they have a similar list for mobile as well. The vulnerabilities listed here (improper platform usage, insecure data storage, insecure communication, insecure authentication, insufficient cryptography) are not standards we came up with; they all come from the OWASP Mobile Top 10. Now, we did not actually run our study against those five, and there's good reason for that: they are all dynamic. We wanted to limit ourselves to the static ones, and of the ten categories OWASP publishes, three relate to static analysis of code. The categories in green, numbers 8 through 10, are the ones this exercise focused on. So if you look at code tampering, that is just messing with the code.
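The two-step repackaging mentioned earlier (decode with apktool, flip the debuggable flag, rebuild and re-sign) is one of the simplest forms of code tampering. After `apktool d app.apk`, the AndroidManifest.xml is plain text, so the edit is just an attribute change. A hedged Python sketch of that step, where the manifest path is whatever apktool produced:

```python
import xml.etree.ElementTree as ET

ANDROID_NS = "http://schemas.android.com/apk/res/android"


def make_debuggable(manifest_path: str) -> None:
    """Set android:debuggable="true" on the <application> element of an
    apktool-decoded AndroidManifest.xml. (Inside the APK the manifest is
    binary XML; apktool decodes it to plain text first.)"""
    ET.register_namespace("android", ANDROID_NS)
    tree = ET.parse(manifest_path)
    app = tree.getroot().find("application")
    app.set(f"{{{ANDROID_NS}}}debuggable", "true")
    tree.write(manifest_path, encoding="utf-8", xml_declaration=True)
```

Afterwards the attacker rebuilds with `apktool b` and re-signs with any key, which is exactly why a release build should never rely on that flag alone.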
Reverse engineering the code and making sense of the binary, looking for interesting areas, allows one to home in on where binary modifications should be made. For example, it's very typical for applications to have a function that checks whether the device is rooted. If you're not doing that today, you'll find once you're in the corporate world that corporations want to know where their apps are installed. And that check is very easy to patch out if there are no binary protections: all you have to do is search for the word "root" (or, in the case of iOS, "jailbreak"), find those functions, and patch them out. That's what code tampering refers to here. The second part is reverse engineering itself, which is possible whenever the app developer did not use any kind of binary protection. Being able to reverse engineer allows one not just to decompile, but to get essentially the original code back in the case of Android Java: if you start with an APK, you can reverse engineer it and get your Java source code back. This enables basically most of the other attacks you will see. The last thing on this page is what we're calling extraneous functionality. Typically developers will embed certain URLs, internal links, and things like that within an application, and that's just sitting there waiting to be discovered. So if you're involved with the security team of a mobile development organization, it's very important to pay attention to the OWASP Mobile Top 10, and if you were not aware of these threat vectors as a developer before, you should be now. Remember, hackers don't use apps the way you designed them; they're going to try to break the rules, and these are the top ten things they typically go after. So it's good as a developer to be aware of these risks. I mean,
think about how many QA organizations actually use the OWASP Mobile Top 10 as part of their test criteria. I'm not aware of too many, right? So this is a big open door, and it's mostly left open. All right, so now we're getting to the meat of the study: the vulnerabilities you see, how they can be exploited, and why they actually matter to financial institutions. Later we'll run through some of the problems in a little more depth. So those are the numbers, but I want you to be careful. Those numbers look scary, but percentages alone do not tell you the whole story, because the severity of the issues matters. It's important that we learn from those who are doing it wrong. Fewer findings in one app does not necessarily mean it's more secure than the others; it's really about quality rather than quantity, and you can't pat yourself on the back without understanding the severity of the findings. There's no such thing as a final win: security is always going to be a journey, you never reach the destination as long as you have an APK out there, and new systems and new attacks will continue to evolve. So don't assume there's no attack surface just because your app is, let's say, in the 3% instead of the 97%. Think about it this way: 90% of the apps tested shared services with other apps on the device, which leaves data from some of those apps accessible to other applications on the device. Typically that's because developers write code that writes to a shared cache or an SD card on the phone, and once you've written it, it's out there. Data stored in an easily accessible area of the mobile device is a common leakage point for many of the apps we found. When monitoring for those common leakage points, you generally want to look at the cache, at what the logs are outputting, at HTML5 data storage, and, with hybrid apps, at browser cookie objects. Obviously, like I said, severity does matter: some vulnerabilities are more devastating to find than others. So if I put my attacker hat on, there are really four areas of focus. One is the concept of inter-process communication.
One example of that on Android is Android Intents. Intents basically allow apps to talk to each other: an Intent is a command to an Android app telling it what to do, and it can be sent to an activity, to services, to broadcast receivers, to content providers. A lot of apps do not do intent filtering, and where they did, it was often too broad. One of the things we were able to do was indicate to apps that they should log their data in debug mode. Imagine you've released an app and it's printing logs at a certain level, like "error," but if an attacker can switch debug mode on, it starts spewing a lot more to logcat. That was one of the findings of the study. The next one is hard-coded crypto keys. This includes not just the crypto keys themselves, but also things like usernames, credentials, and private certificates that are often just baked in by developers; API tokens and secrets are available in there too. Hard-coding should generally be frowned upon for any crypto material. And be careful with URLs. The reason is that if you're a bank, you're going to have developer URLs: URLs used for testing during development that are not meant for public consumption. But when the app gets released, those URLs are still in there, so they're accessible to your attackers as well. Now, all these things I've talked about don't individually add up to an attack on their own. But they're all small crumbs, and if you have enough crumbs in an app, an attack is possible: it becomes possible to craft something like a curl statement, talk to the backend, and send data. We found enough information to do that in many of the apps. So what does this teach us? It teaches us that the attack vectors that were available 15 years ago are still being replicated in mobile applications today. Back in the day they were more on the web app side; the issues stay the same, they've just moved to mobile. That's part of the reason we want to evangelize about the security things you need to think about. So I want to go through some of the critical findings in a little more detail. This is an actual screenshot from the study, so we're not just making stuff up; it's legit and it's real. Think of web APIs: some company out there has a web-based API server, and you can connect to it using REST APIs. Every app has to connect to a backend somewhere, right? One thing I want to convey here is that we did not send any packets to the backends of the financial institutions; I don't want any of the FIs to think that the purpose of the study was to initiate some kind of panic on their end. We limited ourselves to the static stuff. But based on information like the API keys, private certificates, and tokens found in the apps, people naturally start thinking about abuse. The question often comes up: if someone has an API key, is it game over? No, not really, because the API key often just gets you past one authentication check or access control; it's not everything. So you cannot look at it as if it's game over.
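To see why a leaked key is a crumb rather than instant game over, consider what an attacker actually does with it: attach it to a request and see what the backend accepts. A sketch with Python's urllib, where the URL, header, and key are hypothetical placeholders, not anything from the study:

```python
import urllib.request

# Hypothetical values an attacker might recover from a decompiled APK.
API_BASE = "https://api.example-bank.test/v1/accounts"
LEAKED_KEY = "sk_live_0000_placeholder"


def build_probe(url: str, api_key: str) -> urllib.request.Request:
    """Build (but do not send) the kind of request an attacker would craft
    once an API key falls out of static analysis."""
    req = urllib.request.Request(url)
    req.add_header("Authorization", f"Bearer {api_key}")
    return req


probe = build_probe(API_BASE, LEAKED_KEY)
# urllib.request.urlopen(probe) would actually send it -- deliberately not
# done here, just as the study sent no packets to any real backend.
```

Whether such a probe succeeds then depends on what other checks (user authentication, rate limits, IP allow-lists) stand behind the key, which is exactly the "not game over" point.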
It's never just any one piece that leads to a breach; it's a bunch of different pieces. But having the API key and the credentials is very helpful in mounting an attack, so these are what I would describe as critical pieces of information, and for that reason we attach high severity to such vulnerabilities. Now, for those of you wondering what an API key is: it's just like a password. It's a key issued to the mobile app, or to a developer or third party, in order to give them access to your data or your service. Companies and SaaS platforms will issue you an API key that you hard-code into your app, and it allows you to talk to them and use the services they provide. We actually found so much information in some of these apps during reverse engineering that it would have been possible to craft a full-on attack; we could have sent data and contacted their backends. If you look at the actual screen now, these are the actual screenshots, and that's the reason they're redacted here; but this is evidence of the findings for the non-believers. Here you can see the secret key in the screenshot, you can see the expiration, the session token, and in some cases credentials as well. And if you're looking at the private certificate, that's very important: we were also able to find private certificates belonging to many of the payment processors out there. That's a pretty critical piece of information. All right, so what else did we find? We found a lot of customer data. What do we mean by that? Customer data is becoming a big deal: in Europe, GDPR fines you if you compromise customers' data. So beyond the API information, it was kind of shocking to actually find personally identifiable information in many of these apps. As authors, we did not draw any conclusions about what was actually going on here, but we did find it. This definitely needed more time to figure out, but there was no good reason for that data to be in there. Looking at the screenshot, you can see the person's name, first and last, their policy numbers, their addresses. And by the way, this was an insurance company, so you can also see issue dates and claim dates. A bunch of personally identifiable information. Just think of it this way: you might have interacted with an insurance company, and now your data is floating around somewhere in their app. It's kind of disturbing. The other thing that we found in a lot
of these apps is developer and QA URLs. There are developer, QA, and release URLs, and even credentials embedded in many of the apps: enough information to start interacting with the backend servers of some of these financial institutions. In case you're wondering whether we ever found working developer or QA URLs: we did. The answer shouldn't come as a shock; there were active URLs meant for developer usage embedded inside the applications. Again, we did not send any data to them; all we did was load them in a Chrome browser. Clearly the companies were trying to prevent those pages from being accessible to search engines; it was obvious they did not want them spidered by Google, and they did not show up in search results. But they showed up in the app. So then the question becomes: would a hacker have had access to these URLs any other way if they were not embedded? There was a time when they could have been found using what's called DNS reconnaissance (things like DNS zone transfers), which could have revealed them, but those problems got fixed long ago at the network layer. So this hard-candy-exterior, soft-interior situation is now mainly possible due to the way these apps are being crafted. The good news is that a lot of effort has already gone into the network layer; the work still to be done is on the app development side. So, "good code equals bad code." What does that mean? If you're crafting your application the way your computer science professors taught you, you're writing very good code, but it also reads like a manual to an attacker. An attacker does not have access to any manual for how your app was created, but meaningful function names and meaningful variable names are all helpful to an attacker too. That's what it means: when you write good code, it's actually bad for security in many cases. That's something to be aware of. And yes, it sounds like this is going to have an impact, at least on development teams, because if you have to rename all those things, you have to worry about how much work that's going to be. But there are different approaches; it's not something that's just left to the developers. There are tools available that will do the renaming and mitigate the impacts I talked about. That's why, if you look at Android, it ships with its own tool called ProGuard, which will rename things for that very reason: it anonymizes things without impacting the developers.
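For context, turning on that renaming in a typical Android Gradle build is a small configuration change; this fragment is a sketch of the usual shape, and note that ProGuard/R8 renaming is an obfuscation speed bump, not real binary protection:

```groovy
android {
    buildTypes {
        release {
            // Enables code shrinking and identifier renaming (R8/ProGuard)
            minifyEnabled true
            proguardFiles getDefaultProguardFile('proguard-android-optimize.txt'),
                          'proguard-rules.pro'
        }
    }
}
```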
Cryptography, as originally designed, assumed two people who are actually interested in having a secure conversation, and that's not very conducive to the way cryptography is deployed now. You have crypto keys, and they need to be embedded in the application at some point; even if you're creating them dynamically, they're going to be there in the application, and an attacker can stop execution and look for them. Those are open questions. There are solutions available off the shelf, but that's a bigger conversation; it's not really up to the development teams to solve all of this on their own, it's a larger discussion. All right. So the threats are growing. Are these apps running on compromised devices? Are they being analyzed to understand how they operate? Are they being reverse engineered to subvert the code? There are lots of questions. One thing is clear: mobile and web apps are a potential source of vulnerability as attacks shift. Attacks used to target the corporate network, but they are shifting to the soft underbelly. And what's the soft underbelly? The apps: you can just download them, so it's easier to compromise them. And we've seen some of the major breaches; security is proving to be very expensive for a lot of organizations. You know, you have
headlines around Marriott and British Airways. So what's important to note here is that these new wave of attacks is targeting areas of infrastructure which are most vulnerable, right? And there's a new thing that's happening, actually,
just in the past few weeks. There's a company called Fiserv; they produce SDKs and provide them to other financial institutions in the US. Most of their customers are smaller companies who can't afford their own teams, so they bought the solution, and it's not a very secure solution, so they have sued Fiserv. If you are in the business of providing software for other companies, security is becoming, shall we say, more critical, because it's going to hit your bottom line a lot more than it would have before. There are three important drivers of development that don't always work in harmony: you've got speed to market and you've got the customer journey. What does that mean? It means your organization wants to get the apps released as quickly as possible, and the customer experience should be great. But when you add security, it adds a performance overhead, and that's something you have to think about. Should everybody out there start adding security? Probably not. The best way to determine whether you need security is to reverse engineer your own app. Seriously, know your apps: reverse engineer them and figure out what's exposed, because that will help you decide how much security is needed, or whether you need security at all. Many apps don't need security at all, to be honest, but many of them do. That's one of the important findings. Then avoid things like hard-coded keys. On the toolchain side, on Android you ship dynamic libraries as .so files, and you'll often find debug versions of those .so files sitting alongside the release versions in the APK. There's no reason for that; it's just a mistake, but it's also a matter of awareness, which is what we're trying to raise here. So what can we do? Protect where attacks occur: secure the application code. Securing application code is the only way to really prevent many of these attacks if you have exposure. Again, you have to reverse engineer your app first
to determine if this is something that makes sense for you. Because remember, on the left-hand side here you've got the enterprise, which is within your company, but your apps are sitting outside it. So really the best approach would be for the apps to have a way of sending security events back into your corporate network, just like a compromised Windows machine at work generates security events that get picked up by the intrusion detection systems your company has already purchased. And that is possible today.
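A minimal sketch of what such an app-to-enterprise feedback event could look like; the field names and event types here are invented for illustration, since real products define their own schemas:

```python
import json
import time

def make_security_event(app_id: str, event_type: str, detail: str) -> str:
    """Build a JSON security event an app could send back to the corporate
    collector, where the SIEM/IDS already picks events up. The schema here
    is hypothetical; real app-shielding products define their own."""
    event = {
        "app_id": app_id,
        "type": event_type,   # e.g. debugger attached, repackaging detected
        "detail": detail,
        "ts": int(time.time()),
    }
    return json.dumps(event)

payload = make_security_event("com.example.bank", "debugger_detected",
                              "ptrace attach observed")
print(payload)
```

The design choice is simply to reuse the monitoring pipeline the company already runs, so app compromise shows up next to every other security event instead of being invisible until an attack is published.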
So if you determine that the exposed information is beyond your risk appetite, by all means do your research and harden your application. The best hardening is not just passive: there should be visibility, alerting, and response capabilities, and those are available out there. Typically, how do you know your app has been compromised? Only when it's too late, when the attack has been published. So you want a solution that lets you see what's coming and gives you that intelligence, one that gives the application the ability to respond. Think in terms of treating your app data as another feed into your security systems. The one takeaway you should keep in mind is to assess your exposure. Do not make any assumptions about your apps; reverse engineer them and see what's visible. It's not that difficult: honestly, if you just google "reverse engineering Android," you'll find all the information you need. Then make a decision about whether other measures like app shielding are required. Thank you, everybody, for listening. Are there any questions?
[Audience question] What is the current status of security in the Android operating system? The application is important, but with Android not being patched as we would like, what can we do to improve the security of the application, but also on the operating system side, on Android? Got it. So there are two questions. What can we do about Android as an OS? That's a larger question. Hardware security modules are becoming more and more popular, and that's already being implemented: if you look at the newer models of Android phones as they come out, they usually have some kind of TPM module in there. So that's the right thing; it's only on the older devices, which didn't have those, that you needed to do something extra. In terms of the Android OS, I think they're doing a lot of right things. The problem with Android is that there are too many manufacturers, so everybody wants to do things differently: some might implement certain things that others might not. It's a diverse market, responding to different regions, the Korean market, the Indian market, Asian versus American, with high-end devices and low-end devices, so that's hard to answer, but there's no obvious exposure there. Now, the second question is how we can secure the apps. Well, if you reverse engineer your app, you'll see symbols in it, and those symbols are basically a roadmap for an attacker to figure out what your app is doing. The best thing we can do is get rid of that information. There are tools that
Android ships with that will do renaming at a basic level, so at the very least, enable those, because you want to remove as much information from the binary as possible. If renaming is applied, it causes no overhead for your developers and gives you that security. Typically, the term to search for is "app shielding." Okay. [Audience question] Gracias. You talked about static analysis; what about the dynamic side? Yeah, that's certainly an approach, but it's going to have limited viability, shall we say. Think about it this way: hacking is a rare skill, but people who use hacked applications are pretty common, so it will always be a race on the hacking side. If you set up a honeypot, it still gives you information on who the hacker is: you might be able to find out the geographical location, maybe an IP address, so that will certainly help. But at the same time, the best thing you can do is avoid embedding any hard-coded information in the application as much as possible, and if you have to, take some measures around it. All right.
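To act on the advice about hard-coded information, one practical step is scanning your own release binaries for likely secrets before shipping. A stdlib-only sketch (the patterns here are illustrative; dedicated secret scanners go much further):

```python
import re

# Illustrative patterns for material that should never ship inside an app
# binary; tune these to what your own apps actually embed.
PATTERNS = {
    "private_key": re.compile(rb"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
    "api_key": re.compile(rb"api[_-]?key\s*[:=]", re.IGNORECASE),
}

def scan_blob(blob: bytes) -> list:
    """Return the names of the secret patterns found in a binary blob."""
    return [name for name, pattern in PATTERNS.items() if pattern.search(blob)]

# A fake binary fragment with two embedded secrets, for demonstration.
sample = b"\x00\x01config: api_key = deadbeef\x00-----BEGIN RSA PRIVATE KEY-----"
print(scan_blob(sample))  # ['private_key', 'api_key']
```

Running something like this in CI over every release artifact is a cheap way to catch the hard-coded keys the speakers warn about before an attacker does.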
Thank you. If you have more questions, we will be around, so you can look for us and talk a little longer. Thank you for attending, and have a good day.
The next talk will be given remotely; our speaker couldn't come, as he is in the Czech Republic right now. He will give his talk through a YouTube channel, and I will bring a tablet around. For questions, raise your hand and we will come to you, or you can write your questions in the live chat as they come up. The speaker's name is Daniel. Do you have any questions? Okay. [Due to a streaming mix-up, a cooking video about preparing chocoflan plays briefly before the talk starts.]
Test, test, test. All right, great. All right, so let's go then. Okay, my name is Daniel, and today I'm going to present our project, Atomic Threat Coverage, which is a technical implementation and operationalization of the threat-centric methodology using the MITRE ATT&CK framework. I will also tell you the reasons behind the decision to develop this project, and some expected and unexpected benefits we got out of the development. First of all, I work on threat detection at Tieto SOC, and we developed this project together with our friends: Mateo Švitra, who is a SOC incident responder at Tieto SOC Poland; Mikhail Aksyonov, who is working in Russia as a SOC automation team lead; and Jakub Weinzettel, who is working as a threat detection specialist at Tieto SOC in Poland. Between us we were doing networking, programming, threat detection, and incident response, and we all have quite different backgrounds, which helped us develop this project the way we did; you will see the advantages in a couple of minutes. Let me first provide some historical background. In 2014, when I had just started working in computer security, having moved there from network security, and began working with SIEM systems specifically, I had only one question: where do all these people get their correlation rules? And if you
remember that time, 2014 or 2015, you can tell that everybody, vendors and security experts alike, could provide information about brute force. If you googled something related to threat detection, you would basically only find information about that: each and every vendor was offering detection of brute force, and that was all. You could go to webinars and courses, and you would not get further than brute force, or maybe two more techniques or so. At the same time, I started working with offensive tooling, playing with tools like Mimikatz, and later preparing for some offensive security trainings, and it was quite strange for me: I was basically a script kiddie, yet I was able to bypass all the antiviruses, IPSs, and other security systems just by downloading some framework like Veil-Evasion, or even some Metasploit modules. I could use Mimikatz to dump credentials from memory, and nobody could stop me; nobody was even trying to detect such activities. And it was really weird for me: how come a newcomer to offensive security could basically do everything, dump credentials, move laterally, bypass antivirus and other security solutions, and remain totally undetected? At the same time, there is a huge, multi-billion-dollar market of these kinds of solutions, and basically nobody provides you with information on how exactly to detect these adversaries or how to
develop detection rules. For me it was quite weird. I was trying to find an answer, and I started digging into why nobody was studying the threat itself, doing attack research in order to do better security and develop specific detection rules using SIEM systems. Then I found the intelligence-driven defense methodology, which was developed by the Lockheed Martin Corporation in 2011 or 2012, and for me it was a light at the end of the tunnel, because these guys were explaining the same thing. Simply put, it's a military approach, and the main idea behind it is: know your enemy. It's quite obvious, but believe me, if you remember that time, nobody was following this approach in security, or maybe only some small groups of largely unknown people or companies; it wasn't widespread like it is right now. So basically, you're supposed to learn the threats, understand what, why, and how exactly they're operating, and build your defenses according to that knowledge. I started digging into what exactly they were proposing and what methods they describe in their methodology; it's quite a good document, and I really recommend going through it. Later, in 2015, the MITRE Corporation released the MITRE ATT&CK framework, which represents the next step in the threat-centric methodology, though they call it differently: Lockheed Martin calls it intelligence-driven defense, and MITRE calls it threat-based defense, or threat-centric in general. I prefer "threat-centric" because it's vendor-agnostic; nobody can sue you for using a trademark. And MITRE ATT&CK is a step forward: it is a threat model. If we're talking about a threat-centric methodology where you're supposed to know your enemy, MITRE ATT&CK provides a threat model with descriptions of the exact steps adversaries take during preparation, exploitation, and post-exploitation, during the intrusion into your environment. That's how it looks from a high-level point of view. I believe most of you are familiar with the concept of the cyber kill chain, which was also originally
released by the Lockheed Martin Corporation in 2011 and rebranded by the MITRE Corporation: they now call it the cyber attack lifecycle, and they split it into parts. The first part is pre-attack; the second part is the attack itself, mostly the exploitation and post-exploitation activities. Today we will focus on the exploitation and post-exploitation stages, which is Enterprise ATT&CK. For now it includes 12 tactics, and each tactic represents a specific tactical goal of an adversary during the intrusion. That's exactly how it looks: here you can see 12 columns, which represent tactics, and more than 200 cells, which represent techniques, the exact ways to achieve a tactical goal. This is very important to understand if you're not familiar with it: a tactic is a tactical goal, for example, lateral movement.
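The tactic/technique relationship can be written down as plain data; the IDs below follow ATT&CK's numbering scheme but should be checked against the current ATT&CK site, since technique IDs have been reorganized over time:

```python
# A minimal data model of ATT&CK's tactic/technique relationship.
# IDs are illustrative; consult the ATT&CK site for current numbering.
TACTICS = {"TA0008": "Lateral Movement"}

TECHNIQUES = {
    "T1075": {"name": "Pass the Hash", "tactics": ["TA0008"]},
}

def tactical_goals(technique_id: str) -> list:
    """Resolve a technique (the 'how') to its tactics (the 'why')."""
    return [TACTICS[t] for t in TECHNIQUES[technique_id]["tactics"]]

print(tactical_goals("T1075"))  # ['Lateral Movement']
```

One technique can map to several tactics, which is why the matrix shows the same cell under more than one column.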
And there is a way to do lateral movement, like pass the hash. So pass the hash is a technique, the way to achieve the tactical goal, and lateral movement is a tactic. Quite simple. So what about the techniques? Each technique is a wiki-style page with the following information. There is a high-level description of what it's about, what kinds of protocols it uses, and other specific details about the technique. There are examples of use in the wild, according to reports or reverse-engineered malware; the MITRE Corporation requires examples of use, so if you try to contribute anything to MITRE ATT&CK, they will ask you to provide examples of use in the wild, or they will just not add the technique. So one of the advantages of this framework is that everything has proof in the wild: there are no theoretically possible things, only things adversaries use all the time or have used before. Another section is mitigation, a high-level description of how we can mitigate this specific technique, or, as we call it, threat. There is a detection section, which is a high-level description of how we can detect this threat, and there are references to original sources. So, basically: in 2014 I found the threat-centric methodology and started digging into it; in 2015 they released the MITRE ATT&CK framework; in 2016, with my team at my previous position in Russia, we started operationalizing this framework, implementing it in our work. The problem is that the descriptions in MITRE ATT&CK are not detailed enough. You cannot just pick up the mitigation section, hand it to your Windows (or whatever) administrators, and say, "Hey guys, go implement this advice and everything will be fine." And in most cases you cannot pick up the detection section from the MITRE ATT&CK framework and develop detection rules out of it. When we're talking about implementation at a corporate level, where you have to communicate with different departments, you need to be as specific as possible; if you aren't specific, you basically will not achieve anything, or you will have to
learn it all ourselves. That's what we did: we had to learn and detail the material to be able to explain it to other people. The second problem follows from this absence of detail on specific techniques: since we don't have specific, detailed descriptions, we cannot explain our requirements to leadership or to colleagues. A good example of this problem: you come to your SIEM team or data engineering team and say, "Hey guys, how about collecting Sysmon?" And they start asking questions: "All right, why do you need Sysmon? Why don't you like the Windows Event Log, which is a native feature? How many more resources do you need? What kind of configuration do you require? And, most importantly, what advantage does it provide? What detection rules will you be able to implement using Sysmon?" So you have to start quite a big research effort on the topic to provide all the arguments, the business justification, or just an explanation to your colleagues of why we need to do something, and that also requires time. This problem is likewise connected to the absence of detail on techniques: how to work with them, how to detect them, how to mitigate them, and so on. The third problem is that, since we were using MITRE ATT&CK for ourselves and for our customers, leadership was asking us for reporting on a monthly basis. Each time, a couple of days before the report was due, you would collect a huge amount of data and try to somehow present it to managers in a simple way, so they could understand what exactly happened, how the situation had improved since the previous months, and what we were now covering. It's a very important question, and I believe that was the first reason we decided to automate all of this. But let me explain everything gradually. All right: 2016. We started developing our first solution for this. It's called the Use
Case Framework. You might have heard about it in 2017 or 2018 at a few conferences we presented at. If you haven't, don't worry: today we will present a new framework with a focus on automation, so you haven't missed anything, but the historical background may be interesting. So what was it? Basically, it was a structure in Confluence, a list of entities: alerts, data needed, hardening policies, mitigation systems, playbooks, use cases, and so on. We treated each MITRE ATT&CK technique as a separate use case and mapped these different entities to that use case. Here is an example of our unit tests, which we now call triggers. A trigger is the thing you use to test a specific mitigation system or security control, or to test a detection rule. For example, there is a defense evasion technique for wiping or removing sensitive data, called Indicator Removal on Host: an adversary can remove logs or disable the logging policy, and we somehow need to test our coverage. We need to know whether our detection works and whether our hardening policy is working properly. We were creating these entities manually, and to run a test we had to copy and paste the commands from here and execute them on the victim machine. Another example is the detection rule. Here you can see that we were using our own detection
logic for detection rules, and we were manually translating it to a Kibana query, which also wasn't the best way: just imagine, if you want to change something in the detection logic, like adding a new parameter or a new event ID, you have to change it in the high-level detection logic, in the query, and everywhere else it appears. So that was a description of our entities; here you can see the use case entity, data needed, logging policies, playbooks, everything. The problem here is that everything was connected in a very bad way: links everywhere, from everywhere to everywhere. When we wanted to introduce a new entity, like data needed, we had to restructure and manually rebuild our knowledge base, and this was just horrible. It was okay when there were only dozens of entities; here you can see an example of what happens: it's so hard to maintain that when we deprecate something, there are still links to deprecated entities inside normal entities, which is not okay. So it was manageable with dozens of entities, but at the end of last year there were 617, and if you think that's not a big number, how about this? All right. So, at the end of last year, we were inspired by two projects, Elastic Common Schema and Sigma, and we were contributing to both projects. First, Elastic Common Schema: we were really impressed by the way they generate their documentation. You only have to change or add something in one configuration file and execute a make command, and it rebuilds everything for you, all the documentation, the readme, CSV files, the index pattern, everything, and you don't have to do anything else. The other project is Sigma. We used it at the very beginning, in 2017, but then decided to use our own high-level detection logic description for our detection rules. At the end of last year, however, they started mapping their rules to the MITRE ATT&CK framework, and that was the reason we decided to try them again. And this is
how everything looks right now: many automated workflows, many integrations, and lots of potential for the future. But let me tell you everything step by step. First of all, when we started thinking about a new tool for managing our knowledge, we set down some requirements; basically, there are two. The first is to not work in a web-based interface like Confluence: we want to work with plain text files in an IDE, with automation scripts, and in a shell like Bash, so we don't have to wait for a page to open, click edit, wait for the edits to be saved, or hope the page doesn't hang and crash. We just want to work in plain text files with our favorite IDE, use Bash and Python for automation, and that's all; everything gets pushed to web-based engines and other systems automatically. That was the idea. At the same time, we don't want to do anything manually; we want automation. That's tightly connected to the task of providing reporting on progress and coverage to our leadership, and of course to the Confluence export, because you can't hand a plain text file to your manager, a colleague, or another department: you need a visualized version for people who don't really understand what's going on there. So we still have to use Confluence, but we want to populate it automatically with scripts and tooling. We were focusing on automation and working with plain text files; that's the main idea. First of all, we decided to replace our manually created unit tests, or triggers, with another project, and that project is Atomic Red Team. This project allows you to test your security controls or detection rules by executing scripts on your system which could be used by an adversary, by administrators, or by anybody else. These scripts are reused in real campaigns, and at the same time, in most cases, they're not supposed to do any harm. That's how it looks: it's just a YAML file with a description of what exactly
is supposed to be done. Here you see a description of the platform, the command that is supposed to be executed, and a description. The most important thing here is the mapping to the MITRE ATT&CK framework via the technique ID. Another project, which I already mentioned today, is Sigma. First of all, it is a signature format which allows you to translate your generic detection logic to multiple engines, or backends, as they call them. The idea is this: you develop rules describing some detection logic, then you convert them to Elasticsearch, Kibana, QRadar, Splunk, Graylog, and many other systems automatically. You don't have to do it manually; or rather, we just wanted to stop doing it manually, we tried to do it with Sigma, and we were basically happy with it. That's how it looks: again there is a description and a title, and the most important thing for us is the mapping. Here, in the tags section, you can see the mapping to an ATT&CK tactic and an ATT&CK technique via its ID. Then we had to provide more details about specific detection rules, and the first thing the SIEM team or data engineering team asked us was what kind of data we were supposed to collect. That's why we created our own entity, which we called data needed. Basically, it's a description of the data you are supposed to collect to have a given detection rule implemented in place. Because if you
just implement the detection logic, the correlation rule, in the SIEM but do not collect the data, obviously you'll have problems. So here we have a raw log sample for the SIEM team. Let me explain from the very beginning: this is not the final view people see; it's the way we develop it, a YAML file in an IDE, and I will show you how it looks in Confluence. Here we have a raw log sample, fields for further analysis, a detailed description of providers, channels, the category of the log source, references, and, most importantly, the logging policy. The thing is, even if you configure data collection, it doesn't mean the data will actually be generated or logged. In most cases you have to configure something to have this data in place, like an audit policy on Windows, installing auditd on Linux, or configuring log levels on network devices or anywhere else. It's not always that simple. If you want to provide full support for the SIEM team, you're supposed to provide the detection rule, the description of the data needed, and the exact configuration they need to implement to have this data in place. That's what we call logging policies. Here you can see the exact steps to implement it, references, a description, and some other fields we use for analytics. Another entity, which we implemented internally,
is the customer. It helps us provide reports for leadership: we have a few customers, including ourselves as an internal customer, but it could be anything. You can create a customer entity for a separate site or branch, an office in a foreign country, or even a specific web server; that's not a problem at all. We use it specifically for reporting and analytics: we can list the data we collect and the detection rules we have implemented, and develop reports for management automatically. I will show you that in a few minutes. Another entity is enrichment. It's very important to understand that some detection rules, even when all the logging policies and data needed are in place and everything is okay, require some sort of enrichment. In this specific example, we describe how to configure enrichment for a Sysmon event ID: we can enrich it with the parent integrity level, or with the image of the parent's parent process, which later allows us to detect privilege escalation using third-party exploits and the token stealing technique. This is already in Sigma: you can find the specific rule this enrichment is used for in the Sigma repository, and you can also find this article in our demo Confluence. But I will show you everything step by step. Another entity is the response action. This is an atomic step which could be performed during an incident response procedure or used in many other incident response activities. A more high-level entity is the response playbook. We construct playbooks from response actions: as you can see, there are lists of response actions mapped to the identification, containment, eradication, and lessons learned stages of the six-step incident response lifecycle. You just map response actions to a playbook, and you can generate scenarios for incident response cases, plans for responding to specific threats, with mapping to MITRE ATT&CK. Another entity is visualization; we just started working on it. It's a sort of Sigma, but for visualizations: you describe a visualization in our language, and then you should be able to translate it to systems like Splunk or Kibana or
maybe something else. We have implemented translation to Kibana dashboards, and we are now targeting Splunk visualizations; that will probably be ready by the end of this year. And again, the same logic: you map atomic visualizations to a dashboard entity, and you can translate the dashboard to Kibana or, in the future, to Splunk visualizations. How does it work under the hood? Again, we store all the data in YAML files; using Python, we parse these YAML files and create new analytics and entities, which we can export to TheHive and to Elasticsearch for Kibana. We create ATT&CK Navigator profiles, we create CSV files for simple analysis, and using Jinja templates we export to Confluence and to Markdown. But let's start with
the main thing: how to execute our tool. It's very important to understand that it's a very simple process, but it's still worth showing. Basically, everything starts with executing the make command. As you can see, something is happening; it asks for a password, and that's all. You don't have to do anything else: you just run make, and it will do everything for you. It will populate Confluence, populate the Markdown knowledge base, and create all the profiles, search indexes, dashboards, and response playbooks, everything. Of course, you need to configure all the exports first, but after that you don't do anything else, which is just great. One of the main results of this work is the exported Confluence knowledge base. Please keep in mind that everything here was created automatically. You can see the structure: detection rules, logging policies, response actions, playbooks, enrichments, everything I just showed you, in its exported form. Now I will show you how it looks originally: here you see a Sigma rule, and this article was created out of that Sigma rule. The mappings were created automatically: clickable mappings to ATT&CK tactics and techniques, automatically calculated data needed entities, and triggers from the Atomic Red Team project mapped here. And
there are automatically generated queries for Kibana, X-Pack Watcher, Graylog, and other systems. You can configure the export to whichever systems you want; we just do these for demo purposes. If you click on a data needed entity, you will see how it looks for other teams and for leadership: again, a title, a description of what it is, automatically mapped logging policies, references, and a raw log sample for the SIEM team; I mentioned it before when showing you the raw YAML file. If we click on a trigger, we will see the original article provided by the Atomic Red Team project; they generate a Markdown file with this description. So this is actionable documentation, which means that while it is originally a text file that can be exported into something you can show to somebody, it is at the same time used to run actual adversary simulation tests. These commands can be executed by execution frameworks, so you don't have to copy and paste them from here; you can configure an execution framework from the Atomic Red Team project and execute them automatically, which is great automation and saves a lot of time. And that's how a response playbook looks: we just mapped our response actions to specific steps of the incident response lifecycle, they were all automatically exported to this article, and you can hand it to SOC analysts or incident responders during
instant response case management or anything. But the real power of our response playbooks is that they could be automatically converted to the Hive platform. Just for a note, the Hive is the open source instant response platform. I believe the best one if talking about free open source software. It allows you to automate many things but main idea behind it is to help you to manage the process of instant response during some instant response case. We generate out of our response playbooks the hive case templates. What does that mean? We can import our templates which we automatically generated. That's what it looks like. And here you see that everything from there we populated here like tags, TOPs, severity, some description and tasks.
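To make the conversion concrete, here is a minimal sketch of turning a response playbook into a TheHive-style case template. The playbook structure below is a simplified stand-in for the project's real YAML schema, and the output fields (`name`, `severity`, `tlp`, `tasks`) only approximate TheHive's case-template JSON, so treat this as an illustration of the idea rather than the actual exporter:

```python
import json

# A simplified response playbook, as it might look after parsing a YAML file.
# This structure is illustrative, not the project's real schema.
playbook = {
    "title": "Phishing email response",
    "severity": 2,
    "response_actions": [
        {"stage": "identification", "title": "Collect the original email"},
        {"stage": "containment", "title": "Block sender on the email gateway"},
        {"stage": "eradication", "title": "Delete the email from all mailboxes"},
    ],
}

def to_case_template(pb):
    """Convert a playbook into a TheHive-like case template:
    every response action becomes a separate task that can later
    be assigned to an individual analyst or incident responder."""
    return {
        "name": pb["title"],
        "severity": pb["severity"],
        "tlp": 2,  # amber by default; adjust per playbook if needed
        "tasks": [
            {"title": f'{a["stage"]}: {a["title"]}', "order": i}
            for i, a in enumerate(pb["response_actions"])
        ],
    }

template = to_case_template(playbook)
print(json.dumps(template, indent=2))
```

Importing such a template into the incident response platform is what turns the documentation into per-analyst task assignments.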
The main thing here is the list of tasks. What does that mean? You can create a new case using this template of ours, and that's our test case creation scenario. If you open the tasks, you will see that the response actions were mapped here as separate tasks. And you can assign the separate tasks to your team members, one task to a SOC analyst, everything else to an incident responder, and manage your incident response procedure this way. As you can see, the descriptions of the tasks were copied from the response actions themselves. So basically, in our case, response actions and response playbooks are also a sort of actionable documentation. We can export them to Confluence to show them to managers, and we can export them to an incident response platform where they can actually be used immediately on the job, during the incident response. So that's the thing.

The next thing is our Kibana export, the dashboard and the index we created. Basically, everything started from CSV files. We were using CSV files for simple analysis, but then we realized: why can't we export everything to Elasticsearch, for example, and do the analysis using Kibana? We tried to do that, we did it, and this is what happened. As you can see, this dashboard was created today using all publicly available Sigma rules for Windows. It automatically calculated the data needed entities and mapped some customer entities just for testing purposes. And you can build the same dashboard using your own analytics, if you have your own Sigma rules or anything else, using the Atomic Threat Coverage project. So, the first chart is the distribution of detection rules per MITRE ATT&CK tactic.
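The aggregation behind such a chart can be sketched in a few lines: counting detection rules per ATT&CK tactic from their Sigma-style tags. The sample rules below are invented, and treating every `attack.*` tag without a technique ID as a tactic is a simplification of how the real index export works:

```python
from collections import Counter

# A few detection rules with Sigma-style ATT&CK tags (invented samples).
rules = [
    {"title": "Suspicious PowerShell download",
     "tags": ["attack.execution", "attack.t1059"]},
    {"title": "LSASS memory access",
     "tags": ["attack.credential_access", "attack.t1003"]},
    {"title": "Scheduled task creation",
     "tags": ["attack.persistence", "attack.execution"]},
]

# Count how many rules cover each tactic. Tags like "attack.t1059" are
# technique IDs, so they are filtered out; the rest are tactic names.
per_tactic = Counter(
    tag.split(".", 1)[1]
    for rule in rules
    for tag in rule["tags"]
    if not tag.startswith("attack.t")
)
print(per_tactic.most_common())
```

The same counts, fed into Elasticsearch, are what Kibana renders as the per-tactic bar chart.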
On this diagram you can see the distribution of implemented detection rules per customer. Again, this is just for reporting, a demonstration thing. We are highlighting that we can track implementation per customer, and there are a few customers here. For both of them almost nothing has been implemented, but the idea is just to show the ability of the system to track these things. And here you see the original data, which we calculated using the publicly available Sigma rules. You can see that half of the rules depend on Sysmon, the Windows Sysmon log provider, and the other half on Windows Security Auditing, and there is a small number of rules using PowerShell or other sources.

On this diagram you can see the distribution of detection rule severity levels per data needed. This way we can show that Sysmon data provides us not just with more detection rules, it provides us with more critical detection rules, rules with critical severity, which is also very useful when you are explaining to management or to other teams why you need some specific data. And keep in mind that you can build basically any kind of dashboard using our index, so it doesn't have to be about Sysmon. You can try to highlight any other points you like and visualize basically anything you want related to the data.

Here you see the distribution of detection rules by development status, and again, this is the original data from the Sigma project. As you can see, the majority, almost 80% of the rules, are in the experimental stage, which means that you cannot just go and implement them in your environment; you need to do some tests first. And only 2.5% are stable and ready to be implemented, which means, again, you need to do some testing before working with Sigma. There are also some top statistics: Florian Roth developed most of the rules, and most of the rules, more than 200, depend on Sysmon process creation events and on native Windows process creation with command line, event 4688, which also tells you something.

Last but not least, one of the main problems for us was the reports for management, and we automated this procedure by creating ATT&CK Navigator profiles automatically. Here you can see that our project generates ATT&CK Navigator profiles: there are profiles per customer, and there is a general profile which you can use to highlight all your organization's capabilities from the detection point of view. If you download it, here you go, you see the current coverage by the open source Sigma project. And if we want to provide some updates for our management about current coverage, we don't do it manually anymore. We just execute our scripts, automatically update the ATT&CK Navigator profile, and our management always has up-to-date data, so we don't work on any reports anymore. They have everything updated all the time.

All right, let's return to the presentation. I had some links here, but let's just skip them for now. As I already mentioned, we are not limited to Confluence. At the same time we export to Markdown, and you can visualize the Markdown articles in Markdown engines like GitLab or GitHub. So if you have a local GitHub or GitLab instance, you can just store our project there and visualize your own analytics there instead of using Confluence. You don't have to use Confluence. And as I already mentioned, we started from CSV files and they are still there. If you don't like the Kibana visualizations, or you just want something simpler, you can use the analytics CSV files. They can provide you with the same abilities: you can use Bash, sort and other command-line tools, or just Excel if you prefer.

Another thing which I haven't mentioned is the pivoting CSV. It's a list of data which allows you to find a specific data source by a specific data type. For example, you have a hash, right? And you want to know what kind of data source can provide you with that hash. So you look up the hash here and see which log sources can provide you with this data. You can see here that Sysmon events 1, 6, 7 and 15 can provide you with some hash values. It's quite handy for the initial step of incident response, or even for further steps. And again, the dashboards, which I've already mentioned, that's exactly what they look like. We used the Mordor project for our demonstration dashboard. Basically, Mordor is a set of logs collected from systems under specific attacks, developed by Roberto Rodriguez. And what we did was just put their data into our stack and automatically generate a hunting dashboard out of our analytics and dashboard entities. That's how it looks.

All right, let me give you some conclusions. First of all, what we achieved by developing this project. We now have enough detail on top of MITRE ATT&CK to make it real, to actually operationalize it in our environment. We can create all kinds of real-world controls and detection rules, and configure log collection, using the information we developed to add detail to MITRE ATT&CK. It sounds quite strange, but let me try to make it simple: MITRE ATT&CK did not provide us with enough detail, and with our project and everything I just showed you we were trying to be detailed enough, and I believe we achieved that. Of course we need some more analytics, and we are working on them, but it already allowed us to get past most of the problems caused by the absence of detail in the MITRE ATT&CK framework.

Since we are working with plain text YAML files, we dramatically decreased the amount of work and time needed to manage the analytics. For now we do bulk changes using Python and Bash. We never work with Confluence or the Markdown files directly. If we want to change something, we just do it in the plain text YAML files and regenerate the target system; we can separately regenerate or update the Markdown or update Confluence. We basically don't do any manual maintenance in the final systems. Everything, as I said, is exported automatically, all the mappings are created automatically, everything is uploaded and updated automatically. Just a make command, and that's all.

And there is an unexpected benefit, which is visualization for comprehensive analysis in Kibana. Our SIEM team now has a special dashboard which we developed for them, and it highlights which detection rules can already be implemented and which detection rules cannot be implemented yet due to their development status, like experimental. When we test our rules in the threat detection lab, we change the status to stable, or ready to be tested, or production. So they see that, all right, yesterday we developed or tested some rules, and immediately after the update they see that they can implement them in production. So we don't manage this procedure anymore: they just go to the dashboard, see the updated information about the status of the detection rules, and go and implement them. At the same time we have up-to-date information about implementation in customer environments.

The same goes for the reporting dashboard for managers. Now they always see updated info about our progress: how many detection rules we have developed, how the statuses have been changing, how many rules were implemented in customer environments, and things like that. We haven't published these dashboards, but the script for index generation is the same, so you can use it to develop your own dashboard for your own needs, or you can even modify the Elasticsearch index export to get more details or more insight on something else. By the way, it's open source.

For now we automatically represent the coverage of a threat only from the detection point of view, but we are following the approach of covering one specific threat from many points of view: detection, simulation, mitigation and response. For now we can highlight our coverage only from the detection point of view, and we will do it from the other points of view in later development cycles.

Next, a few things about our future activities. We will split our project into modules, because it looks very massive, and we will have separate modules for response actions, visualizations, everything related to detection rules and data needed, and we will develop them separately. We will try to involve more community support for that. At the same time, we have started working together with Roberto Rodriguez, Cyb3rWard0g, on his OSSEM project, and we will merge our analytics in the next couple of months. We were basically doing the same kind of work, creating dictionaries for log sources, mainly for Windows, and a couple of months ago in Brussels we agreed that there is no point in doing the same job in parallel; it's better to focus on one project which will be efficient for everybody, for the community as well. We are also working on a definition of coverage and completeness for a technique. There is a lot of ongoing work with the MITRE community and the threat hunting communities. For now, we just presented our ideas at the MITRE ATT&CK workshop in Brussels a couple of months ago, and probably we will initiate new discussions this summer, and we'll see how it goes. The next step is to develop an API and a web application for the project, because some people have been asking for that, and we believe it could be handy for somebody who doesn't really want to work with text files, but we'll see. Anyway, all updates will be available on Twitter. If you have any questions, please ask them in the chat and I will try to answer them, okay? I don't even know when it actually ended, I didn't see the timing.

Well, we're going to conclude this conference. Unfortunately, we don't have time for questions at this moment. If anyone has any questions, come with us and we'll share Daniel's contact so you can ask them. Good luck.
Hello everyone, good afternoon and welcome. Thank you very much for being here. Roberto Martinez is with us, and he is going to explain Investigating BIOS/UEFI Malware Implants. And since it is the first time he is here with us at B-Sides, let's see, is there a drink? He gets a shot of tequila. I wasn't prepared. Thank you. Now my throat is closed, I don't know why. The tradition is not lost, right? Good. First of all, thank you very much for being here with us, and thanks to the B-Sides organization. It really is an honor and a pleasure. I had the opportunity to present at other B-Sides events, in Brazil for example, but I really wanted to present here in Mexico. I think it's a very interesting event, especially because it's a community event, when we have the opportunity to get together and share, and to drink together as well, as you have already noticed, so it's going to be good and it's going to get better. Thank you very much for everything, and well, I'm going to start my talk. I apologize in advance, my throat is not great, so sometimes when I talk a lot I get a cough, and I'll have to stop to drink some water or tequila, whichever comes first. I think tequila would be better, right?

Well then, let's start the talk. I'll tell you a little about what it's about. Basically, the talk revolves around how to do an investigation when it is suspected that a machine could be compromised at the BIOS level. And as we will see at the beginning, this can be more common than you think, and it can be a very interesting attack vector. A little about me: I come from Kaspersky. My role at Kaspersky is senior security analyst, and I belong to the GReAT group; I will tell you later what we do in GReAT. Within my work profile, I have been working for some time on the forensics side, and obviously on the threat research side, including some malware analysis. And this is a topic that I found very interesting to share with you. A little about my experience in recent years: mainly, I really like to teach. I have been working for a long time on high-end courses, specifically in security.

And well, I'm going to start with this because it's very important. For a long time, even for the company I work for, a lot of people have considered that their security problem has to do with hackers or with malware, right? And usually, all security strategies are focused on how to protect yourself from hackers or from malware. But the reality is that malware, or viruses, stopped being what they were a long time ago. There has been a very important evolution, and those of us who are from the old guard will remember what viruses were like before; they are very different from what they are today. Today they have really become a means to an end.

And I wanted to start here because many times, when we simply limit ourselves to trying to protect an organization from malware, our visibility of all the possible attack vectors is very limited. We focus on our anti-malware or antivirus detecting any attempt at attack or compromise. And the new challenges have to do with understanding malware and hackers. And I want to clarify a point here, because sometimes people get offended when you say "hacker" with a negative connotation. I think we all know what being a hacker means; whether someone is good or bad has to do with the activities each one does, right? That is, we don't say he's a good lawyer or a bad lawyer, or he's a good engineer or a bad engineer; he is simply an engineer. So I make that clarification because there is this conflict. At this point, we have to understand that when we try to defend an organization, or try to identify any possible attack vector, malware has actually become a means, not an end. I mean, nobody develops malware just for the sake of it, as happened in the times of hacktivism, to look good or to send a message. Nowadays, when someone develops malware, it's because they have a motivation.
In many cases, this motivation can be related to money, obviously to an economic interest, or it can be related to other purposes, for example simply having access to an adversary's strategic information. So really, the game nowadays is called motivations, actors and objectives. That is, when someone picks an organization they want to attack, it's because there is a motive behind it, either financial or because they are looking for information. And then, among the mechanisms they will use, some can be related to malware. But, for example, I have worked a lot in incident response, and I can tell you that in many of the attacks I have had to investigate, the attackers don't even use malware. The attackers use tools, even tools from the system itself. So it really is not a malware problem. It's a problem of: why could I become a target? What are the adversaries looking for when attacking me? And what techniques could they use? It could be malware, or simply hands-on hacking techniques, with tools that may even come from the system itself.

That said, our focus has to be on who would want, or could want, to affect my organization. And there are models. For example, who has heard about the Cyber Kill Chain? Raise your hand. There are models today that explicitly define the different phases in which an attacker could pursue an objective. That is, we have to understand that hackers, or in this case the actor who may be behind a cyber attack, don't necessarily think in linear terms. And that's why, in many ways, the traditional pen test stopped being efficient. The traditional pen test that we all know, where a vulnerability scan is done and a vulnerability is sought and exploited, only tells me where I stand at the moment, that is, my level of exposure and my level of vulnerability. But in no way does it map how exposed I am to a real attack. Why? Because normally an attacker will always think in terms of strategy. If their goal is to get access to a database, then they will define a path, and, as you can see on the slide, many times the attacker will look for and use even tools of the system itself. You will not get this in a conventional pen test; it won't give you the time, and definitely not the reach. So today's pen tests have become only a baseline that tells you where you are starting from. After that, you have to focus more on understanding what those attack vectors can be.

And then we started to see attack techniques very different from the ones we knew. For example, normally when you think from the perspective of "I want to protect myself from attackers" or "I want to protect myself from malware", what you look for is a good tool that filters the spam, right? That stops it, or that tries to block all the possible compromise vectors. But what happens when the attacker looks for new ways to attack? Remember that in the end they go for a goal. If their goal is to reach a database, they won't ask you if you have PCI, they won't ask you if you have ISO 27000, they won't ask you if you have all the controls. "Ah, you know what? He has ISO 27000, let's not attack him because he has everything under control." That doesn't happen in real life. In real life, I'm going to look at what you have and how you have it, and based on that I'm going to look for new attack vectors, new compromise vectors.
And a very commonly used vector is precisely the supply chain. And that's why we have to start with something called threat modeling. What is threat modeling? It is understanding all the possible attack vectors to which you can be exposed. An example: supply chains. For example, CCleaner. Who knows CCleaner? Okay. Who among you knew that CCleaner was compromised? Who of you had... no, don't tell me. Who had the version when it was compromised? Don't lie, don't lie. The reality is this: normally we use tools in our day-to-day life that we trust, agreed? But what happens if the attacker realizes that by compromising the tool you trust, they can reach your organization?
The attack won't be detected, right? Why? Because in the end, you can't put all your applications on a blacklist. There have to be applications that you can trust. So the attackers said: well, let's attack supply chains. Let's attack those tools that companies or people trust, that they're going to install, that they're going to whitelist, so that even if their security tools make noise, what will happen? "False positive," right? "Hey, it's asking you to open a port. It didn't ask before. It must be a feature of the new update. Open it." Right? A new attack vector. What does that mean? That we hadn't considered it in our threat modeling, because we thought all attacks come by spam, through email, through a USB drive, or use other techniques or attack models.

The problem with this type of compromise is that, for example, they turned a tool that was free, that many companies downloaded, into a supply chain attack. And the problem was that once the attackers achieved a first compromise, they launched a second campaign where they focused only on certain machines that they were interested in, from certain specific companies. That is, it became a more surgical, more targeted attack.

Who would think of attacking, for example, video games? It's an interesting attack vector, right? Think about it a little. How much information do you keep on your video game console? Usually, not only video games; we have accounts for different services, right? Think about it. Who could be interested in your Netflix account, your Amazon Prime account, or whatever you have on your console? Another compromise vector. What did the attackers do? They directly attacked, and this is interesting, just like what happened with CCleaner, the development teams of the application. So what happens? The developers were compiling the software, in this case, already with the backdoor. Therefore, when you received the update of the game, the game already had a backdoor. Again, we fully trusted our application, and then our console was compromised.

There have been many cases like this. In Ukraine, for example, they compromised an accounting software package. Imagine that happening at that scale with the main accounting and administration software in Mexico. The attacker practically has access to the computers of the accountants, or of the people who use that software, and from there they can move inside the network. They usually look for RDP services to make remote connections, or even installations of TeamViewer; sometimes companies use TeamViewer. So the attackers use the tools that are already on the system to achieve further compromises.

Who heard about the ShadowHammer operation? Another example of a supply chain attack. Imagine that you buy your new computer and then you get a message that your computer's software is going to be updated. Now imagine that the attackers had previously compromised that update. Then, inadvertently, you are installing a backdoor. For example, when this was discovered in November, there were already a lot of compromised machines. How much reach can a firmware update have, or anything that can update software of these characteristics? Evidently the impact is great. Why? Because there are many native functionalities of the computer that can be compromised. Again, an attack vector that we had not considered in our threat modeling, and that allows the attacker to get access to many machines. And not only that: for example, when we did this ShadowHammer investigation in the company, we discovered that the attackers identified specific machines by their MAC addresses. So they knew who they were attacking, they knew what company they were from, they knew who was of interest to them and who wasn't. They could discriminate at any given moment and say, "You know what? I'm interested in this company, let's go after it." Again, it appeals to trust, in this case, in the applications. And not only that: they used valid certificates to sign the software. This is another point. Normally, we trust software that is digitally signed. So imagine the level of compromise. Again, supply chain attacks.

And why did I start by talking about supply chain attacks? Because precisely the issue of BIOS or UEFI compromises has to do, in a certain way, with the supply chain. How much have you heard about BIOS attacks? Not much? Did you know that there is already ransomware that is distributed through BIOS attacks?
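That MAC-address targeting can be illustrated with a small sketch. ShadowHammer is reported to have carried MD5 hashes of the MAC addresses it was hunting for and compared them against the local machine before activating; the hash list, the MAC string format and the surrounding code here are invented for illustration, not a reproduction of the actual trojan:

```python
import hashlib

# Hashes of the MAC addresses the attacker is hunting for. These values
# are invented for the example; the real trojan carried a few hundred.
target_mac_hashes = {
    hashlib.md5(b"00:11:22:33:44:55").hexdigest(),
}

def is_targeted(mac: str) -> bool:
    """Check whether a machine's MAC address is on the target list,
    the kind of comparison the backdoor made before activating."""
    return hashlib.md5(mac.encode()).hexdigest() in target_mac_hashes

print(is_targeted("00:11:22:33:44:55"))  # True: this machine gets the payload
print(is_targeted("66:77:88:99:aa:bb"))  # False: everyone else is ignored
```

Hashing the list also meant researchers could not simply read the victims off the binary; they had to brute-force candidate MAC addresses against it.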
In the end, the software that we have in this type of device, or component rather, is fundamental, vital for the functioning of our machines, right? What characteristics does it have? What do attackers look for when they want to compromise a BIOS or a UEFI? One thing is code execution with the highest privileges; another is persistence, right? For example, would formatting my hard drive to remove this work? Would a normal scan with my antivirus work? Moreover, you can remove the hard drive and replace it with another one. Is the threat still there? Of course. Do you see? That's the issue.

So it really is something that exists, obviously within a very specific context, because again we come back to the same thing: motivations. In the end, all attacks have a cost, right? The attackers have to invest. So this is a matter of cost versus benefit. In December I had the opportunity to hear Eugene Kaspersky say something: today the challenge is becoming to make the attacks more expensive. If you make the attack more expensive, the attacker can say, you know what, it's not worth it to me. It's not profitable. Why? Because it costs too much.

Here we have a classification of the different attacks against this type of component, and look, they fall into two major types. It can be the result of an exploitation, that is, because a vulnerability was found, or it can be through a supply chain attack, as we already saw: a computer manufacturer through which the BIOS is updated, and that BIOS already carries the implant, is already compromised. The others are, for example, compromising the firmware as such and putting implants at that level. In the case of the supply chain, it can be in a BIOS update, or it can simply be a weak configuration. For example, when we talk about security hardening, how many companies really consider hardening the security, at least of their most important machines, at the BIOS or UEFI level? Very few, right? That is, there is no clear policy for it. In fact, in many cases it is not even considered a risk factor, because it would imply, in many cases, physical access. So we would say that this is limited to a certain type of attack. So, as you will realize, there are several ways in which this type of compromise can be carried out.

An example of this: here is a rootkit that was fully identified, developed by the company Hacking Team, from the country with a three-color flag similar to Mexico's but without the shield, which sells software for surveillance and monitoring. They developed implant technology at the BIOS or UEFI level. That is, they offered the option that an implant of this type could be used to get access to the targets' machines and, through it, obviously, to the machine's information.

Very well, I'm going to show this. How long do you think it takes to install an implant in one of these components, the BIOS or UEFI of a computer? Five minutes, in fact. The video is cut, obviously, because we don't have that much time here to be talking. I mean, if we all had tequila, we wouldn't care if the video ran long. But since we don't all have tequila, we cut it a little. But I want you to see it. How many company directors, how many diplomats, how many government officials leave their computers in the hotel room when they go out to dinner or to eat? This attack is actually called "evil maid", because imagine that the girl from housekeeping was given some money and told, "Look, you're going to do this." Or someone poses as cleaning staff, gets access to the room and then grabs the computer. Obviously, this requires physical access, but I want you to see the time it takes and the relative ease with which the device can be installed. Keep an eye on the clock on the phone: all you need is physical access to the computer and a device to make a direct connection, and at that moment the implant is installed inside the BIOS. The other way can be through a provider: the provider that sells you the computer, if an update is compromised, that would be the other way, through the supply chain.
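One natural first check when investigating a machine after such an opportunity is to compare a dump of the firmware against a known-good vendor image. This is only a toy sketch: real images are several megabytes, and machine-specific regions (serial numbers, NVRAM variables) usually have to be stripped before two images can be expected to match byte for byte:

```python
import hashlib

def sha256(image: bytes) -> str:
    """Fingerprint a firmware image so it can be compared to a reference."""
    return hashlib.sha256(image).hexdigest()

# Toy stand-ins for a clean vendor BIOS image and a dump taken from a
# suspect laptop; one module at the end has been tampered with.
vendor_image = b"\xffUEFI" * 16
suspect_dump = b"\xffUEFI" * 15 + b"\xffEVIL"

if sha256(suspect_dump) != sha256(vendor_image):
    print("dump differs from the vendor image: investigate further")
else:
    print("dump matches the vendor image byte for byte")
```

A mismatch does not prove an implant, it only tells you where to look; the interesting work is diffing the images and analyzing the modules that changed.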
But I changed the question. Do you think there are actors who have enough motivation to attack specific white people? High profile? Is it worth it to dedicate five minutes, a piece of wool and a cleaning staff uniform? Do you think it's possible? Of course. It's so possible that they are doing the demonstration of how this is normally done. So, in five minutes the equipment was implanted. When the computer starts, the implant is there. Without this being detected in many cases. Obviously, the companies that develop computers have already integrated, the other day I was in a talk with some of them, they already integrated security features that allow to validate, for example, any change that is in the BIOS, things of that kind, but well, we have to
take it as an attack vector, definitely, because we are talking about the possibility that this will happen, and this has nothing to do with spam, It has nothing to do with the conventional anti-malware that at some point it could have at network level or at computer level. Very well. I've already given you the context. Now, how do you investigate this? That is, when there is a suspicion that a BIOS can be compromised, how can you do it? There are tools that already have the functionality of scanning your BIOS to verify if there has been any modification or if there is any compromise at that level, because we return to the same thing. A conventional scanning
will look at RAM and the hard disk. It won't go further. So how do you do an investigation? Because here we are talking about a different problem. In the best case, if the threat is already known, maybe a scanning tool will detect it. But what if it doesn't exist yet? What if it's new? What if it's a recent attack? What if the malware was designed specifically for that target, or for a specific campaign? How can I detect it? How do I investigate? This divides into two steps. The first step — who here knows the procedure, for example, to analyze the RAM of a
computer? The first thing you do is a dump, right? A dump of the RAM. In this case, we would have to do a dump of the BIOS. There is a free tool that we just released, precisely because of the increase in this type of attack. You can download it from that link, for free. You can scan your BIOS directly to see whether it could be compromised. That is, it will look for rootkits, bootkits, everything that is already known. That would be one step. With this tool, if you run that command in Windows, for example, you will
make a dump of the BIOS so you can extract it. That is, it will not only scan it, it will also extract it so you can analyze it. That's where the investigation begins, because again, if we are dealing with something unknown, it is not so easy to identify. There are equivalents if you had a Mac or were doing it on Linux, because obviously the procedure is different. Now, here is something important: if you look at the second line, there is a procedure to remove sensitive or private data, because — I don't know if you knew — there is a lot of information associated with you stored there. So, obviously, it's a privacy issue. The ideal is to remove that information
so that you can focus specifically on what interests you: the part that could show where there is a compromise. This is a demonstration of how you would do it in Windows. The first step is the dump. Basically, you execute the tool. You can do it directly from the graphical interface, but the graphical part only does the scanning. If you really want the dump, you need to run the tool with its dump parameter. With that, you make the dump and at that moment you have access to your BIOS. If it detects something malicious, it will show you; it will create a directory, and inside that directory you will find the dump
of the BIOS. Very good. Once you have the dump, what you can check, for example, is the log file it generated. In this specific case I was checking a virtual machine, so it doesn't show real information, because many components are virtualized. On a physical machine it would show more. What is my recommendation? Download the tool and review your BIOS, just in case. Basically, what you are going to review is the last part, where it says "changes" — that is the part that interests us. There you could see whether there is something malicious. In this case, since it was
virtual, it did not generate reliable information. But that is the procedure. What if I want to do it with other tools? For example, what if I have a Mac? This would be an example of how you could do it on a Mac. The first thing is to obtain the firmware, obviously — we're going to do the dump. The same applies to Linux. First step, the firmware dump, so that we can do the research. We do the dump and save it under the name firmware.bin. Done — there we did the dump. As I was saying, it is very important to remove private or confidential information. That would be
done in the second part. In this case, that is the parsing. Often the dump you make is not legible, not something you can analyze, so you have to convert it to a readable format. It's like when you make a RAM dump: you need to make it readable first so you can use different tools on it. In this case we already did the parsing so that the firmware structure is readable, and from there you start. So there are two scenarios. If it is something known, we will probably detect it with an antivirus, whether it's a rootkit or a bootkit — the only thing you would have to do is scan the file.
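To make this concrete, here is a minimal Python sketch — my own illustration, not the tool shown in the talk — of two of the ideas being described: parsing a raw dump by locating standard UEFI firmware volumes via their `_FVH` header signature, and then flagging which regions differ from a known-clean reference dump of the same machine (the comparison approach the speaker attributes to the Copernicus project later on). The layout constants come from the published UEFI firmware volume header; everything else is a simplifying assumption.

```python
import hashlib
import struct

FVH_SIG = b"_FVH"   # signature at offset 0x28 of every UEFI firmware volume header
SIG_OFFSET = 40     # distance from the start of the volume header to the signature
FVLEN_OFFSET = 32   # FvLength: a little-endian UINT64 at offset 0x20 of the header
CHUNK = 0x1000      # granularity for the baseline comparison (4 KiB regions)

def find_firmware_volumes(image: bytes):
    """Locate plausible UEFI firmware volumes inside a raw dump.

    Returns (offset, length) pairs for each header whose declared
    length actually fits inside the image.
    """
    volumes = []
    pos = image.find(FVH_SIG)
    while pos != -1:
        start = pos - SIG_OFFSET
        if start >= 0 and start + FVLEN_OFFSET + 8 <= len(image):
            (length,) = struct.unpack_from("<Q", image, start + FVLEN_OFFSET)
            if 0 < length <= len(image) - start:
                volumes.append((start, length))
        pos = image.find(FVH_SIG, pos + 1)
    return volumes

def diff_dumps(baseline: bytes, suspect: bytes, chunk: int = CHUNK):
    """Compare a clean reference dump against a suspect one and
    return the offsets of chunk-sized regions whose hashes differ."""
    modified = []
    for off in range(0, max(len(baseline), len(suspect)), chunk):
        a = hashlib.sha256(baseline[off:off + chunk]).digest()
        b = hashlib.sha256(suspect[off:off + chunk]).digest()
        if a != b:
            modified.append(off)
    return modified
```

A real BIOS image has far more structure than this (compressed sections, FFS files, NVRAM regions), so treat this purely as a starting point for narrowing down where to look, not as a detection tool.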
Normally a BIOS image is composed of different files — DLLs, executables, and others. So the next step is to review those files to see if any of them have issues. The other way to do it is using YARA rules. For example, we have YARA rules that let us identify what is normal inside known BIOSes and what is abnormal — what escapes the normal. So, once I have the files, I can scan them with YARA rules and from there detect and identify things. When I'm dealing with an unknown threat, the ideal is to start with a file-analysis tool, verify the structure with it, and, if possible, create
your own YARA rule. There are some complementary tools of ours that are free — some even open source — that will help you in your research. For example, we have a project called KLara; I don't know if you had heard of it. KLara basically lets you, once you create your YARA rules, run a search across our entire database of identified malicious code to see how efficient or how precise your rule is. In other words, it lets you test your YARA rules. It is free: you can deploy your own server and, through an API, connect and get access to our knowledge base. So, I recommend
it, it's pretty good. The other is a tool my colleague Vitaly developed called BitScout — I don't know if anyone knew it already. It gives you remote access to computers when you want to do incident response or forensic analysis. Often you need to do remote extraction: the demo I showed you ran on the same machine — you had to run the program locally and do the dump. But what happens when the machine is remote? With BitScout you can access the remote computer and acquire different elements. For example, you can do triage, a memory dump, even the BIOS dump — but remotely. This is
a tool that was developed together with Interpol, and it is quite interesting for everything you can do with it. The only issue is that it is entirely command line, so the learning curve is a bit steeper, but the tool is very powerful. You can download it for free from the internet; it's called BitScout. Some other tools — here I share the links: the ones at the top are for making dumps of BIOSes on different operating systems, and the ones at the bottom are analysis tools. Once you have access to the files — executables, DLLs — you can analyze the behavior of a component, for example a DLL. And a given DLL, it turns out, it
doesn't appear on every computer. In fact, MITRE worked on a project called Copernicus, which basically took the BIOS of a clean machine and compared it with other machines to see if there was any modification. That way you can focus specifically on what you consider suspicious, on what is different or may have been manipulated, and fully review that file or that set of files. From there you can, for example, create a YARA rule, and that rule is useful for searching across the machines in your company — or at least the ones where you suspect there may have been a serious compromise. What is the essential point of all this? That, in the end, we are seeing new techniques that attackers are
using to compromise machines in an organization, and that forces us to think outside the box. For example, at the beginning of the year there was an attack on a financial institution in South America. Do you know what the attack vector, the compromise vector, was? Skype. The attackers set up the front of a fake company, sent job offers specifically to certain people in the area, and hooked them into a job interview over Skype. And in the recruiting stage they were all strung along: "you're going to earn this much, you'll have a second interview with the engineers," and so on. "We just ask you to fill out this application, please, so we have your data. Download it." Applicationpdf.exe. Oh
my God." What followed, you can imagine, right? Certainly, an application was opened to put your data, but obviously it was a troll. That is, if we don't think outside the box, We will continue to focus on protecting our teams, our networks, our organizations, only from spam, from the virus, from malware, and only having conventional tools. There are other very interesting tools, for example, I recommend Far Manager, extraordinary, he is a manager of of very good file systems, along with Hube. They can analyze, for example, files at the level of assembler or at the level of hexadecimal. So, they can help you a lot in the forensic analysis of the bios, especially when the threats are unknown. But the most important thing is
that you start using threat intelligence. For example, how many companies or organizations... If I am a government organization, I should consider this attack vector for the machines of my public officials, right? For high-profile executives of companies, right? The most important thing here is the intelligence: if I already know this exists, if I have already realized this type of attack exists, the best thing I can do is obtain that intelligence — who has done it, how they have done it, what the compromise vectors are, and how I can identify them — so that, at a given moment, I know whether my machines are compromised. In some of the talks, my colleagues
have told you, for example, about the MITRE ATT&CK framework. It is very important that we use that intelligence about new attack vectors and new compromise vectors — as in this case, supply-chain attacks, or implants placed in devices through their BIOS or UEFI — and then design a strategy that really helps us protect ourselves. The key is that, when I learn of attacks elsewhere with different characteristics, I implement security measures to detect them in time. And let me tell you a little about the work we do, because I don't know if everyone knows this area, GReAT, inside Kaspersky. Many people still think of us only as an antivirus company, but we also dedicate ourselves to research. Within Kaspersky there
is an area called GReAT, the Global Research and Analysis Team. We are a team of about 45 people who are constantly analyzing, globally, different attack vectors, threats, indicators of compromise, and so on. And part of our work — and this is the beauty of this area of security, what I wanted to share with you — is that it has become more than just forensic analysis or an investigation. The comparison we make is with a paleontologist. Normally a forensic examiner does autopsies and, from the autopsy, learns how an event could have happened. Here it is different. A paleontologist usually starts by finding a small piece, a bone, and then looks for
all the elements that may be around or associated with that bone. That is, it is no longer just a question of what malware exists, but of the context in which it was generated or developed. What is the motivation behind that malware? What is the objective? What were they looking for? Who might the actors behind it be? Because that is the information that really helps companies defend themselves. And part of how this work is done, I want to share with you today in this video. Can you raise the volume on the video a little? I don't know if there is audio; if there isn't...
Preserve skeletons of ancient monsters. In most cases, it's a very laborious search, analyzing a huge number of odd fragments, bones that might be part of a single creature or may come from different skeletons. Often you start with a single, small, badly damaged bone. Most people would probably just toss this bone aside and keep on going, but in security research we collect things. We work in different channels to collect artifacts that may or may not match the pieces in our collection. Sometimes we join efforts with other paleontologists and share our findings. And finally, we'll have an understanding of the APT we're really dealing with. But the investigation doesn't stop there. You need to find new samples, reverse them, reverse the communication protocol with
the CNCs, map new CNCs, find new victims. It's a lot of work across a lot of different investigations at the same time. Sophisticated cyber attacks usually have very professional groups behind them. They have resources, a clear goal, and a plan for how to achieve it. And there are very few teams in the world capable of professionally investigating these attacks. Kaspersky Lab's GReAT is one such team. GReAT is not just about analyzing vulnerabilities or taking a look at malware. We are hunting the hunters. We are revealing the monsters.
At Kaspersky Lab we process hundreds of thousands of samples every day. The art of figuring out which ones are significant is a bit like finding needles in a huge haystack. We are grateful for every needle we discover, because this makes the world a little safer. Very well. How can we help you with, for example, the identification of new threats and new attack vectors? We have a couple of totally free sites. One is called securelist.com. From this site you can download reports, indicators of compromise, and in some cases even YARA rules, which can help you implement better security measures and learn about new compromise vectors. If, for example, there is a new attack in Europe against a financial institution or a power company,
it is very likely that the attack will be replicated in the near future in the region, or here in Mexico. So this type of information will help you a lot to work in advance. Everything is totally free. And for large campaigns of targeted attacks, known as APTs, within Secure List itself, just add the apt prefix: apt.securelist.com. Also for free, you can access information on all these types of attacks and compromise vectors, including some supply-chain cases, which I think can be very helpful for you. Thank you very much for your attention; it was really very nice to be here at the event, and I hope to see you again in future editions. I
thank the organizers of the event again and congratulate them: it is very well organized and I think it has a great future. I don't know if you have any questions, any doubts? No? Very good. Questions? Yes. Yes, really, anything helps; anything is useful. I think what we have to understand is, in the end, how can we simplify security? It is about reducing the risk to a level acceptable to the organization. That is, who can be susceptible to certain types of attacks and, within that model, which devices would warrant specific configurations or that type of technology. But definitely, anything you use will help you reduce the risk. Many
times, do you know why it is not used? Lack of awareness — people don't know they are exposed to a risk. Remember — I don't know if you ever saw this — if the antivirus or any tool started slowing the machine down, people would disable it. "My game is getting slow — disable the antivirus." That's the reality. So, many times security is not implemented even when it exists, or, as you said, a few years ago it was not so mature, and it probably has to do with necessity as well. When there had not yet been a number of serious attacks, this was not really considered a possibility or a high risk. When attacks increase, it becomes a
necessity. As I told you, we are working hard on this because we know it is already an important attack vector — even through supply chains. The software I told you about in Ukraine: imagine, accounting software, accounting software installed in companies — what could the risk be there? Another case we saw was an application installed for remote management of machines, where you can control your machines from a centralized point and organize them. A financial company told us about an incident in which they had discovered fraudulent operations. When we did the investigation, we found that everything had originated from a DLL of this application. And it was a valid application. But when we checked
the DLL, we realized the application had been trojanized, and that is what had caused the compromise. So again, that type of attack had not been considered; obviously the focus was on spam, USB drives, and that kind of thing, and it was something the attackers took advantage of. It could be the same case here. I don't know if you've seen companies that have implemented this type of module. What happens is that when I was doing research with Intel on implementing this type of module... [inaudible] ...but you are doing the right thing. There are many adaptations that have to be made in those scenarios. Anyone else? Yes.

[Inaudible audience question.]

So the risk is that, for example, if you upload some kind of image or something that is going to be compiled or used, it is an ideal method for attacking that supply chain. Here, we already saw one way to reduce the risk: OK, I sign it digitally — but what happens if someone steals your certificate and uses it to sign? So, basically, you also have to treat the fact that it is in the cloud as a compromise vector,
and depending on the type of information you are uploading, you try to reduce the risk accordingly. There are cloud services, for example, that are fully encrypted end to end. Often the ideal is to use that type of service, because end-to-end encryption at least guarantees that even if the provider's servers are hacked, they will not have access to your information. Those would be ways to address it. We have a question over there. - How are you? Good afternoon. In your experience, when you have identified an attack through the BIOS, what mitigation, eradication, and recovery have you implemented? Because many times, in some cases, it
has been seen that even the manufacturers themselves are compromised. So - these implants come from the factory? - Yes, they can. - Then how can you do that cleaning to make sure you are not infected? - Yes. The first thing is — look, many manufacturers, as I said, have already included protection mechanisms in their devices. For example, to verify the integrity of the BIOS: block any modification, or validate that an update is an authentic one. That could be one measure — implementing that type of mechanism. When the manufacturer doesn't provide them, the ideal is to perform periodic scanning, even with YARA rules, in a preventive way. There is an activity
within this whole process, done proactively, called threat hunting, which basically allows you, on the machines you consider high risk, to carry out this verification: you do a BIOS scan and try to identify whether you could be compromised. Let's say that would be the combination. Thank you very much again, and we hope to see you soon. Thank you.