
The B-Sides DC 2016 videos are brought to you by ClearedJobs.Net and CyberSecJobs.com, tools for your next career move, and Antietam Technologies, focusing on advanced cyber detection, analysis, and mitigation. My name is Greg Norcie, and this is My Usability Goes to 11: A Hacker's Guide to User Experience Research. To give a little bit of an introduction to how I came along this path: I started out like a lot of you, just a random script kiddie in the 90s. Towards the end of undergrad, I got really interested in the idea that we've got all these security tools, and the problem doesn't seem to be that we need new forms of encryption or new tools. It's that these tools are really hard to use. Around that time, I started working for a usability researcher at Carnegie Mellon and got some practical experience, and ended up going off to grad school specifically to study usable security, to work on how we can design these privacy technologies, with the intent of becoming a professor. Partway through that, I decided, eh, I don't really want to be a professor. I ended up leaving with my master's and came here to Washington, D.C., where I started working for an organization known as the Center for Democracy and Technology. They're not as well known outside the policy sphere. One way to put them in perspective: they actually used to be the East Coast office of the Electronic Frontier Foundation, until they split into two organizations, which is an interesting history I won't go into. Currently I'm doing some pre-election consulting, which is very interesting and which, unfortunately, I can't really talk about. So just to give an outline of what we're going to cover here: first, we're going to talk about why usability is important. In the community there's a phrase we love to use, RTFM, read the fucking manual. And that can be both good and bad. It's good to want people to learn, it's good to want people to not have knowledge spoon-fed to them. On the other hand, that doesn't necessarily mean that we should overly complicate things for the sake of overly complicating
them. Second, I'm going to talk about why usability is hard. This is a hard problem. Third, I'm going to talk about why usable security, in its own special realm, is even harder. And then finally, I'm going to talk about how we can evaluate our usability in a systematic manner. This is where it becomes really important. It's really easy for two people to say, "I think it should look like this." "No, I think it should look like this." And there's no way to move past it: you've got your opinion, I've got mine. How do we gather data and, in a scientific way, try to decide how we should design our privacy tools?
Just to give a little note on terminology: when we talk about security, we're talking about the normal stuff, confidentiality, integrity, non-repudiation. Privacy is simply control over your personal information. So a lot of times, when we say technique X can be used to achieve usable privacy, that also means we can use it to achieve usable security. We're also going to use a phrase a lot in this talk, PET, which stands for privacy-enhancing technology. It's a term that comes from the academic community. There's actually a Privacy Enhancing Technologies Symposium every year, which hosts a lot of interesting work; there's a large chunk of the Tor developers hanging out there every year. It's a cool place. You should check it
out. So, why encourage PET adoption? Why should we care? First off, a healthy democracy thrives on anonymous speech. Second, mass surveillance has been extensively documented at this point; the IETF has gone so far as to say that pervasive monitoring is an attack on our infrastructure. Third, and this is where it starts to get really interesting, we're now seeing studies from sociology and psychology showing that the existence of this mass surveillance has a chilling effect on free speech. If someone feels that there is a large surveillance apparatus looking at not only what they say publicly, but their browsing history, what they're reading, they may go so far as to not look at certain sources of information. There have been studies showing, for example, that after the Snowden revelations, after the Patriot Act and all these things, people are less likely to look up things on Wikipedia that they think may be related to terrorism. And finally, without free speech, republics don't stay republics. I've got a picture of Pussy Riot up here for a reason: when you start not letting people express their opinions, when you start arresting journalists, when you start doing things like this, it has a very negative effect, and it's a very slippery slope to go down. And the cornerstone of free speech is anonymous speech. If you have anonymous speech, go ahead, they can pass whatever law they want on Capitol Hill, it doesn't matter who gets elected. If
you can get the information out there, you can help preserve a healthy democracy. So why else do this? Well, for one, the math is good, as Bruce Schneier said. If we look at the Snowden slides, the NSA thinks that "Tor stinks." That would imply that Tor's pretty good. And the Five Eyes are mostly focused on bulk collection and then injection into unencrypted streams; Tempora and Quantum, if you want to look up the programs. There's no evidence that the most common encryption standards, like AES and PGP, have been subverted. We did have a little bit of wonky stuff with, I forget the name of the standard with the curves, but for a lot of the more popular things, we don't have any
evidence, for example, that AES has been compromised. We don't have any evidence that PGP has been compromised. So these underlying privacy-enhancing technologies are solid. And we need to resist what Micah Lee called security nihilism, the idea that no matter what I do, I can't protect my personal information, so I might as well just post everything on the clearnet anyway. That is not a good mindset. So we know that our math is good. We know that these tools work. Why else encourage this adoption? The second reason is the idea of network effects. Some of you may have heard of Metcalfe's law: the value of a network grows roughly as the square of the number of users. This is why, for example, whenever somebody says something is "the new Facebook," it probably isn't. Why do people use Facebook? Because all their friends use Facebook, and unless all their friends start using something else, it's really hard to bootstrap a competitor. And this is especially true for anonymity systems. We've seen that usability barriers to Tor can hurt Tor's adoption. We'll go into this later, but the more people using an anonymity network, the more anonymous you are. So it's in everyone's best interest that the tool be as easy to use as possible. Thus, usability becomes a security property.
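As a quick aside, here is the back-of-the-envelope math behind that Metcalfe's law claim (a rough heuristic, not a precise model):

```latex
\text{pairwise connections} \;=\; \binom{n}{2} \;=\; \frac{n(n-1)}{2} \;\approx\; \frac{n^2}{2},
\qquad
V(n) \;\propto\; n^2
```

For an anonymity network, that same n is also the size of the crowd you hide in, which is why every additional user makes every other user a little more anonymous.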
And finally, here's a pull quote from one of my friends whom I was trying to get to download Signal. He said, "Nobody wants to download a special app to talk to Greg Norcie." And this is the barrier we're going to face: security is not a primary task. Nobody pulls out their phone and says, "I want to be secure." They want to text, they want to order food, maybe listen to some Taylor Swift. So why else encourage adoption of these technologies? Because "nothing to hide" is a load of shit. Everybody has something they want to hide. If you do not feel that you need privacy, see me after this talk and I'll put a Nest Cam in your bathroom.
So once we've established that there is at least some base level of privacy everybody wants, the question becomes: where do we set that line? We all want privacy; we may have differences of opinion about how much privacy we should have. But this idea of a fully transparent society is BS, and we really need to be clear about that. We need to be clear that we're not living in a society that's going dark; we're living in a society that is going light. Our intelligence agencies have reams and reams of information. We haven't had a surveillance society like this since East Germany. Remember that the next time an elected official comes to you and says, oh, there's this piece of data I
can't access. OK, so usability is important. So why don't we just make these privacy-enhancing technologies usable? Why don't we just do it? To that, I give you my friend Neil deGrasse Tyson with this excellent quote, which you may not be able to read: "Obama authorized North Korea sanctions over cyber hacking. Solution there, it seems to me, is to create unhackable systems."
The problem is that usability is hard. There is no definition of "usable." I can't write a program that can tell you "this program is usable." It's inherently hard to define and quantify. There was a very famous obscenity case before the Supreme Court, where the Court was trying to say what obscenity is, and Justice Potter Stewart said, "I know it when I see it." Unfortunately, that's not helpful if you're trying to know in advance whether something is obscene. Likewise with usability: we know it when we see it. You can look at something like an iPod and see that it's usable, but it's very hard to define what makes it usable. And the other problem is that
the feedback you're getting from your users may not be granular; it might not be as helpful as you would like. People will come to you and say there's a problem, but they don't necessarily come to you with solutions.
And second, on top of all the stuff I just described (everything I said up until this point you would see at any other human-computer interaction conference), we have the additional burden that usable security is harder. There are a number of reasons why. In fact, there was an entire academic paper on this all the way back in 1999, by Whitten and Tygar. Basically, they did what's called a cognitive walkthrough and a follow-up lab study of PGP 5.0, and this was one of the first times somebody used these human-computer interaction techniques to study a piece of security software. And they came up with some really
interesting conclusions. According to Whitten and Tygar, Johnny can encrypt if he's made aware of the security tasks he needs to perform, is able to figure out how to successfully perform those tasks, doesn't make any dangerous errors, and is sufficiently comfortable with the interface to continue using it. Unfortunately, there are several properties that make this difficult: unmotivated users, abstraction, a lack of feedback, what they call the barn door property, and the weakest link property. We're going to go over each of these and talk in detail about why they make it hard for us to create usable security software. First, the unmotivated user property. Security is not a primary task. I don't go onto my computer and say, I want to be secure.
As I said earlier, I want to listen to Taylor Swift. And users aren't lazy. This is another thing I see bandied about: users are lazy, users don't want to protect their personal information. No. Users are making a calculated cost-benefit analysis.
Yesterday, I went into the gift shop and bought a Red Bull for like $5, and I used my credit card to do that. Now, some people might say that's bad security; that credit card swipe data might be floating around in Russia right now. But I know I am not liable for fraudulent credit card transactions as long as I report them within 30 days, so I choose to take that risk rather than walk over to an ATM and get some cash. Similarly, people's time has value. If somebody is not being evaluated at work on being secure, but on delivering results, why would they take extra steps which will impact their performance and
not give them any perceived benefit?
To give a real-world example: a lot of systems say you need to have a very long, complex password, you must not write it down, and we're going to make you change it every 30 days. Then we get irritated when the user writes the password down. Second, we have the abstraction property. Programmers deal with abstraction all day, every day, but concepts like public keys and signing are hard to understand. And not only are they hard to understand, there aren't really good real-world analogies for something like public key encryption. For other things in computer security, sometimes you can get your point across. I was talking to my dad on the phone the other day, and the topic of
the Internet of Things DDoS attack came up. He was like, well, can you explain this to me? And I said, well, imagine that everybody flushed their toilets at the same time, and imagine that the sewer is Facebook. There you go.
But because of this abstraction property, there's what we call in human-computer interaction a gulf of execution. The user knows where they are right now, and the user knows where they want to be, but they don't know the steps in between; they have no way to move step by step toward that goal. And this can lead to some interesting situations, because users try to follow our directions, but because of this abstraction, they don't always do it as well as we'd like. For example, we tell users: use secure Wi-Fi. So a user goes into a coffee shop and there's a password written on the wall, the WPA (hopefully WPA2) pre-shared key. So they
think, oh, this wireless is secure, there's a password, and they go log on to their online banking and do other sensitive tasks. Now, thankfully, at this point you've usually got HTTPS for most of your important sites. But still, anybody who also has that pre-shared key can sit with Wireshark, sniff packets, and decrypt them at a later date. So it's very hard to simplify things, and it's very easy to give advice that will be misinterpreted. The third is the lack of feedback property, and that picture right there, if you can't see it, is the Wall of Sheep from DEF CON. If you're not familiar with the Wall of Sheep, basically back in the good old days, what would happen is if you sent
a password in clear text on the DEF CON network, it got thrown up on the wall. What we're talking about with this lack of feedback property is that if I do make an error, I don't know right away; I don't get immediate feedback. And immediate feedback is how humans learn; it's how anyone learns. For example, they say if you have a dog and you come home and the dog's made a mess, punishing the dog won't really do anything, because it's not going to associate the punishment with the act; it just thinks you punish it when you come home. You need to catch it in the moment and then say no, into the crate, or something like that. So
the takeaway here is that people can often engage in insecure behaviors for a very long time and never get any feedback until it's too late. Then there's the barn door property. Just as it's futile to lock the barn door after the horse escapes, secrets, once leaked, remain leaked. Small errors lead to irreversible consequences. So let's talk about everybody's favorite person, Dread Pirate Roberts, who, if you're not familiar, was convicted of running the Silk Road, the darknet drug (slash, let's just say, poorly managed) marketplace. How did they find Dread Pirate Roberts? Well, it turns out the trail started with him plugging this darknet website, using his Gmail address, on a clearnet site
associated with his real name. So they began to monitor him and his mail. Eventually, some fake IDs were delivered to him, so they came to talk to him: why do you have like 20 different IDs coming to you, a bunch of different names, all with the same picture? He's like, oh, you can't prove I ordered those; maybe somebody used some darknet site like the Silk Road to frame me. He couldn't really give a good reason why anyone would do that. So they started following him, and eventually they followed him to a public library where he was logging in, and they literally grabbed his laptop and pulled it away from him so that
he couldn't encrypt it, and arrested him. And this long chain of events came from one simple error. I'm not even going to call it a stupid error, just a simple error. And finally, there's the weakest link property. The network is only as strong as its weakest component. We, the defenders, must defend everything, all the time; the attacker only needs to get in once, and they only need one unpatched system. And here's the thing: a user who's going to click on a phishing message is an unpatched system. If your organization has created a culture of fear, where if I get an email from my boss that says "open this immediately" I fear for my job if I don't, you don't get to complain when users
unthinkingly open attachments from someone claiming to be you.
So now we're going to get to the meat of the talk; I feel like I've sold the problem at this point. How do we make our privacy software usable? We're going to start at the high level and get really specific. At the high level, if you have hiring authority, try to hire what IBM Design calls T-shaped designers, or in your case maybe T-shaped engineers. The idea is that you don't want somebody who just has depth in one area; you want depth and breadth. The three areas are usually software engineering, which pretty much everyone at this event is going to have, and the other two are research
and design. When we say research, we're talking in the experimental psychology sense, and we separate out design to basically mean making things look pretty. People talk crap on that, but it is a skill, a valuable skill, one that gets better with practice. There are heuristics you can use, things like color wheels; it's not completely made up on the spot. Now, you might be sitting there saying, well, I don't work for some big company, I work on an open source project, we don't have a lot of resources. So here are two techniques that any small project can
use. The first is what's called a cognitive walkthrough, and the second is a small-scale user study. I'm going to talk about how we can do both. First, the cognitive walkthrough. The idea is that we ask: will users try to produce whatever effect the action has? Will users see the control for the action, and be able to find the way to initiate it? Once a user finds the control, will they recognize that it produces the effect they want? And after the action is taken, will users understand the feedback they get? You ask these questions for what are called core tasks. So for
example, let's take installing and using Tor. We could say that our core tasks are to successfully install Tor, to configure the browser to work with Tor and the components, to confirm that the web traffic is anonymized, and successfully disable Tor and return to a direct connection. And there is actually an entire academic study done on Tor that looked at this. And at the time there were a bunch of different ways of using Tor. And they found that an integrated browser was probably the best experience, which is now what Tor does, the Tor Browser Bundle.
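To make that concrete, here's a minimal sketch of how you might record a walkthrough. The task list mirrors the Tor example above; the question wording and function names are just illustrative, not from any standard toolkit.

```python
# Minimal cognitive-walkthrough recorder (illustrative sketch, not a standard tool).

WALKTHROUGH_QUESTIONS = [
    "Will the user try to produce the effect this action has?",
    "Will the user see the control and find how to initiate the action?",
    "Will the user recognize that the control produces the effect they want?",
    "After the action, will the user understand the feedback they get?",
]

CORE_TASKS = [
    "Successfully install Tor",
    "Configure the browser to work with Tor",
    "Confirm that web traffic is anonymized",
    "Disable Tor and return to a direct connection",
]

def run_walkthrough(tasks, questions):
    """Walk the evaluator through every (task, question) pair and collect the failures."""
    failures = []
    for task in tasks:
        for question in questions:
            answer = input(f"[{task}] {question} (y/n): ").strip().lower()
            if answer != "y":
                failures.append((task, question))
    return failures

if __name__ == "__main__":
    for task, question in run_walkthrough(CORE_TASKS, WALKTHROUGH_QUESTIONS):
        print(f"Potential usability issue in '{task}': {question}")
```

Even a single evaluator working through a list like this will surface the big, obvious problems before any users are involved.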
So at this point you may be thinking: it sounds like I can do this user experience evaluation sitting alone, so why even do a user study? Because, again, you want to have some empirical data. Anytime you're trying to make a change, you're going to get some pushback. Things were built a certain way for a reason, and sometimes that reason could be as simple as it being easy to just throw 20 different buttons in a GUI and say, find the right one. When you use empirical methods, you avoid some dev coming to you and saying, well, that's just your opinion, man. Because it's really easy to design things poorly, especially when we're experts and it's hard for us to think like our users.
And I love this analogy. If you ever watch It's Always Sunny in Philadelphia, there's an episode where they have a box of hornets, and Charlie says, you know what I'm going to do? I'm just going to pop a quick H on this box, for hornets. That way people will know it's full of hornets. Sometimes that's how it is when you're an engineer: you just don't realize how much knowledge you have that your users don't. Anyway. To finish up, I'm going to give you a little case study. This was actually my master's thesis, called Why Johnny Can't Blow the Whistle: Identifying and Reducing Usability Issues in Anonymity Systems. Basically, this was two studies. I did the first study, went to a very nice conference, the
one I mentioned earlier, the Privacy Enhancing Technologies Symposium. It was in Spain that year, so I got to hang out in Spain and meet a lot of Tor devs, and they were really open to the idea of making Tor more usable, and they made several changes. And then, this is the most important part: we didn't just suggest changes; after the changes had been made, we went back and re-ran our experiment to make sure they had actually increased usability. So before we go on: freaking Tor, how does it work? I don't want to assume that everybody in the room is familiar, but I'm going to go a lot quicker than I have in other venues. The anonymity service uses what's called
onion routing, a technology originally developed by the Navy; the project was briefly funded by the Electronic Frontier Foundation and is now its own 501(c)(3). The basic idea is that you route your traffic through three nodes to mask your identity. Another key point is that when your traffic exits the network, it's going to be in clear text unless you've already set up some sort of encryption on top. That's actually how WikiLeaks got started: they ran a Tor exit node, sniffed the traffic for a while, and found it was being used to pass around some interesting stuff by, I believe, Chinese hackers? I don't want to say exactly, since I don't recall off the
top of my head. So the basic idea: we obtain a list of nodes, we plot a route through those nodes, and we have an entry, middle, and exit node. Traffic between the nodes is encrypted, and each node only knows the hop before it and the hop after it. So, for example, the middle node does not know where the traffic originally came from or its final destination; it only sees the entry node and the exit node. And then here's the most interesting part: every 10 minutes you switch circuits. So even if somebody is somehow starting to figure out who you are, you're hopping circuits every ten minutes and you get another set of three nodes.
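Here's a tiny conceptual sketch of that layered encryption, just to illustrate why each relay only learns one hop. This is not how Tor's actual circuit cryptography works (real circuits use keys negotiated per hop, fixed-size cells, and so on); the Fernet keys below just stand in for the per-hop layers.

```python
# Conceptual onion-routing sketch, NOT real Tor: each relay can strip exactly one layer,
# so it learns only its neighbors, never the whole path or (except the exit) the payload.
from cryptography.fernet import Fernet  # stand-in for Tor's per-hop circuit keys

relays = {name: Fernet(Fernet.generate_key()) for name in ("entry", "middle", "exit")}

def build_onion(payload: bytes, path) -> bytes:
    """Client wraps the payload in one layer per relay, innermost layer for the exit."""
    cell = payload
    for name in reversed(path):          # exit's layer goes on first, entry's layer last
        cell = relays[name].encrypt(cell)
    return cell

def peel(cell: bytes, name: str) -> bytes:
    """Each relay removes exactly one layer and forwards the rest to the next hop."""
    return relays[name].decrypt(cell)

cell = build_onion(b"GET / HTTP/1.1  (leaves the exit in clear text!)", ("entry", "middle", "exit"))
for hop in ("entry", "middle", "exit"):
    cell = peel(cell, hop)
print(cell)  # only after the exit's layer is removed is the payload visible
```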
There's been some previous research on Tor's usability. The paper I mentioned earlier did a cognitive walkthrough, generated a set of heuristics, and did what's called a heuristic evaluation; a heuristic is just a rule of thumb, a "this is something you should do." And they concluded that a browser bundle was the best way to do Tor. At the time, if you wanted to use Tor, you would take a stock instance of Firefox and install some sort of proxy management software, usually something like FoxyProxy; you didn't have a standalone application. So Tor took this feedback and created the Tor Browser Bundle, a custom Firefox build. It came with Vidalia for proxy and identity management, and Firefox had Torbutton installed,
which was just a little button that turned Tor on and off, defaulted to on. So to give a little timeline, back in 2012, we presented these results. 2013, Tor makes the changes we suggested. We also worked on a custom extension, which we'll talk about. And then in 2014, we presented this work.
So we did two studies, and they were pretty simple. We brought 25 students into a lab, had them download and install the Tor Browser Bundle, and had them note the usability issues they hit along the way. That sounds really simple, but to achieve it actually took a lot of detail. So, our study methodology. This was the lab section of a computer security class at Indiana University, so we had some pretty educated users. We passed out consent forms; when you're doing university research, you need to get consent from your subjects and inform them of the risks and benefits of the study. For studies like mine, this is mostly a formality. There's not really much of
a risk in talking about your feelings about using a browser bundle, but for things in the psych department or the biology department, it becomes a very important process to have. So we instruct users to download Tor, install it, and then download a new desktop background for the virtual machine they're using. Users are briefed that this is not a typical lab, and that each time an issue is encountered, they can raise their hand and say they're having a problem, and me, the experimenter and TA, will be happy to tell them how to solve it. All I ask is that they note down that they had the issue. And whenever somebody says, I am having trouble with X,
I come over, tell them how to get past the sticking point, and just say, can you make sure you note that down on your study sheet? Then we did a little coding. Coding in the field of usability is not your kind of coding; coding here means assigning categories to responses. So at this point, we have done the study, we've collected demographic information from people, and we have a long list of complaints about Tor. The first step is to separate those out: if we have one user and three complaints, each complaint gets its own entry in a spreadsheet. The second step is to come up with a mutually exclusive list of categories that encapsulates all the issues that were encountered. Third, we then
have two people sit down and assign each reported problem to a category. Why do we use two people? Because, again, we want to be empirical. This can't just be my opinion. Not everything is black and white, especially when you're looking at free-response data, so we want to make sure the categorization isn't just one person's judgment.
So we have two coders. They go off, they assign categories, then they come back and see what the overlap is: how much did we agree? And there's actually a statistic called Cohen's kappa, which is designed to check that the level of agreement for this sort of category coding is better than random chance. Because hypothetically, if I were just throwing darts at a board, we could both randomly assign categories and still agree some of the time, and we don't want to count that. So, in the end, we came up with this nice little set of categories for issues, descriptions of what the issues were, and what percentage of users hit each one. We're not going to talk about every single issue, because we're a little pressed for time.
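For reference, here's a small sketch of how Cohen's kappa is computed for two coders; the issue labels in the example are made up for illustration, not the actual study data.

```python
# Cohen's kappa for two coders: kappa = (p_o - p_e) / (1 - p_e), where p_o is the
# observed agreement and p_e is the agreement you'd expect from chance alone.
from collections import Counter

def cohens_kappa(coder_a, coder_b):
    assert len(coder_a) == len(coder_b) and coder_a
    n = len(coder_a)
    # Observed agreement: fraction of items both coders put in the same category.
    p_o = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    # Chance agreement: product of each coder's category frequencies, summed over categories.
    freq_a, freq_b = Counter(coder_a), Counter(coder_b)
    p_e = sum((freq_a[c] / n) * (freq_b[c] / n) for c in freq_a.keys() | freq_b.keys())
    return (p_o - p_e) / (1 - p_e)

# Hypothetical labels for ten reported Tor issues (illustrative only).
a = ["launch", "delay", "delay", "window", "launch", "other", "delay", "window", "launch", "delay"]
b = ["launch", "delay", "window", "window", "launch", "other", "delay", "window", "delay", "delay"]
print(round(cohens_kappa(a, b), 2))  # ~0.72, which most rules of thumb call substantial agreement
```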
Instead, we're going to do a deep dive into the main ones. If we look at the data in a visualization, we find that about 56% of the reported problems came from just three issues, and I'm going to talk about each of these. The first was the complaint of a long launch time. It's a little hard to see on the screen here, but this was the Vidalia control panel that pops up when you start Tor. What would happen is you would click to start Tor, this panel would open, and then as this
panel was opening, Firefox would also open. But there was often a lag between this panel opening and your Tor browser opening, sometimes a long one, up to even a minute. So somebody clicks to start Tor, Vidalia opens up, sets up the Tor circuit, sets up the proxy, and says: you are now connected to the Tor network. But during this gap, some users would see that message and figure their entire machine was connected to the Tor network, not one specific browser, pop open Chrome, and go do the study task. That's all well and good for a study task. But what if we weren't doing a study task? What if I was signing up for a Facebook account
and starting a protest that could get me shot? It becomes a serious issue.
So our proposed short-term solution was to alter Vidalia so the lag between the two is shorter; that's really simple, you can just delay showing the window. And the longer-term solution, which required a much heavier lift on Tor's part, was to move Vidalia's proxy management functionality into the Torbutton extension itself. Now, if you look on the screen, a lot of these functions, like creating a new identity or seeing what nodes your traffic is passing through, are all handled within the Tor Browser itself. You don't have this two-windows, one-application situation.
The second was complaints about browsing delay. Users complained that the Tor Browser Bundle has a noticeable lag. Well, that one we can't do a ton about: when you're routing traffic through a series of nodes, it's always going to be slower than a direct connection. But you can control perception. For example, users know that coffee shop Wi-Fi might not be as good as their home connection, and somebody might not be incredibly frustrated if they sat down in, say, a Starbucks and were unable to stream Netflix. So similarly, we hypothesized that if we popped up some sort of message that just said, you're experiencing lag, this is normal and expected, it will pass, that would reduce
frustration with the browsing delay. Again, we can't solve the underlying issue; we can just try to inform. And the third was what we called window discriminability. We were talking about how the Tor Browser Bundle was a custom Firefox, and when I say custom Firefox, I mean it even used the Firefox icon. If you looked at your taskbar, there was a Firefox icon, and only when you moused over it did it say Tor Browser. This could get confusing if you had actual Firefox and Tor open at the same time; it was very easy to alt-tab into the wrong window. So our suggestion there was really simple: alter the Firefox logo and give Tor Browser its own logo. But there are also things we suggested that
were not adopted, and with good reason. For example, we talked about altering Firefox's theme so it's a nice bright green and it's incredibly obvious, just from the window itself, that you're using Tor. We talked to some users who do not reside in the United States, who live in countries where internet access often comes in the form of internet cafes, and we were told this was a terrible idea: the governments in these countries are very repressive, and the internet cafe operators are willing to have a sort of don't-ask-don't-tell policy about what people are doing on those computers, but if a policeman comes into that cafe and can tell from across the room that somebody is
using Tor, they're going to assume the cafe operator could tell as well, and then they're both in trouble. And that's going to lead to people being told, don't do that here. So we dialed it back a little. So what was the result? Between our two studies, the number of people saying "I had no problems" doubled. We made this clear in our methodology: if you're having an issue, write it down, but conversely, if you're not having an issue, that is a data point as well. It's perfectly valid to go do your study task and say, I didn't have any issues. If we look in detail,
we see an across the board reduction. Long launch time jumps way down. Browsing delay goes down. Window discriminability goes down. Even some of the other problems that we weren't really getting into went down.
So, I ran through this a bit quicker than planned, so I'll have some time for questions. In summary: usable security is a hard problem. Try to hire T-shaped engineers; if somebody has some experience in experimental psychology, a little experience in statistics, a little experience in design, that's a good thing. If you focus solely on being able to pump out shellcode, you'll get someone who can pump out shellcode. If you do cognitive walkthroughs, you can easily and methodically spot major usability issues. And with minimal resources, you can sit down with users, and you don't have to do a user study in parallel, because it can be a little hard to get 30 people in one room doing a user study.
You can do it serially as well: one-on-one, multiple times over. You can reach out to people in your company. When you're doing a user study, you want a lot of diversity, because you want your results to extend to the general population. On the flip side, if you're talking to somebody who has a master's or a PhD in computer security, somebody who has certifications, somebody who is a security expert, and they're getting confused by your interface, that also is useful information: if some dude I met at DEF CON can't even use my software, how am I going to expect grandma to? Because that's what we want at the end of the day. We want this software to be usable. Now, we've created
iPods, we've created so many things for entertainment. We can take these same techniques, the ones we use to dumb ourselves down, and use them to enhance our privacy. And we need to do this, because it's important for our democracy. Ben Franklin once said, on the subject of America, "a republic, if you can keep it." And we need to be able to speak out anonymously. There are always going to be people who use this anonymity to say vile things, but we need to make sure we're focused on educating people, on making sure we don't just silence the vileness but actually try to change the opinions. If we just hide bad opinions and let them fester,
that won't be a solution. So if you're sitting here and you're intrigued, let's say I've sold you and you want to do more usable security, what can you do? First, read Why Johnny Can't Encrypt, the paper I mentioned. There's a great tool called Google Scholar; if you haven't gone to grad school, you might not have used it, but most of these papers have a cached version on Google Scholar even if they're technically behind paywalls. A second paper to read is Users Are Not the Enemy by Angela Sasse at University College London, which talks about this idea that users are making informed trade-offs; the idea that security is not a primary task came from that paper. Third, read my paper, Why Johnny Can't Blow the Whistle. I mean, I'm not biased at all. It's the greatest paper ever. It's the hugest, most luxurious paper. And fourth, there's a website called hcibib.org. It has a ton of methodology papers and things like that; treat it like you would learning a new programming language. And then finally, on your own: usability is not a checklist. If there's one thing I want you to get out of this talk, it's this. Usability is not something where you can say, "I did usability." You can't just say, well, I did a cognitive walkthrough, so it's usable.
It's a state, not a checklist, and it's going to be continuous: your software is going to continue to change, and as your software changes, you're going to need to continuously evaluate it. A lot of the larger companies now have trios of people: a designer, an artist; a user researcher, someone like an experimental psychologist or anthropologist; and a software engineer, all working together to push out individual products. Your company should be like that as well. If you've never hired somebody who doesn't know how to code in C, reevaluate that. And then finally, a shameless plug: if you want more comments on user experience or information security, or on policy (I've worked on K Street for a year and a half and love to talk about politics), I have a Twitter. And at this point, if anybody has questions, we have a little bit of time.
Sure. Oh, sorry about that. It's hcibib.org.
Oh yeah sure.
Well, if there's nothing else, I'm going to pack up if anybody wants to come up and talk one-on-one.