
Ground Truth Keynote: Great Disasters of Machine Learning - Davi Ottenheimer

BSides Las Vegas · 32:23 · 1.0K views · Published August 2016
About this talk
Ground Truth Keynote: Great Disasters of Machine Learning - Davi Ottenheimer. Ground Truth Keynote, BSidesLV 2016, Tuscany Hotel, Aug 03, 2016.
Transcript [en]

So, this is the keynote talk for this track. I'd like to introduce Davi Ottenheimer, president of flyingpenguin. Give it up for Davi.

All right, thanks everybody for coming to my talk, especially because yesterday I was sort of hiding in the general population. I was walking around and standing behind some people I know quite well in data science, people I've known a long time, and one said, "Are you going to the Ottenheimer guy's talk?" Ottenheimer's right there, and they're like, "Oh yeah, probably going." So I appreciate you all coming. They're not here, by the way; that's why I told that joke.

Yeah, this is a very packed talk, because it's based on books I'm writing and also a course I teach over two days, about 16 hours of lecture, that I'm going to compress into 30 minutes. I've pared it down substantially. I hope there are lots of questions, but my style of talking is not like TED talks. I'm not a fan of TED; I feel like they stretch one idea over 15 minutes just to hammer home that one point. I prefer to glut the talk with lots of ideas, which, thanks to technology, you can go back and watch later, or if you have questions you can always ask me later.

But I'm trying to compress and push as much as I can through some anecdotes, and if one thing out of the 50 I tell gets through to you, that's my style. So the agenda today I've boiled down to a poem; I'd have liked it to be a haiku, but I couldn't quite get there. First I'm going to talk a little bit about myself and why I think my perspective matters in these discussions. Usually I try to take myself out of the discussion, but in this case I think my experience is unique and helps. And then it's basically this: we believe these machines, this future of technology, are better than us.

There's this implicit assumption that we're going someplace great, a utopian society even. It actually turns out the machines repeat a lot of our mistakes, which we can tell by looking at the past. As a historian, I think it really helps us to look at what's happened in the past; then we can see the obvious reasons these things fail, and if we can see the reasons for the failures, hopefully we can fix them.

A little bit about me and my past presentations. In 2011 I gave a talk at BSides that I'm still kind of proud of, called Dr. StuxLove.

In it I said it's probably not just one actor; it's probably the United States, Israel, and someone else, I don't know, England or Germany, that helped build it. You might have seen some of this lately in the Stuxnet news, and that's pretty much accurate. And the way I described it was that it was so well written, probably, that you don't have to worry about it, which I think has also turned out to be accurate. It was one of my favorite presentations to put together, and it goes way back, so I hope five years from now I can say the same thing about today's presentation. But I've been giving presentations since 1984, hard to believe, I know.

My first one was about a secret language I discovered in Africa. I wandered off into the jungle, literally, and came back speaking something no one had ever documented. You can read about it in a book called The Anthropology of Language. It's a bit strange, I know, but it adds up to over 30 years of presentations. Some recent things I've been working on you might have seen. I did the Jeep of Death patch fail, the day before they released the patch-failure news, and it was all over the news. I revealed that the infrastructure for patching the systems is totally broken, so you could literally post a patch and anybody would grab it and put it into their Jeep.

That seems to me like a terrible situation to be in when you're telling people how dangerous their cars are: you actually make things worse. It didn't get a lot of press, but I think it was brought up by some lawyers who are suing Jeep. Then the TNU backdoor fail; you might have seen some of that. I think I was the first person to point out that this was a really stupid backdoor. It wasn't really a machine-learning fail. It got picked up, but I wasn't able to find anyone else who figured it out before I did. Then the Tesla Autopilot fail: I feel like I'm maybe the biggest proponent of Tesla being at fault for this disaster that killed someone.

I'll talk about that today. And recently I did the Guccifer 2.0 metadata reveal, where I said, hey, the links in the document are Russian and tell you a lot about what was going on. These are the books I've been working on: in 2012, Securing the Virtual Environment, and then last year I was supposed to have this next book out, but the topic just keeps expanding, so it's getting bigger and bigger, ironically.

All right. The Tay bot I'm not really going to talk about much, because it was a really dumb failure.

Right away I identified that it had a backdoor where you could just tell it something, which is literally, or ironically, dictation: if you dictate what you want it to say by giving it this command, it just repeats it. That's not learning. There's no learning iteration there. It literally takes the input and repurposes it as output. So as much as the news talked about it as this machine-learning disaster, it really isn't, because it was designed to be bypassed. Instead, I'd like to talk about real machine-learning iterations, or deep learning, where you have to learn very quickly over a long period of time in order to get to a level of expertise that's sort of amazing.
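
To make the dictation point concrete, here is a minimal sketch in Python of the kind of command path being described. This is invented for illustration, not Microsoft's actual code; the point is that the "repeat" branch copies input straight to output with no model update at all, so nothing is learned.

```python
# Minimal sketch of a Tay-style dictation backdoor (illustrative only).
# The "repeat after me" branch bypasses any learned state entirely:
# input is repurposed as output, with no learning iteration at all.

class ChatBot:
    def __init__(self):
        self.learned = {}  # toy stand-in for a trained model

    def train(self, prompt, reply):
        # Real learning changes model state over many iterations.
        self.learned[prompt] = reply

    def respond(self, message):
        prefix = "repeat after me: "
        if message.startswith(prefix):
            return message[len(prefix):]  # dictation, not learning
        return self.learned.get(message, "I don't know that yet.")

bot = ChatBot()
print(bot.respond("repeat after me: any toxic thing at all"))  # echoed verbatim
```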

This is the foiling sail. A lot of people don't realize sailing has come this far, but you're literally flying out of the water on a 16-foot, 160-pound device that requires instantaneous knowledge. You have to be able to push the knowledge into your subconscious so you can zip around at 20 miles an hour, with no brakes, in a fleet of 100 boats. There's an article now called "Safety Is Only an Excuse" about why you should limit the use of these vehicles; it's pretty interesting to read from my perspective. These are incredibly dangerous, not only because you can run into things very fast, which I have done.

I've crashed pretty much every vehicle I've owned, and I own a lot of vehicles: motorcycles, cars. I still have lots of vehicles; I still have boats. Not only can you really crash these things and cause great disaster, you can injure yourself, which isn't the case with a lot of vehicles these days, where you're effectively in a roll cage. When I crashed, I flew off the side at 15 or 20 miles an hour and landed on another boat, smacking into the side of it. So it's pretty exciting stuff. And also, when you think about flyingpenguin: look at the lower foils.

Those look like penguin wings, right? Penguins literally fly, and this is what penguin wings look like underwater. That's a whole other talk, but it's something a lot of people don't realize. So this is me sailing across the ocean, and one of the things I realized out there, and people told me this too, is that if you see something far off in the distance, you act on it immediately. You don't wait until the last minute, because you're going at a certain speed, you have no brakes, and you have no ability to maneuver the way people assume you can with big brakes and cars. So you change the way you think about things; you learn differently.

And honest to God, when you're out in the middle of the ocean and you see something like this, which is actually big even though it looks small from a distance, you change, right? That's a big life lesson, and it turns out to be true, because these ships move very fast and also don't have brakes. If you look at this and then look down, time passes very quickly; you look up, and they might be right on top of you, blowing their horn. It's happened to me many times, and it's a very scary event, because then you're thinking, what do I do now?

So that's a bit about me, in a way that maybe gives you some perspective on how I look at transportation. I do all sorts of transportation; I have lots of vehicles; I'm really into it. This is a more realistic depiction of what it's like on the ocean, though. This actually happened to me once in the Sea of Cortez, where I had been sailing for many days, weeks. You see this blinking red light, and it's coming toward you, and you have to react to it. But what do you do? It's coming toward you; which direction do you go? And you really only have these instruments to see by; it's pitch black. It was a pretty scary night.

Pretty interesting: to this day we're not sure exactly what happened, and that was just one of many days when we saw very strange things and had to react to them with very little information. That's the reality of sailing. We survived.

So, we believe machines will do things better than us, and in fact what we find in a lot of cases is that they do things quite a bit worse. But we still believe. There's someone out there right now saying, "Look at the articulation, it's so much better than the human arm," right? This is dangerous, because it gives the impression of passing when we're actually failing. You see this with the Google car.

When they came to Las Vegas, they had no weather, no unmarked lanes, no roundabouts, no crossings, nothing unpaved. They had no judgment zone, where it's unclear what to do because, say, a bunch of kids are walking at a speed slower than normal. The test just wasn't hard. So they give them a pass and a driver's license on a very easy, contrived example. Someone actually did a FOIA request on this, which was pretty awesome, and the record actually says: it's an automated car, so let's just let it pass anyway, even though it failed two or three times, which just doesn't usually happen for human drivers, right? And we see this also with the researchers saying, hey, we were consuming more washer fluid than gasoline just to stay on the road, just to keep the sensors clear, but we'll pass that as normal anyway.

So over and over again we have these examples. One of my favorites is when Uber got a letter that said, you're driving on the wrong side of the road, and they answered, well, what really happened here is that our drivers are creative. And this is a real problem, because GPS, for example, has a certain accuracy to it, and it was just revealed that Australia is literally shifting; the continents are moving. So GPS could actually say you're in the wrong lane: a 20-foot drift means it thinks you're on the right side of the road when you're on the wrong side of the road.
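
For a sense of scale, here is a back-of-the-envelope sketch. The numbers are my assumptions, roughly the figures commonly reported for Australia's GDA94 datum (coordinates frozen in 1994, a plate moving about 7 cm per year); they give about a meter and a half of offset, smaller than the 20 feet mentioned in the talk, but already lane-relevant before any GPS receiver error is added.

```python
# Back-of-the-envelope sketch of datum drift vs. lane width. The rate
# (~7 cm/year) and the 1994 datum year are assumptions, roughly the
# figures reported for Australia's GDA94, not measurements.

PLATE_VELOCITY_M_PER_YEAR = 0.07   # Australian plate, roughly northeast
DATUM_FIXED_YEAR = 1994            # GDA94: coordinates frozen to the plate
LANE_WIDTH_M = 3.5                 # typical highway lane

def datum_drift_m(year):
    """Offset between map coordinates and the physical ground."""
    return (year - DATUM_FIXED_YEAR) * PLATE_VELOCITY_M_PER_YEAR

drift = datum_drift_m(2016)
print(f"Drift by 2016: {drift:.2f} m "
      f"({drift / LANE_WIDTH_M:.0%} of a lane width)")
# ~1.5 m: nearly half a lane, before receiver error is even counted.
```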

So accuracy is very important to machines, but we're giving them a pass exactly where accuracy matters most; we're giving them a lower bar. That is a very dangerous combination. And what's happening is that everyone is being pushed into this world, because data collection has become so cheap and machine learning has become so easy that everyone's expected to play with it. Airbnb, for example, might say to everyone in the company: you should just go work with this stuff, because it's easy to use and it's available.

Computers, therefore, are increasingly bearing the burden of making our decisions; we think they're going to do it for us. That's what happened in Tesla's case. There was a world-class expert, a Navy SEAL, very experienced in all sorts of risk situations, and he decided to transfer the burden of decision to his car. On April 17th he said: I actually wasn't watching, and this horrible thing happened, but the car saved me; it was a mistake on the other driver's part. Not true. "I became aware of the danger when Tessy," his car, "alerted me with a takeover chime." What I saw when looking at that video (immediately remember the boat): here's a car coming into view, and you need to react now.

That car starts merging over. It's not allowed to, because of the double white line, but as it starts to merge, here's an exit sign. Now there are three pieces of information telling the human: this guy is coming my direction, and there's a very good likelihood he's going to come toward me. The car reacted at the very last second, braked too late, and ended up in a collision situation. That is a terrible, late decision. People tell me over and over that these machine-learning algorithms are faster. No: this was slower. It should have reacted here; it did not react until here. Of course, in the last second it can react faster than a human can, more safely, but that's not what we're talking about.

So I actually pointed this out, and I said: far earlier, we're detecting what Tesla is blind to. And this was April 17th. This is important to me because I effectively predicted this guy's death. This is the guy who died, saying these things, and I'm saying to him: no, this is not true, do not trust your machine. What happened next is that he said (and when I say he, I mean the CEO of the company) that you're steering to avoid a collision at this point; that car is so smart it's doing the steering for you. Really, what it's doing is cruise control and lane-keep assist, but what they're marketing it as is far more.

So was Joshua Brown a victim of innovation? I would say yes, the same way this kid was a victim of innovation: this robot was designed to move around a mall and figure out who should be protected, but instead it ran over a toddler, twice. It didn't see him, and seeing things is all it's supposed to do. And Tesla's telling you that Autopilot has arrived, and it's not true. They're telling you it relieves you of the tedious, potentially dangerous aspects of road travel. This is very misleading marketing. So Brown, in this situation, thinks he has an early victory: I can go to heaven now, Elon Musk has noticed me.

This is literally what he said right before he died, in a tweet: wow, look at me, I'm in seventh heaven. Other people weren't talking like this. They were saying: oh my god, my car is very dangerous; the lanes went away because of snow and my car freaked out; I followed a truck, and when it changed lanes my car tried to change lanes into the car next to me. These are all incidents that have been reported, but Tesla keeps saying there's nothing bad, it's really good, just keep going with it. So this is what actually happened: his car basically changed lanes here and then went another 900 feet, because it didn't slow down.

And if you really look at it, as I have, poring over it for hundreds of hours: Tesla said the truck was against a brightly lit sky, and I don't think that's effective as a theory. It's plausible if you look at this, because it does look like a bright sky over a road, right? As a human, you can see there could be some confusion there. But this is the actual situation: there's a discontinuity here. What's more important is that the trucker said the car changed lanes. Why would a car change lanes at the last second?

Any ideas?

Right: it actually saw the car. That is the opposite of saying it didn't see it. It actually saw the truck. What Tesla said happened is that it interpreted the truck as an overhead sign. I don't believe this, but I'll take it at face value. I think what it interpreted it as is a moving bridge: it saw these two side panels moving and thought the road was moving. Why is that important? Because the GPS is telling you, right now, that you're on a road that doesn't curve. The GPS sensor should tell you you're on a straight road, and it should also tell you there's a left-turn lane.

Those are two important factors that should not be discounted, because if you know you're in a left-turn area, there's a high risk of someone crossing the lane of a straight road, right? Humans can do this much better than cars. This is what the truck saw a thousand feet away: a car coming toward him. This is where you need to make that decision, just like the boat example I gave you. He was going 75, speeding, but he effectively had about 10 seconds. That's a long time to not do anything. So when Tesla finally did something, at the last second, it was too late.
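
The arithmetic here is worth making explicit. A rough sketch using the talk's figures, a car visible at about a thousand feet and traveling 75 mph; the exact geometry is an assumption, the order of magnitude is the point.

```python
# Rough sketch of the time budget in the Tesla crash, using the
# figures from the talk: car visible at ~1,000 ft, doing 75 mph.

MPH_TO_FPS = 5280 / 3600          # 1 mph = 1.4667 ft/s

distance_ft = 1000                # where the truck driver saw the car
speed_fps = 75 * MPH_TO_FPS       # 75 mph = 110 ft/s

time_to_impact_s = distance_ft / speed_fps
print(f"Time available: {time_to_impact_s:.1f} s")   # ~9 s

# A human needs roughly 1-2 s to perceive and react; ~9 s is a long
# time to do nothing. Reacting only in the final second leaves
# ~110 ft of travel, far less than braking distance from 75 mph.
```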

All right, so what is that? I call it the learning-expectations gap, and it kills. You think things are getting better because Tesla tells you, we're improving things all the time. You really don't know what's happening in your car. They may be rolling things back, making it worse; they may be moving things forward, making it better. You have no idea. Even a world-class Navy SEAL, an expert in demolitions and electronics (this guy was an amazing person who understood risk better than anybody), couldn't figure it out. So what you actually get is the actual features over here versus what you expect this thing to be doing over there, and I don't think anyone in the world can figure out that safety margin. I really don't think it's safe, and I think Tesla has some accountability here.

Here are some examples of what this looks like. SegNet (I don't know why they called it that; it's like Skynet) is a system they call remarkably good because it can take input and figure out how to segment it. This is for cars. It does learning, and the people who make it call it remarkably good. So in England, for example, here's a road, and this is what the output looks like: road, tree. It's pretty good, right? It does a fair job. Then I ran Botswana through it, and it says this is a building. Okay. So what are they doing? They're reinforcing things they know are good. They're not looking at exceptions and saying, well, it's good in very narrow circumstances, but otherwise we suck. Your car would be in absolute paralysis, looking at a large building here.
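
That pattern, scoring a model only on data that looks like its training data, is easy to demonstrate. Here is a toy sketch with scikit-learn and synthetic data; the features and domains are invented, and it is a simple classifier rather than SegNet or any segmentation network, but the evaluation failure is the same: the model looks "remarkably good" in its home domain and falls apart on a shifted one.

```python
# Toy sketch of distribution shift: a model evaluated only on data
# like its training data looks great; shifted data exposes it.
# All features and distributions here are invented for illustration.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def scene(n, offset):
    """Two fake pixel features; label 1 = 'road'. offset shifts the domain."""
    X = rng.normal(offset, 1.0, size=(n, 2))
    y = (X[:, 0] + X[:, 1] > 2 * offset).astype(int)
    return X, y

X_uk, y_uk = scene(2000, offset=0.0)    # "England": the training domain
X_bw, y_bw = scene(2000, offset=3.0)    # "Botswana": a shifted domain

model = LogisticRegression().fit(X_uk, y_uk)
print("in-domain accuracy:    ", model.score(X_uk, y_uk))  # near perfect
print("out-of-domain accuracy:", model.score(X_bw, y_bw))  # near chance
```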

All right. So it actually seems like machines are repeating the same mistakes we make, just a little faster. Here's the guy getting angry, and the machine says: oh, you want to get angry? I'll show you angry. Take that. What's happening is that we have this market driving the price down so fast, and making this stuff so accessible, that everybody's rushing in to grab it and do great things, create our utopian society, and we're making it as brain-dead as possible. This is Google talking about brain-dead. This guy's an expert, by the way, because he works at Google and writes speeches for the chairman on an iPad and a smartphone.

If you really look at history (I'm a historian by training; that's what my degree from the London School of Economics is in), history tells us that we'll turn earth into hell with automation. That's what the machine gun did. If you think about gunpowder as a positive thing, you can easily use it for negative things. If you think about data as a positive thing, you can easily turn it into negative things. So what if we use data, like gunpowder for machine guns, to create hell on earth? What's to stop that? It's not going to be machines, I can tell you that. It's going to be humans.

So how do we stop terrible things? Well, here's a great example of what's happening. In 2008: data-driven campaigns, data-driven journalism. And today, what do we have? Rage. Incoherent rage and fact-free delirium. It's actually possible that we go the wrong way. And in this market we're basically saying everyone gets a sword. This is literally a post about how every data scientist in every possible field is going to get a sword to go play with. So we're looking at these tools as weapons, and anyone can grab their weapon and pull the trigger. And here's an example of how to do facial analysis, a step-by-step that tells you how to get the data and do the work, and it ends with: how does this work? It doesn't matter.

It really doesn't matter to us, they say; they literally say, we have no idea, it doesn't matter, the fact is we just get it right. So what are you measuring: getting it right today, or getting it right a thousand iterations from now, when it really matters and a life is on the line? Here are some examples of where it might matter. Syrian war criminals: somebody can smuggle out a lot of images, we can do machine learning on them, and we can figure out who's actually being tortured, and by whom. There are real, useful, real-world cases of important stuff. I worked on a project like this on torture, so I can advocate for this quite effectively.

Here's what Facebook looks like if you think of it slightly differently: Facebook has non-lethal analysis of trajectory. They're targeting by prediction, and I always try to tell people what that means: I can drop a Hellfire on you if I can predict where you're going to be, effectively. There's a deep, deep area of research here that people should look into, for example, why somebody innocent keeps getting hit by Hellfire strikes versus somebody actually guilty. And this is actually a Kaggle competition. If you help with this, you may be helping with that; that is kind of my point.

Uber says they're not committing fraud when they do data analysis, but we have many instances where it does look like they're using it that way, like giving you higher surge pricing when your battery is lower. I think that's a pretty good indication. And in fact, when they raise their surge pricing, they say the algorithm did it, don't blame us, no one's in charge, which is complete BS. I see this a lot in San Francisco. I see the algorithms of the shipping companies saying block the lanes, cause safety hazards, because it's profitable for them, while they're saying, hey, we're not doing any fatality data analysis, we're just doing common-sense data analysis.

Amazon did this when they said they were delivering to the places that made the most sense, but it just ended up being very racist: the white zone is black, the blue zone is white, the traditional lines of racism reinforced by Amazon shipping using data analysis. Pokémon no-go zones, same thing. They crowdsourced the data (not really machine learning, but it easily could be), and the game doesn't go to black neighborhoods because the data they used didn't include those neighborhoods in the data set. So we end up with algorithms for policing the police; it becomes a sort of iterative auditing, the auditor's problem. We saw this also with machine bias in criminal data, probably one of the best examples: they say they're going to predict who's going to be a recidivist, who's going to keep committing crimes, and the system just happened to think all the black defendants would and the white defendants wouldn't.
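
The failure being described is easy to state as a metric: groups can see similar overall accuracy while innocent people in one group are flagged at roughly twice the rate. Here is a minimal sketch of the kind of audit that surfaces it; all numbers are synthetic, invented to mimic the published pattern, and none of this is the actual COMPAS data or model.

```python
# Minimal sketch of a disparate false-positive-rate audit.
# All data is synthetic; the rates are invented to mimic the
# published COMPAS pattern, not drawn from it.

import numpy as np

rng = np.random.default_rng(1)
n = 100_000
group = rng.integers(0, 2, n)    # two demographic groups
y_true = rng.integers(0, 2, n)   # 1 = actually reoffended

# Flag probabilities tuned so overall accuracy is similar per group,
# while the *types* of error differ sharply between groups.
p_flag = np.where(group == 0,
                  np.where(y_true == 1, 0.52, 0.23),   # group 0
                  np.where(y_true == 1, 0.72, 0.45))   # group 1
y_pred = (rng.random(n) < p_flag).astype(int)

for g in (0, 1):
    m = group == g
    acc = (y_pred[m] == y_true[m]).mean()
    # false positive rate: non-reoffenders flagged as high risk
    fpr = y_pred[m][y_true[m] == 0].mean()
    print(f"group {g}: accuracy {acc:.2f}, false-positive rate {fpr:.2f}")
# Accuracy ~0.64 for both groups, but group 1's innocent members are
# flagged roughly twice as often (0.45 vs 0.23).
```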

There are so many failures to go through. Here's an example where I was convinced it was a failure, but I might actually be wrong and it might be right: I ran all the wrestlers through, and it reported that the people on the bottom are very happy. Maybe they are having a good time, I don't know. But ultimately, when I was doing a lot of facial analysis, I found it very easy to trick the systems, extremely easy, because they use these patterns as features. I just had to put some shadows and other things in place, and they go crazy; they can't find me.
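
For anyone who wants to try the shadow trick, here is a minimal sketch using OpenCV's stock Haar cascade; "face.jpg" is a hypothetical local photo, and this is my reconstruction of the technique, not the speaker's actual test. Haar cascades key on coarse light-and-dark patterns (eye regions darker than cheeks, and so on), so a dark band in the right place is often enough to make the detector lose the face.

```python
# Minimal sketch: occluding the feature patterns a Haar cascade
# relies on. 'face.jpg' is a hypothetical local photo; the cascade
# file ships with the opencv-python package.

import cv2

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

img = cv2.imread("face.jpg")
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
print("faces before:", len(cascade.detectMultiScale(gray, 1.1, 5)))

# Occlude a horizontal band across the upper part of the image,
# a crude stand-in for shadows or dazzle-style makeup over the eyes.
h = gray.shape[0]
gray[h // 4 : h // 2, :] = 0
print("faces after: ", len(cascade.detectMultiScale(gray, 1.1, 5)))
```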

This is an extreme example, but it was one of the first I found where they couldn't find me.

"Your outrageous speaker request, sir." Oh, thank you. Well, it's hot. That's right on time; all right, ten minutes left. We have a lot of content, so let's zip on.

Anyone want to guess what "unprofessional hair" looks like? This is real; I did this search. Some black women pointed it out to me, and I tried it myself. So: "professional hair." Do you see a problem already?

Do you see diversity in the sample set? Right away, somebody who has learned by traveling a lot, or knows a lot about the world, would look at this and go: whoa, we have a huge problem. This is Google. Girls, women, even men go and look up "professional hair," and they get that result. What do they get if they look up "unprofessional hair"? Black women. No kidding. This is a disaster, and it's supposed to be a smart learning system. And then there's the actual example that everyone probably knows, where the failover from the learning system was to animals. When it failed over for white people, it failed to dogs, apparently, and they said, oh, that looks terrible, so they fixed it for whites.

Then they didn't think about what happens when black people's photos fail over to animals: they failed over to gorillas. And it was because of shading errors. As soon as there was dark shading, the features were lost, and once the features were lost, it just failed over to this other class. It obviously upset people, and then here's the discussion among the engineers about how to fix it: "oh yeah, really interesting problems in image recognition here." No: you're offending people; you're causing great harm.

All right, so, some obvious reasons for this failing. Even in the '80s, people were saying AI can't solve all our problems and we need to be realistic about it.

Machines can solve for very narrow data sets; our learning doesn't iterate fast enough; we need to be realistic. Don't expect an easy help button on your computer that just figures out what you meant when you typed gibberish and turns it into Shakespeare. Trump actually does this a lot: he says, no, that's not what I meant, I meant all these other things, just go Google it, Google will figure it out for you. It's nonsense. People need to think harder. Google has an ML ethics guide where they talk about all the things they should avoid, and it sounds good.

But as a trained ethicist, an expert in history and philosophy, I looked at it and saw that what they're doing is saying: avoid cost to us, avoid cost to us, avoid cost to us. The "do no evil" at Google really means: make sure we avoid costs that could force us to do something else. Avoid negatives, as opposed to thinking more generally, as ethics should, in categorical imperatives: privacy, fairness, security, important things we should achieve. Their goals are down here, when we should be putting goals up here: have you achieved fairness, or have you merely achieved avoiding an impact on yourself?

Tesla does this, effectively, by saying: we can avoid people blaming us by pointing out that 130 million miles have been traveled and we haven't seen any problems. Well, that's not what we're talking about. We're talking about the fact that we found a problem, and about what's going to happen in the future. Have we achieved something worth noting as a success, or have we achieved something that's still a failure? They're not qualifying it with Autopilot oversight, for example, so we don't know how many Browns are out there. Maybe he was the only guy who really pushed the edge. He was an expert in risk, so he felt safe pushing all the way to where there's no margin of safety.

I wouldn't have felt safe, from my sailing experience. He didn't sail; maybe that's the big difference between us. But I think his death was a tragedy that did not have to happen. In fact, I tried to warn him, on April 17th.

All right, so there's data, and then there's data. If we look at what we're actually trying to figure out: for aviation failures, I found that 100 percent of them have been investigated. With cars, we've investigated something like 0.0007 percent of accidents. So I don't want Tesla telling me that we understand the problem and that there haven't been any accidents.

This is not a safety zone; we're at 1960s levels of car failure if you look at the chart. And this is what it looked like after the investigation. They took nine days to report it, and they left all this debris in the front yard. That doesn't happen with airplane crashes. You don't leave all the debris lying around and take your time telling people there's been a major accident that could affect everyone's lives. Google does the same thing. They say, hey, we don't know what the laws were, it's not a big deal, we haven't had any accidents. And I did the research on Google's errors.

I found there are 40 states that have too-slow driving laws, but Google says, I bet humans don't get pulled over for driving too slowly. Actually, they get pulled over all the time, and more laws are being passed all the time. So there's just this basic ignorance of what our goals are. For example, we have levels at the NHTSA that we should be talking about, not Tesla's marketing of collision avoidance: actual levels that cars should be measured against. And even if you don't accept the government regulating it, there's an industry version, the SAE's six levels, zero through five, that tells you what level you should be at. Tesla's down here, but they're talking about up there.

There's a big difference: false expectations. In reality, cars are struggling down at level one. I mean, they're struggling to deal with freeways; that's essentially what Brown died on, and a left turn across the highway was the failure. So don't put them on boulevards and in residential areas when they can't even figure out how to save lives at the low levels. And we see false victory even here. Google said the other day, in a presentation: wow, driverless 2, humans 0, because a bicycle ran out in front of the car and it stopped. Well, actually, that's false. The bicyclist, as a human, anticipated the car coming toward them and rode around it, and Google takes the victory as their own because they hit the brakes. That's not fair.

Again, as a sailor, you see this all the time. Nobody has brakes; everybody's doing this all the time, and you don't say, wow, I narrowly missed that person. When a world champion sails in front of somebody, you don't take the victory; you look at them and say, that guy just sailed in front of me at full speed and didn't hit me, and that's amazing. Which has happened to me: I thought I was going to win, and then all these people sailed in front of me, and they all got the bell, and I realized I'd lost. I totally lost that race in the last five seconds; 20 people went right in front of me.

All right, so how do we fix this? Don't do things like this, where you say that in the future everyone's going to be saved. What did you do in the past? Well, no one died that way. Oh no, of course people died that way. Oh, well, then we should immediately start accepting solutions that won't allow humans to die. Really, we get into an ethical dilemma where everything causes death. If you think this way, ultimately you should stop driving now, because cars kill: start taking the bus, done, don't try to improve cars, just get out of the car business. But that's not what people will accept. They say, no, no, I still want a car. Okay, then let's talk about the trade-offs.

Instead, think about what we've been talking about for hundreds of years. This is John Locke, for example, who in 1693 said to be reflective in your process of thinking: a big advance from 1637's "I think, therefore I am." Don't just take answers as law. He was talking about God, but today people say, I read it on Google, so it must be true. A reflective process really means accepting feedback, admitting possible inadequacies, challenging assumptions, expressing your fears, saying, I think Tesla is going to kill me: basically, being the grain of sand that makes the machine-learning oyster produce a pearl.

So, taking control: don't allow stuff like this, where people fall asleep in their Tesla and everyone goes, wow, that is a harbinger of things to come, we can all sleep in our cars, amazing. No. Take responsibility. This person has made an unauthorized transfer of responsibility and is avoiding augmentation in order to get to "authormation," my word for this sort of automation that has been authorized. It's basically being a no-brainer, and that's not a good place to be. We don't want to be no-brainers. Instead (I created another word; I couldn't resist), we should hold the owner of an algorithm, the "algowner," accountable.

And really, we should try to improve the world. We should set our measures high enough that we're actually trying to make machine learning successful, and not accept these disasters, even the small ones, as exceptions or outliers. We should think of them as harbingers of what could happen, and we should create practices we can promote to preserve security. So really, what I'm looking for is a trusted, reflective learning model. If I were going to give an example of what that looks like: there's a US doctor who just did some amazing work, and one of the things that struck me when I was reading about his success in eliminating problems was that he educated humans as owners and then assigned them sensors.

Which, if you think about it: if you're an owner of a Tesla, you're being given all these sensors. What if you push that ownership down to those individuals? Brown, I get that he loved his car, but what if he had been filing all the flaws and thinking about it in those terms, instead of trying to convince people he could push success higher and higher, when he was way out ahead in a beta program whose risks they didn't even understand? This has been very successful for this doctor, and the thing he talks about specifically is that he got out of the "I'm from America" role, which a lot of people fall into when they move to countries where they're trying to solve health issues.

You have malaria? Well, I'm from America, and I'm here to save you. It doesn't work, and there are a lot of philosophical and anthropological reasons why. But the bottom line is that you don't want to tell people what to do, tell them they're ignorant, and tell them they should just wait for a utopian society to arrive. What you really want to do is give them responsibility, make them part owners. The same is true in security: when you talk about patching, when you talk about all these risks and social engineering, it's about pushing responsibility to people, so that they become intelligent actors and have control of their own destiny.

And adding sensors to this doesn't change anything; if anything, it gives them more to work with. If you give them a machine gun, you have automation, but it doesn't change the responsibility for how that machine gun is used. Sure, they can kill more people, but they're still responsible for who they kill. That's how I want to think about it, and note that we don't really hold the manufacturer of the machine gun accountable. So this is why these worlds collide. I feel like we're really talking about ethics, cognition, responsibility. It's not math.

Math is involved, but a lot of people say, just wait for engineering to solve the world's problems, it's just another engineering problem. We're getting out of engineering at this point and into these deep issues of philosophy and cognition. So that's basically two days' worth of lessons in 30 minutes, and hopefully that satisfies the keynote requirement. Thank you very much.

A question? You want to use the microphone?

Is this working? Now? There we go. So you're actually saying don't trust full automation, but what level of machine assistance is perfectly acceptable in your head?

I would actually say trust full automation, but don't trust it until you have some proof that full automation is safe. I'll give you a good sailing example. We have autopilots; we set them all the time and go to bed, and the autopilot takes over the entire boat, everything. But here's the problem that happened to some friends of mine (I happened to be giving a presentation, so I missed that particular trip, luckily): they set the autopilot and went to bed, there was a spit of land sticking out, and the boat ran right into the land. The autopilot did not detect the land.

Same thing that happened to Brown: you set it when you think all the conditions are satisfied, and if it can't account for all the possible conditions, while as a human you see a much bigger space, then control shouldn't be handed over. You don't authorize it when you don't think it can handle the situation coming up. And I'm not making this up; Stanley Kubrick said this in 2001: A Space Odyssey. You give complete control to HAL, and at some point HAL will try to kill you, and you need to have another plan, which for him was disabling HAL.

Yeah. Thank you. More questions?

I think you're done with your speech now, Davi; thanks for giving the talk. You mentioned Google's search biases. Did you factor in the idea of the search bubble, where Google can basically reflect your own biases back to you?

Oh yeah, for sure, there is a lot of that: you're affecting your own situation. In fact, there are other examples, like women being shown ads for lower-paying jobs, and the response is, well, that's a reflection of you being a woman, right? So it's not just that your biases are reflected back to you; it's the inability to account for historical biases as error and to try to change that.

So the thinking becomes, I'm reflecting back what I think you would like, as opposed to reflecting back what we should honestly achieve or strive toward.

So do you think Google should basically be setting where society should go, at that point?

Yeah, that's right, but I don't think it should be Google deciding where we should be going. It goes back to humans thinking about their responsibility individually. So I'm not saying Google is responsible; I guess you could say the corporation is a person, and you get into that whole thing, but as an entity we're all responsible. Google should look at that result, take the feedback, and go: that doesn't look like a fair result; maybe it's a function of us reading something from them and misinterpreting it.

Yeah, I mean, I looked at "professional hair" while you were doing it, and I got basically 50-50. They changed my results.

They've changed it since that result, yes. But see, this is the key: they changed it because of the pushback. They respond when they feel like regulation is coming, but they're anti-regulation. So that's what you have to figure out: how do you take feedback from somebody as authoritative if you reject it from a regulator because you're a libertarian and you don't believe in regulation?

What is the threshold? This is where unions come into play. The United States became a union of states; not many people realize this, but it was a union exercise: the colonies became a union to fight the king. The history of America is unionizing. So what number of people have to get together and say to Uber or Google or UPS, this isn't fair, before they actually accept it as feedback? Because that power distribution isn't fair. You could very easily end up with Google saying screw you, or Uber saying it's good for us, good for business, sucks for you. That's why it's difficult. Thank you.

Sorry, time for the next talk. Okay.