
KN - Konrad Koerding

BSides Las Vegas · 2021 · 50:10 · Published 2021-08
Speaker: Konrad Koerding
Style: Keynote
About this talk
KN - Konrad Koerding Keynote BSidesLV 2021 - Camp Stay At Home - July 31 Video Tags: bslv2021-etc-keynote-konrad_koerding
Transcript [en]

Hello, happy campers! I'm Damon and this is David. You got it backwards again, dude: I'm David and this is Damon. Very important, yes, so I'm glad we got that sorted out. We are very excited to have all of you with us this summer. Don't worry, even if your friends and families haven't exactly managed to get rid of you this time, hopefully you've all managed to find your cabins and get your computers set up by now. If not, don't worry, there'll be plenty of time after the dock hockey and dodgeball games this afternoon, assuming it stops raining. You know, yes indeed, you never know in Vegas though. In the meantime, Damon, tell them about the talks. There are

talks at BSides? I thought it was all crafts and, like, contests and stuff. Yeah, unlike DEF CON there are actually talks at BSides. All right, right, right, that's the whole point. Yeah, okay, I forgot. Uh, talks. So yeah, as many cool ones as we can manage to squeeze into the space, and it was a challenging year on that front. We always thought it was hard to fit everything in at the Tuscany, but let me tell you, this year we basically have two rooms. Well, streams. Fortunately it's a non-Euclidean space, so I think we'll be all right. Absolutely. First though, the bad news. Um, when we were reimagining BSides for the virtual camp, we ended up deciding we needed to lose a

few of our favorite features of the physical conference. So this year you won't be seeing a lot of really popular things like our trainings, or sessions like Ask the EFF or Ask a Fed, or our entire Underground track. However, don't be alarmed, gang, we're not removing them from the conference permanently; we just made some hard choices this year in keeping with the virtual format. Right. And that said, you know, while we're certainly delivering a pared-down, space-constrained BSides this year, we're still keeping true to our big-tent philosophy when it comes to what happens in those streams. Right, you'll find talks across many disciplines and from all of our other tracks. That is very true, and I can't think of a

better example of that kind of cross-disciplinary work than our first keynote speaker this year, Dr. Konrad Koerding. Konrad is a professor with the University of Pennsylvania departments of bioengineering and neuroscience, and he's also a fellow at the Canadian Institute for Advanced Research. Impressive. Dr. Koerding's research tries to understand how the world, and particularly how the brain, works. His current focus is on causality in data science applications: how do we know how things work if we can't randomize? But he's also very excited about understanding how the brain does credit assignment. Right, so let's all give our full attention to... oh, okay, okay, inside voices please, everyone, inside voices. Okay, okay, now, all right, okay, now take it down to a one.

Okay, okay, all right, very good campers, very good. Okay, and now I need a zero, right? Zero. All right, very good, very good. Okay, I promise we'll all be able to get all the wiggles out when Counselor Nick from the speaker operations bunk walks us through arts and crafts during the break. Until then, let's give our full attention to Dr. Koerding and his presentation on brain hacking. Wonderful. Today I'll talk about brain hacking, but not in the boring self-help kind of a way; hopefully in a slightly more interesting way. So who am I? I'm a professor at Penn. I'm between neuroscience and bioengineering; it's one of these weird positions that are just there to help

different fields talk with one another, and that's why I'm so excited to be here today. I teach deep learning there, I teach causality, and of course I'm involved in doing a lot of research in neuroscience. So I worked on theory in that area, I worked on data analysis, I worked a lot on conceptual integration, doing some physics and a lot of machine learning, and so on and so forth. I co-founded the Neuromatch conferences and then Neuromatch Academy, which is a small-group experience that we teach every year, to 3,500 students this year, where people learn about neuroscience and deep learning. I'm also a failed experimental neuroscientist; I quickly learned that I'm not so good at

running experiments, and that's why I do theory and data analysis. Otherwise I'm interested in skiing, hiking, and salsa, and you can find me on Twitter, mostly, as @KordingLab. So why are we talking about brains? Ultimately security is about people, which you of course know extremely well. Like this: we have the world as a network of people and computers and robots and cars and things like that. But what are the properties of the people that we can talk about? And this is of course something that we all hear very much about: humans are a major weakness, much worse than any technical system, arguably. So if you ask a person, give me your secrets,

they'll be like, no way. If you say, well, what about you also get a free screensaver? That's like, yeah, sure, take whatever you want. Um, but this kind of social engineering is of course a standard way of thinking, and people have worried a lot about how we can make systems more secure, and ultimately it boils down to having people that are not willing to take those bargains. So let's talk about brains. I will argue that brains are themselves a matter for security, that they can be the target for hacking, and that we might be moving into such a domain relatively quickly. So today I will talk about cryptography and brains. Cryptography is defined as

the art of writing or solving codes. Now let's see how that applies to brains. You can say we want to solve the code of the brain: when I wear my hat as a neuroscientist wanting to understand how brains work, I want to solve the code of the brain. What is it? How does the brain communicate? How is meaning stored in the brain? And there are of course two approaches that we can think of. The first one is with known plaintext, if I know what the brain wants to do at a given point of time. And the alternative is with ciphertext only, if I don't know what the brain wants to do at a given point of time.
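The two settings can be made concrete with a toy substitution cipher (an illustrative stand-in, not a model of actual neural coding). In the known-plaintext setting, observing matched plaintext/ciphertext pairs lets you read the key off directly, much like recording from neurons while telling a patient exactly what to imagine:

```python
import string

# Toy substitution cipher standing in for "the brain's code":
# each plaintext letter maps to a fixed ciphertext letter.
key = dict(zip(string.ascii_lowercase, "qwertyuiopasdfghjklzxcvbnm"))

def encrypt(text, key):
    return "".join(key.get(c, c) for c in text)  # spaces pass through

# Known-plaintext setting: we observe (plaintext, ciphertext) pairs --
# like telling a patient "imagine writing an A" and watching the neurons --
# and can read the mapping off directly.
plain = "the quick brown fox jumps over the lazy dog"  # covers all 26 letters
cipher = encrypt(plain, key)
recovered = {p: c for p, c in zip(plain, cipher) if p != " "}

assert recovered == key  # the full key falls out of one labeled sample
```

The ciphertext-only setting drops the paired observations entirely and has to lean on the statistics of the message distribution instead, which is where the talk goes next.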

And ultimately I also want to, though I will not be talking about that today, do the opposite: if I understand the code that's in the brain, I can also write into the brain, and ask how I should write into the brain so that I maybe start really liking BMWs, say, or something like that. But first, what is the code used by the brain? This is something that I started working on early in my PhD; in fact, in a way it's the thing that drove me into neuroscience. How does the brain communicate? What do we mean when we say there's a code? So let's see what happens in the brain while we think. Well, first,

like for every system, let's have the specs of the brain. How big is the brain? How is it built? Well, the brain consists of roughly 10 to the 11 neurons, and between them there are connections: we have roughly 10 to the 15 weights, those are 10 to the 15 connections between pairs of neurons, and these connections, these weights, are drawn out of 10 to the 22 potential connections. Each of the 10 to the 11 neurons could have a connection with each of the others, but not all of them do. Now if we look at individual neurons, no one quite knows what their computational power is, but arguably every single neuron is

equivalent, in a way, to a multi-layer neural network. Now, every neuron spikes only roughly once per second, so if the spikes are the code that they use, then they're extremely slow. But here's a cool aspect of the design of the brain, which is that compute is exactly where the memory resides. Arguably the compute in the brain happens at the synapses, the connections between neurons: this is where the memory sits, but this is also where the compute happens. So if we think about this from a total perspective, it has amazing compute: it has 10 to the 11 little neural networks, if you want, with 10 to the

22 potential weights. So it's a massive system, but it has a ridiculously slow clock speed. Now if we compare that to an Apple M1, for example, which has 10 to the 10 perfectly boring transistors that do only very simple things, 10 to the 11 bits of RAM, 10 to the 2 compute units, 10 to the 9 clock speed, we can see that the brain still has a rather impressive amount of compute to offer relative to current computers. But still, you can see they're in a similar situation: 10 to the 10 transistors, 10 to the 9 clock speed, that is certainly in the same range of where neurons might lie. Cool.
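Those orders of magnitude can be put side by side in a quick back-of-envelope sketch. All the numbers are the rough ones quoted above, and the "events per second" metric is only an illustrative way to compare a massively parallel slow system with a fast serial one, not a real benchmark:

```python
# Rough specs quoted in the talk (orders of magnitude only).
brain_neurons   = 1e11  # neurons
brain_synapses  = 1e15  # actual connections (weights)
brain_potential = 1e22  # potential pairwise connections
spike_rate_hz   = 1.0   # each neuron spikes roughly once per second

m1_transistors = 1e10   # Apple M1, "perfectly boring" transistors
m1_clock_hz    = 1e9    # roughly GHz-scale clock

# Crude comparison: synaptic events per second vs transistor switching
# opportunities per second -- huge parallelism with a slow clock on one
# side, modest parallelism with a fast clock on the other.
brain_events_per_s = brain_synapses * spike_rate_hz   # 1e15
chip_events_per_s  = m1_transistors * m1_clock_hz     # 1e19 upper bound

print(f"{brain_events_per_s:.0e} vs {chip_events_per_s:.0e}")
```

The exact ratio is not the point; the shape of the trade-off is: the brain wins on parallelism and density, the chip wins on clock speed.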

Now let me briefly mention a quick insight, where you can ask, well, what is a neuron? On the left-hand side you see what's called the dendritic tree of a neuron. You have one nerve cell, roughly where this pink dot is, and what's plotted is its dendrites. The dendrites are the structures on every neuron that integrate information coming from other neurons, and in this work with Ilenna Jones we asked, well, what is it that they could do? We compared architectures: LDA, linear discriminant analysis, a linear technique; and fully connected neural networks, the state-of-the-art methods. And this is for

example an MNIST case, which is character recognition. A single neuron can tell us if the input is a certain digit with an accuracy that is at least roughly in the same range as little neural networks. So an individual neuron might have remarkable computational powers. Now let's talk about how they communicate. The communication of a neuron to other neurons happens through spikes. On average, as I said before, neurons will spike one time per second, and so if we listen to them, and in fact that's how a lot of neuroscientists in the early days would study neurons, you would hear click, click, click, click, and

these are the spikes that come out of the neuron. Now, neurons are connected by often very long wires called axons, and across these axons it's only these digital pulses that really get through. That doesn't mean that we know what the code is. What could the code be? It could be when every spike comes relative to maybe some global clock; it could be when one spike comes relative to the next spikes. Also, say we have multiple neurons: most of the time multiple neurons will spike at different times, but sometimes they will spike at roughly the same time. For example, if you look at the fourth spike here from the left, you see I drew them aligned with one

another. Now you will ask me, well, what is the code that the brain uses? And I have to let you down on that: we don't actually know at this point of time. So what are the going hypotheses? One of them is what they call rate codes, where they say, well, there's some relevant timescale, maybe 100 milliseconds, and what matters is how many spikes each of the neurons produces within a short period of time; that's called a rate code. There are other people who believe that the brain uses a synchrony code, where it doesn't matter when each neuron spikes; it only matters if multiple neurons spike at exactly the same time. There are other people who said what

matters is the timing relative to a clock signal, where you could say, no, what's communicated here is, if you want, that long vector of when each spike happens relative to some global clock. So there are lots of hypotheses here. Does that mean we can't do anything about brains, given that we don't positively know the code? Now, what do you do when you're somewhat clueless about a system? Well, arguably you first try linear things, and counting the number of spikes we have in short intervals and applying linear techniques seems like the first step we can take here, and this is the technique that most neuroscientists use. So all these

other ideas about codes exist in neuroscience, but they have relatively small communities. So what kind of access do we have to the brain? The human brain has 10 to the 11 neurons, um, and we cannot get the signals from all of them; that would make our job incredibly simple. So here's the result of a study we ran with Ian Stevenson. What he did is he went through the entire literature and asked, for the various published papers, when they were published, this is what we have on the x-axis here, and how many neurons people would simultaneously record from. And what we can see is people started recording from roughly two neurons at a

time in the late 1950s, and right now people record from a couple of thousand neurons at a time. Those are of course the top labs; there are still lots of labs today that only record from a single neuron at a time. So what we can see here is exponential growth. It's like Moore's law, so everything should happen very quickly? Well, no, it turns out that the doubling time here is relatively slow; the long-term doubling rate has been roughly every six years. But if we look closely at this graph, it looks like things are speeding up a little bit. And what are the reasons for that? Well, at least one of the reasons is

that all of a sudden these recording techniques are very much hyped. Here we for example see Elon Musk; he runs the company Neuralink, which aims at getting lots of electrodes into brains, with the idea that if we have more neurons in there, with a broader bus, we get more information out. At least one side effect of those developments is that they advance the technology we use in neuroscience, and there's certainly a lot of drive; lots of companies, and this is just one of them, are working towards allowing us to record from more neurons and to record better from neurons. In any case, this engagement of industry leads to professionalization, and all of a sudden I think the trend will

accelerate, and we should expect that we get considerably more information out of brains within, say, the next 10 to 20 years. In fact, DARPA had an aim of recording from a million neurons within a relatively small number of years. Now let's talk about known-plaintext attacks on brains. There's one wonderful thing about brains: in lots of cases, the brains are very willing to work with us. For example, you have patients who cannot move, and these people would like to still be able to type, or be able to do things. Or at least, they can't move the way we can move: they had a spinal cord injury and their movement is

very, very limited. So in that domain there have been, for a long time, people doing research putting electrodes into brains. Let me give you an idea here. You take a little grid, 10 by 10 electrodes, all silicon electrodes; they get inserted into the brain, and in fact they come with this special device, it's a bit like a hammer that goes boom, and then it's in the brain. And so what they have is a hundred channels, and on those channels they roughly see one neuron per channel. And then what we can do is try and decode what it is that the person is trying to do. So here's a new study from this year by

the Shenoy group that worked like this. They said, okay, now imagine writing an A, and the patient imagines handwriting an A, and we see what's going on in their brain, and you do that many times with them, and then you're like, imagine writing a B, and so on and so forth. And then we can ask, well, can we use algorithms to distinguish between when there's an A and when there's a B? Now, that's a known-plaintext attack. And in fact we can, and this new result just showed that they can do 90 characters per minute, which is really spectacular given what the field had been doing before. It's not as fast as we

type on our keyboards, but it's pretty impressive, and it uses deep learning approaches to basically decode the words that they write. And this is wonderful, it really is promise for patients. I will later argue that it's also not so wonderful, because it introduces a new vulnerability to brains. There's other research: there was a paper that just came out, I believe last week, by Chang and his team, in the New England Journal of Medicine, where patients were just imagining talking, and they could decode the words that were there. And I should also mention my lab has been deeply involved in that area, using deep-learning-based approaches towards decoding things from human

brains; usually we work, of course, on animal brains. A similar approach here, just to give you another case: this is from the Schwartz lab, where they use these brain signals to steer a robot, and patients can then self-feed with that, by just deciding how they want to move that hand. Now, I told you that over time the number of simultaneously recorded neurons goes up, and you can directly see that in here, if you want: if you have a prosthetic device that doesn't have enough data rate to it, what's going to happen? It's either going to be slow or it's going to be noisy. So what really

happens, and the early devices were not very good in that sense, is that through that progress they progressively get more reliable, less noisy, and there's been a lot of wonderful tech effort that goes into making these devices work better. All of that implicitly forces us to break the codes that are in the brain. There are a lot of mysteries; we don't quite yet know the best way of decoding from brains. And I should mention one thing here: using modern techniques definitely helps a lot. Much of the field is still using linear techniques, which you can see on the left here, Wiener filters and then Wiener cascades, Kalman filters and so on and so

forth. If we go to modern deep learning systems, like LSTMs, we get considerably less noise. But of course much of the work is also done by the prosthetic devices getting better over time. Now, what's the long-term goal here? We take data from motor cortex, we call that M1, we decode it, and that tells us how the patient wants to move their hands. And ultimately, once we can properly break the codes, we hope to be able to give information back, and that is somewhat working, interestingly: you can take what the prosthetic device feels and put it right into primary sensory

cortex, which makes it feel a little bit like the extra hand gets touched. Now let's zoom out a little. What is happening? That was what I was just telling you about: we have super-partial data. We were celebrating that we now record from more than a thousand neurons, but we're recording from more than a thousand neurons out of a 10 to the 11 dimensional system. That's an interesting problem: what can we say about the system if our access is so limited and we have somewhat noisy decoding? And of course, in this case it's a non-adversarial code design: the brain has not evolved to produce codes that are really complicated for us, so we may hope

that the codes the brain uses are the kinds of codes that we can relatively easily decode. At the same time, it's a decisively non-human design: when we break other systems, we're often using weaknesses of the human designers, and in this case evolution was the designer, which might make things like decoding much more complicated. So, um, let's think a little about the known-plaintext approach that we have here. How does training in these cases work? I tell a patient, imagine moving your hand forward, or I tell them, imagine drawing an A, imagine drawing a B, imagine drawing a C, and that's of course incredibly repetitive. So in a way, people would want to be able to

decode without known plaintext. They would like to use the prosthetic device as they go about their lives, and would like the system to still be able to decode. Now, why is this complicated? In large part it's complicated because the brain always changes. If you have these electrodes in the brain and you wait a little, maybe 20 minutes or something like that, then the brain will have reorganized a tiny bit. If you wait two months, then the brain might have reorganized considerably. So you want systems that, in a way, can recalibrate themselves. So getting rid of the need for known plaintext is one important goal

in that area. Now, a lot of you will of course know this, but let's briefly review why a ciphertext-only attack is even possible. Alice says "hello Bob"; it gets translated into a code that we don't currently know; the code gets transmitted over some line; we are listening in on that line, or if we are neuroscientists, we have our electrodes in the brain; and ultimately the code can be decoded, namely, maybe, in the parts of the brain that actually use the muscles to say "hello Bob". So there's the encoding and the decoding component, and there's of course a key, which is, if you want, the descriptor of what the nature of the code is here.

So with the plaintext m, we use a key and an encryption function to convert it into ciphertext, and we then want to invert this function, which gives us the original message. Now, why is that possible? In a way, we cannot know what Alice wants to say, but we do know that Alice speaks English, and therefore, you can say, we know a lot about the nature of English. And here's a nice way of looking at that, and this is somewhat dated, but we can say, well, what if we have the zero-gram model, where we just know that a character is a character? Well, in that case we need 4.7 bits to

encode every character, namely log 2 of 27, which is the number of characters that we have. Now you can use the one-gram: the E is more common than maybe any other character. But we can also use the two-gram, where you can say, well, the E is more frequent after an R than after a Q, and so on and so forth, and there was the old IBM-style approach with trigrams. There's also the Shannon game, which is a fun thing where you basically just ask people to guess what the next character would be, and in the English language you don't need many guesses to guess what the next character is.
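The 4.7-bit figure and the gain from using letter statistics can be checked directly. The short sample text below is a made-up stand-in for a real English corpus, used only to show the shape of the calculation:

```python
import math
from collections import Counter

# Zero-order model: 26 letters plus space, all equally likely.
h0 = math.log2(27)  # ~4.75 bits per character

# One-gram model: entropy of the empirical character frequencies.
# (A real estimate would use a large corpus; this sample just shows the idea.)
sample = ("the quick brown fox jumps over the lazy dog while "
          "the other dog watches the fox and waits for the ball")
counts = Counter(sample)
total = sum(counts.values())
h1 = -sum(n / total * math.log2(n / total) for n in counts.values())

# h1 < h0: English characters are far from uniform, and that redundancy
# is exactly what a ciphertext-only attack gets to exploit.
print(round(h0, 2), round(h1, 2))
```

Each stronger model (one-gram, two-gram, trigram, the Shannon game) squeezes out more redundancy, which is another way of saying it puts tighter statistical constraints on what valid decodings can look like.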

So what do we have? We effectively have samples from the set of messages, which is just the English language, and then we have a decoder that takes k as a key and gives us samples of what the decoding would be for a hypothesized k. And now we can ask, what's the generic solution for that? We want to maximize over k the negative KL divergence from q, the distribution of decoded text, which of course depends on k, to p, the distribution of English text in this case: basically, give me the key for which the decoding gets to be most probable. And this is of course a

procedure that I hope we will, midterm, be able to use on the brain, and we'll talk a lot more about that in a little bit. So here we get to a decoding project by my former postdoc Eva Dyer. We decoded movement, because back then data on speech was hard to come by and there wasn't enough of it. What we have is a typical distribution of movement. In practice we used the kinds of experiments they do on monkeys, where the monkey always moves like that, but here's the big-picture idea: we have a typical distribution of movement. Take my life: in my life I

do a lot of moving coffee cups to my face; in fact that's my favorite thing, in a way, I'm somewhat obsessive about coffee. Sometimes I have to run, and then of course much of my life consists of just moving my hands over a keyboard. So that's arguably the typical distribution of movements. And in fact, early in my postdoc I measured the distribution of movement during everyday life: where do the elbows spend their time, that's what you see in blue and pink, and on the other side, where do the hands spend their time, which is what you see in red and green. It was actually kind of funny: at that point of time

you needed to carry about 10 kilograms of equipment on your back to be able to do those measurements; today you could probably do it with a wristwatch and certainly minimal effort. But that's the thankful development of technology. Now, what's the idea? If you have a wrong decoder, the distribution that it produces is not going to be the right distribution, and what we want to do is change our decoder so that its output looks like the everyday movements. It's the same idea as we have with text, where you can say, for a ciphertext-only attack on text, we want to make it so that the

probability distribution approximates English; here we want the probability distribution of movement to approximate that of everyday life. So how does it work? We take the neural data; on the x-axis here we have time, on the y-axis we have the different neurons. Movements happen in relatively low-dimensional spaces, so the first step is dimensionality reduction, where we project this high-dimensional neural activity into a much lower-dimensional space. We also have, in a low-dimensional space, of course, the kinematics. And then we use distributional alignment: we basically search in some space to find the configuration where these two

probability distributions best align with one another, and with that we can then do decoding without any training data: we don't need any known-plaintext data. So what we have here: on the x-axis you have time, this is 10 seconds of a monkey moving; on the y-axis you have the velocity along the x-dimension and along the y-dimension. In black you see the ground truth of the actual movement; in red you see what we get if we have supervised data, so the full plaintext available; and in blue you see what we get if we use this cryptography-inspired approach to decode from the brains of the monkeys.
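A toy version of this pipeline can be sketched end to end. Everything below is illustrative: the "everyday movement" statistics, the 50 simulated neurons, and the kurtosis-matching search are hypothetical stand-ins for the real data and for the KL-based alignment in the actual work, chosen so the whole thing runs in a few lines of NumPy:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000

def sample_velocities(n):
    # Hypothetical "everyday movement" statistics: x-velocity is bimodal
    # (out-and-back reaches), y-velocity is unimodal. Purely illustrative.
    vx = rng.normal(3.0, 1.0, n) * rng.choice([-1, 1], n)
    vy = rng.normal(0.0, 1.0, n)
    return np.column_stack([vx, vy])

def kurtosis(z):
    # Excess kurtosis per column; distinguishes bimodal from Gaussian axes.
    z = z - z.mean(axis=0)
    return (z**4).mean(axis=0) / (z**2).mean(axis=0) ** 2 - 3.0

# Prior samples: movement statistics only, never paired with neural data.
prior = sample_velocities(n)
prior_kurt = kurtosis(prior / prior.std(axis=0))

# A separate session: true velocities (kept only for evaluation) drive
# 50 simulated neurons through an unknown linear "encryption" plus noise.
true_vel = sample_velocities(n)
mixing = rng.normal(size=(2, 50))
neural = true_vel @ mixing + 0.1 * rng.normal(size=(n, 50))

# Step 1: dimensionality reduction (PCA via SVD) down to 2 latents.
centered = neural - neural.mean(axis=0)
_, _, vt = np.linalg.svd(centered, full_matrices=False)
latent = centered @ vt[:2].T

# Whiten the latents; what remains unknown is a rotation/reflection.
evals, evecs = np.linalg.eigh(np.cov(latent.T))
white = latent @ evecs @ np.diag(evals**-0.5)

# Step 2: distributional alignment -- search rotations for the one whose
# output marginals look most like the prior's (matched here by kurtosis,
# a cheap stand-in for minimizing a KL divergence between distributions).
def rotate(theta):
    c, s = np.cos(theta), np.sin(theta)
    return white @ np.array([[c, -s], [s, c]]).T

thetas = np.linspace(0.0, 2 * np.pi, 720, endpoint=False)
scores = [np.sum((kurtosis(rotate(t)) - prior_kurt) ** 2) for t in thetas]
decoded = rotate(thetas[np.argmin(scores)]) * prior.std(axis=0)

# Up to a sign flip, the ciphertext-only decode tracks the real velocity.
r = np.corrcoef(decoded[:, 0], true_vel[:, 0])[0, 1]
print(abs(r))
```

The point the sketch preserves is the one from the talk: the decoder is fit using movement statistics alone, never a paired known-plaintext recording, yet the decoded velocity ends up tracking the true one up to a sign ambiguity.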

And what you can see is we're doing a pretty good job at recovering the movements that the monkey is making at that point of time. And of course we can quantify this. Here we have four settings. The first one is in red, which is the supervised approach: we have the full known plaintext, we know exactly what's going on. In yellow you see what we call distributional alignment decoding, but that's basically a ciphertext-only approach. The difference between those two is not significant. Now, you can combine those two approaches, and you'll do a little bit better. What we can also do is use the

movement statistics of one monkey and the neural data of another monkey, and we can also decode in that setting, which is quite interesting: we basically never need both data sets from the same subject. We have movements from one monkey and brain activity of another monkey, and we can still decode. And of course the confidence intervals here are 95 percent bootstrap intervals. So why is that worrisome? In a way, what you saw there is a ciphertext-only attack, a ciphertext-only decoder, and in this case we can decode how the monkey wants to move their arm. And you can say, well, who cares about how a monkey moves their arm,

but think about it one step further. You might not tell me your secrets if I ask you for your secrets, but there's a very worrisome possibility: what if your inner voice sometimes talks about your secrets, and what if I could get hardware access to your brain, maybe with or without your will? In that case I should be able to decode your inner voice. Why? Because your inner voice probably speaks English, and therefore I know something about the statistics of your inner voice, and if I know the statistics of your inner voice, I should in principle be able to get at your inner voice. Now, the good news is neuroscience isn't quite there yet.

Fortunately we're just recording from order 100, maybe a thousand, neurons at a time; we probably need more to decode inner voices. But in principle we are getting into the domain where such things become meaningful, and that's at least something that we might want to worry about. That brings me to a call for ethics. These coming neuroscience possibilities are great possibilities for all kinds of medical problems. If you can't speak, you had a stroke, your mouth no longer moves, but you can totally think in your head, it's magical if we can put electrodes into your brain and allow you to speak again, for people that are locked in, for people that had a spinal cord injury. Hey,

I mean, in a way, one of the coolest examples in that area is, say, Luke Skywalker. You remember, in one of the movies he loses the arm, and he has a prosthetic device that is so good that in the rest of the movies we don't even notice he has a prosthetic device. How could such things be possible? Well, access to neural data, because that's where ultimately all the intention to move lives. And incidentally, you might get lower latency, because you don't need to wait until the signal travels from the head to the arm, which is also very desirable. But I think we should start worrying before it becomes a real possibility,

but brain science is a new weakness for humans. If we can record things, and if we have access there, um, then there are ways of doing bad things with it. And if we hear people like Elon Musk talk about it, and I think he's overly enthusiastic and doesn't quite understand the difficulty of the things coming our way, but if people would start building prosthetic devices that allow them to talk more rapidly with one another, well, there's a channel, and over that channel they might not just transmit what they want to say; they might just transmit what's going on in their brain, and all of a sudden we might

know a lot about them that they don't actually want to broadcast to us. And if not, well, how can we know what it is that a person wants to broadcast to another? People like Elon Musk make it sound like having this broadband channel to another person would be very desirable. I personally would be very worried about how I can make sure that I don't accidentally leak all my secrets over this channel, because it's hard enough to make sure that I don't say really stupid things: if I give a talk, I'm always worried, what if I say something that I shouldn't

be saying, that's potentially obnoxious or something? Imagine I had a bus to the world that was maybe 10 or 100 times broader; I would be broadcasting all kinds of things that I shouldn't. Um, so that's one of them. We therefore, I think, need to think very hard about what would be acceptable. People have brain implants routinely in their brains; for example, if you have Parkinson's disease, they now build little brain implants into your head that basically zap you regularly, and some of them record from brains as well, for example for epilepsy. So there starts to be data that comes out of brains, and we don't currently know what the weaknesses of such systems are.

Now, at the same time, clearly, if people go into this implant space, there's a lot of interest in being able to also write into the brain, and people are extensively working on that. As you know, Facebook has been very interested in interfacing with brains. Um, if you can write to the brain, what would be acceptable to do to the brain? Well, there are reasons why we want to write to the brain, but how can we distinguish between just transmitting some text, your mother wants you to call her, something like that, versus also transmitting, oh, and advertisements are awesome, we should all celebrate our advertisements? Where should we be in that space?

I think neuroscience will very soon have to grapple with difficult problems like that. It might not happen in two years, but it might happen in five, and it certainly will happen within the next few decades, so we should really start thinking about it. That brings me to my take-home message. Brain science has really interesting questions about codes, and it has its own way of thinking about codes; thinking about codes from many different directions is a very interesting direction. And there isn't much debate happening between neuroscientists and security researchers, despite the fact that this will potentially become a problem relatively quickly. It's an

incredibly fast-advancing technology space where a lot is happening, and in a way research is slowly moving towards the possibility of a ciphertext-only attack on human brains. We probably don't want that to happen, so we should start thinking about how we can build systems that make it absolutely impossible. And with this, thank you very much for listening; I very much look forward to the discussion.

Hey, all right. Hello, BSides campers, and welcome to the Q&A for our cutting-edge keynote. It's really fascinating to see an update on the kinds of progress that have been made with how many electrodes we can stick in a brain and actually get something intelligible out of it. Konrad, thank you so much for coming to talk with us today. Thanks, thanks for having me. So, as you might imagine, for a lot of our folks the mind immediately jumps to: oh my gosh, what can we do with this information, how,

you know, what are the boundaries of the problem? You've been talking about decoding internal voices by sticking wires into folks' brains, which is definitely a problem in itself. I've also got people asking: how much resolution, how much information can we get with other sensors that don't involve putting electrodes into a brain, and does that even apply to the kinds of things you're talking about? Yeah, that's a wonderful question. There's a continuum here. On one side, I open your skull and I put in a hundred

thousand wires, which allows me to get a really large amount of data in and out of your brain. The other possibility is: what can we do if we don't even open your skull? That's something we've worked on for a long time. I'm sure a lot of you have seen EEGs, where you basically put electrodes onto the head, connect them to an amplifier on the outside, and you can decode from that; it's just that the data rate you get might be a factor of 10 or more lower than if you give me a wide bus. But there's a lot of tech development in that area; for example, Kernel is one of the companies active

in that space, working on new techniques to get data in and out of brains. For example, there are these new ideas for getting information optically in and out of the brain; you still need to wear a hat, but you're wearing a hat right now, so that would be a possibility. Optically, like lasers pointed at my scalp, or some sort of retinal interface, or what are we talking about? I think we're talking about lasers pointed at your scalp. There's old research in neuroscience showing that if I put light onto the surface of your brain and

that part of the brain is more active, more blood is going to be flowing into those areas, which means the reflectance of the light will change. More blood means it's more red, which means there will be more absorption in some wavelength ranges, so there exist ways of getting a signal out. And then there's this cool new stuff called optogenetics, where, if you allow me to get a virus into your bloodstream, I can in a way make your

cells respond to light, so that I can inactivate or activate them with light; and alternatively there are mechanisms that allow me to get a lot more data out of your brain with the same light that goes in and out. Got it. But the resolution would be much broader there; you're talking region activation rather than specific neurons. So is that open to the same kinds of things you're talking about, or is it just a much more general sense that you can get? Yeah, it totally is. If you give me these low-resolution signals, I can still say a lot about what you're thinking. For example, they now do these

things where they have someone in an fMRI scanner, which is one of these from-the-outside ideas: I put you into a strong magnetic field, and then with radio frequency I can read out what your cells are doing, and from that you can now decode, say, roughly the video that someone is seeing, or what kind of words they're thinking about. So you can get relevant information out; it's just a smaller bus. Cool, all right. So in the talk you go a lot into movement, vision, language, these kinds of things.
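The "smaller bus" point can be made concrete with a toy decoder. This is not any real fMRI pipeline, just an illustrative sketch with invented activity patterns: even with only eight coarse, noisy "regions", a nearest-centroid rule separates two stimulus classes far above chance.

```python
import random

random.seed(0)

# Hypothetical mean activity of two stimulus classes across eight coarse
# "regions" -- a cartoon of a low-resolution, fMRI-like signal.
MEANS = {
    "face":  [1.0, 0.2, 0.8, 0.1, 0.9, 0.3, 0.7, 0.2],
    "house": [0.2, 1.0, 0.1, 0.9, 0.2, 0.8, 0.3, 0.7],
}

def observe(label, noise=0.4):
    """Simulate one noisy, low-resolution recording of a stimulus."""
    return [m + random.gauss(0, noise) for m in MEANS[label]]

def decode(sample):
    """Nearest-centroid decoder: pick the class whose mean pattern is closest."""
    def sq_dist(label):
        return sum((s - m) ** 2 for s, m in zip(sample, MEANS[label]))
    return min(MEANS, key=sq_dist)

trials = [(label, observe(label)) for label in ["face", "house"] * 200]
accuracy = sum(decode(x) == y for y, x in trials) / len(trials)
print(f"decoding accuracy: {accuracy:.2f}")  # well above the 0.5 chance level
```

The coarse channel carries less information than single-neuron recordings, but as the simulation shows, "less" is still plenty to leak what category of thing someone is looking at.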

What do we know, or what can we learn, about memory, context, and decision-making from these kinds of techniques? Well, neuroscientists have long been into decision-making, and what you find is that there are some brain areas where the activity of a lot of neurons increases as you get closer to making one decision versus another. So in that sense we're quite good, in some cases, at reading out the decision that's going to be made. When it comes to memory, there are wonderful researchers, Nicole Rust say, who ask: if I show you lots of images and I see the brain activity at the same time, can I predict which stimuli you will remember and which ones you won't?
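The ramping activity he describes is commonly modeled as evidence accumulation toward a bound. Here is a toy, purely illustrative simulation (all parameters invented) of why such a ramp leaks the decision before it is reported: an observer sampling the signal halfway through a trial usually already sees which bound the activity is drifting toward.

```python
import random

random.seed(1)

def accumulate(drift, bound=10.0, noise=1.0):
    """Noisy random walk that drifts toward +bound ('A') or -bound ('B')."""
    x, trace = 0.0, []
    while abs(x) < bound:
        x += drift + random.gauss(0, noise)
        trace.append(x)
    return ("A" if x > 0 else "B"), trace

# Sample the signal halfway through each trial and guess the eventual choice.
hits, n_trials = 0, 500
for _ in range(n_trials):
    choice, trace = accumulate(drift=random.choice([-0.3, 0.3]))
    early_guess = "A" if trace[len(trace) // 2] > 0 else "B"
    hits += early_guess == choice
print(f"early readout predicts the final choice on {hits / n_trials:.0%} of trials")
```

The mid-trial guess is right far more often than chance, which is the sense in which this kind of activity "gives away" a decision before it is made.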

And so there's a clear signature of what you will remember. And I should just mention, for people who are interested in behavior, because behavior is something a lot of us can observe: there's also beautiful research that links eye movements to both memory and decision-making. If I'm deciding between two products, A and B, and I will end up choosing B, I will look at B much more often than I look at A. And I don't think people reflect at all on the fact that their eye movements actually give away what they know and believe about the world. Yeah, well, people have been

trying to decode that manually, without Neuralinks, for years, right? But with that information, it sounds like an advertiser's dream scenario, to be able to really understand what images trigger attachment, memory, all these sorts of things. Yeah, and of course they're very actively trying to get those kinds of signals, to be able to move us so we buy the things they want us to buy. Okay. So have you considered how people might communicate with an advanced version of this kind of thing? I mean, you talk about how we're seeing processing of images:

how do we potentially get an image in one person's head into another person's head in a way that's comprehensible? Is that even possible? When you're laughing, I can see the laughter already. No, the interesting thing is that that is kind of starting to be a reality now. Like I told you, when we put people into an MRI scanner, as a field we're starting to be able to roughly decode the video that that person sees. And if I can decode it, that means I can render it as a video; I can basically render what's happening

in your brain, what you see, as a video, and therefore I can already do it. And people who don't have eyes, getting vision through that sort of thing, is that happening in some places? Yeah, both directions exist. There's the thing we call encoding: I take a video, I encode it in some way, and I put it into your brain as electrical stimulation; that is encoding, I give you something that's on the outside. And then there's decoding: I record from your brain and I run an algorithm on it to try and show someone else what it is that you're actually seeing. Both are happening at the moment, and people are rapidly making

progress on both. Okay, and that works in spite of each person's way of encoding information being different from everyone else's? Okay. But let me briefly zoom out of this a little bit. If you listen to people like Elon Musk talk about brain implants, they're always like: yeah, we could talk much faster with one another if we both had prosthetic devices that kind of beam the information to one another. And I'm not so sure I buy that, because I believe I can actually type faster than I can really think, and I believe I can see things faster than I can process them. I can definitely type faster than I can

think; I have many, many examples of that on Twitter for the public to see. So the fascinating thing here, the things you brought up in the talk, show me a lot of room where we really do eventually need cross-disciplinary activity: both technical controls for how we can defend these systems once they're actually in place, which kind of requires knowing what the technology will be before you can work out the ways to defend it, but also

legal contributions, for how we make sure that it isn't okay from a legal perspective to put a desire for a product in someone's brain, and yet it is okay to put a picture of the product in their brain. It's difficult, right? These are hard questions. So what's your sense from the field? We have a lot of people here, I'm getting a lot of comments, a lot of interest. For people who are interested in this, where is the field ready for that input? What kinds of input are

good to go today, versus in five years or in ten years? Do you have any sense of who you would be looking to work with from the perspective of other disciplines? Yeah, let's briefly talk about the timeline. We have stimulation devices in humans right now for diseases like Parkinson's disease, and there are people very actively building to make that very high-dimensional, which means it's happening. And of course, neuroscientists like me are mostly interested in making stuff work, but that means that, if you want, we should be the customers of the legal, the ethical, and also the

technical solutions for what this means. If I can interfere with your brain, what does it mean for my interference to be something you would find okay? It's actually ethically quite interesting: what hacks of your own brain are you willing to accept as good? Maybe you want to be motivated to read more books, or to quit smoking, but you're clearly not willing to drink only Coca-Cola for the rest of the future. So in a way we are not asking for help yet, but in part that's because we don't know where this is all going, and I think the rest of the

world should start thinking about it, slowly. I absolutely agree. So thank you so much for putting this forward, for sharing what you know with us. There are obviously a lot more questions, a lot more discussion, and from what I'm seeing, a lot more curiosity. Everybody, put those questions in the Discord and feel free to talk amongst yourselves; maybe Dr. Koerding will even be able to poke his nose in there and take some of them. If not, he's @KordingLab on Twitter, and he's obviously shown himself to be a fellow who's very open to collaboration across disciplines and to

getting the word out about these issues. So thank you. Awesome, thanks for having me.