[APPLAUSE] OWEN JONES: Thank you very much, Dr. Brainerd. I appreciate the opportunity to come and chat with you all about these subjects, which I care a great deal about and which I hope to inspire you to continue working in.
Before delving a bit more deeply into the connections between law and neuroscience, I think we need to spend a few minutes just orienting ourselves to the respective subjects. So I know that this is not an audience filled with lawyers. So many of you may have an impression of law that looks somewhat like this. Law is dusty, bookish, and boring. It's filled with gesticulations at a podium, and droning arguments over abstractions, and dusty tomes, and this sort of thing.
I want to try to shift that orientation for a few minutes. And there's a particular reason for that. But I want you to think instead about law as a massively multiplayer game in which there are vast resources at stake, and alliances that form and dissolve, and opportunities to collaborate and betray. And all of this takes place in an arena, the jurisdiction or the world in which various alliances rise and fall as a function of their productivity. And individuals all seek to maximize their happiness, and their homes, and their reputations, and their cool gadgets in a world in which not everyone can have exactly all their goals fulfilled. So think about law as having a sort of Indiana Jones component to it in which there is a lot of activity that's really very high drama, notwithstanding the fact that a lot of this drama takes place in verbal arsenals rather than other weaponry.
Now when you think about the human organism, populated in multiple billions now, you think about this vast rules system as one in which the animal needs to behave. Now law is only one form of regulation of behavior. You've got religion, of course. You've got norms that percolate from the bottom up as people recognize their own self interests.
But at some level, the legal system is always trying to intervene. We don't deploy the legal system to change behavior in a direction it's already going typically, unless we need to go further or faster in that direction. We're generally involved in the business of behavioral change.
There are rules. They're complicated. But they have purposes. And they're all designed to incentivize behavior to move in a different direction. So instead of thinking about law as a carrot or a stick, as people tend to do, think about it instead as really a lever for shifting behavior in directions that the population believes to be valuable.
The point of that metaphor, which I've used a number of times in the past, is really to try to focus attention on the behavioral model here, which is the fulcrum. And if you remember your basic physics, if your fulcrum is not solid, if it's sponge-like, or on this analogy, if your behavioral model is inaccurate and soft, then a lot of energy that the legal system could otherwise bring to bear on moving behavior in valuable directions gets bled off into the inefficiency of the model. So law fundamentally needs sound behavioral models for it to do its job efficiently, effectively, and one hopes, ultimately fairly.
OK. So where do these behavioral models come from? Historically, they've come exclusively from the social sciences side of the university. And there's nothing wrong with that, of course, except to the extent that we might think that sociology, economics, poli sci, psychology of the traditional sort, might not be giving us a complete picture of the human organism, which, after all, is, like all other species on the planet, a biologically-evolved pathway toward life and thriving and reproduction.
So that brings us to brains. So we're going to connect from law over to the neuroscience side. This is a scanning electron microscope picture of a neuron. And this is a neuron here, if you can see it, in red, playing with friends. Every one that it has a synaptic connection to is identified here in yellow.
And so as you in this building very well know, this is where you are. Whatever we think of as you is largely residing in these synaptic connections. And this brings a lot of people to existential angst. You know, am I nothing but a collection of neurons and causation chains stemming back to the Big Bang? And is all my behavior preordained and this sort of thing?
And of course, what does this mean for responsibility? Do I choose my own path? Can I be held accountable for my behavior? And consequently, what does this sort of neuronal perspective on human behavior and thriving mean for the legal system?
So I want to spend a few words on backdrop before then giving a more specific overview of where we're going to dig down to in this talk. And the backdrop is that law cares about brains because it has to face the following sorts of issues, some of them having to do with responsibility. So here is the brain of a fellow, Herbert Weinstein, who strangled his wife and, not content with her imminent demise, threw her out of the 12th story of their apartment building to her final demise.
Later, after he was arrested and in preparation for trial, they scanned his brain. And this is what they found with Positron Emission Tomography, otherwise known as PET scanning, which looks at glucose metabolism within the brain tissue. And they found-- you don't have to be a radiologist to recognize-- there's something unusual going on over here.
Well, it turns out that that unusual thing is a subarachnoid cyst in the lining of the brain, not in the brain tissue itself, but growing within the cranium in such a way that it's crowding out a lot of the other brain tissue, with what effect we do not know, but with what effect the defense would like you to believe is either fully exculpating or, perhaps at a minimum, mitigating should he be sentenced. And the legal system must decide: does evidence like this come before the jury?
And if so, how are we to understand its relevance to the legal matters of substance? How do we connect that spot, that absence of metabolism in the cranium, to his behavior? How do we know, for example, how many people have a similar condition, undiagnosed, and don't throw their wives out a window after strangling them? So it raises some very, very interesting issues. But it's all part of trying to govern the human organism according to the behavioral models and the understandings of how people make choices and decisions and how they can be held accountable for their various behaviors.
So how did we get here? We'll back up for just a few minutes and sort of race through the brain history. And we can start with Aristotle, because this is one of the few things that he got spectacularly wrong. He thought that the brain was an organ of entirely minor importance, subordinated to the heart, where all the action was in terms of behavior and processing and thinking and feeling and all that sort of thing.
And he was led to this-- we shouldn't hold it against him; he had different technologies than we have now-- because of the superficial similarity between the sort of tubular structure of the brain and the structure of stills. And so he assumed that the brain was actually an organ for distilling the vapors of undigested food.
So that's where we were at the time. And we've obviously moved to much different metaphors about the human brain. For one thing, it's much more like a furnace than a still. It's extremely energy-consumptive. You probably know that it's only about 2% to 3% of the body's mass, and yet it consumes about 20% to 25% of the calories that you ingest. So every fourth hamburger basically goes straight to the brain tissue.
And we know that, whereas it used to be considered, even after we moved from the Aristotelian view that it was of minor importance to a view that it's of major importance, we for a long time still considered it sort of a bumpy lump of smart jello that did a lot of things but not necessarily in a very specialized way. Over time, we've come to recognize, through a variety of different techniques-- behavioral tests, lesion tests, and these sorts of things-- that the brain is, indeed, very acutely functionally-specialized.
And it is, in essence, a form of information processor that integrates perceptions and inputs and plays those out within a brain that is a product of evolutionary history to yield behavioral outputs or sometimes predispositions toward different behaviors in the form of emotions or inclinations or other things that tended to yield adaptive behavior, on average, compared to contemporaneously exhibited behavior in ancestral environments. Obviously, things are much more complicated than simply that. But the fact is we now appreciate the brain is very functionally and anatomically specialized.
So we've moved from a world in which we learned a lot about brains when they were dead on a slab in dissections to a world in which we now can learn a tremendous amount about brains in the living organism. So we know that this functional specificity in here is in the human brain in part because of things like poor Phineas Gage. I assume this is familiar to many of you. Scientifically, this was a happy accident-- not such a happy accident for him. But it really made the point that different pieces of the brain can be responsible for different forms of behavior after he survived this terrible accident of the tamping rod going through his skull and removing part of his tissue, thereby resulting, apparently, in a very sharp and striking behavioral change.
We also know that if you stimulate different parts of the brain, you can elicit memories, laughter, sadness, a variety of different emotions. And we know from even stepping back to the evolutionary biology side that brains cannot be born blank. These are some pictures that I took in the Galapagos Islands some years ago.
And if you took any one of these species and somehow had it behave in a way like one of the others, this is a recipe for instant disaster. If you don't come equipped with at least some behavioral inclinations on which your experiences then build and your brain continues to wire, you go nowhere very quickly. So that suggests that there is inevitably some role for genes in the developmental and behavioral process. And that's true whether you're talking about insects or fish or mammals or primates.
So what does this mean? Does it mean that we're hard-wired for specific behaviors? Does it mean that we're sort of puppets to our genes? And does it mean that we should engage in this battle that people sometimes do over whether or not a given behavior is the function of its genes or its environment?
No, it won't surprise you to hear that I believe that's ridiculous. The best way I've heard this put is that to argue about whether it's the genes or the environment that drives a given behavior is like arguing about whether it's the length or the width of a rectangle that contributes more to its area. It just doesn't make any sense, right? So if we think about brains as existing at the intersection of genes and environment across a third axis, which is evolutionary time, we are here with the brain that we have. And the question ultimately that I want to reach is, with what implications for some aspects of the legal system?
So against that background then, here is the talk overview. We're going to talk about neuroscience and law, talk first about the hype as a function of the tech, leading to a fear and a hope, ultimately talk a little bit about, in law, the practice and the problem. And that leads us to thinking about the needs of the legal system and the future for law and neuroscience.
So first the hype-- so if you've been paying any attention, you've seen a lot more activity in the advertising realms focused on the brain. This, I think, is a bellwether of the fact that advertisers understand that people are learning more about the brain. They care more about the brain. The brain is somehow sexy in a way that the rest of the body is perhaps only lingeringly so.
And you see this played out even in popular commercials, like this recent one for V8, where you have a guy hooked up presumably to EEG. And there's a lab scientist in the back, connected to him by a wire. And they're trying to figure some meaningful things out about how he likes V8 when he drinks it.
Now this has led some to believe that the arising of neuro drinks-- there's NeuroSport, there's Neuro Bliss, a few others-- has led us to a place of neuromania, where we've just taken neuro too far. It's just perhaps too convenient a set of syllables to tack onto all these other disciplines.
And so we have neuropolitics. We have neuromarketing. This is a paper in the field of neurohistory. You've undoubtedly heard about neuroeconomics and its focus on decision-making. And the New York Times Magazine, through reporter Jeffrey Rosen, dubbed this "neurolaw" in a cover story a few years ago.
So that's the hype. Well, where does all the hype come from? It seems to me the hype inevitably comes as a function of the growth of the tech and the awareness thereof, right? So you're well aware that as we study living brains rather than brains on a slab, we can look at them through x-rays. We can look at them through CAT scanners that amalgamate the x-rays into images that approximate three dimensions through which you can move seamlessly. And of course, we've got MRI scanners that, by virtue of the magnetic properties of different tissues within the brain under different conditions, can also give you a lot of structural specificity.
But of course, we also have these techniques increasingly for functional studies as well. So this is EEG amalgamated down here in what may approximate fMRI envy into sort of top-down pictures of the brain, taking signals like this and translating them into the kinds of colors that historically have been more common in fMRI, for better or for worse. You've got PET scanners. Here's our friend Weinstein again.
And of course, you've got the use of MRI in a functional capacity to try to do studies-- sometimes these are task-based-- that enable you to contrast how the brain is working in one circumstance or another, with inferences thereby about how the neurons are calling up oxygen and deoxygenating the blood as a function of their metabolic activity and presumably their cognitive work.
You can also do, as you may be aware, resting state analyses, where instead of putting people under task, you put them in the scanners and essentially watch, more or less, their brains talk to themselves. This is from a study of normal controls in the top row, schizophrenics in the second, and those with bipolar disorder in the third. You can see, if you look in the rear of the brain here in the bipolar group and compare it to the so-called normal controls, some significant differences there that may have diagnostic and perhaps more value beyond.
Some of you may or may not know about functional connectivity analyses, where you can look from the side, the top, and the rear at ways in which groups of brains differ from one another. So here, for example, the red bars show how group A may have more connectivity between different regions than Group B. And the blue bars show where that group may have less connectivity in terms of essentially the cross-talk between different regions of the brain.
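The comparison just described reduces to simple matrix arithmetic: build a region-by-region correlation matrix per subject, average within each group, and subtract. Here is a minimal, hypothetical sketch in Python (the toy data and function names are mine, not from any actual connectivity study):

```python
import numpy as np

def connectivity_matrix(timeseries):
    """Region-by-region correlation matrix from one subject's
    time series (rows = time points, columns = regions)."""
    return np.corrcoef(timeseries.T)

def connectivity_difference(group_a, group_b):
    """Mean connectivity in group A minus group B. Positive entries
    are edges where A is more strongly connected (the 'red bars');
    negative entries are where A is less connected (the 'blue bars')."""
    mean_a = np.mean([connectivity_matrix(ts) for ts in group_a], axis=0)
    mean_b = np.mean([connectivity_matrix(ts) for ts in group_b], axis=0)
    return mean_a - mean_b
```

Real analyses add parcellation, filtering, and statistical thresholding on top of this, but the core object being compared is just a difference of correlation matrices.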
Some of you may also know about diffusion tensor imaging, which allows you to track pathways of fluid diffusion in the brain and to learn things about it, both structurally and functionally there as well. A technique that we've started using in addition to MRI is TMS, Transcranial Magnetic Stimulation, that enables you to selectively dampen activity in a region not too deep from the cranium by taking advantage of the relationship between magnetic pulses and electrical activity.
So you may also be aware that you can use neuroscientific techniques to reverse engineer what a person's brain may be doing or looking at. How many of you are aware of Frank Tong's work in this regard? Anyone? OK, just a couple.
So what Frank did is he trained up a pattern classifier essentially by showing lots of images to some subjects. And then he would allow the computer essentially to connect the activity of the brain to the images that the person, the subject, is seeing. But then he would show a new set of stimuli here.
And then he'd ask the computer essentially, through its algorithm, to reverse engineer what the person is probably seeing through changes in oxygen in the visual cortex. This is what the computer came up with, which when you think about it, is pretty remarkable on the basis of a non-invasive technology that's looking at changes in oxygenated blood.
He also did an experiment in which they trained up the pattern classifier with lots of images and then asked the computer essentially to go out to a database of, I think, tens of thousands of images, none of which was the target one, and pick the one that seemed most closely aligned, on the basis, again, of changes in blood oxygenation in the visual cortex. So here's what the computer picked as a match. Here's another stimulus, people on the steps. And here's the computer's match, people on the steps. This is pretty remarkable when you think about it.
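The identification step can be sketched as a nearest-neighbor search: given an observed voxel pattern, pick the database image whose predicted pattern correlates best with it. This is a deliberately toy, hypothetical version with random vectors standing in for voxel responses, not Tong's actual pipeline:

```python
import numpy as np

def nearest_image(observed_pattern, database_patterns):
    """Return the index of the database image whose predicted voxel
    pattern best correlates with the observed visual-cortex activity."""
    scores = [np.corrcoef(observed_pattern, p)[0, 1] for p in database_patterns]
    return int(np.argmax(scores))

# Toy stand-in: 50 candidate images, each with a 200-voxel predicted
# response; the "scan" is a noisy copy of image 17's pattern.
rng = np.random.default_rng(0)
database = rng.standard_normal((50, 200))
observed = database[17] + 0.3 * rng.standard_normal(200)
print(nearest_image(observed, database))  # recovers index 17 on this toy data
```

The striking part of the real result is not the search itself but that a non-invasive BOLD signal carries enough information for the search to succeed.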
So this kind of tech, I think, is what's leading to the hype. People think, ah, we'll just put somebody in a brain scanner. And we'll be able to tell if they're going to blow up the next Lockerbie plane or something like that. And of course, you know it's much more complicated than that.
But let me give you two examples now that are getting a little bit closer to the legal domain to give a flavor of some of the things that law cares about from the neuroscientific side. So one thing that would be really great to know is whether or not somebody is lying, right? Turns out human beings are decent but not really all that good at figuring this out. So wouldn't it be nice to have a little technological boost, better than polygraph by some measure? It would be great.
And the existing studies-- and there are about 35 of them or so to date-- hold some promise that we could detect lies. But they have some very, very serious methodological flaws. One is typically, there are no major stakes at issue. It's not as if somebody is going to fail the test and wind up in jail. So we don't know how that population may lie or tell the truth very differently.
And one of the biggest challenges here is that most of the studies, with one exception I'll talk about in a moment, are on so-called instructed lies, where you're told to try to deceive the researcher in what ultimately is a game a little bit like poker. Go into the next room, take an object, stick it in your pocket, and I'll try to figure out what it is while you try to keep me from figuring out what it is.
So a group I'm associated with funded a study by Josh Greene at Harvard, who came up with a very clever way of inspiring people, incentivizing people, to lie in the scanner, a non-trivial task, but one that he solved quite elegantly. So what he did is he said, OK, we're going to investigate paranormal activity here. And we want to test your ability to predict the future.
We're going to show you a bunch of coin flips. And while the coin is still flipping, we'd like you to predict on your button box whether it's going to land heads or tails. And every time you're right, you get a cash bonus that you'll be able to ultimately take away with you. And every time you're wrong, neutral.
Not surprisingly, under this condition, all the subjects hovered around chance, plus or minus a little bit from 50% accuracy. But then Josh changed the protocol and said, OK, now what we want to do is we want you to just predict silently to yourself, just quietly, whether or not it's going to be heads or tails. Then we'll show you the coin flip. And now just report back to us whether you were right.
Now under this condition, which gives the opportunity for lying in exchange for the cash prizes, about half the group stayed at around 50%. There were a few people roughly in the middle. It was hard to tell what they were doing. They were maybe 60%. And then there were a bunch of people with 75% accuracy or above, some people hovering around 85%, 90% accuracy.
And so what Josh did-- flash of insight that inspired the whole design-- let's call those people liars. And let's subtract out their brain activity from the brain activity of those who, at least on average, didn't lie and see what we find. And lo and behold, there's actually quite a significant difference in the deoxygenation of blood, in prefrontal cortex activity among those who were lying, at least some of the time.
Now of course, there are challenges there. Because we still don't know which question, which coin flip, you were lying on. And we've got group average data. On average, people seem to be doing this. We don't know what our false positive and false negative rates might be. And yet this seems a promising direction, potentially, for the future. We don't know how far we'll get. But that's the sort of thing that law is a bit interested in.
Oh, by the way, here's his graph. And these are the folks-- actually, it turns out there are a lot of people up here around 90%. And it could be that the prefrontal cortex activity is, let's see: I lied last time. I don't want to lie 100% of the time. That's too obvious. So I'll just lie 90% of the time, and maybe no one will notice.
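The logic of calling the high scorers "liars" is purely statistical: honest guessing is a fair coin, so reported accuracy far above 50% is wildly improbable. A small self-contained sketch of that reasoning (the subject names and numbers are hypothetical, and this is my gloss on the design, not Greene's analysis code):

```python
from math import comb

def binom_tail(k, n, p=0.5):
    """P(X >= k) for X ~ Binomial(n, p): the chance an honest
    guesser reports k or more correct predictions out of n flips."""
    return sum(comb(n, i) * p**i * (1 - p) ** (n - i) for i in range(k, n + 1))

def flag_likely_liars(reported_correct, n_trials, alpha=0.001):
    """Flag subjects whose reported accuracy is implausible
    under honest guessing at p = 0.5."""
    return {subj: binom_tail(k, n_trials) < alpha
            for subj, k in reported_correct.items()}

# Hypothetical reports over 100 flips: s3's 90% is the giveaway.
print(flag_likely_liars({"s1": 52, "s2": 60, "s3": 90}, 100))
```

Of course, as noted above, this only labels who probably lied some of the time, not which trial was the lie.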
But OK, so here's a completely different context in which there's some potential relevance to law that we can use neuroscientific techniques to investigate. I and some colleagues at Vanderbilt were interested in this question: how does the brain go about deciding whether or not to punish someone, and if so, how much? So we put together a team of terrific researchers led by René Marois, my neuroscientist colleague in psychology. We had a then graduate student, Josh Buckholz, who took the lead-- he's now an assistant professor at Harvard-- and physicists, and clinical psychologists, and a lot of others. It was a really wonderful interdisciplinary team.
And what we did was we created an experiment, a scenario-based experiment, in which we had a couple of things that we manipulated. One was whether or not the protagonist in the scenario was classically responsible for his criminal activity. You know, John's a bad guy doing a bad thing for a bad reason, and then other scenarios in which there were similar sorts of harms, but John was under duress or hypnotized or sleepwalking or subject to a wide variety of other things that we discovered behaviorally, and not surprisingly, tended to lower people's punishment for criminal activity, in some cases lowered to zero, but often just mitigated significantly.
The other thing we varied was the harm, and the behavioral output was how much you should punish the protagonist. And the harms varied from very low harms of stealing CDs and this sort of thing, up to really heinous rape, torture, murder combinations at the high end, right? So we had these sorts of things to look at.
And what we discovered, cutting to the chase, is that when you subtract out activity between the responsibility and diminished-responsibility conditions, there was a meaningful difference in activity between those two conditions in the right dorsolateral prefrontal cortex-- this is looking from the back of the brain, so roughly where my fingers would intersect over here-- and that activity was greater for responsibility scenarios than it was for diminished-responsibility scenarios.
So this gives us a clue. It's not a slam dunk that we've solved that issue. But it gives us a strong clue that this region is deeply involved in that aspect of the decision. But the activity there did not correlate with the actual punishment amount, which varied, as I said, across these different harms. So what does?
Well, on this study so far, what we've identified principally is a region in the right amygdala that seems to have deoxygenation patterns correlated with punishment amounts. As the blood is increasingly deoxygenated, suggesting greater neuronal activity, the punishment amount that the person reports out physically during the experiment increases. So that's a clue. That's a correlation rather than causation.
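Both analyses just described reduce to elementary operations on trial-wise signals: a mean difference between conditions (the DLPFC contrast) and a Pearson correlation with the behavioral output (the amygdala/punishment-magnitude finding). A toy sketch with made-up numbers, purely to illustrate the shape of the computation:

```python
import numpy as np

def condition_contrast(resp_trials, dimin_trials):
    """Mean regional signal on full-responsibility trials minus
    diminished-responsibility trials (the subtraction analysis)."""
    return float(np.mean(resp_trials) - np.mean(dimin_trials))

def punishment_correlation(region_signal, punishments):
    """Pearson r between trial-wise regional signal and reported
    punishment amount; a correlation, not causation."""
    return float(np.corrcoef(region_signal, punishments)[0, 1])

# Hypothetical trial data:
resp = [0.8, 0.9, 1.1]    # signal when the protagonist is responsible
dimin = [0.3, 0.4, 0.2]   # signal under duress, hypnosis, etc.
print(condition_contrast(resp, dimin))      # positive contrast

sig = [0.1, 0.4, 0.7, 0.9]
pun = [10, 30, 60, 90]                      # punishment ratings
print(punishment_correlation(sig, pun))     # strongly positive r
```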
We're doing some other studies right now with repetitive transcranial magnetic stimulation to investigate the DLPFC activity to see how it may be causally involved in these punishment decisions. But that's the sort of thing that some in the law are involved in doing. And this is all in furtherance, one hopes, of trying to develop a more elaborate and deeper understanding of how different parts of the brain functioning in different ways integrate information on blameworthiness and harm and responsibility to generate ultimately a single punishment output. So we believe this is a line of research that we can pursue for some time to come.
So what's the fear here? Well, there are a variety of different fears. One, of course, stems back to something I alluded to earlier, which is are we nothing more than our neurons? What room does that leave for us, for love and art and these other things?
And so there are some who worry that an explicitly neuronal perspective on where behavior comes from in the law will incline us to think about humans as mere automata and maybe eviscerate the entire concept of free will and free action and free choice and these sorts of things. I think that's dramatically overblown. But I understand the fear.
There's another fear that crops up sometimes. And it has to do-- well, let me just elaborate on that prior point. The Economist said that neuroscience was actually potentially more dangerous in this respect, in terms of gutting some of these concepts of autonomy, than genetics, which got to the table first in the public consciousness.
So the other fear is really one that's borne of the general domain of cognitive enhancement. Probably many of you are cognitively enhanced right now, whether it's on caffeine or various kinds of medicines that are routinely prescribed. And in some universities, I understand, there are often a lot of off-label uses, particularly by undergraduates attempting to increase their performance on various tests through a longer concentration span and that sort of thing.
So we've got sort of this cognitive enhancement domain, where people worry that, aha, these neuroscientific techniques, we'll have implantable microchips and deep brain stimulation and a variety of things that will create this arms race of greater and greater connectivity. And it'll start with little glasses that are in development now with little computers. And you'll be able to process information in ways that those around you can't and that there'll be this sort of arms race.
Again, I think that's a bit overblown. But I do think there are some important legal and ethical issues in those domains that we want to keep our eye on. But one of the things that's driving a lot of the scholarship right now on the law side is hope that these techniques will help us solve big societal problems, that we'll just be able, in a relatively simplistic way, to get to ground truth about issues that we can't otherwise investigate.
And so here, we'll put people through fMRI. And out will pop a definitive conclusion, whether it's about terrorist activity or infidelity or paternity or all sorts of things that you can imagine. Just wouldn't it be easy if we had a silver machine that just could give us the answers to these sorts of things? Not surprisingly, again, I think that hope is overblown. But I understand where it's coming from.
Now this is a chart that shows the accumulation of law and neuroscience publications, articles, chapters, and others, that are grappling with these sorts of issues. What's the promise? What are the limitations? What are the sorts of things coming down the pike that we need to know about? We've actually accumulated a bibliography that's sortable and searchable on a website that I'll give you access to in a few moments.
But I think that that scholarship and that excitement, for better or for worse, is being driven by the hope that neuroscience will be able to help us solve, or at least help answer, some of the questions that law has to grapple with all the time, things like: is this person responsible for his or her behavior? How are we to take Herbert Weinstein's brain and translate that into some sort of meaningfully just way of treating him for his behavior?
Did the person have the sufficient capacity to self-inhibit that we assume is distributed in an average way across society? And what if they don't? What are their mental states? How competent are they? What do they remember? We're dealing with these sorts of issues. Are they lying? Even, is this person dead?
Neuroscientific techniques may help us as we grapple with whether we should be using a sort of coronary approach or a brain science approach to when is someone like Terri Schiavo, for example-- you may remember this case from a few years ago-- when do we consider them irremediably gone? So there's the hope that there will be at least contributions toward answering these questions.
Now let's talk about this case. I don't know how many of you saw this in the news just a few weeks ago. This is a guy whose friend-- we all need friends like this-- shot him with a spear gun through his forehead. And so this is how he presented at the hospital, right-- really scary stuff. He survived. This is him on the other side of the operation. So that's sort of-- we'll consider that hypo number one.
Hypo number two-- here's a guy, if you can see it, he's got a crossbow bolt in the front of his brain. He tried unsuccessfully-- emphasis on the first syllable-- to commit suicide by crossbow shooting up this way. Now he had been suicidal and antisocial. He is now what his doctors describe as inappropriately cheerful all the time.
[LAUGHTER]
Now I mean, this is America. We can't have that, right? But so here's case number two.
Suppose these two individuals, neither of them insane, rob a bank a week after the accident. And these images are presented. And you are jurors. I mean, how many of you would be willing to mitigate, at least mitigate, a sentence on the basis of the view that not all the oars were in the water with these two? Show of hands-- how many of you would be willing to take this into consideration? OK. It looks like the vast bulk of you.
All right. Now ask yourself then, what distinguishes that case from this one? Suppose this guy, a week later, robs a bank. He doesn't have this big piece of metal in his head. But we're learning that when you take this custard substance and you bang it around a lot, [LAUGHTER] bad things happen, right, things that can have long-term effects on the brain. So does this open up the "hey, I used to play football" defense to harsher punishments?
Or what about veterans coming back from wartime, where they've been exposed to very significant blasts which blow their body in a direction and the brain is trying to catch up with that body from inside the cranium, where it's squished? And they don't have transcranial penetration. But they've still got damage on the interior. What do we do with individuals like that?
Or how about boxing, right? Now the whole purpose of the sport of boxing, as I understand it, is to inflict sufficient brain injury on someone else that their brain shuts down. Could this lead to a battered boxer's defense for robbing the bank, for rape, or for when they execute a will and the will is challenged because the person was incompetent because they were a battered boxer? What do we do with these sorts of brain-based phenomena that can lead to slippery slopes, frankly, where you start with the nail in the head or the spear gun, and then you stop where, exactly? And how, after all, is this more significant, if it is, than extended abuse in childhood, whether it's physical or emotional?
OK. So you've got a lot of things going on, a lot of things to think about. Let's talk about how some of this is playing out in practice. I'll just talk about a couple of cases here. Here's one: US versus Semrau. The defendant, Lorne Semrau -- later convicted, but at this stage still the defendant, pretrial -- had an attorney who employed this fellow, the head of Cephos, a company that markets fMRI for lie detection purposes, to try to verify that Semrau was telling the truth when, eight years after the fact, he went into a scanner to claim, and attempt to prove, that he did not intend to defraud the government in a Medicare and Medicaid fraud case.
He's a psychiatrist who had a large practice with other doctors, and he was accused of essentially directing money to himself that should not have been his. Now the government has to prove that he was aware he was breaking the law. This is one of those cases in which you have to have known you were breaking the law to receive the conviction and the punishment of law.
So here's what happened. He is scanned by some folks at Cephos. And they ultimately generate a report -- a lengthy and seemingly thorough report by a credible scientist, Dr. Steven Laken -- that includes this language: "Dr. Semrau's brain indicates he is telling the truth in regards to not cheating or defrauding the government."
Now this is a fascinating claim when you think about it. Because it's really a meta-claim about truth verification, right? It's the brain, in a sense, testifying as to its own prior state, many years ago.
Now the question for the magistrate judge in this case was, do we let this evidence reach the jury? There was a two-day hearing, which I had the privilege to attend -- a fascinating and intense couple of days of testimony. Ultimately, the judge excluded the evidence from the purview of the jury. And that was later upheld on appeal at the Sixth Circuit.
I think that was exactly the right decision -- not because fMRI is wrong or somehow not useful, and not because fMRI can never tell us about deception, but because this particular test, with this particular defendant, given these particular facts, as deployed in an experiment designed by this particular researcher -- all of that together did not work, did not pass the so-called Daubert test. And there are obviously some specific thresholds for what it would take to pass that test that are beyond the scope of this talk.
But this is a good illustration of the fact that a judge -- and in this case, the prosecutor -- gets confronted with sophisticated neuroscientific evidence and doesn't have the luxury of saying, well, I don't know anything about that, let's move on to the next thing. No. This is the evidence the defendant wants to bring. The judge has to rule on it. The prosecutor has to contend with it. And had it gone to trial, jurors would have had to figure out how to weigh it.
Another development on the law side that's worth thinking about is that we have some good evidence that at least some jurors claim to be meaningfully affected by neuroscientific evidence, or at least some kinds of it. This was a Florida case in which Grady Nelson stabbed someone 67 times; the victim died, and Nelson was convicted of the murder.
No neuroscientific evidence at the liability phase -- you're guilty, you're done. Now the only question is sentencing. Florida does have the death penalty, which not all states do. And in Florida, as in some other states that have the death penalty, the jury, rather than the judge, decides whether the defendant gets the death penalty or instead gets life in prison.
And so in this case, very graphically, the defendant's life literally was hanging in the balance. He's either going to be executed, or he's not going to be executed. And a bunch of jurors are going to decide.
Now the defense in that case brought forward testimony by this fellow, Dr. Thatcher, about so-called qEEG evidence -- the q standing for quantitative, a quantitative amalgamation of EEG evidence, which, as you know, traditionally looks more like lines on a rotating cylinder or a graph. And in this case, the jury, by a very narrow internal vote, gave the defendant -- or at this point, the convicted killer -- life in prison over the death penalty.
And what was interesting is that two jurors came out and spoke to the press. This does not always happen. And we don't know that this is an accurate report of their reasoning. But it is a window on at least what they claimed was the relevance of neuroscience to their verdict.
One of them said, "It turned my decision all the way around. The technology really swayed me. After seeing the brain scans, I was convinced this guy had some sort of brain problem."
Now I take no position on whether that was a good conclusion. It seems a little vague -- some sort of brain problem. We don't know how many people have a similar sort of brain problem and don't stab anybody. We also don't really know the direction of the causal arrow: maybe this guy's brain problem is that he's sitting in jail -- which is an intervention in the body and the brain, in one form or another -- grappling with the fact that he killed somebody he used to love. That could give you a brain problem, maybe. So we don't know for sure. But we do know that at least two jurors came out -- and this seemed to be dispositive of whether he died or not -- to say, this kind of evidence was very important to us.
So what's the problem? A variety of problems. Ecological validity is one. You've got experiments -- whether on lying or some other task -- done on a person lying on their back in a scanner, under conditions that are not necessarily ecologically meaningful. So we don't know how even a well-designed study really translates into behavior on the ground.
It's also, as I alluded to a moment ago, difficult to know which specific inference you're to draw from the neuroscientific information. What is the chain of inference between a subarachnoid cyst in the lining of the brain tissue and throwing your wife out the window? What has to be true for those two things to connect in any meaningful sort of way?
Does it matter if that tumor -- excuse me, that subarachnoid cyst -- is impinging on the prefrontal cortex? Because we know from studies of other people that prefrontal cortices are really involved in decision-making and inhibition and that sort of thing. Does it matter if it were in the cerebellum, or the visual cortex, or somewhere else? Do we treat those things differently, and if so, why, and on the basis of what evidence and what logic?
Let's talk about law and science: different goals and standards, right? Get neuroscientists and lawyers together in a room, as I've had the privilege to do many times, and it becomes very apparent that the cultures and the purposes of these two disciplines are not the same. You might say, well, of course, we all know they're not the same. No, it's deeper than that, right?
Both are trying to be engines of truth, in some respect, right? Scientists are trying to come closer and closer to an approximation of truth. And at trial, at least, you're hoping that the adversarial process and its incentives will yield some uncovering of evidence that helps you get at what the facts actually are -- as meaningfully as you can approximate them in retrospect, with a bunch of jurors who weren't there at the time, right?
But there are also some very meaningful differences, because scientists are trying to discover truths about the world or about groups. And lawyers are often trying to figure out some truth about an act or a person. And this yields very, very different contexts. What do I mean by that?
Scientists, quite properly, will go in pursuit of a p-value equal to or less than 0.05. Great. That's a cultural norm, somewhat arbitrary, but it seems perfectly reasonable: accept no more than a one-in-20 chance that the result is a function of randomness rather than a real effect. All right.
So suppose you've got a lie detector that's only 75% or 80% accurate -- leave aside the false positive/false negative distinction for a moment; let's just say, for purposes of discussion, 75% accurate. Scientists will typically say, you can't possibly use that in a court of law; it doesn't meet our threshold. Whereas lawyers will say, we can't possibly avoid wanting to use it, because our next best lie detector is a bunch of jurors looking sideways at somebody, seeing if they're shifty, beady-eyed, and sweating, right?
Now it turns out, as best we can tell, jurors are not very good at lie detection. So the question is joined: what happens if you have a lie detector that falls meaningfully somewhere between what the law currently uses and what scientists would want to see before accepting it as meaningfully scientific? If you want to learn more about that, there's a wonderful paper by Fred Schauer in the Cornell Law Review on precisely that issue. OK.
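To make the "75% accurate" hypothetical concrete: how much a positive result should move a jury depends not just on accuracy but on the base rate of lying, via Bayes' rule. A minimal sketch, assuming (as an illustration, not from the talk) that "75% accurate" means both 75% sensitivity and 75% specificity, with invented base rates:

```python
# Hypothetical lie detector: 75% sensitivity (flags actual lies) and
# 75% specificity (clears truthful statements). How believable is a
# "lie" verdict? That depends on the prior rate of lying.

def prob_lying_given_flag(base_rate, sensitivity=0.75, specificity=0.75):
    """P(lying | detector says 'lie') for a given prior rate of lying."""
    true_flags = sensitivity * base_rate                # liars correctly flagged
    false_flags = (1 - specificity) * (1 - base_rate)   # truth-tellers wrongly flagged
    return true_flags / (true_flags + false_flags)

for base_rate in (0.5, 0.1, 0.01):
    ppv = prob_lying_given_flag(base_rate)
    print(f"base rate {base_rate:>4}: P(lying | flagged) = {ppv:.2f}")
```

If half the statements tested are lies, a flag means a 75% chance of lying; if only 1 in 100 is a lie, a flag means about a 3% chance. The same instrument can be strong or nearly worthless depending on context, which is part of what a Daubert-style gatekeeper has to weigh.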
There's also, of course, the concern about the extent to which neuroscientific evidence might be over-persuasive. Some published studies suggest that brain images lit up like Christmas trees simply appeal to people visually: fine, there's a spot on the brain -- therefore, let's not fry them.
There are others who say, wait a second -- and there's a recent study by Nick Schweitzer and Michael Saks to this effect -- if you really do a tightly controlled experiment, what makes the difference is not the visual image, not the fact that it exists or that jurors see it, but the distinction between scientific testimony and non-scientific, or less scientific, testimony. What really matters is whether you've got a lab guy who can say, I'm a scientist, and here's my view.
So there's a question about this. It is the case that judges can exclude even evidence that everyone stipulates is relevant. They can still keep it from the jury's purview if, in the judge's view, there's a meaningful risk that jurors would be prejudicially affected.
So for example: here's a picture of the victim with the bloody knife in her back. Do we have to see the bloody knife? Or might the graphicness of the image inflame the jurors' passions in a way that could miscarry justice? We worry about that in the legal context as well, in trying to figure out when evidence could be over-persuasive. But of course, the symmetric concern is there too: people might under-attend to relevant evidence as well.
So that problem leads to a need. And that need is being met in part by a variety of organizations. One is the MacArthur Foundation Research Network on Law and Neuroscience, which I have the privilege to direct. It's a group of absolutely top-flight, tremendous, committed neuroscientists and lawyers from around the country. And we're working on two main things.
One is to help the legal system separate the wheat from the chaff: when can this evidence be used? When can it not? How is it best understood? And we're also doing empirical work to figure out what promise neuroscience might hold for the legal system. So we're involved in quite a lot right at the intersection of law and criminal justice, which I'm happy to come back to, time permitting, during the Q&A.
Let me also mention something I think is extremely salutary about Cornell. I should note, by the way, that one of your neuroscientists, BJ Casey at the Weill Medical School, is one of our researchers in this network, and we're very glad to have her and her expertise. But I'm glad also that at Cornell up here in Ithaca, you've got this program working on law and psychology, and that you're bringing students up with expertise in both disciplines.
We're doing this also at Vanderbilt. And there are a few other places that are doing this as well. And I think that's really the wave of the future, where you have people trained in multiple disciplines, who can make these connections in a faster, more robust, and ultimately more productive way.
All right, so what about the future? Two points here. One is: think smaller -- a sober evaluation of the pathway to legal relevance. I think there are pathways. But we don't want to just assume these are superhighways to quick, easy, uncomplicated utility. So I think that focusing on what separates legitimate inferences about the data from illegitimate inferences is going to be one of the most important things we can do.
Also, it's really important to recognize, as I'm sure many of you do, that there are a lot of limitations on how neuroscientific evidence can be interpreted, and that these interpretations are a function of human beings stepping in to make judgments about how the images are generated, how they're to be understood, what thresholds will be displayed, and ultimately what it all means for the things law cares about: Is this person responsible? How responsible is this person?
So, first, think smaller. Then, in exploring those limitations, think about things like base rates; the fact that the causal arrow can run in multiple and opposite directions; and the fact that explanation is not justification. Just because we come up with a mechanistic pathway for something of legal relevance doesn't necessarily mean we should treat the situation differently than we would have otherwise.
Unless you believe that there are supernatural interventions, everything is caused, one way or another. So the fact that you identify a cause doesn't get you privileged status. The question is, how does what you identified tap into a question the legal system is trying to answer in a way that's meaningful?
I've alluded a little bit to, for example, the problems of using group-average data. We know that individual brains and individual brain function can deviate from the group average in significant, but not necessarily abnormal, ways. So how do you cross the bridge between "we have a study suggesting how people on average do it" and "here's an individual who did something"?
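The group-to-individual gap can be illustrated with a toy simulation (all of the numbers below are invented for illustration, not taken from any real study): two groups whose means differ reliably can still overlap so much that any single individual's score says little about which group they belong to.

```python
import random

random.seed(0)

# Toy illustration: "patients" average 100 on some brain measure,
# "controls" average 105, both with wide individual spread (sd 15).
patients = [random.gauss(100, 15) for _ in range(10000)]
controls = [random.gauss(105, 15) for _ in range(10000)]

mean_p = sum(patients) / len(patients)
mean_c = sum(controls) / len(controls)
print(f"group means differ reliably: {mean_p:.1f} vs {mean_c:.1f}")

# Yet classifying an individual by the midpoint between the group means
# is only modestly better than chance, because the distributions overlap.
midpoint = (mean_p + mean_c) / 2
correct = sum(x < midpoint for x in patients) + sum(x >= midpoint for x in controls)
accuracy = correct / (len(patients) + len(controls))
print(f"individual classification accuracy: {accuracy:.2f}")
```

With 10,000 people per group, the 5-point mean difference is statistically unmistakable, yet the best single-score classifier is right only somewhat more than half the time. A group-level finding, however solid, does not by itself license a confident claim about one defendant.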
OK. So this is to underscore some of these points: everything from the color scale to the inferences about neuronal activity is ultimately part of an inferential chain and needs to be unpacked in a way that, say, your average x-ray doesn't. I mean, you might see that spearfishing rod right in the middle of somebody's forehead and say it doesn't take a whole lot of manipulation to determine whether it's really there. Contrast that with functional activity, which is of a decidedly more complicated sort, and you see my point.
OK. So after thinking smaller, what next? Then think bigger. I think that as important as neuroscience could be in providing some tools, not necessarily answers, but some tools to the legal system, I also think it's important to contextualize that within the sweep of science and the sweep of populating those behavioral models that I was talking about earlier.
So you've got other disciplines that are also relevant and may be even more important when synergized with neuroscience-- so behavioral genetics, for example, evolutionary biology. These sorts of things should be integrated at their edges and then ultimately integrated with the social sciences in a way that can populate this behavioral model in a more sophisticated way, so that ultimately, one hopes, the legal system can do what it's assigned to do more effectively and more efficiently. I'm not suggesting that life sciences should be at the head of the table. I'm suggesting that it's crazy not to have life sciences at the table, right?
OK. So, coming toward a close here, let me just flag a few ways of thinking about the relationship between law and neuroscience that might be useful. These aren't the only ways, and I'm still working through some of these ideas. But I think neuroscience can be relevant to law writ large in two general ways.
One is that it introduces new problems. The other is that it provides new aid. The new problems are things like: huh, if the government can take a noninvasive scan of your brain and learn something meaningful, could that be a search and seizure? Is that different from taking a DNA sample from a hair follicle? There are a variety of contexts here in which the legal system now has a problem it must face -- an evidentiary issue, like, do we allow fMRI lie detection in or not? That becomes a new problem for law.
New aid -- there are a variety of ways of thinking about this. Let me just tick through a few, and then we'll open it up for questions. One is buttressing: you have other information -- behavioral, clinical, psychological -- pointing in the direction that maybe this person is insane, and you've got some neuroscientific evidence of something pretty dramatically abnormal in function or structure. Buttressing a conclusion that way might yield greater confidence in it by adding weight in a triangulated fashion.
Challenging the legal system -- if there are neuroscientific truths we can uncover that are inconsistent with legal assumptions, that might -- it doesn't have to, but it might -- prompt legal reform. Let me give you an example. In the legal system, typically, a person cannot testify as to what he heard somebody else say, right? So I can't say, well, I heard Chuck say, "I'm so glad I shot that guy."
Well, actually, there are ways around that. But the general point is that you go to the best evidence -- the person you have an opportunity to cross-examine -- rather than a chain of hearsay, where somebody says somebody said somebody said something. But there are exceptions. One is when somebody testifies to the so-called excited utterance of somebody else. This is the excited utterance exception.
And the premise here, although it's not described this way, is fundamentally neuroscientific: that people cannot lie well, effectively, and quickly when they're excited. That could be right. That could be spectacularly wrong. And if it's wrong, that's an example of how neuroscience could challenge some of the premises of the legal system. There are a number of other examples in the evidentiary context that one could pursue.
Detecting, obviously -- this is a new tool. Can it help us detect lies? Can it help us detect things that affect how responsible somebody is? Can it help us detect how damaged someone's brain is, and with what implications, if they've been the victim of an accident and we're trying to calculate damages? Can it help us detect how much pain somebody is really in, as opposed to how much they claim to be in?
There may be some arenas of that sort. It may also help us sort between people who are simply bad and should be punished in a retributive way, and other folks who, we think, would be better served -- and society better served -- through medical interventions, for example. Anyway, there are a variety of contexts in which neuroscience, I think, can potentially be relevant to law. And these are some of the ones I want you to think about.
With respect to further information, there are all sorts of publications coming out that you may be interested in. A lot of them are amalgamated at the website I've put up on the board. But I've been talking for a while. Let me stop there and take questions about this intersection. Thank you for your attention.
[APPLAUSE]
Thank you.
Vanderbilt University law professor Owen Jones reviews the implications of neuroscience for the legal system. How will it change our understanding of will, autonomy, and responsibility? How is neuroscientific evidence used, and what are its limitations? What are the implications for future practice and research? Recorded October 19, 2012.