And welcome to the 47th Bethe Lecture. My name is Jim Alexander. And in the next few minutes, I'm going to tell you a bit about the life and work of Hans Bethe. And then I'll turn it over to my colleague Michelle Wang, who will introduce tonight's speaker.
Well, Hans Bethe was one of the great thinkers of 20th-century physics. He was endowed with an amazing mix of intellectual brilliance and unshakable confidence, extraordinary stamina and a prodigious memory. And he was able to make contributions to just about every field of physics in a career which spanned 75 years.
He began his formal physics training at the age of 20, which was 1926, the year of the birth of quantum mechanics. And he was, of course, eager to get in on the latest thing. And he plunged in, and he mastered these new ideas quite quickly and proceeded to apply them to a very wide range of condensed-matter physics.
By the time he was 25, he had already published 10 papers, of which four still stand today as landmark papers. And by the age of 27, he'd written two book-length reviews for the Handbuch der Physik-- one on the physics of one- and two-electron atoms and one on the behavior of electrons in metals. His approach to writing these reviews was actually to rederive each result for himself, usually extending it in some new direction or to some additional depth which hadn't been reached before. And of course, these reviews were highly valuable and became textbooks for a whole generation of physicists.
In 1933, the Nazi Party came to power in Germany. Bethe, whose mother was Jewish, lost any chance of an academic position in Germany. And he went to England. And there at the University of Manchester, he met Rudolf Peierls and formed a friendship which lasted a lifetime and launched a new direction in his research.
The previous year, Chadwick had discovered the neutron. And for Bethe, this signalled a change in the field. It became a field in which he could bring his calculational skills to bear and do some of the hard work. Within a year, he and Peierls had published a paper on the structure of the deuteron and the short-range nature of the nuclear forces, which, to this day, is one of the great landmark papers of the 20th century. And he was rapidly becoming a world expert, if not the world expert, in all matters of nuclear physics.
And three years later, now at the age of 31, he wrote his 500-page, three-volume review of everything that was known about nuclear physics at that time. Again, in his style, he rederived every result and extended them in new directions, so that what he was nominally reviewing was, in fact, to a large extent original. And this monumental work became known as Bethe's Bible and stood for decades, again, as a source to train another generation of physicists. And if you stop by the library over here and take a look at it, which you should do sometime, you'll be amazed by the language-- the clarity and the economy of Bethe's language and the steadfast but almost leisurely pace at which he unfolds the topic and the subtopics in turn.
And he wrote that here in this building. After his time in Manchester, he came to Cornell, and that was now in 1935. He'd been specifically sought out by this department because of his expertise in nuclear physics.
And he found the collaborative style of the US physics community and the English physics community to be just his style. And indeed, it was fertile ground for him. He later described the 1930s as his most productive decade.
In 1938, after a stimulating conference in Washington, he turned his attention and his encyclopedic knowledge of nuclear physics to the issue of energy generation in stars. And he rather quickly discovered the complex cycle of reactions that powers the sun and for which, 30 years later, he received the Nobel Prize. And then at the outbreak of World War II, he plunged into military problems. And Robert Oppenheimer tapped him to be the leader of the theory group in the Manhattan Project at Los Alamos.
After the war, unlike some of his colleagues, Bethe wasn't regretful about the outcome of the Manhattan Project. But he did devote himself to the cause of arms control, seeking bans on nuclear testing and the general goal of disarmament. And he brought his scientific and calculational talents to bear on many of the critical technical problems which underlay these rather political and social issues. And he served frequently during these years as an adviser to Presidents Eisenhower, Kennedy, and later Johnson.
And during this time-- throughout this very long period of time, in fact-- he shaped the Cornell Physics Department in ways which we still see very clearly today. The openness and the collegiality that we so much enjoy in this department is attributed to Bethe. And he similarly left his imprint on the university as a whole and, indeed, played some pivotal roles in the turbulent times in the 1960s when the university had to wrestle with some difficult situations.
In the late stages of his career, he took up new scientific issues, including the study of supernova mechanisms. And among other things, as an octogenarian, he solved the longstanding problem of the solar-neutrino puzzle, which had stymied a good generation of younger colleagues for more than 20 years. So it's hard to imagine a career of 75 years, let alone a career which was really sustained and uniform in its productivity over that time and of enormous breadth. And yet it really happened, and most of it happened right here.
So the Bethe Lecture Series was instituted in 1977 to honor Hans Bethe and his many contributions to the department. And over the years, it's brought dozens and dozens of extremely distinguished physicists here to speak, giving both technical lectures and public lectures like this one. So on that note, I will turn it over now to my colleague, Michelle Wang, who will introduce tonight's very distinguished Bethe Lecturer.
[APPLAUSE]
I would like to join Jim Alexander in welcoming everybody to this lecture. It's my great honor to introduce Bill Bialek, the John Archibald Wheeler/Battelle Professor in Physics and a member of the Lewis-Sigler Institute for Integrative Genomics at Princeton University. As Jim mentioned, Bill is the Bethe Lecturer for the physics department this year. And this lecture is the last one of the three-lecture series.
This lecture series not only has been highly anticipated by the physics department. It has also brought together people from a broad range of backgrounds, as you can see from the audience here. Bill received his bachelor's degree and PhD from the University of California at Berkeley. After postdocs in Germany and at Santa Barbara, he returned to Berkeley as a faculty member but soon was recruited to the NEC Research Institute. And in 2001, he joined Princeton Physics and has been there ever since.
So Bill is a renowned theoretical physicist with interests in a broad range of topics in the phenomena of life. And he's particularly well known for his approach of seeking general physical principles in living systems, as you've heard in a couple of earlier lectures. And he's widely regarded as a visionary and a primary leader in the field.
And he has won a number of very important awards. I just want to list a few here. He was elected as a fellow of the American Physical Society in 1996, was elected to the National Academy of Sciences in 2012, and won the 2013 Swartz Prize for Theoretical and Computational Neuroscience from the Society for Neuroscience. So besides research, Bill is also widely regarded as an outstanding educator and has won the President's Award for Distinguished Teaching at Princeton University. And he recently wrote a graduate-level textbook titled Biophysics-- Searching for Principles, which I highly recommend.
I was fortunate to have attended his lectures early on, first as a grad student during the Princeton lectures on biophysics held at NEC and then later as a faculty member at a [INAUDIBLE] course on physiology. I hope you will find his lectures as inspiring as I do. The title of his lecture certainly has captured my interest. It's not displayed up there yet. But it's "More Than We Imagined-- A Physicist's View of Life." With that, let's welcome Bill Bialek.
[APPLAUSE]
Thanks, Michelle. Let's see how well this works. OK. Yep, think so.
Thank you. It's a great honor to be here. Somebody once said about Hans Bethe that he had done so many different things that one would not be surprised to learn that there were many people whose name was Hans Bethe.
We associate him with this golden age of nuclear physics. As was mentioned, he described the '30s as his most productive decade. And that's getting to be a while ago, so receding into history. But many of the things that he did have an enormous amount of resonance even today.
So one which maybe is harder to discern is that there is an approximation that he introduced in statistical mechanics-- the Bethe-Peierls approximation-- together with his friend. And if you trace current work in computer science in the attempts to learn models for big data, you will find that there are crucial algorithms for learning these models that depend on a rather sophisticated set of approximations. And if you strip everything away, you'll discover that they are the Bethe-Peierls approximation.
It's pretty remarkable to think that the things that they were doing-- which, if I'm not mistaken, actually come from the '20s-- can reach so far, not just in time but through transformations in how we think about the world. OK. So I still find that quite remarkable. It's also true that Bethe was, as I guess Dyson referred to him, the consummate problem-solver of the 20th century.
Some theoretical physicists are celebrated for their flights of fancy and others for their technical prowess and stick-to-itiveness. And Bethe, certainly, one thinks of him in that latter category of working hard on hard problems. But it's also true that he chose problems often of great beauty and things that were awe-inspiring in the literal sense of the word, his fascination with supernovae being an obvious example-- one of the most spectacular phenomena in nature.
And there's one story about Bethe that I enjoy very much. When he had just worked out the ideas about the nuclear reactions that power the stars, he was out on a date with his wife. And she is reported to have remarked how beautiful the stars were. And he smiled and said, yes, and I'm the only one who knows why they shine.
[LAUGHTER]
So I hope that you will find some of the phenomena that I talk about this evening as romantic as Bethe found the stars.
So let me just start. And then I'll tell you what I'm trying to do later. Let's just pick one-- dive in.
These are bats. And many of you have probably seen them flying around. Perhaps you know that the way in which they find their lunch, like that moth there, is through echolocation. They emit high-frequency pulses of sound, and they listen for the echo. And by measuring the time delay of the echo, they know the distance to the target.
And there are more complicated things they can do. For example, they can hear the Doppler shift because they're moving relative to the target. And they build up a three-dimensional view of the world around them, not the way we do by vision-- they can see, but this isn't primarily how they hunt-- but rather through echolocation.
And there's a remarkable saga of how this was discovered. It took a long time and so on. People were confused. And at some point, people started to ask, well, can I measure how accurately they do this?
So what these bats are doing is they make a call. And then they listen for the echo. So they measure a time. So in a sense, they have a clock, right?
The stopwatch starts when the pulse is emitted, and it stops when the echo comes back. And now they know the distance. So in order to swoop in on their target, they have to be able to measure this distance reasonably accurately. And you might wonder, well, how accurately do they do it?
So many years ago, somebody tried a wonderful experiment in which they had the bat chase not an insect but a mealworm. And if they're hungry enough, they'll eat mealworms. And what they did was to take the mealworm and dip it in flour, not in preparation for sauteing it but before they threw the mealworm up in the air.
You throw the mealworm up in the air, and the bat swoops in. And you may remember that bats are mammals, and so their wings are like our hands. And so when they bring food to their mouth, they actually use their wings to do it.
And so what you see here is that happening. The bat is about to scoop up this mealworm. And so as he does that, since the mealworm has been dusted in flour, it hits the wing. He brings it to his mouth, and he eats it. And of course, that leaves a little spot of flour on his wing.
And now you do it again. You need a new mealworm. Dust it in flour. Throw it up. Dust it in flour. Throw it up. You can do this 10, 15 times.
And now you catch the bat, and you open his wing. And since he's done this 10 or 15 times, you might think that there are 10 or 15 different spots. But that's not true. There's one spot.
And that's because every time he swoops in and catches the mealworm in his wing and brings it to his mouth, he hits exactly the same place. Well, you can't say exactly. You can say exactly within the size of the mealworm-- about that bit.
So that's pretty extraordinary. It tells you that underneath this rather complicated behavior, which to us looks like it's different every time-- because every time he has to fly in a different way-- in fact, there's something incredibly precise and reproducible going on underneath. So I mean, that's a clever experiment. But it has a slight 19th-century feel. So let's see if we can do better.
And Jim Simmons, who's now at Brown, led an effort that took the better part of 15 years to do a better job. And what he did was he trained bats, instead of flying around, to stand here. And he built a device in which there was a microphone which would pick up the sound of his call, introduce a delay, and send it back through a loudspeaker.
And he had instruments on both sides. And on one side, the delay is always the same. And on the other side, the delay alternates between two values, as if the object that you were looking at on this side was bouncing back and forth by a small amount. And then you train the bat to take a step in the direction of the target that bounces back and forth. And then you ask, how small a delay can he actually manage to detect?
So when they first did this experiment, they found that the bats could detect 1 microsecond-- one millionth of a second. That was a little bit surprising but isn't completely surprising because when your-- well, I don't know how it works in this room because I don't know where the loudspeakers are. But if the loudspeakers were off and you were just listening directly to my voice, the way you know where it's coming from-- well, you can see me. That's one thing.
But if you close your eyes and you just listen, you can point to the source of the sound. And at high frequencies-- this is something that Lord Rayleigh worked out around 1900-- at high frequencies, you can use the fact that your head casts an acoustic shadow. So if the sound is coming from over here, it'll be louder at this ear than it is at this ear. But at low frequencies, that's not true because the wavelength of the sound is larger than your head.
At that point, the only way you can tell where the sound is coming from is to listen for the difference in timing. If the sound is coming from over here, it gets to this ear before it gets to this ear. And if you're listening to a sound that's right in front of you, then it gets to the two ears at the same time. And if it moves by a degree or so, then the difference in time between your two ears is a few microseconds-- a few millionths of a second-- and you can detect that.
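To put rough numbers on this, here is a minimal Python sketch of the interaural-time-difference arithmetic. The ear spacing and sound speed are assumed round values, not figures from the lecture, so the exact microsecond count depends on head geometry.

```python
import math

# Interaural time difference: sound arriving from an angle theta off
# center travels roughly an extra d*sin(theta) to reach the far ear.
c = 343.0   # speed of sound in air, m/s (assumed)
d = 0.18    # ear-to-ear distance, m (assumed)

for theta_deg in (1, 5, 30):
    itd = d * math.sin(math.radians(theta_deg)) / c
    print(f"{theta_deg:>2} degrees off center -> ITD ~ {itd*1e6:5.1f} microseconds")
```

With these assumed numbers, a source one degree off center gives a difference of several millionths of a second between the two ears.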
If you were a barn owl, who actually made his living by listening for the little mice rustling underneath the leaves, then you could do a little better. You could do 1 microsecond, not 3 or 4, like us. So the fact that the bat could do 1 microsecond was impressive but not astonishing.
There were some other features of the data that led Simmons and his colleagues to think that the calculation the bat was doing was really very sophisticated. And the details don't quite matter. But what is important is that when they proposed that, there was another group that said that he had to be completely wrong because if the bat could really do that sophisticated calculation that would make use of all of the information that was in the sound pulse, then rather than being able to detect 1 microsecond, he could detect 10 billionths of a second. And that was so obviously absurd that it was an argument against Simmons' interpretation of the data-- 10 billionths of a second.
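It may help to translate these timing numbers into distances. A quick sketch, assuming a round value for the speed of sound; the echo path is out and back, hence the factor of 2:

```python
c = 340.0  # speed of sound in air, m/s (assumed round value)

for dt in (1e-6, 10e-9):           # 1 microsecond, 10 nanoseconds
    dr = c * dt / 2                # out-and-back path, so divide by 2
    print(f"timing jitter {dt:.0e} s -> range jitter {dr*1e6:7.2f} micrometers")
```

So 1 microsecond corresponds to a range difference of about a sixth of a millimeter, and 10 nanoseconds to under 2 micrometers, far smaller than the bat itself.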
So Simmons went back in the lab and pushed on the experiments and pushed on the bats to convince them to work harder. And by 1990, they produced these data, which show that when this jitter in timing is 10 billionths of a second, the bat gets it right about 75% of the time. Furthermore, if you ask the bat to do this in a room that's noisy in the background, then as you increase the level of background noise, it gets harder and harder.
You all know this. If you're trying to identify the source of a sound, you find that it's more difficult when it's noisy in the background. What you may not realize is that if I tell you how big that noise is-- how big that random rumbling in the background is-- and I tell you what the size of the pulse is that you're listening for, then I can calculate exactly how much extra trouble you're going to have because of the noise.
And when you do this experiment with the bats, the bats' performance tracks that limit perfectly. So not only is he capable of measuring a delay with the precision of 10 billionths of a second, it's also true that that's down at the limit of what's allowed given the noise level in the room around him. So in a way, he couldn't possibly do better.
So this is the first example that I want to give you of a biological system that manages to push right up to the edge of what the laws of physics allow. And I always like to start with this example because we don't have the slightest idea how it works. I think sometimes in public lectures about science we're so eager to tell you about how much we've accomplished as scientists that we forget to mention that there are lots of things we don't understand. And of course, some of those are the most exciting.
So despite the fact that that's now 25 years ago, we don't really understand how this works. I don't want to say there are no ideas, right. We're not sure. So what I'm going to do for the rest of the evening-- not all of it but the time allotted-- is take you on a tour of a few more phenomena of this flavor. And I hope by the end to give you a sense of how remarkable it is that the organisms all around us-- and indeed we, ourselves-- can perform these tasks which are so essential for our survival right at the edge of what physics tells us is possible.
So I'm going to plunge into some more examples, but this is a moment to note that these ideas are things that I've been curious about for a very, very long time, and over the years I've had the good fortune to work with lots of wonderful colleagues, both my contemporaries and students and postdoctoral fellows, many of whom are now professors in their own right. And it was one of Bethe's contemporaries, Weisskopf, who I guess titled his autobiography A Privileged Life. It's an opportunity to remember the privilege of being a physicist. Part of it is to work with bright young people who grow up and do things on their own.
So let's talk about vision. And let's start not with our visual system but with a visual system that's perhaps a little less familiar-- the visual system of a fly. Now when you look down on the head of a fly, most of what you see are eyes. There's some scary-looking mouth parts when you blow them up to this size but mostly eyes.
And if you look closely, you'll find that there are lots of little lenses. And if you think about one eye having one lens, you might be tempted to say that it has many eyes, which, if you then think about how our eyes work, can lead you to all sorts of misconceptions. This is really quite a remarkable thing because, as you know, Gary Larson-- I guess perhaps also receding into history a little bit since he's retired from writing The Far Side-- although he plays with issues in natural history in enjoyable ways, he often gets things right, or at least gets them right enough that you know where he's joking about being wrong. There's that remarkable passage where he says that he committed the sin of putting hominids and dinosaurs in the same cartoon. This one, though, is completely wrong.
So let's think about how our eyes work. In our eyes, there is the surface of the eye-- the cornea. There's a lens. And on the back of our eye where the retina is, there are the cells that actually absorb light and produce the electrical signal, which eventually gets sent to our brain.
And so each photoreceptor cell-- each photodetector element, like a pixel in your digital camera-- is looking out through a single lens. They're just looking in different directions. You'll notice that this design for your eye has an obvious defect, which is that there's all this empty space in between.
And if you'll pardon the image, you realize that this empty space in the eye of a reasonable-size creature like us is almost large enough to contain an entire fly. So we'll wait for that one to sink in.
And you then realize that it would be a very bad idea for flies to try to build an eye with this structure because this empty space would be gigantic. So instead, what they do is they give each photoreceptor-- approximately. This isn't exactly right. But it's close.
Each photoreceptor has its own lens. And the lenses are on a curved surface-- the surface of the head-- so they each look out in a different direction. OK? So each pixel of the fly's detector array sees a different direction, just like in us. It's just that they each have their own private lens instead of one big lens that they all share. OK?
So how should nature build such a device? Well, there's a lot of physics here. So the first thing you might think-- well, some of you may know the joke about the father who sent his son off to school to study physics and thought he should be able to do something useful, like help him bet on horses. Do you know this one? Anyhow, the punchline is: I've worked out the case of the spherical horse.
[LAUGHTER]
You can imagine the part in between. So let's do the case of the spherical head, which by the way is a much better approximation than a spherical horse.
[LAUGHTER]
So what's happening is that on the surface of the fly's head, there are these little lenses. And if you think about it, what they are doing, by putting down all these little lenses, is dividing the world up into little pixels. And the angular size of the pixel-- this angle-- is basically the ratio of the size of the lens to the radius of the head.
So that says something about when you go to buy a camera. Ten years ago, you bought 1-megapixel cameras. And now you buy 13-megapixel cameras or whatever they are. So in this view, if you want more pixels, you make smaller lenses.
And that's mostly right. But it's not the whole story because light is a wave. And if you try to send a wave through a very small hole, a hole whose size is comparable to the wavelength, then you get a phenomenon called diffraction.
So this is an experiment done in a water tank. So it doesn't matter that it's waves of light. Water waves work just as well. You shake something in the tank to make these waves. These are the crests of the waves coming from this side.
When they reach this hole, you see that what happens is instead of them going straight through, they bend. And they go out in a cone, which is much larger, right, than the-- so you might have thought that by making this small hole, you would only see things along that line. But that's not true. You also end up sending waves out in this direction.
And so correspondingly, if you make the lenses too small, diffraction will become important. And essentially what you see through any lens will become blurry. And you'll effectively integrate over a much larger area, even though your hole is really tiny.
So what does that mean? It means that if you plot the smallest things that you can resolve in the image in angle versus the size of the lens, if you make the pixels too big, then you're just putting lots of details in the same pixel. And that doesn't do you any good. But if you make the pixels too small, you get this diffraction blur.
This part depends on the size of your head. But this part depends on the wavelength of the light. So the best you can do is to compromise at this point where these two effects balance. And that predicts that the size of the lens should be the square root of the wavelength of light times the size of your head.
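The compromise can be written down directly: the total angular blur is roughly d/R from the pixel size plus lambda/d from diffraction, and minimizing the sum gives d = sqrt(lambda * R). A minimal sketch, with illustrative numbers in the spirit of Feynman's bee (the head radius is an assumption, not a value from the lecture):

```python
import math

lam = 0.5e-6   # wavelength of visible light, m
R = 2.0e-3     # head/eye radius for a bee-sized insect, m (assumed)

# Analytic optimum: d/R = lam/d  =>  d* = sqrt(lam * R)
d_opt = math.sqrt(lam * R)
print(f"optimal lens size ~ {d_opt*1e6:.0f} micrometers")

# Numerical check: scan lens sizes and minimize total blur d/R + lam/d
blur, d_best = min((d / R + lam / d, d)
                   for d in (i * 1e-6 for i in range(5, 101)))
print(f"numerical minimum near {d_best*1e6:.0f} micrometers")
```

Both give a few tens of micrometers, the scale of the facets in Barlow's data below.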
So some of you perhaps thought that having left middle school, you would never hear about square roots again. But here we are. So this is an interesting prediction. It says that if I look at different insects which have different size heads, if they're really pushing to build eyes that have the best possible resolution-- to see as much detail as they can possibly see-- then the size of the lenses they make should be related to the size of their head in this simple way, where this is the wavelength of the typical light that they're seeing.
So this argument actually was first given-- or something equivalent to this argument was first given in the 1890s by Mallock, which is impressive if you think that we only really understood the phenomenon of diffraction perhaps in the 1870s. I don't know exactly how you want to count. So around the time that people were thinking about the limits to resolution in a microscope and so on, somebody understood that insects have the same problem.
The argument reappears in the Feynman lectures-- his undergraduate lectures on physics-- where he plugs in the numbers for a bee, shows that you get the right answer to within 10%, and is very happy and moves on. What he doesn't note is what Horace Barlow did in 1952. Barlow, who's a remarkable character in his own right, made enormous contributions to our understanding of vision. This was near the beginning of his career. He's still contributing past 90.
He decided to go into the drawers of the Comparative Zoology Museum in Cambridge and measure on many different insects the size of their head and the size of the lenses. And what he found is in this graph where he plotted the size of the lenses versus the square root of the size of the head. And you can see that they're basically proportional to each other, as this equation would predict. And in fact, although he didn't make a big point out of it, the factor out in front is actually right, as well.
So what this tells you is that when you look at nature, one of the things that biologists like to torment physicists with is the enormous variety. In physics, we like theories that are compact. They're simple. They're predictive, but they don't generate so many different possibilities.
In biology, everything happens, right-- this incredible variety. So what's beautiful about this graph, which now is 60 some-odd years old, is that these data points reflect the diversity of insect life. There are species of insects whose lenses are less than 10 microns in size. There are ones that are 35 microns in size. Correspondingly, the size of their heads vary over more than a factor of 10, so I invite you to imagine these behemoths over here.
The pattern of this diversity is something you can understand from very basic physical principles. In fact, you can do more than this because it turns out that not all insects follow this rule. All the photographers in the room know this: if things are very dark outside and you can't lengthen your exposure time, then the images are going to become a bit grainy. The way to solve this is to smooth them out, which means you sacrifice spatial resolution in exchange for intensity resolution.
So if you look at the insects that fit this pattern, there are the ones that fly in the noonday sun. The ones that fly just at dusk or even more at night, they don't do this. They make bigger pixels to collect more light. So not only do you get this very basic picture. You can even understand the deviations from it.
So let's think about the problem of the dark of night. And again, the goal is to do a controlled experiment. So you sit down in a very dark room. And you wait, and somebody flashes a light and asks you whether you see it.
Well, if the light is very bright, there's no problem. You see it every time. And if the light is very, very dim, you never see it. The interesting thing is this world of uncertainty in between, where sometimes you see it and sometimes you don't.
Now you might think that arises because of our human frailties. We can't sustain paying attention all the time. And so even though we're trying, we fade out for a moment, and we miss it. So you try harder.
And then you have some nasty experimentalist who occasionally makes a click that says you should expect a flash, but there's no flash. And if you ever say yes, he sends you home. OK? So you learn not to say yes when there's nothing there. And you work harder and harder. But you can never quite get rid of this range where sometimes you see it and sometimes you don't.
But what's interesting is that if you plot how often you see the flash as a function of how bright it is, there is some fairly reproducible relationship that says it's not just that, well, down here you never see it. Up here, you see it and, I don't know, something complicated happens in between. No, there's a very definite curve that you seem to follow. And all people seem to follow the same curve.
So where does this come from? Does it come from us? Or does it come from the physics of the world?
Well, the story of resolution in the fly's eye is about the wave nature of light. But you also know that light has a particle nature. It's quantized into photons. And when you shine light on something, the individual photons are absorbed at random.
So even if you deliver a flash of light that has exactly the same intensity in the usual sense that we mean it, the number of photons that are absorbed will be random. And we know how to calculate the probability that one photon is absorbed, two photons are absorbed, and so on. This curve is calculated on the assumption that you're willing to raise your hand and say yes when you count up to six. And that's exactly what you see.
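A minimal sketch of that calculation: photon absorptions are Poisson-distributed, and "yes" means counting at least six. (The real analyses also fit an efficiency factor relating flash intensity to mean absorbed photons; that's omitted here for simplicity.)

```python
import math

def poisson_pmf(k, mean):
    return math.exp(-mean) * mean**k / math.factorial(k)

def prob_seeing(mean_photons, threshold=6):
    """Probability of absorbing at least `threshold` photons."""
    return 1.0 - sum(poisson_pmf(k, mean_photons) for k in range(threshold))

# Frequency-of-seeing curve: gradual, and the same for everyone,
# because it comes from photon statistics, not from human frailty.
for mean in (1, 2, 4, 6, 8, 12):
    print(f"mean absorbed = {mean:>2} -> P(see) = {prob_seeing(mean):.2f}")
```

The curve rises gradually with intensity, which is exactly the reproducible region of "sometimes you see it and sometimes you don't."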
Well, if you can say yes when you count up to six, maybe you can count the individual photons. So you'll notice that's 1942. People started getting worried about other things around then. And the field got derailed.
In 1972, there's this remarkable experiment, actually by a student of Barlow's-- Barbara Sakitt. And what she did was she said, OK, instead of just telling me yes or no, tell me how bright you thought the flash was. And by "tell me how bright," I want you to give me a number between 0 and 6.
So the first thing you notice is that if you measure how intense the flash is and you ask, on average, what do people say, they're proportional to each other. But what's even more remarkable is that if you look at one particular intensity of light and you ask, how often do they say zero, one, two, three, and so on, the variability of that number-- the variance, for those of you who know these words-- is equal to the mean. And the whole distribution has pretty accurately-- not perfect, but pretty accurately-- the shape that you would expect if what the person was doing was just counting the photons as they arrived.
There's another subtlety to this which is that even when the flash has no light in it, people will say something nonzero. Usually they say zero, but sometimes they'll say one. Hold on to that thought for a moment.
The way all these experiments are done, the light flash hits a region of your retina that has several hundred cells in it-- several hundred of the cells that are responsive to light. But if there are five or six photons that hit 500 cells, the chance that two of them fell into one cell is only 1%. So that means that if this is really working, it must be true that the individual cells are responding to one photon.
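That "chance that two fell into one cell" is a birthday-problem calculation. A quick sketch; the photon count is whatever the flash delivers, here taken as five or six:

```python
def p_two_in_one_cell(n_photons, n_cells):
    """Probability that at least two of n photons land in the same cell,
    assuming each photon independently hits one of n_cells at random."""
    p_all_distinct = 1.0
    for i in range(n_photons):
        p_all_distinct *= (n_cells - i) / n_cells
    return 1.0 - p_all_distinct

for n in (5, 6):
    print(f"{n} photons on 500 cells -> P(coincidence) = "
          f"{p_two_in_one_cell(n, 500):.1%}")
```

With these assumptions it comes out at a couple of percent; the exact figure depends on the assumed count, but it's small either way, so to good approximation each responding cell saw a single photon.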
And in the late 1970s, people finally succeeded in detecting those signals by taking one of these so-called rod cells from your retina, or from the retina of a toad, and measuring the current that flowed across the membrane of the cell and seeing the individual pulses that come every time the system absorbs one photon. And seeing that if you flash the light, sometimes you get a pulse of height one, and sometimes you get a pulse of height two, and sometimes you get a pulse of height three because photons are absorbed at random. Sometimes you get one. Sometimes you get two. Sometimes you get three, and of course, sometimes you get zero.
There's a lot of stuff going on in this cell. There's a lot of stuff that's the same as what goes on in other cells. But there are special things because it's the cell that's sensitive to light. The way it starts out to be sensitive to light is to pack into this so-called outer segment a pigment molecule called rhodopsin, which consists of a big protein that's synthesized by the cell and a smaller molecule, which actually does the business of absorbing the light, which is called retinal. And it's a derivative of vitamin A, which is why eating carrots is good for your eyes.
And at Cornell, it seems relevant to mention that there are billions and billions of these molecules in the outer segment of the cell. And that's actually really important. If there weren't billions of them, then you wouldn't actually succeed in absorbing the light. It would just pass right through.
But the fact that there are billions of these molecules causes a problem. When the molecule absorbs light, it changes its structure. And that structural change triggers all the events that eventually lead to an electrical signal, which gets transmitted to the other cells in the retina, which eventually finds its way into your brain, which is somewhere in the sub-basement. But with a billion molecules, there's a chance that one of them will, just as the result of being jostled by the water molecules around it, undergo that structural change spontaneously, without absorbing light.
So how often does that happen? Well, the answer is once every 1,000 years or so. So you think, that's not a problem. But it is a problem because there's a billion molecules. So that corresponds to once a minute.
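The arithmetic behind "once a minute" is worth seeing; a one-liner with the round numbers from the talk:

```python
molecules = 1e9                                    # rhodopsins per rod
minutes_per_1000_years = 1000 * 365.25 * 24 * 60   # ~5.3e8 minutes

events_per_minute = molecules / minutes_per_1000_years
print(f"~{events_per_minute:.1f} spontaneous events per minute per cell")
```

With these round numbers it's about two per minute, i.e. of order once a minute.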
And that is finally why you have to count up to six because if you didn't count up to six, you wouldn't be sure that those extra counts came from the outside world and not from the random events that are happening inside the cell. And you'll remember that if you actually ask them to give you a count, not yes or no, but tell me how bright you thought the flash was, even when the flash has zero physical brightness, they give you a nonzero number. That's the number they give you.
So what this tells you is that your perception of light in the dark of night is so precise that you're counting every photon that arrives at your retina or every photon that triggers a change in the structure of rhodopsin. And the precision with which you can do that is ultimately limited by the fact that once every 1,000 years, the rhodopsin molecule itself does something funny. And everything that happens after that is essentially perfect.
But it's astonishing that you can have a cell that has a billion molecules in it. And when one of them changes its structure, the cell can tell the difference. It's as if you were surrounded by a billion molecules of some perfume. And somebody added one molecule that was a little different, and you could smell it.
So you might wonder, right, light is special. There are only a few parts of biology where light is what's being sensed. But there are lots of things in biology where what you're doing is sensing a molecule-- a chemical signal. So are there other examples where you get down to the point where you're counting single molecules?
So in order to do this, let's look at something very different-- again, chosen in part just for its beauty. This is the embryo of a fruit fly. It's about half a millimeter long.
Here, they've been stopped 15 minutes apart. At this moment in development, all the cells look alike. 15 minutes later, you'll notice that there's a line right here, and there's a line in here.
Let's focus on this one. It's called the cephalic furrow. This is going to become the head of the fly. This is going to become the rest of the body.
Here you see a movie of this process live, in which some of the proteins that live near the membranes of the cells have been genetically engineered so that they're fluorescent. And now you can see this line forming. And here you're looking at the side of the embryo, much as you are in these stills.
Here there's been a trick in which what you're doing is looking from the top down on the embryo. But you focus the microscope in a plane in the middle of the embryo, so you don't actually see the top or the bottom. You see only a slice through the middle. And all of the nuclei of the cells have moved to the surface at this point in development. And so you can see this furrow forming right here.
So this looks complicated, and it looks messy. But you might ask, is it really messy? Or maybe it's precise after all. So my colleague Thomas Gregor did this beautiful experiment of taking 96 embryos and putting them on the same slide and putting them under the microscope and filming them, much as he filmed this one, but 96 times in parallel. And he tried to capture the images from each one of them. This is not so easy to see.
But you'll notice that there's a little dent for the beginning of the cephalic furrow here. There's the beginning of the cephalic furrow there-- this piece there. And you'll notice that they all line up. And I've tried to attract your attention to that with this dashed line. I'm not sure how successful that is.
But if you actually measure carefully where this invagination occurs-- where this furrow forms-- what you find is that even if you look at 100 embryos, the jitter-- if I call this point 0 and this point 1, this point is about 30% of the way from there to there. But over 96 embryos, it's 30% plus or minus 1%.
And if you count how many rows of cells there are, you'll notice that there's about 100 rows. So what that means is that this furrow is being positioned with an accuracy of one cell. So this cell knows that it's supposed to be part of the head. And that cell right next to it knows that it's supposed to be part of the rest of the body. And that's true in every single egg that this mother lays-- precisely, plus or minus one cell.
So how on earth does the embryo do that? Well, one of the great triumphs of modern molecular biology and genetics is to understand how this works, at least in outline. The mother sets up what in physics we would call boundary conditions.
She puts, for example, the messenger RNA for particular molecules at cardinal points in the embryo. There's a different molecule that she puts here and here and here. This messenger RNA gets translated into a protein, which can move through the embryo.
And that produces this floor plan, if you will. You know when you're in the hospital and you don't know where you're going because everything is painted the same color, they put something on the ground that you can follow. So here, the scheme is not that there's a path where you follow the blue line. Rather, if you see a very bright green thing, then you know that you're near the head. And if it's very dark, you know that you're near the tail.
And of course, we've made it green by staining it. Just so that you know how this works, you take the embryo. You stop the action by-- not to put too fine a point on it-- gently cooking the egg. And you then bathe it in antibodies that have been tagged with a fluorescent molecule so that you can see them. And you see this very gradual signal.
Now it turns out that there's a few of these molecules. And these are molecules that have a very special function. They go inside the nucleus of a cell, and they bind at special places along the DNA. And when they do that, they can trigger the reading out of the genetic information from the gene that's nearby, which means they can cause the synthesis of another protein.
So one of those proteins is shown here in red. And you'll notice that what was a very gradual pattern has turned into a very sharp pattern. And so in effect, what you've done is to draw a line and say, this is the front half of the embryo. That's the back half of the embryo.
And so that's the beginning of laying down a blueprint. You need to do more than that. And so indeed, that's not just one of these molecules. There are several that divide things up in different ways but still very broad patterns.
And then those molecules feed in. They do the same thing over again. They turn on and off other genes. And if you look at the proteins that are encoded by those genes, you find that they come in these beautiful stripes.
Now the product of fruit fly development is a maggot. You then have to wait for metamorphosis to make the fly. Maggots are things that you probably haven't spent a lot of time looking at unless you like forensics. But it's the same principle for caterpillars that become butterflies, and that's somehow more charming.
So you may have looked at caterpillars and noticed that their bodies come in segments. And if you came away from this talk thinking that the segments of the body correspond to the stripes here, that wouldn't be so far off. It's not exactly right, but it's pretty close.
Certainly, it's true that these stripes-- which say the green protein is in high concentration here and here, the orange protein is in high concentration there and there, and so on-- if you blow up this picture and align it with the picture with the furrow, where this is going to become the head and that's going to become the rest of the body, you can see that the line where the furrow is is the line of cells at the border between green and orange. So what that means is that these molecules provide the blueprint for making the body plan of the whole organism. What does this have to do with counting molecules?
Well, in order to put this in the right place, plus or minus one cell, you have to put this stripe in the right place, plus or minus one cell. But who told you to put this stripe here? Well, that was related to the signal in these molecules, which was related to the signal in these molecules.
So let's take these primary morphogens and blow the picture up a little bit. So here, what's been done is to genetically engineer a fly so that whenever it makes this molecule, it's always got a fluorescent molecule attached to it. So if you shine light on it, it glows. And how bright it glows tells you how many molecules there are.
But you notice that two nuclei that are right next to each other glow almost with the same brightness. And so you might worry about how reproducible that is, or whatever. But you can be careful. If you zoom in, what you find out is that in each nucleus there may be 1,000 molecules.
And the difference between two guys that are next to each other is about 10%, so 100 out of 1,000. So you think, I don't know, 100 out of 1,000-- I should be able to do that, right? If you show me 900 dots versus 1,000 dots, I can probably tell the difference.
But that's not how it works. There's no way for the embryo to stand outside the nucleus and count all the molecules inside. The way the molecules work is that they bind DNA. So these molecules that are floating around inside have to find their way to some special target somewhere on the DNA deep inside that nucleus.
Now a nucleus is about 6 microns across, and the target on the DNA is 3 nanometers. So for those of you-- actually, for any of you, even if you know offhand what these numbers mean-- it might be useful to scale things up to a more human view. So let's imagine that the size of the nucleus is the size of the Cornell campus-- about a kilometer across. Then the target that you're trying to hit is about half a meter-- so this big.
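The scale-up is simple proportion; a tiny sketch with the numbers from the talk:

```python
nucleus = 6e-6    # nucleus diameter, m
target = 3e-9     # DNA binding-site size, m
campus = 1000.0   # "Cornell campus" scale, m (~1 km across)

magnification = campus / nucleus
print(f"magnification ~ {magnification:.1e}")
print(f"target at campus scale ~ {target * magnification:.2f} m")  # ~0.5 m
```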
So what does this mean? It means that the way in which the cell can respond to the chemical signals carried by these molecules is equivalent to trying to figure out how many students there are on campus by standing still and counting how many people walk past you. Right? People are milling all around the campus, and some of them will walk past you. You're the little target on the DNA.
Now you realize that it would help if you were bigger because then you'd catch more people. You don't get to count the guy over there. You only count the people who come here. But you're not. You're only this big.
And if you could count for longer, that would help, too. But we know that this whole process is actually very fast. The nuclei are doubling again and again and again. And there's only a few minutes in which the cells have to make decisions.
So these words about why it's difficult can actually be translated into equations. And we can ask, given that all you can do is to count the guys who come right near you and you only have a limited amount of time to count, can you tell the difference between having 1,000 molecules and 900 molecules scattered across the entire campus? And the answer is just barely.
If you count every single molecule that you encounter that hits that relevant part of DNA, you'll just make it. You'll just be able to tell the difference. And even that actually only works because of some special features of how the whole system is set up. Essentially, the nuclei can talk to each other a little bit, just as if you planted another friend somewhere else on campus and conferred with them to find out how many people they counted. And that's actually essential.
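Those words can indeed be turned into an equation. The Berg-Purcell form of the limit is delta_c/c ~ 1/sqrt(pi * D * a * c * T) for a target of size a reading a concentration c over a time T. Here is a hedged sketch; the diffusion constant and integration time are illustrative assumptions, not values given in the lecture:

```python
import math

D = 1.0      # diffusion constant, um^2/s (assumed)
a = 3e-3     # target size, um (3 nanometers)
T = 60.0     # integration time, s (assumed: only minutes are available)

# ~1,000 molecules in a nucleus ~6 um across:
volume = (4.0 / 3.0) * math.pi * 3.0**3   # um^3, for radius 3 um
c = 1000.0 / volume                       # molecules per um^3

noise = 1.0 / math.sqrt(math.pi * D * a * c * T)
print(f"single-target readout noise ~ {noise:.0%} (need ~10%)")

# Averaging N roughly independent readouts (time points, neighboring
# nuclei) improves this by sqrt(N) -- the nuclei "talking to each other":
for N in (1, 10, 40):
    print(f"N = {N:>2} -> effective noise ~ {noise / math.sqrt(N):.0%}")
```

With these assumed numbers, a single site alone misses the needed 10%, and spatial and temporal averaging just closes the gap-- which is the sense in which the embryo "just makes it."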
So this is another example where the precision with which the embryo can draw a line is so high that it corresponds, essentially, to having, at some point in the process, to count every single molecular event that happens. And since individual molecules move at random, if you don't count up enough of these events then your estimate will be random. And it's only by counting up to the right number that you can squeeze that randomness down-- that you can pinpoint every cell along the length of the embryo.
So I've given you a few examples of biological systems that operate near the limits that are allowed by the laws of physics. There are more examples. In fact, this example of molecule counting that I've been talking about has its origins not in thinking about embryos but in thinking about how bacteria find their way to sources of food.
They measure, for example, the concentration of sugar in their environment. And as they move along, they count the molecules of sugar that hit their surface. And if that number is going up, then they must be going toward the source of food. If the number is going down, they're going away from it, and they should try another direction. And again, there Howard Berg and Ed Purcell showed that the precision with which bacteria do this is so high that they have, essentially, to be counting every single molecular event.
I talked a lot about vision. The example of hearing that I talked about was from bats, which is a bit special. But generically, suppose you do an experiment like the one of sitting in a dark room, except that instead of waiting for a flash of light, you wait for a very quiet sound. And you ask, at the moment when you can just hear the sound, how much is, for example, your eardrum moving? The answer is the diameter of an atom.
And if you now ask what's going on inside your ear that makes this possible, there are special cells that have little hairs sticking up from them which vibrate. But these are very small objects. They're the size of a micron.
If you look under the microscope at a micron-sized object, you'll see that it's moving around at random. It's called Brownian motion. And although the details are a little fuzzy because they got worked out over decades, it's pretty clear that the quietest sounds that you can hear are just big enough that they move things a little bit more than the Brownian motion of these hairs-- but only a little bit more. And if it didn't move them more, you wouldn't be able to tell the difference between the random motion that just happens from sitting in water and the driven motion that comes from listening to a sound.
The information that your sensory systems collect comes in many forms. But once it gets transmitted into your brain, it gets translated always into the same form. All of the nerve cells in your brain generate these discrete electrical pulses called action potentials or spikes. And because they're discrete, there's a limit to how much information they can carry. And there's a lot of work from my colleagues and me, but also from many other people, showing that the rules that nerve cells use for translating their input signals-- the sounds and visual signals, the signals that you get from the pressure on your fingers when you touch something-- into these sequences of pulses are such that they serve to optimize how much information can be packed into every single one of those action potentials.
There's another example that I like, which is the example of estimating motion in vision, which we do all the time, right? You see that there's something moving. There's nothing on the surface of your retina that responds to motion directly. That's something you have to compute.
For the same reasons that we talked about with noise in the background for the bat, the random arrival of photons at your retina sets a limit to how precisely you can estimate this motion. And other sources of noise in the retina do the same when lights are brighter. And there's a very important thing to remember, which is that when you try to process signals that aren't themselves perfect-- they're a little bit noisy or corrupted-- you have to do something to insulate yourself against that randomness. And when you do that, you inevitably introduce systematic errors.
So a simple example is that if you try to look at snapshots of something, if you stop and look at one frame of a video, it looks blurry and complicated. And so when you average over many frames, you can pick out something that's coherent. But if you average for too long, then you will smooth out the things that really were changing.
So there's always a trade-off. If you average for too long, you miss something. And if you average for not long enough, you're susceptible to a lot of noise. There's no such thing as zero errors. All you can do is trade your random errors against your systematic errors.
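A toy version of that trade-off in code: estimate a slowly drifting signal from noisy frames with a sliding average of W frames. The signal, noise level, and window sizes are all made up for illustration.

```python
import math, random

random.seed(1)
T = 2000
signal = [math.sin(2 * math.pi * t / 200) for t in range(T)]   # slow drift
frames = [s + random.gauss(0, 0.5) for s in signal]            # noisy frames

for W in (1, 5, 25, 100, 400):
    err2 = sum((sum(frames[t - W:t]) / W - signal[t]) ** 2
               for t in range(W, T))
    print(f"window {W:>3} frames -> rms error {math.sqrt(err2 / (T - W)):.2f}")
```

The error is smallest at an intermediate window: shorter windows let the noise through (random error), and longer ones smooth out the real change (systematic error).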
And so that means that if you do the best you can, you'll still make errors. They'll be as small as possible. But if I now create an artificial situation, I can make a signal that in some sense goes right for the weak point of your calculation and only generate the systematic errors.
And in perception, those things have a name. They're called illusions. And I should assure you that there's nothing actually moving here. There are no moving objects. These are randomly flickering pixels.
And the only reason you see them moving is because I know what calculation you're doing. You're doing the calculation that generates the best estimate for motion under almost all conditions you find in nature-- but not here. [LAUGHS] OK. And so I can give you the illusion of motion, first in one direction, then a little more slowly, and finally not at all, and then even in the opposite direction. OK. So an important aspect of doing as well as you can is that it doesn't mean you'll always get the right answer. You just get the best possible answer given the conditions under which you operate.
But I have to take the last few minutes and address something that's very important. In all of this, I've been trying to convince you that biology is so perfect. It's not so long ago that Jim Watson wrote, as the headline of one of the sections in Molecular Biology of the Gene, "biology obeys the laws of physics and chemistry"-- and that was hard fought. OK. The notion that there was a life force isn't that old.
So what you're seeing here is examples where not only does biology obey the laws of physics and chemistry, but they come right up to the edge of what's allowed. There's a kind of optimality or perfection. But to err is human. We're very aware of our imperfections.
So how do I reconcile all of these examples where biology works so perfectly with our everyday experience? I'm not sure I know the answer. But let's try.
So here's a problem that is a favorite for illustrating our foibles. Your doctor orders a test for a fatal disease. Unfortunately, it comes out positive. So then you say, well, all right. Let's check. Tests have errors.
So what's the false-positive rate? What's the error? And the answer is 1 in 100. It's 1%.
You should also know the disease is rare. Only one in 10,000 people get it. So the question is: should you go home and make out your will?
Now I'm not going to ask for a show of hands. But let me tell you the standard story that you'll find in the textbooks. Apparently, most people will say they're doomed because there's only a 1% chance that the test is wrong. OK? Too bad.
What's worse, however, is that the textbook tells you that you were foolish for thinking that because if you test 10,000 people and there is a 1 in 100 error rate, then 100 people will test positive. But you know that only one of them has the disease. So that must mean that, given the result of this test, there's only a 1 in 100 chance that you actually have the disease. Therefore, you shouldn't worry.
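The textbook calculation is Bayes' theorem; a minimal sketch, assuming for simplicity that the test never misses a real case:

```python
prior = 1e-4        # 1 in 10,000 people have the disease
false_pos = 0.01    # 1% false-positive rate
sensitivity = 1.0   # assumed: the test catches every real case

p_positive = sensitivity * prior + false_pos * (1 - prior)
posterior = sensitivity * prior / p_positive
print(f"P(disease | positive test) = {posterior:.1%}")   # ~1.0%
```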
And your inability to realize that is taken as a marker of your errors in reasoning about probability. OK? Now goodness knows, human beings make many errors in reasoning about probability. There are whole industries devoted to exploiting these errors. However, this is summarized by saying that people cannot take account of prior information, the prior information being that only 1 in 10,000 people get the disease.
Now I find this whole line of work very problematic because as far as I can tell, if I can't make good estimates of probabilities, I can't cross the street. Right? Every time I cross the street, I'm making an estimate of the odds of getting hit by oncoming traffic. And somehow, we've all managed to make it here this evening.
I think the thing that's wrong with this argument is that it neglects a very important source of prior information. Your doctor ordered the test. Why were you at the doctor? Well, something's wrong. Why did they order the test for a fatal disease? Presumably because they've eliminated all of the easier things.
So in fact, there's an enormous amount of information hiding in the formulation of this. And so if we're ever going to make any progress figuring out whether people are good at solving this kind of problem, we have to find an example which is better posed than this, right? I don't know how to measure how much information is contained in the fact that your doctor ordered the test. But obviously, you're taking account of that when you listen to the results.
So here's an experiment that somebody tried. They said, well, all right. It's kind of the same spirit. It's about randomness and order, right? Can I make estimates of probabilities? Let's look at a bunch of coin flips.
So if you see 10 coin flips in a row and you see that, you might ask to see the coin because you suspect there's something funny going on. But also, you might wonder about this one, where you see tails, heads, tails, heads, tails, heads. That's a kind of order, too.
So if I asked you which one of these actually comes from a fair coin and which one comes from something else-- I don't know exactly what's going on, right? But in fact, the experiment was set up so that the sequences on this side came from a funny coin that remembers its last flip and tends to do the same thing on the next flip. Again, a coin you shouldn't bet with, but that was how it worked.
And that makes sense. Your sense that there was something funny going on with this long string of heads was right. It's not that the coin was biased to be heads. It's that it was biased to do the same thing over and over again.
However, you'll notice that despite the fact that it's biased to do that, sometimes it does the opposite and alternates. On the other side, it really was a fair coin. And every time the coin gets flipped, it's independent. But you'll notice that sometimes you get runs anyway.
So what that tells you is that even if you do the best possible job of trying to tell the difference between something that's ordered or correlated and something that's random, there's a probability that you'll get confused. And that's not your fault. That's the way statistics works.
So what happens when people do this? Well, I can calculate what's the best you could possibly do. Obviously, when you see a long run of heads, it's pretty easy to conclude that that must've been the biased coin. And you're right.
When you see things alternating, it's a little more complicated. But you should conclude that since the bias was to do the same thing over and over again, if you see things alternating that must actually be the random case. It's hard to convince yourself of that, so people take a long time to learn that.
But what you can see is that if people are trained to do this task, eventually they learn, and they do almost as well as is theoretically possible given the statistics. So although it's easy to come up with examples where it seems like we don't do so well in reasoning about random events versus ordered events, distinguishing between what's random and what's structured is one of the most fundamental things that we do. And in fact, we're very good at it, even in very well-posed mathematical examples.
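For concreteness, here is a rough simulation of this kind of experiment. The persistence of the "funny" coin is not given in the talk, so a value of 0.8 is assumed; the best possible strategy is the likelihood-ratio test, and even it misclassifies some ten-flip sequences:

```python
# Rough sketch of the coin-discrimination experiment. Assumption: the
# "funny" coin repeats its previous flip with probability P_STICK = 0.8
# (the actual value used in the experiment is not given in the talk).
import math
import random

P_STICK = 0.8      # assumed persistence of the funny coin
N_FLIPS = 10       # length of each observed sequence
N_TRIALS = 100_000

def flip_fair(n):
    """n independent fair flips (True = heads)."""
    return [random.random() < 0.5 for _ in range(n)]

def flip_sticky(n, p=P_STICK):
    """A coin that remembers its last flip and repeats it with probability p."""
    seq = [random.random() < 0.5]
    for _ in range(n - 1):
        seq.append(seq[-1] if random.random() < p else not seq[-1])
    return seq

def log_likelihood_ratio(seq, p=P_STICK):
    """log P(seq | sticky) - log P(seq | fair); positive favors 'sticky'.
    The first flip is equally likely under both models, so only transitions count."""
    llr = 0.0
    for prev, cur in zip(seq, seq[1:]):
        llr += math.log(p if cur == prev else 1 - p) - math.log(0.5)
    return llr

errors = 0
for _ in range(N_TRIALS):
    if log_likelihood_ratio(flip_sticky(N_FLIPS)) <= 0:
        errors += 1  # sticky sequence mistaken for fair
    if log_likelihood_ratio(flip_fair(N_FLIPS)) > 0:
        errors += 1  # fair sequence mistaken for sticky
print(f"ideal-observer error rate: {errors / (2 * N_TRIALS):.3f}")
```

With these assumed numbers, even the ideal observer gets a substantial fraction of short sequences wrong, which is the sense in which the confusion is built into the statistics rather than being your fault.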
So where does this leave us? I think that if you look at the discussions, for example, of evolution in the popular press, there are two parts to evolution. One part is that you have random mutations that generate variation. And the other part is that you have selection that picks out the things that really work well.
Now for a variety of reasons, the people who explain evolution to the general public often emphasize the randomness of the variations. Now I think this is because they're trying to counter a view in which what you see is the handiwork of a creator. And so they emphasize the randomness as the intellectual opposite of that view. But that's only half the story of evolution.
If you believe that it's all random, you have the view of evolution as tinkering. There's a famous book called The Blind Watchmaker. OK? You don't have to read the book to understand what the message is.
So there's this wonderful Calvin and Hobbes cartoon where Calvin sees this sign that says, Load Limit 10 Tons. He asks his father, how do they know the load limit on bridges? And the father says, oh, they drive bigger and bigger trucks over the bridge until it breaks. They weigh the last truck, and they rebuild the bridge.
[LAUGHTER]
And you realize, of course, that the mother and father don't have names because this is Calvin & Hobbes. The mother says, if you don't know the answer, just tell him. Calvin says, oh, I should have guessed that.
Now there's a lot going on in this cartoon. This is about gender roles in parenting.
[LAUGHTER]
But I want to emphasize this view of tinkering-- that evolution is trial and error, right? Well, not quite. It's not true.
It's a random generation of variation followed by selection. If the selection is strong enough, I can push for things that work better and better and better and better. But there's a limit. And that limit's set by physics.
And what we see in all these examples is that nature has somehow pushed right up to those limits. And the reason that I find this so interesting is that if you think about the set of things that are allowed by the laws of physics and chemistry, it's huge, right? But if I think about the things that happen right at the edge, those are very special.
And so I can do things like ask: if I'm going to make estimates of visual motion that are as precise as allowed by the physics of the data you're collecting, what calculation does your brain have to do? And that mathematical problem has a well-defined answer. And it's because we solved that mathematical problem that we could create that illusion that I showed you. And in all the other examples that I showed you, it's also been true that as we try to understand what is required of the system in order to get to these limits, we have at least the beginnings of a theory for how these systems are constructed.
And so as a theoretical physicist, I find this an incredibly appealing idea that we have an anchor point for trying to build theories of these incredibly beautiful phenomena. And so I hope that as you cross the street tonight or look at the caterpillar in your backyard, you think about some of these beautiful phenomena and appreciate them in a slightly different light. Thank you very much.
[APPLAUSE]
All right. What a beautiful lecture.
Thanks.
So there's time for some questions from the audience.
Yeah.
So how do you push a bat to improve his [INAUDIBLE] by [INAUDIBLE]?
Yeah. Great question. So there we are. In the first generation of experiments, you basically do it by carrot, not stick, OK? So you reward him for getting it right.
In the experiments where he was able to do 1 microsecond, what the bat would do is he would look in this direction and he would make a call. And then he would turn. He would look in this direction, and he would make a call. And then he would turn back in this direction and make another call.
But if you think about it, since the reason he's measuring timing is to measure distance, in order to make this comparison, he would have to reposition his head with an accuracy that corresponds to the distance traveled by sound in a few billionths of a second. He can't do that.
So at some point in the progress of the experiments, as you make the task harder and harder and he gets more frustrated because he can't solve it, he adopts a new strategy, which is that he stands here and he looks in this direction and he goes, squawk, squawk. And if he gets the same delay for both of them, then he turns here and he goes squawk, squawk, and he measures that one is different from the other. And so he actually adopts a different strategy, holding his head fixed for two calls instead of one. [LAUGHS]
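As a quick back-of-envelope check on the head-repositioning argument, assuming sound in air travels at roughly 343 m/s and taking "a few billionths of a second" to be about 10 nanoseconds (the echo travels out and back, hence the factor of two):

```python
# Back-of-envelope: how precisely would the bat have to reposition its head?
# Assumptions: speed of sound ~343 m/s; timing resolution ~10 ns.

c_sound = 343.0   # m/s, approximate speed of sound in air
delta_t = 10e-9   # 10 nanoseconds, "a few billionths of a second"

range_error = c_sound * delta_t / 2  # echo goes out and back
print(f"range resolution ~ {range_error * 1e6:.1f} micrometers")  # ~1.7 um
```

Holding your head steady to within a couple of micrometers between calls is plainly impossible, which is why the change of strategy makes sense.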
[LAUGHTER]
It's biology.
I have a question.
Yes.
Thank you very much for that very refreshingly different view on biology. Biologists agree with [INAUDIBLE] every day. My question is that--
That's a very nerve-wracking beginning to the question. Go ahead.
[LAUGHTER]
My question is that a lot of evolution and survival of the fittest is biological in fact. Do we see propagation of error through biological phenomena, because the speed in which a container has to find [INAUDIBLE] is ultimately escape [INAUDIBLE] factors in? And so you see propagation of error in biological phenomena.
I guess in the examples that you're giving, my inclination would be to talk more about a kind of arms race where the two things that are competing with each other each have to get better in relation to the other, more than to-- there must be a clear example of error propagation. But somehow I'm not getting one that goes with the examples you're talking about.
There are wonderful examples of co-evolution where the optimum that we're talking about is always in relation to some other organism. So a classic example is where should you put the wavelength of light to which your receptors in the cones in your eye are most sensitive? Well, for us that's a complicated question.
If you're a honey bee, the answer is you should put them in the place where they provide you with maximum discrimination power among the different flowers where you might be foraging. But of course, the flowers also have an interest in your getting this right, because that's how they get pollinated. And so you see this beautiful match between the reflectance properties of the surfaces of flowers-- so the physical properties of the flowers that give them their color-- and the spectral profiles of the receptors used by bees.
So that's an example where the interaction is essential. But that one's cooperative. Maybe it's more optimistic. Yeah.
Yeah, this is more of a comment. But so [INAUDIBLE] light, you're saying that we can use the fact that evolution has taken biology so close to the edge of what is possible as an anchor point.
Yes.
So I guess if evolution is still going on-- I guess evolution is still going on, right?
Yeah.
And so some things have not yet reached--
Yes.
--that level of perfection. So I think you need to be careful about looking at maybe things that have had enough time to reach there because some things clearly haven't had time to reach there.
It's also true that evolution has to keep going on to make sure that we stay close to the optimum if the conditions are changing. So it could be that you're very near the optimum. But you're still evolving because the world in which you're trying to operate is shifting.
I think that the question is when you go and look at one of these systems, should you imagine that it's a random cobbling together of all of its parts, and then you gradually refine that view? Or should you imagine as your first approximation that it's had enough time to find a really excellent solution to the problem, and if that turns out not to be true, you'll try to understand why you didn't get there? And what I'm arguing is that as the list of examples where you're close to the optimum gets longer, you should start to think of that as your default assumption.
It doesn't mean that everything is exactly at the edge. It just means that it's a useful starting point. So in the extreme version of evolution as random variation and you don't worry so much about selection, then the only thing you can say about life today is that it's related to its ancestors. And then biology is history, which is a phrase you may have heard.
I don't think that's true. OK? A lot of biology is not history. It's the solution to problems that are essential for the survival of your race.
Two things to address, that, one, conditions may change over time.
Yes.
Two, the optimum may not be the maximum. For example, the optimal enzyme may not be the fastest enzyme you could make. It may be the one that's correct for--
Right. So there's a problem of separability. So I think the objection you want to make to the examples that I've given you-- let me rephrase your question so it's harder for me to answer.
[LAUGHTER]
What's special about these examples is I think not so much that there's some special case where somehow evolution's had a long time. I think what's special about these examples is that it's easy to separate this part of the problem from all the other parts of the problem. OK? So if you're trying to hunt at night, it's pretty clear that you want to push your visual system to count every single photon. And there really isn't much pushing against that because if you can hunt two hours later into the night than your competition, you're going to win. OK? That's just all there is to it.
Now if we're all hunting at dusk where there's kind of enough light to see, then it might be that there's a cost associated with counting every photon that is not made up for by the extra amount of food that I can gather. And so now there's a trade-off. And it's complicated.
In the example of the enzyme, it obviously isn't true that the thing to do is to have every chemical reaction in your cell go as fast as possible. There are a few reactions for which that is true. So if you're talking about the connection between the nerve and the muscle, where the nerve cell releases a neurotransmitter called acetylcholine-- if a little bit of that spills out, then it'll keep exciting your muscle even after the nerve has turned off.
This is a complete disaster. So you have an enzyme that chops it up. That enzyme goes as fast as it can because having any left is bad. But if you ask about the enzymes that you use for metabolizing your food, it's obviously not true that you want every single one to go as fast as possible, right? That's not going to work.
So there's a challenge in finding problems where I can break off a piece of the problem and say that what it means to do the best is something I can write down in a simple equation. Once you have lots of those examples, the challenge is then to figure out how do you deal with the cases where the problem doesn't break up into quite such isolated pieces? And that's where the field is now. We don't know how to do that. Todd.
[INAUDIBLE].
Yes.
It might be very difficult to have the information on this, but is there any historical record of seeing the delta from the optimum [INAUDIBLE] selective pressure? In other words, if you're very far from optimum, does evolution push it very quickly to the edge? Or does it approach it slowly? Is there information on that?
So you should remember that there are several different timescales involved. There's the timescale of evolution. But in the lifetime of single organisms, there are also processes that allow you to adapt.
Some of them go by the name of learning. Some of them are called physiological adaptations. There's the process by which parts of your body develop, some of which are influenced by the environment as opposed to being an intrinsic program. So the thing you can do is to take things that move toward an optimum in real time, not on evolutionary timescales but on a timescale that you can get at in the laboratory. So those, one can work on.
There are, as you may know, artificial evolution experiments, some of them now very long, like the famous experiments by Lenski at Michigan State, where he's been watching populations of E. coli bacteria growing for tens of thousands of generations. Those experiments are done under very static conditions, so it's not really clear what optimal means. And the system seems to be drifting toward being able to grow faster and faster.
What we do know is that some things that fall into the categories I'm telling you about are themselves very ancient. So there are bacteria that will swim toward the light, which can do so with an accuracy such that they're probably counting every single photon. So some of these things are not inconsistent with the notion that we have them because we inherited them. We do have them because we inherited them. But that need not be in conflict with the idea that they were optimized, because they might've been optimized a very long time ago.
I don't know if that quite answers your question.
So I'm feeling very [INAUDIBLE]. This deduction that we have to [INAUDIBLE] pushes the limit, because even in ancient times, even if a person cannot count the number of photons, he can still survive with the [INAUDIBLE] who can do this. So because if we have this [INAUDIBLE]?
So I think the problem is the following. We don't really have much intuition for the speed of evolution. It's a hard problem. What we have is what we can see today.
So the fact that you can count single photons-- that's it, right? You can't do better than that. Now not every organism can count single photons. And in fact, when the lights are bright outside and we're seeing in color-- so we're using our cones and not our rods-- we're not counting every single photon because we don't need to. Right.
Now why is the boundary between rods and cones where it is? And why is it that this organism can count single photons and that one can't? I don't know how to do that problem.
In the case of the insect eye, people have made a lot of progress on the problem of when you should optimize your spatial resolution and when you should give up some spatial resolution-- because it's dark outside, or because you're flying very fast and the images are going to be blurred anyway, so there's no point in making tiny little pixels. So those are cases where we understand the trade-offs, I think, a little better. But as I say, this gets into situations where the different problems are competing with each other, and so what you mean by "the limit" is not so obvious. Yeah.
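One classic version of this trade-off-- a back-of-envelope argument usually credited to Mallock and to Barlow, not spelled out in the lecture itself-- can be sketched in a few lines. Diffraction blur scales like wavelength over facet diameter, angular sampling like facet diameter over eye radius, and the two costs balance when the facet diameter is about the square root of wavelength times eye radius. The numbers below are illustrative assumptions for a bee-sized eye:

```python
# Sketch of the classic compound-eye trade-off (Mallock/Barlow style):
# bigger facets reduce diffraction blur (~ wavelength / d) but coarsen
# angular sampling (~ d / R); the costs balance at d ~ sqrt(wavelength * R).
# All numbers are illustrative assumptions, not values from the lecture.
import math

wavelength = 0.5e-6  # m, green light
R = 1.0e-3           # m, assumed eye radius of order a millimeter

d_optimal = math.sqrt(wavelength * R)
print(f"optimal facet diameter ~ {d_optimal * 1e6:.0f} micrometers")  # ~22 um
```

Facet diameters of a few tens of micrometers are roughly what one sees in insects of this size, which is part of why this style of argument is taken seriously.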
But let's make this the last question.
That's a lot of pressure. OK.
[INAUDIBLE] so considering that the universe [INAUDIBLE]. So from the perspective of physics, what are the driving forces behind all the precision and accuracy in the cases that you describe? And if there is such a thing, is life eventually getting somewhat deterministic?
Great question.
Can you repeat--
Can you repeat the question?
Yeah. Can you repeat the question?
I can try. So the question is that there are many aspects of the motion of things in the universe which are random. There's lots of elements of randomness in the physical world. I'm pointing to instances in which biology has somehow tried to squeeze out as much of the randomness as possible and be very precise. Do we end up with a view in which life is almost deterministic in contrast to the sources of randomness in the world? Is that fair?
I don't know. And I guess I could stop there, and we could all go have a drink.
[LAUGHTER]
I think the "I don't know" is actually deep, right? It's not just that I don't happen to have an answer off the top my head. It's that we don't really have a feeling for this yet.
Again, the examples that I've given you are ones in which I can isolate the question. And obviously, all of life isn't like that, right? Most of the questions are all mixed together.
But actually, that's a feature of our understanding of biology altogether. If you look in a textbook and you ask, tell me something about how the brain works, you'll find that the different chapters of the textbook focus on some small region of the brain in which what's being done in that region is relatively isolated from all the other problems. And that covers a tiny fraction of the brain. But those are the examples. The places where everything's happening at once-- very hard to understand. OK.
I think you're maybe asking for something a little more than this about-- our understanding of the physical world is also evolving, right? And maybe it's evolving in a direction where on the one hand, there are the laws of physics that determine everything that we see around us. But on the other hand, as many of you know, it's not obvious why the laws of physics should be the way they are.
And there are theories in which you could imagine universes in which they're very different. And in fact, there are theories in which necessarily there are universes in which it's very different. And so you wonder why we happen to live in this one.
I'm trying to tell you that I think the selective pressures that life operates under always push it in this direction. And as has already been asked, it's not clear whether you're going to get all the way in the time available to you before things change so much that you're now going in the wrong direction. So I don't know.
I think that if we got to the point where we had a theory that life was necessarily like this-- where "this" was something that I could write down mathematically-- and that was actually testable, that would be so far from where we are today that I would think it would be fantastic progress, even if it turned out that the theory that we wrote down was wrong. And maybe that's a good place to stop. [LAUGHS] Thanks.
[APPLAUSE]
Sounds that cause our eardrums to vibrate by less than the diameter of an atom, bacteria that count every single molecule that arrives at their surface, and more: evolution has selected for mechanisms that operate near the limits of what is allowed by the laws of physics.
William Bialek of Princeton University tours these beautiful phenomena -- from microscopic events inside a developing embryo to our own perception and decision making -- March 18, 2015, as part of the Department of Physics Bethe Lecture Series.