CARL HOPKINS: My name is Carl Hopkins from the Department of Neurobiology and Behavior. And I'm honored to introduce today's University Lecture. University Lectures were established by Cornell historian Goldwin Smith, at the beginning of the 20th century, as a means of bringing the world to Cornell. Today, it gives me great pleasure to welcome James Hudspeth from Rockefeller University in New York.
So the great Spanish anatomist Ramon y Cajal wrote 100 years ago that countless modifications during evolution have provided living matter with an instrument of unparalleled complexity and remarkable functions, the nervous system, the most highly organized structure in the animal kingdom. The same can be said for the various sense organs that provide inputs to the nervous system, our windows on the external world.
The sensory organs for sight, smell, taste, touch, and hearing are truly organs of unparalleled complexity and remarkable function. With more than 10,000 moving parts, the mammalian ear is perhaps the most complex mechanical device in all of biological evolution. Many of its mysteries have been uncovered by our speaker.
Jim Hudspeth received his bachelor's degree, his master's degree, his PhD, and his MD from Harvard University. And from there he did postdoctoral work with Ake Flock at the Karolinska Institute in Sweden. He then became assistant professor, associate professor and full professor at Caltech, where he started his research in the cellular mechanisms of sensory transduction in the hair cells of the inner ear.
He developed a method for isolating single hair cells from the sensory epithelium of the sacculus, an organ used for balance in the frog. It shares the same cell types with the inner ear, used for hearing. Steadily and systematically, Hudspeth learned to isolate, record from, and stimulate individual hair cells with tiny glass probes. He has consistently astonished the world of scientific investigators with the beauty of the structures he works on and the elegance of the experiments he has performed.
He studied the molecular basis of tuning, so fundamental to frequency analysis by the ear, and the molecular basis of adaptation and recovery. He has uncovered surprising mechanisms for pushing the sensitivity of hearing to unbelievable limits, sensitivity that allows us to hear airborne sounds, the sounds of language and of music, as well as sounds in the environment.
Jim has been rightly honored for his work, not only his research, but his teaching. He's a member of the National Academy of Sciences, he's a Howard Hughes Medical Institute Investigator, and he's a recipient of numerous teaching awards from Caltech, UCSF, the University of Texas Southwestern Medical Center, and Rockefeller University, all places where he's held professorships.
His students and postdocs, with names like [INAUDIBLE], [INAUDIBLE], and about 50 others, form a who's who of modern neuroscience, cell biology, and hearing research. Jim Hudspeth is also co-founder and editor of the journal Neuron. And he's co-editor, with Eric Kandel, of the long-awaited 5th edition of Principles of Neural Science.
Tomorrow, a more specialized, more technical talk will be given in the [INAUDIBLE] Seminar Room at 12:30 as part of the NB&B seminar series. The title of the talk will be "Making an Effort to Listen-- Amplification by Myosin Molecules and Ion Channels in Hair Cells." After today's talk there will be a reception in the atrium. So I'm very pleased to introduce Jim Hudspeth, who will be speaking to us today about how hearing happens.
[APPLAUSE]
JAMES HUDSPETH: Thank you very much for the kind introduction. And thank you all for coming today. I want to start out by giving you some sense of the motivations that we have for studying hearing. There are really two broad classes of motivations, one from the point of view of basic biomedical research, the other from the point of view of clinical impact.
First, we're impressed at a basic level by the remarkable technical specifications of the ear, of which I'll mention just three. First, we can hear sounds at frequencies as great as 20 kilohertz-- 20,000 cycles per second. And bats and whales can hear all the way up to 100 kilohertz or more. And that's in striking contradistinction to our other senses.
As you know, if you see a series of images repeated 20 or 30 times per second, the perception is one of a continuous image. That's the basis of television and of motion pictures. Whereas somehow the auditory system is capable of operating fully 1,000 times faster than vision, or in fact than our other senses.
A second remarkable technical feature is that at the acoustic threshold, the very faintest sounds that we can hear correspond to vibrations within the ear through a distance of about 3/10 of a nanometer, three angstroms, the size of a medium large single atom. So one wonders how something that's made out of lipids and proteins and whatnot can reliably detect vibrations at an atomic level.
And the third feature that's remarkable is the broad dynamic range of hearing. We can hear comfortably from 0 decibels, which is our threshold of hearing, up to about 120 decibels, which is the threshold of pain. That would be the sound of a very loud rock concert or an airplane taking off nearby.
Now that's six orders of magnitude, that is, a millionfold, in the amplitude of the stimulus, or 12 orders of magnitude, a trillionfold, in the power or energy of the stimulus. And again, it's a broader dynamic range than other senses and in fact than any man-made device that I know can readily encompass. So we'd like to understand how those properties arise.
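The decibel arithmetic here can be checked in a few lines. This is an illustrative sketch (not part of the lecture): on the decibel scale, every 20 dB is a tenfold change in amplitude and every 10 dB is a tenfold change in power, so a 120 dB span works out to a millionfold in amplitude and a trillionfold in power.

```python
def db_to_amplitude_ratio(db):
    # Sound-pressure (amplitude) ratio: 20 dB per factor of 10
    return 10 ** (db / 20)

def db_to_power_ratio(db):
    # Power (energy) ratio: 10 dB per factor of 10
    return 10 ** (db / 10)

# The 120 dB span from the threshold of hearing to the threshold of pain:
print(db_to_amplitude_ratio(120))  # 1e6  -- a millionfold in amplitude
print(db_to_power_ratio(120))      # 1e12 -- a trillionfold in power
```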
Now, from the clinical point of view, it's pretty obvious why we have an interest in hearing. A lot of people have hearing difficulties. In this society and in other industrial societies, about 10% of the population, which is 30 million Americans, have significant problems with their hearing.
These are problems great enough to interfere with day-to-day communication, for example, talking in a crowded environment or talking over the telephone. Two million or so Americans are totally deaf. They have what's called profound deafness.
The conditions of deafness stem from a variety of different causes. There are five major ones. First, there are more than 100 inherited diseases of hearing that affect just hearing, or sometimes hearing and balance, and maybe 200 more inherited conditions that cause difficulties with hearing in association with problems with the heart, the kidneys, or other organs.
Secondly, there are various types of infections, such as encephalitis and meningitis, that can readily damage the inner ear. Thirdly, there are certain licit drugs-- drugs like aminoglycoside antibiotics that sometimes have to be used in life-threatening diseases. But they have as their major side effect the devastation of the hair cells, the sensory receptors in the inner ear.
Another common cause of such problems is cisplatin, which is the major chemotherapeutic agent for ovarian cancer. Many women must use that drug to deal with a life-threatening cancer, but they can suffer complete or partial hearing loss as a consequence. The fourth cause of hearing loss is what's called acoustic trauma, exposure to loud sounds.
So that could be everything from excessively loud music to the noise of subways in New York, things of that sort, or industrial noise, or things like gunshots. And finally, there's the phenomenon called presbycusis, literally the hearing of old men, which is the gradual deterioration of our hearing faculties with age. And that's thought to be compounded of daily noise damage from various sounds in the environment and deterioration of the small blood vessels, the microvasculature, of the ear.
Just as with age, we tend to have problems with our eyes, our hearts, and so on as a result of deteriorating blood vessels, and atherogenesis. So the ear suffers from the same problem. The impact of this hearing loss is enormous. First, it costs this country about $100 billion a year. That's not only for clinical diagnosis and treatment, but also for special education and in lost career opportunities for the deaf.
There's a strong impact on young kids. Those who are born with hereditary deafness are deprived of the ordinary avenue of acquiring speech and therefore all sorts of symbolic communication. And in fact, there's been a real advance in recent years: in most states it's now legally required to test every child born in a hospital to make sure that his or her hearing is more or less normal. It used to be that some kids would go unnoticed for three or four, sometimes even five years, until they got to elementary school, before anyone realized that they had in fact suffered a hearing loss, perhaps a genetic one, and that they weren't acquiring speech and learning at the normal pace.
For the elderly in our population, it's long been the case that hearing loss has deprived them of contact with friends, with family, and the workplace. And here too, there's been a real improvement, just in the last generation. There used to be a real stigma attached to hearing aids. Wearing a hearing aid made you look, you know, old and out of it.
And these were large devices that often made loud noises and were very cumbersome. And Ronald Reagan was the first public figure who wore a hearing aid and let it be known. And it really was sort of a watershed. It was the only good thing he ever did, to my mind. But be that as it may, the fact that he wore one helped to destigmatize it. So it was on the front pages when he began to wear one.
A few years later, Clinton became President. He was wearing hearing aids, had them installed on both sides. But that was on page 17. And by then, it was a minor deal. And now nobody thinks twice about it, anymore than they think twice about whether you're wearing glasses or contacts or have had radial keratotomy. It's no big deal.
Now, I should say finally, the group of people who are most impacted by hearing loss are those of intermediate ages, basically the ages of most in this audience. Because any of us can lose our hearing overnight as a response to viral infection, for example. And when this happens, it has really catastrophic impact.
It causes depression. It causes suicide, even more than unanticipated blindness does. Helen Keller, who wrestled with both of these problems, said that blindness deprives us of our contact with things, but deafness deprives us of our contact with people.
And it turns out that the daily, minor give and take discussions we have with people around us are terribly important for situating us in sort of a social milieu. And if we are deprived of that kind of input, we feel isolation that most hearing people simply don't have a sense of. The other thing we lose when we become deaf is an early warning system.
Without ever thinking about it, the hearing know that they can cross a street confident that if there's a siren from an oncoming fire truck or something, they'll be warned. And they can also hear somebody shouting out to them if there's a problem. Again, if one is deaf, one isn't aware of fire alarms. One isn't aware of such warnings. And one feels a greater sense of vulnerability.
Now, the remarkable fact is that more than 95% of all these problems with hearing, all the causes of deafness, stem from a single type of cell. That's the sensory cell of the inner ear called the hair cell. And that will be the topic of my discussion today.
We're interested both in understanding how these hair cells normally operate to produce the remarkable qualities that I mentioned at the outset, but also in what goes wrong with them, whether they can be protected, whether they can be repaired, ultimately whether they can be replaced. And I'll end up addressing all of those topics.
I tend to accelerate when I reach cruising altitude. I will try to keep myself under control. But if I start going too fast, feel free to interrupt with a difficult question that will slow me down. So what I'll do, at an ever increasing pace, is to first introduce what our ears are doing right now, how they work. I'll talk to you a bit about a model for how they transduce or convert airborne sound into electrical signals that the brain can interpret.
And then I'll focus a lot of the talk on the remarkable feature that Carl mentioned already, the ear's ability to amplify and tune its inputs. It's not just a passive detector. It's a very active filter that controls the nature of the signals that reach the brain. And then finally, in the last third of the talk, I'll turn to what can be done about deafness-- the current ways we're dealing with deafness and hope for future ways of remediating the problem.
So what I will do at the outset is to tell you what your ears are doing right now. So this is a diagram of human speech. At the top, you can see basically the signal you would measure if you had an oscilloscope and measured what was coming out of this microphone-- a series of impulses representing syllables of short length, long length, different amplitudes and whatnot.
Below is what's called a sonogram. It's 2 and 1/2 seconds in length. And it represents the same speech, but now decomposed in terms of its different frequency content. This particular chunk of speech is my own. It's actually the end of a Dylan Thomas poem, "Fern Hill."
"As I was young and easy in the mercy of his means, time held me green and dying, though I sang in my chains like the sea." And this particular piece is the last, "though I sang in my chains like the sea." And you can see that the vowel sounds are these relatively low frequency chunks, often in stacks of three that are called formants. And they're in the range of a couple of kilohertz or less for my male voice.
The consonants, such as that in "sang," "chains," "sea," and so on, go to much higher frequencies, up to about 10 kilohertz. They tend to be shorter in many cases, and they cover a much broader frequency spectrum. So the reason I show you this is to remind you of the fact that your ears are doing this even as you listen to me.
I'll use 53,500 and some odd phonemes, or speech sounds, in this talk. And to pass the exam, you have to get most of them right. You have to capture most of those sounds on the fly. So the key thing that your ear is constantly doing is breaking down complex sounds into these frequency components and reporting each of those to the brain.
Now I want to remind you of the sensitivity with which we can detect frequencies with a couple of demonstrations. We're not terribly sensitive to differences in amplitude. So this is going to be a tone and then it's going to become twice as loud. That is, it's going to increase by 100%.
[BEEP]
So we can readily detect that. If we go down to 20% change,
[BEEP]
--it's still detectable, but subtler. If we go down to 10%,
[BEEP]
--it's equivocal, right? Most people have a hard time hearing that difference. So that's 10% in amplitude. Now let's go to a 2% change in frequency of the tone.
[BEEP]
Is that a little too loud for you? Sorry. That's pretty easily detectable, right? Nobody has much trouble with that. If we go to 1%, easy. 0.5%?
[BEEP]
Even 0.2%, most of us can hear.
[BEEP]
And finally, 0.1%?
[BEEP]
I can't hear that one. A trained musician-- do we have some violinists here? Trained musicians can hear sometimes 10 times better than that. They can hear a fraction of 0.1%. So the point of this is simply that our frequency discrimination is extremely good. We don't care that much about the amplitude of sound.
Another example that's pretty obvious is a symphony orchestra. You can hear a soloist; the pianist, or whoever it happens to be, sounds plenty loud enough by him or herself. Then the whole orchestra comes in. There's now 100 times as much sound, but it doesn't seem that impressively louder, certainly not 100 times louder.
So we have a strong compression of sound intensity. But we're very, very sensitive to changes in sound frequency. So how does this come about? The apparatus that resolves these sounds is the cochlea, which is this spiraling snail-shaped object in the inner ear. It's associated with what's called the vestibular apparatus.
There are three semicircular canals that give us our sense of angular acceleration-- in other words, our ability to detect rotatory motions of the head and body. There's also a utricle and a saccule, here and here, that give us our sensitivity to linear acceleration. So I can walk around like this, or measure up-and-down motions, as a result of those two sensory organs.
The cochlea itself can be understood by imagining a very simple rearrangement. Suppose that you can uncoil this coiled structure. It would then be about 30 millimeters in length, lined with hard bone. It consists of two liquid-filled compartments, separated by an elastic diaphragm called the cochlear partition and including the basilar membrane.
And it's attached, of course, to the middle ear, which has the eardrum or tympanum and the three little bones, malleus, incus, and stapes of the middle ear itself. Now, sound of course consists of alternating compressions and rarefactions of the air. When there's a compression outside, it pushes on the eardrum, the tympanum, which moves the three little bones to the right.
And the last of these, the stapes, exerts a piston-like action, compressing the fluid in this upper compartment, which forces this basilar membrane downward. During a rarefaction, the opposite occurs and the membrane moves up. Now, if this basilar membrane were homogeneous, then it would simply oscillate along its entire length just like a plucked guitar string.
But in fact, it's not that way. It's sort of a magical string that varies in its properties. So at one end it's like the lowest string on a bass. The other end is like the highest string on a violin. And as a consequence of that, different sound frequencies are represented at distinct positions. So low frequencies cause a vibration up here. High frequencies cause a vibration down here. And other frequencies are represented logarithmically in between.
The key point is that when you listen to a complex sound like my voice, the various constituent components are analyzed. They're separated out along the length of the basilar membrane. So this is a real-time frequency analyzer, or a Fourier analyzer, if you want to use the mathematical term. So this is going on constantly as you listen to my voice or any other complicated sound.
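The basilar membrane's decomposition of a complex sound into its constituent frequencies is exactly what a discrete Fourier transform does numerically. As an illustrative sketch (not part of the lecture, with made-up tone frequencies), here is a mixture of two tones being pulled back apart by an FFT:

```python
import numpy as np

fs = 8000                       # sample rate in Hz (illustrative)
t = np.arange(fs) / fs          # one second of time samples
# A "complex sound": a 440 Hz tone plus a quieter 1000 Hz tone
signal = np.sin(2 * np.pi * 440 * t) + 0.5 * np.sin(2 * np.pi * 1000 * t)

# The FFT plays the role of the basilar membrane: it reports how much
# of each frequency is present in the mixture.
spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(len(signal), d=1 / fs)

# The two largest spectral peaks recover the constituent tones
peaks = sorted(float(f) for f in freqs[np.argsort(spectrum)[-2:]])
print(peaks)  # [440.0, 1000.0]
```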
Here is a rather, perhaps excessive, demonstration of the point. But this has the property of etching it into your brain forever. So here is the throbbing eardrum, the little stapes. Here is the cochlea. This represents the ganglion carrying auditory information into the brain. So there's the unrolled basilar membrane. Then we stimulate with different frequencies.
[LOW TONE]
[HIGHER TONE]
[HIGHER TONE]
[LOWER TONE]
[MUSIC - JOHANN SEBASTIAN BACH, "TOCCATA AND FUGUE IN D MINOR"]
What I like about this is the fact that this is what's going on in your own two ears. As you watch this and you listen to the Toccata and Fugue, your basilar membranes are flapping around just this way. And it gives you some sense of the complexity of what's going on.
The actual vibrations, of course, are much faster, because they're at the real auditory frequencies. But something very much like this is what's actually happening. Now, a way of characterizing this is to use a laser. One shoots a laser beam at some particular point along the basilar membrane and asks how much it responds to different frequencies of sound. When you do that, you get a so-called tuning curve.
So what this is saying is: how loud a sound-- it's measured in decibels, but it doesn't matter; this is increasing loudness-- how loud a sound is necessary to vibrate that part of the basilar membrane by a certain amount? What you find is that for any particular point, there is some minimum amount of energy, the minimal threshold, that causes a vibration. That particular frequency is called the characteristic frequency, or sometimes the best or natural frequency.
If you raise the frequency, you have to use a louder sound to get the same vibration. If you lower the frequency, you also have to use a louder sound. So this sweeps out this curve representing all the different levels, that is, the loudnesses, that cause the same response at different frequencies. Now, this represents the response at just one part of the basilar membrane. There is, of course, a family of these, some 4,000 of them spread along the cochlea.
Here I've flipped them around the other way. So the low frequencies are here, corresponding to this nerve fiber at the top of the cochlea. The high frequencies are here, corresponding to this nerve fiber. And these represent nerve fibers all the way in between. There's some 25,000 nerve fibers here, representing, as I said, about 4,000 different frequency levels.
So what you can appreciate is that different nerves are carrying information about different frequencies. And if you go back to the example that I showed you before, you can get some sense of the complexity of what's going on. Here is "Though I sang in my chains like the sea." This is the word "chains," which begins with a "ch." That's a broad frequency spectrum at relatively high frequencies. That's going to excite all these nerve fibers to different extent.
Now "-ains," this vowel sound, is lower frequency. There are a couple of frequencies here near two kilohertz. They excite those coteries of fibers. And there's also a low frequency component which excites fibers there. And finally, the "s" at the end of "chains," this again is a weaker sound at high frequency, exciting that particular panoply of nerve fibers. So this whole apparatus works like an inverse piano. In a piano, you pluck different strings and the different tones are blended together into a harmonious whole.
Here you start with, I hope, a harmonious whole and you split it apart so that a given set of sounds rings different particular groups of these nerve fibers. And the brain at the other end-- notice I've minimized the brain in this slide-- the brain at the other end just reads out what activity is there and it knows what sounds were present and how loud each of them was.
So now the key issue is, how do you transduce or convert these different frequencies of sound into an electrical response? And that's finally the job of the hair cells. There are 16,000 of these yellow cells along the length.
And when a stimulus comes along, it's going to stimulate a particular group of cells which are synaptically connected to this particular ensemble of nerve fibers carrying information to the brain. So if we look at one of these hair cells, we see something like this. These are not nerve cells. They don't have axons and dendrites.
They're simple epithelial cells. But they do make synaptic connections along the base with these nerve fibers that carry information into the brain. The specialized feature of the hair cell that gives it its name is the so-called hair bundle, which is this little bundle of hairs, the little cluster of processes sticking out of the top of the cell.
And that's nicely seen in a scanning microscope, looking down from above. These bundles uniformly have this wonderful, elegant, organ pipe sort of an organization. And there are two critical features of it. One is these processes, these little feelers, are always short on one side of the hair bundle and then grow progressively longer as you move across.
So the hair bundle always looks like a hypodermic needle that's been cut off obliquely at the top. And secondly, these processes don't stand up straight. They're always heeling over towards each other so that they come into contact at their tips. And that's quite important for the way in which they do business.
If you take a living hair bundle such as this one, and touch it with a glass rod such as this, you can see two other important features. First, the hair bundle moves as a unit. There are filamentous connections that hold it together so when you push on it in any direction, it always moves as one unit. It doesn't splay apart.
And secondly, the individual stereocilia, these individual processes, remain quite straight along their length. They don't bend like a bow. Rather, they pivot about their base. And you can appreciate what that means-- namely, if these are two stereocilia, two of these processes, as they move they must shear or slide with respect to each other at their tips. And that motion is key in terms of how they're excited.
Because at the tip of each short stereocilium, there is a fine filament-- it's actually a braid of two filaments-- connecting it to the next longer one in front of it, and so on up here. This thing is called the tip link. And it seems to be the site at which the mechanical stimulus is actually sensed. More specifically, we believe that there are two ion channels here that respond to the mechanical stimulation.
Here's a schematic diagram of it. The ion channel is a protein molecule that can form a pore through the membrane. And that pore can let ions such as potassium and calcium flow into the cell. When the hair bundle is in its resting position, the little molecular gate shown in red closes those pores.
But if we move the hair bundle to the right, the shear or sliding between the consecutive stereocilia elongates the tip link. It stretches the spring. The spring pulls on the little door. The door pops open. And now ions can flow in, bringing positive current into the cell and giving rise to an electrical response.
Now, this is a very simplistic, almost comical way of doing business, but it works. And it has two great advantages. The first is that it's very fast. There are no chemical reactions in this picture. The only time scale is set by how fast the channels can pop between closed and open. And that's really thousands of times per second. So it's a very fast process.
And that accounts for the fact we can hear such high frequencies. And the second nice feature is there's no intrinsic threshold to this. An arbitrarily small stimulus that deflects the hair bundle ever so slightly-- remember 3/10 of a nanometer at threshold-- is enough to cause a few channels to open, at least statistically to open a little more. That lets a few ions in and gives rise to a small electrical signal.
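The idea that channels simply pop between closed and open, with no intrinsic threshold, is often described by a two-state (Boltzmann) gating model. Here is an illustrative sketch-- the parameter values (gating force, thermal energy) are assumptions chosen for illustration, not measured numbers from the lecture:

```python
import math

def open_probability(x_nm, z_fN=700.0, x0_nm=0.0, kT=4.1e-21):
    """Two-state gating: P_open = 1 / (1 + exp(-z(x - x0)/kT)).

    x_nm  : hair-bundle displacement in nanometers
    z_fN  : single-channel gating force in femtonewtons (illustrative)
    x0_nm : displacement at which half the channels are open
    kT    : thermal energy near body temperature, in joules
    """
    energy = (z_fN * 1e-15) * ((x_nm - x0_nm) * 1e-9)  # force x distance, J
    return 1.0 / (1.0 + math.exp(-energy / kT))

# With x0 = 0, some channels are already open at rest, so even a
# threshold-level 0.3 nm deflection shifts the statistics slightly--
# a few more channels open, and a small current flows:
print(open_probability(0.0))   # 0.5 at rest
print(open_probability(0.3))   # ~0.513, slightly above resting
```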
Now, this way of doing business is neat, but it leads to a problem. And that problem is that the hair cell has to do all of this underwater. Ideally, we would like this hair bundle that I just showed you to accumulate sound, to act like sort of a tuning fork.
As you know, you can take a tuning fork or an opera singer-- which I'm not-- can take a wine glass and sing at it. And the thing will vibrate more and more and more as it accumulates more energy. We would like our ear to work the same way. We would like it to be able to detect the faintest of sounds by soaking up those sounds over a period of time until they can be heard.
That would be great, except it's like operating a tuning fork underwater, operating a tuning fork in honey. There's a viscous environment in which this thing has to move, and the system is what's called overdamped, meaning that there's simply too much viscous drag on it. I want to remind you of damping with a simple yet messy experiment.
This is a mass. You will recognize a mass. This is a spring. There are no tricks. Such a system has a natural frequency. And I want to point out the fact that if I pump it very gently, just light pressure with this pin, I could easily make it move to a large distance. And if I let go, it keeps oscillating for quite some time. So this is a nice resonant system, right?
And in physics class, some of you are now suffering through masses and springs, and you know what the equation is for the frequency and so on. The system does run down. After I pump it for a while, it takes about 14 or 15 cycles to run down. So it's relatively highly tuned. It has what's called a high Q. This is what happens if you're operating in air; it's essentially what would happen if you operated in a vacuum.
The reality is, we're operating in a viscous medium. And Carl's kindly supplied us with a viscous medium, Karo brand high fructose corn syrup. You should not eat this stuff, but this is what you should do with it. It's very good to provide a viscous medium. So I just want to show you what the impact of the damping is.
Now this is a lifelike simulation in the sense that the viscosity of Karo syrup and the mass and spring constant of the system are designed to be roughly equivalent to what happens in the ear. In other words, this mass represents our hair bundle trying to resonate inside the ear in the presence of water, rather than Karo syrup. Maybe one more of these will be enough to make the point.
This is great fun. If anybody wants some Karo syrup afterwards, let me know. I should say also, since this is being taped, I don't own any stock in Karo syrup. I'm not going to benefit from this financially, so don't-- so that looks good enough. So let's see how we do. So the same sort of stimulus, and I'll try to pump it with my pen and I get
So even if I move it through a large distance, it's so highly damped, the oscillation is extinguished in just something like one or maybe one and a half vibrations, instead of 13 or 14. So what do we do with this problem? How can the ear act as a resonator in the presence of such high viscosity?
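The demo maps directly onto the standard damped harmonic oscillator, where the quality factor Q roughly equals the number of cycles in a ring-down. This is an illustrative sketch with made-up mass, stiffness, and damping values (not measurements from the demo):

```python
import math

def ringdown_cycles(mass_kg, k_N_per_m, damping_Ns_per_m):
    """Quality factor Q = sqrt(m * k) / b for a damped oscillator.

    Amplitude decays by a factor of e^(-pi/Q) per cycle, so it falls
    to ~4% of its starting value after roughly Q cycles-- a rough
    count of visible ring-down oscillations.
    """
    return math.sqrt(mass_kg * k_N_per_m) / damping_Ns_per_m

# Illustrative numbers: same mass and spring, two different dampings
print(ringdown_cycles(0.1, 40.0, 0.14))  # ~14 -- like the demo in air
print(ringdown_cycles(0.1, 40.0, 2.0))   # 1.0 -- like the demo in syrup
```

The point of the comparison: raising the damping by a factor of 14 cuts the ring-down from about 14 cycles to about one, which is just what swapping air for Karo syrup did.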
And the original suggestion for this was made actually by somebody on this campus, a professor here, Tommy Gold, who many of you know as a rather heterodox astronomer and physicist who had many interesting ideas, even some right ones. This is one of the rightest ones. He suggested as early as 1948, based on his experience during World War II in radar, that the ear must have a regenerative property. It must have an amplifier.
It must have something that pumps energy in to overcome the energy that's being lost as a consequence of this viscous dissipation. And his hypothesis was-- I think it was mostly ignored, but a few people read the paper and thought it was just stupid. But in fact, 20 something years later, it turned out he was quite right. And it really has been an important insight in the field.
So there is an active process. And the active process in all tetrapods-- that is, amphibians, reptiles, birds, mammals-- it has four cardinal features that are shared. The first is amplification.
So if we look at the output of the ear, let's say the vibration of the basilar membrane as a function of stimulus frequency, we find, as I've already indicated, that there's one particular frequency where a given part of the basilar membrane vibrates the best. If we now take that same animal, the same ear, and remove its energy supply, perhaps temporarily block its blood supply, we find that its response falls.
This is a logarithmic scale, so it falls by more than 40 decibels. In other words, it falls to less than 1% of its original value. So we lose 99% of the sensitivity. The second feature is tuning. The normal position on the basilar membrane is highly tuned to a specific frequency, as I showed you before. If we deprive it of energy, the system becomes very broadly tuned, even if we jack the sound level way up to the blue level.
The third feature is what's called compressive nonlinearity. If we stimulate at three different levels-- this is 20 decibels, in other words 10 times as loud as the first one. This is 100 times as loud as the first. Away from this characteristic frequency, the system grows linearly. But at the characteristic frequency it grows much less.
Now that seems a little weird. It's growing less at the frequency where there's the greatest sensitivity. But what it's really saying is that a very small stimulus produces a very large response at this frequency. It's almost saturated. If we make the sound louder, it cannot grow that much more, because the amplifier has already brought it to near its saturating level. Whereas away from that frequency, there's still scope for considerably more growth.
And finally and most weirdly, there's the phenomenon of spontaneous otoacoustic emissions. 85% of normal human ears can actually emit sound. So if we capture an undergraduate, tie him or her up, take them to a sound chamber, put a sensitive microphone in the ear-- I guess we should sign a release form. But if we do that experiment, the odds are 85% that there would be one or more tones of sound coming out of the ears.
And these spectra are quite idiosyncratic. Your left ear will have a different set of sounds from your right ear. Yours will be different from mine, from Carl's, and so on. But they're stable over time. So a year or five years later, they're still the same, unless your ear has meanwhile been damaged by one mechanism or another.
Now, nobody thinks it's useful to have sound coming out of your ears. And in fact, what this is, is an epiphenomenon. It's a side effect of the amplifier. This amplifier has gain control. In a loud environment it turns itself down, because you don't need it. If you go into a quieter environment, it begins to turn on.
If you're in a very quiet environment, it turns itself up so far that you can hear the proverbial pin drop. It becomes terribly sensitive. If you go into a hyperquiet environment like a sound chamber, it actually goes unstable. It's like this microphone or other PA systems. If you turn it up too far, it begins to howl or oscillate. And those oscillations are what cause these sounds coming out of the ear.
Now we've investigated the basis of this amplification by using simple model systems that we can study in a dish in vitro. What we do is to take a hair bundle such as this one and attach to it a long glass fiber. It's about 100 micrometers in length, about half a micrometer in diameter.
We can measure the back and forth motion of this hair bundle by casting a magnified image of the bundle and of the probe onto a photodiode. And by that means, we can measure motions down to one nanometer, a billionth of a meter, with a bandwidth of a kilohertz, 1,000 cycles per second or more. We can also give stimuli.
So if we move the base of this fiber, this will pull the hair bundle to your right. And if we know how stiff the fiber is and how far it's bent, we also can measure what force the hair bundle is producing. And we know, as again you're learning in physics, that the force to the right produced by the fiber has got to be equal and opposite to the force to the left that's produced by the hair bundle.
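The force bookkeeping in that argument is just Hooke's law applied to the fiber. A minimal sketch, with made-up but plausible numbers rather than values from the lecture:

```python
def bundle_force(fiber_stiffness, fiber_bend):
    """Force (newtons) the hair bundle exerts on the fiber: by Newton's third
    law it equals the fiber's Hookean restoring force, stiffness times bend."""
    return fiber_stiffness * fiber_bend

# Illustrative numbers, not measurements from the lecture: a flexible glass
# fiber of stiffness 500 micronewtons per meter, bent by 10 nanometers,
# implies the bundle is producing a force of about 5 piconewtons.
force = bundle_force(500e-6, 10e-9)
```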
So we can by this means basically measure what forces the hair bundle is producing. And using this kind of apparatus, we can see all four manifestations of the active process that I just showed you-- first of all, amplification. If we move the base of the fiber back and forth through plus or minus 10 nanometers, we find that the tip-- that is, the hair bundle-- is moving twice as much. You can see a one-to-one coupling between the stimulus and the response.
But you can see that the response is significantly amplified. This green line and this green line show the level of the stimulus. So amplification is there. Tuning is there. The system is tuned to a specific frequency. Here, it's only about eight or 10 Hertz. This is a low frequency organ from the frog. But we've measured higher frequencies in certain other organs. It shows compressive nonlinearity.
So if we look at the relationship between the stimulus that we put in and the response that comes out, the vibrations of the basilar membrane, or of the hair bundle in this case, we find that over most of the range where we do our hearing, from about two nanometers to about 100 nanometers, the slope is less than unity. It's about one third. And that turns out to be significant in mathematical terms that I don't have time to go into today, but I'll discuss it tomorrow-- in terms of a mathematical object called the Hopf bifurcation, which we think is present in the system.
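That one-third slope is the fingerprint of a system poised at a Hopf bifurcation. As a sketch of the standard result (a textbook relation, not a derivation from the lecture):

```latex
% For an oscillator poised at a Hopf bifurcation and driven at its
% characteristic frequency, the leading-order force-response relation is cubic:
F \;\propto\; x^{3}
\qquad\Longrightarrow\qquad
x \;\propto\; F^{1/3},
\qquad
\frac{d\,\log x}{d\,\log F} \;=\; \frac{1}{3}.
```

The one-third power law is exactly the compressive growth described above: the response still rises with the stimulus, but far more slowly than linearly.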
And finally there's spontaneous oscillations that we think cause the spontaneous sound emitted from the ear. So you can see this hair bundle flutters back and forth at a pretty regular high frequency. This hair bundle is fluttering back and forth at a slower frequency and is more irregular. And if we measure this fluttering on many hair bundles, we find that they show all kinds of different personalities.
Some of them are of almost metronomic regularity. Some are slower. Some are faster. Some are one sided. Some have this rather complex waveform with a fast and a slow component, fast and slow, fast and slow. Some of them are just ugly. But anyway, these all occur in different hair bundles under different circumstances.
So how does this come about? And here we find something quite interesting. If we push on the hair bundle and measure how much force we need to apply in order to move it through different displacements, we get this strange curve. If you take an ordinary elastic object and you push up on it, you would expect a linear relationship that fits Hooke's law. Hooke's law just says that the force is proportional to the displacement.
That's a straight line passing through the origin. We instead have a curve with a strange loop in it. And in this region, there's actually a negative stiffness, which is something you've never heard of. We had never heard of it when we found it. So here's what negative stiffness means. When the hair bundle's at rest, a probe attached to it is straight and everything is very relaxed.
If we move the base of the probe to your right, a large distance, it will pull the hair bundle in the same direction. But because the probe is flexible, it will bend back to the left. That's no great surprise. The weirdness comes about if we move the base of the probe a small amount. We find that the hair bundle actually moves farther than we ask it to move. Similarly in the other direction, if we move it a little bit to the left, it moves farther to the left.
So it's weird in that it's as if I walked up to this podium and pushed on the podium. And instead of pushing against me, the podium pulled me farther in the same direction. It says that the podium or the hair bundle has in it the capacity to do external work on me. Somehow there's energy stored in the system that can make the probe move farther than it really wants to move.
This peculiar relationship, this nonlinearity that I've just showed you, actually has an impact in musicology, of all things. So this was an effect discovered by Giuseppe Tartini, a great violinist, in 1714. At least that's what he says. It wasn't published until 1767. What he noticed while tuning his violin is that when he played two notes, two strings, a high frequency F2 and a lower frequency F1, he could, not surprisingly, hear or perceive those two frequencies. But he could also perceive other frequencies that were not harmonically related. He could hear F2 plus F1, F2 minus F1, twice F2 plus or minus F1, twice F1 plus or minus F2.
He was perplexed by these things, to say the least. He made some use of them in his compositions. And subsequent composers have used them in several different pieces in which these artificial sounds-- they're called phantom tones. They're called difference tones. They're called distortion products. These tones, which are not present in the melody, nonetheless get heard by your ear.
So Ligeti, Stockhausen, and other people of this stripe have used these difference tones in actual musical compositions. The audience is meant to hear things that are not being played by any instrument. And I want to show you that, in fact, this really happens and you can really hear these things. First I should say that hair bundles show these difference tones as we anticipate. Here's a very brief mathematical primer on how it works.
If we take this original relation with the kink in it, the yellow curve, that can be fit, this dotted red line, by the sum of a linear component, a quadratic component, a cubic component, and other higher components. And if you play two frequencies, F1 and F2, that are shown here and put those into this expansion for this curve, you will see that the linear term gives us the same frequencies we put in. The quadratic term gives us the second harmonic, but also F2 plus or minus F1.
And the cubic term gives us these complicated cubic difference products. If we do this in a real hair bundle-- if we stimulate with these two frequencies and measure the forces the hair bundle produces, it in fact produces all the expected difference tones. This is theory. This is experiment. This is as good agreement as biology gets. So here's the demonstration.
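That expansion argument can also be checked numerically. Here is a minimal numpy sketch; the sample rate, primary frequencies, and polynomial coefficients are arbitrary choices of mine, not values from the experiment:

```python
import numpy as np

fs = 8000                 # sample rate (Hz); one second of signal
t = np.arange(fs) / fs
f1, f2 = 1000.0, 1200.0   # two primary tones (illustrative frequencies)

x = np.sin(2 * np.pi * f1 * t) + np.sin(2 * np.pi * f2 * t)

# A toy stand-in for the kinked force-displacement curve: the linear,
# quadratic, and cubic terms of a Taylor expansion (coefficients arbitrary).
y = x + 0.3 * x**2 + 0.2 * x**3

def amplitude_at(signal, freq_hz):
    """Amplitude of the spectral component at an integer frequency
    (1 Hz bins here, so there is no spectral leakage)."""
    spectrum = np.abs(np.fft.rfft(signal)) * 2 / len(signal)
    return spectrum[int(freq_hz)]

print(amplitude_at(y, f2 - f1))      # ~0.3: quadratic difference tone at 200 Hz
print(amplitude_at(y, 2 * f1 - f2))  # ~0.15: cubic difference tone at 800 Hz
print(amplitude_at(x, 2 * f1 - f2))  # ~0: absent before the nonlinearity
```

The quadratic term puts energy at F2 minus F1, and the cubic term at twice F1 minus F2, the very tone featured in the demonstration that follows.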
I'll play one pure tone, originally called F1. It's not very interesting. It's seven seconds. F2 is a little more interesting. It goes along for a few seconds and then goes down. Now the most prominent of these difference tones is twice F1 minus F2. So twice F1 is going to be up here, and twice F1 minus F2 starts out a small difference and then becomes larger.
So it starts out low and then goes up. And the reason the demonstration is contrived this way is that you're going to hear one pure tone. You're going to hear one descending tone. But your ears will synthesize, if they're working, an ascending tone. So you're going to listen for that rising tone, particularly late in this last demonstration. So here's seven seconds of F1 just to show you that there are no tricks.
[LONG TONE]
Here's F2, which is only slightly more riveting.
[LONG DESCENDING TONE]
Now, the exact same two things played simultaneously. Listen for this rising tone.
[TWO TONES PLAYED TOGETHER]
Can you hear it? Many people can hear it. I'll do it one more time if I can get it to go back.
[LONG DESCENDING TONE]
So in fact, it's not that faint. It's about 30 decibels less strong than the primary tones, the tones that are actually being sounded. And as I said, that's loud enough that it can be used in musical composition. So why is it there? It seems that it really stems from the activity of the little transduction channels, little protein pores that I showed you that open and close. And the evidence is this. I can take a hair bundle, such as the one shown here that's oscillating spontaneously, and block the channels, sticking them in one position, by spraying on a drug. It happens to be gentamicin, which is one of the drugs that interferes with the functioning of these cells.
When the drug is turned off, the oscillation comes back again. Now if I make a measurement here or here, I can see this kinked curve. But while the drug is present, the curve becomes entirely linear. It follows Hooke's law. There's a linear relation between displacement and force. So that shows that the channels have to be flickering back and forth in order for this phenomenon to occur. So what do the channels have to do with it?
Here's the idea. This represents what we know about the channel. You can see it's approaching. It has a pore. Ions can go through the pore. It's got a gate, a molecular gate that can be opened or shut. And it has a tip link attached to it. This is, in fact, all we know biochemically about this channel at this point. This represents one stereocilium and this represents another.
So the idea of the experiment I've been showing you is this. If you were to come up and push on my arm, stimulating the system, you would push it to the right. The rubber band would elongate, and you would measure a certain force. And the force would be the same if the channel is closed or if the channel is open.
But the interesting thing that occurs is this. Suppose you start pushing. And while you're pushing, the channel opens. The rubber band goes slack. So you expect over a certain range of positions that the stiffness will decline because the rubber band has gone out of the picture. And that's exactly what we're seeing in that curve.
The zigzag curve starts out with all the channels closed, and there's a certain stiffness. Then the channels open and you get the negative stiffness region. And now finally the channels stay open while the rubber band elongates farther. So it's this flickering of the channels, between the closed and open states, that gives rise to the phenomenon and to the musical effect that I've just shown.
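This gating-spring picture can be turned into numbers. In the toy calculation below, every parameter value is an illustrative guess of mine, not a measurement from the lecture; the point is only that when the gating term outweighs the ordinary stiffnesses, the force-displacement curve acquires a region of negative slope.

```python
import math

# Toy gating-spring model of a hair bundle; all numbers are illustrative.
N = 50          # transduction channels acting in parallel
kappa = 1e-3    # gating-spring stiffness per channel (N/m)
d = 8e-9        # distance the molecular gate swings when it opens (m)
k_sp = 0.2e-3   # stiffness of the stereociliary pivots (N/m)
kT = 4.1e-21    # thermal energy at room temperature (J)

def p_open(x):
    """Boltzmann open probability of one channel at bundle displacement x."""
    return 1.0 / (1.0 + math.exp(-kappa * d * x / kT))

def bundle_force(x):
    """External force needed to hold the bundle at displacement x.
    An open gate (swing d) slackens its spring, subtracting from the force."""
    return k_sp * x + N * kappa * (x - d * p_open(x))

# Scan displacements and look for a region where the slope (stiffness) is negative.
xs = [i * 1e-10 for i in range(-300, 301)]  # -30 nm .. +30 nm
slopes = [(bundle_force(b) - bundle_force(a)) / (b - a) for a, b in zip(xs, xs[1:])]
print(min(slopes) < 0)  # True: a negative-stiffness region near p_open = 1/2
```

With these toy numbers the gating term outweighs the combined positive stiffnesses near an open probability of one half, which is where the negative-slope loop of the measured curve sits.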
Here's a diagram showing the same thing. The idea again is there's a channel that can be closed or open. It has a little gate that swings through some distance d. It's attached to a spring of stiffness kappa. Here are three of these channels set side by side in parallel. If we apply a force, that force is equally distributed among the three springs and equally divided among the three closed channels.
But if one channel pops open, there's less force on that spring. There's therefore more force on the other two. And that added force causes a second channel to open. Now those two channels are relaxed. This last channel bears still more force until it opens also. So the system tends to be bistable. When the force is low, all the channels are closed and happy.
When it's high, all the channels are open. But in between, it's unstable. It tends to cause an avalanche. As soon as a few channels open, all the others open. As soon as a few close, all the others tend to close. And that's what causes this instability of the system. Now, you can make an amplifier with an instability of that sort. Those who are in electronics know about tunnel diodes, which use this exact effect. You need something else to make it go. You need a power supply of some sort. And it turns out that the power supply in the ear comes from a myosin molecule. Myosin, remember, is the protein in our muscles that's responsible for their contracting and moving back and forth.
This is a different type of myosin, but it works very much the same way. And the evidence that there's a power source is shown here. This is a simple experiment in which we apply a force to the hair bundle for a few milliseconds and then turn it off. When we do that, we see that transduction channels open, but then many of them shut, and the others close with a time constant of about 25 milliseconds.
At the same time, the hair bundle moves in the direction of the force, but then sags farther. It relaxes in the same direction. So again, it's as though I come up to the podium, I push on the podium with a constant force and it moves some distance.
But as time goes on, it moves farther. It gets softer. So there's some mechanical rearrangement going on in the hair bundle that causes that to occur. And that turns out to be mediated by the myosin molecule. I won't show you all the evidence for that, but I'll just show you that the myosin molecule is there. Its name is myosin 1c.
It's found particularly near the tips of the stereocilia, along this beveled or scalloped edge of the hair bundle. And our working hallucination for what's going on is that at the top of the tip link, there is this little cluster of about 50 myosin molecules that can clamber up and down the actin filaments, by that means adjusting the tension in the tip link.
So here's a slightly frightening slide that shows you how it all works. The hair bundle's initially resting. We push on it. The tension opens the channels. Calcium and potassium run in. But the idea is now that these upper insertions of the tip links, marked with the green dots, physically slide down.
You can see them scooting down over here. And as they do, the channels are allowed to re-close. So we initially see a spurt of current going in. But then as adaptation occurs, that fades away. And at the same time as adaptation occurs, the bundle becomes softer. Why softer? Because tension is going out of the springs as they're relaxed.
And in fact, we make both of those observations and they fit with this scheme. The same thing happens in the opposite direction. If we move the hair bundle to your left, these initially become slack. But now the myosin molecule does mechanical work, climbing up and restoring tension to the system. And that allows channels to reopen when the hair bundle's put back at rest.
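The adaptation quoted a moment ago, a current that declines with a time constant of about 25 milliseconds, can be idealized as a single exponential. This is a simplification of mine, since real adaptation is only partial:

```python
import math

def adapted_fraction(t_ms, tau_ms=25.0):
    """Fraction of the initial transduction current still flowing t_ms after
    a step stimulus, idealizing adaptation as one complete exponential decay.
    The ~25 ms time constant is the value quoted in the lecture; treating the
    decay as complete and single-exponential is an assumption for this sketch."""
    return math.exp(-t_ms / tau_ms)

print(round(adapted_fraction(25.0), 2))  # 0.37: ~one third remains after one time constant
print(round(adapted_fraction(75.0), 2))  # 0.05: largely adapted after three time constants
```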
A simple model based on this does fine at explaining everything we know about human hearing. So if we use such a model, a mechanical stimulus in the absence of this feedback mechanism causes only a tiny mechanical response. But if we turn on the feedback, we get this much larger response. In fact, it's about 100 times as big, fitting with the amplification that I've shown you.
The system is highly tuned. So for a particular set of parameter values, for a particular numerical set, it's tuned to a particular frequency. Here, it's 5 kilohertz. And it shows compressive nonlinearity. As the sound gets louder and louder and louder, the sensitivity or gain gets lower and lower. And finally, the system passes through what I called before the Hopf bifurcation.
As it becomes more and more sensitive, the amplification of a constant input gets greater and greater, until finally the system can become unstable and begin to emit sound, which we detect as the spontaneous otoacoustic emissions. So we think that this active motion of the hair bundle really underlies the amplifying process, that is, the active process, of the inner ear.
So there's been a real change in our understanding of the cochlea. It used to be thought that this was essentially a passive structure, in which the mechanics of this basilar membrane really determined the vibration. Now we know it's an active process. We think it operates at a Hopf bifurcation, which makes the response much stronger and much more sharply tuned.
At least some of that is owing to the active hair bundle motility that I've just shown you. There's another process called membrane-based electromotility that I've not shown you today. We expect that these two processes somehow work together. They collude to give the ear its remarkable technical properties.
Now I want to shift gears in the last few minutes to deal with the problem of deafness that I introduced at the outset. We start out with nice hair bundles like these, with a very regular arrangement. With aging, with exposure to drugs, with exposure to loud noise, with other bad luck, the hair bundles become deranged like this. The hair cells eventually die.
These hair cells are not replaced by mitosis in our ears. So as we lose some of the 16,000 hair cells, we progressively lose sensitivity to the corresponding frequencies. Most often, we lose first the sensitivity to high frequencies. And then progressively lower and lower frequencies are involved.
One thing that we and other groups are doing is trying to find ways of regenerating these cells. And I'll only make a brief mention of this, but I think it's worth your knowing about. We use in our lab the zebrafish.
This is a one-week-old zebrafish larva. It has the nice property that it has hair cells on its skin, on its surface. This, many of you know, is the lateral line organ. Each of these bright spots is a cluster of about 20 to 30 hair cells along the tail. The fish uses these to sense other fish nearby when they're schooling. It can detect predators. It can feel prey. It can feel water motion.
The nice thing about these clusters is that they can regenerate hair cells. So unlike ourselves, these guys are replacing their hair cells all the time. And we can see this happen and even begin to see the cells that give rise to new hair cells. So here's a movie showing just one of these particular clusters.
I'll show you at the outset that this cell at the bottom is black. So keep that in mind. These are pairs of hair cells. They're labeled with a fluorescent protein that is turned on when cells decide that they want to become hair cells.
And as you can see, this cell has made that decision. It's becoming progressively greener as the movie runs. So we have one, two, three pairs of hair cells here, one, two, three, maybe four pairs of hair cells there.
Now watch this cell. It's going to round up in a minute. A very excited macrophage comes by. And then it will undergo a division-- pop, into two cells. And those cells will become progressively brighter. Notice this cell is about to explode. That cell has lived out its normal lifetime and is undergoing the process called apoptosis.
So in this system, there is continual renewal of hair cells. And there's sort of a dual conveyor belt. There's a stem cell up here somewhere, a stem cell down here somewhere, that is producing new stem cells, that is, replicating itself. And it's producing another cell called the transit amplifying cell that then always splits into two antisymmetrical hair cells.
And they then mature. And as they gradually age, they turn orange or whatever this color is, and then die and are replaced. But what we and others hope to do is to identify these stem cells and to ask whether our own ears have them as well. In other words, do we have cells that could be induced to undergo replication and to replace the missing cells?
In the meantime, there's another solution. And this is a technological solution called the cochlear prosthesis, of which some of you have heard. The cochlear prosthesis is an electronic replacement of the missing hair cells. So here we have a normal cochlea that I showed you at the outset with 16,000 hair cells which respond selectively to particular frequencies and send information down the nerve fibers.
Here we have a cochlea that has been so damaged that it has no hair cells left at all. So a hearing aid would not help such a person, because even if the sound is made very loud, there's still nothing to detect it. But as you can appreciate, all that the brain knows about what's going on is what comes down these nerve fibers.
So what if you could use an electrode, a wire like this one, to shock those nerve fibers? The brain at the other end wouldn't know any different. And that's what the cochlear prosthesis does. It produces an electrical signal that selectively stimulates particular groups of the nerve fibers corresponding to the frequencies of a sound that's being heard.
This is quite an old cochlear prosthesis. This is from the '70s. But it's much easier to see how it works in this diagram or picture than in modern ones. This is a coil made out of silastic plastic that's placed into the bottom one third of the human cochlea. It consists in this case of eight pairs of electrodes.
So you can see electrodes here, here-- this pair and this pair are particularly clear-- going around. And each of those is associated with a couple of little wires. So one wears some sort of an apparatus that picks up the sound. It used to be worn in the pocket. It was the size of a pack of cigarettes. Now it's worn typically just on the frame of eyeglasses. It's a little widget that fits behind the ear.
It picks up the sound, breaks it down now into 20 or so channels of different frequency. It then sends those channels of information across the skin by a magnetic antenna. They're picked up and then sent down the 20 wires to the pairs of electrodes which shock the corresponding nerve fibers. Now this is a remarkably impoverished amount of information. I told you we normally have 25,000 nerve fibers and 16,000 hair cells.
We've winnowed this down to just 20 electrodes. And when people began doing this work, it was thought to be absurd, that there was no way that that little information could be valuable. But in fact they have worked out spectacularly well. And I want to give you a demonstration of it. What I'm going to do is to play you a bit of speech.
And this bit of speech has been decimated, or more than decimated. I've removed from it all of the information except five channels, so just five narrow bands of frequencies. And I should tell you before you get upset, nobody can understand this. But I wanted to give you an idea of what five channels of information sounds like.
[NOISE]
OK, so that's badly garbled. But the first prosthesis had only one channel and people found them valuable, because they aided lip reading. You could hear clicks and pops and noises at the back of the mouth that you cannot see when you're lip reading. So five channels, when this first came along, was already a real success.
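Demonstrations of this kind are usually made with a noise vocoder, which keeps only the per-band amplitude envelopes of the speech. Here is a loose numpy sketch of the idea; the band edges, smoothing window, and FFT-mask filtering are my illustrative choices, not the actual processing of any implant:

```python
import numpy as np

def smooth(x, fs, window_ms=20.0):
    """Moving-average smoother (~20 ms window) to extract a slow amplitude envelope."""
    w = max(1, int(fs * window_ms / 1000.0))
    return np.convolve(x, np.ones(w) / w, mode="same")

def channel_vocoder(signal, fs, n_channels=5, f_lo=100.0, f_hi=4000.0):
    """Crude FFT-based channel vocoder: split the signal into log-spaced bands,
    keep only each band's envelope, and use it to modulate noise in that band.
    This loosely mimics the information a multichannel implant delivers."""
    edges = np.geomspace(f_lo, f_hi, n_channels + 1)
    spectrum = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), 1.0 / fs)
    rng = np.random.default_rng(0)
    noise_spectrum = np.fft.rfft(rng.standard_normal(len(signal)))
    out = np.zeros(len(signal))
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (freqs >= lo) & (freqs < hi)                     # one frequency band
        band = np.fft.irfft(spectrum * mask, n=len(signal))     # band-limited speech
        envelope = smooth(np.abs(band), fs)                     # its slow envelope
        noise_band = np.fft.irfft(noise_spectrum * mask, n=len(signal))
        out += envelope * noise_band                            # envelope-modulated noise
    return out
```

Feeding recorded speech through this with n_channels set to 5 versus 10 or 20 gives a rough feel for the jump in intelligibility demonstrated here.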
Here is the same speech, now with 10 channels.
[NOISE]
Anybody get anything from that? Here's the original conversation. It's a monologue. It's me again. "The cochlear prosthesis is now in everyday use by nearly 100,000 people worldwide."
It's now more than 100,000 since I made this. It really works. It's a spectacular success. And our brains very rapidly adapt to the very impoverished information. I want to show you that. I'm going to play the 10 channels again, the one where you could just barely make out some words. Try not to understand this.
It's actually a prosthesis. It's now in everyday use by nearly 100,000 people worldwide.
I mean, we very rapidly lock on to this. They all sound the same to me at this point. I've heard it enough times. I can hear the five channels equally clearly. The reason I want to emphasize this is that it's a real success story.
Again, I don't own stock. It's a real success story in that more than 100,000 people worldwide who were formerly totally deaf are now living with these things on a daily basis. There may be some here today. They can carry out normal conversations. They can use telephones.
They have recaptured some of the ability to listen to music, though it's somewhat limited. The disadvantages of the device are that it is only at the base of the cochlea, so only high frequencies are heard. Everything is shifted to a Donald Duck sort of range. You don't have the full range of frequencies as yet. But given the alternative that these people had before, total deafness, it's an enormously effective procedure.
In fact, the state of Oregon, which has rated various medical procedures, considers this the second most cost-effective procedure in medicine, after cardiac angioplasty. And you can see why, because in a kid this may have an effect for 70 years of life or more. It really is an enhancement of the quality of life that can be very long lived. The other reason for mentioning it is it gives us increasing hope that it will be possible to have similar prosthetic successes on other fronts.
We would, of course, like to do the same thing to deal with blindness, to restore vision with a visual prosthesis. We would, of course, like to have similar success with spinal injuries, restoring mobility to people who are quadriplegic or paraplegic. And this gives us by far the most optimistic forecast to date of what prosthetics can do.
So I'll conclude at this point simply by saying, I hope I've shown you something about how remarkable the ear is, given you some sense of what can be done to deal with the problems and what we hope to be able to do about it in the future. I thank you all for your attention.
[APPLAUSE]
So the question is, why in the tuning curves that I showed you is there this asymmetry? What I showed you is a tuning curve that looks sort of like this. This is frequency, or the logarithm of frequency. And there is a sharp cutoff on the high frequency side, a broad cutoff on the low frequency side. The reason is that the information moves along the basilar membrane as a so-called traveling wave. So if we picture the basilar membrane, this is the base. This is the apex.
And here it is at rest. If we play a particular frequency, let's say a moderately high frequency, the oscillation actually looks like this. It is moving in this direction. This is a traveling wave. And the envelope of that wave is like that. If we instead play a frequency that is, say, a little bit higher, now we will get a wave that peaks a little bit more in this direction.
And again, it has an envelope like that. So the asymmetry comes from this fact: the frequency cutoff is very sharp on this side and very broad-- it extends all the way down-- on this side. And this gives you that asymmetry. Yes?
AUDIENCE: In the response to the prosthesis, where does that learning curve occur? In the brain?
JIM HUDSPETH: Yes, in the brain. So the question was, if one gets a cochlear prosthesis, there is clearly some learning curve in adapting to it and learning to make the most of it. That is clearly a process going on in the brain. But there is also some adjustment of the cochlear prosthesis when it's installed.
So what they do is they place the thing surgically, with the 20 wires or whatever coming through the skin. And then, after the surgery, once the person has recovered, they hook the person up to a computer. And the computer then stimulates each of the wires in different combinations and with different electrical intensities.
And the idea is to ask the patient to say when you can hear the tone, how much stimulus is required, and so on. Because obviously, you don't want to pass more current than is necessary, because that might damage the nerve fibers. But some nerve fibers will be closer to the wires. Some will be farther away. So it all has to be adjusted.
Sometimes one or more wires might have failed. So we have to get rid of those. Sometimes one or more of the nerve fibers or many of the nerve fibers may be missing or damaged by the original condition. So over a period of a few days, the computer basically interrogates the person about which wires are working well, which ones are not working well, and how much electrical signal should be passed over each.
All that information is then incorporated in the microcircuitry of the detector and programmed for that individual. Then the skin lesion is sealed, and a magnetic antenna is put in. So there's no longer any wire penetrating that could cause an infection or the like. And now this device, which is intelligent and tuned to that particular person's ear, does the right thing, stimulating just the useful channels and stimulating each of them the right amount.
And finally, in terms of the learning curve, as you saw just now when you heard it, it's remarkably sharp. People pick things up very quickly. When I was in San Francisco, I wasn't involved in the experiments, but other people were doing them.
And they were looking, under FDA tests, for a few patients who were postlingually deaf-- people who had been hearing all their lives and had abruptly lost all of their hearing, so there was nothing to risk in doing the surgery, and who had lost their hearing from causes like cisplatin that were known to leave the nerve fibers intact. So these were the ideal candidates.
There was a woman who came in from Ireland who had one of these implants. And everybody was very disappointed. She was the second or third that they implanted. She seemed like the ideal candidate, but she got up off the table and people tried to talk to her and she didn't get any of it.
And then her son came in and said something. And she said, "Aye, begorrah. I understand you perfectly." She had an incredibly strong Irish accent, unlike all the surgical people. But she instantly understood her son talking to her. So it's that fast. It really can be a very sharp learning curve. Yes?
AUDIENCE: [INAUDIBLE]
JIM HUDSPETH: Yes. So there are two hypotheses about the circularity. And you've touched on both of them. The simple hypothesis is, you want to have better frequency resolution, so the cochlea needs to get longer. And if you curve it, it keeps it from sticking into the cerebellum. So it's just a way of coiling it and getting it out of the way.
But people have also posited that the curvature causes better transmission of the lower frequencies all the way up to the top. The effect in models does not seem to be very strong. So whether that's enough to have helped evolution drive it or not, we don't know. But there is a slight improvement, apparently, as a result of this coiling in the propagation of a traveling wave to the top. Yes?
AUDIENCE: Does a dog or a dolphin also have 16,000 hair cells?
JIM HUDSPETH: Yes. Some of them have even more. So bats, for example, have many times as many. Whales have many times as many as we do. And the other neat thing about some of those animals like bats is they have what's called an acoustic fovea. So as you know, we have a fovea in our eye in which the density of the cones is much greater.
That's what we use to fixate on things and read newsprint or whatnot. They have the comparable thing in the cochlea. So in our cochlear map, as we go along the length of the cochlea, as I was showing you, the frequencies were low here and high here. And on a log scale, it's more or less linear. A bat will have a scale that looks something like this.
It has a very large range of the total cochlea devoted to a very small range of frequencies. And the reason is that it's putting out a relatively pure frequency. In this case, it's 60.5 kilohertz. As that frequency hits the prey, it bounces back. But if the prey is moving away, it's Doppler shifted to lower frequencies. If the prey is moving towards the bat, it's Doppler shifted to higher frequencies.
So if the bat is emitting at, let's say, this frequency, these very slightly higher frequencies or very slightly lower frequencies tell the bat its speed with respect to the target. So it's specialized to find just those particular frequencies, and it spends much less time worrying about low-frequency and high-frequency noise. Yes?
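The speed-from-Doppler arithmetic described above can be sketched numerically. This is an illustrative calculation, not part of the lecture: it assumes the standard two-way (echo) Doppler formula in air, a nominal sound speed of 343 m/s, and hypothetical prey speeds; the 60.5 kHz call frequency is the one mentioned in the talk.

```python
# Two-way Doppler shift for an echolocating bat (illustrative sketch).
# Assumptions: speed of sound in air ~343 m/s; emitted constant-frequency
# call of 60.5 kHz as in the lecture; prey speeds are hypothetical.

C = 343.0        # speed of sound in air, m/s
F_EMIT = 60_500  # emitted frequency, Hz

def echo_frequency(v_closing):
    """Frequency (Hz) of the returning echo for a target closing at
    v_closing m/s (negative = moving away).
    Two-way Doppler: f' = f * (c + v) / (c - v)."""
    return F_EMIT * (C + v_closing) / (C - v_closing)

def closing_speed(f_echo):
    """Invert the two-way Doppler formula to recover the closing speed (m/s)."""
    return C * (f_echo - F_EMIT) / (f_echo + F_EMIT)

f_toward = echo_frequency(2.0)   # prey approaching at 2 m/s -> higher pitch
f_away = echo_frequency(-2.0)    # prey receding at 2 m/s -> lower pitch
print(f"approaching: {f_toward:.0f} Hz, receding: {f_away:.0f} Hz")
print(f"recovered speed: {closing_speed(f_toward):.2f} m/s")
```

Note that even a brisk 2 m/s closing speed shifts the echo by only a few hundred hertz around 60.5 kilohertz, which is why the bat devotes such a disproportionate stretch of its cochlea to that narrow band.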
AUDIENCE: Professor, [INAUDIBLE]
JIM HUDSPETH: OK, so the question is, is there any difference between the base and the apex? What is it that sorts the different frequencies out at different positions? And the answer is that there are as many as four different filters, four different things that are doing it. So first and most strikingly is this basilar membrane behavior itself. This was worked out in the '50s and '60s by von Békésy. And what he found is that along the basilar membrane, this end is very floppy and soft and broad.
This basal end is very narrow and very taut. So it really is like having the high strings of a violin here, the low strings of a bass here. They vibrate sympathetically to different frequencies. Now the real vibration pattern is complicated because, as I mentioned earlier, there's actually this traveling wave moving from the base towards the apex and then stopping at some particular position that is characteristic of each particular frequency. That's one tuning mechanism.
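The position-to-frequency map this stiffness gradient produces is roughly exponential, and for the human cochlea it is often summarized by Greenwood's empirical function. The constants below are the commonly quoted human-fit values; this is a sketch of that published fit, not something derived in the lecture.

```python
# Greenwood's empirical frequency-position map for the human cochlea (sketch).
# x is the fractional distance from the apex (0 = apex, 1 = base).
# Constants are the commonly quoted human fit: A = 165.4, a = 2.1, k = 0.88.

def greenwood_hz(x):
    """Characteristic frequency (Hz) at fractional distance x from the apex."""
    return 165.4 * (10 ** (2.1 * x) - 0.88)

for x in (0.0, 0.25, 0.5, 0.75, 1.0):
    print(f"x = {x:.2f}: {greenwood_hz(x):8.0f} Hz")
```

The map runs from roughly 20 Hz at the floppy apex to roughly 20 kHz at the taut base, and equal steps along the membrane give approximately equal steps in log frequency, matching the "more or less linear on a log scale" description of the cochlear map.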
There is a difference. If we look at the base, we find small hair cells that have short stereocilia but rather a lot of them, and in some animals at the apex, we find larger hair bundles with fewer stereocilia. These act like tuning forks. This is a tuning fork tuned to a low frequency; this is a tuning fork tuned to a high frequency. And in fact, they work that way. Thirdly, there are many hair cells, not including our own, that have a remarkable electrical property.
Those of you who are electrophysiologists know that if you pass current across the membrane of most cells, you simply see a charging curve. That's the voltage. If you do the same thing in a hair cell, you see ringing. The hair cell oscillates back and forth. And rather remarkably, if you measure a hair cell at the base-- this would be a chicken's cochlea-- it oscillates at a high frequency.
If you make the same measurement at the apex, it oscillates at a low frequency. So different hair cells respond to different frequencies along the length. And this comes about because there is a calcium channel that lets calcium into the cell and depolarizes it.
There is a calcium-dependent potassium channel that lets potassium flow out. And the ebb and flow of positive current into and out of the cell makes these oscillations. And each cell is tuned by the number of these channels and their kinetic properties. And then finally, the fourth process is the active process that I told you about. It's somehow tuned-- we don't know how-- to different frequencies at different positions.
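The push and pull between a depolarizing inward current and a delayed repolarizing outward current, described above, can be illustrated with a generic two-variable relaxation oscillator. The sketch below uses the classic FitzHugh-Nagumo equations as a stand-in, not the actual hair-cell model: the real mechanism involves calcium and calcium-activated potassium channels with their own parameters. Here the recovery rate `eps` plays the role of the channel kinetics that tune each cell's ringing frequency.

```python
# A generic two-variable relaxation oscillator (FitzHugh-Nagumo) as a
# stand-in for hair-cell electrical resonance. The interplay of an inward
# depolarizing current and a delayed outward repolarizing current is
# analogous, but the real channels (Ca2+ and Ca2+-activated K+) and
# parameters differ; this is only an illustrative sketch.

def simulate(eps, t_end=300.0, dt=0.01, a=0.7, b=0.8, i_ext=0.5):
    """Euler-integrate FitzHugh-Nagumo; return the voltage-like variable v.

    eps sets how fast the recovery (outward-current) variable responds,
    which controls the oscillation frequency."""
    v, w = -1.0, 0.0
    trace = []
    for _ in range(int(t_end / dt)):
        dv = v - v ** 3 / 3 - w + i_ext   # fast inward (depolarizing) dynamics
        dw = eps * (v + a - b * w)        # slow outward (repolarizing) recovery
        v += dt * dv
        w += dt * dw
        trace.append(v)
    return trace

def cycles(trace):
    """Count upward crossings of v = 0, i.e. completed oscillations."""
    return sum(1 for p, q in zip(trace, trace[1:]) if p < 0 <= q)

slow = cycles(simulate(eps=0.04))
fast = cycles(simulate(eps=0.12))
print(f"slow recovery kinetics: {slow} cycles, fast: {fast} cycles")
```

Speeding up the recovery kinetics raises the ringing frequency, in the same spirit as the gradient of channel numbers and kinetics that tunes hair cells from apex to base.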
AUDIENCE: [INAUDIBLE] implants, why can they only hear the high frequency sounds?
JIM HUDSPETH: It's simply a matter of the way the operation is performed to date. Remember from this diagram here, the cochlea is a spiral with about three turns to it. So this is the base. And as I've said here, this is the high frequency end.
This is the apex, which is the low frequency end. It's simply a physical fact that surgically, the thing is threaded in here at the base and it goes in just over one turn. So it's basically only stimulating, say, from there to there. People are trying to make ones that they can put in farther.
But the problem is, the thing has to be somewhat rigid. It's plastic. It has to be rigid enough to keep its shape but at the same time flexible enough that it can snake its way a certain distance up the cochlea. And what people are worried about is, if it's too rigid, it could penetrate. It could push its way through this very delicate membranous structure, and that would do permanent damage to the ear. So this is as far, so far, as anybody's been prepared to go.
CARL HOPKINS: Can you join me in thanking Dr. Hudspeth?
[APPLAUSE]
James Hudspeth, F.M. Kirby Professor and head of the Laboratory of Sensory Neuroscience at Rockefeller University, spoke at Cornell on March 31 as part of the Spring 2010 University Lecture Series.
University Lectures were established by Cornell historian Goldwin Smith in the beginning of the 20th century, as a means of bringing the world to Cornell.