Video: Hearing
Transcript
Hearing is a highly complex and intricate sense, allowing us to detect, interpret, and respond to sounds from our environment. This process transforms sound waves, which are essentially vibrations in the air, into electrical signals that our brain can understand. From the capturing of sound waves by the external ear to sophisticated neural processing in the cortex, our auditory system performs an incredible feat of biological engineering that lets us communicate, enjoy music, and remain aware of our surroundings.
Today, we’ll explore the journey of sound through the ear to the brain as we learn about the physiology of hearing.
Sound conduction is the process by which sound waves move from the external ear to the brain as electrical signals. It starts when sound waves are captured by the external ear and funneled through the ear canal, also known as the external acoustic meatus, which leads to the tympanic membrane, or eardrum.
The eardrum vibrates according to the frequency and intensity of the incoming sound waves, representing both pitch and loudness. These vibrations are transmitted through the ossicular chain in the middle ear. This chain consists of three tiny bones -- the malleus, the incus, and the stapes -- which vibrate one after the other. The footplate of the stapes then presses against the vestibular or oval window, a membrane-covered opening that transfers these vibrations to the fluid-filled cochlea.
One of the challenges in sound conduction is the transition from air-based vibrations in the external and middle ear to fluid-based vibrations in the cochlea of the internal ear. To ensure efficient sound transmission, this process includes impedance matching.
What do we mean by impedance matching?
Air and fluid have different densities. Airborne sounds have a lower impedance compared to the vibrations in fluid, creating a mismatch. Without amplification, much of the sound’s energy would dissipate when moving from air to fluid.
The ossicles in the middle ear solve this problem by performing a crucial function known as impedance matching. By acting as a lever system, they increase the force of sound vibrations while reducing their amplitude, enabling the sound energy to penetrate the denser cochlear fluid.
Additionally, the ossicles conduct vibrations from the larger tympanic membrane to the much smaller vestibular window. This size difference concentrates the same force onto a smaller area, intensifying the pressure. Together, these two mechanical processes amplify sound pressure nearly 20 times, ensuring that even soft sounds can be conducted effectively to the internal ear.
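To make the arithmetic concrete, here is a minimal back-of-the-envelope sketch in Python. The figures used, an effective eardrum area of about 55 square millimeters, an oval window area of about 3.2 square millimeters, and a lever ratio of about 1.3, are rounded textbook approximations rather than exact measurements:

```python
# Back-of-the-envelope estimate of middle-ear pressure gain.
# All three values are approximate textbook figures, not exact measurements.

TYMPANIC_AREA_MM2 = 55.0    # effective vibrating area of the tympanic membrane
OVAL_WINDOW_AREA_MM2 = 3.2  # area of the vestibular (oval) window
LEVER_RATIO = 1.3           # mechanical advantage of the ossicular lever

# Pressure is force per unit area, so delivering the same force onto a
# smaller window multiplies the pressure by the ratio of the areas.
area_gain = TYMPANIC_AREA_MM2 / OVAL_WINDOW_AREA_MM2

# The ossicular lever additionally increases the force itself.
total_pressure_gain = area_gain * LEVER_RATIO

print(f"Area ratio gain:     {area_gain:.1f}x")            # about 17x
print(f"Total pressure gain: {total_pressure_gain:.1f}x")  # roughly 20x
```

Multiplying the two effects gives a pressure gain in the neighborhood of 20, the figure quoted above.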
As the stapes pushes on the vestibular window to create waves in the cochlear fluid, the cochlear or round window, a secondary membrane located at the base of the cochlea, compensates for the changes in pressure. The cochlear window moves outward when the vestibular window is pushed inward, allowing fluid to move freely within the cochlear chambers. This movement is essential for transmitting vibrations through the cochlea without creating pressure buildup, ensuring the fluid waves carry accurate representations of the original sound.
The middle ear not only amplifies sound waves but also contains specialized muscles that protect the ear from potential damage caused by loud sounds. Two critical muscles in the middle ear -- the tensor tympani and the stapedius -- work to dampen excessive vibrations, preventing overstimulation of the delicate structures within the cochlea. This protective mechanism is known as the acoustic reflex or attenuation reflex.
The tensor tympani is a small muscle attached to the malleus, the first bone in the ossicular chain. It reduces eardrum vibrations from loud sounds by increasing eardrum tension, particularly for low-frequency noises like thunder or machinery. It also activates during chewing or speaking to prevent self-generated sounds from masking external ones.
The stapedius, the smallest skeletal muscle in the body at about one millimeter in length, attaches to the stapes, the last ossicle. It reduces the stapes’ movement against the oval window, dampening high-frequency sounds like screams or clapping. Its contraction, part of the acoustic reflex, protects the inner ear from prolonged loud noises but cannot shield against very sudden sounds like fireworks.
Together, the tensor tympani and stapedius modulate sound transmission, balancing sensitivity to quieter sounds with protection from excessive noise, though their reflexes have limitations against rapid intense sounds.
The internal ear, specifically the cochlea, performs the crucial task of converting mechanical sound energy into electrical signals that the brain can interpret. This process is called sound transduction.
The cochlea is a fluid-filled, snail-shaped organ containing three chambers -- the scala vestibuli, scala media, and scala tympani. Within these chambers lies the basilar membrane of the cochlear duct, which supports the spiral organ, also known as the organ of Corti, housing thousands of specialized hair cells.
The waves generated in the cochlear fluid cause the basilar membrane to vibrate, stimulating hair cells located along it. This initiates a chain reaction that ultimately converts these vibrations into neural signals, which travel to the brain via the cochlear nerve.
Hair cells are specialized cells that are essentially the heart of hearing. Each hair cell has a bundle of stereocilia, tiny hair-like structures that bend in response to sound waves. The stereocilia on the hair cells are critical to their function as mechanoreceptors. They are organized in rows of increasing height, forming a staircase pattern, and are connected by tiny, spring-like proteins called tip links.
The endolymph surrounding the hair cells has a higher potassium ion concentration than ordinary extracellular fluid. When the stereocilia bend towards the tallest row, the tip links stretch and open potassium ion channels on the cell surface. Potassium enters the cell and depolarizes it. When they bend the opposite way, the tip links slacken, the channels close, and the cell hyperpolarizes.
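The deflection-to-opening relationship is often summarized with a sigmoid (Boltzmann-style) curve. The sketch below is purely illustrative; the half-activation point and slope are invented parameters chosen to show the shape of the response, not measured data:

```python
import math

def channel_open_probability(deflection_nm: float,
                             half_point_nm: float = 20.0,
                             slope_nm: float = 10.0) -> float:
    """Boltzmann-style open probability of the mechanotransduction channels.

    Positive deflection (towards the tallest row) stretches the tip links
    and opens more channels; negative deflection closes them. Both
    parameters are made-up illustrative numbers, not measured values.
    """
    return 1.0 / (1.0 + math.exp(-(deflection_nm - half_point_nm) / slope_nm))

for d in (-40, 0, 20, 60):
    p = channel_open_probability(d)
    print(f"deflection {d:+4d} nm -> open probability {p:.2f}")
```

Note that the curve never reaches exactly zero at rest, which mirrors the small resting fraction of open channels that lets hair cells signal deflection in both directions.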
There are two types of hair cells in the cochlea -- inner hair cells and outer hair cells. Inside the cochlea, inner hair cells extend into the endolymph and are arranged in a single row along the basilar membrane. Outer hair cells, on the other hand, sit in three neat rows, with their stereocilia projecting into the tectorial membrane above them.
When the stapes moves, pushing in or pulling out at the vestibular window, it creates pressure waves that travel through the scala vestibuli and scala tympani. These waves cause the basilar membrane to move, and that movement creates a shearing force between the outer hair cells and the tectorial membrane, bending the stereocilia. Meanwhile, the flow of endolymph bends the stereocilia on the inner hair cells.
So what do these hair cells actually do?
The outer hair cells work a bit like amplifiers. They physically contract when they’re depolarized and expand when hyperpolarized. This unique ability, called electromotility, boosts the motion of the basilar membrane, making the cochlea more sensitive to quiet sounds and better at picking apart subtle differences in pitch, crucial for things like understanding speech or enjoying music.
The inner hair cells are the real messengers. They’re the main sensory receptors and communicate directly with the brain. When they’re depolarized, they release more neurotransmitters; when hyperpolarized, they release less. These signals trigger action potentials in the afferent cochlear neurons, which then carry sound information to the brain.
Surrounding all of this are supporting cells. They help hold everything in place and maintain the delicate ionic environment that hair cells need to function properly. And here’s an interesting twist: in animals like birds and fish, these supporting cells can actually regenerate lost hair cells. But in mammals, including us, that ability is lost. That’s why damage to these hair cells often results in permanent hearing loss.
The good news? Researchers are actively exploring ways to stimulate hair cell regeneration in humans. It’s an exciting field and one that could lead to breakthrough treatments for hearing loss in the future.
Sound frequency, or pitch, is a key element of sound that allows us to distinguish between high and low tones. Frequency, measured in Hertz, refers to the number of sound wave cycles per second. Humans can hear frequencies ranging from 20 Hertz, perceived as deep, low-pitched sounds like a rumble, to 20,000 Hertz, which we recognize as high-pitched sounds like a bird’s chirp. This range encompasses an enormous variety of sounds from musical notes and speech to environmental noises and alarms.
While the full range of human hearing is impressive, our ears are particularly sensitive to frequencies between 1,000 Hertz and 3,000 Hertz. This range includes the primary frequencies found in human speech, which is critical for communication. Sensitivity in this range allows us to detect subtle nuances in spoken words, helping us differentiate between similar sounds and recognize emotional tones in voices.
The cochlea’s tonotopic organization separates these frequencies along the basilar membrane, where different frequencies stimulate different regions. High frequencies cause the base of the basilar membrane to vibrate, while low frequencies stimulate areas near the apex. Inner hair cells detect these specific frequencies and send precise signals to the brain.
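This frequency-to-place relationship is commonly captured by Greenwood's function, an empirical fit to cochlear data. The short sketch below uses the standard human constants for that fit to estimate where along the basilar membrane a given tone peaks:

```python
import math

# Greenwood's empirical frequency-to-place map, with the standard
# constants for the human cochlea: f = A * (10**(a * x) - k),
# where x runs from 0.0 at the apex to 1.0 at the base.
A, a, k = 165.4, 2.1, 0.88

def position_for_frequency(f_hz: float) -> float:
    """Fractional distance from apex (0.0) to base (1.0) where a pure
    tone of f_hz produces the peak basilar membrane vibration."""
    return math.log10(f_hz / A + k) / a

for f in (100, 1_000, 10_000, 20_000):
    x = position_for_frequency(f)
    print(f"{f:>6} Hz peaks about {x:.2f} of the way from apex to base")
```

Running this shows 100 Hertz peaking near the apex and 20,000 Hertz almost at the base, exactly the pattern described above.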
In addition to frequency, the ear is also capable of detecting a vast range of loudness levels, measured in decibels. Loudness corresponds to the amplitude of sound waves and determines how intense or soft a sound feels. The decibel scale is logarithmic, meaning that an increase of 10 decibels represents a tenfold increase in intensity.
At the lower end of the scale, a whisper measures around 30 decibels, while normal conversation is approximately 60 decibels. Sounds above 85 decibels, such as heavy traffic or loud machinery, can cause damage to the delicate structures in the inner ear if exposure is prolonged. At 120 decibels, which is the intensity of a rock concert or a jet engine, sounds become painful, and immediate hearing protection is essential to prevent irreversible hearing loss. Sounds above 140 decibels, such as fireworks, can cause instant damage.
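Because the scale is logarithmic, the gaps between these everyday examples are far larger than the numbers suggest. A quick sketch of the arithmetic:

```python
import math

def intensity_ratio_to_db(ratio: float) -> float:
    """Level difference in decibels for a given sound intensity ratio.
    Each 10 dB step corresponds to a tenfold change in intensity."""
    return 10.0 * math.log10(ratio)

# A 60 dB conversation carries 1,000 times the intensity of a 30 dB whisper:
print(intensity_ratio_to_db(1_000))      # 30.0 (dB difference)

# A 120 dB concert carries a million times the intensity of that conversation:
print(intensity_ratio_to_db(1_000_000))  # 60.0 (dB difference)
```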
Once sound vibrations are converted into electrical signals, they travel along the cochlear nerve to the brainstem, entering a complex relay system known as the auditory pathway.
First, they reach the ipsilateral cochlear nuclei, where basic sound features like pitch and intensity are processed. Next, signals pass to the ipsilateral and contralateral superior olivary complex, which compares the timing and intensity of sounds arriving at each ear. This comparison enables sound localization, allowing us to pinpoint the direction a sound comes from.
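To get a sense of the tiny timing cues the superior olivary complex compares, here is a simplified path-difference estimate. The ear separation and speed of sound are rounded, illustrative values, and the model ignores how the head itself bends sound around it:

```python
import math

# Simplified interaural time difference (ITD): the extra distance a sound
# travels to reach the far ear, divided by the speed of sound.

EAR_SEPARATION_M = 0.20     # approximate distance between the two ears
SPEED_OF_SOUND_M_S = 343.0  # speed of sound in air at room temperature

def itd_microseconds(azimuth_deg: float) -> float:
    """ITD for a distant source at the given angle from straight ahead."""
    path_difference = EAR_SEPARATION_M * math.sin(math.radians(azimuth_deg))
    return path_difference / SPEED_OF_SOUND_M_S * 1e6

for angle in (0, 30, 90):
    print(f"source at {angle:2d} degrees -> ITD of about "
          f"{itd_microseconds(angle):.0f} microseconds")
```

Even a source directly to one side produces a delay of only a few hundred microseconds, which is the scale of difference these brainstem neurons resolve.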
The signals then travel through the lateral lemniscus, refining rhythm and intensity, and arrive at the inferior colliculus, which integrates sensory information and coordinates reflexive responses to sudden sounds. From here, the signals reach the medial geniculate nucleus in the thalamus, a critical relay station where auditory information is further refined and organized.
The medial geniculate nucleus processes sound features such as pitch, rhythm, and intensity, and transmits these signals via the auditory radiation -- a bundle of neural fibers -- to the primary auditory cortex in the temporal lobe, located in Brodmann areas 41 and 42. This region is specialized for processing complex auditory information stemming from both ears, such as the recognition of speech patterns, musical tones, and environmental sounds, enabling us to interpret and respond to the auditory world around us.
In the auditory cortex, sound information is fully processed and interpreted. The cortex is organized tonotopically, mirroring the frequency mapping in the cochlea, allowing the brain to separate sounds by pitch. Specialized areas in the cortex handle different sound aspects, such as language in the left hemisphere and tonal quality in the right.
Fibers from the auditory cortex project to regions like the hippocampus and the amygdala, connecting sounds to memories and emotions, which allows us to associate specific sounds with experiences and react instinctively to familiar sounds.
Our brain’s ability to focus on specific sounds in noisy environments, known as the cocktail party effect, shows its capacity to filter and prioritize auditory input. This remarkable processing enables us to enjoy music, recognize voices, and respond to warnings. Hearing is, therefore, a highly cognitive process, shaped by learning, memory, and emotion, transforming simple sound waves into a rich auditory experience that influences our interaction with the world.
And that concludes our tutorial for today. To revise this content, check out our quiz and other learning materials in our study unit on this topic.
See you next time!