
Sound localization


about 40 kHz; peak frequencies between 40 kHz and 120 kHz), short-duration clicks (about 40 μs). Dolphins can localize sounds both passively and actively (echolocation) with a resolution of about 1 deg. Cross-modal matching (between vision and echolocation) suggests dolphins perceive the spatial structure of complex objects interrogated through echolocation, a feat that likely requires spatially resolving individual object features and integrating them into a holistic representation of object shape. Although dolphins are sensitive to small binaural intensity and time differences, mounting evidence suggests dolphins employ position-dependent spectral cues derived from well-developed head-related transfer functions for sound localization in both the horizontal and vertical planes. A very small temporal integration time (264 μs) allows localization of multiple targets at varying distances. Localization adaptations include pronounced asymmetry of the skull, nasal sacs, and specialized lipid structures in the forehead and jaws, as well as acoustically isolated middle and inner ears.
, which are more pronounced at higher frequencies; that is, if there is a sound onset, the delay of this onset between the ears can be used to determine the input direction of the corresponding sound source. This mechanism becomes especially important in reverberant environments. After a sound onset there is a short time frame during which the direct sound reaches the ears, but not yet the reflected sound. The auditory system uses this short time frame to evaluate the sound source direction, and keeps this detected direction as long as reflections and reverberation prevent an unambiguous direction estimation. The mechanisms described above cannot be used to differentiate between a sound source ahead of the hearer and one behind the hearer; therefore additional cues have to be evaluated.

It utilizes "smart" manikins, such as KEMAR, to glean signals, or uses DSP methods to simulate the transmission process from sources to ears. After amplifying, recording and transmitting, the two channels of received signals are reproduced through earphones or speakers. This localization approach uses electroacoustic methods to obtain the spatial information of the original sound field by transferring the listener's auditory apparatus to the original sound field. Its most notable advantages are that its acoustic images are lively and natural and that it needs only two independent transmitted signals to reproduce the acoustic image of a 3D system.
systems, whose major goal is to simulate stereo sound information. Traditional stereo systems use sensors that are quite different from human ears. Although those sensors can receive acoustic information from different directions, they do not have the same frequency response as the human auditory system. Therefore, when binary-channel mode is applied, the human auditory system still cannot perceive a 3D sound field. However, the 3D para-virtualization stereo system overcomes these disadvantages. It uses HRTF principles to glean acoustic information from the original sound field and then produces a lively 3D sound field through common earphones or speakers.
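Treating the HRTF as an LTI system, as described above, means binaural rendering reduces to a pair of convolutions of the source signal with each ear's head-related impulse response. A minimal sketch, using toy placeholder impulse responses rather than measured HRIR data (NumPy assumed available):

```python
import numpy as np

def binauralize(mono, hrir_left, hrir_right):
    # An LTI rendering: convolve the mono source with each ear's
    # head-related impulse response (HRIR) to get the two ear signals.
    left = np.convolve(mono, hrir_left)
    right = np.convolve(mono, hrir_right)
    return left, right

# Toy HRIRs (illustrative placeholders, not measured data): the right ear
# receives the sound two samples later and attenuated, crudely mimicking
# a source located on the listener's left.
hrir_l = np.array([1.0, 0.3])
hrir_r = np.array([0.0, 0.0, 0.6, 0.2])

mono = np.random.default_rng(0).standard_normal(1000)
left, right = binauralize(mono, hrir_l, hrir_r)
```

Real systems would use measured HRIRs (e.g. from the databases mentioned elsewhere in this article) and FFT-based convolution for efficiency; the structure of the computation is the same.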
long as the lateral direction is constant. However, if the head is rotated, the ITD and ILD change dynamically, and those changes are different for sounds at different elevations. For example, if an eye-level sound source is straight ahead and the head turns to the left, the sound becomes louder (and arrives sooner) at the right ear than at the left. But if the sound source is directly overhead, there will be no change in the ITD and ILD as the head turns. Intermediate elevations will produce intermediate degrees of change, and if the presentation of binaural cues to the two ears during head movement is reversed, the sound will be heard behind the listener.
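The dependence of these dynamic cues on elevation can be illustrated with a deliberately simplified far-field model in which ITD is proportional to sin(azimuth)·cos(elevation); the head radius and speed of sound below are assumed round values, not measurements from this article:

```python
import math

HEAD_RADIUS_M = 0.0875   # assumed average head radius
SPEED_OF_SOUND = 343.0   # assumed speed of sound in air, m/s

def itd_seconds(azimuth_deg, elevation_deg):
    # Simplified far-field model: ITD shrinks with cos(elevation),
    # vanishing for a source directly overhead.
    az, el = math.radians(azimuth_deg), math.radians(elevation_deg)
    return (3 * HEAD_RADIUS_M / SPEED_OF_SOUND) * math.sin(az) * math.cos(el)

def itd_change_on_rotation(elevation_deg, rotation_deg=30):
    # Source starts straight ahead (azimuth 0); turning the head by
    # rotation_deg moves the source to that azimuth relative to the head.
    return abs(itd_seconds(rotation_deg, elevation_deg) - itd_seconds(0, elevation_deg))

eye_level = itd_change_on_rotation(0)    # head turn changes the ITD markedly
overhead = itd_change_on_rotation(90)    # overhead source: essentially no change
```

This reproduces the observation in the text: a head turn produces a large ITD change for an eye-level source and none for an overhead one, with intermediate elevations in between.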
the synthesized elevation. The fact that the sound sources objectively remained at eye level prevented monaural cues from specifying the elevation, showing that it was the dynamic change in the binaural cues during head movement that allowed the sound to be correctly localized in the vertical dimension. The head movements need not be actively produced; accurate vertical localization occurred in a similar setup when the head rotation was produced passively, by seating the blindfolded subject in a rotating chair. As long as the dynamic changes in binaural cues accompanied a perceived head rotation, the synthesized elevation was perceived.
IIDs, in what is called the cone model effect. However, human ears can still distinguish between these sources. Moreover, in natural hearing a single ear, without any ITD or IID cues, can distinguish between them with high accuracy. Because of these shortcomings of the duplex theory, researchers proposed the pinna filtering effect theory. The shape of the human pinna is concave, with complex folds, and is asymmetrical both horizontally and vertically. Reflected and direct waves generate a frequency spectrum on the eardrum that depends on the location of the acoustic source. The auditory nerves then localize the source using this frequency spectrum.
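The interference of a direct wave with a single pinna reflection can be sketched as a comb filter: the eardrum sees |1 + g·e^(−j2πfτ)|, with notches where the reflection arrives in antiphase. The 50 μs delay below is a hypothetical illustration value, not a measured pinna parameter:

```python
import math

def eardrum_magnitude(freq_hz, reflection_delay_s, reflection_gain=1.0):
    # Magnitude of H(f) = 1 + g*exp(-j*2*pi*f*tau): direct wave plus one
    # pinna reflection. Notches fall at odd multiples of 1/(2*tau), so the
    # notch pattern encodes the reflection geometry (and hence direction).
    phase = 2 * math.pi * freq_hz * reflection_delay_s
    real = 1 + reflection_gain * math.cos(phase)
    imag = -reflection_gain * math.sin(phase)
    return math.hypot(real, imag)

TAU = 50e-6              # hypothetical 50 microsecond reflection delay
notch = 1 / (2 * TAU)    # first spectral notch (10 kHz for this tau)
peak = 1 / TAU           # first constructive peak (20 kHz for this tau)
```

Because the reflection delay changes with source direction, the notch frequencies shift with direction, which is the spectral pattern the text says the auditory nerves exploit.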
critical bands and if the perceived direction is stable, this attack is in all probability caused by the direct sound of a sound source, which is entering newly or which is changing its signal characteristics. This short time period is used by the auditory system for directional and loudness analysis of this sound. When reflections arrive a little bit later, they do not enhance the loudness inside the critical bands in such a strong way, but the directional cues become unstable, because there is a mix of sound of several reflection directions. As a result, no new directional analysis is triggered by the auditory system.
(interaural phase differences) for lower frequencies and evaluation of interaural level differences for higher frequencies. The evaluation of interaural phase differences is useful as long as it gives unambiguous results. This is the case as long as the ear distance is smaller than half the wavelength (at most one wavelength) of the sound waves. For animals with a larger head than humans, the evaluation range for interaural phase differences is shifted towards lower frequencies; for animals with a smaller head, this range is shifted towards higher frequencies.

, or otherwise referred to as binaural recordings. It has been shown that human subjects can monaurally localize high-frequency sound but not low-frequency sound. Binaural localization, however, was possible with lower frequencies. This is likely due to the pinna being small enough to interact only with sound waves of high frequency. It seems that people can only accurately localize the elevation of sounds that are complex and include frequencies above 7,000 Hz, and a pinna must be present.

Haas measured that even a 1 millisecond difference in timing between the original sound and the reflected sound increases the perceived spaciousness, allowing the brain to discern the true location of the original sound. The nervous system combines all early reflections into a single perceptual whole, allowing the brain to process multiple different sounds at once. The nervous system will combine reflections that are within about 35 milliseconds of each other and that have a similar intensity.

Because they hunt at night, they must rely on non-visual senses. Experiments by Roger Payne have shown that owls are sensitive to the sounds made by their prey, not to their heat or smell. In fact, the sound cues are both necessary and sufficient for localization of mice from a distant location where they are perched.
For this to work, the owls must be able to accurately localize both the azimuth and the elevation of the sound source.
different sides of the head, and thus have different coordinates in space. As shown in the duplex theory figure, since the distances between the acoustic source and the ears are different, there are time and intensity differences between the sound signals at the two ears. These kinds of differences are called the interaural time difference (ITD) and the interaural intensity difference (IID), respectively.
, in which only the first of multiple identical sounds is used to determine the sounds' location (thus avoiding confusion caused by echoes), it cannot be entirely used to explain the response. Furthermore, a number of recent physiological observations made in the midbrain and brainstem of small mammals have shed considerable doubt on the validity of Jeffress's original ideas.
Since most animals have two ears, many of the effects of the human auditory system can also be found in other animals. Therefore, interaural time differences (interaural phase differences) and interaural level differences play a role for the hearing of many animals. But the influences on localization
To determine the time periods in which the direct sound prevails and which can be used for directional evaluation, the auditory system analyzes loudness changes in different critical bands as well as the stability of the perceived direction. If there is a strong attack of the loudness in several
Direct/reflection ratio: In enclosed rooms, two types of sound arrive at a listener: the direct sound arrives at the listener's ears without being reflected at a wall, while reflected sound has been reflected at least once at a wall before arriving at the listener. The ratio between direct sound
are highly individual, depending on the shape and size of the outer ear. If sound is presented through headphones, and has been recorded via another head with different-shaped outer ear surfaces, the directional patterns differ from the listener's own, and problems will appear when trying to evaluate
In 1907, Lord Rayleigh utilized tuning forks to generate monophonic excitation and studied the lateral sound localization theory on a human head model without auricles. He was the first to present a sound localization theory based on interaural cue differences, which is known as the duplex theory. Human ears are on
gene's role in the evolutionary trajectory of mammalian echolocation systems. This research underscores the adaptability and evolutionary significance of Prestin, offering valuable insights into the genetic foundations of sound localization in bats and dolphins, particularly within the sophisticated
Noteworthy is the emission of high-frequency echolocation calls by toothed whales and echolocating bats, showcasing diversity in shape, duration, and amplitude. However, it is their high-frequency hearing that becomes paramount, as it enables the reception and analysis of echoes bouncing off objects
to aid in detecting, identifying, localizing, and capturing prey. Dolphin sonar signals are well suited for localizing multiple, small targets in a three-dimensional aquatic environment by utilizing highly directional (3 dB beamwidth of about 10 deg), broadband (3 dB bandwidth typically of
artificially altered a sound's binaural cues during movements of the head. Although the sound was objectively placed at eye level, the dynamic changes to ITD and ILD as the head rotated were those that would be produced if the sound source had been elevated. In this situation, the sound was heard at
When the head is stationary, the binaural cues for lateral sound localization (interaural time difference and interaural level difference) do not give information about the location of a sound in the median plane. Identical ITDs and ILDs can be produced by sounds at eye level or at any elevation, as
Neurons sensitive to interaural level differences (ILDs) are excited by stimulation of one ear and inhibited by stimulation of the other ear, such that the response magnitude of the cell depends on the relative strengths of the two inputs, which in turn, depends on the sound intensities at the ears.
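The quantity these neurons encode, the interaural level difference, is conventionally expressed in decibels as the log-ratio of the sound levels at the two ears. A minimal sketch (sign convention — positive means right ear louder — is an assumption for illustration):

```python
import math

def ild_db(rms_left, rms_right):
    # Interaural level difference in decibels, from the RMS amplitude at
    # each ear. Positive values mean the right ear receives more energy.
    return 20 * math.log10(rms_right / rms_left)
```

For example, a source that reaches the right ear with twice the amplitude of the left yields an ILD of about 6 dB.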
For many mammals there are also pronounced structures in the pinna near the entry of the ear canal. As a consequence, direction-dependent resonances can appear, which could be used as an additional localization cue, similar to the localization in the median plane in the human auditory system. There
of these effects are dependent on head size, ear distance, ear position and the orientation of the ears. Smaller animals like insects use different techniques, as the separation of their ears is too small. For the process of animals emitting sound to improve localization, a biological form of
Qxpander. They use HRTF to simulate the received acoustic signals at the ears from different directions with common binary-channel stereo reproduction. Therefore, they can simulate reflected sound waves and improve subjective sense of space and envelopment. Since they are para-virtualization stereo
Sound spectrum: High frequencies are more quickly damped by the air than low frequencies. Therefore, a distant sound source sounds more muffled than a close one, because the high frequencies are attenuated. For sound with a known spectrum (e.g. speech) the distance can be estimated roughly with the
of the sound waves. So the auditory system can determine phase delays between both ears without confusion. Interaural level differences are very low in this frequency range, especially below about 200 Hz, so a precise evaluation of the input direction is nearly impossible on the basis of level
For sinusoidal signals presented on the horizontal plane, spatial resolution is highest for sounds coming from the median plane (directly in front of the listener) with about 1 degree MAA, and it deteriorates markedly when stimuli are moved to the side – e.g., the MAA is about 7 degrees for sounds
For frequencies above 1600 Hz the dimensions of the head are greater than the length of the sound waves. An unambiguous determination of the input direction based on interaural phase alone is not possible at these frequencies. However, the interaural level differences become larger, and these
For sound localization in the median plane (the elevation of the sound), two detectors positioned at different heights can also be used. In animals, however, rough elevation information is gained simply by tilting the head, provided that the sound lasts long enough to complete the movement.
ITDG: The Initial Time Delay Gap describes the time difference between arrival of the direct wave and first strong reflection at the listener. Nearby sources create a relatively large ITDG, with the first reflections having a longer path to take, possibly many times longer. When the source is far
At present, the main institutes that work on measuring HRTF database include CIPIC International Lab, MIT Media Lab, the Graduate School in Psychoacoustics at the University of Oldenburg, the Neurophysiology Lab at the University of Wisconsin–Madison and Ames Lab of NASA. Databases of HRIRs from
Duplex theory shows that ITD and IID play significant roles in sound localization, but they can only deal with lateral localization problems. For example, if two acoustic sources are placed symmetrically at the front and back of the right side of the human head, they will generate equal ITDs and
have been extensively studied. The auditory system uses several cues for sound source localization, including time difference and level difference (or intensity difference) between the ears, and spectral information. Other animals, such as birds and reptiles, also use them but they may use them
Since multichannel stereo systems require many reproduction channels, some researchers adopted HRTF simulation technologies to reduce the number of reproduction channels. They use only two speakers to simulate multiple speakers in a multichannel system. This process is called virtual
employed by bats and dolphins. Discovered just over a decade ago, Prestin encodes a protein located in the inner ear's hair cells, facilitating rapid contractions and expansions. This intricate mechanism operates akin to an antique phonograph horn, amplifying sound waves within the cochlea and
If the ears are located at the side of the head, interaural level differences appear for higher frequencies and can be evaluated for localization tasks. For animals with ears at the top of the head, no shadowing by the head will appear and therefore there will be much less interaural level
differences alone. As the frequency drops below 80 Hz it becomes difficult or impossible to use either time difference or level difference to determine a sound's lateral source, because the phase difference between the ears becomes too small for a directional evaluation.
From the duplex theory figure we can see that for source B1 or source B2, there will be a propagation delay between the two ears, which will generate the ITD. Simultaneously, the human head and ears may have a shadowing effect on high-frequency signals, which will generate the IID.
(1802–1875) did work on optics and color mixing, and also explored hearing. He invented a device he called a "microphone" that involved a metal plate over each ear, each connected to metal rods; he used this device to amplify sound. He also did experiments holding

(1746–1822) conducted and described experiments in which people tried to localize a sound using both ears, or one ear blocked with a finger. This work was not followed up on, and was only recovered after others had worked out how human sound localization works.
The human auditory system has only limited possibilities to determine the distance of a sound source. In the close-up range there are some indications for distance determination, such as extreme level differences (e.g. when whispering into one ear) or specific
The lowest frequency which can be localized depends on the ear distance. Animals with a greater ear distance can localize lower frequencies than humans can. For animals with a smaller ear distance the lowest localizable frequency is higher than for humans.
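The upper limit for unambiguous interaural phase evaluation follows from the half-wavelength condition stated elsewhere in this article: phase cues stay unambiguous up to roughly f = c / (2·d) for ear distance d. A small sketch (speed of sound assumed 343 m/s):

```python
SPEED_OF_SOUND = 343.0  # assumed speed of sound in air, m/s

def phase_ambiguity_limit_hz(ear_distance_m):
    # Interaural phase becomes ambiguous once the ear distance exceeds
    # half a wavelength, i.e. above f = c / (2 * d).
    return SPEED_OF_SOUND / (2 * ear_distance_m)

# With the human ear distance of 21.5 cm quoted in this article, the
# limit comes out near 800 Hz, matching the text's transition frequency.
human_limit = phase_ambiguity_limit_hz(0.215)
```

The same formula shows the shift described here: doubling the ear distance halves the usable frequency range, so large-headed animals evaluate phase at lower frequencies and small-headed animals at higher ones.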
754:(HRTF). The corresponding time domain expressions are called the Head-Related Impulse Response (HRIR). The HRTF is also described as the transfer function from the free field to a specific point in the ear canal. We usually recognize HRTFs as LTI systems: 1575:
This explains the innate behavior of cocking the head to one side when trying to localize a sound precisely. To get instantaneous localization in more than two dimensions from time-difference or amplitude-difference cues requires more than two detectors.
(IC), many ILD-sensitive neurons have response functions that decline steeply from maximum to zero spikes as a function of ILD. However, there are also many neurons with much more shallow response functions that do not decline to zero spikes.
The term 'binaural' literally signifies 'to hear with two ears', and was introduced in 1859 to signify the practice of listening to the same sound through both ears, or to two discrete sounds, one through each ear. It was not until 1916 that
This first detected direction from the direct sound is taken as the found sound source direction, until other strong loudness attacks, combined with stable directional information, indicate that a new directional analysis is possible. (see
distinctions in sonar mammals that likely contribute to their distinctive echolocation features. The confluence of evolutionary analyses and empirical findings provides robust evidence, marking a significant juncture in comprehending the
reproduction. Essentially, such an approach uses both the interaural difference principle and the pinna filtering effect theory. Unfortunately, this kind of approach cannot perfectly substitute for a traditional multichannel stereo system, such as
differently, and some also have localization cues which are absent in the human auditory system, such as the effects of ear movements. Animals with the ability to localize sound have a clear evolutionary advantage.
shift from threonine (Thr or T) in sonar mammals to asparagine (Asn or N) in nonsonar mammals. This specific alteration, subject to parallel evolution, emerges as a linchpin in the mammalian echolocation narrative.
Helmut Haas discovered that we can discern the sound source by using the earliest-arriving wavefront, even when additional reflections are up to 10 decibels louder than the original wavefront. This principle is known as the
Sound is the perceptual result of mechanical vibrations traveling through a medium such as air or water. Through the mechanisms of compression and rarefaction, sound waves travel through the air, bounce off the
$$ITD = \begin{cases} 3 \times \dfrac{r}{c} \times \sin\theta, & \text{if } f \leq 4000\ \text{Hz} \\ 2 \times \dfrac{r}{c} \times \sin\theta, & \text{if } f > 4000\ \text{Hz} \end{cases}$$
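The piecewise ITD formula can be evaluated directly; the head radius r and speed of sound c below are assumed round values for illustration, not values specified by this article:

```python
import math

HEAD_RADIUS_M = 0.0875   # r: assumed average head radius, meters
SPEED_OF_SOUND = 343.0   # c: assumed speed of sound in air, m/s

def itd_seconds(azimuth_deg, freq_hz):
    # Piecewise model from the text: the geometric factor is 3 for
    # frequencies at or below 4000 Hz and 2 above it.
    theta = math.radians(azimuth_deg)
    factor = 3.0 if freq_hz <= 4000 else 2.0
    return factor * (HEAD_RADIUS_M / SPEED_OF_SOUND) * math.sin(theta)
```

A source straight ahead (θ = 0°) gives zero ITD, while a source at θ = 90° gives the maximum, on the order of several hundred microseconds for a human-sized head.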
Localization can be described in terms of three-dimensional position: the azimuth or horizontal angle, the elevation or vertical angle, and the distance (for static sounds) or velocity (for moving sounds).
in their environment. A meticulous dissection of Prestin protein function in sonar-guided bats and bottlenose dolphins, juxtaposed with nonsonar mammals, sheds light on the intricacies of this process.
lengths. Some cells are more directly connected to one ear than the other, thus they are specific for a particular interaural time difference. This theory is equivalent to the mathematical procedure of
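The Jeffress-style delay-line scheme described above is commonly identified with cross-correlation: the best-matching internal delay is the lag that maximizes the correlation of the two ear signals. A minimal sketch with a synthetic 5-sample delay (NumPy assumed available):

```python
import numpy as np

def estimate_itd_samples(left, right):
    # Pick the lag that maximizes the cross-correlation of the two ear
    # signals -- the mathematical counterpart of a bank of delay-tuned
    # coincidence detectors.
    corr = np.correlate(left, right, mode="full")
    return int(np.argmax(corr)) - (len(right) - 1)

rng = np.random.default_rng(1)
sig = rng.standard_normal(2000)
delay = 5  # samples: the left ear hears the source 5 samples later
left = np.concatenate([np.zeros(delay), sig])
right = np.concatenate([sig, np.zeros(delay)])

lag = estimate_itd_samples(left, right)
```

At a 48 kHz sampling rate a 5-sample lag corresponds to about 104 μs, well within the physiological ITD range discussed in this article.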
1597:. The animal is too small for the time difference of sound arriving at the two ears to be calculated in the usual way, yet it can determine the direction of sound sources with exquisite precision. The 694: 585:
the left ear. These level differences are highly frequency-dependent and increase with increasing frequency. Extensive theoretical research demonstrates that the IID relates to the signal frequency
The auditory system can extract the sound of a desired sound source out of interfering noise. This allows the listener to concentrate on only one speaker if other speakers are also talking (the
). With the help of the cocktail party effect, sound from interfering directions is perceived as attenuated compared to the sound from the desired direction. The auditory system can increase the
https://kns.cnki.net/kcms2/article/abstract?v=C1uazonQNNh31hpdlsyEyXcqR2uafvd3NO5N-rwCbIvv4k-h-lQ2euw2Ja7xMXcwObpETefJWcYFa1zXJqT8ezXCQyp8UxeCVFCuTs07Lhqt4Qc6zy4aOw==&uniplatform=NZKPT
Localization accuracy is 1 degree for sources in front of the listener and 15 degrees for sources to the sides. Humans can discern interaural time differences of 10 microseconds or less.
231:, by the relative amplitude of high-frequency sounds (the shadow effect), and by the asymmetrical spectral reflections from various parts of our bodies, including torso, shoulders, and 1325:
for sound localization. Together with other direction-selective reflections at the head, shoulders and torso, they form the outer ear transfer functions. These patterns in the ear's
1317:, form direction-selective filters. Depending on the sound input direction, different filter resonances become active. These resonances implant direction-specific patterns into the 581:
Interaural intensity difference (IID) or interaural level difference (ILD) – Sound from the right side has a higher level at the right ear than at the left ear, because the
at the walls. The auditory system analyses only the direct sound, which is arriving first, for sound localization, but not the reflected sound, which is arriving later (
For frequencies below 800 Hz, the dimensions of the head (ear distance 21.5 cm, corresponding to an interaural time delay of 626 μs) are smaller than the half
Liu, Z., Qi, F. Y., Zhou, X., Ren, H. Q., & Shi, P. (2014). Parallel sites implicate functional convergence of the hearing gene prestin among echolocating mammals.
Musicant A D, Butler R A. The influence of pinnae-based spectral cues on sound localization. The Journal of the Acoustical Society of America, 1984, 75(4): 1195–1200.
system. That is because, when the listening zone is relatively large, simulation reproduction through HRTFs may cause inverted acoustic images at symmetric positions.
702:), for frequencies above 1500 Hz mainly IIDs are evaluated. Between 1000 Hz and 1500 Hz there is a transition zone, where both mechanisms play a role. 1191: 1164: 1137: 1110: 1083: 1251: 603: 430: 410: 370: 1609:
and sound level differences were available to the animal's head. Efforts to build directional microphones based on the coupled-eardrum structure are underway.
directions in the median plane with these foreign ears. As a consequence, front–back permutations or inside-the-head-localization can appear when listening to
Rayleigh L. XII. On our perception of sound direction. The London, Edinburgh, and Dublin Philosophical Magazine and Journal of Science, 1907, 13(74): 214-232.
qualities of the sound, helping the brain orient where the sound emanated from. These minute differences between the two ears are known as interaural cues.
If the ears are located at the side of the head, similar lateral localization cues as for the human auditory system can be used. This means: evaluation of
level differences are evaluated by the auditory system. Also, delays between the ears can still be detected via some combination of phase differences and
341:(ITD) – Sound from the right side reaches the right ear earlier than the left ear. The auditory system evaluates interaural time differences from: (a) 2203:
Batteau D W. The role of the pinna in human localization. Proceedings of the Royal Society of London B: Biological Sciences, 1967, 168(1011): 158-180.
differences, which could be evaluated. Many of these animals can move their ears, and these ear movements can be used as a lateral localization cue.
Lower frequencies, with longer wavelengths, diffract the sound around the head, forcing the brain to focus only on the phasing cues from the source.
1754:(1842–1919) would do these same experiments and come to the results, without knowing Venturi had first done them, almost seventy-five years later. 2419:"Contribution of the Dorsal Nucleus of the Lateral Lemniscus to Binaural Responses in the Inferior Colliculus of the Rat: Interaural Time Delays" 1989:
Thompson, Daniel M. Understanding Audio: Getting the Most out of Your Project or Professional Recording Studio. Boston, MA: Berklee, 2005. Print.
Loudness: Distant sound sources have a lower loudness than close ones. This aspect can be evaluated especially for well-known sound sources.
Ho CC, Narins PM (Apr 2006). "Directionality of the pressure-difference receiver ears in the northern leopard frog, Rana pipiens pipiens".
1803:(1781–1826); the stethophone had two separate "pickups", allowing the user to hear and compare sounds derived from two discrete locations. 1776:
also attempted to compare and contrast what would become known as binaural hearing with the principles of binocular integration generally.
238:
The distance cues are the loss of amplitude, the loss of high frequencies, and the ratio of the direct signal to the reverberated signal.
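The amplitude-loss cue listed above follows the free-field inverse-distance law, under which every 6 dB of level drop corresponds roughly to a doubling of distance. A minimal sketch of inverting that cue (an idealized assumption — real rooms add reverberant energy that flattens the law):

```python
def distance_from_level_drop(reference_distance_m, level_drop_db):
    # Free-field inverse-distance law: level falls by 20*log10(d2/d1) dB,
    # so a measured drop can be inverted to a relative distance estimate.
    return reference_distance_m * 10 ** (level_drop_db / 20)

# A source 6 dB quieter than it was at 1 m is estimated at about 2 m.
estimate = distance_from_level_drop(1.0, 6.020599913279624)
```

The other two cues (high-frequency loss and direct-to-reverberant ratio) would refine such an estimate; in practice the auditory system combines all three.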
2895: 1682:. This adaptation proves instrumental for dolphins navigating through turbid waters and bats seeking sustenance in nocturnal darkness. 2550:
Miles RN, Robert D, Hoy RR (December 1995). "Mechanically coupled ears for directional hearing in the parasitoid fly Ormia ochracea".
55: 1605:
strategy. Ho showed that the coupled-eardrum system in frogs can produce increased interaural vibration disparities when only small
2910: 1735:
Later, it would become apparent that binaural hearing, whether dichotic or diotic, is the means by which sound localization occurs.
1449:). So sound localization remains possible even in an echoic environment. This echo cancellation occurs in the Dorsal Nucleus of the 2885: 30:
This article is about the biological process of sound localization. For sound localization via mechanical or electrical means, see
of opposite ears are directly connected mechanically, allowing resolution of sub-microsecond time differences and requiring a new
In enclosed rooms not only the direct sound from a sound source is arriving at the listener's ears, but also sound which has been
2963: 1751: 2222: 1837: 628: 1738:
Scientific consideration of binaural hearing began before the phenomenon was so named, with speculations published in 1792 by
2991: 2821: 1827: 1139:
is the amplitude of the sound pressure at the center of the head coordinate system when the listener does not exist. In general, an HRTF's
Wade, NJ; Ono, H (2005). "From dichoptic to dichotic: historical contrasts between binocular vision and binaural hearing".
1049:{\displaystyle H_{R}=H_{R}(r,\theta ,\varphi ,\omega ,\alpha )=P_{R}(r,\theta ,\varphi ,\omega ,\alpha )/P_{0}(r,\omega ),} 898:{\displaystyle H_{L}=H_{L}(r,\theta ,\varphi ,\omega ,\alpha )=P_{L}(r,\theta ,\varphi ,\omega ,\alpha )/P_{0}(r,\omega )} 2930: 2900: 1678:, unveiling its critical role in the ultrasonic hearing range essential for animal sonar, specifically in the context of 578:. In above closed form, we assumed that the 0 degree is in the right ahead of the head and counter-clockwise is positive. 2925: 2849: 2049: 1764:
to both ears at the same time, or separately, trying to work out how sense of hearing works, that he published in 1827.
3008: 2915: 1969:
Blauert, J.: Spatial hearing: the psychophysics of human sound localization; MIT Press; Cambridge, Massachusetts (1983)
2896:
Interaural Intensity Difference Processing in Auditory Midbrain Neurons: Effects of a Transient Early Inhibitory Input
2593:
Robert D, Miles RN, Hoy RR (1996). "Directional hearing by mechanical coupling in the parasitoid fly Ormia ochracea".
2478:
Zhao R. Study of Auditory Transmission Sound Localization System, University of Science and Technology of China, 2006.
1689:
Evolutionary analyses of Prestin protein sequences brought forth a compelling observation – a singular
2076: 116:
and concha of the exterior ear, and enter the ear canal. In mammals, the sound waves vibrate the tympanic membrane (
1391:
in acoustical perception. For a moving listener, nearby sound sources pass by faster than distant sound sources.
17: 2162:
Wallach, Hans (October 1940). "The role of head movements and vestibular and visual cues in sound localization".
1787:
in such a way as to enable sound localization and direction was considerably advanced after the invention of the
220:
source. The brain utilizes subtle differences in intensity, spectral, and timing cues to localize sound sources.
3275: 3207: 751: 2886:
auditoryneuroscience.com: Collection of multimedia files and flash demonstrations related to spatial hearing
2636:
Mason AC, Oshinsky ML, Hoy RR (Apr 2001). "Hyperacute directional hearing in a microscale auditory system".
2956: 1728:, distinguished between dichotic listening, which refers to the stimulation of each ear with a different 719: 346: 323: 3317: 3212: 3197: 2114:
Wallach, H; Newman, E.B.; Rosenzweig, M.R. (July 1949). "The precedence effect in sound localization".
1618: 1546: 338: 306: 228: 3013: 1747: 1112:
represent the amplitude of the sound pressure at the entrances to the left and right ear canals, and
456: 3217: 1434:, which means that interfering sound is perceived to be attenuated to half (or less) of its actual 165: 3070: 60: 2095: 3302: 3263: 3227: 2986: 2949: 1216: 3043: 1773: 1739: 1729: 1427: 1423: 1276: 1256: 1196: 608: 375: 35: 1498:
The representatives of this kind of system are SRS Audio Sandbox, Spatializer Audio Lab and
Robert A. Butler; Richard A. Humanski (1992). "Localization of sound in the vertical plane with and without high-frequency spectral cues". Perception & Psychophysics. 51 (2): 182–186.
Research on "Non-line-of-sight (NLOS) Localisation for Indoor Environments" by CMR at UNSW
, and diotic listening, the simultaneous stimulation of both ears with the same stimulus.
For a directional analysis the signals inside the critical band are analyzed together.
Schnupp J., Nelken I. & King A.J., 2011. Auditory Neuroscience. MIT Press, chapter 5.
3297: 3150: 3080: 3023: 2794: 2704: 2661: 2618: 2525: 2488:
Díaz-García, Lara; Reid, Andrew; Jackson-Camargo, Joseph; Windmill, James F.C. (2022). "Towards a bio-inspired acoustic sensor: Achroia grisella's ear". IEEE Sensors Journal. 22 (18): 17746–17753.
Columbia College, Chicago - Audio Arts & Acoustics acousticslab.org/psychoacoustics
1832: 1757: 1517: 1326: 1318: 1236: 588: 415: 395: 355: 184:
in the superior olive which accept innervation from each ear with different connecting
Benade, Arthur H. Fundamentals of Musical Acoustics. New York: Oxford UP, 1976. Print.
Level Difference: Very close sound sources cause different sound levels at the two ears.
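This near-field level cue can be sketched under the bare 1/r spherical-spreading assumption (head shadowing ignored; the distances below are made-up illustrations, not from the source): each doubling of distance drops the level by about 6 dB, so a fixed ear-to-ear path difference matters much more for a close source than for a distant one.

```python
import math

def level_difference_db(near_ear_dist_m, far_ear_dist_m):
    """Interaural level difference (dB) predicted by 1/r spherical
    spreading alone, ignoring head shadowing."""
    return 20 * math.log10(far_ear_dist_m / near_ear_dist_m)

# Assuming roughly 0.2 m of extra path to the far ear: a source 0.25 m
# from the near ear produces a clearly audible interaural level gap,
# while a source 3 m away with the same geometry produces almost none.
close = level_difference_db(0.25, 0.45)    # about 5.1 dB
distant = level_difference_db(3.00, 3.20)  # about 0.6 dB
```

The contrast between the two values is why level differences signal distance mainly for sources very close to the head.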
3307: 3237: 3232: 3145: 2864: 2778: 2748: 2708: 2688: 2665: 2645: 2602: 2567: 2509: 2446: 2430: 2358: 2302: 2253: 2171: 2123: 1942: 1924: 1876: 1743: 750:
These spectral cues generated by the pinna filtering effect can be presented as a
Depending on where the source is located, the head acts as a barrier that changes the
Zhou X. Virtual reality technique. Telecommunications Science, 1996, 12(7): 46–.
and reflected sound can give an indication about the distance of the sound source.
3202: 3105: 3100: 2390: 1842: 1822: 1462: 1368:
The auditory system uses these clues to estimate the distance to a sound source:
1362: 1322: 1310: 1297:
humans with normal and impaired hearing and from animals are publicly available.
273: 232: 145: 113: 99: 1800: 3312: 3222: 3095: 3090: 3038: 2489: 1929: 1769: 1666:
gene has emerged as a pivotal player, particularly in the arena of
1590: 1585: 149: 137: 2692: 2513: 2069:
Computational auditory scene analysis: principles, algorithms and applications
1697:
Subsequent experiments lent credence to this hypothesis, identifying four key
1387:
Movement: Similar to the visual system, there is also the phenomenon of motion
3291: 3085: 3018: 2972: 2521: 2442: 2053: 1938: 1795:
in 1859, who coined the term 'binaural'. Alison based the stethophone on the
1631: 1602: 1479: 1408: 1404: 2752: 2003:
Roads, Curtis. The Computer Music Tutorial. Cambridge, MA: MIT, 2007. Print.
3242: 3192: 3130: 2890: 2790: 2700: 2657: 2143: 1956: 1888: 1725: 1606: 1530: 1339: 2614: 2579: 2460: 2314: 2267: 3166: 3135: 3115: 1796: 1788: 1761: 1717: 1646: 1628: 1384:
away, the direct and the reflected sound waves have similar path lengths.
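This distance cue can be illustrated with a simplified image-source sketch (the geometry is an invented example, not from the source): with source and listener at floor level under a flat ceiling of height h, the first ceiling reflection travels √(d² + (2h)²), so the direct and reflected path lengths, and hence their levels, converge as the source distance d grows.

```python
import math

def path_lengths(source_dist_m, ceiling_height_m):
    """Direct path and first ceiling-reflection path (image-source model)
    for a source and listener at floor level under a flat ceiling."""
    direct = source_dist_m
    reflected = math.sqrt(source_dist_m**2 + (2 * ceiling_height_m)**2)
    return direct, reflected

def level_gap_db(direct, reflected):
    # Level difference between direct and reflected wave from 1/r spreading.
    return 20 * math.log10(reflected / direct)

near_d, near_r = path_lengths(1.0, 1.25)    # nearby source: large gap (~8.6 dB)
far_d, far_r = path_lengths(10.0, 1.25)     # distant source: almost no gap
```

A large direct-to-reflected level gap thus indicates a nearby source, while similar levels indicate a distant one.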
742: 699: 582: 342: 257: 125: 91:
is a listener's ability to identify the location or origin of a detected
2283:"Factors That Influence the Localization of Sound in the Vertical Plane" 1403:
Sound processing of the human auditory system is performed in so-called
3140: 2606: 2258: 2241: 2135: 1721: 1698: 1690: 1478:
This kind of sound localization technique provides a true virtual
1412: 710: 161: 133: 121: 2868: 2723:
The Living Bird, First Annual of the Cornell Laboratory of Ornithology
2487: 2362: 2306: 2649: 2571: 2175: 1880: 1624: 1485: 1416: 1314: 1306: 352:
Theory and experiments show that ITD relates to the signal frequency
169: 2906:
HearCom:Hearing in the Communication Society, an EU research project
2127: 1911:
Carlini, Alessandro; Bordeau, Camille; Ambard, Maxime (2024-07-10).
1674:
In 2014, Liu and colleagues investigated the evolutionary adaptations of
3003: 2239: 2113: 1780: 1642: 1435: 1388: 272:
To determine the lateral input direction (left, front, right), the
216:
Sound localization is the process of determining the location of a
117: 2782: 2280: 1560: 193:. However, because Jeffress's theory is unable to account for the 2721:
Payne, Roger S., 1962. How the Barn Owl Locates Prey by Hearing.
2398:
Proceedings of the Royal Society of London, B Biological Sciences
1703: 1675: 1663: 1566:
are additional localization cues which are also used by animals.
1431: 1058:
where L and R represent the left ear and right ear respectively,
141: 129: 2941: 2490:"Towards a bio-inspired acoustic sensor: Achroia grisella's ear" 1365:(the visible part of the ear) resonances in the close-up range. 164:, interaural time differences are known to be calculated in the 46: 2997: 2331:, 3rd edition, Holt Rinehart & Winston, 1971, pp. 267–268. 1499: 698:
For frequencies below 1000 Hz, mainly ITDs are evaluated (
242: 181: 1867:
Jeffress L.A. (1948). "A place theory of sound localization".
, the distance between the source and the center of the head
is segmented into 24 critical bands, each with a width of 1
Thurlow, W.R. "Audition" in Kling, J.W. & Riggs, L.A.,
1540: 1506: 564: 185: 1612: 2814:
Sounds of our times: two hundred years of acoustics
"Auditory localization: a comprehensive practical review"
in sound localization experiments because of its unique
1468: 1350:
showed the pinna also enhances horizontal localization.
1493: 730: 1473: 326:(ILD) between left ear (left) and right ear (right). 1279: 1259: 1239: 1219: 1199: 1172: 1145: 1118: 1091: 1064: 910: 762: 631: 611: 591: 438: 418: 398: 378: 358: 309:(ITD) between left ear (top) and right ear (bottom). 2905: 1910: 124:
to vibrate, which then sends the energy through the
2850:"Binaural Hearing—Before and After the Stethophone" 1869:
Journal of Comparative and Physiological Psychology
98:The sound localization mechanisms of the mammalian 1662:In the realm of mammalian sound localization, the 1285: 1265: 1245: 1225: 1205: 1185: 1158: 1131: 1104: 1077: 1048: 897: 688: 617: 597: 570: 424: 404: 384: 364: 2891:Collection of references about sound localization 3289: 2635: 605:and the angular position of the acoustic source 2848:Wade, Nicholas J.; Deutsch, Diana (July 2008). 1561:In the median plane (front, above, back, below) 1353: 2901:Online learning center - Hearing and Listening 2592: 2549: 2240:Robert A. Butler; Richard A. Humanski (1992). 1671:elevating the overall sensitivity of hearing. 132:where it is changed into a chemical signal by 106: 2957: 2391:"The role of the pinna in human localization" 2281:Roffler Suzanne K.; Butler Robert A. (1968). 1866: 2552:Journal of the Acoustical Society of America 2417:Kidd, Sean A.; Kelly, Jack B. (1996-11-15). 2343:Journal of the Acoustical Society of America 2341:Wallach, H (1939). "On sound localization". 2287:Journal of the Acoustical Society of America 229:difference in arrival times between the ears 2847: 2843: 2841: 2839: 2837: 2835: 2833: 2764: 2762: 2760: 1321:of the ears, which can be evaluated by the 2964: 2950: 1985: 1983: 1981: 1979: 1977: 1975: 1658:The role of Prestin in sound localization: 227:The azimuth of a sound is signaled by the 2450: 2416: 2257: 2157: 2155: 2153: 1946: 1928: 1273:and the equivalent dimension of the head 1193:are functions of source angular position 725: 2830: 2757: 2678: 2011: 2009: 1541:Lateral information (left, ahead, right) 1507:Multichannel stereo virtual reproduction 1484: 741: 729: 286: 211: 71:of all important aspects of the article. 2931:An introduction to acoustic beamforming 2768: 2388: 2340: 2161: 1999: 1997: 1995: 1972: 1742:(1757–1817) based on his research into 1613:Bi-coordinate sound localization (owls) 14: 3290: 2926:An introduction to acoustic holography 2389:Batteau, Dwight Wayne (January 1964). 
2150: 2109: 2107: 2084:originating at 75 degrees to the side. 1838:Perceptual-based 3D sound localization 1579:Localization with coupled ears (flies) 204:In the auditory midbrain nucleus, the 155: 67:Please consider expanding the lead to 2945: 2916:An introduction to sound localization 2811: 2805: 2735: 2733: 2731: 2474: 2472: 2470: 2369: 2006: 1828:Coincidence detection in neurobiology 1779:Understanding how the differences in 1469:Specific techniques with applications 625:. The function of IID is given by: 432:, the function of ITD is given by: 3270: 2182: 2096:Auditory localization - Introduction 2047: 2034: 2032: 2030: 1992: 1494:3D para-virtualization stereo system 1398: 40: 2681:Journal of Comparative Physiology A 2595:Journal of Comparative Physiology A 2382: 2104: 2067:DeLiang Wang; Guy J. Brown (2006). 1474:Auditory transmission stereo system 24: 2728: 2467: 2435:10.1523/JNEUROSCI.16-22-07390.1996 2164:Journal of Experimental Psychology 120:), causing the three bones of the 27:Biological sound detection process 25: 3329: 2971: 2879: 2027: 3269: 3258: 3257: 1801:René Théophile Hyacinthe Laennec 1783:between two ears contributes to 316: 299: 267: 45: 3014:Central pattern generator (CPG) 2741:Molecular biology and evolution 2715: 2672: 2629: 2586: 2543: 2481: 2410: 2334: 2321: 2274: 2233: 2215: 2206: 2197: 2189:"Acoustics: The Ears Have It". 2089: 2060: 2041: 1569: 1489:Sound localization with manikin 148:fibers that travel through the 59:may be too short to adequately 3208:Frog hearing and communication 2246:Perception & Psychophysics 2116:American Journal of Psychology 2018: 1963: 1904: 1895: 1860: 1040: 1028: 1010: 980: 964: 934: 892: 880: 862: 832: 816: 786: 752:head-related transfer function 665: 650: 69:provide an accessible overview 13: 1: 1853: 1799:, which had been invented by 1309:, i.e. the structures of the 1300: 412:and the acoustic velocity is 176:, this calculation relies on 1380:help of the perceived sound. 
1354:Distance of the sound source 260:, a specific version of the 7: 2423:The Journal of Neuroscience 1806: 1637: 1547:interaural time differences 1447:law of the first wave front 345:at low frequencies and (b) 324:Interaural level difference 107:How sound reaches the brain 95:in direction and distance. 10: 3334: 3213:Infrared sensing in snakes 3198:Jamming avoidance response 1930:10.3389/fpsyg.2024.1408073 1710: 1619:Sound localization in owls 1616: 1523: 339:Interaural time difference 307:Interaural time difference 29: 3253: 3180: 3159: 3063: 2979: 2812:Beyer, Robert T. (1999). 2693:10.1007/s00359-005-0080-7 2514:10.1109/JSEN.2022.3197841 2223:"The CIPIC HRTF Database" 2193:. 1961-12-04. p. 80. 1748:Giovanni Battista Venturi 3218:Caridoid escape reaction 1226:{\displaystyle \varphi } 166:superior olivary nucleus 3071:Theodore Holmes Bullock 2329:Experimental Psychology 1917:Frontiers in Psychology 1793:Somerville Scott Alison 1707:realm of echolocation. 1583:The tiny parasitic fly 1286:{\displaystyle \alpha } 1266:{\displaystyle \omega } 1253:, the angular velocity 1206:{\displaystyle \theta } 618:{\displaystyle \theta } 385:{\displaystyle \theta } 276:analyzes the following 3228:Surface wave detection 2816:. New York: Springer. 2101:, accessed 16 May 2021 2071:. Wiley interscience. 
1720:(1848–1936), a German 1490: 1287: 1267: 1247: 1227: 1207: 1187: 1160: 1133: 1106: 1079: 1050: 899: 747: 735: 726:Pinna filtering effect 690: 619: 599: 572: 426: 406: 386: 366: 292: 3044:Anti-Hebbian learning 2935:Link to reference 8: 2753:10.1093/molbev/msu194 2050:"Auditory Perception" 1774:William Charles Wells 1740:William Charles Wells 1488: 1428:signal-to-noise ratio 1424:cocktail party effect 1332:dummy head recordings 1288: 1268: 1248: 1228: 1208: 1188: 1186:{\displaystyle H_{R}} 1161: 1159:{\displaystyle H_{L}} 1134: 1132:{\displaystyle P_{0}} 1107: 1105:{\displaystyle P_{R}} 1080: 1078:{\displaystyle P_{L}} 1051: 900: 745: 733: 691: 620: 600: 573: 427: 407: 392:, the head radius is 387: 367: 290: 212:Human auditory system 36:3D sound localization 3121:Bernhard Hassenstein 3054:Ultrasound avoidance 3029:Fixed action pattern 2992:Coincidence detector 2494:IEEE Sensors Journal 2379:1961-12-04, pp.80-81 2375:"The Ears Have It," 1848:Spatial hearing loss 1766:Ernst Heinrich Weber 1277: 1257: 1237: 1217: 1197: 1170: 1143: 1116: 1089: 1062: 908: 760: 629: 609: 589: 436: 416: 396: 376: 356: 349:at high frequencies. 280:signal information: 3188:Animal echolocation 3126:Werner E. Reichardt 3076:Walter Heiligenberg 2564:1995ASAJ...98.3059M 2506:2022ISenJ..2217746D 2500:(18): 17746–17753. 
2355:1939ASAJ...10..270W 2299:1968ASAJ...43.1255R 1818:Animal echolocation 1785:auditory processing 1535:animal echolocation 1327:frequency responses 1319:frequency responses 206:inferior colliculus 156:Neural interactions 3151:Fernando Nottebohm 3049:Sound localization 3024:Lateral inhibition 2607:10.1007/BF00193432 2259:10.3758/bf03212242 1833:Human echolocation 1758:Charles Wheatstone 1599:tympanic membranes 1518:7.1 surround sound 1491: 1283: 1263: 1243: 1223: 1213:, elevation angle 1203: 1183: 1156: 1129: 1102: 1075: 1046: 895: 748: 736: 686: 615: 595: 568: 563: 422: 402: 382: 362: 293: 89:Sound localization 3318:Spatial cognition 3285: 3284: 3172:Slice preparation 3034:Krogh's Principle 3009:Feature detection 2869:10.1121/1.2994724 2823:978-0-387-98435-3 2429:(22): 7390–7397. 2363:10.1121/1.1915985 2307:10.1121/1.1910976 1813:Acoustic location 1451:Lateral Lemniscus 1430:by up to 15  1399:Signal processing 1313:and the external 1246:{\displaystyle r} 598:{\displaystyle f} 559: 548: 526: 506: 495: 473: 425:{\displaystyle c} 405:{\displaystyle r} 365:{\displaystyle f} 262:precedence effect 245:, intensity, and 195:precedence effect 191:cross-correlation 86: 85: 32:acoustic location 16:(Redirected from 3325: 3273: 3272: 3261: 3260: 3238:Mechanoreception 3233:Electroreception 3146:Masakazu Konishi 3111:Jörg-Peter Ewert 2966: 2959: 2952: 2943: 2942: 2873: 2872: 2854: 2845: 2828: 2827: 2809: 2803: 2802: 2766: 2755: 2747:(9), 2415–2424. 2737: 2726: 2719: 2713: 2712: 2676: 2670: 2669: 2650:10.1038/35070564 2644:(6829): 686–90. 2633: 2627: 2626: 2590: 2584: 2583: 2572:10.1121/1.413830 2547: 2541: 2540: 2538: 2536: 2485: 2479: 2476: 2465: 2464: 2454: 2414: 2408: 2407: 2405: 2404: 2395: 2386: 2380: 2373: 2367: 2366: 2338: 2332: 2325: 2319: 2318: 2293:(6): 1255–1259. 2278: 2272: 2271: 2261: 2237: 2231: 2230: 2225:. 
Archived from 2219: 2213: 2210: 2204: 2201: 2195: 2194: 2186: 2180: 2179: 2176:10.1037/h0054629 2159: 2148: 2147: 2111: 2102: 2093: 2087: 2086: 2064: 2058: 2057: 2052:. Archived from 2045: 2039: 2036: 2025: 2022: 2016: 2013: 2004: 2001: 1990: 1987: 1970: 1967: 1961: 1960: 1950: 1932: 1908: 1902: 1899: 1893: 1892: 1881:10.1037/h0061495 1864: 1772:(1805–1849) and 1768:(1795–1878) and 1744:binocular vision 1292: 1290: 1289: 1284: 1272: 1270: 1269: 1264: 1252: 1250: 1249: 1244: 1232: 1230: 1229: 1224: 1212: 1210: 1209: 1204: 1192: 1190: 1189: 1184: 1182: 1181: 1165: 1163: 1162: 1157: 1155: 1154: 1138: 1136: 1135: 1130: 1128: 1127: 1111: 1109: 1108: 1103: 1101: 1100: 1084: 1082: 1081: 1076: 1074: 1073: 1055: 1053: 1052: 1047: 1027: 1026: 1017: 979: 978: 933: 932: 920: 919: 904: 902: 901: 896: 879: 878: 869: 831: 830: 785: 784: 772: 771: 695: 693: 692: 687: 673: 672: 660: 624: 622: 621: 616: 604: 602: 601: 596: 577: 575: 574: 569: 567: 566: 560: 557: 549: 546: 527: 519: 507: 504: 496: 493: 474: 466: 431: 429: 428: 423: 411: 409: 408: 403: 391: 389: 388: 383: 371: 369: 368: 363: 320: 303: 152:into the brain. 
81: 78: 72: 49: 41: 21: 18:Binaural hearing 3333: 3332: 3328: 3327: 3326: 3324: 3323: 3322: 3288: 3287: 3286: 3281: 3249: 3203:Vision in toads 3176: 3155: 3106:Erich von Holst 3101:Karl von Frisch 3059: 2975: 2970: 2882: 2877: 2876: 2857:Acoustics Today 2852: 2846: 2831: 2824: 2810: 2806: 2767: 2758: 2738: 2729: 2720: 2716: 2677: 2673: 2634: 2630: 2591: 2587: 2548: 2544: 2534: 2532: 2486: 2482: 2477: 2468: 2415: 2411: 2402: 2400: 2393: 2387: 2383: 2374: 2370: 2339: 2335: 2326: 2322: 2279: 2275: 2238: 2234: 2221: 2220: 2216: 2211: 2207: 2202: 2198: 2188: 2187: 2183: 2160: 2151: 2128:10.2307/1418275 2112: 2105: 2094: 2090: 2079: 2065: 2061: 2046: 2042: 2037: 2028: 2023: 2019: 2014: 2007: 2002: 1993: 1988: 1973: 1968: 1964: 1909: 1905: 1900: 1896: 1865: 1861: 1856: 1843:Psychoacoustics 1823:Binaural fusion 1809: 1713: 1640: 1621: 1615: 1581: 1572: 1563: 1543: 1526: 1509: 1496: 1476: 1471: 1463:Franssen effect 1401: 1356: 1323:auditory system 1303: 1278: 1275: 1274: 1258: 1255: 1254: 1238: 1235: 1234: 1218: 1215: 1214: 1198: 1195: 1194: 1177: 1173: 1171: 1168: 1167: 1150: 1146: 1144: 1141: 1140: 1123: 1119: 1117: 1114: 1113: 1096: 1092: 1090: 1087: 1086: 1069: 1065: 1063: 1060: 1059: 1022: 1018: 1013: 974: 970: 928: 924: 915: 911: 909: 906: 905: 874: 870: 865: 826: 822: 780: 776: 767: 763: 761: 758: 757: 728: 668: 664: 656: 630: 627: 626: 610: 607: 606: 590: 587: 586: 562: 561: 556: 545: 543: 518: 509: 508: 503: 492: 490: 465: 452: 451: 437: 434: 433: 417: 414: 413: 397: 394: 393: 377: 374: 373: 357: 354: 353: 331: 330: 329: 328: 327: 321: 312: 311: 310: 304: 274:auditory system 270: 214: 172:. 
According to 158: 146:spiral ganglion 109: 100:auditory system 82: 76: 73: 66: 54:This article's 50: 39: 28: 23: 22: 15: 12: 11: 5: 3331: 3321: 3320: 3315: 3310: 3305: 3300: 3283: 3282: 3280: 3279: 3267: 3254: 3251: 3250: 3248: 3247: 3246: 3245: 3235: 3230: 3225: 3223:Vocal learning 3220: 3215: 3210: 3205: 3200: 3195: 3190: 3184: 3182: 3178: 3177: 3175: 3174: 3169: 3163: 3161: 3157: 3156: 3154: 3153: 3148: 3143: 3138: 3133: 3128: 3123: 3118: 3113: 3108: 3103: 3098: 3096:Donald Kennedy 3093: 3091:Donald Griffin 3088: 3083: 3081:Niko Tinbergen 3078: 3073: 3067: 3065: 3061: 3060: 3058: 3057: 3051: 3046: 3041: 3039:Hebbian theory 3036: 3031: 3026: 3021: 3016: 3011: 3006: 3001: 2994: 2989: 2983: 2981: 2977: 2976: 2969: 2968: 2961: 2954: 2946: 2940: 2939: 2933: 2928: 2923: 2921:Sound and Room 2918: 2913: 2908: 2903: 2898: 2893: 2888: 2881: 2880:External links 2878: 2875: 2874: 2829: 2822: 2804: 2756: 2727: 2714: 2671: 2628: 2585: 2558:(6): 3059–70. 2542: 2480: 2466: 2409: 2381: 2368: 2349:(4): 270–274. 2333: 2320: 2273: 2252:(2): 182–186. 2232: 2229:on 2013-09-13. 2214: 2205: 2196: 2181: 2170:(4): 339–368. 2149: 2122:(3): 315–336. 2103: 2088: 2077: 2059: 2056:on 2010-04-10. 
2040: 2026: 2017: 2005: 1991: 1971: 1962: 1903: 1894: 1858: 1857: 1855: 1852: 1851: 1850: 1845: 1840: 1835: 1830: 1825: 1820: 1815: 1808: 1805: 1770:August Seebeck 1712: 1709: 1639: 1636: 1623:Most owls are 1617:Main article: 1614: 1611: 1591:model organism 1586:Ormia ochracea 1580: 1577: 1571: 1568: 1562: 1559: 1542: 1539: 1525: 1522: 1508: 1505: 1495: 1492: 1475: 1472: 1470: 1467: 1405:critical bands 1400: 1397: 1396: 1395: 1392: 1385: 1381: 1377: 1374: 1355: 1352: 1302: 1299: 1282: 1262: 1242: 1222: 1202: 1180: 1176: 1153: 1149: 1126: 1122: 1099: 1095: 1072: 1068: 1045: 1042: 1039: 1036: 1033: 1030: 1025: 1021: 1016: 1012: 1009: 1006: 1003: 1000: 997: 994: 991: 988: 985: 982: 977: 973: 969: 966: 963: 960: 957: 954: 951: 948: 945: 942: 939: 936: 931: 927: 923: 918: 914: 894: 891: 888: 885: 882: 877: 873: 868: 864: 861: 858: 855: 852: 849: 846: 843: 840: 837: 834: 829: 825: 821: 818: 815: 812: 809: 806: 803: 800: 797: 794: 791: 788: 783: 779: 775: 770: 766: 727: 724: 707: 706: 703: 696: 685: 682: 679: 676: 671: 667: 663: 659: 655: 652: 649: 646: 643: 640: 637: 634: 614: 594: 579: 565: 555: 552: 544: 542: 539: 536: 533: 530: 525: 522: 517: 514: 511: 510: 502: 499: 491: 489: 486: 483: 480: 477: 472: 469: 464: 461: 458: 457: 455: 450: 447: 444: 441: 421: 401: 381: 361: 350: 322: 315: 314: 313: 305: 298: 297: 296: 295: 294: 269: 266: 213: 210: 157: 154: 150:cochlear nerve 138:organ of Corti 108: 105: 84: 83: 63:the key points 53: 51: 44: 26: 9: 6: 4: 3: 2: 3330: 3319: 3316: 3314: 3311: 3309: 3306: 3304: 3303:Neuroethology 3301: 3299: 3296: 3295: 3293: 3278: 3277: 3268: 3266: 3265: 3256: 3255: 3252: 3244: 3241: 3240: 3239: 3236: 3234: 3231: 3229: 3226: 3224: 3221: 3219: 3216: 3214: 3211: 3209: 3206: 3204: 3201: 3199: 3196: 3194: 3191: 3189: 3186: 3185: 3183: 3179: 3173: 3170: 3168: 3165: 3164: 3162: 3158: 3152: 3149: 3147: 3144: 3142: 3139: 3137: 3134: 3132: 3129: 3127: 3124: 3122: 3119: 3117: 3114: 3112: 3109: 3107: 3104: 3102: 3099: 3097: 3094: 3092: 3089: 3087: 
3086:Konrad Lorenz 3084: 3082: 3079: 3077: 3074: 3072: 3069: 3068: 3066: 3062: 3055: 3052: 3050: 3047: 3045: 3042: 3040: 3037: 3035: 3032: 3030: 3027: 3025: 3022: 3020: 3019:NMDA receptor 3017: 3015: 3012: 3010: 3007: 3005: 3002: 3000: 2999: 2995: 2993: 2990: 2988: 2985: 2984: 2982: 2978: 2974: 2973:Neuroethology 2967: 2962: 2960: 2955: 2953: 2948: 2947: 2944: 2938: 2934: 2932: 2929: 2927: 2924: 2922: 2919: 2917: 2914: 2912: 2909: 2907: 2904: 2902: 2899: 2897: 2894: 2892: 2889: 2887: 2884: 2883: 2870: 2866: 2862: 2858: 2851: 2844: 2842: 2840: 2838: 2836: 2834: 2825: 2819: 2815: 2808: 2800: 2796: 2792: 2788: 2784: 2783:10.1068/p5327 2780: 2777:(6): 645–68. 2776: 2772: 2765: 2763: 2761: 2754: 2750: 2746: 2742: 2736: 2734: 2732: 2724: 2718: 2710: 2706: 2702: 2698: 2694: 2690: 2687:(4): 417–29. 2686: 2682: 2675: 2667: 2663: 2659: 2655: 2651: 2647: 2643: 2639: 2632: 2624: 2620: 2616: 2612: 2608: 2604: 2600: 2596: 2589: 2581: 2577: 2573: 2569: 2565: 2561: 2557: 2553: 2546: 2531: 2527: 2523: 2519: 2515: 2511: 2507: 2503: 2499: 2495: 2491: 2484: 2475: 2473: 2471: 2462: 2458: 2453: 2448: 2444: 2440: 2436: 2432: 2428: 2424: 2420: 2413: 2399: 2392: 2385: 2378: 2372: 2364: 2360: 2356: 2352: 2348: 2344: 2337: 2330: 2324: 2316: 2312: 2308: 2304: 2300: 2296: 2292: 2288: 2284: 2277: 2269: 2265: 2260: 2255: 2251: 2247: 2243: 2236: 2228: 2224: 2218: 2209: 2200: 2192: 2185: 2177: 2173: 2169: 2165: 2158: 2156: 2154: 2145: 2141: 2137: 2133: 2129: 2125: 2121: 2117: 2110: 2108: 2100: 2097: 2092: 2085: 2080: 2078:9780471741091 2074: 2070: 2063: 2055: 2051: 2044: 2035: 2033: 2031: 2021: 2012: 2010: 2000: 1998: 1996: 1986: 1984: 1982: 1980: 1978: 1976: 1966: 1958: 1954: 1949: 1944: 1940: 1936: 1931: 1926: 1922: 1918: 1914: 1907: 1898: 1890: 1886: 1882: 1878: 1874: 1870: 1863: 1859: 1849: 1846: 1844: 1841: 1839: 1836: 1834: 1831: 1829: 1826: 1824: 1821: 1819: 1816: 1814: 1811: 1810: 1804: 1802: 1798: 1794: 1790: 1786: 1782: 1781:sound signals 1777: 1775: 1771: 1767: 1763: 1759: 1755: 1753: 
1752:Lord Rayleigh 1749: 1745: 1741: 1736: 1733: 1731: 1727: 1723: 1719: 1708: 1705: 1700: 1695: 1692: 1687: 1683: 1681: 1677: 1672: 1669: 1665: 1660: 1659: 1655: 1652: 1648: 1644: 1635: 1633: 1632:birds of prey 1630: 1626: 1620: 1610: 1608: 1604: 1603:neural coding 1600: 1596: 1592: 1589:has become a 1588: 1587: 1576: 1567: 1558: 1554: 1550: 1548: 1538: 1536: 1532: 1521: 1519: 1515: 1504: 1501: 1487: 1483: 1481: 1480:stereo system 1466: 1464: 1458: 1454: 1452: 1448: 1444: 1439: 1437: 1433: 1429: 1425: 1420: 1418: 1414: 1410: 1409:hearing range 1406: 1393: 1390: 1386: 1382: 1378: 1375: 1371: 1370: 1369: 1366: 1364: 1358: 1351: 1349: 1346:In the 1960s 1344: 1341: 1335: 1333: 1328: 1324: 1320: 1316: 1312: 1308: 1298: 1294: 1280: 1260: 1240: 1220: 1200: 1178: 1174: 1151: 1147: 1124: 1120: 1097: 1093: 1070: 1066: 1056: 1043: 1037: 1034: 1031: 1023: 1019: 1014: 1007: 1004: 1001: 998: 995: 992: 989: 986: 983: 975: 971: 967: 961: 958: 955: 952: 949: 946: 943: 940: 937: 929: 925: 921: 916: 912: 889: 886: 883: 875: 871: 866: 859: 856: 853: 850: 847: 844: 841: 838: 835: 827: 823: 819: 813: 810: 807: 804: 801: 798: 795: 792: 789: 781: 777: 773: 768: 764: 755: 753: 744: 740: 732: 723: 721: 715: 712: 704: 701: 697: 683: 680: 677: 674: 669: 661: 657: 653: 647: 644: 641: 638: 635: 632: 612: 592: 584: 580: 553: 550: 540: 537: 534: 531: 528: 523: 520: 515: 512: 500: 497: 487: 484: 481: 478: 475: 470: 467: 462: 459: 453: 448: 445: 442: 439: 419: 399: 379: 359: 351: 348: 344: 340: 337: 336: 335: 325: 319: 308: 302: 291:Duplex theory 289: 285: 281: 279: 275: 268:Duplex theory 265: 263: 259: 253: 250: 248: 244: 239: 236: 234: 230: 225: 221: 219: 209: 207: 202: 198: 196: 192: 187: 183: 179: 175: 171: 167: 163: 153: 151: 147: 143: 139: 135: 131: 128:and into the 127: 123: 119: 115: 104: 101: 96: 94: 90: 80: 70: 64: 62: 57: 52: 48: 43: 42: 37: 33: 19: 3274: 3262: 3243:Lateral line 3193:Waggle dance 3131:Eric Knudsen 3048: 2996: 2863:(3): 16–27. 
2860: 2856: 2813: 2807: 2774: 2770: 2744: 2740: 2722: 2717: 2684: 2680: 2674: 2641: 2637: 2631: 2601:(1): 29–44. 2598: 2594: 2588: 2555: 2551: 2545: 2535:12 September 2533:. Retrieved 2497: 2493: 2483: 2426: 2422: 2412: 2401:. Retrieved 2397: 2384: 2376: 2371: 2346: 2342: 2336: 2328: 2323: 2290: 2286: 2276: 2249: 2245: 2235: 2227:the original 2217: 2208: 2199: 2190: 2184: 2167: 2163: 2119: 2115: 2098: 2091: 2082: 2068: 2062: 2054:the original 2043: 2020: 1965: 1920: 1916: 1906: 1897: 1875:(1): 35–39. 1872: 1868: 1862: 1778: 1762:tuning forks 1756: 1737: 1734: 1726:psychologist 1714: 1696: 1688: 1684: 1680:echolocation 1673: 1668:echolocation 1661: 1657: 1656: 1651:echolocation 1641: 1622: 1607:arrival time 1584: 1582: 1573: 1570:Head tilting 1564: 1555: 1551: 1544: 1531:active sonar 1527: 1510: 1497: 1477: 1459: 1455: 1440: 1421: 1402: 1367: 1359: 1357: 1345: 1340:Hans Wallach 1336: 1304: 1295: 1057: 756: 749: 737: 720:group delays 716: 708: 700:phase delays 583:head shadows 558: 4000Hz 505:4000Hz  347:group delays 343:Phase delays 332: 282: 271: 254: 251: 240: 237: 226: 222: 215: 203: 199: 159: 110: 97: 88: 87: 74: 58: 56:lead section 3167:Patch clamp 3136:Eric Kandel 3116:Franz Huber 2987:Feedforward 1797:stethoscope 1789:stethophone 1722:philosopher 1718:Carl Stumpf 1647:odontocetes 1645:(and other 1629:crepuscular 258:Haas effect 178:delay lines 162:vertebrates 126:oval window 3292:Categories 3141:Nobuo Suga 3056:in insects 2771:Perception 2403:2023-11-30 2048:Ian Pitt. 
1854:References 1699:amino acid 1691:amino acid 1649:) rely on 1305:The human 1301:Other cues 711:wavelength 134:hair cells 122:middle ear 77:April 2023 3298:Acoustics 2725:, 151-159 2530:252223827 2522:1558-1748 2443:0270-6474 1939:1664-1078 1625:nocturnal 1443:reflected 1315:ear canal 1307:outer ear 1281:α 1261:ω 1221:φ 1201:θ 1038:ω 1008:α 1002:ω 996:φ 990:θ 962:α 956:ω 950:φ 944:θ 890:ω 860:α 854:ω 848:φ 842:θ 814:α 808:ω 802:φ 796:θ 684:θ 681:⁡ 675:× 613:θ 538:θ 535:⁡ 529:× 516:× 501:≤ 485:θ 482:⁡ 476:× 463:× 380:θ 170:brainstem 61:summarize 3264:Category 3004:Instinct 2980:Concepts 2799:43674057 2791:16042189 2701:16380842 2658:11287954 2623:21452506 2377:Newsweek 2191:Newsweek 2144:18134356 1957:39049946 1948:11267622 1889:18904764 1807:See also 1730:stimulus 1643:Dolphins 1638:Dolphins 1453:(DNLL). 1436:loudness 1389:parallax 547:if  494:if  247:spectral 174:Jeffress 140:, which 118:ear drum 3308:Hearing 3276:Commons 3181:Systems 3160:Methods 2709:5881898 2666:4370356 2615:8965258 2580:8550933 2560:Bibcode 2502:Bibcode 2461:8929445 2452:6578946 2351:Bibcode 2315:5659493 2295:Bibcode 2268:1549436 2136:1418275 1711:History 1704:Prestin 1676:Prestin 1664:Prestin 1524:Animals 1415:or 100 1348:Batteau 182:neurons 168:of the 142:synapse 136:in the 130:cochlea 3064:People 2998:Umwelt 2820:  2797:  2789:  2707:  2699:  2664:  2656:  2638:Nature 2621:  2613:  2578:  2528:  2520:  2459:  2449:  2441:  2313:  2266:  2142:  2134:  2075:  1955:  1945:  1937:  1887:  1533:, see 1500:Qsound 1407:. 
The 243:timbre 233:pinnae 3313:Sound 2853:(PDF) 2795:S2CID 2705:S2CID 2662:S2CID 2619:S2CID 2526:S2CID 2394:(PDF) 2132:JSTOR 1363:pinna 1311:pinna 218:sound 144:onto 114:pinna 93:sound 2818:ISBN 2787:PMID 2697:PMID 2654:PMID 2611:PMID 2576:PMID 2537:2022 2518:ISSN 2457:PMID 2439:ISSN 2311:PMID 2264:PMID 2140:PMID 2073:ISBN 1953:PMID 1935:ISSN 1885:PMID 1724:and 1413:Bark 1166:and 1085:and 746:HRIR 734:HRTF 662:1000 554:> 186:axon 34:and 2865:doi 2779:doi 2749:doi 2689:doi 2685:192 2646:doi 2642:410 2603:doi 2599:179 2568:doi 2510:doi 2447:PMC 2431:doi 2359:doi 2303:doi 2254:doi 2172:doi 2124:doi 1943:PMC 1925:doi 1877:doi 1791:by 1627:or 1595:ear 1514:5.1 1438:. 1417:Mel 678:sin 670:0.8 645:1.0 532:sin 479:sin 278:ear 160:In 3294:: 2859:. 2855:. 2832:^ 2793:. 2785:. 2775:34 2773:. 2759:^ 2745:31 2743:, 2730:^ 2703:. 2695:. 2683:. 2660:. 2652:. 2640:. 2617:. 2609:. 2597:. 2574:. 2566:. 2556:98 2554:. 2524:. 2516:. 2508:. 2498:22 2496:. 2492:. 2469:^ 2455:. 2445:. 2437:. 2427:16 2425:. 2421:. 2396:. 2357:. 2347:10 2345:. 2309:. 2301:. 2291:43 2289:. 2285:. 2262:. 2250:51 2248:. 2244:. 2168:27 2166:. 2152:^ 2138:. 2130:. 2120:62 2118:. 2106:^ 2081:. 2029:^ 2008:^ 1994:^ 1974:^ 1951:. 1941:. 1933:. 1923:. 1921:15 1919:. 1915:. 1883:. 1873:41 1871:. 1746:. 1537:. 1465:) 1432:dB 1293:. 235:. 180:: 2965:e 2958:t 2951:v 2871:. 2867:: 2861:4 2826:. 2801:. 2781:: 2751:: 2711:. 2691:: 2668:. 2648:: 2625:. 2605:: 2582:. 2570:: 2562:: 2539:. 2512:: 2504:: 2463:. 2433:: 2406:. 2365:. 2361:: 2353:: 2317:. 2305:: 2297:: 2270:. 2256:: 2178:. 2174:: 2146:. 2126:: 1959:. 1927:: 1891:. 
1879:: 1516:/ 1241:r 1179:R 1175:H 1152:L 1148:H 1125:0 1121:P 1098:R 1094:P 1071:L 1067:P 1044:, 1041:) 1035:, 1032:r 1029:( 1024:0 1020:P 1015:/ 1011:) 1005:, 999:, 993:, 987:, 984:r 981:( 976:R 972:P 968:= 965:) 959:, 953:, 947:, 941:, 938:r 935:( 930:R 926:H 922:= 917:R 913:H 893:) 887:, 884:r 881:( 876:0 872:P 867:/ 863:) 857:, 851:, 845:, 839:, 836:r 833:( 828:L 824:P 820:= 817:) 811:, 805:, 799:, 793:, 790:r 787:( 782:L 778:H 774:= 769:L 765:H 666:) 658:/ 654:f 651:( 648:+ 642:= 639:D 636:I 633:I 593:f 551:f 541:, 524:c 521:r 513:2 498:f 488:, 471:c 468:r 460:3 454:{ 449:= 446:D 443:T 440:I 420:c 400:r 360:f 79:) 75:( 65:. 38:. 20:)
