
3D sound localization


is fundamentally limited by the physical size of the array. If the array is too small, the microphones are spaced so closely together that they all record essentially the same sound (with ITD near zero), making it extremely difficult to estimate the orientation. Thus, it is not uncommon for microphone arrays to range from tens of centimeters in length (for desktop applications) to many tens of meters in length (for underwater localization). However, microphone arrays of this size then become impractical to use on small robots. Even for large robots, such microphone arrays can be cumbersome to mount and to maneuver. In contrast, the ability to localize sound using a single microphone (which can be made extremely small) holds the potential of significantly more compact, as well as lower-cost and lower-power, devices for localization.
of the sound source, and the amplitude of the ICTD signal can be represented as a function of the elevation angle of the sound source and the distance between the two microphones. In the case of multiple sources, the ICTD signal has data points forming multiple discontinuous sinusoidal waveforms. Machine learning techniques such as Random sample consensus (RANSAC) and Density-based spatial clustering of applications with noise (DBSCAN) can be applied to identify phase shifts (mapping to azimuths) and amplitudes (mapping to elevations) of each discontinuous sinusoidal waveform in the ICTD signal.
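The clustering-and-fitting step can be sketched as follows. This is a minimal illustration, assuming the ICTD samples have already been extracted as (rotation angle, delay) pairs; the DBSCAN parameters, the sinusoid model and the fitted-parameter interpretation are illustrative assumptions rather than the published algorithm.

import numpy as np
from scipy.optimize import curve_fit
from sklearn.cluster import DBSCAN

def ictd_model(phi, amplitude, phase):
    # Ideal ICTD of a rotating bi-microphone array: a sinusoid in the rotation
    # angle phi; its phase shift maps to azimuth, its amplitude to elevation.
    return amplitude * np.sin(phi + phase)

def localize_sources(phi, ictd, eps=0.1, min_samples=10):
    """Cluster ICTD samples into per-source segments, then fit each sinusoid."""
    points = np.column_stack([phi % (2 * np.pi), ictd])
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(points)
    estimates = []
    for label in set(labels) - {-1}:                    # label -1 marks noise points
        mask = labels == label
        (amp, phase), _ = curve_fit(ictd_model, phi[mask], ictd[mask],
                                    p0=[np.max(np.abs(ictd[mask])), 0.0])
        estimates.append({"phase_shift": phase % (2 * np.pi),   # maps to azimuth
                          "amplitude": abs(amp)})               # maps to elevation
    return estimates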
First, HRTFs are computed for 3D sound localization by formulating two equations: one represents the signal of a given sound source, and the other indicates the signal output from the robot-head microphones for the sound transferred from the source. Monaural input data are processed by these HRTFs, and the results are output from stereo headphones. The disadvantage of this method is that many parametric operations are necessary for the whole set of filters to realize 3D sound localization, resulting in high computational complexity.
in a direction-dependent way). The approach models the typical distribution of natural and artificial sounds, as well as the direction-dependent changes to sounds induced by the pinna. The experimental results also show that the algorithm is able to fairly accurately localize a wide range of sounds, such as human speech, dog barking, waterfall, thunder, and so on. In contrast to microphone arrays, this approach also offers the potential of significantly more compact, as well as lower cost and power, devices for sound localization.
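A rough sketch of such a learning pipeline is shown below. The training data (single-microphone clips recorded behind an artificial pinna, labelled with their incident angles), the spectral features and the classifier are all assumptions for illustration; they are not the exact model of the cited work.

import numpy as np
from sklearn.ensemble import RandomForestClassifier

def log_spectrum_features(clip, n_fft=1024):
    # The pinna imprints direction-dependent notches and peaks on the spectrum,
    # so a coarse log-magnitude spectrum is used as the feature vector.
    return np.log(np.abs(np.fft.rfft(clip, n=n_fft)) + 1e-9)

def train_monaural_localizer(recordings, directions):
    # recordings: list of 1-D sample arrays; directions: discretized incident angles.
    X = np.array([log_spectrum_features(clip) for clip in recordings])
    model = RandomForestClassifier(n_estimators=200)
    model.fit(X, directions)
    return model

# Usage (hypothetical data): model.predict([log_spectrum_features(new_clip)])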
binaural hearing model. The HRTF can be derived from various localization cues. Sound localization with an HRTF amounts to filtering the input signal with a filter designed from the HRTF. Instead of using neural networks, a head-related transfer function is used, and the localization is based on a simple correlation approach.
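The correlation-based matching can be sketched as follows, assuming a bank of head-related impulse responses indexed by candidate direction is available; the hrtf_bank mapping and the scoring are illustrative assumptions, not a specific library API.

import numpy as np
from scipy.signal import fftconvolve

def normalized_peak_correlation(a, b):
    a = (a - np.mean(a)) / (np.std(a) + 1e-12)
    b = (b - np.mean(b)) / (np.std(b) + 1e-12)
    return np.max(fftconvolve(a, b[::-1])) / len(a)

def localize_with_hrtf(left, right, source_estimate, hrtf_bank):
    """Pick the direction whose HRTF-filtered source best matches both ear signals."""
    best_direction, best_score = None, -np.inf
    for direction, (h_left, h_right) in hrtf_bank.items():
        score = (normalized_peak_correlation(left, fftconvolve(source_estimate, h_left)) +
                 normalized_peak_correlation(right, fftconvolve(source_estimate, h_right)))
        if score > best_score:
            best_direction, best_score = direction, score
    return best_direction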
The measurement procedure involves manually moving the AVS sensor around the sound source while a stereo camera is used to extract the instantaneous position of the sensor in three-dimensional space. The recorded signals are then split into multiple segments and assigned to a set of positions using a
Scan-based techniques are a powerful tool for localizing and visualizing time-stationary sound sources, as they only require the use of a single sensor and a position tracking system. One popular method for achieving this is through the use of an Acoustic Vector Sensor (AVS), also known as a 3D Sound
As shown in the figure, the implementation procedure of this realtime algorithm is divided into three phases, (i) Frequency Division, (ii) Sound Localization, and (iii) Mixing. In the case of 3D sound localization for a monaural sound source, the audio input data are divided into two: left and right
The rotation of the two-microphone array (also referred to as a bi-microphone array) leads to a sinusoidal inter-channel time difference (ICTD) signal for a stationary sound source in a 3D environment. The phase shift of the resulting sinusoidal signal can be directly mapped to the azimuth angle
Monaural localization is made possible by the structure of the pinna (outer ear), which modifies the sound in a way that is dependent on its incident angle. A machine learning approach is adapted for monaural localization using only a single microphone and an “artificial pinna” (that distorts sound
Typically, sound localization is performed by using two (or more) microphones. By using the difference of arrival times of a sound at the two microphones, one can mathematically estimate the direction of the sound source. However, the accuracy with which an array of microphones can localize a sound
In real sound localization, the robot head and torso play a functional role, in addition to the two pinnae. Together they act as a spatial linear filter, and the filtering is always quantified in terms of the Head-Related Transfer Function (HRTF). HRTF also uses the robot head sensor, which is the
by rotating the head with a fixed white-noise sound source and analyzing the spectrum. Experiments show that the system can identify the direction of the source well within a certain range of angle of arrival. It cannot identify sound coming from outside that range because of the collapsed spectrum pattern
method. The sensor is a robot dummy head with two microphones and an artificial pinna (reflector). The robot head has two rotation axes and can rotate horizontally and vertically. The reflector causes the spectrum of an incoming white-noise sound wave to change into a certain pattern, and this
The results of the AVS analysis can be presented over a 3D sketch of the tested object, providing a visual representation of the sound distribution around a 3D mesh of the object or environment. This can be useful for localizing sound sources in a variety of fields, such as architectural acoustics,
The AVS is one kind of collocated microphone array. It uses a multiple-microphone-array approach to estimate sound directions with several arrays, and then finds source locations by using reflection information, such as the point where the directions detected by different arrays cross.
The advantage of this method is that it detects the direction of the sound and derives the distance of the sound sources. The main drawback of the beamforming approach is its imperfect localization accuracy and capability compared with the neural-network approach, which works with moving speakers.
uses sound source localization techniques to identify the location of a target. 3D sound localization is also used for effective human-robot interaction. With the increasing demand for robotic hearing, some applications of 3D sound localization such as human-machine interface, handicapped aid, and
IID-based and ITD-based sound localization methods have a main problem called front-back confusion. In this sound localization system, based on a hierarchical neural network, IID estimation is combined with ITD estimation to solve this issue. The system was used for broadband sounds and can be deployed for
The first cue our hearing uses is the interaural time difference. Sound from a source directly in front of or behind us arrives at both ears simultaneously. If the source moves to the left or right, our ears pick up the sound from the same source at both ears, but with a certain delay.
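As a worked example, under the common far-field approximation ITD ≈ d·sin(θ)/c the delay stays below roughly half a millisecond for human ears; the ear spacing and speed of sound below are assumed round values used only for illustration.

import math

d = 0.18   # assumed distance between the ears (m)
c = 343.0  # speed of sound in air (m/s)

def itd_seconds(azimuth_deg):
    return d * math.sin(math.radians(azimuth_deg)) / c

def azimuth_from_itd(itd):
    return math.degrees(math.asin(max(-1.0, min(1.0, itd * c / d))))

print(itd_seconds(0))              # 0.0 s: directly ahead (or behind)
print(itd_seconds(90))             # ~0.00052 s: fully to one side
print(azimuth_from_itd(0.00026))   # ~30 degrees (front/back still ambiguous)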
The Hierarchical Fuzzy Artificial Neural Networks Approach to sound localization was modeled on biological binaural sound localization. Some primitive animals with two ears and small brains can perceive 3D space and process sounds, although the process is not fully understood. Some animals
A sound signal is first windowed using a rectangular window, and each resulting segment is treated as a frame. Four parallel frames are obtained from the XYZO array and used for DOA estimation. The four frames are split into small blocks of equal size, and then the Hamming window and FFT are used to
{\displaystyle R_{i,j}^{\text{RWPHAT}}(\tau )=\sum _{k=0}^{L-1}{\frac {\zeta _{i}(k)\,X_{i}(k)\,\zeta _{j}(k)\,X_{j}^{*}(k)}{\left|X_{i}(k)\right|\left|X_{j}(k)\right|}}\,e^{j2\pi k\tau /L}}

(ITD-based) and interaural intensity difference (IID-based) sound localization methods for higher accuracy that is similar to that of humans. Hierarchical Fuzzy Artificial Neural Networks were used with the goal of reaching the same sound localization accuracy as human ears.
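A minimal sketch of the RWPHAT-weighted cross-correlation shown above, and of the pairwise sum that the steered delay-and-sum beamformer maximizes, is given below. The Wiener-gain weights zeta are assumed to be estimated elsewhere, and np.fft.ifft introduces a 1/L scale factor that does not affect the search for the maximum.

import numpy as np

def rwphat_correlation(x_i, x_j, zeta_i, zeta_j):
    """R^RWPHAT_{i,j}(tau) for all integer delays tau (circular, up to a 1/L factor)."""
    X_i, X_j = np.fft.fft(x_i), np.fft.fft(x_j)
    weighted = (zeta_i * X_i) * (zeta_j * np.conj(X_j))
    weighted /= np.abs(X_i) * np.abs(X_j) + 1e-12        # phase-transform-style whitening
    return np.real(np.fft.ifft(weighted))

def steered_energy(frames, zetas, steering_delays):
    """Sum R^RWPHAT over microphone pairs at the steering delays (the constant K is dropped)."""
    energy, M = 0.0, len(frames)
    for m1 in range(1, M):
        for m2 in range(m1):
            r = rwphat_correlation(frames[m1], frames[m2], zetas[m1], zetas[m2])
            energy += r[int(round(steering_delays[m1] - steering_delays[m2])) % len(r)]
    return 2.0 * energy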
CSP method is also used for the binaural model. The idea is that the angle of arrival can be derived through the time delay of arrival (TDOA) between two microphones, and TDOA can be estimated by finding the maximum coefficients of CSP. CSP coefficients are derived
Sound reflections always occur in a real environment, and microphone arrays cannot avoid observing those reflections. This multiple-array approach was tested using fixed arrays in the ceiling; the performance of the moving scenario still needs to be tested.
This method relates to the technique of Real-Time sound localization utilizing an Acoustic Vector Sensor (AVS) array, which measures all three components of the acoustic particle velocity, as well as the sound pressure, unlike conventional acoustic
In order to estimate the location of a source in 3D space, two line sensor arrays can be placed horizontally and vertically. An example is a 2D line array used for underwater source localization. By processing the data from two arrays using the
spatial discretization algorithm. This allows for the computation of a vector representation of the acoustic variations across the sound field, using combinations of the sound pressure and the three orthogonal acoustic particle velocities.
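One standard combination of this kind is the time-averaged acoustic intensity vector I = ⟨p·v⟩, which points along the direction of energy flow; a minimal sketch (with illustrative variable names, assuming time-aligned pressure and particle-velocity samples) is shown below.

import numpy as np

def avs_direction(p, vx, vy, vz):
    """Azimuth/elevation estimate from one AVS measurement position."""
    intensity = np.array([np.mean(p * vx), np.mean(p * vy), np.mean(p * vz)])
    # Energy flows away from the source, so the arrival direction is the opposite.
    doa = -intensity / (np.linalg.norm(intensity) + 1e-12)
    azimuth = np.degrees(np.arctan2(doa[1], doa[0]))
    elevation = np.degrees(np.arcsin(np.clip(doa[2], -1.0, 1.0)))
    return azimuth, elevation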
{\displaystyle dist(dir_{1},dir_{2})={\frac {\left|\left({\overrightarrow {v_{1}}}\times {\overrightarrow {v_{2}}}\right)\cdot {\overrightarrow {p_{1}p_{2}}}\right|}{\left|{\overrightarrow {v_{1}}}\times {\overrightarrow {v_{2}}}\right|}}}
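The minimum-distance formula above, the uncertainty test and the weighted position estimate described in the text can be combined as in the following sketch. The uncertainty model PU(r) = AU·r (AU converted to radians) follows the text, while the particular weights are an illustrative assumption.

import numpy as np

def closest_points(p1, v1, p2, v2):
    """Closest points of the lines p1 + t*v1 and p2 + s*v2, and their distance."""
    p1, v1, p2, v2 = (np.asarray(a, dtype=float) for a in (p1, v1, p2, v2))
    n = np.cross(v1, v2)
    denom = np.dot(n, n)
    if denom < 1e-12:                       # parallel directions never cross
        return None, None, np.inf
    d = p2 - p1
    t = np.dot(np.cross(d, v2), n) / denom
    s = np.dot(np.cross(d, v1), n) / denom
    c1, c2 = p1 + t * v1, p2 + s * v2
    return c1, c2, np.linalg.norm(c1 - c2)  # equals |(v1 x v2) . p1p2| / |v1 x v2|

def fuse_directions(p1, v1, p2, v2, au_deg=5.0):
    """Weighted source position if the two direction lines are judged as crossing."""
    c1, c2, dist = closest_points(p1, v1, p2, v2)
    if c1 is None:
        return None
    pu1 = np.radians(au_deg) * np.linalg.norm(c1 - np.asarray(p1, dtype=float))
    pu2 = np.radians(au_deg) * np.linalg.norm(c2 - np.asarray(p2, dtype=float))
    if dist >= pu1 + pu2:                   # too far apart: not a crossing
        return None
    w1, w2 = 1.0 / (pu1 + 1e-9), 1.0 / (pu2 + 1e-9)   # illustrative weights
    return (w1 * c1 + w2 * c2) / (w1 + w2)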
convert each block from a time domain to a frequency domain. Then the output of this system is represented by a horizontal angle and a vertical angle of the sound sources which is found by the peak in the combined 3D spatial spectrum.
in order to find the maximum value of the output of a beamformer steered in all possible directions. Using the Reliability Weighted Phase Transform (RWPHAT) method, the output energy of an M-microphone delay-and-sum beamformer is
The source location is usually determined by the direction of the incoming sound waves (horizontal and vertical angles) and the distance between the source and the sensors. It involves the structure arrangement design of the
The motivation for using this method is based on previous research. The method is used for tracking and localizing multiple sound sources, even though sound tracking and localization normally apply only to a single sound source.
wide-band sound sources simultaneously. Applying an O array makes more acoustic information available, such as amplitude and time difference. Most importantly, the XYZO array has better performance with a tiny size.
Where r is the distance from the array center to the source, and AU is the angle uncertainty. This measurement is used to judge whether two directions cross at some location. Minimum distance between two lines:
that only utilize the pressure information and delays in the propagating acoustic field. Exploiting this extra information, AVS arrays are able to significantly improve the accuracy of source localization.
A distinctive feature of this approach is that the audible frequency band is divided into three so that a distinct procedure of 3D sound localization can be exploited for each of the three subbands.
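A sketch of the frequency-division phase is given below; the crossover frequencies and the Butterworth design are assumed example values, not the ones used in the cited DSP implementation.

import numpy as np
from scipy.signal import butter, sosfilt

def split_three_bands(x, sample_rate=48000, crossovers=(500.0, 4000.0)):
    """Split a mono frame into low, mid and high subbands for separate processing."""
    lo, hi = crossovers
    low = sosfilt(butter(4, lo, btype="lowpass", fs=sample_rate, output="sos"), x)
    mid = sosfilt(butter(4, [lo, hi], btype="bandpass", fs=sample_rate, output="sos"), x)
    high = sosfilt(butter(4, hi, btype="highpass", fs=sample_rate, output="sos"), x)
    return low, mid, high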
Liang, Yun; Cui, Zheng; Zhao, Shengkui; Rupnow, Kyle; Zhang, Yihao; Jones, Douglas L.; Chen, Deming (2012). "Real-time implementation and performance optimization of 3D sound localization on GPUs".
experience difficulty in 3D sound location due to small head size. Additionally, the wavelength of communication sound may be much larger than their head diameter, as is the case with
Angle uncertainty (AU) will occur when estimating direction, and position uncertainty (PU) also grows with increasing distance between the array and the source. We know that:
2105: 3948: 2536:{\displaystyle csp_{ij}(k)={\text{IFFT}}\left\{{\frac {{\text{FFT}}\cdot {\text{FFT}}^{*}}{\left|{\text{FFT}}\right\vert \cdot \left|{\text{FFT}}\right\vert \quad }}\right\}\quad } 205:
This approach utilizes eight microphones combined with a steered beamformer enhanced by the Reliability Weighted Phase Transform (RWPHAT). The final results are filtered through a
is also used for localizing several sound sources and reducing the localization errors. The system is capable of identifying several moving sound sources using only two microphones.
to localize sound, by comparing the information received from each ear in a complex process that involves a significant amount of synthesis. It is difficult to localize using
• Can be used in combination with the Offline Calibration Process to measure and interpolate the impulse response of the X, Y, Z and O arrays, to obtain their steering vector.
Gala, Deepak; Lindsay, Nathan; Sun, Liang (July 2018). "Realtime Active Sound Source Localization for Unmanned Ground Robots Using a Self-Rotational Bi-Microphone Array".
The advantages of this array, compared with past microphone arrays, are that the device performs well even if the aperture is small, and that it can localize multiple
noise control, and audio engineering, as it allows for a detailed understanding of the sound distribution and its interactions with the surrounding environment.
Noriaki, Sakamoto; wataru, Kobayashi; Takao, Onoye; Isao, Shirakawa (2001). "DSP implementation of 3D sound localization algorithm for monaural sound source".
Valin, J.M.; Michaud, F.; Rouat, Jean (14–19 May 2006). "Robust 3D Localization and Tracking of Sound Sources Using Beamforming and Particle Filtering".
method, the direction, range and depth of the source can be identified simultaneously. Unlike the binaural hearing model, this method is similar to the
pattern is used for the cue of the vertical localization. The cue for horizontal localization is ITD. The system makes use of a learning process using
• Contains three orthogonally placed acoustic particle velocity sensors (shown as the X, Y and Z arrays) and one omnidirectional acoustic microphone (O).

{\displaystyle E=K+2\sum _{m_{1}=1}^{M-1}\sum _{m_{2}=0}^{m_{1}-1}R_{i,j}^{\text{RWPHAT}}\left(\tau _{m_{1}}-\tau _{m_{2}}\right)}
Keyrouz, Fakheredine; Diepold, Klaus; Keyrouz, Shady (September 2007). "High performance 3D sound localization for surveillance applications".
Keyrouz, Fakheredine; Diepold, Klaus (May 2008). "A novel biologically inspired neural network solution for robotic 3D sound source sensing".
Hill, P.A.; Nelson, P.A.; Kirkeby, O.; Hamada, H. (December 2000). "Resolution of front-back confusion in virtual acoustic imaging systems".
Ishi, C.T.; Even, J.; Hagita, N. (November 2013). "Using multiple microphone arrays and reflections for 3D localization of sound sources".
of the reflector. Binaural hearing uses only two microphones and is capable of concentrating on one source among multiple sources of noise.
Multi-Sound-Source Localization Using Machine Learning for Small Autonomous Unmanned Vehicles with a Self-Rotating Bi-Microphone Array
2687: 3316:
Ephraim, Y.; Malah, D. (Dec 1984). "Speech enhancement using a minimum-mean square error short-time spectral amplitude estimator".
{\displaystyle dist(dir_{1},dir_{2})<abs(PU_{1}(r_{1}))+abs(PU_{2}(r_{2}))}
Two lines are judged as crossing. When two lines are crossing, we can compute the sound source location using the following:
1000:{\displaystyle {{\zeta }^{n}}_{i}\left(k\right)={\frac {{{\xi }^{n}}_{i}\left(k\right)}{{{\xi }^{n}}_{i}\left(k\right)+1}}} 118:
Localization cues are features that help localize sound. Cues for sound localization include binaural and monoaural cues.
105:
Applications of sound source localization include sound source separation, sound source tracking, and speech enhancement.
1233:
is the delay of arrival for that microphone. The more specific procedure of this method is proposed by Valin and Michaud
129:
Binaural cues are generated by the difference in hearing between the left and right ears. These differences include the
3040:
A DSP-based implementation of a realtime 3D sound localization approach with the use of an embedded DSP can reduce the
2926: 2987:
Based on previous binaural sound localization methods, a hierarchical fuzzy artificial neural network system combines
831: 3552: 3242: 3684:
Three-dimensional sound source localization for unmanned ground vehicles with a self-rotational two-microphone array
1010: 3081: 3021: 2303: 181: 133:(ITD) and the interaural intensity difference (IID). Binaural cues are used mostly for horizontal localization. 3225:
Nakashima, H.; Mukai, T. (2005). "3D Sound Source Localization System Based on Learning of Binaural Hearing".
3160: 2060: 81:. Existing real-time passive sound localization systems are mainly based on the time-difference-of-arrival ( 3071: 3990: 3096: 2110: 3941: 3291:"Direct acoustic vector field mapping: new scanning tools for measuring 3D sound intensity in 3D space" 3005: 2988: 1198: 147: 130: 3840:
ICECS 2001. 8th IEEE International Conference on Electronics, Circuits and Systems (Cat. No.01EX483)
3431:"Calibration Proposal for New Antenna Array Architectures and Technologies for Space Communications" 3928: 3204: 153:
Another way of saying this is that the two ears pick up different phases of the same signal.
4032: 3066: 27: 3429:
Salas Natera, M.A.; Martinez Rodriguez-Osorio, R.; de Haro Ariet, L.; Sierra Perez, M. (2012).
3041: 1132: 23: 3428: 3410:
Pérez Cabo, Daniel; de Bree, Hans Elias; Fernandez Comesaña, Daniel; Sobreira Seoane, Manuel.
3032: 2585: 2549: 2247: 1166: 3578:
Evaluation of Two-Channel-Based Sound Source Localization using 3D Moving Sound Creation Tool
2892: 1638: 1605: 86: 1059: 3975: 3796: 3611: 3442: 2865: 2663: 2172: 2145: 1698: 1671: 877:
reflect the reliability of each frequency component, and are defined as the Wiener filter gain
519:{\displaystyle {{R}^{\text{RWPHAT}}}_{i,j}\left({\tau }_{{m}_{1}}-{\tau }_{{m}_{2}}\right)} 8: 4010: 3923: 3800: 3615: 3446: 3353:
2006 IEEE International Conference on Acoustics Speech and Signal Processing Proceedings
2199: 4053: 4015: 3902: 3861: 3769: 3716: 3664: 3646: 3558: 3499: 3392: 3356: 3248: 3141: 3091: 3076: 2947: 2845: 2641: 2621: 2222: 1109: 1089: 177: 143: 4022: 4000: 3964: 3892: 3851: 3820: 3812: 3761: 3548: 3489: 3458: 3384: 3374: 3333: 3298: 3271: 3238: 3183: 3131: 527: 35: 3906: 3865: 3773: 3562: 3503: 3145: 4058: 4005: 3884: 3843: 3804: 3753: 3726: 3687: 3668: 3656: 3619: 3599: 3581: 3540: 3481: 3450: 3412:"Real life harmonic source localization using a network of acoustic vector sensors" 3366: 3325: 3252: 3230: 3123: 2142:
is the position where each direction intersects the line with minimum distance, and
166: 46: 3396: 3045:
channels and the audio input data in time series are processed one after another.
97: 3879:
Saxena, A.; Ng, A.Y. (2009). "Learning sound location from a single microphone".
3537:
2006 IEEE International Symposium on Signal Processing and Information Technology
2925:
The CSP method does not require the system impulse response data that HRTF needs. An
2286: 206: 3602:; Messer,H. (Jan 1996). "Three-Dimensional Source Localization in a Waveguide". 3370: 2832:{\displaystyle {\theta }=\cos ^{-1}{\frac {v\cdot \tau }{d_{\max }\cdot F_{s}}}} 3995: 3888: 3730: 3544: 3329: 3234: 2951: 1285: 173: 123: 3847: 3757: 3660: 3485: 3454: 3409: 3127: 1885:{\displaystyle dist(dir_{1},dir_{2})<abs(PU_{1}(r_{1}))+abs(PU_{2}(r_{2}))} 16:
Acoustic technology to locate the source of a sound in three-dimensional space
4047: 3816: 3765: 3462: 3388: 3337: 3302: 3288: 3275: 1281: 3289:
Fernandez Comesana, D.; Steltenpool, S.; Korbasiewicz, M.; Tijs, E. (2015).
3980: 3824: 3575: 1247: 74: 66: 1376:{\displaystyle PU\left(r\right)={\frac {\pm AU}{360}}\times 2\pi \times r} 3585: 2966: 1259: 161:
There are many different methods of 3D sound localization. For instance:
3692: 3478:
2013 IEEE/RSJ International Conference on Intelligent Robots and Systems
2272: 222: 78: 3808: 3623: 3706: 3681: 3636: 2971: 70: 3933: 3119:
2007 IEEE Conference on Advanced Video and Signal Based Surveillance
3721: 3651: 3532: 3361: 54: 50: 3227:
2005 IEEE International Conference on Systems, Man and Cybernetics
2998: 1305: 2937: 31: 3016:
A general way to implement 3D sound localization is to use the
2922:
is the distance with maximum time delay between 2 microphones.
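A minimal sketch of the CSP-based TDOA estimate and of the angle-of-arrival relation θ = cos⁻¹(v·τ/(d_max·F_s)) is given below; the microphone spacing, sample rate and sound speed are assumed example values.

import numpy as np

def csp_tdoa(s_i, s_j):
    """Delay (in samples) maximizing the cross-power spectrum phase coefficients."""
    S_i, S_j = np.fft.fft(s_i), np.fft.fft(s_j)
    csp = np.fft.ifft(S_i * np.conj(S_j) / (np.abs(S_i) * np.abs(S_j) + 1e-12))
    k = int(np.argmax(np.real(csp)))
    n = len(s_i)
    return k if k <= n // 2 else k - n      # map the circular index to a signed delay

def angle_of_arrival(tau_samples, d_max=0.2, fs=16000, v=343.0):
    cos_theta = np.clip(v * tau_samples / (d_max * fs), -1.0, 1.0)
    return np.degrees(np.arccos(cos_theta))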
2745:{\displaystyle {\tau }=\operatorname {arg} \max\{csp_{ij}(k)\}} 2281: 42: 3576:
Hyun-Don Kim; Komatani, K.; Ogata, T.; Okuno,H.G. (Jan 2008).
3027: 1296: 187:
Real-time methods using an Acoustic Vector Sensor (AVS) array
3881:
2009 IEEE International Conference on Robotics and Automation
3318:
IEEE Transactions on Acoustics, Speech, and Signal Processing
2981: 2954:
method. The method can be used to localize a distant source.
106: 26:
technology that is used to locate the source of a sound in a
2309: 3017: 82: 3430: 2293: 3786: 3682:
Gala, Deepak; Lindsay, Nathan; Sun, Liang (June 2018).
3518:"Reducing noise emissions from Lontra's LP2 compressor" 3268:
Automation and Test in Europe Conference and Exhibition
3182:(Eighth ed.). Cengage Learning. pp. 293–297. 2267: 1240: 3837: 3707:
Gala, Deepak; Lindsay, Nathan; Sun, Liang (Oct 2021).
3533:"An Enhanced Binaural 3D Sound Localization Algorithm" 2967:
Hierarchical Fuzzy Artificial Neural Networks Approach