One can distinguish between feature detection approaches that produce local decisions about whether there is a feature of a given type at a given image point or not, and those that produce non-binary data as a result. The distinction becomes relevant when the resulting detected features are relatively sparse. Although local decisions are made, the output from a feature detection step does not need to be a binary image. The result is often represented as sets of (connected or unconnected) coordinates of the image points where features have been detected, sometimes with subpixel accuracy.
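As a minimal sketch of this output format (the response map, threshold, and function name are illustrative, not from any particular library), the following turns a per-pixel detector response into a sparse set of point coordinates:

```python
import numpy as np

def detect_local_maxima(response, threshold):
    """Return (row, col) coordinates of local maxima in a 2-D
    feature-response map that exceed a threshold -- one simple way to
    convert per-pixel detector responses into a sparse point set."""
    points = []
    for r in range(1, response.shape[0] - 1):
        for c in range(1, response.shape[1] - 1):
            patch = response[r - 1:r + 2, c - 1:c + 2]
            if response[r, c] >= threshold and response[r, c] == patch.max():
                points.append((r, c))
    return points

# A response map with a single strong peak at (2, 3).
resp = np.zeros((5, 6))
resp[2, 3] = 1.0
print(detect_local_maxima(resp, 0.5))  # [(2, 3)]
```

Subpixel accuracy would typically be obtained by fitting a quadratic to the response values around each detected maximum, which this sketch omits.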
corresponding image region does not contain any spatial variation. As a consequence, it may be relevant to use a feature representation that includes a measure of certainty or confidence in the stated feature value. Otherwise, the same descriptor typically ends up representing both feature values of low certainty and feature values close to zero, with a resulting ambiguity in the interpretation of this descriptor. Depending on the application, such an ambiguity may or may not be acceptable.
From a practical viewpoint, a ridge can be thought of as a one-dimensional curve that represents an axis of symmetry and, in addition, has an attribute of local ridge width associated with each ridge point. Unfortunately, however, it is algorithmically harder to extract ridge features from general classes of grey-level images than edge, corner, or blob features. Nevertheless, ridge descriptors are frequently used for road extraction in aerial images and for extracting blood vessels in medical images—see
This enables a new feature descriptor to be computed from several descriptors, for example computed at the same image point but at different scales, or at different but neighboring points, as a weighted average in which the weights are derived from the corresponding certainties. In the simplest case, the corresponding computation can be implemented as low-pass filtering of the feature image. The resulting feature image will, in general, be more stable to noise.
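A hedged sketch of such certainty-weighted combination (the function name and the numeric data are illustrative assumptions, not from the article):

```python
import numpy as np

def certainty_weighted_average(values, certainties):
    """Combine several feature values (e.g. the same descriptor computed
    at neighboring points or at different scales) into one value,
    weighting each by its certainty. With zero total certainty the
    feature value is undefined, mirroring the ambiguity discussed above."""
    values = np.asarray(values, dtype=float)
    certainties = np.asarray(certainties, dtype=float)
    total = certainties.sum()
    if total == 0:
        raise ValueError("all certainties are zero; feature value undefined")
    return float((values * certainties).sum() / total)

# Two confident measurements dominate one very uncertain outlier.
print(certainty_weighted_average([1.0, 1.2, 9.0], [1.0, 1.0, 0.01]))
```

Applying this pointwise over a neighborhood is exactly the low-pass-filtering interpretation mentioned above, with the certainties acting as the filter weights.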
Consequently, a feature image can be seen as an image in the sense that it is a function of the same spatial (or temporal) variables as the original image, but one where the pixel values hold information about image features instead of intensity or color. This means that a feature image can be processed in much the same way as an ordinary image generated by an image sensor. Feature images are also often computed as an integrated step in algorithms for feature detection.
Neural network-based processing is often applied to images. The input data fed to the neural network is typically a feature vector for each image point, where the vector is constructed from several different features extracted from the image data. During a learning phase, the network can itself find which combinations of different features are useful for solving the problem at hand.
It was then noticed that the so-called corners were also being detected on parts of the image which were not corners in the traditional sense (for instance, a small bright spot on a dark background may be detected). These points are frequently known as interest points, but the term "corner" is used by tradition.
which are correlated with "nodes" that represent visual features. The starfish matches with a ringed texture and a star outline, whereas most sea urchins match with a striped texture and an oval shape. However, the one instance of a ring-textured sea urchin creates a weakly weighted association between the two.
A common example of feature vectors appears when each image point is to be classified as belonging to a specific class. Assuming that each image point has a corresponding feature vector based on a suitable set of features, meaning that each class is well separated in the corresponding feature space,
For example, if the orientation of an edge is represented in terms of an angle, this representation must have a discontinuity where the angle wraps from its maximal value to its minimal value. Consequently, it can happen that two similar orientations are represented by angles which have a mean that
includes methods for computing abstractions of image information and making local decisions at every image point about whether there is an image feature of a given type at that point or not. The resulting features will be subsets of the image domain, often in the form of isolated points, continuous curves
Consider shrinking an image and then performing corner detection. The detector will respond to points which are sharp in the shrunk image, but may be smooth in the original image. It is at this point that the difference between a corner detector and a blob detector becomes somewhat vague. To a large
Another example relates to motion, where in some cases only the normal velocity relative to some edge can be extracted. If two such features have been extracted and they can be assumed to refer to the same true velocity, this velocity is not given as the average of the normal velocity vectors. Hence,
Blobs provide a complementary description of image structures in terms of regions, as opposed to corners that are more point-like. Nevertheless, blob descriptors may often contain a preferred point (a local maximum of an operator response or a center of gravity) which means that many blob detectors
The algorithm is based on comparing and analyzing point correspondences between the reference image and the target image. If any part of the cluttered scene shares a number of correspondences greater than the threshold, that part of the cluttered-scene image is targeted and considered to include the reference
When a computer vision system or computer vision algorithm is designed, the choice of feature representation can be a critical issue. In some cases, a higher level of detail in the description of a feature may be necessary for solving the problem, but this comes at the cost of having to deal with
for one example of a local histogram descriptor). In addition to such attribute information, the feature detection step by itself may also provide complementary attributes, such as the edge orientation and gradient magnitude in edge detection and the polarity and the strength of the blob in blob
Once features have been detected, a local image patch around each feature can be extracted. This extraction may involve quite considerable amounts of image processing. The result is known as a feature descriptor or feature vector. Among the approaches that are used for feature description, one can
Subsequent run of the network on an input image (left): The network correctly detects the starfish. However, the weakly weighted association between ringed texture and sea urchin also confers a weak signal to the latter from one of two features. In addition, a shell that was not included in the
In some applications, it is not sufficient to extract only one type of feature to obtain the relevant information from the image data. Instead two or more different features are extracted, resulting in two or more feature descriptors at each image point. A common practice is to organize the
Two examples of image features are local edge orientation and local velocity in an image sequence. In the case of orientation, the value of this feature may be more or less undefined if more than one edge is present in the corresponding neighborhood. Local velocity is undefined if the
There are many computer vision algorithms that use feature detection as the initial step, and as a result, a very large number of feature detectors have been developed. These vary widely in the kinds of features detected, their computational complexity, and their repeatability.
to see if there is a feature present at that pixel. If this is part of a larger algorithm, then the algorithm will typically only examine the image in the region of the features. As a built-in prerequisite to feature detection, the input image is usually smoothed by a
magnitude. Furthermore, some common algorithms will then chain high gradient points together to form a more complete description of an edge. These algorithms usually place some constraints on the properties of an edge, such as shape, smoothness, and gradient value.
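The simplest variant of this idea, thresholding the gradient magnitude, can be sketched as follows (a minimal NumPy illustration; real edge detectors first smooth the image and then thin and chain the resulting points, which this sketch omits):

```python
import numpy as np

def edge_points(image, threshold):
    """Mark pixels whose finite-difference gradient magnitude exceeds a
    threshold -- the basic definition of edge points given above."""
    gy, gx = np.gradient(image.astype(float))  # gradients along rows, cols
    magnitude = np.hypot(gx, gy)
    return magnitude > threshold

# A vertical step edge between two flat regions.
img = np.zeros((4, 6))
img[:, 3:] = 10.0
mask = edge_points(img, 1.0)
print(np.argwhere(mask))  # the two columns straddling the step, every row
```

The set of True pixels is the "set of points with strong gradient magnitude"; chaining them into curves and enforcing shape or smoothness constraints is the job of the more complete algorithms mentioned above.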
is a piece of information about the content of an image; typically about whether a certain region of the image has certain properties. Features may be specific structures in the image such as points, edges or objects. Features may also be the result of a general
Edges are points where there is a boundary (or an edge) between two image regions. In general, an edge can be of almost arbitrary shape, and may include junctions. In practice, edges are usually defined as sets of points in the image which have a strong
generally, though image processing has a very sophisticated collection of features. The feature concept is very general and the choice of features in a particular computer vision system may be highly dependent on the specific problem at hand.
operation or not. Most feature representations can be averaged in practice, but only in certain cases can the resulting descriptor be given a correct interpretation in terms of a feature value. Such representations are referred to as
There is no universal or exact definition of what constitutes a feature, and the exact definition often depends on the problem or the type of application. Nevertheless, a feature is typically defined as an "interesting" part of an
Since features are used as the starting point and main primitives for subsequent algorithms, the overall algorithm will often only be as good as its feature detector. Consequently, the desirable property for a feature detector is
The terms corners and interest points are used somewhat interchangeably and refer to point-like features in an image which have a local two-dimensional structure. The name "corner" arose since early algorithms first performed
normal velocity vectors are not averageable. Instead, there are other representations of motions, using matrices or tensors, that give the true velocity in terms of an average operation of the normal velocity descriptors.
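A small numeric illustration of this point (the numbers are hypothetical, and the joint solve here uses an ordinary least-squares formulation in the spirit of the matrix/tensor representations mentioned, not any specific published method):

```python
import numpy as np

# A pattern translating with true velocity (1, 0), observed through two
# edges with unit normals n1, n2.  Only the component of motion along
# each normal is observable at the corresponding edge (aperture problem).
v_true = np.array([1.0, 0.0])
normals = np.array([[1.0, 0.0],
                    [0.0, 1.0]])
normal_speeds = normals @ v_true                  # observed n_i . v
normal_velocities = normals * normal_speeds[:, None]

naive = normal_velocities.mean(axis=0)            # plain averaging
solved, *_ = np.linalg.lstsq(normals, normal_speeds, rcond=None)

print(naive)    # [0.5 0. ]  -- not the true velocity
print(solved)   # [1. 0.]    -- joint solve recovers it
```

Averaging the normal velocity vectors gives (0.5, 0), whereas solving the normal-flow constraints n_i · v = b_i jointly recovers the true velocity (1, 0), which is why representations supporting such an averaging-like combination are preferred.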
more data and more demanding processing. Below, some of the factors which are relevant for choosing a suitable representation are discussed. In this discussion, an instance of a feature representation is referred to as a
and then analysed the edges to find rapid changes in direction (corners). These algorithms were then developed so that explicit edge detection was no longer required, for instance by looking for high levels of
extent, this distinction can be remedied by including an appropriate notion of scale. Nevertheless, due to their response properties to different types of image structures at different scales, the LoG and DoH
T. Lindeberg, "Scale selection properties of generalized scale-space interest point detectors", Journal of Mathematical Imaging and Vision, 46(2): 177–210, 2013.
applied to the image. Other examples of features are related to motion in image sequences, or to shapes defined in terms of curves or boundaries between different image regions.
A specific image feature, defined in terms of a specific structure in the image data, can often be represented in different ways. For example, an edge can be represented as a
and there are time constraints, a higher level algorithm may be used to guide the feature detection stage, so that only certain parts of the image are searched for features.
does not lie close to either of the original angles and, hence, this representation is not averageable. There are other representations of edge orientation, such as the
In particular, if a featured image will be used in subsequent processing, it may be a good idea to employ a feature representation that includes information about
T. Lindeberg, "Image matching using generalized scale-space interest points", Journal of Mathematical Imaging and Vision, 52(1): 3–36, 2015.
In addition to having certainty measures included in the representation, the representation of the corresponding feature values may itself be suitable for an
may also be regarded as interest point operators. Blob detectors can detect areas in an image which are too smooth to be detected by a corner detector.
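A hedged sketch of Laplacian-of-Gaussian (LoG) blob detection (a textbook sampled-LoG kernel and a brute-force correlation, assumed for illustration, not the specific formulation of any detector named here):

```python
import numpy as np

def log_kernel(sigma, radius):
    """Sampled Laplacian-of-Gaussian kernel, made zero-mean and
    sign-flipped so that bright blobs give positive responses."""
    y, x = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    r2 = x**2 + y**2
    g = np.exp(-r2 / (2 * sigma**2))
    log = (r2 - 2 * sigma**2) / sigma**4 * g
    return -(log - log.mean())

def blob_response(image, sigma=2.0, radius=6):
    """Correlate the image with a LoG kernel; peaks in the result mark
    blob centres at the scale set by sigma."""
    k = log_kernel(sigma, radius)
    h, w = image.shape
    padded = np.pad(image.astype(float), radius)
    out = np.zeros((h, w))
    for r in range(h):
        for c in range(w):
            out[r, c] = np.sum(padded[r:r + 2*radius + 1,
                                      c:c + 2*radius + 1] * k)
    return out

# A smooth Gaussian blob centred at (10, 14): too smooth for a corner
# detector, but the strongest LoG response lands on its centre.
y, x = np.mgrid[0:21, 0:28]
img = np.exp(-((y - 10)**2 + (x - 14)**2) / (2 * 3.0**2))
resp = blob_response(img, sigma=3.0)
print(tuple(int(i) for i in np.unravel_index(resp.argmax(), resp.shape)))
```

Running the detector over several values of sigma and keeping scale-space extrema is what turns this single-scale sketch into a scale-selective blob detector.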
at each image point that describes whether an edge is present at that point. Alternatively, we can instead use a representation which provides a
training gives a weak signal for the oval shape, also resulting in a weak signal for the sea urchin output. These weak signals may result in a
is any piece of information which is relevant for solving the computational task related to a certain application. This is the same sense as
Simplified example of training a neural network in object detection: The network is trained by multiple images that are known to depict
In reality, textures and outlines would not be represented by single nodes, but rather by associated weight patterns of multiple nodes.
of the edge. Similarly, the color of a specific region can either be represented in terms of the average color (three scalars) or a
"Detecting Salient Blob-Like Image Structures and Their Scales with a Scale-Space Primal Sketch: A Method for Focus-of-Attention"
When features are defined in terms of local neighborhood operations applied to an image, a procedure commonly referred to as
information provided by all these descriptors as the elements of one single vector, commonly referred to as a
is a natural tool. A ridge descriptor computed from a grey-level image can be seen as a generalization of a
repeatability: whether or not the same feature will be detected in two or more different images of the same scene.
When feature extraction is done without local decision making, the result is often referred to as a
instead of a boolean statement of the edge's existence and combine this with information about the
operation. That is, it is usually performed as the first operation on an image, and examines every
(summary and review of a number of feature detectors formulated based on scale-space operations)
The extraction of features is sometimes performed over several scales. One of these methods is the
Journal of Mathematical Imaging and Vision, 4(4): 353–373, Dec. 1994.
Computer Vision, Graphics, and Image Processing, 22(10): 28–38, Apr. 1983.
"Object Detection in a Cluttered Scene Using Point Feature Matching - MATLAB & Simulink"
"A Representation for Shape Based on Peaks and Ridges in the Difference of Low Pass Transform"
E. Rosten; T. Drummond (2006). "Machine learning for high-speed corner detection".
Features detected in each image can be matched across multiple images to establish
Works with any parameterizable feature (class variables, cluster detection, etc.)
and one or several feature images are computed, often expressed in terms of local
and features are used as a starting point for many computer vision algorithms.
", IEEE Transactions on PAMI, 6(2): 156–170, March 1984.
"Robust wide baseline stereo from maximally stable extremum regions"
Further information on Combination Of Shifted FIlter REsponses:
"Edge detection and ridge detection with automatic scale selection"
the classification of each image point can be done using standard
9th IEEE Conference on Computer Vision and Pattern Recognition
IEEE Transactions on Pattern Analysis and Machine Intelligence
"Distinctive Image Features from Scale-Invariant Keypoints"
. Vol. IV. John Wiley and Sons. pp. 2495–2504.
D. Eberly, R. Gardner, B. Morse, S. Pizer, C. Scharlach,
The set of all possible feature vectors constitutes a feature space.
Computer Imaging: Digital Image Analysis and Processing
"SUSAN - a new approach to low level image processing"
(1986). "A Computational Approach To Edge Detection".
Common feature detectors and their classification:
"Feature detection with automatic scale selection"
Piece of information about the content of an image
Locally, edges have a one-dimensional structure.
Encyclopedia of Computer Science and Engineering
J. Matas; O. Chum; M. Urban; T. Pajdla (2002).
Proceedings of the 4th Alvey Vision Conference
Arbitrary shapes (generalized Hough transform)
International Journal of Computer Vision
International Journal of Computer Vision
International Journal of Computer Vision
International Journal of Computer Vision
International Journal of Computer Vision
For broader coverage of this topic, see
Another and related example occurs when
Occasionally, when feature detection is
European Conference on Computer Vision
S. M. Smith; J. M. Brady (May 1997).
"A combined corner and edge detector"
Ferrie, C., & Kaiser, S. (2019).
Area-based, differential approach.
Feature extraction (machine learning)
For elongated objects, the notion of
are also mentioned in the article on
Ridges and Valleys on Digital Images
Edge direction, changing intensity,
Scott E Umbaugh (27 January 2005).
10.1023/B:VISI.0000029664.99615.94
. pp. 147–151. Archived from
Blobs / regions of interest points
Feature vectors and feature spaces
British Machine Vision Conference
J. L. Crowley and A. C. Parker, "
Scale-invariant feature transform
scale-invariant feature transform
Hessian strength feature measures
scale-invariant feature transform
Feature detection is a low-level
Deformable, parameterized shapes
J. Shi; C. Tomasi (June 1994).
C. Harris; M. Stephens (1988).
Harris & Stephens / Plessey
. Springer. pp. 430–443.
10.1002/9780470050118.ecse609
Vectorization (image tracing)
Generalised Hough transform
10.1109/TPAMI.1986.4767851
Neural Networks for Babies
Automatic image annotation
and local histograms (see
Principal curvature ridges
scale-space representation
Feature (machine learning)
. In Benjamin Wah (ed.).
Ridges for image analysis
, which are averageable.
Corners / interest points
computationally expensive
"Good Features to Track"
Active contours (snakes)
10.1023/A:1008097225773
10.1023/A:1008045108935
10.1023/A:1007963824710
Certainty or confidence
Difference of Gaussians
corresponding features
Correspondence problem
Determinant of Hessian
or connected regions.
result for sea urchin.
neighborhood operation
T. Lindeberg (2009).
T. Lindeberg (1993).
T. Lindeberg (1998).
T. Lindeberg (1998).
Laplacian of Gaussian
Level curve curvature
Foreground detection
corresponding points
. pp. 384–393.
10.1007/11744023_34
(three functions).
pattern recognition
10.1007/BF01469346
feature descriptor
feature extraction
www.mathworks.com
978-0-8493-2919-7
Feature selection
certainty measure
Visual descriptor
Template matching
Feature detection
feature detection
D. Lowe (2004).
Feature learning
structure tensor
boolean variable
Flexible methods
Circles/ellipses
Motion detection
Corner detection
Grey-level blobs
Shi & Tomasi
Feature detector
corner detection
image derivative
image processing
machine learning
image processing
Further reading
. Sourcebooks.
Computer vision
color histogram
Hough transform
Blob extraction
autocorrelation
Ridge detection
ridge detection
More broadly, a
computer vision
978-0470050118
(3): 283–318.
(2): 117–154.
R. Haralick, "
10.1.1.73.2924
10.1.1.60.3991
(6): 679–714.
object there.
Main article:
Averageability
Main article:
Representation
Blob detection
Edge detection
blob detectors
image gradient
edge detection
neural network
false positive
classification
feature vector
2722:"Scale-space"
2718:
2717:
2698:
2694:
2688:
2680:
2676:
2672:
2668:
2664:
2660:
2653:
2646:
2638:
2634:
2630:
2626:
2622:
2618:
2611:
2604:
2597:
2591:
2584:
2578:
2570:
2563:
2556:
2550:
2545:
2539:
2534:
2526:
2522:
2518:
2514:
2509:
2504:
2500:
2496:
2492:
2485:
2478:
2472:
2464:
2460:
2455:
2450:
2446:
2439:
2431:
2427:
2423:
2419:
2416:(2): 77–116.
2415:
2411:
2404:
2397:
2395:
2393:
2384:
2380:
2373:
2365:
2361:
2357:
2353:
2349:
2345:
2341:
2334:
2320:on 2022-04-01
2316:
2312:
2305:
2298:
2290:
2286:
2282:
2278:
2274:
2270:
2266:
2262:
2258:
2252:
2244:
2238:
2230:
2224:
2220:
2213:
2205:
2199:
2196:. CRC Press.
2195:
2194:
2186:
2182:
2172:
2169:
2167:
2164:
2162:
2159:
2157:
2154:
2152:
2149:
2147:
2144:
2143:
2137:
2133:
2131:
2127:
2121:
2111:
2107:
2105:
2099:
2097:
2092:
2082:
2080:
2076:
2071:
2062:
2060:
2056:
2046:
2044:
2040:
2036:
2032:
2026:
2013:
2010:
2009:
2001:
1998:
1993:
1990:
1987:
1984:
1983:
1982:
1979:
1977:
1974:
1972:
1969:
1967:
1964:
1963:
1954:
1950:
1947:
1946:
1937:
1933:
1932:
1924:
1921:
1919:
1916:
1914:
1911:
1909:
1906:
1904:
1901:
1900:
1894:
1891:
1887:
1885:
1876:
1861:
1858:
1855:
1852:
1850:
1847:
1846:
1842:
1839:
1836:
1833:
1831:
1828:
1827:
1823:
1820:
1817:
1814:
1812:
1809:
1808:
1804:
1801:
1798:
1795:
1793:
1790:
1789:
1785:
1782:
1779:
1776:
1774:
1771:
1770:
1766:
1763:
1760:
1757:
1755:
1752:
1751:
1747:
1744:
1741:
1738:
1736:
1733:
1732:
1728:
1725:
1722:
1719:
1717:
1714:
1713:
1709:
1706:
1703:
1700:
1698:
1695:
1694:
1690:
1687:
1684:
1681:
1679:
1676:
1675:
1671:
1668:
1665:
1662:
1660:
1657:
1656:
1652:
1649:
1646:
1643:
1641:
1638:
1637:
1633:
1630:
1627:
1624:
1622:
1619:
1618:
1614:
1611:
1608:
1605:
1603:
1600:
1599:
1596:
1593:
1591:
1588:
1586:
1583:
1581:
1578:
1575:
1574:
1567:
1562:
1560:
1555:
1552:
1545:
1536:
1534:
1530:
1526:
1516:
1514:
1510:
1504:
1495:
1493:
1489:
1485:
1474:
1471:
1468:
1452:
1450:
1437:
1429:
1420:
1416:
1409:
1400:
1398:
1392:
1390:
1389:feature space
1386:
1375:
1373:
1372:feature image
1368:
1366:
1361:
1357:
1355:
1350:
1348:
1344:
1340:
1335:
1331:
1326:
1324:
1323:
1322:repeatability
1316:
1314:
1303:
1300:
1296:
1292:
1288:
1283:
1281:
1277:
1272:
1268:
1264:
1258:
1246:
1241:
1239:
1234:
1232:
1227:
1226:
1224:
1223:
1216:
1213:
1209:
1206:
1205:
1204:
1201:
1199:
1196:
1195:
1189:
1188:
1181:
1178:
1176:
1173:
1171:
1168:
1166:
1163:
1161:
1158:
1156:
1153:
1151:
1148:
1147:
1141:
1140:
1133:
1130:
1128:
1125:
1123:
1120:
1118:
1115:
1113:
1110:
1108:
1105:
1103:
1100:
1098:
1095:
1094:
1088:
1087:
1080:
1077:
1075:
1072:
1070:
1067:
1065:
1062:
1061:
1055:
1054:
1047:
1044:
1042:
1039:
1037:
1036:Crowdsourcing
1034:
1032:
1029:
1028:
1022:
1021:
1012:
1009:
1008:
1007:
1004:
1002:
999:
997:
994:
992:
989:
988:
985:
980:
979:
971:
968:
966:
965:Memtransistor
963:
961:
958:
956:
953:
949:
946:
945:
944:
941:
939:
936:
932:
929:
927:
924:
922:
919:
917:
914:
913:
912:
909:
907:
904:
902:
899:
897:
894:
890:
887:
886:
885:
882:
878:
875:
873:
870:
868:
865:
863:
860:
859:
858:
855:
853:
850:
848:
847:Deep learning
845:
843:
840:
839:
836:
831:
830:
823:
820:
818:
815:
813:
811:
807:
805:
802:
801:
798:
793:
792:
783:
782:Hidden Markov
780:
778:
775:
773:
770:
769:
768:
765:
764:
761:
756:
755:
748:
745:
743:
740:
738:
735:
733:
730:
728:
725:
723:
720:
718:
715:
713:
710:
708:
705:
704:
701:
696:
695:
688:
685:
683:
680:
678:
674:
672:
669:
667:
664:
662:
660:
656:
654:
651:
649:
646:
644:
641:
640:
637:
632:
631:
624:
621:
619:
616:
614:
611:
609:
606:
604:
601:
599:
596:
594:
591:
589:
587:
583:
579:
578:Random forest
576:
574:
571:
569:
566:
565:
564:
561:
559:
556:
554:
551:
550:
543:
542:
537:
536:
528:
522:
521:
514:
511:
509:
506:
504:
501:
499:
496:
494:
491:
489:
486:
484:
481:
479:
476:
474:
471:
469:
466:
464:
463:Data cleaning
461:
459:
456:
454:
451:
449:
446:
444:
441:
439:
436:
434:
431:
429:
426:
425:
419:
418:
411:
408:
406:
403:
401:
398:
396:
393:
391:
388:
386:
383:
381:
378:
376:
375:Meta-learning
373:
371:
368:
366:
363:
361:
358:
356:
353:
351:
348:
347:
341:
340:
337:
332:
329:
328:
324:
323:
313:
308:
306:
301:
299:
294:
293:
291:
290:
285:
282:
280:
277:
275:
272:
271:
270:
269:
266:
263:
262:
257:
254:
252:
249:
247:
244:
242:
239:
238:
237:
236:
232:
231:
226:
223:
221:
220:Harris affine
218:
216:
213:
212:
211:
210:
206:
205:
200:
197:
195:
192:
191:
190:
189:
185:
184:
179:
176:
174:
171:
170:
169:
168:
164:
163:
160:
157:
156:
151:
148:
146:
143:
141:
138:
136:
133:
131:
128:
127:
126:
125:
122:
119:
118:
113:
110:
108:
105:
103:
100:
98:
95:
93:
90:
88:
85:
84:
83:
82:
79:
76:
75:
70:
69:Roberts cross
67:
65:
62:
60:
57:
55:
52:
50:
47:
45:
42:
40:
37:
36:
35:
34:
31:
28:
27:
24:
21:
20:
3161:
3054:Eye tracking
2910:Applications
2876:Technologies
2862:Segmentation
2725:
2700:. Retrieved
2696:
2687:
2662:
2658:
2645:
2620:
2616:
2603:
2590:
2577:
2568:
2555:
2544:
2533:
2498:
2494:
2484:
2471:
2444:
2438:
2413:
2409:
2382:
2372:
2350:(1): 45–78.
2347:
2343:
2333:
2322:. Retrieved
2315:the original
2310:
2297:
2264:
2260:
2251:
2218:
2212:
2192:
2185:
2134:
2129:
2125:
2123:
2108:
2100:
2095:
2088:
2072:
2068:
2058:
2057:, or simply
2050:
2047:
2028:
1966:Thresholding
1953:Optical flow
1943:Image motion
1883:
1879:
1556:
1550:
1549:
1524:
1522:
1505:
1501:
1480:
1472:
1463:
1446:
1393:
1388:
1384:
1381:
1371:
1369:
1364:
1362:
1358:
1351:
1349:operations.
1341:kernel in a
1327:
1320:
1317:
1309:
1286:
1284:
1279:
1270:
1260:
1122:PAC learning
809:
658:
653:Hierarchical
585:
539:
533:
49:Differential
22:
2962:Visual hull
2857:Researchers
2385:. Springer.
2096:averageable
2039:orientation
1960:Shape based
1893:detection.
1529:medial axis
1419:sea urchins
1006:Multi-agent
943:Transformer
842:Autoencoder
598:Naive Bayes
336:data mining
265:Scale space
2832:Morphology
2790:Categories
2702:2019-07-06
2655:(abstract)
2613:(abstract)
2406:(abstract)
2324:2021-02-11
2228:1492671207
2177:References
2079:confidence
2059:descriptor
1869:Extraction
1539:Detection
1306:Definition
1255:See also:
991:Q-learning
889:Restricted
687:Mean shift
636:Clustering
613:Perceptron
541:regression
443:Clustering
438:Regression
2637:207658261
2525:221242327
2503:CiteSeerX
2501:(2): 91.
2449:CiteSeerX
2257:Canny, J.
2237:cite book
2091:averaging
2075:certainty
1929:Curvature
1897:Low-level
1488:curvature
1150:ECML PKDD
1132:VC theory
1079:ROC curve
1011:Self-play
931:DeepDream
772:Bayes net
563:Ensembles
344:Paradigms
3177:Category
2867:Software
2827:Learning
2817:Geometry
2797:Datasets
2679:11998035
2364:15033310
2289:13284142
2281:21869365
2140:See also
2128:such as
2114:Matching
1881:mention
1561:(SIFT).
1467:gradient
1415:starfish
1399:method.
1339:Gaussian
573:Boosting
422:Problems
284:Pyramids
64:Robinson
1566:COSFIRE
1490:in the
1291:feature
1287:feature
1271:feature
1155:NeurIPS
972:(ECRAM)
926:AlexNet
568:Bagging
59:Prewitt
44:Deriche
2740:
2677:
2635:
2523:
2505:
2451:
2430:723210
2428:
2362:
2287:
2279:
2225:
2200:
1585:Corner
1525:ridges
1519:Ridges
948:Vision
804:RANSAC
682:OPTICS
677:DBSCAN
661:-means
468:AutoML
2675:S2CID
2633:S2CID
2565:(PDF)
2521:S2CID
2426:S2CID
2360:S2CID
2318:(PDF)
2307:(PDF)
2285:S2CID
1985:Lines
1886:-jets
1659:SUSAN
1621:Sobel
1602:Canny
1595:Ridge
1460:Edges
1455:Types
1334:pixel
1313:image
1170:IJCAI
996:SARSA
955:Mamba
921:LeNet
916:U-Net
742:t-SNE
666:Fuzzy
643:BIRCH
107:SUSAN
54:Sobel
39:Canny
2738:ISBN
2277:PMID
2243:link
2223:ISBN
2198:ISBN
[Table: classification of common feature detectors (Canny, Sobel, SUSAN, Shi and Tomasi, FAST, MSER, and others) by the type of feature each detects: edge, corner, blob, or ridge.]
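To make the classification concrete, here is a minimal pure-Python sketch of the simplest detector named in this article, the Sobel operator. It is illustrative only (no smoothing, no thresholding, borders left at zero), not an optimized or library implementation:

```python
# Minimal Sobel edge detector: convolve a 2D grayscale image (list of
# lists) with the two 3x3 Sobel kernels and return gradient magnitude.

def sobel_magnitude(img):
    """Return the gradient-magnitude map of a 2D grayscale image."""
    kx = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]   # horizontal derivative
    ky = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]   # vertical derivative
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]         # borders stay 0.0
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = sum(kx[j][i] * img[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            gy = sum(ky[j][i] * img[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            out[y][x] = (gx * gx + gy * gy) ** 0.5
    return out

# A vertical step edge: the response peaks along the discontinuity.
step = [[0, 0, 1, 1]] * 4
mag = sobel_magnitude(step)
```

On this step image the interior pixels adjacent to the intensity jump respond strongly while flat regions respond not at all, which is exactly the "edge" column of the detector classification.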