Feature extraction and image processing for computer vision / (Record no. 43802)
000 -LEADER | |
---|---|
fixed length control field | 13269cam a2200385#a 4500 |
003 - CONTROL NUMBER IDENTIFIER | |
control field | CUTN |
005 - DATE AND TIME OF LATEST TRANSACTION | |
control field | 20241028161248.0 |
008 - FIXED-LENGTH DATA ELEMENTS--GENERAL INFORMATION | |
fixed length control field | 191125s2019 xx o 000 0 eng d |
020 ## - INTERNATIONAL STANDARD BOOK NUMBER | |
International Standard Book Number | 9780128149775 |
020 ## - INTERNATIONAL STANDARD BOOK NUMBER | |
International Standard Book Number | 0128149779 |
020 ## - INTERNATIONAL STANDARD BOOK NUMBER | |
Cancelled/invalid ISBN | 9780128149768 |
020 ## - INTERNATIONAL STANDARD BOOK NUMBER | |
Cancelled/invalid ISBN | 0128149760 |
041 ## - LANGUAGE CODE | |
Language | English |
082 04 - DEWEY DECIMAL CLASSIFICATION NUMBER | |
Classification number | 006.37 |
Edition number | 23 |
Item number | NIX |
100 1# - MAIN ENTRY--PERSONAL NAME | |
Personal name | Nixon, Mark S. |
245 10 - TITLE STATEMENT | |
Title | Feature extraction and image processing for computer vision / |
Statement of responsibility, etc | Mark S. Nixon, Alberto S. Aguado. |
250 ## - EDITION STATEMENT | |
Edition statement | 4th edition. |
260 ## - PUBLICATION, DISTRIBUTION, ETC. (IMPRINT) | |
Place of publication, distribution, etc | [S.l.] : |
Name of publisher, distributor, etc | Academic Press, |
Date of publication, distribution, etc | 2020. |
300 ## - PHYSICAL DESCRIPTION | |
Extent | XIV, 626 pages : |
Other physical details | illustrations ; |
Dimensions | 24 cm. |
505 ## - FORMATTED CONTENTS NOTE | |
Title | Preface --
1. Introduction -- 1.1 Overview -- 1.2 Human and computer vision -- 1.3 The human vision system -- 1.3.1 The eye -- 1.3.2 The neural system -- 1.3.3 Processing -- 1.4 Computer vision systems -- 1.4.1 Cameras -- 1.4.2 Computer interfaces -- 1.5 Processing images -- 1.5.1 Processing -- 1.5.2 Hello Python, hello images! -- 1.5.3 Mathematical tools -- 1.5.4 Hello Matlab -- 1.6 Associated literature -- 1.6.1 Journals, magazines and conferences -- 1.6.2 Textbooks -- 1.6.3 The web -- 1.7 Conclusions -- References --
2. Images, sampling and frequency domain processing -- 2.1 Overview -- 2.2 Image formation -- 2.3 The Fourier Transform -- 2.4 The sampling criterion -- 2.5 The discrete Fourier Transform -- 2.5.1 One-dimensional transform -- 2.5.2 Two-dimensional transform -- 2.6 Properties of the Fourier Transform -- 2.6.1 Shift invariance -- 2.6.2 Rotation -- 2.6.3 Frequency scaling -- 2.6.4 Superposition (linearity) -- 2.6.5 The importance of phase -- 2.7 Transforms other than Fourier -- 2.7.1 Discrete cosine transform -- 2.7.2 Discrete Hartley Transform -- 2.7.3 Introductory wavelets -- 2.7.3.1 Gabor Wavelet -- 2.7.3.2 Haar Wavelet -- 2.7.4 Other transforms -- 2.8 Applications using frequency domain properties -- 2.9 Further reading -- References --
3. Image processing -- 3.1 Overview -- 3.2 Histograms -- 3.3 Point operators -- 3.3.1 Basic point operations -- 3.3.2 Histogram normalisation -- 3.3.3 Histogram equalisation -- 3.3.4 Thresholding -- 3.4 Group operations -- 3.4.1 Template convolution -- 3.4.2 Averaging operator -- 3.4.3 On different template size -- 3.4.4 Template convolution via the Fourier transform -- 3.4.5 Gaussian averaging operator -- 3.4.6 More on averaging -- 3.5 Other image processing operators -- 3.5.1 Median filter -- 3.5.2 Mode filter -- 3.5.3 Nonlocal means -- 3.5.4 Bilateral filtering -- 3.5.5 Anisotropic diffusion -- 3.5.6 Comparison of smoothing operators -- 3.5.7 Force field transform -- 3.5.8 Image ray transform -- 3.6 Mathematical morphology -- 3.6.1 Morphological operators -- 3.6.2 Grey level morphology -- 3.6.3 Grey level erosion and dilation -- 3.6.4 Minkowski operators -- 3.7 Further reading -- References --
4. Low-level feature extraction (including edge detection) -- 4.1 Overview -- 4.2 Edge detection -- 4.2.1 First-order edge detection operators -- 4.2.1.1 Basic operators -- 4.2.1.2 Analysis of the basic operators -- 4.2.1.3 Prewitt edge detection operator -- 4.2.1.4 Sobel edge detection operator -- 4.2.1.5 The Canny edge detector -- 4.2.2 Second-order edge detection operators -- 4.2.2.1 Motivation -- 4.2.2.2 Basic operators: The Laplacian -- 4.2.2.3 The Marr–Hildreth operator -- 4.2.3 Other edge detection operators -- 4.2.4 Comparison of edge detection operators -- 4.2.5 Further reading on edge detection -- 4.3 Phase congruency -- 4.4 Localised feature extraction -- 4.4.1 Detecting image curvature (corner extraction) -- 4.4.1.1 Definition of curvature -- 4.4.1.2 Computing differences in edge direction -- 4.4.1.3 Measuring curvature by changes in intensity (differentiation) -- 4.4.1.4 Moravec and Harris detectors -- 4.4.1.5 Further reading on curvature -- 4.4.2 Feature point detection; region/patch analysis -- 4.4.2.1 Scale invariant feature transform -- 4.4.2.2 Speeded up robust features -- 4.4.2.3 FAST, ORB, FREAK, LOCKY and other keypoint detectors -- 4.4.2.4 Other techniques and performance issues -- 4.4.3 Saliency -- 4.4.3.1 Basic saliency -- 4.4.3.2 Context aware saliency -- 4.4.3.3 Other saliency operators -- 4.5 Describing image motion -- 4.5.1 Area-based approach -- 4.5.2 Differential approach -- 4.5.3 Recent developments: deep flow, epic flow and extensions -- 4.5.4 Analysis of optical flow -- 4.6 Further reading -- References --
5. High-level feature extraction: fixed shape matching -- 5.1 Overview -- 5.2 Thresholding and subtraction -- 5.3 Template matching -- 5.3.1 Definition -- 5.3.2 Fourier transform implementation -- 5.3.3 Discussion of template matching -- 5.4 Feature extraction by low-level features -- 5.4.1 Appearance-based approaches -- 5.4.1.1 Object detection by templates -- 5.4.1.2 Object detection by combinations of parts -- 5.4.2 Distribution-based descriptors -- 5.4.2.1 Description by interest points (SIFT, SURF, BRIEF) -- 5.4.2.2 Characterising object appearance and shape -- 5.5 Hough transform -- 5.5.1 Overview -- 5.5.2 Lines -- 5.5.3 HT for circles -- 5.5.4 HT for ellipses -- 5.5.5 Parameter space decomposition -- 5.5.5.1 Parameter space reduction for lines -- 5.5.5.2 Parameter space reduction for circles -- 5.5.5.3 Parameter space reduction for ellipses -- 5.5.6 Generalised Hough transform -- 5.5.6.1 Formal definition of the GHT -- 5.5.6.2 Polar definition -- 5.5.6.3 The GHT technique -- 5.5.6.4 Invariant GHT -- 5.5.7 Other extensions to the HT -- 5.6 Further reading -- References --
6. High-level feature extraction: deformable shape analysis -- 6.1 Overview -- 6.2 Deformable shape analysis -- 6.2.1 Deformable templates -- 6.2.2 Parts-based shape analysis -- 6.3 Active contours (snakes) -- 6.3.1 Basics -- 6.3.2 The Greedy Algorithm for snakes -- 6.3.3 Complete (Kass) Snake implementation -- 6.3.4 Other Snake approaches -- 6.3.5 Further Snake developments -- 6.3.6 Geometric active contours (Level Set-Based Approaches) -- 6.4 Shape Skeletonisation -- 6.4.1 Distance transforms -- 6.4.2 Symmetry -- 6.5 Flexible shape models – active shape and active appearance -- 6.6 Further reading -- References --
7. Object description -- 7.1 Overview and invariance requirements -- 7.2 Boundary descriptions -- 7.2.1 Boundary and region -- 7.2.2 Chain codes -- 7.2.3 Fourier descriptors -- 7.2.3.1 Basis of Fourier descriptors -- 7.2.3.2 Fourier expansion -- 7.2.3.3 Shift invariance -- 7.2.3.4 Discrete computation -- 7.2.3.5 Cumulative angular function -- 7.2.3.6 Elliptic Fourier descriptors -- 7.2.3.7 Invariance -- 7.3 Region descriptors -- 7.3.1 Basic region descriptors -- 7.3.2 Moments -- 7.3.2.1 Definition and properties -- 7.3.2.2 Geometric moments -- 7.3.2.3 Geometric complex moments and centralised moments -- 7.3.2.4 Rotation and scale invariant moments -- 7.3.2.5 Zernike moments -- 7.3.2.6 Tchebichef moments -- 7.3.2.7 Krawtchouk moments -- 7.3.2.8 Other moments -- 7.4 Further reading -- References --
8. Region-based analysis -- 8.1 Overview -- 8.2 Region-based analysis -- 8.2.1 Watershed transform -- 8.2.2 Maximally stable extremal regions -- 8.2.3 Superpixels -- 8.2.3.1 Basic techniques and normalised cuts -- 8.2.3.2 Simple linear iterative clustering -- 8.3 Texture description and analysis -- 8.3.1 What is texture? -- 8.3.2 Performance requirements -- 8.3.3 Structural approaches -- 8.3.4 Statistical approaches -- 8.3.4.1 Co-occurrence matrix -- 8.3.4.2 Learning-based approaches -- 8.3.5 Combination approaches -- 8.3.6 Local binary patterns -- 8.3.7 Other approaches -- 8.3.8 Segmentation by texture -- 8.4 Further reading -- References --
9. Moving object detection and description -- 9.1 Overview -- 9.2 Moving object detection -- 9.2.1 Basic approaches -- 9.2.1.1 Detection by subtracting the background -- 9.2.1.2 Improving quality by morphology -- 9.2.2 Modelling and adapting to the (static) background -- 9.2.3 Background segmentation by thresholding -- 9.2.4 Problems and advances -- 9.3 Tracking moving features -- 9.3.1 Tracking moving objects -- 9.3.2 Tracking by local search -- 9.3.3 Problems in tracking -- 9.3.4 Approaches to tracking -- 9.3.5 MeanShift and Camshift -- 9.3.5.1 Kernel-based density estimation -- 9.3.5.2 MeanShift tracking -- 9.3.5.3 Camshift technique -- 9.3.6 Other approaches -- 9.4 Moving feature extraction and description -- 9.4.1 Moving (biological) shape analysis -- 9.4.2 Space–time interest points -- 9.4.3 Detecting moving shapes by shape matching in image sequences -- 9.4.4 Moving shape description -- 9.5 Further reading -- References --
10. Camera geometry fundamentals -- 10.1 Overview -- 10.2 Projective space -- 10.2.1 Homogeneous co-ordinates and projective geometry -- 10.2.2 Representation of a line, duality and ideal points -- 10.2.3 Transformations in the projective space -- 10.2.4 Computing a planar homography -- 10.3 The perspective camera -- 10.3.1 Perspective camera model -- 10.3.2 Parameters of the perspective camera model -- 10.3.3 Computing a projection from an image -- 10.4 Affine camera -- 10.4.1 Affine camera model -- 10.4.2 Affine camera model and the perspective projection -- 10.4.3 Parameters of the affine camera model -- 10.5 Weak perspective model -- 10.6 Discussion -- 10.7 Further reading -- References --
11. Colour images -- 11.1 Overview -- 11.2 Colour image theory -- 11.2.1 Colour images -- 11.2.2 Tristimulus theory -- 11.2.3 The colourimetric equation -- 11.2.4 Luminosity function -- 11.3 Perception-based colour models: CIE RGB and CIE XYZ -- 11.3.1 CIE RGB colour model: Wright–Guild data -- 11.3.2 CIE RGB colour matching functions -- 11.3.3 CIE RGB chromaticity diagram and chromaticity co-ordinates -- 11.3.4 CIE XYZ colour model -- 11.3.5 CIE XYZ colour matching functions -- 11.3.6 XYZ chromaticity diagram -- 11.3.7 Uniform colour spaces: CIE LUV and CIE LAB -- 11.4 Additive and subtractive colour models -- 11.4.1 RGB and CMY -- 11.4.2 Transformation between RGB models -- 11.4.3 Transformation between RGB and CMY models -- 11.5 Luminance and chrominance colour models -- 11.5.1 YUV, YIQ and YCbCr models -- 11.5.2 Luminance and gamma correction -- 11.5.3 Chrominance -- 11.5.4 Transformations between YUV, YIQ and RGB colour models -- 11.5.5 Colour model for component video: YPbPr -- 11.5.6 Colour model for digital video: YCbCr -- 11.6 Additive perceptual colour models -- 11.6.1 The HSV and HLS colour models -- 11.6.2 The hexagonal model: HSV -- 11.6.3 The triangular model: HLS -- 11.6.4 Transformation between HLS and RGB -- 11.7 More colour models -- References --
12. Distance, classification and learning -- 12.1 Overview -- 12.2 Basis of classification and learning -- 12.3 Distance and classification -- 12.3.1 Distance measures -- 12.3.1.1 Manhattan and Euclidean Ln norms -- 12.3.1.2 Mahalanobis, Bhattacharyya and Matusita -- 12.3.1.3 Histogram intersection, Chi2 (χ2) and the Earth Mover’s distance -- 12.3.2 The k-nearest neighbour for classification -- 12.4 Neural networks and Support Vector Machines -- 12.5 Deep learning -- 12.5.1 Basis of deep learning -- 12.5.2 Major deep learning architectures -- 12.5.3 Deep learning for feature extraction -- 12.5.4 Deep learning performance evaluation -- 12.6 Further reading -- References |
520 ## - SUMMARY, ETC. | |
Summary, etc | Feature Extraction and Image Processing for Computer Vision is an essential guide to the implementation of image processing and computer vision techniques, with tutorial introductions and sample code in MATLAB and Python. Algorithms are presented and fully explained to enable complete understanding of the methods and techniques demonstrated. As one reviewer noted, "The main strength of the proposed book is the link between theory and exemplar code of the algorithms." Essential background theory is carefully explained.
This text gives students and researchers in image processing and computer vision a complete introduction to classic and state-of-the-art methods in feature extraction together with practical guidance on their implementation. |
650 #0 - SUBJECT ADDED ENTRY--TOPICAL TERM | |
Topical term or geographic name as entry element | Computer vision. |
650 #0 - SUBJECT ADDED ENTRY--TOPICAL TERM | |
Topical term or geographic name as entry element | Computer vision |
650 #0 - SUBJECT ADDED ENTRY--TOPICAL TERM | |
Topical term or geographic name as entry element | Pattern recognition systems. |
650 #0 - SUBJECT ADDED ENTRY--TOPICAL TERM | |
Topical term or geographic name as entry element | Image processing |
700 1# - ADDED ENTRY--PERSONAL NAME | |
Personal name | Aguado, Alberto S. |
856 40 - ELECTRONIC LOCATION AND ACCESS | |
Uniform Resource Identifier | https://ezproxy.lib.gla.ac.uk/login?url=https://www.sciencedirect.com/science/book/9780128149768 |
942 ## - ADDED ENTRY ELEMENTS (KOHA) | |
Source of classification or shelving scheme | Dewey Decimal Classification |
Koha item type | General Books |
504 ## - BIBLIOGRAPHY, ETC. NOTE | |
Bibliography, etc | Includes bibliographical references and index. |
650 #0 - SUBJECT ADDED ENTRY--TOPICAL TERM | |
General subdivision | Mathematics. |
650 #0 - SUBJECT ADDED ENTRY--TOPICAL TERM | |
General subdivision | Digital techniques. |
776 08 - ADDITIONAL PHYSICAL FORM ENTRY | |
Qualifying information | Original |
International Standard Book Number | 0128149760 |
-- | 9780128149768 |
Record control number | (OCoLC)1085211122. |
856 40 - ELECTRONIC LOCATION AND ACCESS | |
Public note | Connect to e-book |
907 ## - LOCAL DATA ELEMENT G, LDG (RLIN) | |
a | .b37624738 |
Withdrawn status | |
Lost status | |
Source of classification or shelving scheme | Dewey Decimal Classification |
Damaged status | |
Not for loan | |
Collection code | Non-fiction |
Home library | CUTN Central Library |
Location | CUTN Central Library |
Shelving location | Generalia |
Date of Cataloging | 28/10/2024 |
Total Checkouts | |
Full call number | 006.37 NIX |
Barcode | 50389 |
Date last seen | 28/10/2024 |
Price effective from | 28/10/2024 |
Koha item type | General Books |