Machine vision






Early Automatix (now part of Omron) machine vision system Autovision II from 1983 being demonstrated at a trade show. The camera on the tripod points down at a light table to produce the backlit image shown on screen, which is then subjected to blob extraction.


Machine vision (MV) is the technology and methods used to provide imaging-based automatic inspection and analysis for such applications as automatic inspection, process control, and robot guidance, usually in industry. Machine vision is a term encompassing a large number of technologies, software and hardware products, integrated systems, actions, methods and expertise. As a systems engineering discipline, machine vision can be considered distinct from computer vision, a form of computer science: it attempts to integrate existing technologies in new ways and apply them to solve real-world problems. The term is the prevalent one for these functions in industrial automation environments, but it is also used for these functions in other environments such as security and vehicle guidance.


The overall machine vision process includes planning the details of the requirements and project, and then creating a solution. During run-time, the process starts with imaging, followed by automated analysis of the image and extraction of the required information.




Contents






  • 1 Definition


  • 2 Imaging based automatic inspection and sorting


    • 2.1 Methods and sequence of operation


    • 2.2 Equipment


    • 2.3 Imaging


    • 2.4 Image processing


    • 2.5 Outputs




  • 3 Imaging based robot guidance


  • 4 Market


  • 5 See also


  • 6 References


  • 7 External links





Definition


Definitions of the term "machine vision" vary, but all include the technology and methods used to extract information from an image on an automated basis, as opposed to image processing, where the output is another image. The information extracted can be a simple good-part/bad-part signal, or a more complex set of data such as the identity, position and orientation of each object in an image. The information can be used for such applications as automatic inspection and robot and process guidance in industry, as well as for security monitoring and vehicle guidance.[1][2][3] This field encompasses a large number of technologies, software and hardware products, integrated systems, actions, methods and expertise.[3][4] Machine vision is practically the only term used for these functions in industrial automation applications; the term is less universal for these functions in other environments such as security and vehicle guidance. Machine vision as a systems engineering discipline can be considered distinct from computer vision, a form of basic computer science; machine vision attempts to integrate existing technologies in new ways and apply them to solve real-world problems in a way that meets the requirements of industrial automation and similar application areas.[3]:5[5] The term is also used in a broader sense by trade shows and trade groups such as the Automated Imaging Association and the European Machine Vision Association. This broader definition also encompasses products and applications most often associated with image processing.[4] The primary uses for machine vision are automatic inspection and industrial robot/process guidance.[6][7]:6–10[8] See glossary of machine vision.



Imaging based automatic inspection and sorting


The primary uses for machine vision are imaging-based automatic inspection and sorting and robot guidance;[6][7]:6–10 in this section the former is abbreviated as "automatic inspection". The overall process includes planning the details of the requirements and project, and then creating a solution.[9][10] This section describes the technical process that occurs during the operation of the solution.



Methods and sequence of operation


The first step in the automatic inspection sequence of operation is acquisition of an image, typically using cameras, lenses, and lighting designed to provide the differentiation required by subsequent processing.[11][12] MV software packages, and programs developed in them, then employ various digital image processing techniques to extract the required information, and often make decisions (such as pass/fail) based on the extracted information.[13]
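The acquire/process/decide sequence described above can be illustrated with a minimal sketch. This is not the API of any particular MV package: the NumPy array stands in for a camera frame, and the threshold, expected area, and tolerance are assumed values chosen for the example.

```python
import numpy as np

def acquire_image():
    # Stand-in for a camera frame: a bright part on a dark background.
    frame = np.zeros((8, 8), dtype=np.uint8)
    frame[2:6, 2:6] = 200  # the "part" under inspection
    return frame

def extract_information(image, threshold=128):
    # Segment the part from the background and measure its area.
    mask = image > threshold
    return int(mask.sum())  # area in pixels

def decide(area, expected=16, tolerance=2):
    # Pass/fail decision against a target value with a tolerance.
    return "pass" if abs(area - expected) <= tolerance else "fail"

frame = acquire_image()
area = extract_information(frame)
print(decide(area))  # the 4x4 part covers 16 pixels, within tolerance: "pass"
```

A real system would replace `acquire_image` with a frame grabber or camera SDK call and would normally chain several processing stages before the decision.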



Equipment


The components of an automatic inspection system usually include lighting, a camera or other imager, a processor, software, and output devices.[7]:11–13



Imaging


The imaging device (e.g. camera) can either be separate from the main image processing unit or combined with it, in which case the combination is generally called a smart camera or smart sensor.[14][15] When separated, the connection may be made to specialized intermediate hardware, a custom processing appliance, or a frame grabber within a computer using either an analog or standardized digital interface (Camera Link, CoaXPress).[16][17][18][19] MV implementations also use digital cameras capable of direct connections (without a frame grabber) to a computer via FireWire, USB or Gigabit Ethernet interfaces.[19][20]


While conventional (2D visible light) imaging is most commonly used in MV, alternatives include multispectral imaging, hyperspectral imaging, imaging various infrared bands,[21] line scan imaging, 3D imaging of surfaces and X-ray imaging.[6] Key differentiations within MV 2D visible light imaging are monochromatic vs. color, frame rate, resolution, and whether or not the imaging process is simultaneous over the entire image, making it suitable for moving processes.[22]


Though the vast majority of machine vision applications are solved using two-dimensional imaging, machine vision applications utilizing 3D imaging are a growing niche within the industry.[23][24] The most commonly used method for 3D imaging is scanning-based triangulation, which utilizes motion of the product or imaging system during the imaging process. A laser line is projected onto the surface of an object and viewed by a camera from a different angle; the deviation of the line represents shape variations. The scanning motion is accomplished either by moving the workpiece or by moving the camera and laser imaging system, and lines from multiple scans are assembled into a depth map or point cloud.[25] Stereoscopic vision is used in special cases involving unique features present in both views of a pair of cameras.[25] Other 3D methods used for machine vision are time of flight and grid based.[25][23] One method is grid array based systems using a pseudorandom structured light system, as employed by the Microsoft Kinect system circa 2012.[26][27]
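The geometry behind laser-line triangulation can be sketched as follows. Assuming a camera looking straight down at a reference plane and a laser sheet tilted at a known angle from vertical, a surface raised by height h shifts the imaged line sideways by h·tan(angle); inverting that relation recovers height from the observed pixel shift. The pixel size and angle below are assumed calibration values for illustration only.

```python
import math

def height_from_shift(shift_px, px_size_mm, laser_angle_deg):
    # Camera looks straight down; the laser sheet is tilted by
    # laser_angle_deg from vertical. A surface raised by h shifts the
    # imaged line sideways by d = h * tan(angle), so h = d / tan(angle).
    d_mm = shift_px * px_size_mm
    return d_mm / math.tan(math.radians(laser_angle_deg))

# One scan line: observed lateral shifts (in pixels) along the laser line.
shifts = [0, 0, 10, 10, 10, 0]
profile = [height_from_shift(s, px_size_mm=0.1, laser_angle_deg=45)
           for s in shifts]
# With a 45° laser, 10 px * 0.1 mm/px corresponds to roughly 1.0 mm of height.
```

Repeating this for each scan line as the part moves past the camera yields the rows of a depth map.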



Image processing


After an image is acquired, it is processed.[18] Multiple stages of processing are generally used in a sequence that produces the desired result. A typical sequence might start with tools such as filters which modify the image, followed by extraction of objects, then extraction (e.g. measurements, reading of codes) of data from those objects, followed by communicating that data, or comparing it against target values to create and communicate "pass/fail" results. Machine vision image processing methods include:



  • Stitching/Registration: Combining of adjacent 2D or 3D images.[citation needed]

  • Filtering (e.g. morphological filtering)[28]

  • Thresholding: Thresholding starts with setting or determining a gray value that will be useful for the following steps. The value is then used to separate portions of the image, and sometimes to transform each portion of the image to black and white based on whether it is below or above that grayscale value.[29]

  • Pixel counting: counts the number of light or dark pixels[citation needed]


  • Segmentation: Partitioning a digital image into multiple segments to simplify and/or change the representation of an image into something that is more meaningful and easier to analyze.[30][31]


  • Edge detection: finding object edges [32]

  • Color Analysis: Identify parts, products and items using color, assess quality from color, and isolate features using color.[6]


  • Blob detection and extraction: inspecting an image for discrete blobs of connected pixels (e.g. a black hole in a grey object) as image landmarks.[33]


  • Neural net / deep learning / machine learning processing: weighted and self-training multi-variable decision making.[34] Circa 2018, the use of deep learning and machine learning was expanding rapidly, significantly extending machine vision capabilities.


  • Pattern recognition including template matching. Finding, matching, and/or counting specific patterns. This may include location of an object that may be rotated, partially hidden by another object, or varying in size.[35]


  • Barcode, Data Matrix and "2D barcode" reading [36]


  • Optical character recognition: automated reading of text such as serial numbers [37]


  • Gauging/Metrology: measurement of object dimensions (e.g. in pixels, inches or millimeters) [38]

  • Comparison against target values to determine a "pass or fail" or "go/no go" result. For example, with code or bar code verification, the read value is compared to the stored target value. For gauging, a measurement is compared against the proper value and tolerances. For verification of alphanumeric codes, the OCR'd value is compared to the proper or target value. For inspection for blemishes, the measured size of the blemishes may be compared to the maximums allowed by quality standards.[36]
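Two of the methods above, thresholding and blob extraction, can be sketched together in a short example. This is an illustrative implementation, not taken from any MV library: thresholding produces a binary mask, and a simple flood fill then labels each 4-connected blob of foreground pixels.

```python
import numpy as np

def threshold(image, value):
    # Binarize: True where the pixel is above the chosen gray value.
    return image > value

def extract_blobs(mask):
    # Label 4-connected blobs of foreground pixels with a flood fill.
    labels = np.zeros(mask.shape, dtype=int)
    next_label = 0
    rows, cols = mask.shape
    for r in range(rows):
        for c in range(cols):
            if mask[r, c] and labels[r, c] == 0:
                next_label += 1
                stack = [(r, c)]
                labels[r, c] = next_label
                while stack:
                    y, x = stack.pop()
                    for ny, nx in ((y - 1, x), (y + 1, x),
                                   (y, x - 1), (y, x + 1)):
                        if (0 <= ny < rows and 0 <= nx < cols
                                and mask[ny, nx] and labels[ny, nx] == 0):
                            labels[ny, nx] = next_label
                            stack.append((ny, nx))
    return labels, next_label

# A toy image with two separate bright regions on a dark background.
image = np.array([[ 10,  10, 200, 200],
                  [ 10,  10, 200, 200],
                  [180,  10,  10,  10],
                  [180, 180,  10,  10]], dtype=np.uint8)
mask = threshold(image, 128)
labels, count = extract_blobs(mask)
print(count)  # the two bright regions are found as two blobs
```

Per-blob data such as area, centroid, or bounding box can then be read off the label array, which is the basis for the blob-based inspection and counting methods listed above.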



Outputs


A common output from automatic inspection systems is pass/fail decisions.[13] These decisions may in turn trigger mechanisms that reject failed items or sound an alarm. Other common outputs include object position and orientation information for robot guidance systems.[6] Additionally, output types include numerical measurement data, data read from codes and characters, counts and classification of objects, displays of the process or results, stored images, alarms from automated space monitoring MV systems, and process control signals.[9][12] This also includes user interfaces, interfaces for the integration of multi-component systems and automated data interchange.[39]



Imaging based robot guidance


Machine vision commonly provides location and orientation information to a robot to allow the robot to properly grasp the product. This capability is also used to guide motion systems simpler than robots, such as a 1- or 2-axis motion controller.[6] The overall process includes planning the details of the requirements and project, and then creating a solution. This section describes the technical process that occurs during the operation of the solution. Many of the process steps are the same as with automatic inspection, except with a focus on providing position and orientation information as the end result.[6]
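Translating a part position found in the image into the robot's own coordinate frame typically requires a calibrated transform. A minimal sketch, assuming a fixed downward-looking camera and a rigid 2D transform (scale, rotation, translation) obtained from a prior calibration; all numeric values below are hypothetical:

```python
import math

def pixel_to_robot(px, py, scale_mm_per_px, theta_deg, tx_mm, ty_mm):
    # Rigid 2D transform from camera pixel coordinates to the robot's
    # base frame: rotate, scale to millimeters, then translate.
    t = math.radians(theta_deg)
    x = scale_mm_per_px * (px * math.cos(t) - py * math.sin(t)) + tx_mm
    y = scale_mm_per_px * (px * math.sin(t) + py * math.cos(t)) + ty_mm
    return x, y

# Part centroid found at pixel (320, 240); camera frame rotated 90 degrees
# relative to the robot and offset by (100 mm, 50 mm) from its origin.
x, y = pixel_to_robot(320, 240, scale_mm_per_px=0.5, theta_deg=90,
                      tx_mm=100.0, ty_mm=50.0)
# (x, y) is approximately (-20.0, 210.0) mm in robot coordinates.
```

In practice the calibration parameters come from imaging a target at known robot positions, and the part's orientation angle is passed along with (x, y) so the gripper can align with the part.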



Market


The global machine vision market is expected to reach US$15.46 billion by the end of 2022, with an 8.18% CAGR over the forecast period 2017–2022. The market is growing in all regions; expanding application areas and advances in technology and integration are driving growth on a global scale. Asia Pacific dominates the global market with more than 30% of market share, followed by Europe as the second biggest market due to heavy demand from the automotive and healthcare industries; North America is the third biggest market.[40]



See also



  • Machine vision glossary

  • Feature detection (computer vision)

  • Foreground detection

  • Vision processing unit

  • Optical sorting



References





  1. ^ Steger, Carsten; Ulrich, Markus; Wiedemann, Christian (2018). Machine Vision Algorithms and Applications (2nd ed.). Weinheim: Wiley-VCH. p. 1. ISBN 978-3-527-41365-2. Retrieved 2018-01-30.


  2. ^ Beyerer, Jürgen; Puente León, Fernando & Frese, Christian (2016). Machine Vision - Automated Visual Inspection: Theory, Practice and Applications. Berlin: Springer. doi:10.1007/978-3-662-47794-6. ISBN 978-3-662-47793-9. Retrieved 2016-10-11.


  3. ^ abc Graves, Mark & Bruce G. Batchelor (2003). Machine Vision for the Inspection of Natural Products. Springer. p. 5. ISBN 978-1-85233-525-0. Retrieved 2010-11-02.


  4. ^ ab Holton, W. Conard (October 2010). "By Any Other Name". Vision Systems Design. 15 (10). ISSN 1089-3709. Retrieved 2013-03-05.


  5. ^ Owen-Hill, Alex (July 21, 2016). "Robot Vision vs Computer Vision: What's the Difference?". Robotics Tomorrow.


  6. ^ abcdefg Turek, Fred D. (June 2011). "Machine Vision Fundamentals, How to Make Robots See". NASA Tech Briefs. 35 (6): 60–62. Retrieved 2011-11-29.


  7. ^ abc Cognex (2016). "Introduction to Machine Vision" (PDF). Assembly Magazine. Retrieved 9 February 2017.


  8. ^ Lückenhaus, Maximilian (May 1, 2016). "Machine Vision in IIoT". Quality Magazine.


  9. ^ ab West, Perry. A Roadmap for Building a Machine Vision System. pp. 1–35.


  10. ^ Dechow, David (January 2009). "Integration: Making it Work". Vision & Sensors: 16–20. Retrieved 2012-05-12.


  11. ^ Hornberg, Alexander (2006). Handbook of Machine Vision. Wiley-VCH. p. 427. ISBN 978-3-527-40584-8. Retrieved 2010-11-05.


  12. ^ ab Demant C.; Streicher-Abel B. & Waszkewitz P. (1999). Industrial Image Processing: Visual Quality Control in Manufacturing. Springer-Verlag. ISBN 3-540-66410-6.
    [page needed]



  13. ^ ab Hornberg, Alexander (2006). Handbook of Machine Vision. Wiley-VCH. p. 429. ISBN 978-3-527-40584-8. Retrieved 2010-11-05.


  14. ^ Belbachir, Ahmed Nabil, ed. (2009). Smart Cameras. Springer. ISBN 978-1-4419-0952-7.
    [page needed]



  15. ^ Dechow, David (February 2013). "Explore the Fundamentals of Machine Vision: Part 1". Vision Systems Design. 18 (2): 14–15. Retrieved 2013-03-05.


  16. ^ Wilson, Andrew (May 31, 2011). "CoaXPress standard gets camera, frame grabber support". Vision Systems Design. Retrieved 2012-11-28.


  17. ^ Wilson, Dave (November 12, 2012). "Cameras certified as compliant with CoaXPress standard". Vision Systems Design. Retrieved 2013-03-05.


  18. ^ ab Davies, E.R. (1996). Machine Vision - Theory Algorithms Practicalities (2nd ed.). Harcourt & Company. ISBN 978-0-12-206092-2.
    [page needed].



  19. ^ ab Dinev, Petko (March 2008). "Digital or Analog? Selecting the Right Camera for an Application Depends on What the Machine Vision System is Trying to Achieve". Vision & Sensors: 10–14. Retrieved 2012-05-12.


  20. ^ Wilson, Andrew (December 2011). "Product Focus - Looking to the Future of Vision". Vision Systems Design. 16 (12). Retrieved 2013-03-05.


  21. ^ Wilson, Andrew (April 2011). "The Infrared Choice". Vision Systems Design. 16 (4): 20–23. Retrieved 2013-03-05.


  22. ^ West, Perry. High Speed, Real-Time Machine Vision. CyberOptics. pp. 1–38.


  23. ^ ab Murray, Charles J (February 2012). "3D Machine Vision Comes into Focus". Design News. Archived from the original on 2012-06-05. Retrieved 2012-05-12.


  24. ^ Davies, E.R. (2012). Computer and Machine Vision: Theory, Algorithms, Practicalities (4th ed.). Academic Press. pp. 410–411. ISBN 9780123869081. Retrieved 2012-05-13.


  25. ^ abc Turek, Fred & Jackson, Kim (March 2014). "3-D Imaging: A Practical Overview for Machine Vision". Quality Magazine. 53 (3): 6–8.


  26. ^ Zhang, Yueyi; Xiong, Zhiwei; Wu, Feng (2012). "Hybrid Structured Light for Scalable Depth Sensing". University of Science and Technology of China / Microsoft Research Asia. http://research.microsoft.com/en-us/people/fengwu/depth-icip-12.pdf


  27. ^ Morano, R.; Ozturk, C.; Conn, R.; Dubin, S.; Zietz, S.; Nissano, J. (1998). "Structured light using pseudorandom codes". IEEE Transactions on Pattern Analysis and Machine Intelligence. 20 (3): 322–327.


  28. ^ Demant C.; Streicher-Abel B. & Waszkewitz P. (1999). Industrial Image Processing: Visual Quality Control in Manufacturing. Springer-Verlag. p. 39. ISBN 3-540-66410-6.


  29. ^ Demant C.; Streicher-Abel B. & Waszkewitz P. (1999). Industrial Image Processing: Visual Quality Control in Manufacturing. Springer-Verlag. p. 96. ISBN 3-540-66410-6.


  30. ^ Shapiro, Linda G. & Stockman, George C. (2001). Computer Vision. New Jersey: Prentice-Hall. pp. 279–325. ISBN 0-13-030796-3.



  31. ^ Barghout, Lauren (2014). "Visual Taxometric Approach to Image Segmentation Using Fuzzy-Spatial Taxon Cut Yields Contextually Relevant Regions". Information Processing and Management of Uncertainty in Knowledge-Based Systems. CCIS. Springer-Verlag.



  32. ^ Demant C.; Streicher-Abel B. & Waszkewitz P. (1999). Industrial Image Processing: Visual Quality Control in Manufacturing. Springer-Verlag. p. 108. ISBN 3-540-66410-6.


  33. ^ Demant C.; Streicher-Abel B. & Waszkewitz P. (1999). Industrial Image Processing: Visual Quality Control in Manufacturing. Springer-Verlag. p. 95. ISBN 3-540-66410-6.


  34. ^ Turek, Fred D. (March 2007). "Introduction to Neural Net Machine Vision". Vision Systems Design. 12 (3). Retrieved 2013-03-05.


  35. ^ Demant C.; Streicher-Abel B. & Waszkewitz P. (1999). Industrial Image Processing: Visual Quality Control in Manufacturing. Springer-Verlag. p. 111. ISBN 3-540-66410-6.


  36. ^ ab Demant C.; Streicher-Abel B. & Waszkewitz P. (1999). Industrial Image Processing: Visual Quality Control in Manufacturing. Springer-Verlag. p. 125. ISBN 3-540-66410-6.


  37. ^ Demant C.; Streicher-Abel B. & Waszkewitz P. (1999). Industrial Image Processing: Visual Quality Control in Manufacturing. Springer-Verlag. p. 132. ISBN 3-540-66410-6.


  38. ^ Demant C.; Streicher-Abel B. & Waszkewitz P. (1999). Industrial Image Processing: Visual Quality Control in Manufacturing. Springer-Verlag. p. 191. ISBN 3-540-66410-6.


  39. ^ Hornberg, Alexander (2006). Handbook of Machine Vision. Wiley-VCH. p. 709. ISBN 978-3-527-40584-8. Retrieved 2010-11-05.


  40. ^ "Market Research Future". CIO: 46.




External links










