An article on Industrial Cameras (2)
3. Black-and-white cameras and color cameras
Whether the image sensor is a CCD or a CMOS, the operating principle is the same: photons are converted into electrons, with the number of electrons proportional to the number of incident photons. Counting the electrons at each pixel yields a grayscale image that reflects the intensity of the light. In other words, CCD and CMOS image sensors are inherently "color blind": they cannot distinguish colors by themselves and can only form black-and-white images.
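As a rough sketch of this counting process, the per-pixel electron count can be mapped linearly to a digital gray level. The full-well capacity and the perfectly linear response below are illustrative assumptions, not the specifications of any particular sensor:

```python
import numpy as np

def electrons_to_gray(electrons, full_well=10000, bits=8):
    """Map per-pixel electron counts to digital gray levels.

    Assumes a linear response where the (hypothetical) full-well
    capacity corresponds to the maximum code value; counts above
    full well saturate at white.
    """
    max_code = 2**bits - 1
    codes = np.round(electrons / full_well * max_code)
    return np.clip(codes, 0, max_code).astype(np.uint8)

# Dark pixel, half-full pixel, full pixel, saturated pixel:
print(electrons_to_gray(np.array([0, 5000, 10000, 20000])))
```

The saturation behavior at the end is why overexposed regions of a grayscale image clip to uniform white.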
To obtain a color image, color information is usually captured with either a prism or a filter. Tri-prism (3-CCD) approach: a prism splits the incident light into three beams; each beam passes through a filter that extracts one of the three primary colors, and the three beams are then sensed by three separate CCDs, as shown in Fig. 3 (middle). The three resulting images are combined into a single high-resolution, color-accurate image. Because this method requires three photosensitive chips, it is relatively expensive.
When price is a factor, Bryce Bayer of Kodak proposed an inexpensive compromise: solve color recognition with a single image sensor. His approach places a filter array in front of the sensor, covered with filter points in one-to-one correspondence with the pixels beneath. The arrangement of points on the Bayer filter is regular: around each green point there are 2 red points, 2 blue points, and 4 green points. Because the human eye is most sensitive to green, there are twice as many green points as red or blue ones. Since each filter point passes only one of red, green, and blue, while every output pixel needs all three color components, the two missing components at each pixel are reconstructed by interpolation in a later processing stage (demosaicing).
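The missing-component interpolation can be sketched as follows. This is a minimal bilinear demosaic for an assumed RGGB layout; real cameras use more sophisticated, edge-aware algorithms:

```python
import numpy as np

def demosaic_bilinear(raw):
    """Minimal bilinear demosaic of an RGGB Bayer mosaic (H, W) -> (H, W, 3).

    Each pixel keeps its measured component; the two missing components
    are averaged from same-color pixels in the 3x3 neighborhood.
    (np.roll wraps around at the borders -- acceptable for a sketch.)
    """
    h, w = raw.shape
    raw = raw.astype(np.float64)
    y, x = np.mgrid[0:h, 0:w]
    r_mask = (y % 2 == 0) & (x % 2 == 0)   # red at even row, even col
    b_mask = (y % 2 == 1) & (x % 2 == 1)   # blue at odd row, odd col
    g_mask = ~(r_mask | b_mask)            # green at the remaining positions

    rgb = np.zeros((h, w, 3))
    for c, mask in ((0, r_mask), (1, g_mask), (2, b_mask)):
        vals = np.where(mask, raw, 0.0)
        cnt = mask.astype(np.float64)
        acc_v = np.zeros_like(vals)
        acc_c = np.zeros_like(cnt)
        # Sum values and counts of same-color pixels over the 3x3 window.
        for dy in (-1, 0, 1):
            for dx in (-1, 0, 1):
                acc_v += np.roll(np.roll(vals, dy, axis=0), dx, axis=1)
                acc_c += np.roll(np.roll(cnt, dy, axis=0), dx, axis=1)
        rgb[..., c] = acc_v / np.maximum(acc_c, 1.0)
        rgb[..., c][mask] = raw[mask]      # keep measured values exact
    return rgb
```

Note that two of every pixel's three output values are estimates borrowed from neighbors, which is exactly the neighborhood averaging that costs a Bayer camera fine detail relative to a monochrome sensor.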
The Bayer-filter approach has the significant advantage of low cost and is used in most color cameras today. However, whereas a black-and-white camera captures photons of all wavelengths, a Bayer color camera accepts photons only in the three RGB bands and performs a demosaicing neighborhood-averaging operation, so both its luminous flux and its rendering of fine detail are weaker than a grayscale camera's. Comparing a color camera with a black-and-white camera in the same environment makes this difference clear. Therefore, if color is not an inspection requirement, industrial applications generally choose a black-and-white camera, which offers higher precision at the same resolution.
4. Line array cameras and area array cameras
According to the arrangement of their pixels, industrial cameras can be divided into line array (line-scan) cameras and area array (area-scan) cameras. Each type has its own advantages and disadvantages and suits different application environments.
Line array cameras, as the name suggests, see the field of view as a "line". Their sensors usually have only a single row of photosensitive elements; the camera captures continuously in a line-by-line scan, and the lines are then assembled into one large two-dimensional image. In some applications, such as high-frequency scanning at high resolution, line array cameras hold specific advantages over area array cameras. For example, when inspecting round or cylindrical parts, several area array cameras might be needed to cover the entire surface; but if the part is placed in front of a line camera and rotated, its surface is unrolled line by line and a single camera can capture an image of the whole surface. Line array cameras are also easier to fit into tight installations, such as when the camera must look between the rollers of a conveyor belt to view the bottom of a part. In addition, line array cameras typically provide higher resolution than conventional area array cameras. Since a line array camera needs relative motion to build an image, it is well suited to inspecting products in continuous motion.
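The line-by-line assembly described above can be sketched in a few lines. Here `surface_line` is a hypothetical stand-in for one exposure of the single-row sensor while the part rotates; the pattern it returns is made up for illustration:

```python
import numpy as np

def surface_line(angle_deg):
    """Hypothetical stand-in for one line-scan exposure: the single row
    of pixels visible on a rotating cylinder at this angle. A real
    camera driver would return actual sensor data here."""
    x = np.arange(512)
    return np.sin(np.radians(angle_deg) + x / 50.0)  # made-up pattern

def acquire_unrolled_surface(steps=360):
    """Rotate the part through one full turn, grabbing one line per
    step, and stack the lines into a 2D image of the unrolled surface."""
    lines = [surface_line(360.0 * i / steps) for i in range(steps)]
    return np.stack(lines, axis=0)  # shape: (steps, line_width)

image = acquire_unrolled_surface()
print(image.shape)  # (360, 512)
```

In a real system the rotation (or conveyor motion) and the line trigger must be synchronized, typically via an encoder, so that each captured line corresponds to an equal step of surface travel.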
Compared with line array cameras, area array cameras acquire images as a "face": a full frame at a time. An area array sensor has many photosensitive pixels arranged in a matrix, so the camera can acquire a complete target image in one exposure and offers faster detection than a line camera. Most common inspection cameras are area-scan cameras, used for measuring area, shape, size, position, and even temperature; however, an area array camera has fewer pixels per line than a line array camera and a limited frame rate.

A camera's pixel count is usually quoted in megapixels, with the pixels arranged in a matrix. For example, the pixel matrix of a 1-megapixel camera might be W x H (width x height) = 1000 x 1000. The camera's spatial resolution is the size of the actual object that one pixel represents, expressed in um x um; the smaller this value, the higher the resolution. The resolution is determined by the optics: for the same camera, lenses with different focal lengths produce different magnifications and therefore different resolutions. The level of image detail is thus determined not by the camera's pixel count but by this resolution. At the same resolution, more pixels simply mean that a larger area can be imaged in one shot. So although sharpness is not set by the pixel count, a camera with more pixels can reduce the number of shots needed and thereby increase inspection speed.
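The relation between field of view, pixel count, and object-space resolution described above reduces to a one-line calculation. The 100 mm and 50 mm fields of view below are illustrative assumptions, not values from the text:

```python
def spatial_resolution_um(fov_mm, pixels):
    """Object-space resolution in micrometres per pixel along one axis:
    the field of view divided by the number of pixels that cover it."""
    return fov_mm * 1000.0 / pixels

# The 1000 x 1000 pixel camera from the text, imaging an assumed
# 100 mm x 100 mm field of view, resolves 100 um per pixel; a lens
# that narrows the field to 50 mm halves that to 50 um per pixel.
print(spatial_resolution_um(100.0, 1000))  # 100.0
print(spatial_resolution_um(50.0, 1000))   # 50.0
```

This is why the same camera yields different resolutions with different lenses: the lens changes the field of view, and the pixel count only decides how large an area that resolution covers per shot.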
5. From planar (2D) to stereoscopic (3D) imaging
Both line array and area array cameras achieve only 2D imaging and lack depth information. As detection-accuracy requirements rise and application scenes grow more complex, 2D cameras increasingly fall short, and 3D cameras, which can measure distance and build 3D models, have emerged to fill the gap. With continued breakthroughs in 3D vision technology, 3D cameras now far surpass 2D cameras in accuracy, speed, and flexibility, and have become very popular in many application scenarios that were traditionally "pain points" for vision systems.