How the through-fog camera is implemented

[ Huaqiang Security Network News ]
Weather changes constantly, and foggy days are far from rare. How can surveillance cameras keep working in fog? Fog makes monitoring difficult, hinders travel, and greatly reduces driving safety. This article explains how cameras use a selected portion of the light spectrum to achieve through-fog monitoring.
The through-fog camera
Prospects for the development of through-fog technology
On warships, vessels, aircraft and similar platforms, the sighting system plays a very important role in sensing the surrounding situation. It generally consists of a CCD camera and an infrared imaging system. Harsh marine weather such as fog, water vapor, rain and snow seriously degrades the image quality of both the CCD and the infrared imager: image contrast drops and distant targets become blurred and hard to distinguish, which in turn weakens awareness of the surrounding situation.
Image-processing algorithms that improve image contrast, that is, video defogging technology, have been widely applied abroad, especially in the United States. The contrast before and after image processing is shown in the following comparison:
From the comparison images it is obvious that defog processing greatly improves image contrast: the originally blurred ship becomes clearly visible, which extends the viewing distance of the sighting system and improves its ability to sense the surrounding situation. Video defogging technology therefore has good application prospects in sighting systems on ships and aircraft. Because of limitations in algorithms and hardware implementation, its application has only just started in China, and mature commercial products are rarely seen.
The ocean environment is extremely harsh: fog, rain and high humidity are common, yet the sighting system must still spot distant, small, fast-moving targets in time. Failing to detect a target promptly can leave the platform in a passive position. A video defogging device is therefore very necessary for enhancing the observation capability of the sighting system.
Lens through-fog technology
In recent years, video surveillance has become an indispensable security measure in every industry. Traditional surveillance equipment, however, shares a common drawback: its performance at night and in fog is very unsatisfactory, and night and fog account for much of the operating time. For longer-range monitoring, the picture is almost blank.
The principle of through-fog imaging is this: within the invisible (near-infrared) part of the spectrum, light of certain wavelengths can penetrate mist. Because these wavelengths differ from visible light, the lens must be specially processed so that they can be brought into focus, and the camera must be redesigned so that it can image this invisible light. Since this light has no corresponding visible color, the picture shown on the monitor is black and white. Shooting an object through cloud or water vapor is equivalent to shooting through two lenses (the water droplets and the actual lens). Only the R rays can be focused correctly onto the CCD imaging surface; the G and B components of RGB light cannot be projected onto it normally, which is why an ordinary lens cannot obtain a clear image through cloud and moisture.
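To make the focusing argument concrete, here is a minimal sketch, not from the article, that assumes a simple thin biconvex lens and Cauchy dispersion coefficients roughly matching BK7 glass. It shows how the focal plane shifts with wavelength, which is why a lens designed for visible light must be corrected or refocused before it can image the near-infrared band:

```python
# Illustrative sketch: glass dispersion shifts the focal plane with wavelength,
# so a lens optimized for visible light needs redesign/refocus for near-IR.
# Assumptions: thin biconvex lens, Cauchy coefficients approximating BK7 glass.

def refractive_index(wavelength_um, B=1.5046, C=0.00420):
    """Cauchy approximation n(lambda) = B + C / lambda^2 (lambda in micrometers)."""
    return B + C / wavelength_um ** 2

def thin_lens_focal_length(wavelength_um, r1_mm=100.0, r2_mm=-100.0):
    """Lensmaker's equation for a thin lens: 1/f = (n - 1) * (1/R1 - 1/R2)."""
    n = refractive_index(wavelength_um)
    return 1.0 / ((n - 1.0) * (1.0 / r1_mm - 1.0 / r2_mm))

for name, wl in [("blue 450 nm", 0.450), ("green 550 nm", 0.550),
                 ("red 650 nm", 0.650), ("near-IR 850 nm", 0.850)]:
    print(f"{name:14s} -> focal length ~ {thin_lens_focal_length(wl):6.2f} mm")
```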
In the past, when CCTV lenses were still below 300mm, the observation distance was generally limited to about 1km, an application with low demands on visibility. Now that focal lengths have reached 750mm, the influence of fog on the monitored image has to be taken seriously. This matters especially in long-range monitoring scenarios such as highways, forest fire prevention, oilfield monitoring and port terminals near the sea; these environments are particularly prone to fog, making uninterrupted 24-hour monitoring a new challenge.
In response, a small number of manufacturers with design and R&D capability have worked to develop lenses with a through-fog function and have successfully brought finished products to market. The emergence of this technology has greatly broadened the range of video surveillance applications and is another classic case of human ingenuity overcoming the natural environment. Some manufacturers on the market, however, lack the ability to produce through-fog lenses and sell ordinary products as such, claiming a fog-penetrating function; this is extremely irresponsible. Of course, such products cannot pass real-world testing and cannot escape being eliminated, but they create many obstacles for users who need this function and waste a great deal of their time during product selection.
Video through-fog technology
Video defogging technology generally refers to clarifying images degraded by fog and moisture: emphasizing features of interest, suppressing features that are not of interest, and improving image quality and information content. The enhanced image provides a good basis for subsequent processing. In general, enhancement methods fall into two categories, spatial-domain and frequency-domain, but these methods adapt poorly to different kinds of images. In the 1970s the American physicist Land and colleagues proposed the Retinex image enhancement method, an image-processing model based on human visual perception. It compresses the dynamic range of the image and reveals details that would otherwise be drowned out. The algorithm is complex, however, and difficult to implement in engineering terms, especially for real-time video enhancement, where the heavy computation long made practical application difficult. As hardware performance has improved, it has finally become possible to turn this broadly applicable enhancement algorithm into an engineered product; this is the industry's first hardware implementation of the Retinex algorithm.
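For reference, here is a minimal sketch of the kind of classic spatial-domain enhancement the article contrasts with Retinex: global and contrast-limited adaptive histogram equalization. It assumes OpenCV is installed and that "frame.png" is a grayscale fog-degraded frame (both are illustrative assumptions, not details from the article):

```python
# Classic spatial-domain contrast enhancement on a fog-degraded frame.
# Assumptions: OpenCV available, "frame.png" is a grayscale test image.
import cv2

frame = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)

# Global histogram equalization: spreads gray levels over the full range.
global_eq = cv2.equalizeHist(frame)

# Contrast-limited adaptive histogram equalization (CLAHE): equalizes in
# local tiles, which usually copes better with unevenly lit foggy scenes.
clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
local_eq = clahe.apply(frame)

cv2.imwrite("global_eq.png", global_eq)
cv2.imwrite("local_eq.png", local_eq)
```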
The Retinex algorithm is based on a model of how the human visual system perceives and adjusts the color and brightness of objects. The model accounts for phenomena of human color perception that general color theory, based purely on wavelength and brightness, cannot explain. Land showed through extensive experiments that the perceived surface color of an object does not change with lighting conditions, i.e. color constancy: in midday sun, under incandescent light or in dim lighting, humans perceive the same object as having the same color. For this reason, image processing should remove the illumination component, along with uncertain, non-essential effects such as uneven lighting, and retain only the reflective properties of the object, such as reflectance. Images processed on this basis show good results in edge sharpening, dynamic range compression and color constancy.
The basic idea of Retinex theory is to regard the original image as the product of an illumination image and the object's reflectance. The illumination image determines the dynamic range that pixels in the image can reach, while the reflectance determines the image's intrinsic properties. The essence of Retinex is therefore to remove or reduce the influence of the illumination image in the original image and retain the essential reflectance. Compared with other image enhancement methods, the Retinex algorithm offers sharpening, color constancy, strong dynamic range compression and high color fidelity.
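A minimal single-scale Retinex sketch of the decomposition described above, I(x, y) = L(x, y) · R(x, y): estimate the illumination L with a Gaussian blur, subtract it in the log domain to keep the reflectance, then stretch the result for display. It assumes the same OpenCV/NumPy environment and grayscale input as the previous sketch:

```python
# Single-scale Retinex sketch: I(x, y) = L(x, y) * R(x, y).
# Estimate illumination L with a Gaussian low-pass filter, then recover the
# reflectance in the log domain: log R = log I - log(G * I).
# Assumptions: OpenCV + NumPy, single-channel 8-bit input.
import cv2
import numpy as np

def single_scale_retinex(image, sigma=80.0):
    img = image.astype(np.float64) + 1.0          # avoid log(0)
    illumination = cv2.GaussianBlur(img, (0, 0), sigma)
    log_reflectance = np.log(img) - np.log(illumination)
    # Linearly stretch the result back to a displayable 8-bit range.
    stretched = cv2.normalize(log_reflectance, None, 0, 255, cv2.NORM_MINMAX)
    return stretched.astype(np.uint8)

frame = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)
cv2.imwrite("ssr.png", single_scale_retinex(frame))
```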
Most video enhancers today use only a global Retinex algorithm: they compute the relative brightness between adjacent pixels as the ratio of their gray values in the logarithmic domain, correct each pixel's gray value according to this light/dark relationship, and finally stretch the corrected values linearly to obtain the enhanced image. The contrast of the resulting image is therefore not high. The CASEVisionVE9901 video enhancer uses a more advanced multi-scale Retinex algorithm with strong generality, and additionally provides optimized logarithmic histogram equalization and several noise-filtering algorithms. Built on an embedded DSP hardware architecture, it is small, low-power and high-performance, processes images in real time, and automatically adapts to PAL and NTSC video. Its latency is very low, no more than one frame: 40ms for PAL and 33ms for NTSC. It also supports both full-screen and windowed enhancement, and the size and position of the local enhancement window can be adjusted dynamically.
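The multi-scale variant referred to above is commonly built as an average of single-scale Retinex results computed at several Gaussian scales, so that both fine detail and large-area haze are handled. The following is a generic sketch of that idea, under the same assumptions as the previous examples, and not the CASEVisionVE9901's proprietary DSP implementation:

```python
# Multi-scale Retinex sketch: average single-scale Retinex results computed
# at several Gaussian scales.  Generic algorithm only, not vendor firmware.
import cv2
import numpy as np

def multi_scale_retinex(image, sigmas=(15.0, 80.0, 250.0)):
    img = image.astype(np.float64) + 1.0           # avoid log(0)
    msr = np.zeros_like(img)
    for sigma in sigmas:
        illumination = cv2.GaussianBlur(img, (0, 0), sigma)
        msr += np.log(img) - np.log(illumination)
    msr /= len(sigmas)                             # equal weight per scale
    stretched = cv2.normalize(msr, None, 0, 255, cv2.NORM_MINMAX)
    return stretched.astype(np.uint8)

frame = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)
cv2.imwrite("msr.png", multi_scale_retinex(frame))
```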
