International Journal of Control, Automation, and Systems (2010) 8(1):99-106 DOI 10.1007/s12555-010-0113-z
http://www.springer.com/12555
A Threshold-based Thinning Algorithm for a Visual, Automated Snow-Cover Measurement System

Ik-Sang Shin, Jong-Hyeong Kim, and Soon-Geul Lee*

Abstract: An automated snow-cover measuring system is developed to measure the amount of snowfall by analyzing the visual image of a reference pole under an unstructured outdoor environment. The system consists of a reference pole, a CCD camera (including an infrared module), and a PC which transfers the processed information to remote users via the Internet. The snow depth is estimated from the lowest uncovered position of the pole in the captured image. After correcting the image distortion through an expansive coefficient curve, the corrected image is compared to a virtual measuring scale (VMS) for accurate measurement. To enhance the visual measurement process, the captured images are pre-processed and the camera is calibrated for the natural outdoor light conditions of varied weather. A snow-cover measuring (SCM) algorithm is used to detect the height of the piled snow. The system can also continuously transfer the raw images, as well as the estimated snow depth, to remote clients through the Internet. The experimental results show that the system improves the reliability and accuracy of the measurement, and that it is convenient to use.

Keywords: Outdoor environment, snow-cover measuring, thinning, visual image, VMS.
__________
Manuscript received May 29, 2008; revised April 24, 2009 and October 20, 2009; accepted November 9, 2009. Recommended by Editorial Board member Dong-Joong Kang under the direction of Editor Jae-Bok Song. This work was partially supported by the grant of the National Research Foundation of Korea (No. R012008-000-20645-0) and the ITRC support program supervised by the National IT Industry Promotion Agency (NIPA-2009-C10900904-0005).
Ik-Sang Shin is with the National Academy of Agricultural Science, RDA, Korea (e-mail: [email protected]).
Jong-Hyeong Kim is with the School of Mechanical Design and Automation Engineering, Seoul National University of Technology, Seoul, Korea (e-mail: [email protected]).
Soon-Geul Lee is with the Industrial Liaison Research Center, Kyung Hee University, Yongin, Korea (e-mail: [email protected]).
* Corresponding author.
© ICROS, KIEE and Springer 2010

1. INTRODUCTION

Occasional heavy and sudden snowfall in various areas of the world causes tremendous casualties and financial losses. The magnitude of the damage has increased partly because of the lack of prompt warnings against sudden and heavy snowfall. Although abnormal and sudden changes in the weather make it difficult to forecast snowfall promptly and accurately, prompt warnings would facilitate the effective management of the distribution and operation of limited resources, especially snow-removing equipment. Therefore, measuring the amount of snowfall as accurately as possible and predicting the occurrence of heavy snow have both become very important meteorological tasks.
Generally, the locations where snowfall is a main concern as a major snow-cover index of a region, such as Daegwanryeong Province in Korea, are usually difficult to examine because of the heavy snow itself. The traditional measuring method for snow cover is an ocular examination of the actual height of snow piled against a vertical pole mounted on the ground [1]. With the development of sensor and measurement technology, more accurate and convenient methods for snow-cover measurement have been introduced [2,3]; the balance for measuring the amount of precipitation (BMAP) and the depth-of-snow-cover meter (DSCM) are two examples. BMAP estimates the depth of snow cover by calculating the amount of snowfall from the mass of precipitation; this method easily yields unreliable measurements when foreign materials such as leaves and dirt are included. DSCM, meanwhile, uses an ultrasonic signal to measure the distance from the emitter to the snow-covered surface. Such a device, however, produces false measurements when an unexpected floating object blocks the propagation of the signal; in particular, its accuracy is compromised by heavy snow. To improve the performance of, and the convenience involved in, the snow measuring and monitoring process, we have developed an Internet-based automatic visual monitoring and measuring system for snow cover [4], called Cliview-High®. It incorporates a locally installed CCD camera that captures images of a vertical reference pole buried under the snow. The captured images are periodically transferred to remote users through the Internet. The user can read off the virtual mark on the pole from the captured images, and the Cliview-High system can automatically process the images to estimate the depth of the snow cover. Although remote accessibility to Cliview-High through the Internet enhances the promptness in
preparing against possible climatic disasters, there are many problems in measuring snow depth from images acquired by a camera. Under clear daylight, it is easy to estimate the snow cover on the pole with the naked eye. At night, however, it is difficult to identify the boundary between the top surface of the snow and the uncovered region of the reference pole. Adding more light introduces other complications, such as changes in ambient light intensity and the need for exposure compensation. Moreover, the captured image is inevitably distorted by the geometric configuration of the installation. To obtain accurate measurements from the image, this problem must be addressed. The Cliview-High system processes and analyzes the captured images to extract information about the snow cover as accurately as possible. To measure the snowfall from the image, a segmentation method [5-7] is chosen. For the segmentation of intensity images, there are four main approaches, namely, the threshold technique, the boundary-based method, the region-based method, and the hybrid technique. The threshold technique is based on the postulate that all pixels whose values lie within a certain range belong to one class. The boundary-based method uses the postulate that the pixel value changes rapidly at the boundary between regions, while the region-based method relies on the postulate that neighboring pixels within one region have similar values. This leads to the class of algorithms known as region growing, of which the "split and merge" technique is probably the best-known example. Hybrid techniques, on the other hand, combine the boundary and region criteria. The main contribution of this research is the development of a resource-limited real-time embedded system, its deployment in a real application, and the systematization of a practical and reliable method for snow-cover measurement under an unstructured natural environment regardless of the weather condition.
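As a concrete illustration of the threshold technique described above, the following minimal sketch assigns every pixel to one of two classes by value alone. The array values and the threshold of 128 are illustrative assumptions, not values from this work.

```python
import numpy as np

def threshold_segment(gray, t):
    """All pixels below t form one class (0, black); the rest form the other (255, white)."""
    return np.where(gray < t, 0, 255).astype(np.uint8)

# Illustrative scanline: dark pole pixels followed by bright snow pixels.
scanline = np.array([20, 25, 30, 200, 210, 220], dtype=np.uint8)
print(threshold_segment(scanline, 128))  # → [  0   0   0 255 255 255]
```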
The following sections of this paper explain the details of the image-processing approaches and the use of the Virtual Measuring Scale (VMS) in Cliview-High. Section 2 explains the general configuration of Cliview-High, while Section 3 illustrates the image preprocessing and automatic pole detection, and then presents the calibration of the images. This section also describes the use of the VMS and the automated detection of the snow-covered surface. The experimental results are discussed in Section 4, and the conclusion of this paper is given in Section 5.
2. SYSTEM CONFIGURATION

Fig. 1 shows the configuration of Cliview-High. The system hardware is composed of a snow-cover measuring pole, a CCD camera, and a PC as the main control unit. The software consists of an image-capturing module, an image-processing algorithm, and a transferring module for the snow data. The pole is a cylindrical structure 1.5 m in height, which is installed on the local instrumental shelter. It is coated with black-colored chemicals to minimize the adhesion of snow. The distance between the CCD camera and the reference pole is 4 m. An infrared lighting module is also installed in the shelter to adjust the intensity at nighttime instead of using a conventional outdoor lighting system; even though conventional outdoor lighting showed better performance, it drew several complaints from neighbors. The CCD camera captures the image of the uncovered pole along with its surroundings. The image-processing unit determines the depth of the snow cover from the captured images. The operation of the CCD camera and the lighting system can be remotely controlled by a privileged user. The central control unit includes an Internet-based image transport system that periodically broadcasts the images taken by the CCD camera via a dedicated network or the Internet. The measurement of snowfall can be executed either automatically at a preset rate or manually by the remote operator.

Fig. 1. Configuration of the Cliview-High.

3. THE SNOW-COVER RECOGNITION ALGORITHM

There are four main requirements of the image-processing algorithm in Cliview-High, as shown in Fig. 2. The first is to correct the image that is distorted by the geometric configuration of the installation. The second is to preprocess the images and automatically detect the
Fig. 2. Block diagram of the Cliview-High system.
reference pole. The third is to determine the surface of the snow cover and then estimate the actual height of the snowfall from the corrected image. Finally, data filtering is implemented. This section explains the details of the processes designed to satisfy these requirements. For the correction of the image, the authors have taken the well-known expansive correction curve approach. After the correction, the corrected image is compared to a so-called VMS for estimating the actual height of the snowfall.

3.1. Correction and VMS
As shown in Fig. 3, the initial process aims to obtain an appropriate image from the raw CCD image under the following instrumental conditions: white balancing, brightness control, infrared mode, and so on. The image-capturing process and all other processes shown in the figure can be controlled by a remote user. After the image is obtained, the VMS is established based on the corrected results of the distorted images obtained by the CCD camera. The raw image from the CCD camera is distorted and does not exactly represent the actual scale of the pictured subject. The camera is located higher than the reference pole, and the camera angle becomes more oblique when the target is near the bottom of the pole, as shown in Fig. 4; the source image of the camera is therefore distorted. Correction for barrel distortion is applied to the raw image as illustrated in Fig. 4, because the system requires a relatively precise 5 mm resolution over the whole image area. The full measurement range is 1.5 m with a camera whose resolution is 704×480; one pixel then spans about 2.1 mm, so even a single pixel is critical in obtaining the required precision. Most snow-depth measurements are smaller than 10 cm; this range is meteorologically significant as a forecast and closely related to daily life, and Fig. 4(b) shows significant error between the pixel image and the corrected image below 10 cm. Without the distortion correction, the system may present severely erroneous results for small snowfalls. The correction approach taken in Cliview-High expands the distorted image using an expansive correction curve function, where each pixel in the captured image is shifted radially. Correction (1), which is used in this work and which determines the distance by which each pixel is shifted, is a third-order polynomial. It relates the distance of a pixel from the center of the source image, $r_{src}$, to the corresponding distance in the corrected image, $r_{cor}$, as shown in Fig. 4(a):
Fig. 3. Block diagram for correction.
Fig. 4. Camera distortion: (a) correction of the distorted image taken by the CCD camera; (b) correction curve in centimeters corresponding to the distortion ratio of the CCD camera.

$r_{src} = (a \cdot r_{cor}^{3} + b \cdot r_{cor}^{2} + c \cdot r_{cor} + d) \cdot r_{cor}.$   (1)
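A sketch of applying correction (1) as an image remapping is given below. It assumes radii are normalized by the image half-diagonal and uses nearest-neighbor sampling; the function names and parameter values are illustrative placeholders, not the experimentally fitted ones from this work.

```python
import numpy as np

def correct_distortion(src, a, b, c, d):
    """Build the corrected image: for each target pixel at radius r_cor,
    sample the source at r_src = (a*r^3 + b*r^2 + c*r + d) * r_cor, as in Eq. (1)."""
    h, w = src.shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    norm = np.hypot(cy, cx)                  # normalize radii by the half-diagonal
    ys, xs = np.indices((h, w))
    dy, dx = ys - cy, xs - cx
    r_cor = np.hypot(dy, dx) / norm
    scale = a * r_cor**3 + b * r_cor**2 + c * r_cor + d
    src_y = np.clip(np.rint(cy + dy * scale).astype(int), 0, h - 1)
    src_x = np.clip(np.rint(cx + dx * scale).astype(int), 0, w - 1)
    return src[src_y, src_x]                 # nearest-neighbor lookup

# With a = b = c = 0 and d = 1 the mapping is the identity (no distortion),
# matching the dotted line in Fig. 4(b).
img = np.arange(25, dtype=np.uint8).reshape(5, 5)
assert np.array_equal(correct_distortion(img, 0, 0, 0, 1.0), img)
```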
In Fig. 4(b), the horizontal axis is the pixel position $r_{src}$, while the vertical axis is the position in the corrected image $r_{cor}$. The solid line shows the relation between $r_{src}$ and $r_{cor}$ of (1). The parameter d represents the linear scaling of the image: the parameter set d = 1 and a = b = c = 0 leaves the image undistorted and is denoted by the dotted line in Fig. 4(b), while the parameters a, b, and c distort the image. If we do not want to scale the image, we can choose d so that a + b + c + d = 1. The parameters are specified for each installation of the camera and can then be used for all images taken by that particular camera. In this work, the correcting parameters were experimentally obtained from a set of reference points spaced evenly, 10 cm apart, on the actual reference pole. During this process of distortion correction, a so-called VMS is established. Once the mapping between the real height and the pixel position is obtained experimentally, like the correction curve of Fig. 4, the VMS can be established as a virtual scale whose values correspond to the real height at each pixel position of the raw image [4]. The VMS is overlaid on top of the corrected image and makes it possible to read the scale of the image. It also shows the actual distance between pixel points and can be used for both manual and
automatic measurements of the snow cover. For manual measurement by eye, the image and the linearly scaled VMS can be zoomed in; the user locates the surface of the snow cover and reads the height of the snowfall from the corrected and zoomed image against the VMS. The automated measurement uses a similar method, but it requires a reliable technique to automatically detect the surface of the snow cover. The snow-cover measuring algorithm serves this purpose. Initially, image preprocessing and pole detection are necessary to obtain exact results.

3.2. Image preprocessing and automatic pole detection
Generally, a certain level of intensity and contrast is required for proper image processing. The intensity and contrast of a raw CCD image vary widely over time according to the prevailing weather conditions. Fig. 5 shows the variation in intensity at Daegwanryeong during one sunny day. It is very difficult to recognize the characteristics of the target without any preprocessing because the image has degraded intensity and contrast from early evening to 8:00 pm, and such variation can be even more severe in bad weather. A system tuned to only one good weather condition cannot handle images taken under other conditions. Therefore, as shown in Fig. 6, preprocessing that adjusts the overall intensity of the whole image and then increases the contrast of the partial image in the Region of Interest (ROI) makes this an all-weather system. Even though the camera and the pole are fixed to the ground, disturbances such as wind and ground vibration affect the relative position of the camera and pole. Therefore, the position of the pole is automatically re-detected at a constant interval, as shown in Fig. 6. After cropping the specific region of the pole and its adjacent surroundings from the captured image, a thinning method is applied. The thinning makes further analysis and recognition easy and fast. Histogram projection is then used to determine the crossing point between the pole and its branch, which is the key reference position. The experimental result of the process of Fig. 6 is shown in Fig. 7.
Fig. 5. Variations in image intensity for one day.
Fig. 6. Block diagram for periodic automatic pole detection.
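The histogram-projection step of the pole detection can be sketched as follows: summing the dark pixels down each column of the binarized image produces a projection that peaks at the pole's horizontal position. The small binary test image and the function name are illustrative assumptions.

```python
import numpy as np

def locate_pole(binary):
    """binary: 1 where a pixel belongs to a dark object, 0 elsewhere.
    The column with the largest vertical projection is taken as the pole axis."""
    projection = binary.sum(axis=0)   # one dark-pixel count per image column
    return int(np.argmax(projection))

# Toy 5x7 image with a vertical "pole" in column 3.
img = np.zeros((5, 7), dtype=int)
img[:, 3] = 1
print(locate_pole(img))  # → 3
```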
Fig. 7. Implementation of the pole detection system.

3.3. Segmentation for snow-cover measurement
It is necessary to improve the input image with a median filter for a rapid search of the boundary between the snow-covered and uncovered parts of the reference pole within the ROI. The image contains a variety of noise components, such as salt-and-pepper noise and high-frequency noise. The median filter is very effective in removing salt, pepper, and impulse noise while retaining image details, because it does not depend on values that are significantly different from the typical values in the neighborhood [7]. A median filter with a 5×5 mask is applied to the ROI. A region-based technique is used to determine the edge between the bare pole and the snow cover. When an object exhibits a uniform gray level against a background of a different gray level, gray-level thresholding can be applied as a simple region-based technique. The value 0 (black) is assigned to all pixels with a gray level less than the threshold, and the value 255 (white) to all pixels with a gray level greater than the threshold. The image is thus segmented into two disjoint regions, one corresponding to the background and the other to the object. The selection of an appropriate threshold is the single major problem for reliable segmentation. One useful approach is to use the average gray level of the pixels on the boundary between the object and the background. The Marr-Hildreth operator is used here to select the threshold level in order to detect a reliable edge. Marr and Hildreth suggested that a set of images with different levels of smoothness can be obtained by applying Gaussian filters of different scales to an image; to detect the edges in these images, one finds the zero-crossings of their second derivatives. Marr and Hildreth achieved this using the Laplacian of a Gaussian (LoG) function as
A Threshold-based Thinning Algorithm for a Visual, Automated Snow-Cover Measurement System
a filter:

$\nabla^2 G(x, y) = \frac{x^2 + y^2 - 2\sigma^2}{2\pi\sigma^6} \, e^{-\frac{x^2 + y^2}{2\sigma^2}}.$   (2)
Using this operator, the mean and standard deviation of the gray levels of the edge pixels are computed; together they determine the global threshold value. Fig. 8 shows real snow-cover images captured by the Cliview-High system. The reference pole can just be distinguished as a bar-type image in the captured picture. Fig. 8 also shows the effect of snowfall at nighttime and the snow cover at daytime. Snowfall at nighttime appears as streaks and distorts the shape and region of the pole, as shown in Fig. 8(a). Under heavy snowfall, the pole looks as if it were divided into several parts, as in Fig. 8(b). Moreover, snow may strike the pole at high speed, which can cause blurring of the snow streaks, as shown in Fig. 8(c). The best condition for measurement is shown in Fig. 8(d), when the wind has almost calmed down and the weather is clear, because appropriate intensity and contrast can be guaranteed on a clear day. Table 1 lists the mean and standard deviations of the gray levels of the pixels along the edge, computed with the Marr-Hildreth operator, together with those of the overall ROI image. The mean and standard deviation vary over time; the threshold value should therefore not be constant but should vary according to the characteristics of the image distribution.
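A sketch of this threshold-selection step is given below, assuming a 7×7 discrete LoG kernel built from Eq. (2) and horizontal zero-crossings only. It is a simplification of the Marr-Hildreth procedure for illustration, not the exact implementation used in Cliview-High.

```python
import numpy as np

def log_kernel(sigma=1.0, size=7):
    """Discrete Laplacian-of-Gaussian kernel from Eq. (2), adjusted to zero mean."""
    r = size // 2
    y, x = np.mgrid[-r:r + 1, -r:r + 1]
    g = ((x**2 + y**2 - 2 * sigma**2) / (2 * np.pi * sigma**6)
         * np.exp(-(x**2 + y**2) / (2 * sigma**2)))
    return g - g.mean()                      # force zero response on flat regions

def convolve2d_valid(img, k):
    """Naive 'valid'-mode 2-D convolution, sufficient for a sketch."""
    kh, kw = k.shape
    h, w = img.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * k)
    return out

def edge_statistics(gray, sigma=1.0):
    """Mean and std of gray levels at LoG zero-crossings (sign changes of Eq. (2)'s
    response); these statistics set the global threshold."""
    resp = convolve2d_valid(gray.astype(float), log_kernel(sigma))
    sign_change = resp[:, :-1] * resp[:, 1:] < 0
    strong = np.minimum(np.abs(resp[:, :-1]), np.abs(resp[:, 1:])) > 1e-6
    edge_pixels = gray[3:-3, 3:-4][sign_change & strong]  # align with 'valid' output
    return edge_pixels.mean(), edge_pixels.std()

# A vertical step edge: dark "pole" on the left, bright "snow" on the right.
gray = np.zeros((20, 20)); gray[:, 10:] = 255.0
mu, sd = edge_statistics(gray)
```

The global threshold is then taken as µ + cσ from these statistics, as used later in rule (4).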
Table 1. The mean and standard deviations of the gray level of pixels (I) along the edge and those (II) of the overall ROI image (size: 500 × 36).

Input image | (I) µ  | (I) σ | (II) µ | (II) σ
(a)         | 44.19  | 19.58 | 106.69 | 85.94
(b)         | 43.88  | 20.38 | 107.82 | 85.52
(c)         | 12.67  | 22.72 | 143.2  | 85.06
(d)         | 37.7   | 19.06 | 165.05 | 91.06
m11  m12  m13
m21  m22  m23
m31  m32  m33

Fig. 9. Connectivity check mask for the threshold operation.

Fig. 8. Real snow-cover images captured by the Cliview-High system: (a) effect of snowfall at nighttime, (b) effect of heavy snowfall at nighttime, (c) snowflake streaks caused by multiple effects at nighttime, (d) snow cover captured at daytime.

The masks used to check connectivity in the ROI are shown in Fig. 9. As the pole is a uniform black region, the connected components that make up the pole are discriminated from the others without eliminating the corner pixels:

$m = (m_{11} \,|\, m_{12} \,|\, \cdots \,|\, m_{33}),$   (3)

where

$m_{11} = |f(i, j) - f(i, j+1)| \le k \;\&\; |f(i, j) - f(i+1, j)| \le k,$

f(i, j) is the original gray image, and the proper value of k is experimentally obtained by trial and error under snowy conditions. If the connectivity exists, that is, m is true, we threshold the current pixel of the gray image with the following rule; otherwise, we set the pixel to 1 (white):

$B_{ij} = \begin{cases} 0, & f(i, j) \le \mu + c\sigma \\ 1, & \text{otherwise}, \end{cases}$   (4)
where B_ij is the binary image after thresholding, and µ and σ are the mean and standard deviation of the gray levels of the image pixels along the edge, respectively. The parameter c is the average value of the normalized intensity of all pixels in the ROI. The complete segmentation algorithm is summarized as follows:
Step 1: Apply a median filter with a 5×5 mask to the ROI.
Step 2: Detect the snow-covered edge of the pole in the ROI. Then compute the mean and standard deviations of
the gray levels of the image pixels along the edge using the Marr-Hildreth operator.
Step 3: Perform the threshold operation. (a) Check the connectivity using the connectivity check masks with (3). If the connectivity fails, set the current pixel to 1 (white) and go to (c). (b) If it succeeds, threshold the current pixel of the gray image with (4). (c) Repeat Step 3 until the end of the ROI.
Step 4: Carry out the closing operation (dilation followed by erosion with threshold level 5).
Step 5: Execute the component-labeling algorithm with 4-connectivity.
Step 6: Apply size filtering with the condition (size of region) < ks.
Step 7: Execute the thinning algorithm to obtain simplified information on the object (the reference pole).
Another important criterion is how to merge separated regions. The reason for considering this criterion is illustrated in the gray images of Fig. 10. The first image of each of Figs. 10(a)-(d) shows the ROI to be measured; the few white marks on the black pole in those images are traces of snowflakes. Even after a series of preprocessing steps, the white traces of the snowflakes remain and divide the black pole image into several regions, as shown in the first image of Fig. 10. A region-merging criterion is used to eliminate this kind of undesirable disturbance caused by small flying objects and to increase the reliability of the measurement. If two or more regions with the same properties are separated by no more than a certain pixel distance, the regions are merged into one. The selection of this pixel-distance value for region merging is a trade-off between the robustness and the accuracy of SCM. If the threshold (the pixel-distance value) is too small, noise cannot be properly removed; if it is too large, more distant noise is included in the ROI, and the measured height of the snow cover can be far from the real value. A threshold of 5 pixels was chosen based on experimental experience. As shown in the fourth image of each of Figs. 10(a)-(c), two parts broken by snowfall are merged into one using this threshold. The thinning operation is executed after labeling and size filtering. The purpose of thinning is to reduce the image components to their essential information, the height of the pole, in order to facilitate further analysis and easy recognition. Finally, the depth of the snow cover is estimated by comparing the pixel information of the boundary with the physical height information stored in the VMS.

3.4. Data filtering
Even though the SCM algorithm gives a series of snow-cover heights over time as its final results, the results must be filtered to reject high-frequency noise and to obtain reliable data. Given that the output is reported to the Central Control Station through the Internet every 10 minutes and is used as official weather information, the data filtering is performed as follows (Fig. 11):
Step 1: Save the successive results of SCM, which arrive six per minute, into the data set, then perform sorting and median filtering over 10 minutes. This step
Fig. 10. The results of the algorithm processing from four input images of Fig. 8(a)-(d): the processing steps for each (a)~(d) are 1) automatic pole detection → 2) edge detection with the Marr-Hildreth operator → 3) threshold with the mean and standard deviation of the edge pixels → 4) component labeling after the closing operation which is dilation followed by erosion and then size filtering → 5) thinning processing for height information.
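The region-merging criterion described above can be sketched as follows, representing each labeled pole segment by its vertical pixel interval (top, bottom). The interval values are illustrative assumptions; the 5-pixel gap matches the experimentally chosen threshold.

```python
def merge_regions(regions, max_gap=5):
    """Merge labeled pole segments (top, bottom) whose vertical gap is at most
    max_gap pixels, removing breaks caused by snow streaks on the pole."""
    merged = []
    for top, bottom in sorted(regions):
        if merged and top - merged[-1][1] <= max_gap:
            # Gap small enough: extend the previous segment.
            merged[-1] = (merged[-1][0], max(merged[-1][1], bottom))
        else:
            merged.append((top, bottom))
    return merged

# Two pole fragments split by a 3-pixel snow streak, plus a distant noise blob.
print(merge_regions([(0, 120), (124, 300), (350, 352)]))
# → [(0, 300), (350, 352)]
```

After merging, only the large connected component survives the size filtering, which keeps a small flying object from being read as the snow surface.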
aims to reject a sudden change.
Step 2: Perform binning, which divides the 10 outputs obtained from Step 1 over the 10 minutes, in order to exclude the effect of other blocks and to smooth the data.
Step 3: Perform time-difference filtering, which checks whether the difference between the current and previous outputs is within the normal range.

Fig. 11. Block diagram for snow-cover data filtering.

Table 2. Measured results during nighttime (all values in cm).

(a) Manual | (b) Proposed | (c) SRG | |a-b| | |a-c|
0    | 0.1  | 0.1  | 0.1  | 0.1
1.6  | 1.6  | 1.5  | 0    | 0.1
3.6  | 3.5  | 3.6  | 0.1  | 0
7.2  | 7.2  | 7.2  | 0    | 0
15   | 15.1 | 15.2 | 0.1  | 0.2
17.1 | 17   | 17.1 | 0.1  | 0.1
23.4 | 23.4 | 23.3 | 0    | 0.1
30.5 | 30.4 | 30.3 | 0.1  | 0.2
44.8 | 44.9 | 44.7 | 0.1  | 0.1
48.7 | 48.7 | 48.7 | 0    | 0
Average of error |      |      | 0.06 | 0.09

4. EXPERIMENTAL RESULTS

The performance of Cliview-High has been tested under various conditions. This work, however, shows only a set of typical experimental results, where the system was installed in a mountainous area of Gangwon Province, South Korea, a region where frequent and sudden heavy snowfall causes severe problems annually. Since the winter of 2006, the height of snowfall has been measured automatically, 24 hours a day, by Cliview-High. A set of 12 different measurements using the proposed algorithm is compared to conventional manual measurements performed locally by an expert, and is then compared further with those of the Seed Region
Growing (SRG) method in Fig. 12. Processing with the SRG method can stop at an unexpected separation of image regions caused by drifting snow; when it stops, the seed point must be placed in the other separated region to continue processing, which takes additional time. Tables 2 and 3 show typical results obtained from the experiments and the comparison with the SRG method. The measurements using Cliview-High were made under various snowfall and lighting conditions: 10 different levels of snow height on a snowy night and 10 different levels on a snowy day. The minimum height tested is 1.6 cm, and the maximum is 54.4 cm. The error of the proposed method relative to the manual measurement is within 0.1 cm, which is much smaller than the acceptable error in usual snow-cover measuring practice. The average error of the proposed method relative to manual measurement is 0.06 cm during nighttime and 0.056 cm during daytime. In the meantime,
Fig. 12. The images that show 12 cases of measurement: “a” is the clipped image of ROI, “b” is the resultant image after processing with SRG, and “c” is the resultant image after processing with the proposed method for each (1)~(12).
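The data-filtering procedure of Section 3.4 can be sketched as follows for Steps 1 and 3; the binning of Step 2 is omitted, and the 5 cm jump limit per 10-minute interval is an illustrative assumption, not a value from this work.

```python
import numpy as np

def filter_snow_depth(samples, prev_output, max_step=5.0):
    """Sketch of the data filtering: sort the 10-minute window and take its
    median (Step 1), then reject an abnormal jump relative to the previous
    reported output (Step 3)."""
    window = np.sort(np.asarray(samples, dtype=float))   # Step 1: sorting
    candidate = float(np.median(window))                 # Step 1: median filter
    # Step 3: time-difference check against the previously reported value.
    if prev_output is not None and abs(candidate - prev_output) > max_step:
        return prev_output
    return candidate

# Six SCM results per minute give 60 samples per 10-minute report;
# a few outliers from drifting snow are rejected by the median.
readings = [15.0] * 55 + [40.0] * 5
print(filter_snow_depth(readings, prev_output=14.8))  # → 15.0
```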
Table 3. Measured results during daytime (all values in cm).

(a) Manual | (b) Proposed | (c) SRG | |a-b|  | |a-c|
0    | 0    | 0    | 0    | 0
3.6  | 3.6  | 3.5  | 0    | 0.1
7.4  | 7.3  | 7.2  | 0.1  | 0.2
19.2 | 19.1 | 19.1 | 0.1  | 0.1
26.5 | 26.5 | 26.4 | 0    | 0.1
31.3 | 31.2 | 31.2 | 0.1  | 0.1
37.8 | 37.7 | 37.7 | 0.1  | 0.1
41.1 | 41.1 | 39.9 | 0.0  | 0.2
54.4 | 54.3 | 54.2 | 0.1  | 0.2
Average of error |      |      | 0.055 | 0.122
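The average-error figures reported in Tables 2 and 3 can be reproduced directly from the tabulated values; the nighttime columns of Table 2 are used here as the worked example.

```python
# Nighttime measurements from Table 2 (cm).
manual   = [0, 1.6, 3.6, 7.2, 15, 17.1, 23.4, 30.5, 44.8, 48.7]
proposed = [0.1, 1.6, 3.5, 7.2, 15.1, 17, 23.4, 30.4, 44.9, 48.7]

# Average absolute difference |a - b| over the ten snow-height levels.
avg_err = sum(abs(a - b) for a, b in zip(manual, proposed)) / len(manual)
print(round(avg_err, 2))  # → 0.06
```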
the average error of SRG is 0.09 cm during nighttime and 0.122 cm during daytime. During daytime there is no major difference between the two methods, but the proposed method performs significantly better than SRG during nighttime.

5. CONCLUSIONS

In this work, a visual snow-cover monitoring and measuring system, Cliview-High, was developed. Cliview-High captures an image of the snow cover on the reference pole, estimates the height of the snow cover with the proposed method, and transfers the measured height and the image to remote users through the Internet. The Korean Meteorological Administration currently operates more than 20 systems at 19 sites in South Korea and provides an Internet service to the public during the winter season. The system has been observed to be robust under unstructured outdoor weather conditions, and the proposed algorithm has good reliability and sufficient accuracy for measuring the height of the snow cover. The validity of Cliview-High has been proven experimentally, with an average error of 0.06 cm under various conditions, by comparison with the manual measurements of an expert. This means that the error of the system is much smaller than the usually acceptable error in other snow-measuring practices.
REFERENCES
[1] C. Labine, "Automatic monitoring of snow depth," Proc. of the Int. Snow Science Workshop, Banff, Canada, pp. 179-183, 1996.
[2] A. T. C. Chang, R. E. J. Kelly, J. L. Foster, and T. Koike, "Estimation of snow depth from AMSR-E in the GAME/CEOP Siberia experiment region," Proc. of IEEE IGARSS, pp. 3670-3673, 2004.
[3] S. H. Lim, G. H. Kim, S. G. Lee, and S. Rhim, "Internet-based visual snow-cover monitoring and measuring system," Proc. of SICE-ICASE International Joint Conference, pp. 3887-3890, 2006.
[4] S. Rhim, G. H. Kim, S. H. Lim, and S. G. Lee, "Internet-based visual snow-cover measurement using virtual measuring scale," Proc. of IMECS, pp. 1939-1942, 2007.
[5] N. Otsu, "A threshold selection method from gray-level histograms," IEEE Trans. on Systems, Man, and Cybernetics, vol. 9, no. 1, pp. 62-66, 1979.
[6] D. Marr and E. Hildreth, "Theory of edge detection," Proc. of the Royal Society of London B, vol. 207, no. 1167, pp. 187-217, 1980.
[7] R. Jain, R. Kasturi, and B. G. Schunck, Machine Vision, ch. 4, McGraw-Hill International Editions, 1995.
[8] J. S. Bae and T. L. Song, "Image tracking algorithm using template matching and PSNF-m," International Journal of Control, Automation, and Systems, vol. 6, no. 3, pp. 413-423, 2008.
Ik-Sang Shin received the B.E. degree in Electrical Engineering from Hanbat National University, Daejeon, Korea, the M.S. degree in Electrical Engineering from Myongji University, Seoul, Korea, and the Ph.D. degree in Mechanical Engineering from Sejong University in 1996, 1998, and 2005, respectively. He worked as a researcher at Florida State University from Sep. 2006 to Sep. 2007, and at Kyung Hee University from Oct. 2007 to Nov. 2008. He currently works in the Agricultural Engineering Farming Automation Division of the Rural Development Administration. His areas of interest include robotics and automation, control systems for underwater vehicles, path planning for multiple mobile robots, and Internet-based application systems with machine vision.

Jong-Hyeong Kim received the B.S. degree from Seoul National University, Korea, in 1984, the M.S. degree from the Department of Production Engineering, and the Ph.D. degree from the Department of Mechanical Engineering of the Korea Advanced Institute of Science and Technology (KAIST) in 1989 and 1995, respectively. From 1995 to 2002, he was the General Manager of the Factory Automation Research Institute of Samsung Electronics Co., Ltd., taking part in research and business in robotics and machine vision. Since 2002, he has been a professor at Seoul National University of Technology. His research interests include machine vision, artificial intelligence, robot applications, and factory automation. He likewise served as the educational director of the Institute of Control, Automation, and Systems Engineers (ICASE) (2005-2007).

Soon-Geul Lee received the B.E. degree in Mechanical Engineering from Seoul National University, Seoul, Korea, the M.S. degree in Production Engineering from KAIST, Seoul, Korea, and the Ph.D. degree in Mechanical Engineering from the University of Michigan in 1983, 1985, and 1993, respectively.
Since 1996, he has been with the Department of Mechanical Engineering of Kyung Hee University, Yongin, Korea, where he is currently a professor. His research interests include robotics and automation, mechatronics, intelligent control, and biomechanics. He likewise served as the Director of the Korean Society of Precision Engineering (KSPE) (2005-2007) and for the Institute of Control, Automation, and Systems Engineers (ICASE) (2006).