• Title/Summary/Keyword: pixel differences

Search Result 101

Emotion Recognition System Using Neural Networks in Textile Images (신경망을 이용한 텍스타일 영상에서의 감성인식 시스템)

  • Kim, Na-Yeon;Shin, Yun-Hee;Kim, Soo-Jeong;Kim, Jee-In;Jeong, Karp-Joo;Koo, Hyun-Jin;Kim, Eun-Yi
    • Journal of KIISE:Software and Applications
    • /
    • v.34 no.9
    • /
    • pp.869-879
    • /
    • 2007
  • This paper proposes a neural network based approach for automatic human emotion recognition in textile images. To investigate the correlation between emotion and pattern, a survey was conducted with 20 people, which showed that emotion is deeply affected by pattern. Accordingly, a neural network based classifier is used for recognizing the pattern included in textiles. In our system, two schemes are used for describing the pattern: a raw-pixel data extraction scheme using an auto-regressive method (RDES) and a wavelet transformed data extraction scheme (WTDES). To assess the validity of the proposed method, it was applied to recognize human emotions in 100 textiles, and the results show that using WTDES guarantees better performance than using RDES: RDES produced an accuracy of 71%, while WTDES produced an accuracy of 90%. Although there are some differences according to the data extraction scheme, the proposed method shows an accuracy of 80% on average. This result confirms that our system has the potential to be applied to various applications such as the textile industry and e-business.
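The wavelet-based feature extraction (WTDES) described above can be sketched roughly as a one-level 2D Haar transform; the image values and function name below are illustrative assumptions, not from the paper:

```python
import numpy as np

def haar_level1(img):
    """One level of a 2D Haar wavelet transform on a grayscale image:
    approximation (a) plus horizontal/vertical/diagonal detail bands."""
    a = (img[0::2, 0::2] + img[0::2, 1::2] + img[1::2, 0::2] + img[1::2, 1::2]) / 4
    h = (img[0::2, 0::2] - img[0::2, 1::2] + img[1::2, 0::2] - img[1::2, 1::2]) / 4
    v = (img[0::2, 0::2] + img[0::2, 1::2] - img[1::2, 0::2] - img[1::2, 1::2]) / 4
    d = (img[0::2, 0::2] - img[0::2, 1::2] - img[1::2, 0::2] + img[1::2, 1::2]) / 4
    return a, h, v, d

# Toy 4x4 textile patch; each 2x2 block is uniform, so details vanish.
img = np.array([[4., 4., 8., 8.],
                [4., 4., 8., 8.],
                [0., 0., 2., 2.],
                [0., 0., 2., 2.]])
a, h, v, d = haar_level1(img)
print(a)  # low-pass approximation; with the details, a compact pattern descriptor
```

The approximation and detail coefficients together would form the feature vector fed to a neural network classifier.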

The Study on Optimal Image Processing and Identifying Threshold Values for Enhancing the Accuracy of Damage Information from Natural Disasters (자연재해 피해정보 산출의 정확도 향상을 위한 최적 영상처리 및 임계치 결정에 관한 연구)

  • Seo, Jung-Taek;Kim, Kye-Hyun
    • Spatial Information Research
    • /
    • v.19 no.5
    • /
    • pp.1-11
    • /
    • 2011
  • This study mainly focused on methods of accurately extracting damage information in the imagery change detection process using constructed high-resolution aerial imagery. Bongwha-gun in Gyungsangbuk-do, which had been severely damaged by a localized torrential downpour at the end of July 2008, was selected as the study area. This study utilized aerial imagery with a photographing scale of 30 cm (gray image, pre-disaster) and 40 cm (color image, post-disaster). In order to correct errors arising from the differences in image resolution and acquisition time between the pre- and post-disaster images, preliminary image processing techniques such as normalizing, contrast enhancement, and equalizing were applied to reduce errors. The extent of the damage was calculated by one-to-one comparison of the intensity of each pixel of the pre- and post-disaster images. In this step, threshold values that facilitate extracting the extent the damage investigator wants were applied by setting difference values for the pixel intensities of the pre- and post-disaster images. The accuracy of the optimal image processing and of the resulting threshold values was verified using an error matrix. The results of the study enabled the early extraction of damage extents using aerial imagery with identical characteristics. It was also possible to apply imagery change detection to various damage items when multi-band imagery is utilized. Furthermore, more quantitative estimation of the damages would be possible with the use of numerous GIS layers such as land cover and cadastral maps.
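The pixel-by-pixel intensity differencing with a threshold can be sketched as follows (the toy images, threshold value, and function name are illustrative assumptions, not the study's data):

```python
def detect_change(pre, post, threshold):
    """Pixel-by-pixel change detection: flag pixels whose intensity
    difference between pre- and post-disaster images exceeds a threshold."""
    return [[1 if abs(a - b) > threshold else 0
             for a, b in zip(row_pre, row_post)]
            for row_pre, row_post in zip(pre, post)]

# Toy 3x3 grayscale images (0-255); only the bottom-right pixel "changed".
pre  = [[120, 121, 119], [122, 120, 118], [121, 119, 60]]
post = [[121, 120, 120], [123, 119, 117], [122, 118, 200]]
mask = detect_change(pre, post, threshold=30)
print(mask)  # -> [[0, 0, 0], [0, 0, 0], [0, 0, 1]]
```

Comparing such a mask against ground-truth damage polygons cell by cell is what populates the error matrix used for verification.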

A Study on Extraction of Croplands Located nearby Coastal Areas Using High-Resolution Satellite Imagery and LiDAR Data (고해상도 위성영상과 LiDAR 자료를 활용한 해안지역에 인접한 농경지 추출에 관한 연구)

  • Choung, Yun-Jae
    • Journal of the Korean Association of Geographic Information Studies
    • /
    • v.18 no.1
    • /
    • pp.170-181
    • /
    • 2015
  • Research on extracting croplands located near coastal areas using spatial information data sets is an important task for managing agricultural production in coastal areas. This research aims to extract the various croplands (croplands on mountains and croplands on plain areas) located near coastal areas using KOMPSAT-2 imagery, a type of high-resolution satellite imagery, and airborne topographic LiDAR (Light Detection And Ranging) data acquired in the coastal areas of Uljin, Korea. First, NDVI (Normalized Difference Vegetation Index) imagery is generated from the KOMPSAT-2 imagery, and the vegetation areas are extracted from the NDVI imagery by applying an appropriate threshold. Then, a DSM (Digital Surface Model) and DEM (Digital Elevation Model) are generated from the LiDAR data by an interpolation method, and a CHM (Canopy Height Model) is generated from the differences between the pixel values of the DSM and DEM. The plain areas are then extracted from the CHM by applying an appropriate threshold. The low-slope areas are also extracted from a slope map generated from the pixel values of the DEM. Finally, the areas of intersection of the vegetation areas, the plain areas, and the low-slope areas are extracted with the appropriate thresholds and defined as the croplands located near coastal areas. The statistical results show that 85% of the croplands on plain areas and 15% of the croplands on mountains located near coastal areas are extracted by the proposed methodology.

A Research on Applicability of Drone Photogrammetry for Dam Safety Inspection (드론 Photogrammetry 기반 댐 시설물 안전점검 적용성 연구)

  • DongSoon Park;Jin-Il Yu;Hojun You
    • Journal of the Korea institute for structural maintenance and inspection
    • /
    • v.27 no.5
    • /
    • pp.30-39
    • /
    • 2023
  • Large dams, which are critical infrastructures for disaster prevention, are exposed to various risks such as aging, floods, and earthquakes. Better dam safety inspection and diagnosis using digital transformation technologies are needed. Traditional visual inspection by human inspectors has several limitations, including many inaccessible areas, the danger of working at heights, and know-how-based subjective assessments. In this study, drone photogrammetry was performed on two large dams to evaluate the applicability of digital data-based dam safety inspection and to propose a data management methodology for continuous use. High-quality 3D digital models with a GSD (ground sampling distance) within 2.5 cm/pixel were generated by flat double-grid missions and manual photography, despite interference from the reservoir water surface and electromagnetic sources and large altitude differences, with dam heights ranging from 42 m to 99.9 m. Geometry profiles of the as-built conditions were easily extracted from the generated 3D mesh models, orthomosaic images, and digital surface models. The effectiveness of monitoring dam deformation by photogrammetry was confirmed. Cracks and deterioration of dam concrete structures, such as spillways and intake towers, were detected and visualized efficiently using the digital 3D models. This can be used for safe inspection of inaccessible areas and for avoiding risky tasks at heights. Furthermore, a methodology for mapping the inspection results onto the 3D digital model and structuring a relational database for managing deterioration-history information was proposed. Measurement of the labor and time required for a safety inspection at the SYG Dam spillway showed that the drone photogrammetry method yields a 48% productivity improvement over the conventional manpower visual inspection method. Drone photogrammetry-based dam safety inspection is therefore considered very effective in improving work productivity and data reliability.
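As a rough illustration of the GSD figure quoted above, the standard ground-sampling-distance formula can be evaluated as follows (the camera parameters are hypothetical, not the study's equipment):

```python
def gsd_cm_per_px(altitude_m, focal_mm, sensor_width_mm, image_width_px):
    """Ground sampling distance: ground size covered by one image pixel,
    GSD = (flight height * sensor width) / (focal length * image width)."""
    return (altitude_m * 100.0 * sensor_width_mm) / (focal_mm * image_width_px)

# Hypothetical small-drone camera: 8.8 mm focal length, 13.2 mm sensor,
# 5472 px image width, flown 60 m above the dam face.
gsd = gsd_cm_per_px(60.0, 8.8, 13.2, 5472)
print(round(gsd, 2))  # -> 1.64 cm/pixel, within a 2.5 cm/pixel target
```

Flying lower or using a longer focal length reduces the GSD, which is how missions are planned to stay under a target resolution.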

Intensity Compensation for Efficient Stereo Image Compression (효율적인 스테레오 영상 압축을 위한 밝기차 보상)

  • Jeon Youngtak;Jeon Byeungwoo
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.42 no.2 s.302
    • /
    • pp.101-112
    • /
    • 2005
  • As we perceive the world as 3-dimensional through our two eyes, we can extract 3-dimensional information from stereo images obtained from two or more cameras. Since stereo images have a large amount of data, efficient compression algorithms have been developed for them alongside recent advances in digital video coding technology. In order to compress stereo images and to obtain 3-D information such as depth, we find disparity vectors by using a disparity estimation algorithm that generally utilizes pixel differences between the stereo pair. However, it is not unusual for stereo images to have different intensity values for several reasons, such as incorrect control of the iris of each camera; disagreement of the foci, orientation, and position of the two cameras; different characteristics of the CCD (charge-coupled device) cameras; and so on. The intensity differences of stereo pairs often cause undesirable problems such as incorrect disparity vectors and consequently low coding efficiency. By compensating for intensity differences between the left and right images, we can obtain higher coding efficiency and hopefully reduce the perceptual burden on the brain of combining different information coming from the two eyes. We propose several methods of intensity compensation, such as local intensity compensation, global intensity compensation, and hierarchical intensity compensation, as very simple and efficient preprocessing tools. Experimental results show that the proposed algorithm provides significant improvement in coding efficiency.
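A global intensity compensation scheme can be sketched minimally as shifting the right image so its mean intensity matches the left image's (a simplified assumption of how such a scheme might work, not the paper's exact algorithm):

```python
def global_intensity_compensation(left, right):
    """Shift the right image's intensities so its mean matches the left
    image's mean before disparity estimation (a simple global scheme)."""
    n = sum(len(row) for row in left)
    mean_l = sum(map(sum, left)) / n
    mean_r = sum(map(sum, right)) / n
    offset = mean_l - mean_r
    return [[p + offset for p in row] for row in right]

left  = [[100, 110], [120, 130]]
right = [[ 90, 100], [110, 120]]   # same scene, 10 levels darker
print(global_intensity_compensation(left, right))  # matches the left image
```

Local and hierarchical variants would apply the same idea per block or per pyramid level rather than to the whole frame.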

A Study of Usefulness for Megavoltage Computed Tomography on the Radiation Treatment Planning (메가볼트 에너지 전산화 단층 촬영을 이용한 치료계획의 유용성 연구)

  • Cho, Jeong-Hee;Kim, Joo-Ho;Khang, Hyun-Soo;Lee, Jong-Seok;Yoo, Beong-Gyu
    • Journal of radiological science and technology
    • /
    • v.33 no.4
    • /
    • pp.369-378
    • /
    • 2010
  • The purpose of this study was to investigate image differences between KVCT and MVCT when high-density metal is included in the phantom, and to analyze gamma values in order to quantify the dose differences between the two methods. We verified that MVCT can be used clinically for radiation therapy treatment planning. A cheese phantom was used to obtain a density table for each CT, and the CT sinogram data were transferred to the radiation planning computer through DICOM_RT. Using these data, the treatment dose plan was calculated in the RTP system. We compared the differences in gamma values between calculated and measured values, and then applied these data to a real patient's treatment planning. The contrast of the MVCT image was superior to that of KVCT. In KVCT, pixels with a density of more than 3.0 were difficult to differentiate, but in MVCT, pixels with a density of more than 5.0 were distinguished clearly. With the normal phantom, the percentage of points with a gamma value of 1 or less ($\gamma\leq1$, the acceptance criterion) was 94.92% for KVCT and 93.87% for MVCT. But with the cheese phantom, which has a high-density plug, the percentage was 88.25% for KVCT and 93.77% for MVCT, respectively. MVCT has many advantages over KVCT. Especially when the patient has high-density metal, such as a total hip arthroplasty, MVCT is more efficient for defining the anatomical structure around the high-density implants without artifacts. MVCT thus helps to calculate the treatment dose more accurately.
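The gamma pass-rate statistic quoted above can be illustrated as follows (the gamma values below are made up for illustration, not the study's measurements):

```python
def gamma_pass_rate(gamma_values, criterion=1.0):
    """Percentage of evaluated dose points whose gamma index is within
    the acceptance criterion (gamma <= 1)."""
    passed = sum(1 for g in gamma_values if g <= criterion)
    return 100.0 * passed / len(gamma_values)

# Illustrative gamma indices for a handful of dose points (not study data).
gammas = [0.2, 0.5, 0.9, 1.0, 1.3, 0.7, 0.4, 2.1]
print(f"{gamma_pass_rate(gammas):.2f}%")  # 6 of 8 points pass -> 75.00%
```

The study's 94.92% vs 93.87% figures are this percentage computed over all evaluated points of each plan.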

Analysis of Applicability of RPC Correction Using Deep Learning-Based Edge Information Algorithm (딥러닝 기반 윤곽정보 추출자를 활용한 RPC 보정 기술 적용성 분석)

  • Jaewon Hur;Changhui Lee;Doochun Seo;Jaehong Oh;Changno Lee;Youkyung Han
    • Korean Journal of Remote Sensing
    • /
    • v.40 no.4
    • /
    • pp.387-396
    • /
    • 2024
  • Most very high-resolution (VHR) satellite images provide rational polynomial coefficients (RPC) data to facilitate the transformation between ground coordinates and image coordinates. However, the initial RPC often contains geometric errors, necessitating correction through matching with ground control points (GCPs). A GCP chip is a small image patch extracted from an orthorectified image together with height information for its center point, which can be directly used for geometric correction. Many studies have focused on area-based matching methods to accurately align GCP chips with VHR satellite images. In cases with seasonal differences or changed areas, edge-based algorithms are often used for matching because of the difficulty of relying solely on pixel values. However, traditional edge extraction algorithms, such as the Canny edge detector, require appropriate threshold settings tailored to the spectral characteristics of satellite images. Therefore, this study utilizes deep learning-based edge information that is insensitive to the regional characteristics of satellite images for matching. Specifically, we use a pretrained pixel difference network (PiDiNet) to generate edge maps for both the satellite images and the GCP chips. These edge maps are then used as input for normalized cross-correlation (NCC) and relative edge cross-correlation (RECC) to identify the peak points with the highest correlation between the two edge maps. To remove mismatched pairs and thus obtain the bias-compensated RPC, we iteratively apply data snooping. Finally, we compare the results qualitatively and quantitatively with those obtained from traditional NCC and RECC methods. The PiDiNet-based approach achieved high matching accuracy, with root mean square error (RMSE) values ranging from 0.3 to 0.9 pixels. However, the PiDiNet-generated edges were thicker than those from the Canny method, leading to slightly lower registration accuracy in some images. Nevertheless, PiDiNet consistently produced characteristic edge information, allowing successful matching even in challenging regions. This study demonstrates that improving the robustness of edge-based registration methods can facilitate effective registration across diverse regions.
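The NCC score between two edge maps, computed at each candidate offset to find the correlation peak, can be sketched for a single window as follows (the tiny edge patches are illustrative):

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation between two equally sized patches,
    e.g. the edge map of a GCP chip and a candidate satellite-image window."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom)

chip  = np.array([[0., 1., 0.], [0., 1., 0.], [0., 1., 0.]])  # vertical edge
same  = chip.copy()
other = np.array([[0., 0., 0.], [1., 1., 1.], [0., 0., 0.]])  # horizontal edge
print(ncc(chip, same))   # 1.0 for a perfect match
print(ncc(chip, other))  # near zero for a mismatched window
```

Sliding the chip over the search window and taking the offset with the maximum score gives the peak point used for RPC bias compensation.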

Comparison of Computer and Human Face Recognition According to Facial Components

  • Nam, Hyun-Ha;Kang, Byung-Jun;Park, Kang-Ryoung
    • Journal of Korea Multimedia Society
    • /
    • v.15 no.1
    • /
    • pp.40-50
    • /
    • 2012
  • Face recognition is a biometric technology used to identify individuals based on facial feature information. Previous studies of face recognition used features including the eyes, mouth, and nose; however, there have been few studies on the effects of other facial components, such as the eyebrows and chin, on recognition performance. We measured the recognition accuracy affected by these facial components and compared the differences between computer-based and human-based facial recognition methods. This research is novel in the following four ways compared to previous works. First, we measured the effect of components such as the eyebrows and chin, and the accuracy of computer-based face recognition was compared to that of human-based face recognition according to facial components. Second, for computer-based recognition, facial components were automatically detected using the AdaBoost algorithm and an active appearance model (AAM), and user authentication was achieved with a face recognition algorithm based on principal component analysis (PCA). Third, we experimentally showed that the number of facial features (when including the eyebrows, eyes, nose, mouth, and chin) had a greater impact on the accuracy of human-based face recognition, whereas consistent inclusion of certain features such as the chin area had more influence on the accuracy of computer-based face recognition, because a computer uses the pixel values of facial images when classifying faces. Fourth, we experimentally showed that the eyebrow feature enhanced the accuracy of computer-based face recognition; however, the problem of occlusion by hair should be solved in order to use the eyebrow feature for face recognition.
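The PCA-based authentication step can be sketched minimally as projection onto principal components followed by nearest-neighbour matching (the toy vectors and labels are illustrative; a real system would use AdaBoost/AAM-detected face regions):

```python
import numpy as np

# Toy "face" vectors (flattened pixel values); real systems use face images.
train = np.array([[1., 2., 3., 4.],
                  [2., 3., 4., 5.],
                  [9., 8., 7., 6.]])
labels = ["A", "A", "B"]

# PCA: center the data and keep the top principal directions via SVD.
mean = train.mean(axis=0)
_, _, vt = np.linalg.svd(train - mean, full_matrices=False)
components = vt[:2]                      # top-2 eigen-directions

def project(x):
    return components @ (x - mean)

def recognize(probe):
    """Nearest-neighbour match in the PCA subspace."""
    dists = [np.linalg.norm(project(probe) - project(t)) for t in train]
    return labels[int(np.argmin(dists))]

print(recognize(np.array([1.5, 2.5, 3.5, 4.5])))  # -> "A"
```

Because the projection works directly on pixel values, occluding or removing a region (e.g. the eyebrows) changes the vector and hence the match, which is the sensitivity the study measures.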

How to utilize vegetation survey using drone image and image analysis software

  • Han, Yong-Gu;Jung, Se-Hoon;Kwon, Ohseok
    • Journal of Ecology and Environment
    • /
    • v.41 no.4
    • /
    • pp.114-119
    • /
    • 2017
  • This study analyzed the error range and resolution of images taken by a rotary-wing drone by comparing them with field measurement results, and analyzed stand patterns in actual vegetation map preparation by comparing the drone images with aerial images provided by the National Geographic Information Institute of Korea. A total of 11 ground control points (GCPs) were selected in the area, and the coordinates of the points were identified. In the analysis of aerial images taken by the drone, the error per pixel was found to be 0.284 cm. A digital elevation model (DEM), digital surface model (DSM), and orthomosaic image were also extracted. When the drone images were compared with the coordinates of the GCPs, the root mean square error (RMSE) was 2.36, 1.37, and 5.15 m in the X, Y, and Z directions, respectively. Because of this error, there were some differences in location between images edited after field measurement and images edited without field measurement. Drone images taken over the stream and the forest were also compared with 51 cm and 25 cm resolution aerial images provided by the National Geographic Information Institute of Korea to identify stand patterns. To have a standard for classifying polygons in each aerial image, image analysis software (eCognition) was used. The analysis showed that the drone images yielded more precise polygons than the 51 cm and 25 cm resolution images provided by the National Geographic Information Institute of Korea. Therefore, if we utilize drones appropriately according to the characteristics of the subject, they offer advantages in vegetation change surveys and general monitoring surveys, as they can acquire detailed information and take images continuously.
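The per-axis RMSE against GCP coordinates can be computed as follows (the coordinates below are illustrative, not the study's survey data):

```python
import math

def rmse(measured, reference):
    """Root mean square error between drone-derived and GCP coordinates
    along one axis."""
    return math.sqrt(sum((m - r) ** 2 for m, r in zip(measured, reference))
                     / len(measured))

# Illustrative X coordinates (m) for a few GCPs, not the study's data.
gcp_x   = [100.0, 205.0, 310.0, 415.0]
drone_x = [102.0, 203.0, 312.0, 413.0]
print(round(rmse(drone_x, gcp_x), 2))  # -> 2.0
```

Repeating this for the Y and Z coordinate lists gives the three per-axis figures reported in the abstract.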

An Illumination-Insensitive Stereo Matching Scheme Based on Weighted Mutual Information (조명 변화에 강인한 상호 정보량 기반 스테레오 정합 기법)

  • Heo, Yong Seok
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.40 no.11
    • /
    • pp.2271-2283
    • /
    • 2015
  • In this paper, we propose a method that infers an accurate disparity map for radiometrically varying stereo images. To this end, we first transform the input color images into the log-chromaticity color space, in which a linear relationship can be established when constructing a joint pdf between the input stereo images. Based on this linear property, we present a new stereo matching cost that combines weighted mutual information and the SIFT (Scale Invariant Feature Transform) descriptor with segment-based plane-fitting constraints to robustly find correspondences for stereo image pairs that undergo radiometric variations. Experimental results show that our method outperforms previous methods and produces accurate disparity maps even for stereo images with severe radiometric differences.
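The illumination robustness of the log-chromaticity space can be illustrated for a single pixel (a sketch assuming a simple global gain model; the exact transform used in the paper may differ):

```python
import math

def log_chromaticity(r, g, b):
    """Log-chromaticity transform of an RGB pixel: log of each channel
    divided by the geometric mean, so a global illumination gain cancels."""
    gm = (r * g * b) ** (1.0 / 3.0)
    return (math.log(r / gm), math.log(g / gm), math.log(b / gm))

pixel    = (120.0, 80.0, 40.0)
brighter = tuple(2.0 * c for c in pixel)   # same surface, doubled gain
print(log_chromaticity(*pixel))
print(log_chromaticity(*brighter))         # matches (up to rounding): gain cancels
```

This is why a joint pdf built in this space stays close to a linear relationship even when the two cameras differ radiometrically.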