• Title/Summary/Keyword: Improvement of Simulation Accuracy

Differential Multi-view Video Coding using View Interpolation (시점 보간법을 이용한 차분 다시점 비디오 부호화 방법)

  • Lee, Sang-Beom;Kim, Jun-Yup;Ho, Yo-Sung;Choi, Byeong-Ho
    • Proceedings of the Korean Society of Broadcast Engineers Conference
    • /
    • 2005.11a
    • /
    • pp.29-32
    • /
    • 2005
  • Three-dimensional video is one of the next-generation information and communication services and aims to provide users with visually rich, high-level services. Among these, multi-view video is a form of three-dimensional video that acquires images from several viewpoints at the same time and provides users with the viewpoint they want; broadcasting research institutes are actively studying it for next-generation realistic broadcast multimedia services. The MPEG standardization group is currently standardizing multi-view video coding (MVC), and several methods based on the latest video compression standard, H.264, have been proposed. The current reference method for the MVC standardization work encodes each view independently with H.264, which ignores the spatial correlation between adjacent views, an important property of multi-view video. This paper proposes an algorithm that encodes the residual between the original image and an intermediate image obtained by view interpolation. Here, view interpolation means estimating disparities from the left and right views and using them to synthesize the intermediate view. For example, the odd-numbered views are coded by the conventional method, while each even-numbered view is predicted by interpolation from the already coded odd-numbered views, and the residual between the prediction and the original is encoded. The residual image has much lower complexity than the original and therefore yields better coding efficiency. However, because the synthesized image is generated independently for each frame, the temporal correlation of the residual is lower than that of the original; coding efficiency improves greatly for I frames, but P and B frames, which exploit temporal correlation, actually perform worse. (A minimal sketch of the residual computation is given after this entry.)

  • PDF
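
The differential coding idea above (encode the difference between an even view and a view interpolated from its already coded odd neighbours) can be illustrated with a minimal sketch. This is an assumption-laden toy: it uses a single global disparity and simple averaging, whereas the paper estimates per-pixel disparities; all names and values are illustrative.

```python
# Toy sketch of differential coding of an even view against an
# interpolated view (single global disparity, grayscale frames).
import numpy as np

def interpolate_middle_view(left: np.ndarray, right: np.ndarray,
                            disparity: int) -> np.ndarray:
    """Synthesize the middle view by shifting the side views toward the
    centre by half the disparity and averaging them."""
    half = disparity // 2
    left_shifted = np.roll(left, -half, axis=1)
    right_shifted = np.roll(right, half, axis=1)
    return (left_shifted.astype(np.float32) + right_shifted.astype(np.float32)) / 2.0

def residual_image(original: np.ndarray, synthesized: np.ndarray) -> np.ndarray:
    """Residual that would be passed to the encoder instead of the frame."""
    return original.astype(np.float32) - synthesized

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    left = rng.integers(0, 256, (64, 64)).astype(np.uint8)
    right = np.roll(left, 4, axis=1)       # toy stereo pair
    middle = np.roll(left, 2, axis=1)      # "true" middle view
    synth = interpolate_middle_view(left, right, disparity=4)
    res = residual_image(middle, synth)
    # The residual carries far less energy than the original frame.
    print(float(np.abs(res).mean()), float(middle.astype(np.float32).mean()))
```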

Application of Very Short-Term Rainfall Forecasting to Urban Water Simulation using TREC Method (TREC기법을 이용한 초단기 레이더 강우예측의 도시유출 모의 적용)

  • Kim, Jong Pil;Yoon, Sun Kwon;Kim, Gwangseob;Moon, Young Il
    • Journal of Korea Water Resources Association
    • /
    • v.48 no.5
    • /
    • pp.409-423
    • /
    • 2015
  • In this study, very short-term rainfall forecasting and storm-water runoff forecasting using weather radar data were applied to an urban stream basin. The very short-term rainfall forecasts show that, as the forecast lead time increases, the correlation coefficient decreases and the root mean square error increases, so forecast accuracy degrades; nevertheless, the correlation coefficient remained at 0.5 or higher up to a 60-minute lead time. In the urban storm-water forecasts, peak flow and runoff volume decreased as the lead time increased, while the peak time matched the observations relatively well. The errors that occurred when the radar rainfall forecasts were fed into runoff forecasting were attributed in part to external factors. In the future, continuous improvement of the algorithm will be needed, including simulation of the rapid generation and dissipation of precipitation echoes, better forecasting of extreme rainfall in urban areas, and optimization of the rainfall-runoff model parameters. The results of this study can be extended beyond the studied urban stream basin to real-time flood alarm systems over ungauged basins where observed data can be obtained, and can also be used in developing multi-sensor based very short-term rainfall forecasting technology.
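
A small sketch of the verification used above (correlation coefficient and RMSE versus lead time). The toy fields, shapes, and function names are illustrative assumptions, not the paper's data or code.

```python
# Forecast verification: correlation coefficient and RMSE between
# forecast and observed rainfall fields as the lead time grows.
import numpy as np

def verify(forecast: np.ndarray, observed: np.ndarray) -> tuple[float, float]:
    """Return (correlation coefficient, RMSE) for one lead time."""
    f, o = forecast.ravel(), observed.ravel()
    corr = float(np.corrcoef(f, o)[0, 1])
    rmse = float(np.sqrt(np.mean((f - o) ** 2)))
    return corr, rmse

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    observed = rng.gamma(2.0, 2.0, (100, 100))        # toy rainfall field
    for lead in (10, 30, 60):                         # minutes
        noise = rng.normal(0, lead / 30.0, observed.shape)
        forecast = observed + noise                   # skill degrades with lead time
        corr, rmse = verify(forecast, observed)
        print(f"t+{lead:>2} min  corr={corr:.2f}  rmse={rmse:.2f}")
```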

Skew Compensation and Text Extraction of The Traffic Sign in Natural Scenes (자연영상에서 교통 표지판의 기울기 보정 및 텍스트 추출)

  • Choi Gyu-Dam;Kim Sung-Dong;Choi Ki-Ho
    • The Journal of The Korea Institute of Intelligent Transport Systems
    • /
    • v.3 no.2 s.5
    • /
    • pp.19-28
    • /
    • 2004
  • This paper shows how to compensate for the skew of a traffic sign in a natural image and extract its text. The research deals with the processing applied to the acquired image, and the whole process comprises four steps. In the first step, we perform preprocessing and Canny edge extraction on the natural image. In the second step, we perform preprocessing and postprocessing for the Hough transform in order to extract the skew angle. In the third step, we remove noise and complex lines and then extract candidate regions using features of the text. In the last step, after local binarization of the extracted candidate regions, we extract the text by using feature differences between text and non-text to discard unnecessary non-text regions. In an experiment on 100 natural images containing traffic signs, the method extracted 82.54 percent of the text with an extraction accuracy of 79.69 percent, more accurate text extraction than existing methods such as those using RLS (run length smoothing) or the Fourier transform. The method also achieved 94.5 percent correct extraction of the skew angle, an improvement of 26 percent over using only the Hough transform. The research can be applied to providing location information in walking aid systems for the blind or in the operation of driverless vehicles. (A rough sketch of the skew-angle step appears after this entry.)

  • PDF
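
A rough sketch of the skew-angle step referenced above: Canny edge detection followed by a Hough line transform, with the dominant line angle taken as the sign's skew. The thresholds and the median-angle heuristic are assumptions for illustration, not the authors' exact pipeline.

```python
# Skew estimation and correction via Canny + Hough lines (OpenCV).
import cv2
import numpy as np

def estimate_skew_deg(image_bgr: np.ndarray) -> float:
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)
    lines = cv2.HoughLines(edges, 1, np.pi / 180, threshold=100)
    if lines is None:
        return 0.0
    # theta is the angle of the line normal; convert to degrees from horizontal.
    angles = [np.degrees(theta) - 90.0 for _, theta in lines[:, 0]]
    return float(np.median(angles))

def deskew(image_bgr: np.ndarray, angle_deg: float) -> np.ndarray:
    h, w = image_bgr.shape[:2]
    m = cv2.getRotationMatrix2D((w / 2, h / 2), angle_deg, 1.0)
    return cv2.warpAffine(image_bgr, m, (w, h), flags=cv2.INTER_LINEAR)
```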

Three-Dimensional Conversion of Two-Dimensional Movie Using Optical Flow and Normalized Cut (Optical Flow와 Normalized Cut을 이용한 2차원 동영상의 3차원 동영상 변환)

  • Jung, Jae-Hyun;Park, Gil-Bae;Kim, Joo-Hwan;Kang, Jin-Mo;Lee, Byoung-Ho
    • Korean Journal of Optics and Photonics
    • /
    • v.20 no.1
    • /
    • pp.16-22
    • /
    • 2009
  • We propose a method to convert a two-dimensional movie into a three-dimensional movie using the normalized cut and optical flow. In this paper, we first segment each frame of the two-dimensional movie into objects and then estimate the depth of each object. The normalized cut is an image segmentation algorithm; to improve its speed and accuracy, we use a watershed algorithm and a weight function based on optical flow. We then estimate the depth of the objects segmented by the improved normalized cut using optical flow. Ordinal depth is estimated from the change of the segmented object labels in occluded regions, which are identified from the difference of the absolute values of the optical flow. To complement the ordinal depth, we generate a relational depth, the absolute value of the optical flow interpreted as motion parallax. The final depth map is obtained by multiplying the ordinal depth by the relational depth and dividing by the average optical flow. We propose a two-dimensional/three-dimensional movie conversion method that is applicable to all three-dimensional display devices and all two-dimensional movie formats, and we present experimental results on sample two-dimensional movies.
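
The final depth-map combination described above (ordinal depth multiplied by relational depth, divided by the average optical flow) can be sketched as follows; the per-pixel ordinal depth is an assumed input from the improved normalized-cut step, which is not reproduced here.

```python
# Combine ordinal depth with optical-flow-based relational depth.
import cv2
import numpy as np

def final_depth_map(prev_gray: np.ndarray, next_gray: np.ndarray,
                    ordinal_depth: np.ndarray) -> np.ndarray:
    """Relational depth = optical-flow magnitude (motion parallax);
    final depth = ordinal * relational / average flow."""
    flow = cv2.calcOpticalFlowFarneback(prev_gray, next_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    relational = np.linalg.norm(flow, axis=2)   # per-pixel flow magnitude
    avg_flow = relational.mean() + 1e-6         # avoid division by zero
    return ordinal_depth * relational / avg_flow
```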

Generation of Feature Map for Improving Localization of Mobile Robot based on Stereo Camera (스테레오 카메라 기반 모바일 로봇의 위치 추정 향상을 위한 특징맵 생성)

  • Kim, Eun-Kyeong;Kim, Sung-Shin
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology
    • /
    • v.13 no.1
    • /
    • pp.58-63
    • /
    • 2020
  • This paper proposes a method for improving the localization accuracy of a mobile robot based on a stereo camera. To restore position information from the stereo images, the point on the right image corresponding to each pixel on the left image must be found. The conventional approach searches for the corresponding point by computing pixel similarities along the epipolar line, but it has disadvantages: every pixel on the line must be evaluated, and the similarity is based only on pixel values such as RGB color. To make up for this weak point, this paper finds corresponding points simply from the x-coordinate gap between feature points that are extracted and matched by a feature extraction and matching method and that lie on the same y-coordinate in the left and right images. In addition, when features remain unmatched, the proposed method falls back to the conventional algorithm so that as many feature points as possible are preserved, because the number of feature points affects the localization accuracy. The position of the mobile robot is then compensated based on the 3-D coordinates restored from the feature points and their corresponding points. Experimental results show that the proposed method increases the number of feature points available for position compensation and compensates the robot position better than feature extraction alone.
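
A simplified sketch of the correspondence idea in this abstract: feature matches between the left and right images are accepted when they lie on (nearly) the same row, and the x-coordinate gap is used as the disparity to restore depth. The ORB detector, the row tolerance, and the focal-length/baseline values are illustrative assumptions.

```python
# Row-constrained feature matching and disparity-to-depth conversion.
import cv2
import numpy as np

def feature_disparities(left_gray: np.ndarray, right_gray: np.ndarray,
                        y_tol: float = 1.0):
    orb = cv2.ORB_create(1000)
    kp_l, des_l = orb.detectAndCompute(left_gray, None)
    kp_r, des_r = orb.detectAndCompute(right_gray, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    points = []
    for m in matcher.match(des_l, des_r):
        (xl, yl), (xr, yr) = kp_l[m.queryIdx].pt, kp_r[m.trainIdx].pt
        if abs(yl - yr) <= y_tol and xl > xr:   # same row, positive disparity
            points.append((xl, yl, xl - xr))    # (x, y, disparity)
    return points

def to_depth(disparity: float, focal_px: float = 700.0,
             baseline_m: float = 0.12) -> float:
    return focal_px * baseline_m / disparity    # Z = f * B / d
```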

Analysis of Relationships between Features Extracted from SAR Data and Land-cover Classes (SAR 자료에서 추출한 특징들과 토지 피복 항목 사이의 연관성 분석)

  • Park, No-Wook;Chi, Kwang-Hoon;Lee, Hoon-Yol
    • Korean Journal of Remote Sensing
    • /
    • v.23 no.4
    • /
    • pp.257-272
    • /
    • 2007
  • This paper analyzed the relationships between land-cover classes and various features extracted from SAR data with multiple acquisition dates and modes (frequency, polarization, and incidence angle). Two typical types of features were extracted, considering the acquisition conditions of currently available SAR data. First, coherence, temporal variability, and principal-component-transform-based features were extracted from multi-temporal, single-mode SAR data. C-band ERS-1/2, ENVISAT ASAR and Radarsat-1 data and L-band JERS-1 SAR data were used for these features, and the different characteristics of the different SAR sensors were discussed in terms of land-cover discrimination capability. Overall, tandem coherence showed the best discrimination capability among the features. Long-term coherence from C-band SAR data provided useful information for discriminating urban areas from other classes. Paddy fields showed the highest temporal variability in all SAR sensor data. Features from the principal component transform contained information relevant to specific land-cover classes. As features for multiple-mode SAR data acquired on similar dates, the polarization ratio and multi-channel variability were also considered. The VH/VV polarization ratio was useful for discriminating forest from dry fields, for which the distributions of coherence and temporal variability overlap significantly. These case-study results should be useful for improving classification accuracy in land-cover classification with SAR data, provided that the main findings are confirmed by extensive case studies based on multi-temporal SAR data with various modes and ground-based SAR experiments.
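
Two of the features discussed above can be illustrated with a short sketch: per-pixel temporal variability from a multi-temporal backscatter stack and the VH/VV polarization ratio in decibels. The array names, shapes, and linear-scale assumption are placeholders, not the paper's processing chain.

```python
# Simple SAR feature computations on NumPy arrays.
import numpy as np

def temporal_variability(stack_linear: np.ndarray) -> np.ndarray:
    """Per-pixel coefficient of variation over the time axis
    of a (time, rows, cols) backscatter stack in linear units."""
    mean = stack_linear.mean(axis=0)
    std = stack_linear.std(axis=0)
    return std / (mean + 1e-12)

def polarization_ratio_db(vh_linear: np.ndarray, vv_linear: np.ndarray) -> np.ndarray:
    """VH/VV ratio expressed in decibels."""
    return 10.0 * np.log10(vh_linear / (vv_linear + 1e-12) + 1e-12)
```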

A Quality Prediction Model for Ginseng Sprouts based on CNN (CNN을 활용한 새싹삼의 품질 예측 모델 개발)

  • Lee, Chung-Gu;Jeong, Seok-Bong
    • Journal of the Korea Society for Simulation
    • /
    • v.30 no.2
    • /
    • pp.41-48
    • /
    • 2021
  • As the rural population continues to decline and age, improving agricultural productivity is becoming more important. Early prediction of crop quality can play an important role in improving agricultural productivity and profitability. Although many recent studies have used CNN-based deep learning and transfer learning to classify diseases and predict crop yield, few studies predict post-harvest crop quality early, at the planting stage. In this study, an early quality prediction model is proposed for ginseng sprouts, which are drawing attention as a health functional food. To this end, we photographed ginseng seedlings at the planting stage and cultivated them hydroponically. After harvest, the quality data were labeled by grading the quality of the ginseng sprouts. With these data, we built early quality prediction models from several pre-trained CNN models through transfer learning, and compared their prediction performance, such as training time and accuracy. The results show more than 80% prediction accuracy for all proposed models, with the ResNet152V2-based model showing the highest accuracy. This study is expected to contribute to production and profitability by automating the existing seedling screening work, which currently relies primarily on manpower.
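
A minimal transfer-learning sketch in the spirit of this study: a pre-trained ResNet152V2 backbone with a small classification head for the labeled quality grades. The image size, number of classes, and the commented training call are assumptions, not values from the paper.

```python
# Transfer learning with a frozen ResNet152V2 backbone (Keras).
import tensorflow as tf

NUM_CLASSES = 3          # assumed number of quality grades
IMG_SIZE = (224, 224)    # assumed input resolution

base = tf.keras.applications.ResNet152V2(
    include_top=False, weights="imagenet", input_shape=IMG_SIZE + (3,))
base.trainable = False   # freeze the pre-trained backbone

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dropout(0.3),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(train_ds, validation_data=val_ds, epochs=20)  # datasets assumed
```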

Development of an Intelligent Illegal Gambling Site Detection Model Based on Tag2Vec (Tag2vec 기반의 지능형 불법 도박 사이트 탐지 모형 개발)

  • Song, ChanWoo;Ahn, Hyunchul
    • Journal of Intelligence and Information Systems
    • /
    • v.28 no.4
    • /
    • pp.211-227
    • /
    • 2022
  • Illegal gambling through online gambling sites has become a significant social problem. The development of Internet technology and the spread of smartphones have led to the proliferation of illegal gambling sites, so illegal online gambling is now accessible to anyone. To mitigate its negative effects, the Korean government tries to detect illegal gambling sites using self-monitoring agents or reporting systems such as 'Nuricops.' However, it is difficult to detect all illegal sites because of limitations such as a lack of staff. Accordingly, several scholars have proposed intelligent illegal gambling site detection techniques. Xu et al. (2019) found that fake or illegal websites generally have distinctive features in their HTML tag structure, which implies that the HTML tag structure can be important for detecting illegal sites. However, prior studies that use the HTML tag structure to improve the performance of illegal site detection models are rare. Against this background, our study aims to improve detection performance by utilizing the HTML tag structure and proposes Tag2Vec, a modified version of Doc2Vec, as a methodology to vectorize the HTML tag structure properly. To validate the proposed model, we performed an empirical analysis using a data set consisting of harmful sites listed by 'The Cheat' and normal sites collected through Google search. As a result, the Tag2Vec-based detection model proposed in this study showed better classification accuracy, recall, and F1 score than the URL-based detection model used for comparison. The proposed model is expected to be utilized effectively to improve the health of our society through intelligent technology.
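
A hedged sketch of the Tag2Vec idea: treat the ordered HTML tag names of a page as the "words" of a document, embed them with gensim's Doc2Vec, and then classify the resulting vectors. The parsing, hyperparameters, and classifier choice are illustrative, not the authors' exact configuration.

```python
# Embed HTML tag sequences with Doc2Vec (a stand-in for Tag2Vec).
from bs4 import BeautifulSoup
from gensim.models.doc2vec import Doc2Vec, TaggedDocument

def tag_sequence(html: str) -> list[str]:
    """Flatten a page into the ordered list of its HTML tag names."""
    soup = BeautifulSoup(html, "html.parser")
    return [t.name for t in soup.find_all(True)]

def train_tag2vec(pages: list[str], vector_size: int = 100) -> Doc2Vec:
    """Train a Doc2Vec model where each 'document' is a tag sequence."""
    docs = [TaggedDocument(tag_sequence(h), [i]) for i, h in enumerate(pages)]
    return Doc2Vec(docs, vector_size=vector_size, window=5,
                   min_count=1, epochs=40)

# model = train_tag2vec(pages)                 # pages: list of raw HTML strings
# vectors = [model.dv[i] for i in range(len(pages))]
# The vectors would then be fed to any standard classifier
# (e.g., logistic regression) labeled illegal vs. normal.
```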

Accuracy Evaluation of CT-Based Attenuation Correction in SPECT with Different Energy of Radioisotopes (SPECT/CT에서 CT를 기반으로 한 Attenuation Correction의 정확도 평가)

  • Kim, Seung Jeong;Kim, Jae Il;Kim, Jung Soo;Kim, Tae Yeop;Kim, Soo Mee;Woo, Jae Ryong;Lee, Jae Sung;Kim, Yoo Kyeong
    • The Korean Journal of Nuclear Medicine Technology
    • /
    • v.17 no.1
    • /
    • pp.25-29
    • /
    • 2013
  • Purpose: In this study, we evaluated the accuracy of CT-based attenuation correction (AC) under the conventional CT protocol (140 kVp, on average 50-60 keV) by comparing SPECT image quality for radioisotopes of different energies, $^{201}Tl$, $^{99m}Tc$ and $^{131}I$. Materials and Methods: Using a cylindrical phantom, three SPECT scans of $^{201}Tl$ (70 keV, 55.5 MBq), $^{99m}Tc$ (140 keV, 281.2 MBq) and $^{131}I$ (364 keV, 96.2 MBq) were performed. The CT image was obtained at 140 kVp and 2.5 mA on a GE Hawkeye 4. The OSEM reconstruction algorithm was run with 2 iterations and 10 subsets. The experiments were performed under 4 conditions: neither AC nor scatter correction (SC), AC only, SC only, and AC with SC, evaluated in terms of uniformity and center-to-peripheral ratio (CPR). Results: The uniformity was calculated over the uniform whole region of the reconstructed images. For $^{201}Tl$ and $^{99m}Tc$, the uniformity improved by about 10-20% when AC was applied, but decreased by about 2% when SC was applied. The uniformity of $^{131}I$ increased slightly only when both AC and SC were applied. The CPR of the reconstructed image was close to 1 when AC was applied for the $^{201}Tl$ and $^{99m}Tc$ scans, whereas for $^{131}I$ it remained far from 1 with AC alone. Conclusion: Image uniformity was improved by AC for low-energy isotopes such as $^{201}Tl$ and $^{99m}Tc$, whereas for a high-energy isotope such as $^{131}I$ it improved only when both AC and SC were applied. (A short sketch of the uniformity and CPR metrics is given after this entry.)

  • PDF
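
The two figures of merit used above can be sketched as follows: an integral uniformity over a uniform region and a center-to-peripheral ratio (CPR) computed on a reconstructed slice. The ROI definitions are simplified assumptions, not the study's exact protocol.

```python
# Simple uniformity and CPR metrics on a reconstructed 2-D slice.
import numpy as np

def integral_uniformity(roi: np.ndarray) -> float:
    """(max - min) / (max + min) within a uniform region of interest."""
    return float((roi.max() - roi.min()) / (roi.max() + roi.min()))

def center_to_peripheral_ratio(slice_img: np.ndarray,
                               center_r: int = 5, ring_r: int = 20,
                               ring_width: int = 3) -> float:
    """Mean counts in a small central ROI divided by the mean in a
    concentric ring ROI placed inside the phantom."""
    h, w = slice_img.shape
    yy, xx = np.mgrid[:h, :w]
    dist = np.hypot(yy - h / 2, xx - w / 2)
    center = slice_img[dist <= center_r].mean()
    ring = slice_img[(dist >= ring_r) & (dist <= ring_r + ring_width)].mean()
    return float(center / ring)
```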

Fast HEVC Encoding based on CU-Depth First Decision (CU 깊이 우선 결정 기반의 HEVC 고속 부호화 방법)

  • Yoo, Sung-Eun;Ahn, Yong-Jo;Sim, Dong-Gyu
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.49 no.3
    • /
    • pp.40-50
    • /
    • 2012
  • In this paper, we propose a fast CU (coding unit) mode decision method. To reduce computational complexity and save encoding time in HEVC, we divide the CU, PU (prediction unit) and TU (transform unit) decision process into two stages. In the first stage, because the $2N{\times}2N$ PU mode is selected most often among the $2N{\times}2N$, $N{\times}2N$, $2N{\times}N$ and $N{\times}N$ PU modes, the proposed algorithm uses only the $2N{\times}2N$ PU mode to decide the depth of each CU in the LCU (largest CU). The proposed method then decides the exact PU and TU modes at the depth level chosen in the first stage. In addition, an early SKIP decision rule is applied to obtain a further reduction in computational complexity. The proposed method reduces the computational complexity of the HEVC encoder by simplifying the CU depth decision. We obtained about 50% lower computational complexity than the HM 3.3 HEVC reference software, while the bitrate increased by only 2%.
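
A schematic sketch of the two-stage decision described above; the cost functions passed in (rd_cost_2Nx2N, full_pu_tu_search, skip_cost) are placeholders standing in for the encoder's actual rate-distortion checks, and the early-SKIP threshold is an assumption.

```python
# Two-stage CU depth decision: pick the depth with 2Nx2N only,
# then run the full PU/TU search at that depth.
MAX_DEPTH = 3

def decide_cu(lcu, rd_cost_2Nx2N, full_pu_tu_search, skip_cost,
              skip_threshold):
    # Early-SKIP test: if SKIP/merge is already cheap, stop immediately.
    if skip_cost(lcu, depth=0) < skip_threshold:
        return 0, "SKIP"
    # Stage 1: choose the CU depth using only the 2Nx2N PU mode.
    best_depth = min(range(MAX_DEPTH + 1),
                     key=lambda d: rd_cost_2Nx2N(lcu, d))
    # Stage 2: run the full PU/TU mode decision only at that depth.
    best_mode = full_pu_tu_search(lcu, best_depth)
    return best_depth, best_mode
```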