• Title/Summary/Keyword: Data Transformation (데이터-변환)

Search Results: 3,705

Digital Hologram Compression Technique By Hybrid Video Coding (하이브리드 비디오 코딩에 의한 디지털 홀로그램 압축기술)

  • Seo, Young-Ho;Choi, Hyun-Jun;Kang, Hoon-Jong;Lee, Seung-Hyun;Kim, Dong-Wook
    • Journal of the Institute of Electronics Engineers of Korea SP / v.42 no.5 s.305 / pp.29-40 / 2005
  • As the base of digital holography has broadened, discussion of compression technology has intensified: an international standard defining compression techniques for 3D images and video is being developed as 3DAV, a part of MPEG. As the 3DAV work illustrates, such a coding technique is very likely to take a hybrid form that merges, refines, or mixes various existing techniques. We therefore examine the relationship between existing image/video coding techniques and digital holograms, and in this paper we propose an efficient coding method for digital holograms that uses standard image and video compression tools. First, we convert fringe patterns into video data using the principle of CGH (Computer Generated Hologram) and then encode them. The proposed compression algorithm is composed of several methods: pre-processing for the transform, local segmentation using global information of the object image, a frequency transform for coding, scanning to turn the fringe into a video stream, classification of coefficients, and hybrid video coding; the proposed hybrid compression algorithm combines all of these. The tool for still-image coding is JPEG2000, and the tools for video coding include international standards such as MPEG-2, MPEG-4, and H.264 as well as various lossless compression algorithms. The proposed algorithm showed better reconstruction quality than previous research at compression ratios four to eight times higher. We therefore expect the proposed technique to serve as a solid basis for further research on digital hologram coding.
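The pipeline above segments a fringe pattern, frequency-transforms the segments, and hands the coefficients to standard codecs. The following is a minimal sketch of just the segmentation and frequency-transform steps, assuming an 8×8 block size and a synthetic fringe pattern; it is not the authors' implementation.

```python
# Minimal sketch (not the authors' code): split a fringe pattern into
# fixed-size blocks and apply a 2-D DCT to each block, the kind of
# frequency transform an ordinary hybrid coder would then quantize.
import numpy as np
from scipy.fft import dctn

def fringe_to_blocks(fringe: np.ndarray, block: int = 8) -> np.ndarray:
    """Split a 2-D fringe pattern into non-overlapping block x block tiles."""
    h, w = fringe.shape
    h, w = h - h % block, w - w % block           # crop to a multiple of the block size
    tiles = fringe[:h, :w].reshape(h // block, block, w // block, block)
    return tiles.transpose(0, 2, 1, 3)            # (rows, cols, block, block)

def transform_blocks(tiles: np.ndarray) -> np.ndarray:
    """Apply a 2-D DCT to every tile, yielding coefficients for later coding."""
    return dctn(tiles, axes=(-2, -1), norm="ortho")

if __name__ == "__main__":
    fringe = np.random.rand(256, 256).astype(np.float32)  # stand-in fringe pattern
    coeffs = transform_blocks(fringe_to_blocks(fringe))
    print(coeffs.shape)  # (32, 32, 8, 8)
```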

User-Perspective Issue Clustering Using Multi-Layered Two-Mode Network Analysis (다계층 이원 네트워크를 활용한 사용자 관점의 이슈 클러스터링)

  • Kim, Jieun;Kim, Namgyu;Cho, Yoonho
    • Journal of Intelligence and Information Systems / v.20 no.2 / pp.93-107 / 2014
  • In this paper, we report what we have observed with regard to user-perspective issue clustering based on multi-layered two-mode network analysis. This work is significant in the context of data collection by companies about customer needs. Most companies have failed to uncover such needs for products or services properly from demographic data such as age, income level, and purchase history. Because of excessive reliance on limited internal data, most recommendation systems do not provide decision makers with business information appropriate to current business circumstances. Part of the problem is the increasing regulation of personal data gathering and privacy, which makes demographic or transaction data collection more difficult and is a significant hurdle for traditional recommendation approaches, because these systems demand a great deal of personal data or transaction logs. Our motivation for presenting this paper is our strong belief, and evidence, that most customers' requirements for products can be effectively and efficiently analyzed from unstructured textual data such as Internet news text. In order to derive users' requirements from textual data obtained online, the approach proposed in this paper constructs double two-mode networks, a user-news network and a news-issue network, and integrates them into one quasi-network as the input for issue clustering. One contribution of this research is the development of a methodology that utilizes enormous amounts of unstructured textual data for user-oriented issue clustering by leveraging existing text mining and social network analysis. To build multi-layered two-mode networks from news logs, we need tools such as text mining and topic analysis. We used SAS Enterprise Miner 12.1, which provides text miner and clustering modules for textual data analysis, as well as NetMiner 4 for network visualization and analysis. Our approach to user-perspective issue clustering is composed of six main phases: crawling, topic analysis, access pattern analysis, network merging, network conversion, and clustering. In the first phase, we collect visit logs for news sites with a crawler. After gathering unstructured news article data, the topic analysis phase extracts issues from each news article in order to build an article-issue network; for simplicity, 100 topics are extracted from 13,652 articles. In the third phase, a user-article network is constructed from access patterns derived from web transaction logs. The double two-mode networks are then merged into a user-issue quasi-network. Finally, in the user-oriented issue-clustering phase, we classify issues through structural equivalence and compare the results with clustering results from statistical tools and network analysis. An experiment with a large dataset was performed to build the multi-layer two-mode network, after which we compared the issue-clustering results from SAS with those from network analysis. The experimental dataset came from a website-ranking service and the biggest portal site in Korea; the sample contains 150 million transaction logs and 13,652 news articles from 5,000 panelists over one year. User-article and article-issue networks were constructed and merged into a user-issue quasi-network using NetMiner. Our issue-clustering results, obtained with the Partitioning Around Medoids (PAM) algorithm and Multidimensional Scaling (MDS), are consistent with the clustering results from SAS.
In spite of extensive efforts to feed user information into recommendation systems, most projects succeed only when companies have sufficient data about users and transactions. Our proposed methodology, user-perspective issue clustering, can provide practical support for decision-making in companies because it enriches user-related data with unstructured textual data. To overcome the insufficient-data problem of traditional approaches, our methodology infers customers' real interests from web transaction logs. In addition, we suggest topic analysis and issue clustering as a practical means of issue identification.
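The core data manipulation here is merging two two-mode (bipartite) networks into a user-issue quasi-network and then grouping issues by structural equivalence. Below is a minimal sketch with assumed toy incidence matrices; the paper itself reports PAM and MDS, while hierarchical clustering is used here only to keep the example dependency-free beyond NumPy/SciPy.

```python
# Minimal sketch (toy data, not the paper's dataset): merge a user-article
# and an article-issue incidence matrix into a user-issue quasi-network by
# matrix multiplication, then cluster issues by structural equivalence of
# their user profiles.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist

rng = np.random.default_rng(0)
user_article = (rng.random((50, 30)) < 0.2).astype(float)   # 50 users x 30 articles
article_issue = (rng.random((30, 10)) < 0.3).astype(float)  # 30 articles x 10 issues

# Merge the double two-mode networks: entry (u, i) counts the articles
# through which user u touched issue i.
user_issue = user_article @ article_issue

# Structural equivalence: issues whose columns (user profiles) are similar.
dist = pdist(user_issue.T, metric="correlation")
labels = fcluster(linkage(dist, method="average"), t=4, criterion="maxclust")
print(labels)  # cluster id for each of the 10 issues
```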

Data issue and Improvement Direction for Marine Spatial Planning (해양공간계획 지원을 위한 정보 현안 및 개선 방향 연구)

  • CHANG, Min-Chol;PARK, Byung-Moon;CHOI, Yun-Soo;CHOI, Hee-Jung;KIM, Tae-Hoon;LEE, Bang-Hee
    • Journal of the Korean Association of Geographic Information Studies / v.21 no.4 / pp.175-190 / 2018
  • Recently, the policies of advanced maritime countries have shifted from preemptive use of the ocean to planned, post-evaluation development. In this study, we identify the pending issues that arise when marine spatial information is built into a GIS-based database for the Korean Marine Spatial Planning (KMSP) and suggest directions for improvement. More than 250 spatial datasets covering the seas around Korea were processed in the order of data collection, GIS conversion, data analysis and processing, data grouping, and spatial mapping. This process revealed several problems, including coordinate system errors, digitizing work required to fill gaps in the spatial information, and overlaps within the original marine spatial data. In addition, solutions are needed for excluding personal information when producing spatial data for analyzing marine-use status, and for minimizing the differences between the GIS-based spatial information and the underlying source information. Therefore, the system for collecting and securing missing marine spatial information should be strengthened for marine spatial planning, and it is necessary to link and expand the marine fisheries survey system. Marine spatial planning also requires evaluation indices for marine space and detailed marine spatial maps, as well as standard guidelines and a quality management system. The standard guidelines should be organized by phase: production, processing, analysis, and utilization; and the quality management system should be improved to ensure the quality of marine spatial information. Finally, we suggest the need for in-depth follow-up studies on expanding the openness of marine spatial information and deriving application models.

Lip Contour Detection by Multi-Threshold (다중 문턱치를 이용한 입술 윤곽 검출 방법)

  • Kim, Jeong Yeop
    • KIPS Transactions on Software and Data Engineering / v.9 no.12 / pp.431-438 / 2020
  • In this paper, a method to extract the lip contour using multiple thresholds is proposed. Spyridonos et al. proposed a method to extract the lip contour as follows. The first step is to obtain the Q image from an RGB-to-YIQ transform. The second step is to find the lip corner points by change-point detection and to split the Q image into upper and lower parts at the corner points. Candidate lip contours are obtained by applying thresholds to the Q image. For each candidate contour, a feature variance is calculated, and the contour with the maximum variance is adopted as the final contour. The feature variance 'D' is based on the absolute differences near the contour points. The conventional method has three problems. The first is related to the lip corner points: the variance calculation depends on many skin pixels, so the accuracy decreases, which in turn affects the splitting of the Q image. Second, there is no analysis of color systems other than YIQ; YIQ works well, but other color systems such as HSV, CIELUV, and YCrCb should also be considered. The final problem is related to the selection of the optimal contour: the conventional selection uses the maximum of the average feature variance over the pixels near the contour points, which shrinks the extracted contour compared with the ground-truth contour. To solve the first problem, the proposed method excludes some of the skin pixels and achieves a 30% performance increase. For the second problem, the HSV, CIELUV, and YCrCb coordinate systems were tested, and no dependency of the conventional method on the color system was found. For the final problem, the maximum of the total sum of the feature variance is adopted instead of the maximum of the average feature variance, giving a 46% performance increase. Combining all of these solutions, the proposed method is twice as accurate and stable as the conventional method.
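The first two steps, converting RGB to the Q channel of YIQ and generating candidate regions at several thresholds, can be sketched as follows. This is illustrative only: the image is synthetic and the candidate score is a stand-in for the paper's feature-variance criterion, echoing the "sum over average" choice described above.

```python
# Minimal sketch (not the paper's exact pipeline): Q channel of the YIQ
# transform plus multi-threshold candidate lip masks.
import numpy as np

def rgb_to_q(rgb: np.ndarray) -> np.ndarray:
    """Q channel of the NTSC YIQ transform; rgb is HxWx3 in [0, 1]."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    return 0.211 * r - 0.523 * g + 0.312 * b

def candidate_masks(q: np.ndarray, thresholds):
    """Binary candidate regions, one per threshold (lips have high Q values)."""
    return [(q >= t) for t in thresholds]

if __name__ == "__main__":
    rgb = np.random.rand(120, 160, 3)                 # stand-in mouth-region image
    q = rgb_to_q(rgb)
    masks = candidate_masks(q, np.linspace(q.mean(), q.max(), 5))
    # Stand-in score: total boundary contrast inside each mask (a "sum", not an "average").
    scores = [np.abs(np.gradient(m.astype(float))[0])[m].sum() for m in masks]
    print(int(np.argmax(scores)))                     # index of the selected candidate
```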

Very short-term rainfall prediction based on radar image learning using deep neural network (심층신경망을 이용한 레이더 영상 학습 기반 초단시간 강우예측)

  • Yoon, Seongsim;Park, Heeseong;Shin, Hongjoon
    • Journal of Korea Water Resources Association / v.53 no.12 / pp.1159-1172 / 2020
  • This study applied deep convolutional neural networks based on U-Net and SegNet, trained on a long record of weather radar data, to very short-term rainfall prediction, and compared the results with a translation model. For training and validation of the deep neural networks, Mt. Gwanak and Mt. Gwangdeoksan radar data were collected from 2010 to 2016 and converted to gray-scale image files in HDF5 format with a 1 km spatial resolution. The deep neural network model was trained to predict precipitation 10 minutes ahead from four consecutive radar images, and a recursive method of repeated forecasting was applied to reach a lead time of 60 minutes with the pretrained model. To evaluate the performance of the prediction model, 24 rain cases in 2017 were forecast up to 60 minutes in advance. Evaluating the mean absolute error (MAE) and critical success index (CSI) at thresholds of 0.1, 1, and 5 mm/hr, the deep neural network model performed better in terms of MAE at the 0.1 and 1 mm/hr thresholds, and better than the translation model in terms of CSI at a lead time of 50 minutes. In particular, although the deep neural network model generally outperformed the translation model for weak rainfall of 5 mm/hr or less, the evaluation at the 5 mm/hr threshold showed that it has limitations in predicting distinct, high-intensity precipitation features: as the lead time grows, spatial smoothing increases and the accuracy of the rainfall prediction decreases. The translation model turned out to be superior in predicting exceedance of higher intensity thresholds (> 5 mm/hr) because it preserves distinct precipitation features, although the rainfall position tends to shift incorrectly. This study is expected to help improve radar rainfall prediction models using deep neural networks in the future. In addition, the massive weather radar dataset established in this study will be provided through open repositories for use in subsequent studies.
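The recursive forecasting procedure described above (a 10-minute-ahead model applied six times, feeding its own output back in) can be sketched as follows. `model.predict` is a hypothetical stand-in for the trained U-Net/SegNet, and the shapes are illustrative assumptions.

```python
# Minimal sketch of the recursive nowcasting loop: extend a 10-minute model
# to a 60-minute lead time by feeding predictions back as inputs.
import numpy as np

def recursive_nowcast(model, last4_frames: np.ndarray, steps: int = 6):
    """last4_frames: (4, H, W) most recent radar images; returns (steps, H, W)."""
    window = list(last4_frames)
    forecasts = []
    for _ in range(steps):                       # 6 x 10 min = 60-minute lead time
        x = np.stack(window[-4:])[np.newaxis]    # (1, 4, H, W) model input
        y = model.predict(x)[0]                  # (H, W) frame 10 minutes ahead
        forecasts.append(y)
        window.append(y)                         # recurse on the prediction
    return np.stack(forecasts)
```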

Quantitative Assessment Technology of Small Animal Myocardial Infarction PET Image Using Gaussian Mixture Model (다중가우시안혼합모델을 이용한 소동물 심근경색 PET 영상의 정량적 평가 기술)

  • Woo, Sang-Keun;Lee, Yong-Jin;Lee, Won-Ho;Kim, Min-Hwan;Park, Ji-Ae;Kim, Jin-Su;Kim, Jong-Guk;Kang, Joo-Hyun;Ji, Young-Hoon;Choi, Chang-Woon;Lim, Sang-Moo;Kim, Kyeong-Min
    • Progress in Medical Physics / v.22 no.1 / pp.42-51 / 2011
  • Nuclear medicine images (SPECT, PET) are widely used tools for assessing myocardial viability and perfusion. However, it is difficult to define the myocardial infarct region accurately. The purpose of this study was to investigate a methodological approach for automatic measurement of rat myocardial infarct size using a polar map with an adaptive threshold. A rat myocardial infarction model was induced by ligation of the left circumflex artery. PET images were obtained after intravenous injection of 37 MBq 18F-FDG. After a 60 min uptake period, each animal was scanned for 20 min with ECG gating. PET data were reconstructed using 2D ordered-subset expectation maximization (OSEM). To automatically delineate the myocardial contour and generate the polar map, we used QGS software (Cedars-Sinai Medical Center). The reference infarct size was defined as the percentage of infarcted area in the total left myocardium using TTC staining. We used three threshold methods: a predefined threshold, Otsu's method, and a multiple Gaussian mixture model (MGMM). The predefined threshold method is commonly used in other studies; we applied threshold values from 10% to 90% in steps of 10%. The Otsu algorithm calculates the threshold that maximizes the between-class variance. The MGMM method estimates the distribution of image intensity using multiple Gaussian mixture models (MGMM2, ..., MGMM5) and computes an adaptive threshold. The infarct size in the polar map was calculated as the percentage of the total polar map area falling below the threshold. The infarct sizes measured with the different threshold methods were evaluated by comparison with the reference infarct size. The mean differences between the polar map defect size at predefined thresholds (20%, 30%, and 40%) and the reference infarct size were 7.04±3.44%, 3.87±2.09%, and 2.15±2.07%, respectively; for Otsu's method the difference was 3.56±4.16%, and for the MGMM methods 2.29±1.94%. The predefined threshold (30%) showed the smallest mean difference from the reference infarct size. However, MGMM was more accurate than the predefined threshold for cases with a reference infarct size under 10% (MGMM: 0.006%, predefined threshold: 0.59%). In this study, we evaluated myocardial infarct size in the polar map using a multiple Gaussian mixture model. The MGMM method provides an adaptive threshold for each subject and will be useful for automatic measurement of infarct size.
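The adaptive-threshold idea can be sketched as follows: fit a Gaussian mixture to the polar-map intensities, take the crossing point between the posteriors of the low-uptake and normal components as the threshold, and report the fraction of the polar map below it as the defect size. The component count and the synthetic data are assumptions, not the authors' settings.

```python
# Minimal sketch (illustrative, not the study's code): Gaussian-mixture-based
# adaptive threshold for a polar map of myocardial uptake values.
import numpy as np
from sklearn.mixture import GaussianMixture

def adaptive_threshold(polar_values: np.ndarray, n_components: int = 2) -> float:
    x = polar_values.reshape(-1, 1)
    gmm = GaussianMixture(n_components=n_components, random_state=0).fit(x)
    order = np.argsort(gmm.means_.ravel())        # components sorted by mean uptake
    grid = np.linspace(x.min(), x.max(), 1000).reshape(-1, 1)
    post = gmm.predict_proba(grid)[:, order]
    # Threshold where the low-uptake (defect) component stops dominating.
    crossing = np.argmax(post[:, 0] < post[:, 1])
    return float(grid[crossing, 0])

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    polar = np.concatenate([rng.normal(0.25, 0.05, 200),   # defect region
                            rng.normal(0.80, 0.10, 800)])  # normal myocardium
    t = adaptive_threshold(polar)
    print(t, 100.0 * np.mean(polar < t))  # threshold and infarct size (%)
```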

Suggestion of Urban Regeneration Type Recommendation System Based on Local Characteristics Using Text Mining (텍스트 마이닝을 활용한 지역 특성 기반 도시재생 유형 추천 시스템 제안)

  • Kim, Ikjun;Lee, Junho;Kim, Hyomin;Kang, Juyoung
    • Journal of Intelligence and Information Systems / v.26 no.3 / pp.149-169 / 2020
  • "The Urban Renewal New Deal project", one of the government's major national projects, is about developing underdeveloped areas by investing 50 trillion won in 100 locations on the first year and 500 over the next four years. This project is drawing keen attention from the media and local governments. However, the project model which fails to reflect the original characteristics of the area as it divides project area into five categories: "Our Neighborhood Restoration, Housing Maintenance Support Type, General Neighborhood Type, Central Urban Type, and Economic Base Type," According to keywords for successful urban regeneration in Korea, "resident participation," "regional specialization," "ministerial cooperation" and "public-private cooperation", when local governments propose urban regeneration projects to the government, they can see that it is most important to accurately understand the characteristics of the city and push ahead with the projects in a way that suits the characteristics of the city with the help of local residents and private companies. In addition, considering the gentrification problem, which is one of the side effects of urban regeneration projects, it is important to select and implement urban regeneration types suitable for the characteristics of the area. In order to supplement the limitations of the 'Urban Regeneration New Deal Project' methodology, this study aims to propose a system that recommends urban regeneration types suitable for urban regeneration sites by utilizing various machine learning algorithms, referring to the urban regeneration types of the '2025 Seoul Metropolitan Government Urban Regeneration Strategy Plan' promoted based on regional characteristics. There are four types of urban regeneration in Seoul: "Low-use Low-Level Development, Abandonment, Deteriorated Housing, and Specialization of Historical and Cultural Resources" (Shon and Park, 2017). In order to identify regional characteristics, approximately 100,000 text data were collected for 22 regions where the project was carried out for a total of four types of urban regeneration. Using the collected data, we drew key keywords for each region according to the type of urban regeneration and conducted topic modeling to explore whether there were differences between types. As a result, it was confirmed that a number of topics related to real estate and economy appeared in old residential areas, and in the case of declining and underdeveloped areas, topics reflecting the characteristics of areas where industrial activities were active in the past appeared. In the case of the historical and cultural resource area, since it is an area that contains traces of the past, many keywords related to the government appeared. Therefore, it was possible to confirm political topics and cultural topics resulting from various events. Finally, in the case of low-use and under-developed areas, many topics on real estate and accessibility are emerging, so accessibility is good. It mainly had the characteristics of a region where development is planned or is likely to be developed. Furthermore, a model was implemented that proposes urban regeneration types tailored to regional characteristics for regions other than Seoul. Machine learning technology was used to implement the model, and training data and test data were randomly extracted at an 8:2 ratio and used. 
In order to compare the performance between various models, the input variables are set in two ways: Count Vector and TF-IDF Vector, and as Classifier, there are 5 types of SVM (Support Vector Machine), Decision Tree, Random Forest, Logistic Regression, and Gradient Boosting. By applying it, performance comparison for a total of 10 models was conducted. The model with the highest performance was the Gradient Boosting method using TF-IDF Vector input data, and the accuracy was 97%. Therefore, the recommendation system proposed in this study is expected to recommend urban regeneration types based on the regional characteristics of new business sites in the process of carrying out urban regeneration projects."
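A minimal sketch of the best-performing setup reported above (TF-IDF features, Gradient Boosting classifier, 8:2 train/test split) is shown below. The documents and labels are toy stand-ins for the collected regional text data, not the study's corpus.

```python
# Minimal sketch: TF-IDF vectorization + Gradient Boosting with an 8:2 split.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score
from sklearn.pipeline import make_pipeline

docs = ["old housing repair alley", "factory closure decline",
        "historic palace tourism", "station area development"] * 25
labels = ["deteriorated", "abandonment", "historic", "low-use"] * 25

X_train, X_test, y_train, y_test = train_test_split(
    docs, labels, test_size=0.2, random_state=42)

model = make_pipeline(TfidfVectorizer(), GradientBoostingClassifier())
model.fit(X_train, y_train)
print(accuracy_score(y_test, model.predict(X_test)))
```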

Mapping and estimating forest carbon absorption using time-series MODIS imagery in South Korea (시계열 MODIS 영상자료를 이용한 산림의 연간 탄소 흡수량 지도 작성)

  • Cha, Su-Young;Pi, Ung-Hwan;Park, Chong-Hwa
    • Korean Journal of Remote Sensing / v.29 no.5 / pp.517-525 / 2013
  • Time-series data of the Normalized Difference Vegetation Index (NDVI) obtained from Moderate-Resolution Imaging Spectroradiometer (MODIS) satellite imagery yield a waveform that reveals the characteristics of the phenology. The waveform can be decomposed into harmonics of various periods by the Fourier transform, where the n-th harmonic represents the NDVI variation over a period of one year divided by n. The values of each harmonic, or their relative relationships, have been used to classify vegetation species and to build vegetation maps. Here, we propose a method to estimate the annual amount of carbon absorbed by the forest from the first-harmonic NDVI value. The first-harmonic value represents the amount of leaf growth, and by the allometric equations of trees, leaf growth can be considered proportional to the total amount of carbon absorption. We compared the first-harmonic NDVI values of 6,220 sample points with reference carbon-absorption data obtained by field surveys in the forests of South Korea. The first-harmonic values were roughly proportional to the amount of carbon absorption irrespective of the species and age of the vegetation, and the resulting proportionality constant between carbon absorption and the first-harmonic value was 236 tCO2/5.29ha/year. The total amount of carbon dioxide absorbed by the forests of South Korea over the last ten years is estimated to be about 56 million tons, which coincides with previous reports obtained by other methods. Considering that the amount of carbon absorbed is becoming a kind of currency, like carbon credits, our method is very useful because of its generality.
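The first-harmonic value used above can be sketched as the amplitude of the one-cycle-per-year Fourier component of an annual NDVI series. The series below is synthetic, and the 23-sample length simply mimics a 16-day composite product; both are assumptions for illustration.

```python
# Minimal sketch: amplitude of the first harmonic of an annual NDVI series.
import numpy as np

def first_harmonic_amplitude(ndvi: np.ndarray) -> float:
    """Amplitude of the 1-cycle-per-series Fourier component."""
    spectrum = np.fft.rfft(ndvi - ndvi.mean())
    return 2.0 * np.abs(spectrum[1]) / ndvi.size

if __name__ == "__main__":
    t = np.linspace(0, 2 * np.pi, 23, endpoint=False)
    ndvi = 0.45 + 0.25 * np.sin(t - 1.2) + 0.02 * np.random.randn(23)
    print(first_harmonic_amplitude(ndvi))   # ~0.25, the seasonal NDVI swing
```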

Perfusion MR Imaging of the Brain Tumor: Preliminary Report (뇌종양의 관류 자기공명영상: 예비보고)

  • 김홍대;장기현;성수옥;한문희;한만청
    • Investigative Magnetic Resonance Imaging / v.1 no.1 / pp.119-124 / 1997
  • Purpose: To assess the utility of the magnetic resonance (MR) cerebral blood volume (CBV) map in the evaluation of brain tumors. Materials and Methods: We performed perfusion MR imaging preoperatively in 15 consecutive patients with intracranial masses (3 meningiomas, 2 glioblastoma multiformes, 3 low-grade gliomas, 1 lymphoma, 1 germinoma, 1 neurocytoma, 1 metastasis, 2 abscesses, 1 radionecrosis). The average age of the patients was 42 years (22-68 yr); 10 were male and 5 female. All MR images were obtained on a 1.5 T imager (Signa, GE Medical Systems, Milwaukee, Wisconsin). The regional CBV map was obtained on the theoretical basis of the susceptibility difference induced by the first-pass circulation of contrast media (15 cc of gadopentetate dimeglumine, injected by hand at about 2 ml/sec, starting 10 seconds after the first baseline scan). For each patient, a total of 480 images (6 slices, 80 images/slice in 160 sec) were obtained using a gradient-echo (GE) single-shot echo-planar imaging (EPI) sequence (TR 2000 ms, TE 50 ms, flip angle 90°, FOV 240×240 mm, matrix 128×128, slice thickness/gap 5/2.5 mm). After data collection, the raw data were transferred to a GE workstation and rCBV maps were generated from the numerical integration of ΔR2* on a voxel-by-voxel basis with home-made software (ΔR2* = -ln(S/S0)/TE). For easy visual interpretation, relative rCBV color coding with reference to the normal white matter was applied and color rCBV maps were obtained. The findings of the perfusion MR images were retrospectively correlated with Gd-enhanced images, focusing on the degree and extent of perfusion and contrast enhancement. Results: Two cases of glioblastoma multiforme with rim enhancement on Gd-enhanced T1-weighted images showed increased perfusion in the peripheral rim and decreased perfusion in the central necrotic portion. The low-grade gliomas appeared as low-perfusion areas with poorly defined margins. In the 2 cases of brain abscess, the degree of perfusion in the peripheral enhancing rim was similar to that of normal white matter and was low in the central portion. All meningiomas showed diffuse, homogeneous, moderately or highly increased perfusion. One lymphoma and one germinoma each showed homogeneously decreased perfusion with a well-defined margin. The central neurocytoma showed multifocal areas of moderately or highly increased perfusion. A few nodules of the multiple metastases showed moderately increased perfusion. One radionecrosis revealed multiple foci of increased perfusion within an area of decreased perfusion. Conclusion: The rCBV map appears to correlate well with the perfusion state of brain tumors and may be helpful in discriminating between low-grade and high-grade gliomas. Further study is needed to clarify the role of perfusion MR imaging in the evaluation of brain tumors.
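The rCBV computation described above converts the dynamic signal S(t) to ΔR2*(t) = -ln(S/S0)/TE voxel by voxel and integrates it over the bolus passage. A minimal sketch follows; the baseline window and the synthetic data are assumptions, not the study's parameters.

```python
# Minimal sketch of an rCBV map from dynamic susceptibility-contrast data.
import numpy as np

def rcbv_map(dynamic: np.ndarray, te: float = 0.05, n_baseline: int = 5) -> np.ndarray:
    """dynamic: (T, H, W) signal over time; returns an (H, W) relative CBV map."""
    s0 = dynamic[:n_baseline].mean(axis=0)                 # pre-bolus baseline S0
    delta_r2 = -np.log(dynamic / s0) / te                  # Delta-R2*(t) per voxel
    return delta_r2.sum(axis=0)                            # numerical integration (unit time step)

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    t = np.arange(80)[:, None, None]
    bolus = np.exp(-0.5 * ((t - 20) / 4.0) ** 2)           # first-pass signal-drop shape
    signal = 1000.0 * (1.0 - 0.3 * bolus) + rng.normal(0, 5, (80, 64, 64))
    print(rcbv_map(signal).shape)  # (64, 64)
```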


Speech Recognition Using Linear Discriminant Analysis and Common Vector Extraction (선형 판별분석과 공통벡터 추출방법을 이용한 음성인식)

  • 남명우;노승용
    • The Journal of the Acoustical Society of Korea / v.20 no.4 / pp.35-41 / 2001
  • This paper describes Linear Discriminant Analysis and common vector extraction for speech recognition. A voice signal contains psychological and physiological properties of the speaker as well as dialect differences, acoustical environment effects, and phase differences. For these reasons, the same word spoken by different speakers can sound very different, which makes it very difficult to extract common properties within the same speech class (word or phoneme). Linear algebra methods such as the KLT (Karhunen-Loève Transformation) are generally used to extract common properties from speech signals, but this paper uses the common vector extraction method suggested by M. Bilginer et al. That method extracts the optimized common vector from the speech signals used for training and achieves 100% recognition accuracy on the training data used for common vector extraction. Despite these characteristics, the method has some drawbacks: the number of speech signals that can be used for training is limited, and the discriminant information among common vectors is not defined. This paper suggests an improved method that reduces the error rate by maximizing the discriminant information among common vectors, and a novel method to normalize the size of the common vector is also added. The results show improved performance of the algorithm and a recognition accuracy 2% better than the conventional method.
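The common vector idea can be sketched as projecting a class's training vectors onto the orthogonal complement of their difference subspace, so that every training vector maps to the same "common vector". The feature vectors below are random stand-ins for speech features, and this is an illustration of the general technique rather than the paper's improved method.

```python
# Minimal sketch of common vector extraction for one speech class.
import numpy as np

def common_vector(features: np.ndarray) -> np.ndarray:
    """features: (m, d) training vectors of one class, with m <= d."""
    diffs = features[1:] - features[0]                  # difference subspace generators
    # Orthonormal basis of the difference subspace via SVD.
    u, s, _ = np.linalg.svd(diffs.T, full_matrices=False)
    basis = u[:, s > 1e-10]
    # Remove the within-class variation from a training vector.
    return features[0] - basis @ (basis.T @ features[0])

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    feats = rng.normal(size=(8, 64))                    # 8 utterances, 64-dim features
    cv = common_vector(feats)
    # Invariance check: the same common vector regardless of the reference vector.
    print(np.allclose(cv, common_vector(np.roll(feats, 1, axis=0))))
```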
