• Title/Summary/Keyword: Feature mapping

Feature Analysis of Metadata Schemas for Records Management and Archives from the Viewpoint of Records Lifecycle (기록 생애주기 관점에서 본 기록관리 메타데이터 표준의 특징 분석)

  • Baek, Jae-Eun;Sugimoto, Shigeo
    • Journal of Korean Society of Archives and Records Management, v.10 no.2, pp.75-99, 2010
  • Digital resources are widely used in our modern society. However, we face fundamental problems in maintaining and preserving digital resources over time. Several standard methods for preserving digital resources have been developed and are in use. It is widely recognized that metadata is one of the most important components for digital archiving and preservation. There are many metadata standards for the archiving and preservation of digital resources, and each standard has its own features in accordance with its primary application. This means that each schema has to be appropriately selected and tailored for a particular application, and, in some cases, schemas are combined in a larger framework and container metadata such as the DCMI application framework and METS. We used the following metadata standards in this study for the feature analysis: AGLS Metadata, which is defined to improve search of both digital and non-digital resources; ISAD(G), which is a commonly used standard for archives; EAD, which is widely used for digital archives; OAIS, which defines a metadata framework for preserving digital objects; and PREMIS, which is designed primarily for the preservation of digital resources. In addition, we extracted attributes from the decision tree defined for the digital preservation process by the Digital Preservation Coalition (DPC) and compared the set of attributes with these metadata standards. This paper shows the features of these metadata standards obtained through a feature analysis based on the records lifecycle model. The features are shown in a single framework, which makes it easy to relate the tasks in the lifecycle to the metadata elements of these standards. As a result of the detailed analysis of the metadata elements, we clarified the features of the standards from the viewpoint of the relationships between the elements and the lifecycle stages. Mapping between metadata schemas is often required in the long-term preservation process because different schemas are used across the records lifecycle. Therefore, it is crucial to build a unified framework to enhance the interoperability of these schemas. This study presents a basis for the interoperability of the different metadata schemas used in digital archiving and preservation.
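
The element-level mapping this study motivates can be prototyped as a simple crosswalk table. Below is a minimal sketch in Python; the ISAD(G)/EAD element pairs are illustrative assumptions, not the mapping derived in the paper.

```python
# Minimal metadata-crosswalk sketch: map a record's elements from one
# schema to another via an explicit element-to-element table.
# The element pairs below are illustrative assumptions, not the
# mapping derived in the paper.

CROSSWALK = {
    ("ISAD(G)", "EAD"): {
        "3.1.2 Title": "unittitle",
        "3.1.3 Dates": "unitdate",
        "3.2.1 Name of creator": "origination",
    },
}

def map_record(record: dict, source: str, target: str) -> dict:
    """Translate element names; unmapped elements keep their original
    name so no information is silently dropped."""
    table = CROSSWALK.get((source, target), {})
    return {table.get(k, k): v for k, v in record.items()}

record = {"3.1.2 Title": "Board meeting minutes", "3.1.3 Dates": "1998-2003"}
print(map_record(record, "ISAD(G)", "EAD"))
```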

A Comparative Study about Industrial Structure Feature between TL Carriers and LTL Carriers (구역화물운송업과 노선화물운송업의 산업구조 특성 비교)

  • 민승기
    • Journal of Korean Society of Transportation, v.19 no.1, pp.101-114, 2001
  • Transportation enterprises should maintain constant, high-quality operation. Thus, in the short run, transportation enterprises do not change supply in accordance with demand; as a result, they do not reduce operations at will in spite of operating deficits. Among freight transportation types, less-than-truckload (LTL) carriers exhibit this feature more strongly than truckload (TL) carriers, because the transportation supply of TL carriers responds more flexibly to freight demand than that of LTL carriers. Consistent with this, the shortage of roads and freight terminals is larger for LTL than for TL. Comparing the two, the shortage of freight terminals is larger than that of roads: the road shortage peaked in 1990 and improved afterward, but the freight terminal shortage has recently become serious. Freight terminals therefore need more expansion than roads and show better investment conditions. For LTL, freight terminal expansion induces road expansion; for TL, by contrast, freight terminal expansion substitutes terminals for roads. In terms of transportation revenue, the contribution of freight terminals is larger for LTL than for TL. However, when the quasi-fixed factors (roads and freight terminals) are adjusted to their optimal long-run levels, diseconomies of scale become large for TL, whereas economies of scale become large for LTL. Consequently, TL carriers need measures to activate the management of small enterprises and owner-drivers, while LTL carriers should exploit economies of scale by solving problems such as unprofitable routes, excessive rental freight-handling offices, insufficient freight terminals, driver shortages, and inadequate freight insurance.
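
The economies-of-scale comparison above rests on the scale elasticity of a cost function. A minimal sketch of how such an index can be estimated from cost and output data follows; the log-log specification and the synthetic figures are assumptions for illustration, not the paper's model.

```python
# Scale-economies sketch: fit ln(cost) = a + b*ln(output) and read the
# scale elasticity b. b < 1 suggests economies of scale (cost grows
# more slowly than output); b > 1 suggests diseconomies. Data are
# synthetic placeholders.
import numpy as np

output = np.array([10.0, 20.0, 40.0, 80.0, 160.0])   # e.g. ton-km carried
cost = np.array([12.0, 21.0, 38.0, 70.0, 130.0])     # operating cost

b, a = np.polyfit(np.log(output), np.log(cost), 1)
print(f"scale elasticity = {b:.3f} ->",
      "economies of scale" if b < 1 else "diseconomies of scale")
```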

A Methodology for Consistent Design of User Interaction (일관성 있는 사용자 인터랙션 설계를 위한 방법론 개발)

  • Kim, Dong-San;Yoon, Wan-Chul
    • Proceedings of the HCI Society of Korea Conference (한국HCI학회 학술대회논문집), 2009.02a, pp.961-970, 2009
  • Over the last decade, interactive devices such as mobile phones have become drastically more complicated, mainly because of feature creep, the tendency for the number of features in a product to rise with each release. One way to reduce the complexity of a multi-functional device is to design it consistently. Although the definition of consistency is elusive and it is sometimes beneficial to be inconsistent, consistently designed systems are, in general, easier to learn, easier to remember, and cause fewer errors. In practice, however, it is often not easy to design the user interaction or interface of a multi-functional device consistently. Since the interaction design of a multi-functional device must deal with a large number of design variables and the relations among them, solving this problem can be very time-consuming and error-prone. Therefore, there is a strong need for a well-developed methodology that supports this complex design process. This study developed an effective and efficient methodology, called CUID (Consistent Design of User Interaction), which focuses on logical consistency rather than physical or visual consistency. CUID deals with three main problems in interaction design: procedure design for each task, decisions about the operations (or functions) available in each system state, and the mapping of available operations (functions) to interface controls. It includes a process for interaction design and a software tool that supports the process. This paper also demonstrates how CUID supports consistent interaction design through a case study, showing that the logical inconsistencies of a multi-functional device can be resolved with the CUID methodology.
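
One of the three sub-problems, checking that the operation-to-control mapping stays consistent across system states, can be sketched as a simple audit. The state and control names below are invented for illustration; this is not the CUID software tool itself.

```python
# Consistency-check sketch: flag functions that are bound to different
# interface controls in different system states. State/control names
# are hypothetical, not taken from the CUID case study.
from collections import defaultdict

# state -> {function: control}
mappings = {
    "home":     {"open_menu": "soft_key_left", "back": "key_back"},
    "player":   {"open_menu": "soft_key_left", "back": "key_back"},
    "settings": {"open_menu": "soft_key_right", "back": "key_back"},
}

bindings = defaultdict(set)
for state, table in mappings.items():
    for function, control in table.items():
        bindings[function].add(control)

for function, controls in bindings.items():
    if len(controls) > 1:
        print(f"inconsistent: '{function}' mapped to {sorted(controls)}")
```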

News Video Shot Boundary Detection using Singular Value Decomposition and Incremental Clustering (특이값 분해와 점증적 클러스터링을 이용한 뉴스 비디오 샷 경계 탐지)

  • Lee, Han-Sung;Im, Young-Hee;Park, Dai-Hee;Lee, Seong-Whan
    • Journal of KIISE: Software and Applications, v.36 no.2, pp.169-177, 2009
  • In this paper, we propose a new shot boundary detection method optimized for news video story parsing. The method was designed to satisfy all of the following requirements: 1) minimizing the incorrect data in the data set for anchor shot detection by improving the recall ratio; 2) detecting abrupt cuts and gradual transitions with a single algorithm, so as to divide news video into shots with one scan of the data set; and 3) classifying shots as static or dynamic, thereby reducing the search space for the subsequent anchor shot detection stage. The proposed method, based on singular value decomposition with incremental clustering and a Mercer kernel, has additional desirable features. Applying singular value decomposition removes noise and trivial variations in the video sequence, so separability is improved. The Mercer kernel improves the detectability of shots that are not separable in the input space by mapping the data to a high-dimensional feature space. The experimental results illustrate the superiority of the proposed method with respect to recall and to search space reduction for anchor shot detection.
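
A minimal sketch of the SVD-plus-incremental-clustering idea follows; the synthetic frame features, the rank, and the distance threshold are assumptions, and the Mercer kernel stage of the paper's method is omitted.

```python
# Shot-boundary sketch: project per-frame feature vectors with a
# truncated SVD, then cluster frames incrementally -- a new cluster
# (candidate shot boundary) starts when a frame lies too far from the
# current cluster centroid. All values here are synthetic assumptions.
import numpy as np

rng = np.random.default_rng(0)
# Synthetic "video": 3 shots of 30 frames, each around its own mean.
frames = np.vstack([rng.normal(m, 0.1, size=(30, 64)) for m in (0.2, 0.5, 0.8)])

# Truncated SVD keeps the leading components and discards noise.
centered = frames - frames.mean(axis=0)
U, s, Vt = np.linalg.svd(centered, full_matrices=False)
reduced = U[:, :5] * s[:5]            # rank-5 representation of each frame

boundaries, centroid, n = [], reduced[0].copy(), 1
for i in range(1, len(reduced)):
    if np.linalg.norm(reduced[i] - centroid) > 1.0:  # assumed threshold
        boundaries.append(i)          # start a new cluster / shot
        centroid, n = reduced[i].copy(), 1
    else:                             # update the running centroid
        n += 1
        centroid += (reduced[i] - centroid) / n
print("detected shot boundaries at frames:", boundaries)
```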

DENSE MOLECULAR CLOUDS IN THE GALACTIC CENTER REGION II. H13CN (J=1-0) DATA AND PHYSICAL PROPERTIES OF THE CLOUDS

  • Lee, Chang-Won;Lee, Hyung-Mok
    • Journal of The Korean Astronomical Society, v.36 no.4, pp.271-282, 2003
  • We present the results of an $H^{13}CN$ J=1-0 mapping survey of molecular clouds toward the Galactic Center (GC) region of $-1.6^{\circ} \le l \le 2^{\circ}$ and $-0.23^{\circ} \le b \le 0.30^{\circ}$ with 2' grid resolution. The $H^{13}CN$ emission shows distribution and velocity structures similar to those of the $H^{12}CN$ emission, but is found to better trace the features saturated in $H^{12}CN$ (1-0). The bright components among the multiple components of the $H^{12}CN$ line profiles usually appear in the $H^{13}CN$ line, while most of the dynamically forbidden, weak $H^{12}CN$ components are seldom detected in the $H^{13}CN$ line. We also present results of complementary observations in the $^{12}CO$ (J=1-0) and $^{13}CO$ (J=1-0) lines, used to estimate physical quantities of the GC clouds, such as the fractional abundances of the HCN isotopes and the masses of the GC cloud complexes. We confirm that the GC has a very rich chemistry. The overall fractional abundances of $H^{12}CN$ and $H^{13}CN$ relative to $H_2$ in the GC region are found to be significantly higher than those of other regions, such as star-forming regions and dark clouds. In particular, cloud complexes nearer to the GC tend to have higher HCN abundances. The total mass of the HCN molecular clouds within $|l| \le 6^{\circ}$ is estimated to be ${\sim}2{\times}10^7\;M_{\odot}$ using the abundances of the HCN isotopes, which is fairly consistent with previous estimates. The masses of the four main complexes in the GC range from a few $10^5$ to ${\sim}10^7\;M_{\odot}$. All the HCN spectra with multiple components for the four main cloud complexes were investigated to compare the line widths of the complexes. The largest mode (45 km $s^{-1}$) of the FWHM distributions among the complexes is found in Clump 2, and the mode tends to be smaller for complexes farther from the GC.
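
The abundance-to-mass step mentioned above can be illustrated with back-of-envelope arithmetic: an observed $H^{13}CN$ column density is converted to an $H_2$ column density through the fractional abundance, then to a mass over the cloud area. Every number in the sketch below is an assumed placeholder, not a value from the paper.

```python
# Abundance-to-mass sketch: N(H2) = N(H13CN) / X(H13CN), then
# M = N(H2) * mu * m_H * area. All inputs are assumed placeholders.
N_h13cn = 1e13            # cm^-2, assumed H13CN column density
X_h13cn = 1e-10           # assumed fractional abundance N(H13CN)/N(H2)
area_pc2 = 1.0e4          # pc^2, assumed projected cloud area

PC_CM = 3.086e18          # cm per parsec
M_H_G = 1.674e-24         # hydrogen atom mass in grams
MSUN_G = 1.989e33         # solar mass in grams
MU = 2.8                  # mean molecular mass per H2 (He included)

N_h2 = N_h13cn / X_h13cn                            # H2 column, cm^-2
mass_g = N_h2 * MU * M_H_G * area_pc2 * PC_CM**2    # column * area
print(f"cloud mass ~ {mass_g / MSUN_G:.2e} Msun")
```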

Pharmacophore Mapping and Virtual Screening for SIRT1 Activators

  • Sakkiah, Sugunadevi;Krishnamoorthy, Navaneethakrishnan;Gajendrarao, Poornima;Thangapandian, Sundarapandian;Lee, Yun-O;Kim, Song-Mi;Suh, Jung-Keun;Kim, Hyong-Ha;Lee, Keun-Woo
    • Bulletin of the Korean Chemical Society, v.30 no.5, pp.1152-1156, 2009
  • Silent information regulator 2 (Sir2) proteins, or sirtuins, are NAD(+)-dependent deacetylases that hydrolyze acetyllysine residues. In mammals, sirtuins are classified into seven classes (SIRT1-7). SIRT1 has been reported to be involved in age-related disorders such as obesity, metabolic syndrome, type II diabetes mellitus, and Parkinson's disease, and its activation is one of the promising approaches to treating these age-related diseases. In this study, we used the HipHop module of CATALYST to identify a series of pharmacophore models for screening SIRT1-enhancing molecules. Three molecules from Sirtris Pharmaceuticals were selected as the training set, and 607 sirtuin activator molecules were used as the test set. Five different hypotheses were developed and then validated using the training and test sets. The results showed that the best pharmacophore model has four features: a ring aromatic feature, a positive ionizable feature, and two hydrogen-bond acceptors. The best hypothesis from our study, Hypo2, screened a high number of active molecules from the test set. Thus, we suggest that this four-feature pharmacophore model could be helpful in screening novel SIRT1 activator molecules. Virtual screening with Hypo2 against the Maybridge database revealed seven molecules that contain all the critical features. Moreover, two new scaffolds were identified in this study; these scaffolds may be potent leads for SIRT1 activation.
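
The feature-detection step underlying such a four-feature model can be sketched with RDKit's pharmacophore feature factory in place of CATALYST/HipHop; the test molecule and the pass/fail rule below are illustrative assumptions.

```python
# Pharmacophore-feature sketch: count aromatic-ring, positive-ionizable,
# and H-bond-acceptor features in a molecule and test them against a
# four-feature requirement (1 aromatic ring, 1 positive ionizable,
# 2 acceptors). Uses RDKit, not CATALYST; the molecule is an arbitrary
# example, not a Sirtris compound.
import os
from collections import Counter
from rdkit import Chem, RDConfig
from rdkit.Chem import ChemicalFeatures

fdef = os.path.join(RDConfig.RDDataDir, "BaseFeatures.fdef")
factory = ChemicalFeatures.BuildFeatureFactory(fdef)

mol = Chem.MolFromSmiles("c1ccccc1C(=O)NCCN")   # arbitrary test molecule
counts = Counter(f.GetFamily() for f in factory.GetFeaturesForMol(mol))

required = {"Aromatic": 1, "PosIonizable": 1, "Acceptor": 2}
hit = all(counts.get(fam, 0) >= n for fam, n in required.items())
print(dict(counts), "->", "candidate" if hit else "filtered out")
```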

Speaker-Independent Korean Digit Recognition Using HCNN with Weighted Distance Measure (가중 거리 개념이 도입된 HCNN을 이용한 화자 독립 숫자음 인식에 관한 연구)

  • 김도석;이수영
    • The Journal of Korean Institute of Communications and Information Sciences, v.18 no.10, pp.1422-1432, 1993
  • The nonlinear mapping function of the HCNN (Hidden Control Neural Network) can change over time to model the temporal variability of a speech signal, combining the nonlinear prediction of conventional neural networks with the segmentation capability of HMMs. This paper makes two contributions. First, we show that the performance of the HCNN is better than that of the HMM. Second, we propose an HCNN whose prediction error measure is given by a weighted distance, to provide a distance measure suitable for the HCNN, and we show the superiority of the proposed system for speaker-independent speech recognition tasks. The weighted distance accounts for the differences between the variances of the components of the feature vector extracted from the speech data. In a speaker-independent Korean digit recognition experiment, a recognition rate of 95% was obtained for the HCNN with Euclidean distance. This result is 1.28% higher than the HMM, showing that the HCNN, which models the dynamics of the signal, is superior to the HMM, which is based on statistical restrictions. We obtained 97.35% for the HCNN with weighted distance, which is 2.35% better than the HCNN with Euclidean distance. The HCNN with weighted distance performs better because it reduces the variation of the recognition error rate across speakers by increasing the recognition rate for speakers with many misclassified utterances. We therefore conclude that the HCNN with weighted distance is more suitable for speaker-independent speech recognition tasks.
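
A minimal sketch of the variance-weighted distance described above, on synthetic data:

```python
# Weighted-distance sketch: each feature dimension's prediction error
# is scaled by the inverse of that dimension's variance, so
# high-variance components do not dominate the error. Data are
# synthetic placeholders, not speech features.
import numpy as np

rng = np.random.default_rng(1)
features = rng.normal(size=(200, 12)) * np.linspace(0.5, 3.0, 12)
var = features.var(axis=0)                 # per-component variance

def weighted_sq_error(predicted, observed, variances):
    """Sum_i (p_i - o_i)^2 / sigma_i^2 (Euclidean if variances == 1)."""
    diff = predicted - observed
    return float(np.sum(diff * diff / variances))

p, o = features[0], features[1]
print("euclidean:", weighted_sq_error(p, o, np.ones_like(var)))
print("weighted :", weighted_sq_error(p, o, var))
```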

Development of Suspended Sediment Concentration Measurement Technique Based on Hyperspectral Imagery with Optical Variability (분광 다양성을 고려한 초분광 영상 기반 부유사 농도 계측 기법 개발)

  • Kwon, Siyoon;Seo, Il Won
    • Proceedings of the Korea Water Resources Association Conference, 2021.06a, pp.116-116, 2021
  • Measurement of suspended sediment concentration (SSC) in natural rivers has relied mainly on direct measurement using conventional sampling methods, which is costly and time-consuming, and as a point-measurement approach it is limited in producing high-resolution spatio-temporal data. To overcome these limitations, techniques that measure high-resolution spatio-temporal SSC distributions from multispectral or hyperspectral imagery acquired by satellites and drones have recently been studied actively. Compared with the measurement of other river quantities, however, remote sensing of suspended sediment has difficulty reproducing accurate, globally applicable concentration distributions because suspended sediment is distributed non-homogeneously across rivers. This non-homogeneity arises because the particle size distribution, mineral characteristics, and settling behavior of suspended sediment vary from river to river, so suspended sediment exhibits different spectral characteristics by region. To develop a globally applicable SSC prediction model that accounts for these effects, this study built an intrinsic spectral library for different sediment characteristics through laboratory experiments and measured hyperspectral spectra and SSC under various sediment conditions in a real-scale channel. The actual SSC was measured with the optical sensor LISST-200X and by laboratory analysis of samples, and the hyperspectral data set was built by extracting the pixel spectra at the SSC measurement points from images taken with a hyperspectral camera. The spectral variability of the resulting data was analyzed by principal component analysis (PCA), and the physical factors most strongly correlated with the spectral characteristics were identified through correlations with particle size distribution, sediment type, and water temperature. Based on the constructed data, a hyperspectral SSC prediction model was then developed by combining Recursive Feature Elimination, a machine-learning feature selection algorithm, with Support Vector Regression, a machine-learning regression model, and the result was compared with the regression equation obtained by Optimal Band Ratio Analysis (OBRA), which has been commonly used in remote sensing studies. The OBRA-based method failed to reflect the diverse spectral characteristics of suspended sediment because it considers only a narrow wavelength range even when its nonlinearity is increased, whereas the machine-learning-based model proposed in this study considered a wide wavelength range from 420 nm to 1000 nm while achieving high accuracy. Finally, applying the developed model to map SSC distributions under various sediment conditions produced SSC distributions of high spatial and temporal resolution.
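
The band-selection-plus-regression pipeline described above can be sketched with scikit-learn's RFE wrapped around a linear-kernel SVR (RFE requires a coef_ attribute, so the kernel must be linear); the reflectance data and SSC targets below are synthetic assumptions.

```python
# RFE + SVR sketch: recursive feature elimination over spectral bands
# with a linear-kernel SVR, mirroring the band-selection-plus-
# regression pipeline described above. Data are synthetic.
import numpy as np
from sklearn.feature_selection import RFE
from sklearn.svm import SVR

rng = np.random.default_rng(2)
X = rng.random((120, 50))                 # 120 samples x 50 spectral bands
w = np.zeros(50)
w[[5, 17, 33]] = [2.0, -1.5, 1.0]         # 3 informative bands (assumed)
y = X @ w + 0.05 * rng.standard_normal(120)   # synthetic SSC targets

selector = RFE(SVR(kernel="linear"), n_features_to_select=5, step=5)
selector.fit(X, y)
print("selected bands:", np.flatnonzero(selector.support_))
```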

Mapping Burned Forests Using a k-Nearest Neighbors Classifier in Complex Land Cover (k-Nearest Neighbors 분류기를 이용한 복합 지표 산불피해 영역 탐지)

  • Lee, Hanna;Yun, Konghyun;Kim, Gihong
    • KSCE Journal of Civil and Environmental Engineering Research, v.43 no.6, pp.883-896, 2023
  • As human activity in Korea extends throughout the mountains, forest fires often affect residential areas, infrastructure, and other facilities, so fire-damaged areas must be detected quickly to enable support and recovery. Remote sensing is the most efficient tool for this purpose. Fire damage detection experiments were conducted on the east coast of Korea. Because this area comprises a mixture of forest and artificial land cover, low-resolution data are not suitable. We used Sentinel-2 multispectral instrument (MSI) data, which provide adequate temporal and spatial resolution, together with the k-nearest neighbors (kNN) algorithm. Six Sentinel-2 MSI bands and two indices, the normalized difference vegetation index (NDVI) and the normalized burn ratio (NBR), were used as features for kNN classification. The kNN classifier was trained using 2,000 randomly selected samples in the fire-damaged and undamaged areas. Outliers were removed and a forest type map was used to improve classification performance. Numerous experiments with various numbers of neighbors and feature combinations were conducted using bi-temporal and uni-temporal approaches. The bi-temporal classification performed better than the uni-temporal classification; however, the uni-temporal classification was still able to detect severely damaged areas.
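
A minimal sketch of the classification step, deriving NDVI and NBR from band reflectances and feeding them to a kNN classifier; the band values, labels, and neighbor count are synthetic assumptions rather than the paper's Sentinel-2 data.

```python
# kNN burn-mapping sketch: derive NDVI and NBR from Sentinel-2-style
# band reflectances and classify pixels with k-nearest neighbors.
# All sample values and labels are synthetic assumptions.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(3)

def indices(red, nir, swir):
    ndvi = (nir - red) / (nir + red + 1e-9)    # vegetation index
    nbr = (nir - swir) / (nir + swir + 1e-9)   # burn ratio
    return ndvi, nbr

# Synthetic pixels: burned areas have low NIR / high SWIR reflectance.
def sample(n, burned):
    red = rng.uniform(0.05, 0.15, n)
    nir = rng.uniform(0.10, 0.20, n) if burned else rng.uniform(0.30, 0.50, n)
    swir = rng.uniform(0.25, 0.40, n) if burned else rng.uniform(0.10, 0.20, n)
    ndvi, nbr = indices(red, nir, swir)
    return np.column_stack([red, nir, swir, ndvi, nbr])

X = np.vstack([sample(1000, True), sample(1000, False)])
y = np.array([1] * 1000 + [0] * 1000)          # 1 = burned

clf = KNeighborsClassifier(n_neighbors=7).fit(X, y)
print("burned fraction on new pixels:", clf.predict(sample(100, True)).mean())
```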

PCA­based Waveform Classification of Rabbit Retinal Ganglion Cell Activity (주성분분석을 이용한 토끼 망막 신경절세포의 활동전위 파형 분류)

  • 진계환;조현숙;이태수;구용숙
    • Progress in Medical Physics, v.14 no.4, pp.211-217, 2003
  • Principal component analysis (PCA) is a well-known data analysis method that is useful for linear feature extraction and data compression. PCA is a linear transformation that applies an orthogonal rotation to the original data so as to maximize the retained variance, and it is a classical technique for obtaining an optimal overall mapping of linearly dependent patterns of correlation between variables (e.g., neurons). PCA provides, in the mean-squared-error sense, an optimal linear mapping of the signals that are spread across a group of variables: these signals are concentrated into the first few components, while the noise, i.e., the variance that is uncorrelated across variables, is sequestered in the remaining components. PCA has been used extensively to resolve temporal patterns in neurophysiological recordings, and because the retinal signal is a stochastic process, PCA can be used to identify retinal spikes. The retina was isolated from an excised rabbit eye, and a piece of retina was attached, ganglion cell side down, to the surface of a microelectrode array (MEA). The MEA consisted of a glass plate with 60 substrate-integrated, insulated gold connection lanes terminating in an $8{\times}8$ array (spacing 200 $\mu$m, electrode diameter 30 $\mu$m) in the center of the plate. The MEA 60 system was used to record retinal ganglion cell activity. The action potentials of each channel were sorted with an off-line analysis tool: spikes were detected with a threshold criterion and sorted according to their principal component composition. The first (PC1) and second (PC2) principal component values were calculated using all the waveforms of each channel and all n time points in each waveform, and several clusters could be separated clearly in two dimensions. We verified that PCA-based waveform detection is effective as an initial approach to spike sorting.
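
A minimal sketch of the PC1/PC2 projection used for the sorting step, on synthetic waveforms from two hypothetical units:

```python
# PCA spike-sorting sketch: project detected spike waveforms onto their
# first two principal components (PC1, PC2) and separate clusters in
# that plane. The waveforms are synthetic: two assumed units with
# different shapes plus noise, not recorded retinal data.
import numpy as np

rng = np.random.default_rng(4)
t = np.linspace(0, 1, 40)                        # 40 samples per waveform
unit_a = np.sin(2 * np.pi * t) * np.exp(-3 * t)  # two template shapes
unit_b = -np.sin(2 * np.pi * t) * np.exp(-5 * t)
spikes = np.vstack([unit_a + 0.05 * rng.standard_normal((100, 40)),
                    unit_b + 0.05 * rng.standard_normal((100, 40))])

# PCA via SVD of the mean-centered waveform matrix.
centered = spikes - spikes.mean(axis=0)
U, s, Vt = np.linalg.svd(centered, full_matrices=False)
pc_scores = centered @ Vt[:2].T                  # (PC1, PC2) per spike

# The two units separate along PC1; a simple sign split recovers them.
labels = (pc_scores[:, 0] > 0).astype(int)
print("cluster sizes:", np.bincount(labels))
```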
