• Title/Summary/Keyword: DataBases

Search Result 523, Processing Time 0.026 seconds

The Modeling of the Optimal Data Format for JPEG2000 CODEC on the Fixed Compression Ratio (고정 압축률에서의 JPEG2000 코덱을 위한 최적의 데이터 형식 모델링)

  • Kang, Chang-Soo;Seo, Choon-Weon
    • Proceedings of the IEEK Conference
    • /
    • 2005.11a
    • /
    • pp.1257-1260
    • /
    • 2005
  • This paper addresses optimization of the image data format, which strongly affects data-compression performance, based on the wavelet transform and JPEG2000. It establishes a criterion for choosing the data format used in the wavelet transform, based on the data errors introduced by the frequency transform and quantization. Applying this criterion experimentally yields the optimal formats: a (1, 9) 10-bit fixed-point format for the filter coefficients and a (9, 7) 16-bit fixed-point format for the wavelet coefficients, whose optimality was confirmed.

  • PDF
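As an illustration of what an (integer bits, fraction bits) fixed-point format means, the sketch below quantizes a value to such a format and shows the rounding error. The sign convention (sign bit counted among the integer bits) is an assumption, not taken from the paper; the example value is the well-known CDF 9/7 low-pass centre tap, chosen only for illustration.

```python
def quantize(x, int_bits, frac_bits):
    """Quantize x to an (int_bits, frac_bits) signed fixed-point format.
    Total width = int_bits + frac_bits; the sign bit is assumed to be
    counted among the integer bits."""
    total = int_bits + frac_bits
    scale = 1 << frac_bits
    q = round(x * scale)
    lo, hi = -(1 << (total - 1)), (1 << (total - 1)) - 1
    q = max(lo, min(hi, q))          # saturate on overflow
    return q / scale

# CDF 9/7 analysis low-pass centre tap in a (1, 9) 10-bit format
c = 0.6029490182363579
cq = quantize(c, 1, 9)
print(cq, abs(cq - c))               # rounding error stays below 2**-10
```

With 9 fraction bits the worst-case rounding error is half a least-significant bit, i.e. 2<sup>-10</sup>, which is the kind of error bound the paper's criterion trades off against word length.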

A Neural Network Combining a Competition Learning Model and BP Algorithm for Data Mining (데이터 마이닝을 위한 경쟁학습모델과 BP알고리즘을 결합한 하이브리드형 신경망)

  • 강문식;이상용
    • Journal of Information Technology Applications and Management
    • /
    • v.9 no.2
    • /
    • pp.1-16
    • /
    • 2002
  • Recently, neural network methods have been studied as a way to find more valuable information in databases. However, supervised neural-network learning methods suffer from overfitting, which leads to errors in the target patterns, while unsupervised learning methods can distort important information when normalizing the data, so neither classifies data efficiently. To solve these problems, this paper introduces a hybrid neural network, HACAB (Hybrid Algorithm combining a Competition learning model And BP Algorithm). HACAB is designed for cases in which no target patterns exist: it creates target patterns with a competition learning model and then classifies input patterns against those targets using the BP algorithm. HACAB is evaluated with random input patterns and the Iris data. When no target patterns are available, HACAB classifies data more effectively than the BP algorithm alone.

  • PDF
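A minimal sketch of the hybrid idea: winner-take-all competitive learning assigns each unlabeled pattern to a prototype, and the winner's index becomes a one-hot target pattern on which a BP network could then be trained. The prototype initialization, learning rate, and toy data below are illustrative choices, not the paper's.

```python
def competitive_targets(patterns, n_clusters, lr=0.1, epochs=20):
    """Winner-take-all competitive learning: the prototype nearest to each
    input moves toward it; the final winner index yields a one-hot target."""
    protos = [list(p) for p in patterns[:n_clusters]]   # simple deterministic init

    def winner(x):
        return min(range(n_clusters),
                   key=lambda k: sum((p - a) ** 2 for p, a in zip(protos[k], x)))

    for _ in range(epochs):
        for x in patterns:
            k = winner(x)
            protos[k] = [p + lr * (a - p) for p, a in zip(protos[k], x)]

    targets = []
    for x in patterns:
        t = [0.0] * n_clusters
        t[winner(x)] = 1.0          # one-hot target for BP training
        targets.append(t)
    return targets

data = [[0.10, 0.10], [0.15, 0.05], [0.90, 0.95], [0.85, 0.90]]
targets = competitive_targets(data, 2)
print(targets)
```

The generated one-hot vectors would then serve as the supervised targets for an ordinary backpropagation classifier, which is the division of labour the abstract describes.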

Development of Bin Weather Data for Simplified Energy Calculations (간역열부하계산용(簡易熱負荷計算用) Bin기상(氣象)데이터)

  • Kim, Doo Chun;Choi, Jin Hee
    • The Magazine of the Society of Air-Conditioning and Refrigerating Engineers of Korea
    • /
    • v.17 no.1
    • /
    • pp.28-43
    • /
    • 1988
  • The purpose of this research is to produce bin weather data for Seoul from Standard Weather Data. The intended use of these data is for input to recently developed models for simplified energy calculations and for generating variable-base degree-day information. The data produced under this study include $3^{\circ}C$ bin data covering the full range of dry-bulb temperatures with mean coincident wet-bulb and daytime coincident solar radiation, wet-bulb bins down to freezing temperature, wind speed bins with prevailing directions, and heating and cooling degree hours to nine different temperature bases. All of these data are tabulated in six separate time periods and total daily categories for monthly and annual periods.

  • PDF
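The binning and degree-hour quantities described above can be illustrated with a short sketch; the 3 °C bin width and the idea of multiple temperature bases come from the abstract, while the data and function names are illustrative.

```python
import math
from collections import Counter

def temp_bins(hourly_temps, width=3.0):
    """Count hours per dry-bulb bin; each bin is labelled by its lower edge."""
    return Counter(math.floor(t / width) * width for t in hourly_temps)

def heating_degree_hours(hourly_temps, base):
    """Sum of (base - T) over the hours in which T is below the base."""
    return sum(base - t for t in hourly_temps if t < base)

temps = [-2.0, 1.5, 4.0, 10.0, 16.5, 21.0]   # illustrative hourly readings
print(temp_bins(temps))
print(heating_degree_hours(temps, 18.0))     # 60.0
```

Repeating the degree-hour sum for each of the nine temperature bases, and splitting the hours into the six time periods, reproduces the tabulation structure the abstract describes.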

A Study on Estimation of Cooling Load Using Forecasted Weather Data (기상 예보치를 이용한 냉방부하 예측 기법에 관한 연구)

  • Han, Kyu-Hyun;Yoo, Seong-Yeon;Lee, Je-Myo
    • Proceedings of the SAREK Conference
    • /
    • 2008.06a
    • /
    • pp.937-942
    • /
    • 2008
  • In this paper, a new methodology is proposed to estimate cooling load from building design parameters and predicted weather data. Only two parameters, the daily maximum and minimum temperatures, are needed to obtain the hourly cooling-load distribution for the next day, and both can be taken from forecast weather data. A benchmark building (a research building) is selected to validate the method: hourly cooling loads are estimated and compared with measured data for the building, and the estimates show fairly good agreement with the measurements.

  • PDF
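The abstract does not specify how the two forecast temperatures are expanded into an hourly profile, but a common approach is a sinusoid with the maximum in mid-afternoon; the 15:00 peak hour and the sample temperatures below are assumptions for illustration only.

```python
import math

def hourly_temps(t_min, t_max, hour_max=15):
    """Sinusoidal hourly dry-bulb profile from the forecast daily min/max,
    with the maximum assumed at hour_max and the minimum 12 h earlier."""
    mean = (t_max + t_min) / 2.0
    amp = (t_max - t_min) / 2.0
    return [mean + amp * math.cos(2 * math.pi * (h - hour_max) / 24.0)
            for h in range(24)]

profile = hourly_temps(22.0, 32.0)
print(profile[15], profile[3])   # peaks at 15:00, minimum at 03:00
```

The resulting hourly temperatures would then feed a load model parameterized by the building's design data to produce the next-day hourly cooling-load distribution.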

Automatic Training Corpus Generation Method of Named Entity Recognition Using Knowledge-Bases (개체명 인식 코퍼스 생성을 위한 지식베이스 활용 기법)

  • Park, Youngmin;Kim, Yejin;Kang, Sangwoo;Seo, Jungyun
    • Korean Journal of Cognitive Science
    • /
    • v.27 no.1
    • /
    • pp.27-41
    • /
    • 2016
  • Named entity recognition classifies elements of a text into predefined categories and is used in many applications that take natural-language input. In this paper, we propose a method that automatically generates a named-entity training corpus from knowledge bases, applying a different generation method per knowledge base: one attaches named-entity labels to text using Wikipedia, while the other crawls web text and labels its named entities using Freebase. We conduct two experiments to evaluate corpus quality and the proposed generation method: we extract sentences at random from the two corpora (the Wikipedia corpus and the Web corpus) to validate the automatic labeling, and we report the performance of a named-entity recognizer trained on the generated corpora. The results show that the proposed method adapts well to new corpora reflecting diverse sentence structures and the newest entities.

  • PDF
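A toy sketch of knowledge-driven automatic labeling: a gazetteer (here a hand-written dict, standing in for entity lists derived from Wikipedia or Freebase) is matched longest-first against the token stream to emit BIO labels.

```python
def auto_label(tokens, gazetteer):
    """Attach BIO named-entity labels to tokens by longest match against a
    gazetteer mapping entity phrases to types (a simplified stand-in for
    the Wikipedia/Freebase-based labeling in the paper)."""
    labels = ["O"] * len(tokens)
    i = 0
    while i < len(tokens):
        match = None
        for j in range(len(tokens), i, -1):     # try longest span first
            if " ".join(tokens[i:j]) in gazetteer:
                match = (j, gazetteer[" ".join(tokens[i:j])])
                break
        if match:
            j, etype = match
            labels[i] = "B-" + etype
            for k in range(i + 1, j):
                labels[k] = "I-" + etype
            i = j
        else:
            i += 1
    return labels

gaz = {"New York": "LOC", "Google": "ORG"}
print(auto_label("Google opened an office in New York".split(), gaz))
```

The labeled token/tag pairs form the training corpus; the paper's contribution is obtaining the gazetteer-like knowledge automatically from Wikipedia links and Freebase types rather than by hand.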

A Feature Selection for the Recognition of Handwritten Characters based on Two-Dimensional Wavelet Packet (2차원 웨이브렛 패킷에 기반한 필기체 문자인식의 특징선택방법)

  • Kim, Min-Soo;Back, Jang-Sun;Lee, Guee-Sang;Kim, Soo-Hyung
    • Journal of KIISE:Software and Applications
    • /
    • v.29 no.8
    • /
    • pp.521-528
    • /
    • 2002
  • We propose a new approach to feature selection for the classification of handwritten characters using two-dimensional (2D) wavelet packet bases. For dimension reduction when extracting key features from image data, Principal Component Analysis (PCA) has been the most frequently used technique. However, because PCA relies on an eigenvalue system, it is not only sensitive to outliers and perturbations but also tends to select only global features. Since the important features of image data are often characterized by local information such as edges and spikes, PCA does not handle such cases well, and solving an eigenvalue system is computationally expensive. In this paper, the original data are transformed with 2D wavelet packet bases and the best discriminant basis is searched, from which the relevant features are selected. In contrast to PCA, the good properties of wavelets allow fast selection of detailed as well as global features. Experimental results comparing the recognition rates of PCA and our approach demonstrate the performance of the proposed method.
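A one-dimensional Haar sketch of the idea (the paper works with 2D wavelet packets and a proper best-discriminant-basis search; the energy-gap score and toy signals below are a simplified stand-in):

```python
def haar_split(sig):
    """One Haar step: per-pair (average, half-difference)."""
    a = [(sig[2*i] + sig[2*i + 1]) / 2.0 for i in range(len(sig) // 2)]
    d = [(sig[2*i] - sig[2*i + 1]) / 2.0 for i in range(len(sig) // 2)]
    return a, d

def packet_leaves(sig, depth):
    """All 2**depth subbands of a full wavelet-packet tree."""
    bands = [sig]
    for _ in range(depth):
        bands = [half for b in bands for half in haar_split(b)]
    return bands

def discriminant_scores(class_a, class_b, depth):
    """Score each subband by the gap between the classes' mean energies."""
    def mean_energies(samples):
        tot = [0.0] * (2 ** depth)
        for s in samples:
            for k, band in enumerate(packet_leaves(s, depth)):
                tot[k] += sum(v * v for v in band)
        return [t / len(samples) for t in tot]
    ea, eb = mean_energies(class_a), mean_energies(class_b)
    return [abs(x - y) for x, y in zip(ea, eb)]

smooth = [[1, 1, 1, 1, 1, 1, 1, 1], [2, 2, 2, 2, 2, 2, 2, 2]]
wiggly = [[1, -1, 1, -1, 1, -1, 1, -1], [2, -2, 2, -2, 2, -2, 2, -2]]
print(discriminant_scores(smooth, wiggly, 1))
```

Subbands with the highest scores become the selected features; unlike PCA, this keeps localized detail subbands in play alongside the smooth approximation.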

An Approach to Estimate Daily Maximum Mixing Height(DMMH) in Pohang, Osan, and Kwangju Areas -Analysis of 10 years data from 1983 to 1992- (포항, 오산, 광주지역의 일최대 혼합고 추정 -1983~1992년의 10년간 자료의 분석-)

  • 최진수;백성옥
    • Journal of Korean Society for Atmospheric Environment
    • /
    • v.14 no.4
    • /
    • pp.379-385
    • /
    • 1998
  • Holzworth's method was applied to estimate the daily maximum mixing height (DMMH) in the Pohang, Osan, and Kwangju areas. The database was built from meteorological data collected at air bases in these areas during 1983∼1992, and the seasonality, monthly trends, and occurrence frequencies of the estimated DMMH were investigated for each area. The estimated mean DMMH ranged from 1,100 m (winter) to 1,450 m (spring), showing a typical seasonal pattern in which higher values occur during spring and fall and lower values during summer and winter. Estimated DMMH values fell within the 1,000∼2,000 m range about 60% of the time.

  • PDF
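Holzworth's estimate is the height at which the dry adiabat through the daily maximum surface temperature meets the morning temperature sounding. A simplified sketch (ordinary rather than potential temperature, linear interpolation between sounding levels, illustrative data) is:

```python
DRY_ADIABATIC_LAPSE = 0.0098   # degrees C per metre

def holzworth_dmmh(t_max_surface, sounding):
    """Height at which the dry adiabat through the surface maximum
    temperature first crosses the morning sounding.
    sounding: (height_m, temp_C) pairs in increasing height."""
    for (z0, t0), (z1, t1) in zip(sounding, sounding[1:]):
        p0 = t_max_surface - DRY_ADIABATIC_LAPSE * z0   # parcel temp at z0
        p1 = t_max_surface - DRY_ADIABATIC_LAPSE * z1   # parcel temp at z1
        if p0 >= t0 and p1 <= t1:                       # crossing in this layer
            frac = (p0 - t0) / ((p0 - t0) - (p1 - t1))  # linear interpolation
            return z0 + frac * (z1 - z0)
    return None   # parcel stays warmer than the whole profile

morning = [(0, 15.0), (500, 14.0), (1000, 13.5), (1500, 14.5)]
print(holzworth_dmmh(25.0, morning))
```

Applying this to each day of the ten-year record and taking monthly statistics yields the seasonal DMMH distributions the abstract reports.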

Financial Performance Evaluation using Self-Organizing Maps: The Case of Korean Listed Companies (자기조직화 지도를 이용한 한국 기업의 재무성과 평가)

  • 민재형;이영찬
    • Journal of the Korean Operations Research and Management Science Society
    • /
    • v.26 no.3
    • /
    • pp.1-20
    • /
    • 2001
  • The amount of financial information in sophisticated large databases is huge and makes interfirm performance comparison very difficult, or at least very time-consuming. The purpose of this paper is to investigate whether neural networks, in the form of self-organizing maps (SOM), can be successfully employed to manage this complexity for competitive financial benchmarking. The SOM is known to be very effective for visualization, projecting multi-dimensional financial data onto a two-dimensional output space. Using the SOM, we avoid the problems of finding an appropriate underlying distribution and functional form for the data when structuring and analyzing a large database, and we show an efficient procedure for competitive financial benchmarking that clusters firms on a two-dimensional visual space according to their financial competitiveness. For the empirical study, we analyze the annual reports of 100 Korean listed companies over the years 1998, 1999, and 2000.

  • PDF
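A minimal pure-Python SOM sketch of the mapping step; the grid size, decay schedules, and the toy "financial ratio" vectors are illustrative, not the paper's configuration.

```python
import math, random

def train_som(data, rows, cols, epochs=50, lr0=0.5, seed=0):
    """Minimal self-organizing map: each grid node holds a weight vector;
    the best-matching node and its neighbours move toward each input."""
    rng = random.Random(seed)
    dim = len(data[0])
    w = {(r, c): [rng.random() for _ in range(dim)]
         for r in range(rows) for c in range(cols)}
    sigma0 = max(rows, cols) / 2.0
    for e in range(epochs):
        lr = lr0 * (1.0 - e / epochs)                  # decaying learning rate
        sigma = max(0.5, sigma0 * (1.0 - e / epochs))  # shrinking neighbourhood
        for x in data:
            bmu = min(w, key=lambda n: sum((wi - xi) ** 2
                                           for wi, xi in zip(w[n], x)))
            for n in w:
                d2 = (n[0] - bmu[0]) ** 2 + (n[1] - bmu[1]) ** 2
                h = lr * math.exp(-d2 / (2.0 * sigma * sigma))
                w[n] = [wi + h * (xi - wi) for wi, xi in zip(w[n], x)]
    return w

def bmu_of(w, x):
    return min(w, key=lambda n: sum((wi - xi) ** 2 for wi, xi in zip(w[n], x)))

# toy "financial ratio" vectors for two groups of firms
firms = [[0.90, 0.80], [0.85, 0.90], [0.10, 0.20], [0.15, 0.10]]
som = train_som(firms, 3, 3)
print([bmu_of(som, f) for f in firms])
```

After training, firms with similar ratio profiles land on the same or neighbouring grid nodes, which is what makes the two-dimensional map readable as a benchmarking chart.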

Design of Management Structure Measuring Integrated Monitoring System Based on Linked Open Data

  • Min, Byung-Won;Okazaki, Yasuhisa;Oh, Yong-Sun
    • Proceedings of the Korea Contents Association Conference
    • /
    • 2016.05a
    • /
    • pp.255-256
    • /
    • 2016
  • In this paper, we analyze the short- and long-term operation and status of a structure equipped with a structure-measurement integrated monitoring system based on linked open data. We apply a novel linked-open-data analysis method to predict what movements may occur in the structure, and we improve the monitoring system with an integrated design that addresses the drawbacks of conventional monitoring. Data are collected through the cloud, and their reliability is verified by evaluating the soundness of the data volume and its confidence.

  • PDF

Ontology-lexicon-based question answering over linked data

  • Jabalameli, Mehdi;Nematbakhsh, Mohammadali;Zaeri, Ahmad
    • ETRI Journal
    • /
    • v.42 no.2
    • /
    • pp.239-246
    • /
    • 2020
  • Recently, Linked Open Data has become a large set of knowledge bases. Therefore, the need to query Linked Data using question answering (QA) techniques has attracted the attention of many researchers. A QA system translates natural language questions into structured queries, such as SPARQL queries, to be executed over Linked Data. The two main challenges in such systems are lexical and semantic gaps. A lexical gap refers to the difference between the vocabularies used in an input question and those used in the knowledge base. A semantic gap refers to the difference between expressed information needs and the representation of the knowledge base. In this paper, we present a novel method using an ontology lexicon and dependency parse trees to overcome lexical and semantic gaps. The proposed technique is evaluated on the QALD-5 benchmark and exhibits promising results.
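To make the pipeline concrete, here is a toy sketch of the lexicon-driven translation step; the regex pattern, the two lexicon entries, and the dbo/dbr prefixes are illustrative, while the actual system uses dependency parse trees and a full ontology lexicon.

```python
import re

# toy ontology lexicon: surface verbalizations -> ontology properties
LEXICON = {
    "wrote": "dbo:author",
    "directed": "dbo:director",
}

def question_to_sparql(question):
    """Map a 'Who VERB X?' question to a SPARQL query via the lexicon
    (a toy stand-in for the parse-tree + ontology-lexicon pipeline)."""
    m = re.match(r"Who (\w+) (.+)\?", question)
    if not m:
        return None
    verb, obj = m.group(1), m.group(2)
    prop = LEXICON.get(verb)          # bridges the lexical gap
    if prop is None:
        return None
    resource = "dbr:" + obj.replace(" ", "_")
    return f"SELECT ?x WHERE {{ {resource} {prop} ?x }}"

q = question_to_sparql("Who wrote The Hobbit?")
print(q)
```

The lexicon lookup is what closes the lexical gap ("wrote" vs. `dbo:author`); the paper's dependency-parse analysis generalizes the rigid regex used here to arbitrary question structures.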