• Title/Summary/Keyword: Database Management Systems

Upper Garment Sizing System for Obese School Boys Based on Somatotype Analysis (학령후기 비만 남아의 체형 분석에 따른 plus-size 남자 아동복 상의 치수 규격 제안)

  • Park, Soon-Jee
    • Journal of the Korean Home Economics Association
    • /
    • v.46 no.9
    • /
    • pp.99-112
    • /
    • 2008
  • The increasing rate of obesity among school-aged children has become a conspicuous social phenomenon in Korea, linked to economic growth, increasingly westernized dietary habits, and a consumer-driven society. Given that obesity can lead to social exclusion or unfavorable attention from other students in a school setting, the design of plus-size garments has become important for effective appearance management. This research aimed to establish a somatotype database for obese school boys aged 10 to 12 in order to develop a sizing system for plus-size upper garments. To characterize the somatotypes of average and obese school boys, five factors were recorded: height, obesity, trunk length, and neck and chest thickness; for obese boys, subcutaneous fat thickness and the positions of the B.P. and shoulder point were also recorded. The obesity factor was subdivided into overall and specific components, and the deviation among obese body types was severe compared with the average type. The obese body type showed significantly higher measurements in width, girth, and thickness, consistent with the frequency of obesity increasing with age. Stature and chest girth were chosen as the control dimensions for boys' wear. Cross-tabulations of stature (5 cm intervals) against chest girth (2, 3, and 4 cm intervals) showed that the stature (5 cm)/chest girth (3 cm) sizing system gave the most effective cover ratio and the best adaptability to the 25th to 75th percentile range of the data distribution. Based on these findings, 10 sizes were formulated for the average body type and 18 sizes for the obese type, with size cover ratios of 48% and 62.9%, respectively. The primary stature ranges were 145 to 150 cm, and the primary chest girth ranges were 79 to 82 cm. Each size was designated as "chest girth-somatotype (A for average, O for obese)-stature". This study proposed a plus-size upper garment sizing system for obese boys, accompanied by reference measurements for suits, casual wear, and underwear. The two systems were entirely separate and non-overlapping, which means a dedicated plus-size sizing system is essential for obese school boys; the obese-type system had more sizes and a wider range of specifications.
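
As a rough illustration of how such a cover ratio can be computed, the sketch below bins a hypothetical sample into the paper's 5 cm stature and 3 cm chest-girth intervals and reports the share of subjects covered by the most frequent size cells; the sample data and the choice of 18 sizes are assumptions for illustration only, not the paper's measurements.

```python
# Minimal sketch (not the authors' code): cover ratio of a
# stature x chest-girth sizing system over a hypothetical sample.
import numpy as np

rng = np.random.default_rng(0)
stature = rng.normal(148, 6, 500)      # hypothetical sample, cm
chest = rng.normal(80, 4, 500)         # hypothetical sample, cm

# Assign each subject to a size cell.
s_bin = (stature // 5) * 5             # 5 cm stature intervals
c_bin = (chest // 3) * 3               # 3 cm chest-girth intervals
cells, counts = np.unique(np.column_stack([s_bin, c_bin]),
                          axis=0, return_counts=True)

# Keep the most frequent cells as the sizing system, report cover ratio.
n_sizes = 18                           # e.g. the obese-type size count
top = np.argsort(counts)[::-1][:n_sizes]
cover_ratio = counts[top].sum() / len(stature)
print(f"{n_sizes} sizes cover {cover_ratio:.1%} of the sample")
```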

The Efficient Method of Parallel Genetic Algorithm using MapReduce of Big Data (빅 데이터의 MapReduce를 이용한 효율적인 병렬 유전자 알고리즘 기법)

  • Hong, Sung-Sam;Han, Myung-Mook
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.23 no.5
    • /
    • pp.385-391
    • /
    • 2013
  • Big data refers to data too large to be collected, stored, searched, processed, or analyzed by existing database management systems. A parallel genetic algorithm (GA) for big data can be realized readily on Hadoop by implementing the GA with MapReduce on the Hadoop distributed system. Previous studies proposed MapReduce formulations of the GA, but these did not perform well because of frequent data input and output. In this paper, we propose MRPGA (MapReduce Parallel Genetic Algorithm), which improves the Map and Reduce processes and exploits the parallel processing characteristics of MapReduce. The optimal solution is found by combining the topology and migration mechanisms of parallel genetic algorithms with a local search algorithm. The convergence speed of the proposed method is 1.5 times faster than that of the existing MapReduce SGA, and the optimal solution can be found quickly within a given number of sub-generation iterations. In addition, MRPGA can improve the processing and analysis performance of big data technology.
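
A minimal, single-machine sketch of the map/reduce split that such algorithms rely on is shown below; it is not the authors' Hadoop-based MRPGA (it omits topology, migration, and local search) and uses a toy one-max fitness function.

```python
# Sketch of a MapReduce-style GA: the map phase scores individuals
# independently (embarrassingly parallel), the reduce phase performs
# selection, crossover, and mutation. Toy one-max fitness.
import random

def map_phase(population):
    # Emit (fitness, individual) pairs; each evaluation is independent.
    return [(sum(ind), ind) for ind in population]

def reduce_phase(scored, elite=2, p_mut=0.01):
    scored.sort(key=lambda kv: kv[0], reverse=True)
    parents = [ind for _, ind in scored[:len(scored) // 2]]
    nxt = [ind[:] for _, ind in scored[:elite]]           # elitism
    while len(nxt) < len(scored):
        a, b = random.sample(parents, 2)
        cut = random.randrange(1, len(a))
        child = a[:cut] + b[cut:]                         # one-point crossover
        nxt.append([g ^ (random.random() < p_mut) for g in child])  # bit-flip mutation
    return nxt

pop = [[random.randint(0, 1) for _ in range(32)] for _ in range(40)]
for gen in range(50):
    pop = reduce_phase(map_phase(pop))
print("best fitness:", max(sum(ind) for ind in pop))
```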

Linkage Base of Geo-based Processing Service and Open PaaS Cloud (오픈소스 PaaS 클라우드와 공간정보 처리서비스 연계 기초)

  • Kim, Kwang-Seob;Lee, Ki-Won
    • Journal of the Korean Association of Geographic Information Studies
    • /
    • v.20 no.4
    • /
    • pp.24-38
    • /
    • 2017
  • Awareness of and demand for cloud computing technologies and their application models have increased, and cloud-based service information systems are being expanded for use in many applications. These advances in information technology are directly relevant to spatial information. PaaS (Platform as a Service) is an important platform for building a substantial cloud ecosystem in which to develop geo-based application services, so it is necessary to analyze PaaS cloud technology before developing SaaS. A PaaS cloud supports sharing of related extensions, database operation and management, and application development and deployment. Both in Korea and overseas, the development of geo-spatial information systems and services based on PaaS is in the initial stages of research and application. In this study, the state of the art of cloud computing is reviewed and a conceptual design for geo-based applications is presented. The proposed model is based on container methods, the core elements of open-source PaaS cloud technology. These technologies can contribute to the applicability and scalability of the geo-spatial information industry as it adopts cloud computing, and the results of this study are expected to provide a technological base for practical service implementation and experimentation for geo-based applications.

Design and Implementation of A Distributed Information Integration System based on Metadata Registry (메타데이터 레지스트리 기반의 분산 정보 통합 시스템 설계 및 구현)

  • Kim, Jong-Hwan;Park, Hea-Sook;Moon, Chang-Joo;Baik, Doo-Kwon
    • The KIPS Transactions: Part D
    • /
    • v.10D no.2
    • /
    • pp.233-246
    • /
    • 2003
  • A mediator-based system integrates heterogeneous information systems in a flexible manner, but it pays little attention to query optimization, especially query reuse, and it does not use standardized metadata for schema matching. To improve these two points, we propose a mediator-based Distributed Information Integration System (DIIS) that uses query caching for performance and an ISO/IEC 11179 metadata registry for standardization. The DIIS is designed to provide decision-making support by logically integrating distributed, heterogeneous business information systems in a Web environment. We designed the system as a three-layer architecture using the layered pattern to improve reusability and facilitate maintenance. The functionality and flow of the core components of the three-layer architecture are expressed with the process line diagrams and assembly line diagrams of the Eriksson-Penker Extension Model (EPEM), an extension of UML. The implementation uses the Supply Chain Management (SCM) domain with a Web-based user interface. The DIIS supports query caching and query reuse through a Query Function Manager (QFM) and a Query Function Repository (QFR), enhancing query processing speed and reusability by caching frequently used queries and optimizing query cost. It resolves the diverse heterogeneity problems by mapping between a MetaData Registry (MDR) based on ISO/IEC 11179 and a Schema Repository (SCR).
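
The sketch below illustrates the general query-caching idea behind such a QFM/QFR pair; the class and method names are illustrative, not taken from the paper, and a real mediator would also normalize queries semantically and invalidate entries when sources change.

```python
# Sketch of query caching for a mediator: normalize the query text,
# reuse a cached result if it is fresh, otherwise dispatch to the
# underlying source systems and cache the answer.
import time

class QueryCache:
    def __init__(self, ttl_seconds=300):
        self.ttl = ttl_seconds
        self.store = {}                       # normalized query -> (timestamp, result)

    @staticmethod
    def normalize(sql):
        return " ".join(sql.lower().split())  # collapse case and whitespace

    def execute(self, sql, run_query):
        key = self.normalize(sql)
        hit = self.store.get(key)
        if hit and time.time() - hit[0] < self.ttl:
            return hit[1]                     # cache hit: reuse stored result
        result = run_query(sql)               # dispatch to mediator/wrappers
        self.store[key] = (time.time(), result)
        return result

cache = QueryCache()
rows = cache.execute("SELECT * FROM orders", lambda q: ["...rows..."])
rows = cache.execute("select *   from ORDERS", lambda q: ["...rows..."])  # cache hit
```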

Development of an Object-Oriented Framework Data Update System (객체 기반의 기본지리정보 갱신시스템 개발)

  • Lee, Jin-Soo;Choi, Yun-Soo;Seo, Chang-Wan;Jeon, Chang-Dong
    • Journal of the Korean Association of Geographic Information Studies
    • /
    • v.11 no.1
    • /
    • pp.31-44
    • /
    • 2008
  • The first-phase framework data of the National Geographic Information Systems (NGIS) were built from 1:5,000 digital maps on a five-year update cycle, so they lack current information; this is a significant factor hindering the use of framework data. This study proposed an efficient technical method for location-based object data management and a system implementation for updating framework data. First, we performed object-oriented data modeling and database design using a location-based Unique Feature IDentifier (UFID). Second, using UML (Unified Modeling Language), we developed a system with functions such as location-based UFID creation, input and output, spatial and attribute data editing, and object-based data processing. Finally, we applied the system to the study area and obtained high-quality data with 99% accuracy and a 35% saving in personnel expenses compared with the previous method. We expect this study to contribute to the maintenance of national framework data and to the revitalization of various GIS markets by providing users with up-to-date framework data, and that methods for feature-change modeling and monitoring can be developed on top of this object-based data management.
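
As a rough illustration of the idea of a location-based identifier, the sketch below packs a feature-class code together with a coordinate-derived key; the exact encoding is hypothetical, not the paper's UFID scheme.

```python
# Hypothetical location-based feature identifier: a feature-class code
# plus a gridded-coordinate key, so an object can be tracked across
# updates of the framework data.
def make_ufid(feature_class: str, lon: float, lat: float, serial: int) -> str:
    # Grid coordinates to ~1 m so the location part is stable
    # against sub-metre jitter between survey epochs.
    grid_x = int(round((lon + 180) * 100_000))
    grid_y = int(round((lat + 90) * 100_000))
    return f"{feature_class}-{grid_x:08d}{grid_y:08d}-{serial:04d}"

print(make_ufid("RD", 127.0246, 37.5326, 1))   # e.g. RD-3070246012753260-0001
```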

Wavelet Thresholding Techniques to Support Multi-Scale Decomposition for Financial Forecasting Systems

  • Shin, Taeksoo;Han, Ingoo
    • Proceedings of the Korea Database Society Conference
    • /
    • 1999.06a
    • /
    • pp.175-186
    • /
    • 1999
  • Detecting significant patterns in historical data is crucial to good performance, especially in time-series forecasting. Recently, data filtering (or multi-scale decomposition) methods such as wavelet analysis have come to be considered more useful than other methods for handling time series that contain strong quasi-cyclical components, because wavelet analysis in theory extracts better local information at different time intervals from the filtered data. Wavelets can process information effectively at different scales. This implies inherent support for multiresolution analysis, which suits time series that exhibit self-similar behavior across different time scales. The local properties of wavelets are particularly useful for describing signals with sharp, spiky, discontinuous, or fractal structure, as found in financial markets viewed through chaos theory, and they allow the removal of noise-dependent high frequencies while conserving the signal-bearing high-frequency terms. Wavelet analysis is increasingly being applied to many different fields. In this study, we focus on wavelet thresholding criteria and techniques that support multi-signal decomposition for financial time-series forecasting, and we apply them to forecasting the Korean Won/U.S. Dollar currency market as a case study. One of the most important problems in applying such filtering is the correct choice of filter type and filter parameters: if the threshold is too small or too large, the wavelet shrinkage estimator will tend to overfit or underfit the data. The threshold is often selected arbitrarily or by adopting a theoretical or statistical criterion, and new, versatile techniques for this problem have recently been introduced. First, we analyze thresholding and filtering methods based on wavelet analysis that use multi-signal decomposition algorithms within neural network architectures for complex financial markets. Second, by comparing the results of different filtering techniques, we present filtering criteria of wavelet analysis that support neural network learning optimization, and we analyze the critical issues in optimal filter design, namely finding the optimal filter parameters for extracting significant input features for the forecasting model. Finally, drawing on theoretical and experimental work on wavelet thresholding criteria, we propose the design of an optimal wavelet for representing a given signal in forecasting models, in particular well-known neural network models.
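
The sketch below shows one of the thresholding criteria this line of work draws on, the universal (VisuShrink) soft threshold, applied to a synthetic series rather than the KRW/USD data; PyWavelets is assumed to be available, and the denoised output would feed the forecasting model's inputs.

```python
# Wavelet soft thresholding with the universal (VisuShrink) threshold:
# decompose, estimate the noise scale from the finest detail level,
# shrink the detail coefficients, and reconstruct the filtered series.
import numpy as np
import pywt

rng = np.random.default_rng(1)
t = np.linspace(0, 1, 1024)
signal = np.sin(8 * np.pi * t) + 0.3 * rng.standard_normal(t.size)

coeffs = pywt.wavedec(signal, "db4", level=4)    # multi-scale decomposition
sigma = np.median(np.abs(coeffs[-1])) / 0.6745   # noise scale via MAD of finest details
thr = sigma * np.sqrt(2 * np.log(signal.size))   # universal threshold
coeffs[1:] = [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
denoised = pywt.waverec(coeffs, "db4")           # filtered series for the model
```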

Susceptibility Mapping of Umyeonsan Using Logistic Regression (LR) Model and Post-validation through Field Investigation (로지스틱 회귀 모델을 이용한 우면산 산사태 취약성도 제작 및 현장조사를 통한 사후검증)

  • Lee, Sunmin;Lee, Moung-Jin
    • Korean Journal of Remote Sensing
    • /
    • v.33 no.6_2
    • /
    • pp.1047-1060
    • /
    • 2017
  • In recent years, global warming has continued and abnormal weather phenomena have occurred frequently; in the 21st century in particular, the intensity and frequency of hydrological disasters are increasing with regional hydrological trends. Since the damage caused by disasters in urban areas is likely to be extreme, landslide susceptibility maps are needed to predict and prepare for future damage. In this study, we therefore analyzed landslide susceptibility using a logistic regression model and assessed post-landslide management through a field survey. Landslide areas were extracted from aerial photographs and from interpretation of the field survey data collected by the local government at the time of the landslides. Landslide-related factors were extracted from topographical maps generated from aerial photographs and from a forest map. A logistic regression (LR) model was used within a geographic information system (GIS) to identify areas where landslides are likely to occur. A landslide susceptibility map was constructed by applying the LR model to a spatial database built from a total of 13 factors affecting landslides. Validation using the receiver operating characteristic (ROC) curve gave an accuracy of 77.79% for the logistic model. In addition, a field investigation was performed to verify how the landslides had been managed afterwards. The results of this study can provide urban governments with a scientific basis for policy recommendations on urban landslide management.
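
A minimal sketch of this LR susceptibility workflow follows, with synthetic stand-in data rather than the Umyeonsan database: each cell carries 13 conditioning factors and a landslide label, the fitted probabilities serve as the susceptibility map, and the ROC AUC plays the role of the validation score.

```python
# Logistic-regression susceptibility sketch on synthetic data:
# 13 conditioning factors per cell, binary landslide label,
# predicted probability = susceptibility, validated by ROC AUC.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
X = rng.standard_normal((2000, 13))              # 13 landslide-related factors
y = (X[:, 0] + 0.5 * X[:, 1] + rng.standard_normal(2000) > 1).astype(int)

model = LogisticRegression(max_iter=1000).fit(X, y)
susceptibility = model.predict_proba(X)[:, 1]    # per-cell landslide probability
print("ROC AUC:", round(roc_auc_score(y, susceptibility), 3))
```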

Building the Data Mart on Antibiotic Usage for Infection Control (감염관리를 위한 항생제 사용량 데이터마트의 구축)

  • Rheem, Insoo
    • Korean Journal of Clinical Laboratory Science
    • /
    • v.48 no.4
    • /
    • pp.348-354
    • /
    • 2016
  • Data stored in hospital information systems have great potential to improve adequacy assessment and quality management, and establishing a data warehouse is known to improve quality management and to help clinicians. This study constructed a data mart for analyzing antibiotic usage as part of a systematic and effective analysis of infection control information. Metadata were designed using the XML DTD method after selecting the components and evaluation measures for infection control. An OLAP tool, for multidimensional analysis of antibiotic usage, was developed by building a data mart through modeling. Experimental data were drawn from the antibiotic usage records of a university hospital in the Cheonan area for one month (July 1997). The major components of the infection control metadata were antibiotic resistance information, antibiotic usage information, infection information, laboratory test information, patient information, and infection-related costs. Among these, the data mart was constructed by designing a database that organizes the antibiotic usage information in a star schema. In addition, OLAP was demonstrated by calculating one month of antibiotic usage statistics. This study reports the development of a data mart on antibiotic usage for infection control through the application of XML and OLAP techniques. Building a conceptually structured data mart would allow rapid delivery and diverse analysis of infection control information.
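
A minimal sketch of such a star schema and one OLAP-style roll-up over it is given below; the table and column names are illustrative, not taken from the paper.

```python
# Star schema sketch: a fact table of antibiotic usage events keyed to
# dimension tables, plus one roll-up query of the kind an OLAP cube
# over this schema would answer.
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE dim_antibiotic (abx_id INTEGER PRIMARY KEY, name TEXT, class TEXT);
CREATE TABLE dim_ward       (ward_id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE fact_usage     (abx_id INTEGER, ward_id INTEGER,
                             use_date TEXT, ddd REAL);  -- defined daily doses
INSERT INTO dim_antibiotic VALUES (1, 'ceftriaxone', 'cephalosporin');
INSERT INTO dim_ward VALUES (1, 'ICU');
INSERT INTO fact_usage VALUES (1, 1, '1997-07-01', 2.0), (1, 1, '1997-07-02', 1.5);
""")
# Roll up usage by antibiotic class and ward.
for row in con.execute("""
    SELECT a.class, w.name, SUM(f.ddd)
    FROM fact_usage f
    JOIN dim_antibiotic a USING (abx_id)
    JOIN dim_ward w USING (ward_id)
    GROUP BY a.class, w.name"""):
    print(row)
```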

Developmental disability Diagnosis Assessment Systems Implementation using Multimedia Authorizing Tool (멀티미디어 저작도구를 이용한 발달장애 진단.평가 시스템 구현연구)

  • Byun, Sang-Hea;Lee, Jae-Hyun
    • Asia-Pacific Journal of Business Venturing and Entrepreneurship
    • /
    • v.3 no.1
    • /
    • pp.57-72
    • /
    • 2008
  • In this paper, we combine specialists' domain knowledge with computer application techniques and diagnosis and assessment data to construct a developmental disability diagnosis and assessment system. Such a system must continuously supply the specialized information that specialists hold, and an expert system for developmental disability diagnosis and assessment needs multimedia data processing that is further specialized and itemized for classification, diagnosis, and decision-making. The system studied in this paper provides quick feedback on results, reduces recording and calculation mistakes, shortens test administration time, and is efficient enough that non-professionals with little training can use it easily. However, the multimedia information that is essential for building such a system has attributes of many kinds; if a person must describe every piece of diagnosis and assessment information manually, a great amount of work is involved, and descriptions of the same data can differ depending on who manages them. Because of these problems, we applied content-based retrieval technology, which searches for related information by the content of the target data, in developing the multimedia data processing for diagnosis and assessment. A typical access method for interactive data processing that supports fast image search extracts the features of the data as an N-dimensional vector, stores that vector in a database, and uses a tree-based index structure to retrieve relevant data. However, such techniques do not exactly fit the purpose of developmental disability diagnosis and assessment, because they were developed for application fields that use low-dimensional data, such as spatial databases and geographic information systems. Therefore, we studied a new storage structure and index mechanism that support fast search over large volumes of diagnostic data.
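
The sketch below illustrates the general feature-vector-plus-tree-index retrieval pattern the abstract describes; a k-d tree from SciPy stands in for whatever index structure the authors actually built, and the feature vectors are random placeholders.

```python
# Content-based retrieval sketch: reduce each multimedia record to an
# N-dimensional feature vector and answer nearest-neighbour queries
# through a tree index.
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(0)
features = rng.random((1000, 16))        # 16-D feature vectors for 1000 records
index = cKDTree(features)                # tree index over the vectors

query = rng.random(16)                   # feature vector of a new record
dist, idx = index.query(query, k=5)      # 5 most similar stored records
print("nearest record ids:", idx)
```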

Characteristics of Exposure to High-Risk Substances in the Electronics Industry Using the Work Environment Survey and Work Environment Measurement Database (2018~2022) in South Korea -Dichloromethane, Trichloromethane, and Tetramethylammonium Hydroxide- (작업환경실태조사 및 작업환경측정자료(2018~2022) 결과를 활용한 우리나라 전자산업에서의 고위험물질 노출 특성 -디클로로메탄, 트리클로로메탄, 수산화테트라메틸암모늄 중심으로-)

  • Sung Ho Hwang;Seunhon Ham;Hyoung-Ryoul Kim;Hyunchul Ryu;Jinsoo An;JinHa Yoon;Chungsik Yoon;Naeun Lee;Sangman Lee;Jaehwan Lee;Se Young Kwon;Jaepil Chang;Kwonchul Ha
    • Journal of Environmental Health Sciences
    • /
    • v.50 no.3
    • /
    • pp.221-228
    • /
    • 2024
  • Background: Social interest is increasing due to frequent accidents caused by chemicals in the electronics industry. Objectives: The purpose of this study is to present a management plan by evaluating the exposure characteristics of dichloromethane (DCM), trichloromethane (TCM), and tetramethylammonium hydroxide (TMAH), high-risk substances to which workers may be exposed in the electronics industry in South Korea. Methods: To investigate the companies handling the hazardous chemicals DCM, TCM, and TMAH and the status of that handling, the handling of the three substances was classified based on electronics industry-related codes from the 2019 Work Environment Survey (chemical handling and manufacturing) data, combined with five years of work environment measurement results. Results: DCM, TCM, and TMAH are commonly used as cleaning agents in the electronics industry. For DCM, work environment measurement results exceeded the exposure standard in every year from 2018 to 2021, but not in 2022. Conclusions: Identifying the distribution channels of hazardous chemicals is an intervention point for reducing exposure. Management is required through tracking systems, such as unique verification numbers at the import and manufacturing stages, together with proper training of, and related support for, the business managers who handle chemicals.
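
As a small illustration of the exceedance check in the Results, the sketch below flags measurements above an occupational exposure limit; the measurement values and the limits used (50 ppm for DCM and 10 ppm for TCM, 8-h TWA) are assumptions for illustration and should be checked against the current Korean standards.

```python
# Flag work-environment measurements that exceed an assumed
# occupational exposure limit (illustrative values only).
import pandas as pd

oel_ppm = {"DCM": 50, "TCM": 10}                 # assumed 8-h TWA limits
df = pd.DataFrame({
    "substance": ["DCM", "DCM", "TCM"],
    "year":      [2019, 2022, 2020],
    "twa_ppm":   [72.0, 31.0, 4.2],              # hypothetical measurements
})
df["exceeds_oel"] = df.apply(lambda r: r.twa_ppm > oel_ppm[r.substance], axis=1)
print(df)
```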