• Title/Summary/Keyword: Auto-Management


The Study of Prevalence Rate of Refractive Error among the Primary Students in Jeollanamdo (전남지역 초등학생의 굴절이상 유병률에 관한 연구)

  • Jang, Jung Un;Park, Inn-Jee
    • Journal of Korean Ophthalmic Optics Society / v.20 no.3 / pp.311-318 / 2015
  • Purpose: This study investigated the prevalence of refractive error, by gender and age, based on the presenting visual acuity of primary school students in Jeonnam. Methods: Subjective refraction, objective refraction, and visual acuity tests were performed on 735 primary school children aged 8~13 years living in Jeonnam. Presenting visual acuity was measured with Han's visual acuity chart, and objective refraction was carried out with an auto-refractometer. Results: Presenting visual acuity was 0.1 or worse in the eyes of 54 (7.3%) students, and 49 (7.3%) of them wore glasses. The rate of wearing glasses was 79.3% at 0.125~0.25 visual acuity, 64.2% at 0.3~0.5, and 61.6% at 0.6~0.8. Of the subjects, 269 (36.6%) were emmetropic, 321 (43.7%) were myopic, 56 (7.6%) were hyperopic, and 89 (12.1%) were astigmatic. The prevalence of myopia was the highest, followed by that of astigmatism. Low-degree myopia was the most common, and medium-degree myopia tended to increase as the students got older, while the prevalence of hyperopia tended to decrease with age. With-the-rule astigmatism was found in 50.6% of astigmatic students and against-the-rule astigmatism in 48.3%. Conclusions: The present study reveals a considerable prevalence of refractive error, 466 (63.4%) of the included subjects, among primary school students in Jeonnam province. The rate of wearing glasses was 313 (42.6%). Because the prevalence of myopia increases as students get older, visual management of students through visual acuity tests and refractive examinations is considered necessary.
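The prevalence figures in this abstract can be reproduced from the reported raw counts; a minimal sketch (counts taken from the abstract above):

```python
# Refractive-error counts reported for the 735 examined students
total = 735
counts = {"emmetropia": 269, "myopia": 321, "hyperopia": 56, "astigmatism": 89}

def prevalence(n, total):
    """Prevalence as a percentage, rounded to one decimal place."""
    return round(100 * n / total, 1)

for condition, n in counts.items():
    print(f"{condition}: {n} ({prevalence(n, total)}%)")

# Any refractive error = all non-emmetropic subjects: 466 (63.4%)
with_error = total - counts["emmetropia"]
print(f"refractive error: {with_error} ({prevalence(with_error, total)}%)")
```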

Vertical Distribution of Vascular Plant Species along an Elevational Gradients in the Gyebangsan Area of Odaesan National Park (오대산국립공원 계방산지구 관속식물의 고도별 수직분포)

  • An, Ji-Hong;Park, Hwan-Jun;Nam, Gi-Heum;Lee, Byoung-Yoon;Park, Chan-Ho;Kim, Jung-Hyun
    • Korean Journal of Ecology and Environment / v.50 no.4 / pp.381-402 / 2017
  • In order to investigate the distribution of vascular plants along an elevational gradient in the Nodong valley of Gyebangsan, vascular plants were surveyed in eight sections at 100 m elevation intervals, from the auto-camping site (800 m) to the top of the mountain (1,577 m). A total of 382 taxa were recorded: 89 families, 234 genera, 339 species, 7 subspecies, 34 varieties, and 2 forms. Analysis of the pattern of species richness showed a reversed hump shape, with minimum richness at mid-high elevation. Analysis of habitat affinity types showed that the proportion of forest species increased with increasing elevation, whereas ruderal species decreased with increasing elevation and then increased at the top of the mountain. As for the proportion of life forms, annual herbs gradually decreased with increasing elevation, did not appear between 1,300 m and 1,500 m, and then increased at the top of the mountain. Trees gradually increased with elevation and decreased from 1,300~1,400 m. The vascular plants were divided into four groups by DCA, and the stands were arranged from right to left along axis I according to elevation. The distribution of vascular plants is determined by the species' own optimal vegetation ranges. In addition, temperature rise due to climate change affects the distribution, composition, and diversity of vascular plants. Therefore, continuous monitoring is necessary to confirm the ecological and environmental characteristics of the vegetation, its distribution ranges, and changes in habitat, and plans for conservation and management based on these data should be prepared in response to climate change.
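The richness pattern described above comes from counting distinct taxa per 100 m elevation section; a minimal sketch of that binning step, using invented survey records rather than the study's actual data:

```python
from collections import defaultdict

# Hypothetical survey records: (taxon, observed elevation in m).
# The study's actual survey covers 800-1,577 m in eight 100 m sections.
records = [
    ("Quercus mongolica", 850), ("Acer pseudosieboldianum", 870),
    ("Quercus mongolica", 1050), ("Tripterygium regelii", 1220),
    ("Abies nephrolepis", 1480), ("Rhododendron schlippenbachii", 1480),
]

def richness_by_band(records, base=800, width=100):
    """Count distinct taxa in each elevation band [base + k*width, base + (k+1)*width)."""
    bands = defaultdict(set)
    for taxon, elev in records:
        bands[(elev - base) // width].add(taxon)
    return {base + k * width: len(taxa) for k, taxa in sorted(bands.items())}

print(richness_by_band(records))  # e.g. {800: 2, 1000: 1, 1200: 1, 1400: 2}
```

Plotting these per-band counts against elevation is what reveals a hump-shaped (or, here, reversed hump-shaped) richness pattern.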

Design and Implementation of MongoDB-based Unstructured Log Processing System over Cloud Computing Environment (클라우드 환경에서 MongoDB 기반의 비정형 로그 처리 시스템 설계 및 구현)

  • Kim, Myoungjin;Han, Seungho;Cui, Yun;Lee, Hanku
    • Journal of Internet Computing and Services / v.14 no.6 / pp.71-84 / 2013
  • Log data, which record the multitude of information created when operating computer systems, are utilized in many processes, from carrying out computer system inspection and process optimization to providing customized user optimization. In this paper, we propose a MongoDB-based unstructured log processing system in a cloud environment for processing the massive amount of log data of banks. Most of the log data generated during banking operations come from handling a client's business. Therefore, in order to gather, store, categorize, and analyze the log data generated while processing the client's business, a separate log data processing system needs to be established. However, the realization of flexible storage expansion functions for processing a massive amount of unstructured log data and executing a considerable number of functions to categorize and analyze the stored unstructured log data is difficult in existing computer environments. Thus, in this study, we use cloud computing technology to realize a cloud-based log data processing system for processing unstructured log data that are difficult to process using the existing computing infrastructure's analysis tools and management system. The proposed system uses the IaaS (Infrastructure as a Service) cloud environment to provide a flexible expansion of computing resources and includes the ability to flexibly expand resources such as storage space and memory under conditions such as extended storage or rapid increase in log data. Moreover, to overcome the processing limits of the existing analysis tool when a real-time analysis of the aggregated unstructured log data is required, the proposed system includes a Hadoop-based analysis module for quick and reliable parallel-distributed processing of the massive amount of log data. 
Furthermore, because HDFS (Hadoop Distributed File System) stores data by generating copies of the block units of the aggregated log data, the proposed system offers automatic restore functions that allow the system to continue operating after it recovers from a malfunction. Finally, by establishing a distributed database using the NoSQL-based MongoDB, the proposed system provides methods of effectively processing unstructured log data. Relational databases such as MySQL have complex schemas that are inappropriate for processing unstructured log data, and their strict schemas make it difficult to add nodes when the stored data must be distributed across various nodes as the amount of data rapidly increases. NoSQL does not provide the complex computations that relational databases offer, but it can easily expand the database through node dispersion when the amount of data increases rapidly; it is a non-relational database with a structure appropriate for processing unstructured data. NoSQL data models are usually classified as Key-Value, column-oriented, and document-oriented types. Of these, the representative document-oriented database, MongoDB, which has a free schema structure, is used in the proposed system. MongoDB is adopted because its flexible schema makes it easy to process unstructured log data, it facilitates flexible node expansion when the amount of data grows rapidly, and it provides an Auto-Sharding function that automatically expands storage. The proposed system is composed of a log collector module, a log graph generator module, a MongoDB module, a Hadoop-based analysis module, and a MySQL module. 
When the log data generated over the entire client business process of each bank are sent to the cloud server, the log collector module collects and classifies the data according to log type and distributes them to the MongoDB module and the MySQL module. The log graph generator module generates the results of the log analysis performed by the MongoDB module, the Hadoop-based analysis module, and the MySQL module per analysis time and type of aggregated log data, and provides them to the user through a web interface. Log data that require real-time analysis are stored in the MySQL module and provided in real time by the log graph generator module. The log data aggregated per unit time are stored in the MongoDB module and plotted in graphs according to the user's various analysis conditions. The aggregated log data in the MongoDB module are processed in a parallel, distributed manner by the Hadoop-based analysis module. A comparative evaluation against a log data processing system that uses only MySQL, measuring log data insert and query performance, demonstrates the proposed system's superiority. Moreover, an optimal chunk size is confirmed through the log data insert performance evaluation of MongoDB for various chunk sizes.
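The log collector's routing step described above (real-time logs to the MySQL module, unit-time aggregated logs to the MongoDB module) can be sketched as plain dispatch logic; the log types and store names here are illustrative assumptions, not the paper's actual code or schema:

```python
# Sketch of the log collector's classify-and-distribute step: logs assumed
# to need real-time analysis go to the MySQL store; everything else goes to
# the MongoDB store for unit-time aggregation (log types are invented).
REALTIME_TYPES = {"transaction", "error"}

mysql_store, mongodb_store = [], []  # stand-ins for the two database modules

def collect(log):
    """Classify one log record by type and dispatch it to the matching store."""
    target = mysql_store if log["type"] in REALTIME_TYPES else mongodb_store
    target.append(log)

for log in [
    {"type": "transaction", "msg": "transfer ok"},
    {"type": "access", "msg": "login page viewed"},
    {"type": "error", "msg": "timeout"},
]:
    collect(log)

print(len(mysql_store), len(mongodb_store))  # → 2 1
```

In the paper's system the MongoDB side additionally relies on auto-sharding to spread these documents across nodes as volume grows; this sketch keeps only the routing decision, which is the part the abstract specifies.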