• Title/Summary/Keyword: Current Minimization

236 search results

A Study on the Development of Mobile APP Usability Evaluation Tool for Time Management (시간관리 모바일 앱 사용성 평가도구 개발에 관한 연구)

  • Lee, Saem;Nam, Won-Suk
    • Journal of the Korea Convergence Society / v.12 no.12 / pp.19-29 / 2021
  • Although successive industrial revolutions and technological developments have freed up time for modern people, more than 75% of them report suffering from a "lack of time". This paper therefore argues that mobile app services for efficient time management need improvement, and presents a study on the development of a usability evaluation tool. First, the current state of the field was analyzed through case studies of six time management apps with different purposes; usability research then established five evaluation principles: operability, cognition, user suitability, minimization of work errors, and sharing. On this basis, a draft set of usability evaluation items for time management mobile apps was prepared, and a total of 44 evaluation tool items were derived through three rounds of Delphi surveys of design experts with five to ten years of experience. Intuitive screen and menu composition and the user's perception of content were rated as the most important factors, and the study concludes that the structure and function of the main content screen should be improved for error prevention and usability. This study should not only contribute to improving the usability of existing time management mobile apps but also serve as source material for designing an integrated time management platform for efficient time management, and as an academic reference.

Optimization of Agri-Food Supply Chain in a Sustainable Way Using Simulation Modeling

  • Vostriakova, Viktorija;Kononova, Oleksandra;Kravchenko, Sergey;Ruzhytskyi, Andriy;Sereda, Nataliia
    • International Journal of Computer Science & Network Security / v.21 no.3 / pp.245-256 / 2021
  • Poor logistics infrastructure and agri-food supply chain management lead to significant food waste in the logistics system. The concept of sustainable value-added agri-food chains requires a defined approach to analyzing the existing situation, identifying possible improvement strategies, and assessing the impact of these changes on further development. The purpose of this research is to provide scientific substantiation of theoretical and methodological principles and to develop practical recommendations for improving the agri-food logistics distribution system. A case study methodology is used, with a research framework based on four steps: Value Stream Mapping (VSM), gap and process analysis, validation and definition of improvement areas, and simulation modeling. The paper demonstrates the appropriateness of using LEAN logistics tools, in particular VSM, for minimizing logistic losses, and of simulation modeling for estimating the results of possible improvements to the logistics distribution system. An algorithm for VSM analysis of the agri-food supply chain is proposed, which optimizes the chain by implementing the principles of sustainable development at each stage. A methodical approach to analyzing possible ways of optimizing the agri-food distribution logistics system is developed; it applies Value Stream Mapping, i.e. designing value stream maps of the agri-food supply chain for the current and future state, based on the minimization of logistic losses. Simulation modeling determined the time-optimization effect of the investment project on the agri-food supply chain and the economic effect of the proposed improvements to the logistics product distribution system at the level of the investigated agricultural enterprise.
Improved logistics planning and coordination of operations in the supply chain, together with the proposed innovative pre-cooling system, have a 3-year payback period with a probability of almost 75-80%. The VSM analysis of losses in the agri-food supply chain identified the main points where optimization changes are advisable for achieving positive results, and the significant economic effect of the proposed measures was confirmed. Further studies should focus on identifying the synergistic effect of agri-food supply chain optimization on the basis of sustainable development.
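The reported 3-year payback can be checked with a simple undiscounted payback calculation; the cash-flow figures below are hypothetical, for illustration only, and are not from the study:

```python
def payback_period(initial_investment, annual_cash_flows):
    """Simple (undiscounted) payback period: years until cumulative
    cash flow recovers the initial investment; fractional final year."""
    cumulative = 0.0
    for year, cash in enumerate(annual_cash_flows, start=1):
        if cumulative + cash >= initial_investment:
            # interpolate within the final year
            return year - 1 + (initial_investment - cumulative) / cash
        cumulative += cash
    return None  # never paid back within the horizon

# Hypothetical figures: a pre-cooling system costing 300,000
# that saves 100,000 per year
print(payback_period(300_000, [100_000] * 5))  # → 3.0
```

A discounted variant would divide each year's cash flow by (1 + r)^year before accumulating; the study does not state which form it used.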

A Study on the Land-Use Related Assessment Factors in Korean Environmental Impact Assessment (환경영향평가 토지환경 분야의 토지이용 평가항목 고찰 연구)

  • Park, Sang-Jin;Lee, Dong Kun;Jeong, Seulgi
    • Journal of Environmental Impact Assessment / v.30 no.5 / pp.297-304 / 2021
  • The environmental impact assessment (EIA) system in Korea has undergone changes and revisions to its various evaluation items over roughly 30 years since the introduction of the Environmental Conservation Act (1997). However, despite the importance of the land-use evaluation items under the current EIA Act, few studies have examined them. This study therefore focused on the land-use evaluation items in the EIA guidelines, reviewed 90 assessment and consultation documents, and sought to suggest implications and supplementary points for the domestic EIA land-use evaluation items. The results show that the paradigm has been shifting from land efficiency centered on development toward land efficiency centered on natural-environment and resource conservation. However, despite a manual reflecting this paradigm change, consultation documents still raise opinions on natural-environment conservation, so improvement is needed. Two improvements to the impact assessment process are suggested: the establishment of standardized spatial data, and a quantitative tool, built on those data, for evaluating impacts and reduction measures. In particular, a plan evaluation tool for land-use arrangement and distribution is needed that can both minimize damage to the natural environment and secure green space and a green network.

The Necessity of Resetting the Filter Criteria for the Minimization of Dose Creep in Digital Imaging Systems (디지털 영상 시스템에서 선량 크리프 최소화를 위한 부가 필터 두께 권고 기준의 재설정에 대한 연구)

  • Kim, Kyo Tae;Kim, Kum Bae;Kang, Sang Sik;Park, Ji Koon
    • Journal of the Korean Society of Radiology / v.13 no.5 / pp.757-763 / 2019
  • Following the recent development of flat panel detectors (FPDs) with wide dynamic ranges, increasing numbers of healthcare providers have begun to use digital radiography. As a result, added-filter thickness standards should be re-established: current clinical practice uses the thicknesses recommended by the National Council on Radiation Protection and Measurements, which are based on data acquired with conventional analog systems. Here we investigated the possibility of minimizing dose creep and optimizing patient dose using Al filters in digital radiography. Thicker Al filters yielded up to a 19.3% reduction in entrance skin exposure when medical images with similar sharpness were compared. However, resolution, a critical imaging factor, changed significantly, by 1.01 lp/mm. This change is attributed to the increased scattered radiation generated in the object by the X-ray beam hardening effect, and the increase in scattered rays was verified using the scattering degradation factor. The FPD, however, has a greater response to radiation than analog devices and a wide dynamic range. It is therefore expected to maintain an appropriate level of resolution despite the filter-thickness-dependent increase in the scattered-ray content ratio, and its use should minimize dose creep by reducing the exposure dose.
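To first order, the dose reduction from added filtration follows the textbook narrow-beam attenuation law. The sketch below is illustrative only: the attenuation coefficient is an assumed placeholder (the real value depends on kVp and beam hardening), and the study's figures come from FPD measurements, not this model:

```python
import math

def transmitted_fraction(mu_per_mm, thickness_mm):
    """Narrow-beam exponential attenuation: I/I0 = exp(-mu * x)."""
    return math.exp(-mu_per_mm * thickness_mm)

# Assumed effective linear attenuation coefficient for Al of
# 0.1 /mm at a typical diagnostic beam quality (hypothetical value).
mu = 0.1  # 1/mm, assumed
for extra_mm in (0.0, 1.0, 2.0):
    print(extra_mm, round(transmitted_fraction(mu, extra_mm), 3))
```

In a broad clinical beam, scatter buildup and spectral hardening make the true behavior deviate from this simple exponential, which is part of the effect the abstract describes.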

Changes and Improvements of the Standardized Eddy Covariance Data Processing in KoFlux (표준화된 KoFlux 에디 공분산 자료 처리 방법의 변화와 개선)

  • Kang, Minseok;Kim, Joon;Lee, Seung-Hoon;Kim, Jongho;Chun, Jung-Hwa;Cho, Sungsik
    • Korean Journal of Agricultural and Forest Meteorology / v.20 no.1 / pp.5-17 / 2018
  • The standardized eddy covariance flux data processing in KoFlux has been updated, and its database has been amended accordingly. KoFlux data users, however, have not been properly informed of these changes or of their likely impact on analyses. In this paper, we document how the current data processing structure in KoFlux was established through these changes and improvements, to ensure the transparency, reliability, and usability of the KoFlux database. Due to the increasing diversity and complexity of flux site instrumentation and organization, we have re-implemented previously ignored or simplified procedures (e.g., frequency response correction, the stationarity test) and added new methods for CH₄ flux gap-filling and CO₂ flux correction and partitioning. To evaluate the effects of these changes, we processed data measured at a flat, homogeneous paddy field (HPK) and at a deciduous forest in complex, heterogeneous topography (GDK), and quantified the differences. The overall assessment confirms that (1) the frequency response correction (biases of 11~18% in annually integrated values at HPK; 6~10% at GDK) and the stationarity test (4~19% at HPK; 9~23% at GDK) are important for quality control, and (2) minimizing missing data and choosing an appropriate driver (rather than the choice of gap-filling method) are important for reducing the uncertainty of gap-filled fluxes. These results suggest future directions for data processing development to ensure the continuity of the long-term KoFlux database.
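A common form of the steady-state (stationarity) test used in eddy covariance quality control compares the covariance over the full averaging interval with the mean covariance over shorter subintervals. This is a minimal sketch under that common formulation, not the KoFlux implementation:

```python
import numpy as np

def stationarity_ratio(w, c, n_sub=6):
    """Steady-state test commonly used in eddy covariance QC:
    compare the covariance over the whole interval with the mean
    of covariances over n_sub subintervals. A relative difference
    below ~30% is usually taken as stationary."""
    w, c = np.asarray(w, float), np.asarray(c, float)
    cov_whole = np.mean((w - w.mean()) * (c - c.mean()))
    sub_covs = []
    for ws, cs in zip(np.array_split(w, n_sub), np.array_split(c, n_sub)):
        sub_covs.append(np.mean((ws - ws.mean()) * (cs - cs.mean())))
    cov_sub = np.mean(sub_covs)
    return abs((cov_sub - cov_whole) / cov_whole)

# Usage sketch with synthetic (not KoFlux) data:
rng = np.random.default_rng(0)
w = rng.normal(size=18000)            # vertical wind fluctuations
c = 0.5 * w + rng.normal(size=18000)  # correlated scalar
print(stationarity_ratio(w, c) < 0.3)  # stationary series should pass
```

Real QC schemes typically map the ratio to quality flags (e.g., fine/usable/rejected bands) rather than a single pass/fail threshold.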

Ensemble Learning with Support Vector Machines for Bond Rating (회사채 신용등급 예측을 위한 SVM 앙상블학습)

  • Kim, Myoung-Jong
    • Journal of Intelligence and Information Systems / v.18 no.2 / pp.29-45 / 2012
  • Bond rating is an important event for measuring the financial risk of companies and determining investors' returns, so predicting companies' credit ratings with statistical and machine learning techniques has been a popular research topic. Statistical techniques traditionally used in bond rating include multiple regression, multiple discriminant analysis (MDA), logistic models (LOGIT), and probit analysis. One major drawback, however, is that they rest on strict assumptions: linearity, normality, independence among predictor variables, and pre-existing functional forms relating the criterion variables to the predictor variables. These strict assumptions have limited their application to the real world. Machine learning techniques used in bond rating prediction include decision trees (DT), neural networks (NN), and Support Vector Machines (SVM). SVM in particular is recognized as a new and promising classification and regression method. SVM learns a separating hyperplane that maximizes the margin between two categories; it is simple enough to be analyzed mathematically, yet performs well in practice. SVM implements the structural risk minimization principle and minimizes an upper bound on the generalization error. In addition, the SVM solution may be a global optimum, so overfitting is unlikely to occur, and SVM does not require many training samples, since it builds prediction models using only the representative samples near the boundaries, called support vectors. Numerous experimental studies have shown that SVM has been successfully applied in a variety of pattern recognition fields. However, there are three major drawbacks that can degrade SVM's performance.
First, SVM was originally proposed for binary classification. Methods for combining SVMs for multi-class classification, such as One-Against-One and One-Against-All, have been proposed, but they do not perform as well on multi-class problems as SVM does on binary classification. Second, approximation algorithms (e.g., decomposition methods, the sequential minimal optimization algorithm) can reduce the computation time of multi-class problems, but they may deteriorate classification performance. Third, multi-class prediction suffers from the class imbalance problem, which occurs when the number of instances in one class greatly outnumbers that in another; such data sets skew the decision boundary toward a default classifier and thus reduce classification accuracy. SVM ensemble learning is one machine learning approach for coping with these drawbacks. Ensemble learning improves the performance of classification and prediction algorithms; AdaBoost is one of the most widely used ensemble techniques. It constructs a composite classifier by sequentially training classifiers while increasing the weight on misclassified observations across iterations: observations incorrectly predicted by previous classifiers are chosen more often than correctly predicted ones, so boosting produces new classifiers that better predict the examples on which the current ensemble performs poorly. In this way, it can reinforce the training of misclassified observations of the minority class. This paper proposes multiclass Geometric Mean-based Boosting (MGM-Boost) to address the multiclass prediction problem.
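The AdaBoost re-weighting step described above can be sketched as follows. This is the classic discrete AdaBoost update for binary labels, shown for illustration; MGM-Boost's multiclass variant differs in detail:

```python
import numpy as np

def adaboost_weight_update(sample_weights, y_true, y_pred):
    """One AdaBoost round: up-weight misclassified samples so the
    next classifier focuses on them (discrete AdaBoost, binary)."""
    miss = (y_true != y_pred)
    err = np.sum(sample_weights[miss]) / np.sum(sample_weights)
    alpha = 0.5 * np.log((1 - err) / err)  # classifier vote weight
    new_w = sample_weights * np.exp(alpha * np.where(miss, 1.0, -1.0))
    return new_w / new_w.sum(), alpha

w = np.full(4, 0.25)
y = np.array([1, 1, -1, -1])
pred = np.array([1, -1, -1, -1])  # one mistake
w, alpha = adaboost_weight_update(w, y, pred)
print(w)  # the misclassified sample now carries more weight
```

The final ensemble prediction is the sign of the alpha-weighted sum of the individual classifiers' votes.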
Because MGM-Boost introduces the notion of the geometric mean into AdaBoost, its learning process can consider geometric mean-based accuracy and errors across classes. This study applies MGM-Boost to a real-world bond rating case for Korean companies to examine its feasibility. 10-fold cross-validation is performed three times with different random seeds to ensure that the comparison among the three classifiers does not happen by chance: for each run, the entire data set is partitioned into ten equal-sized sets, and each set in turn serves as the test set while the classifier is trained on the other nine. That is, cross-validated folds are tested independently for each algorithm, yielding results for each classifier over 30 experiments. In arithmetic mean-based prediction accuracy, MGM-Boost (52.95%) outperforms both AdaBoost (51.69%) and SVM (49.47%); in geometric mean-based prediction accuracy, MGM-Boost (28.12%) also outperforms AdaBoost (24.65%) and SVM (15.42%). A t-test examining whether the classifiers' performance over the 30 folds differs significantly indicates that MGM-Boost differs from the AdaBoost and SVM classifiers at the 1% level. These results suggest that MGM-Boost can provide robust and stable solutions to multi-class problems such as bond rating.
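Geometric mean-based accuracy is commonly defined as the geometric mean of per-class recalls, which penalizes weak performance on any single (e.g., minority) class. A minimal sketch under that common definition, not necessarily the paper's exact formulation; the rating labels are hypothetical:

```python
import numpy as np

def geometric_mean_accuracy(y_true, y_pred):
    """Geometric mean of per-class recalls: a single bad class
    drags the score down (and a zero-recall class makes it 0)."""
    classes = np.unique(y_true)
    recalls = [np.mean(y_pred[y_true == c] == c) for c in classes]
    return float(np.prod(recalls) ** (1.0 / len(classes)))

y_true = np.array(["AA", "AA", "BB", "BB", "CC", "CC"])
y_pred = np.array(["AA", "AA", "BB", "CC", "CC", "CC"])
print(geometric_mean_accuracy(y_true, y_pred))  # (1 * 0.5 * 1) ** (1/3)
```

This sensitivity to the worst class is why the geometric-mean gap between MGM-Boost and SVM in the abstract (28.12% vs. 15.42%) is wider than the arithmetic-mean gap.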