• Title/Summary/Keyword: Optimized algorithm

Search Results: 1,815

A Comparison between Multiple Satellite AOD Products Using AERONET Sun Photometer Observations in South Korea: Case Study of MODIS, VIIRS, Himawari-8, and Sentinel-3 (우리나라에서 AERONET 태양광도계 자료를 이용한 다종위성 AOD 산출물 비교평가: MODIS, VIIRS, Himawari-8, Sentinel-3의 사례연구)

  • Kim, Seoyeon;Jeong, Yemin;Youn, Youjeong;Cho, Subin;Kang, Jonggu;Kim, Geunah;Lee, Yangwon
    • Korean Journal of Remote Sensing / v.37 no.3 / pp.543-557 / 2021
  • Because aerosols have different spectral characteristics depending on particle size and composition and on the satellite sensor, a comparative analysis of aerosol products from various satellite sensors is required. In South Korea, however, comprehensive long-term comparisons of the various official satellite AOD (Aerosol Optical Depth) products are not easily found. In this paper, we assess the performance of the AOD products from MODIS (Moderate Resolution Imaging Spectroradiometer), VIIRS (Visible Infrared Imaging Radiometer Suite), Himawari-8, and Sentinel-3 against AERONET (Aerosol Robotic Network) sun photometer observations for the period between January 2015 and December 2019. Seasonal and geographical characteristics of the accuracy of satellite AOD were also analyzed. The MODIS products, which have been accumulated over a long period and optimized by the new MAIAC (Multiangle Implementation of Atmospheric Correction) algorithm, showed the best accuracy (CC=0.836), followed by the products from VIIRS and Himawari-8. Sentinel-3 AOD, on the other hand, did not show good quality because, according to ESA (European Space Agency), the sensor was launched recently and its retrieval has not yet been sufficiently optimized. The AOD of MODIS, VIIRS, and Himawari-8 showed no significant difference in accuracy across seasons or between urban and non-urban regions, but a mixed-pixel problem was partly found in a few coastal regions. Because AOD is an essential input for atmospheric correction, the results of this study can serve as a reference for future atmospheric correction work for the Korean CAS (Compact Advanced Satellite) series.

A Study on Market Size Estimation Method by Product Group Using Word2Vec Algorithm (Word2Vec을 활용한 제품군별 시장규모 추정 방법에 관한 연구)

  • Jung, Ye Lim;Kim, Ji Hui;Yoo, Hyoung Sun
    • Journal of Intelligence and Information Systems / v.26 no.1 / pp.1-21 / 2020
  • With the rapid development of artificial intelligence technology, various techniques have been developed to extract meaningful information from unstructured text data, which constitutes a large portion of big data. Over the past decades, text mining technologies have been utilized in various industries for practical applications. In the field of business intelligence, text mining has been employed to discover new market and/or technology opportunities and support rational decision making by business participants. Market information such as market size, market growth rate, and market share is essential for setting companies' business strategies. There has been a continuous demand in various fields for product-level market information. However, such information has generally been provided at the industry level or in broad categories based on classification standards, making it difficult to obtain specific and proper information. In this regard, we propose a new methodology that can estimate the market sizes of product groups at more detailed levels than previously offered. We applied the Word2Vec algorithm, a neural-network-based semantic word embedding model, to enable automatic market size estimation from individual companies' product information in a bottom-up manner. The overall process is as follows: First, the data related to product information is collected, refined, and restructured into a form suitable for applying the Word2Vec model. Next, the preprocessed data is embedded into a vector space by Word2Vec, and then product groups are derived by extracting similar product names based on cosine similarity calculation. Finally, the sales data on the extracted products is summed to estimate the market size of the product groups. As experimental data, text data of product names from Statistics Korea's microdata (345,103 cases) were mapped into a multidimensional vector space by Word2Vec training.
We performed parameter optimization for training and then applied a vector dimension of 300 and a window size of 15 as the optimized parameters for further experiments. We employed index words of the Korean Standard Industry Classification (KSIC) as a product name dataset to cluster product groups more efficiently. The product names similar to KSIC indexes were extracted based on cosine similarity. The market size of the extracted products as one product category was calculated from individual companies' sales data. The market sizes of 11,654 specific product lines were automatically estimated by the proposed model. For performance verification, the results were compared with the actual market sizes of some items; the Pearson correlation coefficient was 0.513. Our approach has several advantages over previous studies. First, text mining and machine learning techniques were applied for the first time to market size estimation, overcoming the limitations of traditional sampling-based methods or methods that require multiple assumptions. In addition, the level of market category can be easily and efficiently adjusted according to the purpose of information use by changing the cosine similarity threshold. Furthermore, it has high potential for practical applications since it can resolve unmet needs for detailed market size information in the public and private sectors. Specifically, it can be utilized in technology evaluation and technology commercialization support programs conducted by governmental institutions, as well as in business strategy consulting and market analysis report publishing by private firms. The limitation of our study is that the presented model needs to be improved in terms of accuracy and reliability. The semantic word embedding module can be advanced by imposing a proper order on the preprocessed dataset or by combining another measure, such as Jaccard similarity, with Word2Vec.
Also, the methods of product group clustering can be changed to other types of unsupervised machine learning algorithm. Our group is currently working on subsequent studies and we expect that it can further improve the performance of the conceptually proposed basic model in this study.
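
The grouping-and-summation step described above can be sketched in a few lines. This is a minimal illustration only: the toy 3-dimensional vectors stand in for real trained Word2Vec embeddings, and the 0.8 threshold and sales figures are hypothetical.

```python
# Sketch of the bottom-up market size estimation step, assuming product-name
# vectors have already been produced by a trained Word2Vec model (replaced
# here by small toy vectors for illustration).
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def estimate_market_size(index_vec, products, threshold=0.8):
    """Sum sales of products whose embedding is similar to a KSIC index word."""
    return sum(sales for vec, sales in products
               if cosine(index_vec, vec) >= threshold)

# Toy data: (embedding, annual sales) pairs for individual companies' products.
products = [
    ([0.9, 0.1, 0.0], 120.0),    # similar to the index word
    ([0.88, 0.15, 0.05], 80.0),  # similar
    ([0.0, 0.1, 0.95], 200.0),   # dissimilar product line, excluded
]
index_word_vec = [1.0, 0.0, 0.0]
print(estimate_market_size(index_word_vec, products))  # 200.0 (= 120 + 80)
```

Raising or lowering the cosine threshold directly widens or narrows the product group, which is how the market-category level is adjusted in the approach above.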

Measurement of 18GHz Radio Propagation Characteristics in Subway Tunnel for Train-Wayside Multimedia Transmission (지하철 터널에서의 18GHz 무선영상신호 전파특성 측정)

  • Choi, Kyu-Hyoung;Seo, Myung-Sik
    • Journal of the Korean Society for Railway / v.15 no.4 / pp.364-369 / 2012
  • This paper presents an experimental study of radio propagation characteristics in subway tunnels in the 18GHz frequency band, which has been assigned to video transmission between train and wayside. The radio propagation tests were carried out in a Seoul Metro subway tunnel using the antenna and communication devices of a prototype video transmission system. The measurement results show that 18GHz radio propagation in subway tunnels has a smaller path loss than in a general outdoor radio environment. It is also shown that arch-type tunnels have smaller radio propagation losses than rectangular tunnels, and single-track tunnels have smaller path loss than double-track tunnels. From the measurements, the radio propagation coverage is determined to be 520 meters. Curved tunnels, which cannot maintain LOS communication between transmitter and receiver, show larger path losses and stronger fluctuation along distance; their radio propagation coverage is determined to be 300 meters. These results can be used to design 18GHz radio transmission systems for subway tunnels by providing optimized wayside transmitter locations and a handover algorithm customized to the radio propagation characteristics of subway tunnels.
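
For context, the free-space path loss that tunnel measurements are typically compared against follows the standard formula FSPL = 20·log10(4πdf/c). The 520 m distance below is the straight-tunnel coverage reported above; the formula itself is a textbook baseline, not the paper's measured tunnel model.

```python
# Free-space path loss baseline at the 18GHz video transmission band.
import math

def fspl_db(distance_m, freq_hz):
    """Free-space path loss in dB: 20*log10(4*pi*d*f/c)."""
    c = 299_792_458.0  # speed of light, m/s
    return 20 * math.log10(4 * math.pi * distance_m * freq_hz / c)

# Loss over the measured straight-tunnel coverage limit of 520 m:
print(round(fspl_db(520, 18e9), 1))  # ≈ 111.9 dB
```

The tunnel acting as an oversized waveguide is the usual explanation for the measured loss falling below this free-space figure.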

The Selection of Optimal Distributions for Distributed Hydrological Models using Multi-criteria Calibration Techniques (다중최적화기법을 이용한 분포형 수문모형의 최적 분포형 선택)

  • Kim, Yonsoo;Kim, Taegyun
    • Journal of Wetlands Research / v.22 no.1 / pp.15-23 / 2020
  • The purpose of this study is to investigate how the degree of distribution influences the calibration of snow and runoff in distributed hydrological models using a multi-criteria calibration method. The Hydrology Laboratory-Research Distributed Hydrologic Model (HL-RDHM) developed by the NOAA-National Weather Service (NWS) is employed to estimate optimized parameter sets. Three scenarios of model complexity are considered for estimating the best parameter sets: lumped, semi-distributed, and fully-distributed. For the case study, the Durango River Basin, Colorado, is selected as a study basin to consider both snow and water balance components. This basin lies in the mountainous western U.S. and consists of 108 Hydrologic Rainfall Analysis Project (HRAP) grid cells. Five parameters of the snow model and 13 parameters of the water balance model are calibrated with the Multi-Objective Shuffled Complex Evolution Metropolis (MOSCEM) algorithm. Model calibration and validation are conducted on 4km HRAP grids with 5 years (2001-2005) of meteorological data and observations. Through the case study, we show that snow and streamflow simulations are improved by multi-criteria calibration regardless of model complexity. In particular, we confirm that the semi- and fully-distributed models perform better than the lumped model. For the lumped model, the Root Mean Square Error (RMSE) values improve from the a priori parameter set by 35% on average for snow and 42% for runoff through multi-criteria calibration. For the semi- and fully-distributed models, the RMSE values improve by 40% for snow and 43% for runoff.
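
The reported RMSE improvements can be reproduced in principle with a small helper; the observation and simulation series below are invented toy numbers, not the Durango basin data.

```python
# RMSE and percent improvement between a-priori and calibrated simulations.
import math

def rmse(sim, obs):
    return math.sqrt(sum((s - o) ** 2 for s, o in zip(sim, obs)) / len(obs))

def improvement_pct(rmse_prior, rmse_calibrated):
    return 100.0 * (rmse_prior - rmse_calibrated) / rmse_prior

obs = [10.0, 12.0, 14.0]          # toy observed snow water equivalent
sim_prior = [13.0, 16.0, 19.0]    # run with a priori parameters
sim_calib = [10.5, 12.5, 14.5]    # run after multi-criteria calibration
r0, r1 = rmse(sim_prior, obs), rmse(sim_calib, obs)
print(round(improvement_pct(r0, r1), 1))  # ≈ 87.8 (percent)
```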

Optimization of Multi-reservoir Operation with a Hedging Rule: Case Study of the Han River Basin (Hedging Rule을 이용한 댐 연계 운영 최적화: 한강수계 사례연구)

  • Ryu, Gwan-Hyeong;Chung, Gun-Hui;Lee, Jung-Ho;Kim, Joong-Hoon
    • Journal of Korea Water Resources Association / v.42 no.8 / pp.643-657 / 2009
  • The major reason to construct large dams is to store surplus water during rainy seasons and utilize it for water supply in dry seasons. Reservoir storage has to meet a pre-defined target to satisfy water demands and cope with a dry season when the availability of water resources is limited temporally as well as spatially. In this study, a Hedging rule that reduces total reservoir outflow as a drought starts is applied to alleviate severe water shortages. Five stages of outflow reduction based on the current reservoir storage are proposed as the Hedging rule. The objective function minimizes the total discrepancies between the target and actual reservoir storage, between water supply and demand, and between the required minimum river discharge and actual river flow. Mixed Integer Linear Programming (MILP) is used to develop a multi-reservoir operation system with the Hedging rule. The developed system is applied to the Han River basin, which includes four multi-purpose dams and one water-supply reservoir; one of the four dams is primarily for power generation. Ten-day runoff from subbasins, water demand in 2003, and the planned water supply from the reservoirs are taken from the "Long Term Comprehensive Plan for Water Resources in Korea" and the "Practical Handbook of Dam Operation in Korea", respectively. The model was optimized by GAMS/CPLEX, an LP/MIP solver using a branch-and-cut algorithm. As a result, 99.99% of municipal demand, 99.91% of agricultural demand, and 100.00% of minimum river discharge were satisfied, and at the same time, dam storage increased by 10.04% in terms of storage efficiency compared with the actual operation data in 2003.
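
A staged hedging rule of the kind described, with outflow reduced in five stages keyed to current storage, can be sketched as follows; the stage boundaries and reduction factors here are illustrative placeholders, not the paper's calibrated values.

```python
# Minimal five-stage hedging rule sketch: the fraction of demand released
# drops as the current storage falls through stages (all numbers illustrative).
def hedged_release(demand, storage, capacity):
    ratio = storage / capacity
    if ratio >= 0.8:
        factor = 1.0   # stage 1: full supply
    elif ratio >= 0.6:
        factor = 0.9   # stage 2
    elif ratio >= 0.4:
        factor = 0.8   # stage 3
    elif ratio >= 0.2:
        factor = 0.7   # stage 4
    else:
        factor = 0.6   # stage 5: heaviest rationing
    return demand * factor

print(hedged_release(100.0, 30.0, 100.0))  # 30% storage -> 70.0 released
```

In the paper's formulation, the stage thresholds and factors are not hand-picked like this but come out of the MILP, with binary variables selecting the active stage.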

Uni-directional 8X8 Intra Prediction for H.264 Coding Efficiency (H.264에서 성능향상을 위한 Uni-directional 8X8 인트라 예측)

  • Kook, Seung-Ryong;Park, Gwang-Hoon;Lee, Yoon-Jin;Sim, Dong-Gyu;Jung, Kwang-Soo;Choi, Hae-Chul;Choi, Jin-Soo;Lim, Sung-Chang
    • Journal of Broadcast Engineering / v.14 no.5 / pp.589-600 / 2009
  • Anticipating the trend toward ultra high definition (UHD) video, this paper contributes to improving the performance of the latest H.264 standard through a uni-directional 8×8 intra-prediction scheme based on enhanced intra-prediction compression. The uni-directional 8×8 intra prediction builds an 8×8 block prediction from 4×4 block-based predictions that all use the same prediction direction. Experiments show that the uni-directional 8×8 intra prediction achieves a BDBR gain of about 7.3% when only the 8×8 block size is used, and about 1.3% when applied to H.264 with its multiple block-size structure. For larger image sizes the method becomes even more attractive, because a video codec optimized for UHD resolution can use block sizes larger than before (currently a minimum of 4×4 blocks).
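
A single-direction intra prediction of the kind the scheme builds on can be illustrated with the vertical mode, where an 8×8 block is predicted entirely from the reconstructed row of pixels directly above it. This is a generic H.264-style example, not the paper's full uni-directional scheme.

```python
# Vertical intra prediction for an 8x8 block: every row copies the
# reconstructed neighbor pixels above the block (H.264 4x4 mode 0 direction,
# applied uniformly across the whole 8x8 block).
def predict_vertical_8x8(top_row):
    assert len(top_row) == 8
    return [list(top_row) for _ in range(8)]

top = [10, 20, 30, 40, 50, 60, 70, 80]   # reconstructed pixels above the block
pred = predict_vertical_8x8(top)
print(pred[7][0], pred[0][7])  # 10 80: columns are constant top-to-bottom
```

The encoder would then subtract this prediction from the actual block and transform-code only the residual, which is where the bitrate saving comes from.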

Automatic Clustering on Trained Self-organizing Feature Maps via Graph Cuts (그래프 컷을 이용한 학습된 자기 조직화 맵의 자동 군집화)

  • Park, An-Jin;Jung, Kee-Chul
    • Journal of KIISE:Software and Applications / v.35 no.9 / pp.572-587 / 2008
  • The Self-organizing Feature Map (SOFM), one of the unsupervised neural networks, is a very powerful tool for data clustering and visualization in high-dimensional data sets. Although the SOFM has been applied to many engineering problems, similar weights on the trained SOFM still need to be clustered into one class as a post-processing step, which in many cases is performed manually. Traditional clustering algorithms, such as k-means, on the trained SOFM however do not yield satisfactory results, especially when clusters have arbitrary shapes. This paper proposes automatic clustering on the trained SOFM, which can deal with arbitrary cluster shapes and be globally optimized by graph cuts. When using graph cuts, the graph must have two additional vertices, called terminals, and the weights between the terminals and the vertices of the graph are generally set from data obtained manually from users. The proposed method sets these weights automatically based on mode-seeking on a distance matrix. Experimental results demonstrate the effectiveness of the proposed method in texture segmentation: it improved precision rates compared with previous traditional clustering algorithms, as it can deal with arbitrary cluster shapes based on graph-theoretic clustering.
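
For comparison, the conventional post-processing that the paper improves upon, running k-means over the trained SOFM weight vectors, can be sketched as follows; the 1-D weights and k=2 here are toy values, not a trained map.

```python
# Baseline post-processing sketch: k-means over trained SOFM unit weights
# to merge similar map units into clusters (toy 1-D weights, k=2).
import random

def kmeans(points, k, iters=20, seed=0):
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for p in points:
            i = min(range(k), key=lambda c: abs(p - centers[c]))
            groups[i].append(p)
        centers = [sum(g) / len(g) if g else centers[i]
                   for i, g in enumerate(groups)]
    return sorted(centers)

weights = [0.1, 0.15, 0.2, 0.9, 0.95, 1.0]   # stand-ins for SOFM unit weights
print([round(c, 2) for c in kmeans(weights, 2)])  # [0.15, 0.95]
```

This baseline only finds compact, convex clusters, which is exactly the limitation the graph-cut formulation with automatically weighted terminals is meant to remove.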

News Video Shot Boundary Detection using Singular Value Decomposition and Incremental Clustering (특이값 분해와 점증적 클러스터링을 이용한 뉴스 비디오 샷 경계 탐지)

  • Lee, Han-Sung;Im, Young-Hee;Park, Dai-Hee;Lee, Seong-Whan
    • Journal of KIISE:Software and Applications / v.36 no.2 / pp.169-177 / 2009
  • In this paper, we propose a new shot boundary detection method optimized for news video story parsing. The method was designed to satisfy all of the following requirements: 1) minimizing incorrect data in the data set for anchor shot detection by improving the recall ratio; 2) detecting abrupt cuts and gradual transitions with a single algorithm so as to divide news video into shots with one scan of the data set; and 3) classifying shots as static or dynamic, thereby reducing the search space for the subsequent stage of anchor shot detection. The proposed method, based on singular value decomposition with incremental clustering and a Mercer kernel, has additional desirable features. By applying singular value decomposition, noise and trivial variations in the video sequence are removed, so separability is improved. The Mercer kernel improves the detectability of shots that are not separable in the input space by mapping the data to a high-dimensional feature space. The experimental results illustrate the superiority of the proposed method with respect to recall and search-space reduction for anchor shot detection.
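
The incremental-clustering idea can be illustrated with a minimal 1-D sketch: a shot boundary is declared whenever the current frame feature drifts too far from the running centroid of the current shot. The features, threshold, and distance here are illustrative stand-ins for the paper's SVD-reduced, kernel-mapped feature vectors.

```python
# One-scan incremental clustering over frame features: open a new shot
# whenever a frame is too far from the current shot's running centroid.
def detect_shot_boundaries(frames, threshold=0.5):
    boundaries, centroid, count = [], None, 0
    for i, f in enumerate(frames):
        if centroid is None or abs(f - centroid) > threshold:
            if centroid is not None:
                boundaries.append(i)       # boundary just before frame i
            centroid, count = f, 1         # start a new shot cluster
        else:
            count += 1
            centroid += (f - centroid) / count   # incremental mean update
    return boundaries

# 1-D stand-ins for frame feature vectors; the jump at index 3 marks a cut.
frames = [0.1, 0.12, 0.11, 2.0, 2.05, 1.98]
print(detect_shot_boundaries(frames))  # [3]
```

Because the centroid is updated incrementally, the whole sequence is processed in a single pass, matching requirement 2) above.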

A Study on Asthmatic Occurrence Using Deep Learning Algorithm (딥러닝 알고리즘을 활용한 천식 환자 발생 예측에 대한 연구)

  • Sung, Tae-Eung
    • The Journal of the Korea Contents Association / v.20 no.7 / pp.674-682 / 2020
  • Recently, the problem of air pollution has become a global concern due to industrialization and overcrowding. Air pollution can cause various adverse effects on human health; among them, respiratory diseases such as asthma, the focus of this study, can be directly affected. Previous studies have used clinical data to identify how air pollutants affect diseases such as asthma, based on relatively small samples. This is highly likely to yield inconsistent results across sample collections, and has the significant limitation that such research is difficult for anyone outside the medical profession. In this study, the main focus was on predicting actual asthmatic occurrence based on atmospheric environment data released by the government and the frequency of asthma outbreaks. First, this study verified the significant time-lagged effects of each air pollutant on the outbreak of asthma through the time-lag Pearson correlation coefficient. Second, training data built on the basis of the verification results are fed into deep learning algorithms, and models optimized for predicting asthmatic occurrence are designed. The average error rate of the model was about 11.86%, indicating superior performance compared to other machine learning-based algorithms. The proposed model can improve efficiency in the national insurance system and health budget management, and in the deployment and supply of medical personnel in hospitals. It can also contribute to the promotion of national health through early warnings, for chronic asthma patients, of outbreak risk driven by the atmospheric environment.
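
The time-lag Pearson correlation screening described above can be sketched as follows; the pollutant and case series are invented toy data, and the 1-day lag is arbitrary.

```python
# Time-lag Pearson correlation: shift the pollutant series by `lag` days
# before correlating it with the asthma-occurrence series.
import math

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def lagged_pearson(pollutant, asthma, lag):
    # Correlate pollutant at day t with asthma occurrences at day t + lag.
    return pearson(pollutant[:len(pollutant) - lag], asthma[lag:])

pm = [30, 80, 40, 90, 35, 85, 45]          # toy daily pollutant readings
cases = [5, 12, 20, 11, 22, 10, 21]        # tracks pm with a 1-day delay
print(round(lagged_pearson(pm, cases, 1), 2))  # 0.98
```

Scanning `lag` over a range of days and keeping the lags with significant correlation is what selects the pollutant features that feed the deep learning model.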

A Case Study of Profit Optimization System Integration with Enhanced Security (관리보안이 강화된 수익성 최적화 시스템구축 사례연구)

  • Kim, Hyoung-Tae;Yoon, Ki-Chang;Yu, Seung-Hun
    • Journal of Distribution Science / v.13 no.11 / pp.123-130 / 2015
  • Purpose - Due to highly elevated levels of competition, many companies today face the problem of decreasing profits even when their actual sales volume is increasing. This is a common phenomenon among companies that focus heavily on quantitative growth rather than qualitative growth. These two aspects of growth should be well balanced for a company to create a sustainable business model. For supply chain management (SCM) planners, the optimized, quantified flow of resources was the major interest for decades. However, this trend is rapidly changing so that managers can strike the appropriate balance between sales volume and sales quality, which can be evaluated from the profit margin. Profit optimization is a methodology that companies can use to achieve solutions focused more on profitability than on sales volume. In this study, we attempt to provide executional insight for companies considering implementation of a profit optimization system to enhance their business profitability. Research design, data, and methodology - In this study, we present a comprehensive explanation of the subject of profit optimization, including the fundamental concepts, the most common profit optimization algorithm (linear programming), the business functional scope of the profit optimization system, major key success factors for implementing the profit optimization system in a business organization, and detailed weekly business processes to actively manage system performance in achieving the goals of the system. Additionally, for the purpose of providing more realistic and practical information, we carefully investigate a profit optimization system implementation case study project fulfilled for company S. The project duration was about eight months, with four full-time system development consultants deployed for the period.
To guarantee the project's success, the organization adopted a proven system implementation methodology, supply chain management (SCM) six-sigma. SCM six-sigma was originally developed by a group of talented consultants within Samsung SDS through focused efforts and investment in synthesizing SCM and six-sigma to improve and innovate SCM operations across the entire Samsung organization. Results - Profit optimization can enable a company to create sales and production plans focused on more profitable products and customers, resulting in sustainable growth. In this study, we explain the concept of profit optimization and the prerequisites for successful implementation of the system. Furthermore, the efficient administration of system security, one of the hottest topics today, is also addressed. Conclusion - This case study can benefit numerous companies that are eagerly searching for ways to break through current profitability levels. We cannot guarantee that the decision to deploy a profit optimization system will bring success, but we can guarantee that, with the help of our study, companies trying to implement profit optimization systems can minimize various possible risks across the system implementation phases. The actual implementation case of the profit optimization project at company S introduced here can provide valuable lessons for both business organizations and research communities.
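
As a closing illustration of the linear-programming core named in the methodology above, here is a toy two-product profit-maximization LP solved by enumerating the corner points of the feasible region; all margins and capacities are invented, not company S data.

```python
# Toy profit-optimization LP: maximize 40*x + 30*y subject to
# x + y <= 100 (line capacity), 2*x + y <= 160 (machine hours), x, y >= 0.
# A 2-variable LP attains its optimum at a vertex, so we check all vertices.
from itertools import combinations

constraints = [   # each row encodes a*x + b*y <= c
    (1, 1, 100),
    (2, 1, 160),
    (-1, 0, 0),   # x >= 0
    (0, -1, 0),   # y >= 0
]

def intersect(c1, c2):
    a1, b1, d1 = c1
    a2, b2, d2 = c2
    det = a1 * b2 - a2 * b1
    if det == 0:
        return None   # parallel constraint boundaries
    return ((d1 * b2 - d2 * b1) / det, (a1 * d2 - a2 * d1) / det)

def feasible(pt):
    return all(a * pt[0] + b * pt[1] <= c + 1e-9 for a, b, c in constraints)

vertices = [p for c1, c2 in combinations(constraints, 2)
            if (p := intersect(c1, c2)) and feasible(p)]
best = max(vertices, key=lambda p: 40 * p[0] + 30 * p[1])
print(best, 40 * best[0] + 30 * best[1])  # (60.0, 40.0) 3600.0
```

Production-scale profit optimization systems solve the same kind of model with commercial solvers over thousands of products and constraints; the vertex-enumeration trick here is only viable for this two-variable sketch.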