• Title/Summary/Keyword: providing performance information


Performance Test of Hypocenter Determination Methods under the Assumption of Inaccurate Velocity Models: A case of surface microseismic monitoring (부정확한 속도 모델을 가정한 진원 결정 방법의 성능평가: 지표면 미소지진 모니터링 사례)

  • Woo, Jeong-Ung;Rhie, Junkee;Kang, Tae-Seob
    • Geophysics and Geophysical Exploration
    • /
    • v.19 no.1
    • /
    • pp.1-10
    • /
    • 2016
  • The hypocenter distribution of microseismic events generated by hydraulic fracturing for shale gas development provides essential information for understanding the characteristics of the fracture network. In this study, we evaluate how inaccurate velocity models influence the inversion results of two widely used location programs, hypoellipse and hypoDD, both of which are based on iterative linearized inversion. We assume that 98 stations are densely located inside a circle with a radius of 4 km and that 5 artificial hypocenter sets (S0 ~ S4) are placed from the center of the network toward the south at 1 km intervals. Each hypocenter set contains 25 events placed on a plane. To quantify the accuracy of the inversion results, we define 6 parameters: the difference between the average hypocenters of the assumed and inverted locations, $d_1$; the ratio of the areas estimated from the assumed and inverted hypocenters, r; the difference between the dip of the reference plane and that of the best-fitting plane for the determined hypocenters, ${\theta}$; the difference between the strike of the reference plane and that of the best-fitting plane, ${\phi}$; the root-mean-square distance between the hypocenters and the best-fitting plane, $d_2$; and the root-mean-square error in the horizontal direction on the best-fitting plane, $d_3$. Synthetic travel times are calculated for a reference model with a 1D layered structure, and the inaccurate velocity models used for the inversion are constructed by perturbing the reference model with normal distributions having standard deviations of 0.1, 0.2, and 0.3 km/s, respectively. The parameters $d_1$, r, ${\theta}$, and $d_2$ show a positive correlation with the level of velocity perturbation, but the others are not sensitive to the perturbations, except for S4, which is located at the outer boundary of the network. For S0, S1, S2, and S3, hypoellipse and hypoDD provide similar results for $d_1$.
However, for the other parameters, hypoDD gives much better results, and location errors can be reduced to within several meters regardless of the level of perturbation. In light of the purpose of understanding the characteristics of hydraulic fracturing, the $1{\sigma}$ error of the velocity structure should be under 0.2 km/s for hypoellipse and 0.3 km/s for hypoDD.
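
The plane-fit parameters described above, such as $d_2$ (the RMS distance of hypocenters from the best-fitting plane), can be computed with a standard SVD plane fit. The sketch below is illustrative only; the synthetic hypocenters and noise level are invented, not taken from the paper.

```python
import numpy as np

def fit_plane(points):
    """Fit a plane to 3-D points by SVD; return (centroid, unit normal)."""
    centroid = points.mean(axis=0)
    # The right singular vector with the smallest singular value is the normal.
    _, _, vt = np.linalg.svd(points - centroid)
    return centroid, vt[-1]

def rms_plane_distance(points):
    """RMS distance of points from their best-fitting plane (the d_2 of the abstract)."""
    centroid, normal = fit_plane(points)
    d = (points - centroid) @ normal  # signed perpendicular distances
    return float(np.sqrt(np.mean(d ** 2)))

# 25 synthetic "hypocenters" on a dipping plane, plus small scatter
rng = np.random.default_rng(0)
xy = rng.uniform(-1.0, 1.0, size=(25, 2))
z = 0.3 * xy[:, 0] - 0.2 * xy[:, 1] + rng.normal(0, 0.01, 25)
pts = np.column_stack([xy, z])
d2 = rms_plane_distance(pts)
```

The fitted normal also gives the dip and strike of the best-fitting plane, from which ${\theta}$ and ${\phi}$ follow by comparison with the reference plane.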

Monitoring Ground-level SO2 Concentrations Based on a Stacking Ensemble Approach Using Satellite Data and Numerical Models (위성 자료와 수치모델 자료를 활용한 스태킹 앙상블 기반 SO2 지상농도 추정)

  • Choi, Hyunyoung;Kang, Yoojin;Im, Jungho;Shin, Minso;Park, Seohui;Kim, Sang-Min
    • Korean Journal of Remote Sensing
    • /
    • v.36 no.5_3
    • /
    • pp.1053-1066
    • /
    • 2020
  • Sulfur dioxide (SO2) is primarily released through industrial, residential, and transportation activities, and creates secondary air pollutants through chemical reactions in the atmosphere. Long-term exposure to SO2 can negatively affect the human body, causing respiratory or cardiovascular disease, which makes effective and continuous monitoring of SO2 crucial. In South Korea, SO2 has been monitored at ground stations, but this does not provide spatially continuous information on SO2 concentrations. Thus, this research estimated spatially continuous ground-level SO2 concentrations at 1 km resolution over South Korea through the synergistic use of satellite data and numerical models. A stacking ensemble approach, fusing multiple machine learning algorithms at two levels (i.e., base and meta), was adopted for ground-level SO2 estimation using data from January 2015 to April 2019. Random forest and extreme gradient boosting were used as base models, and multiple linear regression was adopted for the meta-model. The cross-validation results showed that the meta-model improved performance by 25% compared to the base models, yielding a correlation coefficient of 0.48 and a root-mean-square error of 0.0032 ppm. In addition, the temporal transferability of the approach was evaluated on one year of data that was not used in model development. The spatial distribution of ground-level SO2 concentrations based on the proposed model agreed with the general seasonality of SO2 and the temporal patterns of emission sources.
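
The two-level stacking scheme can be sketched as follows. To stay self-contained, the base learners below are simple ridge regressions on different feature subsets, stand-ins for the random forest and extreme gradient boosting models the paper actually uses; the data are synthetic.

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic stand-in for satellite / numerical-model predictors and SO2 target
X = rng.normal(size=(500, 6))
y = X @ np.array([0.5, -0.3, 0.2, 0.1, 0.0, 0.4]) + rng.normal(0, 0.1, 500)

def ridge_fit(X, y, lam=1e-3):
    # Closed-form ridge regression (stand-in for a tree-ensemble base model)
    A = X.T @ X + lam * np.eye(X.shape[1])
    return np.linalg.solve(A, X.T @ y)

# Level 1 (base): two models trained on the first half, each on a different
# feature subset; their holdout predictions become meta-model features.
half = 250
subsets = [slice(0, 3), slice(3, 6)]
meta_features = np.column_stack([
    X[half:, s] @ ridge_fit(X[:half, s], y[:half]) for s in subsets
])

# Level 2 (meta): multiple linear regression, as in the paper
design = np.column_stack([meta_features, np.ones(len(meta_features))])
w, *_ = np.linalg.lstsq(design, y[half:], rcond=None)
pred = design @ w
corr = float(np.corrcoef(pred, y[half:])[0, 1])
```

In the paper's setting, the base-level predictions would come from cross-validation folds rather than a single holdout split.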

Digital Hologram Compression Technique By Hybrid Video Coding (하이브리드 비디오 코팅에 의한 디지털 홀로그램 압축기술)

  • Seo, Young-Ho;Choi, Hyun-Jun;Kang, Hoon-Jong;Lee, Seung-Hyun;Kim, Dong-Wook
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.42 no.5 s.305
    • /
    • pp.29-40
    • /
    • 2005
  • As the base of digital holography has broadened, discussion of compression technology has been expected, since an international standard defining compression techniques for 3D images and video has been progressing in the form of 3DAV, a part of MPEG. As 3DAV illustrates, such a coding technique is likely to take a hybrid form that merges, refines, or mixes various previous techniques. We therefore present the relationship between various image/video coding techniques and digital holograms. In this paper, we propose an efficient coding method for digital holograms using standard compression tools for video and images. First, we convert fringe patterns into video data using the principle of CGH (Computer-Generated Hologram), and then encode them. The proposed compression algorithm comprises several methods: pre-processing for the transform, local segmentation with global information of the object image, a frequency transform for coding, scanning to convert the fringe into a video stream, classification of coefficients, and hybrid video coding. The proposed hybrid compression algorithm is the combination of all of these methods. The tool for still-image coding is JPEG2000, and the tools for video coding include international compression standards such as MPEG-2, MPEG-4, and H.264, as well as various lossless compression algorithms. The proposed algorithm showed better reconstruction properties than previous research at compression rates four to eight times higher. We therefore expect the proposed technique for digital hologram coding to serve as a good basis for further research.
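
The segmentation and frequency-transform stages of such a pipeline can be sketched as below. This is a toy illustration only: the fringe pattern is synthetic, and simple FFT coefficient thresholding stands in for the JPEG2000/MPEG-family tools the paper actually employs.

```python
import numpy as np

def synthetic_fringe(n=64):
    """A toy interference-fringe-like pattern (not a real CGH computation)."""
    y, x = np.mgrid[0:n, 0:n]
    return np.cos(0.3 * x + 0.002 * (x ** 2 + y ** 2))

def block_transform_code(img, block=8, keep=16):
    """Segment the image into blocks, FFT each block, keep only the `keep`
    largest-magnitude coefficients, and reconstruct (a crude lossy codec)."""
    out = np.zeros_like(img)
    for i in range(0, img.shape[0], block):
        for j in range(0, img.shape[1], block):
            coeffs = np.fft.fft2(img[i:i + block, j:j + block])
            thresh = np.sort(np.abs(coeffs).ravel())[-keep]
            coeffs[np.abs(coeffs) < thresh] = 0  # discard small coefficients
            out[i:i + block, j:j + block] = np.fft.ifft2(coeffs).real
    return out

img = synthetic_fringe()
rec = block_transform_code(img)
mse = float(np.mean((img - rec) ** 2))
```

A real hologram codec would additionally scan the retained coefficients into a stream and entropy-code them, as the abstract's pipeline describes.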

Characterization of Biomass Production and Wastewater Treatability by High-Lipid Algal Species under Municipal Wastewater Conditions (실제 하수조건에서 고지질 함량 조류자원의 생체생성과 하수처리 특성 분석)

  • Lee, Jang-Ho;Park, Joon-Hong
    • Journal of Korean Society of Environmental Engineers
    • /
    • v.32 no.4
    • /
    • pp.333-340
    • /
    • 2010
  • Wastewater treatment using algal communities and biodiesel production from wastewater-cultivated algal biomass are promising green-growth technologies. In the literature, many studies provide information on algal species that produce a high lipid content. However, very little is known about the adaptability and wastewater treatability of such high-lipid algal species. In this study, we attempted to characterize the algal biomass production and wastewater treatability of high-lipid algal species under municipal wastewater conditions. Four known high-lipid algal strains, Chlorella vulgaris AG 10032, Ankistrodesmus gracilis SAG 278-2, Scenedesmus quadricauda, and Botryococcus braunii UTEX 572, were individually inoculated into municipal wastewater from which the indigenous algal populations had been removed prior to inoculation, and the algae-inoculated wastewater was incubated under a light source (80 ${\mu}E$) for 9 days in laboratory batch reactors. During the incubations, algal biomass production (dry weight) and the removal of dissolved organics (COD), nitrogen, and phosphorus were measured. According to the growth results, C. vulgaris, A. gracilis, and S. quadricauda grew faster than the indigenous wastewater algal populations, while B. braunii did not. The wastewater-growing strains efficiently removed total N, ${NH_4}^+$-N, total P, and ${PO_4}^{3-}$-P, satisfying the Korean water quality standards for effluent from municipal wastewater treatment plants. A. gracilis and S. quadricauda showed efficient and stable COD treatability, whereas C. vulgaris was unstable. Taken together, A. gracilis and S. quadricauda were found to be suitable species for biomass production and wastewater treatment under municipal wastewater conditions.

Analysis of the Quality Control of General X-ray Systems in the Capital Region (수도권지역 일반촬영 장비의 정도관리 분석)

  • Kang, Byung-Sam;Lee, Kang-Min;Shim, Woo-Yong;Park, Soon-Chul;Choi, Hak-Dong;Cho, Yong-Kwon
    • Journal of radiological science and technology
    • /
    • v.35 no.2
    • /
    • pp.93-102
    • /
    • 2012
  • Given the rapid increase of interest in the quality control of general X-ray systems, this research proposes a direction for quality control by comparing and inspecting the actual state of quality control in clinics, educational institutions, and hospitals. The subjects of the investigation were diagnostic radiation devices in clinics, educational institutions, and hospitals in the capital region. The tests comprised kVp accuracy, mR/mAs output, reproducibility of exposure dose, half-value layer, agreement between the light field and the beam alignment, and reproducibility of exposure time. The mean percentage difference, the coefficient of variation (CV), and the attenuation curve resulting from these tests were then computed, and the values were evaluated according to the Diagnostic Radiation Equipment Safety Administration regulations. In the clinics and educational institutions there were 22 general X-ray devices, of which 18.2% failed the kVp test, 13.6% the reproducibility of exposure dose test, 9.1% the mR/mAs output test, and 13.6% the half-value layer (HVL) test. In the hospitals there were 28 devices, of which 7.1% failed the reproducibility of exposure dose test, 7.1% the light field/beam alignment test, and 7.1% the reproducibility of exposure time test. According to the investigation, quality control in the hospitals was in better condition than in the clinics and educational institutions, and quality control of the general X-ray devices in the clinics was unsatisfactory compared with the hospitals. It is therefore considered necessary to recognize the importance of quality control.
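
The coefficient of variation used in the reproducibility tests is simply the sample standard deviation divided by the mean of repeated readings. The sketch below illustrates this; the readings and the pass threshold of CV ≤ 0.05 (a commonly used exposure-reproducibility criterion) are illustrative assumptions, not values from the paper.

```python
import statistics

def coefficient_of_variation(readings):
    """CV = sample standard deviation / mean, used for reproducibility tests."""
    mean = statistics.mean(readings)
    return statistics.stdev(readings) / mean

# Five repeated exposure-dose readings (mR) at the same technique setting
doses = [102.1, 101.4, 103.0, 102.6, 101.9]
cv = coefficient_of_variation(doses)
passes = cv <= 0.05  # illustrative reproducibility criterion
```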

An Analysis of Environmental Factors of Abandoned Paddy Wetlands as References and Changes in Land Cover Types in the Influence Area (묵논습지 환경요인 및 생태영향권 내 토지피복유형 변화 분석)

  • Park, MiOk;Kwon, SoonHyo;Back, SeungJun;Seo, JooYoung;Koo, BonHak
    • Journal of Wetlands Research
    • /
    • v.24 no.4
    • /
    • pp.331-344
    • /
    • 2022
  • This study analyzed the characteristics of the soil and hydrological environment of abandoned paddy wetlands and examined changes in land cover type within their ecological impact areas. The ecological characteristics of the reference abandoned paddy wetlands were investigated through literature research, the environmental spatial information service, and preliminary field exploration, and basic data for the restoration of abandoned paddy wetlands were provided by examining changes in land cover type in the ecological impact area over 40 years. Through this study, it should become possible to manage the rapidly increasing area of abandoned farmland being converted into wetlands so that it performs functions equal to or greater than those of natural wetlands. In particular, since land cover changes provided clues that abandoned paddy wetlands can spread into the surrounding ecological impact areas, the study sites are strong candidates for reference wetlands, and if their topography, soil, water circulation system, and carbon reduction performance are analyzed carefully, it should be possible to standardize the development process. Among land cover types in the ecological impact areas, forest was dominant but generally decreased rapidly in the last 10-20 years, with coniferous forest changing to broad-leaved forest, mixed forest, or grassland.
The sites have not yet fully converted to wetland and have maintained the form of barren land or grassland; as seen in cases that became natural wetlands more than 30 years after abandonment, the transition is expected to proceed gradually toward wetlands structurally and functionally similar to natural wetlands.

Product Evaluation Criteria Extraction through Online Review Analysis: Using LDA and k-Nearest Neighbor Approach (온라인 리뷰 분석을 통한 상품 평가 기준 추출: LDA 및 k-최근접 이웃 접근법을 활용하여)

  • Lee, Ji Hyeon;Jung, Sang Hyung;Kim, Jun Ho;Min, Eun Joo;Yeo, Un Yeong;Kim, Jong Woo
    • Journal of Intelligence and Information Systems
    • /
    • v.26 no.1
    • /
    • pp.97-117
    • /
    • 2020
  • Product evaluation criteria are indicators describing the attributes or values of products, which enable users or manufacturers to measure and understand the products. When companies analyze their products or compare them with competitors, appropriate criteria must be selected for objective evaluation. The criteria should reflect the features of products that consumers considered when they purchased, used, and evaluated them. However, current evaluation criteria do not reflect differences in consumer opinion from product to product. Previous studies tried to use online reviews from e-commerce sites, which reflect consumer opinions, to extract product features and topics and use them as evaluation criteria. However, a limitation remains: they produce criteria irrelevant to the products because improperly extracted words are not refined. To overcome this limitation, this research suggests an LDA-k-NN model which extracts candidate criteria words from online reviews using LDA and refines them with the k-nearest neighbor approach. The proposed approach starts with a preparation phase consisting of 6 steps. First, review data are collected from e-commerce websites. Most e-commerce websites classify their items into high-level, middle-level, and low-level categories; review data for the preparation phase are gathered from each middle-level category and later collapsed to represent a single high-level category. Next, nouns, adjectives, adverbs, and verbs are extracted from the reviews using part-of-speech information from a morpheme analysis module. After preprocessing, the words for each topic in the reviews are obtained with LDA, and only the nouns among the topic words are chosen as potential criteria words. The words are then tagged based on their likelihood of being criteria for each middle-level category. Next, every tagged word is vectorized by a pre-trained word embedding model. Finally, a k-nearest neighbor case-based approach is used to classify each word against the tags.
After the preparation phase, the criteria extraction phase is conducted on low-level categories. This phase starts by crawling reviews in the corresponding low-level category. The same preprocessing as in the preparation phase is conducted using the morpheme analysis module and LDA. Candidate criteria words are extracted by taking the nouns from the data and vectorizing them with the pre-trained word embedding model. Finally, evaluation criteria are extracted by refining the candidate words using the k-nearest neighbor approach and the reference proportion of each word in the word set. To evaluate the performance of the proposed model, an experiment was conducted with reviews from '11st', one of the biggest e-commerce companies in Korea. Review data came from the 'Electronics/Digital' section, one of the high-level categories in 11st. For performance evaluation, three other models were compared with the suggested model: the actual criteria of 11st; a model that extracts nouns with the morpheme analysis module and refines them by word frequency; and a model that extracts nouns from LDA topics and refines them by word frequency. The evaluation was set up to predict the evaluation criteria of 10 low-level categories with the suggested model and the 3 models above. The criteria words extracted from each model were combined into a single word set, which was used for survey questionnaires. In the survey, respondents chose every item they considered an appropriate criterion for each category, and each model scored when a chosen word had been extracted by that model. The suggested model had higher scores than the other models in 8 of the 10 low-level categories. By conducting paired t-tests on the scores of each model, we confirmed that the suggested model shows better performance in 26 of 30 tests. In addition, the suggested model was the best model in terms of accuracy.
This research proposes an evaluation criteria extraction method that combines topic extraction using LDA with refinement via the k-nearest neighbor approach. This method overcomes the limits of previous dictionary-based and frequency-based refinement models. This study can contribute to improving review analysis for deriving business insights in the e-commerce market.
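
The refinement step, classifying each candidate criteria word by k-nearest neighbor over word embeddings, can be sketched as below. The 2-D "embeddings" and word tags are invented stand-ins for the pre-trained embedding vectors and the tagged preparation-phase words the paper uses.

```python
import numpy as np
from collections import Counter

def knn_tag(candidate_vec, tagged_vecs, tags, k=3):
    """Tag a candidate word by majority vote among its k nearest
    tagged words in embedding space (Euclidean distance)."""
    dists = np.linalg.norm(tagged_vecs - candidate_vec, axis=1)
    nearest = np.argsort(dists)[:k]
    return Counter(tags[i] for i in nearest).most_common(1)[0][0]

# Toy 2-D stand-ins for pre-trained word embeddings of tagged words
tagged_vecs = np.array([[0.90, 0.10], [0.80, 0.20], [0.85, 0.15],   # criteria-like
                        [0.10, 0.90], [0.20, 0.80], [0.15, 0.85]])  # non-criteria
tags = ["criteria"] * 3 + ["other"] * 3

label = knn_tag(np.array([0.88, 0.12]), tagged_vecs, tags)  # → "criteria"
```

In the actual pipeline, candidate nouns coming out of the low-level-category LDA step would be vectorized the same way and kept as criteria only when classified into the criteria class.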

Estimation of GARCH Models and Performance Analysis of Volatility Trading System using Support Vector Regression (Support Vector Regression을 이용한 GARCH 모형의 추정과 투자전략의 성과분석)

  • Kim, Sun Woong;Choi, Heung Sik
    • Journal of Intelligence and Information Systems
    • /
    • v.23 no.2
    • /
    • pp.107-122
    • /
    • 2017
  • Volatility in stock market returns is a measure of investment risk. It plays a central role in portfolio optimization, asset pricing, and risk management, as well as in most theoretical financial models. Engle (1982) presented a pioneering paper on stock market volatility that explains the time-variant characteristics embedded in stock market return volatility. His model, Autoregressive Conditional Heteroscedasticity (ARCH), was generalized by Bollerslev (1986) as the GARCH models. Empirical studies have shown that GARCH models describe well the fat-tailed return distributions and the volatility clustering phenomenon appearing in stock prices. The parameters of GARCH models are generally estimated by maximum likelihood estimation (MLE) based on the standard normal density. However, since Black Monday in 1987, stock market prices have become very complex and noisy, and recent studies have started to apply artificial intelligence approaches to estimating the GARCH parameters as a substitute for MLE. This paper presents an SVR-based GARCH process and compares it with the MLE-based GARCH process for estimating the parameters of GARCH models, which are known to forecast stock market volatility well. The kernel functions used in the SVR estimation process are linear, polynomial, and radial. We analyzed the suggested models with the KOSPI 200 Index, which is constituted of 200 blue-chip stocks listed on the Korea Exchange. We sampled KOSPI 200 daily closing values from 2010 to 2015, giving 1,487 observations; 1,187 days were used to train the suggested GARCH models and the remaining 300 days were used as testing data. First, symmetric and asymmetric GARCH models were estimated by MLE. We forecasted KOSPI 200 Index return volatility, and the MSE metric shows better results for asymmetric GARCH models such as E-GARCH or GJR-GARCH. This is consistent with the documented non-normal return distribution characteristics of fat tails and leptokurtosis.
Compared with the MLE estimation process, SVR-based GARCH models outperform the MLE methodology in forecasting KOSPI 200 Index return volatility, although the polynomial kernel function shows exceptionally low forecasting accuracy. We suggest an Intelligent Volatility Trading System (IVTS) that utilizes the forecasted volatility. The IVTS entry rules are as follows: if tomorrow's forecasted volatility increases, buy volatility today; if it decreases, sell volatility today; if the forecasted direction does not change, hold the existing buy or sell position. IVTS is assumed to buy and sell historical volatility values, which is somewhat unrealistic because historical volatility itself cannot be traded; however, our simulation results are meaningful, since the Korea Exchange introduced a volatility futures contract in November 2014. The trading systems with SVR-based GARCH models show higher returns than the MLE-based GARCH systems in the testing period. The profitable-trade percentages of the MLE-based GARCH IVTS models range from 47.5% to 50.0%, while those of the SVR-based GARCH IVTS models range from 51.8% to 59.7%. MLE-based symmetric S-GARCH shows a +150.2% return versus +526.4% for SVR-based symmetric S-GARCH; MLE-based asymmetric E-GARCH shows a -72% return versus +245.6% for SVR-based asymmetric E-GARCH; and MLE-based asymmetric GJR-GARCH shows a -98.7% return versus +126.3% for SVR-based asymmetric GJR-GARCH. The linear kernel function yields higher trading returns than the radial kernel function. The best performance of the SVR-based IVTS is +526.4%, compared with +150.2% for the MLE-based IVTS; the SVR-based GARCH IVTS also shows higher trading frequency. This study has some limitations. Our models are based solely on SVR, and other artificial intelligence models should be explored in search of better performance. We also do not consider costs incurred in the trading process, including brokerage commissions and slippage.
The IVTS trading performance is unrealistic in that we use historical volatility values as trading objects. Accurate forecasting of stock market volatility is essential in real trading as well as in asset pricing models. Further studies on other machine-learning-based GARCH models can give better information to stock market investors.
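
A minimal sketch of the GARCH(1,1) conditional-variance recursion and the IVTS entry rule described above; the parameter values and the simulated return series are illustrative, not the paper's estimates or data.

```python
import numpy as np

def garch11_variance(returns, omega=1e-6, alpha=0.08, beta=0.9):
    """GARCH(1,1) conditional-variance recursion:
    sigma2[t+1] = omega + alpha * r[t]^2 + beta * sigma2[t]."""
    sigma2 = np.empty(len(returns) + 1)
    sigma2[0] = np.var(returns)  # initialize with the unconditional variance
    for t, r in enumerate(returns):
        sigma2[t + 1] = omega + alpha * r ** 2 + beta * sigma2[t]
    return sigma2

def ivts_positions(forecast_vol):
    """IVTS entry rule: buy volatility (+1) when the forecast rises,
    sell (-1) when it falls, hold the prior position otherwise."""
    pos, positions = 0, []
    for prev, nxt in zip(forecast_vol[:-1], forecast_vol[1:]):
        if nxt > prev:
            pos = 1
        elif nxt < prev:
            pos = -1
        positions.append(pos)
    return positions

rng = np.random.default_rng(1)
rets = rng.normal(0, 0.01, 300)  # stand-in for KOSPI 200 daily returns
vol = np.sqrt(garch11_variance(rets))
positions = ivts_positions(vol)
```

In the paper, the (omega, alpha, beta) parameters are what MLE and SVR each estimate; the trading rule itself is the same in both cases.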

Assessment of Bone Metastasis using Nuclear Medicine Imaging in Breast Cancer : Comparison between PET/CT and Bone Scan (유방암 환자에서 골전이에 대한 핵의학적 평가)

  • Cho, Dae-Hyoun;Ahn, Byeong-Cheol;Kang, Sung-Min;Seo, Ji-Hyoung;Bae, Jin-Ho;Lee, Sang-Woo;Jeong, Jin-Hyang;Yoo, Jeong-Soo;Park, Ho-Young;Lee, Jae-Tae
    • Nuclear Medicine and Molecular Imaging
    • /
    • v.41 no.1
    • /
    • pp.30-41
    • /
    • 2007
  • Purpose: Bone metastasis in breast cancer patients is usually assessed by the conventional Tc-99m methylene diphosphonate whole-body bone scan, which has high sensitivity but poor specificity. Positron emission tomography with $^{18}F-2-deoxyglucose$ (FDG-PET), however, can offer superior spatial resolution and improved specificity, and FDG-PET/CT can offer more information for assessing bone metastasis than PET alone by adding the anatomical information of the non-enhanced CT image. We attempted to evaluate the usefulness of FDG-PET/CT for detecting bone metastasis in breast cancer and to compare the FDG-PET/CT results with bone scan findings. Materials and Methods: The study group comprised 157 female patients (range: $28{\sim}78$ years old, $mean{\pm}SD=49.5{\pm}8.5$) with biopsy-proven breast cancer who underwent bone scan and FDG-PET/CT within a 1-week interval. The final diagnosis of bone metastasis was established by histopathological findings, radiological correlation, or clinical follow-up. Bone scans were acquired 4 hours after administration of 740 MBq of Tc-99m MDP and interpreted as normal or as low, intermediate, or high probability for osseous metastasis. FDG PET/CT was performed after 6 hours of fasting, with 370 MBq of F-18 FDG administered intravenously 1 hour before imaging. PET data were obtained in 3D mode, and the CT data, used for transmission correction, were acquired during shallow respiration. PET images were evaluated by visual interpretation, and FDG accumulation in bone lesions was quantified by maximal SUV (SUVmax) and relative SUV (SUVrel). Results: Six patients (4.4%) had metastatic bone lesions. Four (66.6%) of the 6 patients with osseous metastasis were detected by bone scan, and all 6 (100%) were detected by PET/CT. A total of 135 bone lesions found on either FDG-PET or bone scan consisted of 108 osseous metastatic lesions and 27 benign bone lesions.
Osseous metastatic lesions had higher SUVmax and SUVrel than benign bone lesions ($4.79{\pm}3.32$ vs $1.45{\pm}0.44$, p=0.000; $3.08{\pm}2.85$ vs $0.30{\pm}0.43$, p=0.000). Among the 108 osseous metastatic lesions, 76 showed abnormal uptake on bone scan, and 76 showed increased FDG uptake on PET/CT. There was good agreement between FDG uptake and abnormal bone scan findings (Kendall tau-b: 0.689, p=0.000). Lesions with increased bone tracer uptake had higher SUVmax and SUVrel than lesions without abnormal bone scan findings ($6.03{\pm}3.12$ vs $1.09{\pm}1.49$, p=0.000; $4.76{\pm}3.31$ vs $1.29{\pm}0.92$, p=0.000). The order of frequency of osseous metastatic sites was vertebra, pelvis, rib, skull, sternum, scapula, femur, clavicle, and humerus. Skull metastases had the highest SUVmax, and rib metastases the highest SUVrel; osteosclerotic metastatic lesions had the lowest SUVmax and SUVrel. Conclusion: These results suggest that FDG-PET/CT is more sensitive for detecting osseous metastasis in breast cancer patients. The CT scan must be reviewed cautiously with a bone window, because osteosclerotic metastatic lesions frequently showed no abnormal FDG accumulation.