• Title/Summary/Keyword: Uncertainty Analysis (불확실도 분석)

Search Results: 2,393

Monitoring of Atmospheric Aerosol using GMS-5 Satellite Remote Sensing Data (GMS-5 인공위성 원격탐사 자료를 이용한 대기 에어러솔 모니터링)

  • Lee, Kwon Ho;Kim, Jeong Eun;Kim, Young Jun;Suh, Aesuk;Ahn, Myung Hwan
    • Journal of the Korean Association of Geographic Information Studies / v.5 no.2 / pp.1-15 / 2002
  • Atmospheric aerosols interact with sunlight and affect the global radiation balance, which can cause climate change through direct and indirect radiative forcing. Because of the spatial and temporal uncertainty of aerosols in the atmosphere, aerosol characteristics are not well represented in GCMs (General Circulation Models). It is therefore important that the physical and optical characteristics of atmospheric aerosols be evaluated to assess their climatic and radiative effects. In this study, GMS-5 satellite data and surface measurement data were analyzed using a radiative transfer model for the Yellow Sand event of April 7~8, 2000 in order to investigate the atmospheric radiative effects of Yellow Sand aerosols. MODTRAN3 simulation results reveal the relation between satellite channel albedo and aerosol optical thickness (AOT), and from this relation AOT was retrieved from the GMS-5 visible channel. Variations observed across the satellite images enable remote sensing of the Yellow Sand particles. Back-trajectory analysis was performed to track the air mass from the Gobi Desert as it passed over the Korean Peninsula, where high AOT values were measured by ground-based instruments. Comparison of GMS-5 AOT with ground-measured RSR aerosol optical depth (AOD) shows that, for Yellow Sand aerosols, the albedo measured over ocean surfaces can be used to obtain the aerosol optical thickness with an appropriate aerosol model within an error of about 10%. In addition, LIDAR network measurements and a backward trajectory model showed the characteristics and behavior of Yellow Sand during the events. These data will provide good support for monitoring Yellow Sand aerosols.

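The retrieval step described in this abstract, inverting satellite channel albedo to AOT through radiative-transfer simulations, is essentially a lookup-table inversion. Below is a minimal sketch of that idea, assuming a hypothetical table of MODTRAN-style (AOT, albedo) pairs; the grid values and function name are illustrative, not the authors' actual pipeline.

```python
import numpy as np

# Hypothetical lookup table from radiative-transfer runs (e.g., MODTRAN):
# simulated top-of-atmosphere albedo for a set of aerosol optical thicknesses.
aot_grid = np.array([0.0, 0.2, 0.5, 1.0, 1.5, 2.0])           # illustrative AOTs
albedo_grid = np.array([0.05, 0.08, 0.12, 0.18, 0.23, 0.27])  # must be monotonic

def retrieve_aot(observed_albedo):
    """Invert an observed visible-channel albedo to AOT by interpolating
    the simulated albedo-AOT relation (simple 1-D lookup-table inversion)."""
    return np.interp(observed_albedo, albedo_grid, aot_grid)

print(retrieve_aot(0.15))  # AOT estimate for an observed albedo of 0.15
```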

Analyzing the characteristics of mathematics achievement in Korea through linking NAEA and PISA (국가수준 학업성취도 평가와 국제 학업성취도 평가의 연계를 통한 우리나라 학생들의 수학 성취 특성 분석)

  • Rim, Hae-Mee;Kim, Su-Jin;Kim, Kyung-Hee
    • Journal of Educational Research in Mathematics / v.22 no.1 / pp.1-22 / 2012
  • The purpose of this study is to understand Korean students' characteristics and to provide information for improving our education through a comparative analysis of the frameworks, test booklets, and test results of PISA 2009 and NAEA 2009. PISA 2009 was administered in May 2009 and NAEA 2009 in October of the same year. The results of comparing the two assessments are summarized as follows. First, the cut score of the NAEA Advanced level is higher than that of PISA level 5, which is considered a high achievement level, and the cut score of the NAEA Basic level is likewise higher than that of PISA level 2, which is considered a basic achievement level. This suggests that the NAEA achievement levels are set somewhat higher than the PISA achievement levels in the mathematics domain. Second, the percentage of female students at the higher levels was greater than that of male students, and in suburban areas the percentage of high-level students was small while the percentage of low-level students was large. Third, students at the NAEA Advanced level are concentrated in PISA levels 4~6, the Proficient level in PISA levels 3~5, the Basic level in PISA levels 2~4, and the Below Basic level in PISA levels below 1 through 3. Fourth, the correlation between NAEA 2009 and PISA 2009 achievement scores is significantly positive, although the correlations between subscales were low. Fifth, in the analysis of non-equivalent groups, 11 items in the 'change and relationship', 'uncertainty', and 'connection cluster' domains were found to differ significantly, with very large differences in the percentage of correct answers. These results present implications for the mathematics curriculum, teaching and learning methods, and the National Assessment of Educational Achievement.

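The linking analysis in this abstract rests on correlating paired scores from the two assessments. A minimal sketch of that computation follows, with made-up toy scores standing in for the (unavailable) NAEA and PISA data.

```python
import numpy as np
from scipy.stats import pearsonr

# Toy paired scores for students who took both assessments (illustrative only).
naea = np.array([212, 245, 198, 260, 230, 275, 205, 250])
pisa = np.array([480, 545, 430, 590, 510, 640, 455, 560])

r, p_value = pearsonr(naea, pisa)
print(f"Pearson r = {r:.3f} (p = {p_value:.4f})")  # expect a strongly positive r
```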

Diagnosis of Pigs Producing PSE Meat using DNA Analysis (DNA검사기법을 이용한 PSE 돈육 생산 돼지 진단)

  • Chung Eui-Ryong;Chung Ku-Young
    • Food Science of Animal Resources / v.24 no.4 / pp.349-354 / 2004
  • Stress-susceptible pigs are known to suffer from porcine stress syndrome (PSS). Swine PSS, also known as malignant hyperthermia (MH), is characterized by sudden death and the production of poor-quality meat, such as PSE (pale, soft and exudative) meat, after slaughter. PSS and PSE meat cause major economic losses in the pig industry. A point mutation in the gene coding for the ryanodine receptor (RYR1) in porcine skeletal muscle, also known as the calcium (Ca$^{2+}$) release channel, has been associated with swine PSS and halothane sensitivity. We used the PCR-RFLP (restriction fragment length polymorphism) and PCR-SSCP (single strand conformation polymorphism) methods to detect the PSS mutation (C1843T) in the RYR1 gene and to estimate the genotype frequencies of the PSS gene in Korean pig breed populations. In the PCR-RFLP and SSCP analyses, three genotypes, homozygous normal (N/N), heterozygous carrier (N/n), and homozygous recessive mutant (n/n), were detected using agarose or polyacrylamide gel electrophoresis, respectively. The proportions of normal, carrier, and PSS pigs were 57.1, 35.7 and 7.1% for Landrace; 82.5, 15.8 and 1.7% for Yorkshire; 95.2, 4.8 and 0.0% for Duroc; and 72.0, 22.7 and 5.3% for crossbreeds. Consequently, DNA-based diagnosis is a powerful technique for identifying stress-susceptible PSS pigs and pigs producing PSE meat. In particular, the PCR-SSCP method may be useful as a rapid, sensitive, and inexpensive test for large-scale screening of PSS genotypes and pigs with PSE meat in the pork industry.
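
The genotype proportions reported above imply allele frequencies obtainable by simple counting. The sketch below uses the Landrace figures from the abstract; only those proportions come from the source, the rest is routine arithmetic.

```python
# Estimate allele frequencies from genotype proportions (Landrace figures above).
p_NN, p_Nn, p_nn = 0.571, 0.357, 0.071  # N/N, N/n, n/n proportions

# Frequency of the mutant allele n: all alleles in n/n plus half of those in N/n.
freq_n = p_nn + p_Nn / 2
freq_N = 1 - freq_n
print(f"freq(N) = {freq_N:.3f}, freq(n) = {freq_n:.3f}")  # ~0.750 / ~0.250
```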

Water Balance Projection Using Climate Change Scenarios in the Korean Peninsula (기후변화 시나리오를 활용한 미래 한반도 물수급 전망)

  • Kim, Cho-Rong;Kim, Young-Oh;Seo, Seung Beom;Choi, Su-Woong
    • Journal of Korea Water Resources Association / v.46 no.8 / pp.807-819 / 2013
  • This study proposes a new methodology for projecting the future water balance under climate change by assigning a weight to each scenario, instead of feeding GCM-based future streamflows directly into a water balance model. The k-nearest neighbor algorithm was employed to assign the weights, and streamflow in the non-flood period (October to the following June) was selected as the weighting criterion. GCM-driven precipitation was input to the TANK model to simulate future streamflow scenarios, and quantile mapping was applied to correct the bias between the GCM hindcast and historical data. Based on these bias-corrected streamflows, different weights were assigned to each streamflow scenario to calculate water shortage for the projection periods: 2020s (2010~2039), 2050s (2040~2069), and 2080s (2070~2099). Applying the proposed methodology to the Korean Peninsula, the average water shortage for the 2020s is projected to increase by 10~32% compared with the baseline period (1967~2003). In addition, as non-flood-period streamflows gradually decrease toward the 2080s, the average water shortage for the 2080s is projected to increase by up to 97% (516.5 million $m^3/yr$) compared with the baseline. While existing research on climate change projects a radical increase in future water shortage, the results of the weighting method show a more conservative change. This study demonstrates the applicability of water balance projection under climate change while keeping the existing framework of national water resources planning, which lessens the confusion for decision-makers in the water sector.
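
Quantile mapping, the bias-correction step named above, matches the empirical distribution of the GCM hindcast to that of the observations. A minimal empirical version is sketched below under the assumption of simple rank-based CDF matching; the arrays are synthetic placeholders for real hindcast, observed, and projected series.

```python
import numpy as np

def quantile_map(projected, hindcast, observed):
    """Empirical quantile mapping: find each projected value's quantile in the
    hindcast distribution, then read off the observed value at that quantile."""
    hind_sorted = np.sort(hindcast)
    # Quantile (CDF value) of each projected value within the hindcast sample
    cdf = np.searchsorted(hind_sorted, projected, side="right") / len(hind_sorted)
    # Map those quantiles onto the observed distribution
    return np.quantile(observed, np.clip(cdf, 0.0, 1.0))

# Placeholder data: monthly precipitation-like values (illustrative only)
rng = np.random.default_rng(0)
observed = rng.gamma(2.0, 50.0, 400)
hindcast = rng.gamma(2.0, 40.0, 400)   # biased low relative to observations
projected = rng.gamma(2.0, 40.0, 120)

corrected = quantile_map(projected, hindcast, observed)
print(corrected.mean(), projected.mean())  # corrected mean shifts toward observed
```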

A Study on the Field Data Applicability of Seismic Data Processing using Open-source Software (Madagascar) (오픈-소스 자료처리 기술개발 소프트웨어(Madagascar)를 이용한 탄성파 현장자료 전산처리 적용성 연구)

  • Son, Woohyun;Kim, Byoung-yeop
    • Geophysics and Geophysical Exploration / v.21 no.3 / pp.171-182 / 2018
  • We performed seismic field data processing using open-source software (Madagascar) to verify whether it is applicable to field data, which have a low signal-to-noise ratio and high uncertainties in velocities. Madagascar, whose workflows are Python-based, is generally considered well suited to the development of processing technologies owing to its capabilities for multidimensional data analysis and reproducibility. However, this open-source software has not been widely used for field data processing because of its complicated interfaces and data structure system. To verify the effectiveness of Madagascar on field data, we applied it to a typical seismic data processing flow including data loading, geometry build-up, F-K filtering, predictive deconvolution, velocity analysis, normal moveout correction, stacking, and migration. The field data for the test were acquired in the Gunsan Basin, Yellow Sea, using a streamer consisting of 480 channels and 4 air-gun arrays. The results at all processing steps were compared with those obtained with Landmark's ProMAX (SeisSpace R5000), a commercial processing package. Madagascar shows relatively high efficiency in data I/O and management as well as reproducibility, and it performs quick and exact calculations in automated procedures such as stacking velocity analysis. There were no remarkable differences in the results after applying the signal enhancement flows of the two packages. For the deeper part of the subsurface image, however, the commercial software shows better results, simply because it provides various de-multiple flows and interactive processing environments for delicate processing work. Considering that many researchers around the world are developing data processing algorithms for Madagascar, we can expect open-source software such as Madagascar to be widely used for commercial-level processing, with the strengths of expandability, cost effectiveness, and reproducibility.
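
Madagascar workflows are written as Python SConstruct scripts via its rsf.proj layer. The fragment below is a minimal sketch of part of the flow described above (band-pass filtering, NMO, and stack); the file names and filter band are hypothetical, and it assumes Madagascar is installed and that an RSF-format CMP gather and a picked velocity field already exist.

```python
# Minimal Madagascar SConstruct sketch (hypothetical file names; assumes
# 'cmps.rsf' gathers and a picked velocity field 'vel.rsf' are on disk).
from rsf.proj import *

# Band-pass filter, standing in for the signal-enhancement stage in the paper
Flow('filt', 'cmps', 'bandpass flo=5 fhi=60')

# Normal moveout correction using the picked velocities, then stack over offset
Flow('nmod', 'filt vel', 'nmo velocity=${SOURCES[1]}')
Flow('stack', 'nmod', 'stack')

# Display the stacked section
Result('stack', 'grey title="Stacked section"')

End()
```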

An Analysis of IT Trends Using Tweet Data (트윗 데이터를 활용한 IT 트렌드 분석)

  • Yi, Jin Baek;Lee, Choong Kwon;Cha, Kyung Jin
    • Journal of Intelligence and Information Systems / v.21 no.1 / pp.143-159 / 2015
  • Predicting IT trends has long been an important subject for information systems research. IT trend prediction makes it possible to recognize emerging areas of innovation and to allocate budgets in preparation for rapidly changing technological trends. Toward the end of each year, various domestic and global organizations predict and announce IT trends for the following year. For example, Gartner predicts the top 10 IT trends for the next year, and these predictions affect IT and industry leaders and organizations' basic assumptions about technology and the future of IT, yet the accuracy of such reports is difficult to verify. Social media data can be a useful tool for verifying that accuracy. As social media services have gained popularity, they are used in a variety of ways, from posting about daily life to keeping up to date with news and trends. In recent years, social media activity in Korea has reached unprecedented levels: hundreds of millions of users participate in online social networks and share their opinions and thoughts with colleagues and friends. In particular, Twitter is currently the major microblogging service; its key function, the 'tweet', lets users report their current thoughts and actions, comment on news, and engage in discussions. We chose tweet data for the analysis of IT trends because Twitter not only produces massive unstructured textual data in real time but also serves as an influential channel for opinion leading on technology. Previous studies found that tweet data provide useful information and detect societal trends effectively, and that Twitter can track issues faster than other media such as newspapers. Therefore, this study investigates how frequently the predicted IT trends for the following year, announced by public organizations, are mentioned on social network services such as Twitter. IT trend predictions for 2013, announced near the end of 2012 by two domestic organizations, the National IT Industry Promotion Agency (NIPA) and the National Information Society Agency (NIA), were used as the basis for this research. The present study analyzes Twitter data generated in Seoul, Korea, against the predictions of the two organizations. Twitter data analysis requires various natural language processing techniques, including stop-word removal and noun extraction, to handle unrefined forms of unstructured data. To overcome these challenges, we used SAS IRS (Information Retrieval Studio), developed by SAS to capture trends while processing large streaming Twitter datasets in real time. The system offers a framework for crawling, normalizing, analyzing, indexing, and searching tweet data. As a result, we crawled the Twitter sphere in the Seoul area and obtained 21,589 tweets from 2013 to review how frequently the IT trend topics announced by the two organizations were mentioned. The results show that most IT trends predicted by NIPA and NIA were frequently mentioned on Twitter, except for topics such as 'new types of security threat', 'green IT', and 'next generation semiconductor'; these topics are not generalized compound words, so they may appear on Twitter in other forms.
To answer whether IT trend tweets from Korea are related to the following year's IT trends in the real world, we compared Twitter's trending topics with those in Nara Market, Korea's nationwide web-based e-procurement system, which handles the whole procurement process of all public organizations in Korea. The correlation analysis shows that tweet frequencies on the IT trend topics predicted by NIPA and NIA are significantly correlated with the frequencies of IT topics mentioned in project announcements by Nara Market in 2012 and 2013. The main contributions of our research are as follows: i) the IT topic predictions announced by NIPA and NIA can provide an effective guideline for IT professionals and researchers in Korea looking for verified IT topic trends for the following year; ii) researchers can use Twitter to obtain useful ideas for detecting and predicting dynamic trends in technological and social issues.
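
The core measurement in this study is counting how often each predicted trend keyword appears in a tweet corpus and correlating those counts with another frequency series. A minimal sketch follows; the keywords, tweets, and procurement counts are toy stand-ins, not the NIPA/NIA lists or the crawled corpus.

```python
from scipy.stats import pearsonr

# Toy stand-ins for predicted trend keywords and a crawled tweet corpus.
keywords = ["big data", "cloud", "security", "green it"]
tweets = [
    "big data platforms are everywhere now",
    "moving our services to the cloud",
    "cloud security is the real challenge",
    "big data meets cloud computing",
]

# Count tweets mentioning each keyword (simple substring match for brevity).
tweet_counts = [sum(kw in t.lower() for t in tweets) for kw in keywords]

# Hypothetical counts of the same topics in procurement announcements.
procurement_counts = [9, 12, 7, 1]

r, p = pearsonr(tweet_counts, procurement_counts)
print(tweet_counts, f"Pearson r = {r:.2f}")
```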

The Study on the Effect of Target Volume in DQA based on MLC log file (MLC 로그 파일 기반 DQA에서 타깃 용적에 따른 영향 연구)

  • Shin, Dong Jin;Jung, Dong Min;Cho, Kang Chul;Kim, Ji Hoon;Yoon, Jong Won;Cho, Jeong Hee
    • The Journal of Korean Society for Radiation Therapy / v.32 / pp.53-59 / 2020
  • Purpose: The purpose of this study is to compare and analyze the differences between MLC log file-based software (Mobius) and the conventional phantom-ionization chamber (ArcCheck) dose verification method according to changes in target volume. Materials and methods: Twelve plans were created with sphere-shaped targets of radius 0.25, 0.5, 1, 2, 3, 4, 5, 6, 7, 8, 9, and 10 cm, and dose verification using Mobius and ArcCheck was conducted three times for each. The irradiated data were compared and analyzed using the point dose error and the gamma passing rate (3%/3 mm) as evaluation indicators. Results: The Mobius point dose error was -9.87% at a radius of 0.25 cm and -4.39% at 0.5 cm, and the error was within 3% for the remaining target volumes. The gamma passing rate was 95% at a radius of 9 cm and 93.9% at 10 cm, with rates above 95% for the remaining target volumes. With ArcCheck, the average point dose error was about 2% for all target volumes, and the gamma passing rate was 98% or more in all cases. Conclusion: For small targets with a radius of 0.5 cm or less, or large targets with a radius of 9 cm or more, considering the uncertainty of MLC log file-based DQA, phantom-ionization chamber DQA should be used in a complementary way, and it is desirable to verify dose delivery through a comprehensive analysis including point dose, gamma index, DVH, and target coverage.
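
The gamma passing rate used above combines a dose-difference tolerance (3%) with a distance-to-agreement tolerance (3 mm). Below is a simplified 1D global-gamma sketch of the idea; clinical systems evaluate 2D/3D dose grids with interpolation, so this is illustrative only.

```python
import numpy as np

def gamma_1d(ref_dose, eval_dose, positions, dose_tol=0.03, dist_tol=3.0):
    """Simplified 1D global gamma index (3%/3 mm by default).
    ref_dose, eval_dose: dose profiles on the same positions grid (mm)."""
    max_ref = ref_dose.max()  # global normalization
    gamma = np.empty_like(ref_dose)
    for i in range(len(positions)):
        dd = (eval_dose - ref_dose[i]) / (dose_tol * max_ref)  # dose term
        dta = (positions - positions[i]) / dist_tol            # distance term
        gamma[i] = np.sqrt(dd**2 + dta**2).min()
    return gamma

# Toy profiles: evaluated dose shifted 1 mm relative to the reference
x = np.linspace(-20, 20, 81)                # positions in mm
ref = np.exp(-(x / 10.0) ** 2)
ev = np.exp(-((x - 1.0) / 10.0) ** 2)
g = gamma_1d(ref, ev, x)
print(f"passing rate: {100 * (g <= 1).mean():.1f}%")  # fraction with gamma <= 1
```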

Comparison of Models for Stock Price Prediction Based on Keyword Search Volume According to the Social Acceptance of Artificial Intelligence (인공지능의 사회적 수용도에 따른 키워드 검색량 기반 주가예측모형 비교연구)

  • Cho, Yujung;Sohn, Kwonsang;Kwon, Ohbyung
    • Journal of Intelligence and Information Systems / v.27 no.1 / pp.103-128 / 2021
  • Recently, investor interest and the dissemination of stock-related information have been considered significant factors explaining stock returns and trading volume. In addition, for companies that develop, distribute, or utilize innovative new technologies such as artificial intelligence, it is difficult to predict future stock returns and volatility accurately because of macro-environmental and market uncertainty. Market uncertainty is recognized as an obstacle to the activation and spread of artificial intelligence technology, so research is needed to mitigate it. The purpose of this study is therefore to propose a machine learning model that predicts the volatility of a company's stock price using the internet search volume of artificial intelligence-related technology keywords as a measure of investor interest. To this end, we use VAR (Vector Auto Regression) and the deep neural network LSTM (Long Short-Term Memory) for stock market prediction, and compare prediction performance based on keyword search volume across the stages of the technology's social acceptance. We also analyze sub-technologies of artificial intelligence to examine how the search volume of detailed technology keywords changes with the acceptance stage and how interest in specific technologies affects the stock market forecast. For this purpose, the terms artificial intelligence, deep learning, and machine learning were selected as keywords, and we investigated how often each keyword appeared per week in online documents over the five years from January 1, 2015 to December 31, 2019. The stock price and transaction volume data of KOSDAQ-listed companies were also collected and used for the analysis. As a result, we found that the keyword search volume for artificial intelligence technology increased as its social acceptance increased. In particular, starting from the AlphaGo shock, the search volume for artificial intelligence itself and for detailed technologies such as machine learning and deep learning increased. Stock price prediction based on keyword search volume showed high accuracy, and the acceptance stage yielding the best prediction performance differed for each keyword. Across the social acceptance stages classified in this study, prediction accuracy was highest in the awareness stage, and it varied according to the keywords used in the prediction model for each stage. Therefore, when constructing a stock price prediction model using technology keywords, it is necessary to consider the social acceptance of the technology and the classification of sub-technologies. These results provide the following implications. First, to predict the return on investment in companies based on innovative technology, it is most important to capture the recognition stage, in which public interest rapidly increases, in the social acceptance of the technology.
Second, the fact that keyword search volume and prediction accuracy vary with the social acceptance of a technology should be considered when developing decision support systems for investment, such as the big data-based robo-advisors recently introduced in the financial sector.
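
One of the two models named above, VAR, jointly models search volume and stock series as a multivariate autoregression. A minimal sketch with statsmodels follows; the two series are synthetic placeholders for weekly keyword search volume and stock volatility, not the paper's data.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.api import VAR

# Synthetic weekly series standing in for keyword search volume and volatility.
rng = np.random.default_rng(42)
n = 260  # five years of weeks
search = np.cumsum(rng.normal(0, 1, n)) + 50
volatility = 0.3 * np.roll(search, 2) + rng.normal(0, 2, n)  # lagged dependence

df = pd.DataFrame({"search_volume": search, "volatility": volatility})

model = VAR(df)
result = model.fit(maxlags=4, ic="aic")  # choose lag order by AIC
# Forecast 4 weeks ahead from the last k_ar observations
forecast = result.forecast(df.values[-result.k_ar:], steps=4)
print(forecast)
```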

Application of deep learning method for decision making support of dam release operation (댐 방류 의사결정지원을 위한 딥러닝 기법의 적용성 평가)

  • Jung, Sungho;Le, Xuan Hien;Kim, Yeonsu;Choi, Hyungu;Lee, Giha
    • Journal of Korea Water Resources Association / v.54 no.spc1 / pp.1095-1105 / 2021
  • Improved dam operation is increasingly required in view of rainy seasons, typhoons, and torrential rains. Moreover, physical models based on specific rules may have limitations in controlling the release discharge of a dam owing to inherent uncertainty and complex factors. This study aims to forecast the water level at the station nearest the dam multiple timesteps ahead using LSTM (Long Short-Term Memory) deep learning, and to evaluate its usefulness for release discharge decisions. The LSTM model was trained and tested on eight data sets with a 1-hour temporal resolution, including primary data used in dam operation and downstream water level station data covering about 13 years (2009~2021). The trained model forecasted the water level time series for six lead times (1, 3, 6, 9, 12, and 18 hours), and the forecasts were compared with observed data. The 1-hour-ahead predictions exhibited the best performance for all cases, with an average MAE of 0.01 m, RMSE of 0.015 m, and NSE of 0.99; as the lead time increases, the predictive performance of the model decreases slightly. The model reliably reproduces the temporal pattern of the observed water level. It is therefore judged that the LSTM model can produce predictive data by extracting the characteristics of complex, non-linear hydrological data, and that it can be used to determine the release discharge from the dam when simulating dam operation.
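
An LSTM water level forecaster of the kind described above maps a window of past hourly values to the level several hours ahead. The Keras sketch below shows the windowing and a one-layer LSTM for a 1-hour lead time; the window length, layer size, and synthetic series are assumptions, not the paper's configuration.

```python
import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dense

def make_windows(series, n_in=24, lead=1):
    """Slice a 1-D series into (past n_in steps) -> (value lead steps ahead)."""
    X, y = [], []
    for i in range(len(series) - n_in - lead + 1):
        X.append(series[i : i + n_in])
        y.append(series[i + n_in + lead - 1])
    return np.array(X)[..., None], np.array(y)  # (samples, timesteps, features)

# Synthetic hourly water level series standing in for the station data.
t = np.arange(5000)
level = 2.0 + 0.5 * np.sin(2 * np.pi * t / 24) + 0.05 * np.random.randn(len(t))

X, y = make_windows(level, n_in=24, lead=1)

model = Sequential([LSTM(32, input_shape=(24, 1)), Dense(1)])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=3, batch_size=64, verbose=0)

print(model.predict(X[-1:], verbose=0))  # next-hour water level forecast
```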

Analysis of Global Success Factors of K-pop Music (K-pop 음악의 글로벌 성공 요인 분석)

  • Lee, Kate Seung-Yeon;Chang, Min-Ho
    • Journal of Korea Entertainment Industry Association / v.13 no.4 / pp.1-15 / 2019
  • Psy's 'Gangnam Style' in 2012 showed K-pop's potential for global growth, and BTS proved it by reaching No. 1 on the Billboard chart three consecutive times. Success in the global music market brings tremendous economic and cultural power. This study analyzes the musical factors behind K-pop's global success with a view to the continued growth of K-pop in the global music market. The top 20 most-viewed K-pop music videos on YouTube were chosen as the research subject, because YouTube is a worldwide platform that reflects global popularity. In the K-pop music creation process, the role of the composer has expanded and many overseas producers participate. All 20 songs were created by a collective creation system, with repeated collaboration between the main producers and certain artists. The top 20 most-viewed K-pop songs share the musical characteristics of transnational genre convergence, hook songs, sophisticated sounds, frequent use of English lyrics, reflection of the latest global trends, rhythms optimized for dance, and clear concepts. These make K-pop songs easily remembered and familiar to overseas listeners, while K-pop's healthy and fresh themes bring emotional empathy and reflect Korean sentiments. K-pop's global success is not a coincidence but the result of continuous efforts to advance overseas. Some critics argue that K-pop's musical style is homogeneous and that this reveals its limitations, but K-pop has continued its musical evolution. By keeping the merits of these success factors and complementing its weak points, K-pop can continue its popularity and increase its influence in the global music market.