• Title/Summary/Keyword: System Performance Evaluation (시스템 성능평가)


CO2 Methanation Characteristics over Ni Catalyst in a Pressurized Bubbling Fluidized Bed Reactor (가압 기포 유동층 반응기에서의 Ni계 촉매 CO2 메탄화 특성 연구)

  • Son, Seong Hye;Seo, Myung Won;Hwang, Byung Wook;Park, Sung Jin;Kim, Jung Hwan;Lee, Do Yeon;Go, Kang Seok;Jeon, Sang Goo;Yoon, Sung Min;Kim, Yong Ku;Kim, Jae Ho;Ryu, Ho Jeong;Rhee, Young Woo
    • Korean Chemical Engineering Research
    • /
    • v.56 no.6
    • /
    • pp.871-877
    • /
    • 2018
  • Storing surplus energy from renewable resources is one of the challenges posed by the intermittent and fluctuating nature of renewable electricity production. $CO_2$ methanation is a well-known reaction for renewable energy storage; it requires a catalyst that is active at relatively low temperatures ($250-500^{\circ}C$) and selective toward methane. In this study, catalytic performance tests were conducted in a pressurized bubbling fluidized bed reactor (diameter: 0.025 m, height: 0.35 m) with a $Ni/{\gamma}-Al_2O_3$ catalyst (70% Ni, 30% ${\gamma}-Al_2O_3$). The reaction conditions covered $H_2/CO_2$ mole ratios of 4.0-6.0, temperatures of $300-420^{\circ}C$, pressures of 1-9 bar, and gas velocities ($U_0/U_{mf}$) of 1-5. Over the experimental range, $CO_2$ conversion increased with $H_2/CO_2$ mole ratio, temperature, and pressure, but decreased with increasing gas velocity due to poorer mixing in the fluidized bed. The maximum $CO_2$ conversion of 99.6% was obtained under the following operating conditions: $H_2/CO_2$ ratio of 5, temperature of $400^{\circ}C$, pressure of 9 bar, and $U_0/U_{mf}$ of 1.4-3.
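The conversion figures above follow from a simple mass balance over the reactor. As a minimal sketch (the inlet/outlet flow values below are illustrative, not measurements from the paper):

```python
def co2_conversion(co2_in, co2_out):
    """Fractional CO2 conversion from inlet and outlet CO2 molar flows.

    Sabatier reaction: CO2 + 4 H2 -> CH4 + 2 H2O, so the stoichiometric
    H2/CO2 mole ratio is 4.0; the study's 4.0-6.0 range uses excess H2
    to push the equilibrium toward CH4.
    """
    return (co2_in - co2_out) / co2_in

# Toy inlet/outlet flows chosen to reproduce the reported maximum conversion.
print(f"{co2_conversion(co2_in=1.00, co2_out=0.004):.1%}")  # 99.6%
```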

Discovering Promising Convergence Technologies Using Network Analysis of Maturity and Dependency of Technology (기술 성숙도 및 의존도의 네트워크 분석을 통한 유망 융합 기술 발굴 방법론)

  • Choi, Hochang;Kwahk, Kee-Young;Kim, Namgyu
    • Journal of Intelligence and Information Systems
    • /
    • v.24 no.1
    • /
    • pp.101-124
    • /
    • 2018
  • Recently, most technologies have developed either through the advancement of a single technology or through interaction with other technologies; the latter gives rise to convergence technologies, which emerge from the interaction of two or more techniques. Efforts to respond to technological change in advance, by forecasting the promising convergence technologies that will appear in the near future, are steadily increasing, and many researchers are attempting such forecasts. Because a convergence technology inherits the characteristics of several technologies by the very way it is created, forecasting promising convergence technologies is far more difficult than forecasting general technologies with high growth potential. Nevertheless, some progress has been made using big data analysis and social network analysis: data-driven studies on discovering new convergence technologies and analyzing their trends are actively conducted, and information about new convergence technologies is now more abundant than in the past. Existing methods, however, have several limitations. First, most studies analyze convergence technology through predefined technology classifications. Recent technologies tend to be convergent and therefore draw on multiple fields, so a new convergence technology may not belong to any predefined class; the existing approach thus fails to reflect the dynamic nature of the convergence phenomenon. Second, most existing methods forecast promising convergence technologies with general-purpose indicators, which do not exploit what is specific to convergence. A new convergence technology depends heavily on the existing technologies from which it originates and, depending on how those technologies change, may grow into an independent field or disappear rapidly. Traditional general-purpose indicators do not capture this principle: new technologies emerge from two or more mature technologies, and grown technologies in turn affect the creation of other technologies. Third, previous studies provide no objective method for evaluating the accuracy of models that forecast promising convergence technologies. Because the field is complex, such forecasting work has been relatively scarce, and methods for evaluating its accuracy are hard to find; establishing an objective verification and evaluation procedure is essential for the field to develop. To overcome these limitations, we propose a new method for analyzing convergence technologies. First, through topic modeling, we derive a technology classification from text content, reflecting the dynamic change of the actual technology market rather than a fixed classification standard. We then identify influence relationships between technologies from the topic-correspondence weights of each document and structure them into a network. We also devise a centrality indicator, PGC (potential growth centrality), that forecasts the future growth of a technology from its centrality information, reflecting each technology's convergence characteristics through technology maturity and interdependence between technologies. Along with this, we propose evaluating the accuracy of the forecasting model by measuring the growth rate of promising technologies, based on the variation of potential growth centrality across periods. We conduct experiments on 13,477 patent documents to evaluate the performance and practical applicability of the proposed method. The results confirm that the forecast model based on the proposed centrality indicator achieves up to about 2.88 times the accuracy of forecast models based on currently used network indicators.
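The network-construction step can be illustrated with a minimal sketch. The topic names and edge weights below are invented, and the simple weighted in-degree shown here is only a stand-in for the paper's PGC indicator, whose actual formula also incorporates technology maturity:

```python
# Hypothetical topic-dependence network: edge (a, b, w) means topic `a`
# contributes to topic `b` with weight `w` (e.g., topic-correspondence
# weights aggregated over patent documents).
edges = [
    ("sensors", "iot", 0.6),
    ("networking", "iot", 0.5),
    ("iot", "smart_home", 0.8),
    ("ml", "smart_home", 0.4),
]

def weighted_in_centrality(edges):
    """Total dependence weight flowing into each node: a crude proxy for
    how much a topic is fed by mature source technologies."""
    score = {}
    for src, dst, w in edges:
        score.setdefault(src, 0.0)
        score[dst] = score.get(dst, 0.0) + w
    return score

print(weighted_in_centrality(edges))
```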

Implementation Strategy for the Elderly Care Solution Based on Usage Log Analysis: Focusing on the Case of Hyodol Product (사용자 로그 분석에 기반한 노인 돌봄 솔루션 구축 전략: 효돌 제품의 사례를 중심으로)

  • Lee, Junsik;Yoo, In-Jin;Park, Do-Hyung
    • Journal of Intelligence and Information Systems
    • /
    • v.25 no.3
    • /
    • pp.117-140
    • /
    • 2019
  • As aging accelerates and social problems concerning vulnerable elderly people mount, the need for effective elderly care solutions that protect the health and safety of the elderly generation is growing. Recently, more and more people are using smart toys equipped with ICT for elderly care. Log data collected through smart toys are particularly valuable as quantitative, objective indicators for areas such as policy-making and service planning. However, research related to smart toys has been limited to topics such as smart toy development and validation of smart toy effectiveness; there is a dearth of research that derives insights from log data collected through smart toys and applies them to decision making. This study analyzes log data collected from a smart toy and derives insights for improving the quality of life of elderly users. Specifically, we performed a user-profiling analysis and elicited the mechanism by which usage behavior changes quality of life. First, the profiling analysis derived two key dimensions for classifying elderly users from five factors of living management: 'Routine Activities' and 'Work-out Activities'. Based on these dimensions, hierarchical cluster analysis and K-Means clustering were performed to classify the elderly users into three groups, and the demographic characteristics and smart toy usage behavior of each group were identified. Second, stepwise regression was used to elicit the mechanism of change in quality of life: interaction, content usage, and indoor activity were found to affect improvements in the elderly's depression and lifestyle. In addition, users' evaluation of smart toy performance and their satisfaction with the smart toy were identified as mediators between usage behavior and quality-of-life change. The specific mechanisms are as follows. First, interaction between the smart toy and the elderly improved depression by way of attitudes toward the smart toy: interaction positively affected how users evaluated smart toy performance, and 'Satisfaction toward Smart Toy' in turn improved depression. This can be interpreted as elderly users with a desire for emotional stability interacting actively with the smart toy, appreciating its effectiveness, and assessing it positively. Second, content usage had a direct effect on improving lifestyle without passing through other variables: elderly users who made heavy use of the content provided by the smart toy improved their lifestyle, regardless of their attitude toward the toy. Third, the log data show that a high degree of indoor activity improves both lifestyle and depression. More indoor activity leads to a better lifestyle regardless of attitude toward the smart toy, and elderly users with high indoor activity are more satisfied with the smart toy, which in turn improves depression. Conversely, elderly users who prefer outdoor activities, or who are less active due to health problems, are unlikely to be satisfied with the smart toy and thus miss the depression-improving effect. In summary, three groups of elderly users were identified from their activities and the important characteristics of each type were described; the study then identified the mechanism by which elderly users' smart toy behavior affects their actual lives and derived user needs and insights.
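The clustering step can be sketched as follows. This is a generic K-Means on invented 2-D activity scores, not the study's actual data or the hierarchical step that preceded it:

```python
def kmeans(points, k, iters=20):
    """Minimal 2-D K-Means. Initialization is naive (first k points);
    real analyses would use k-means++ or multiple restarts."""
    centers = list(points[:k])
    clusters = []
    for _ in range(iters):
        # Assign each point to its nearest center (squared Euclidean distance).
        clusters = [[] for _ in range(k)]
        for p in points:
            i = min(range(k),
                    key=lambda c: (p[0] - centers[c][0]) ** 2 + (p[1] - centers[c][1]) ** 2)
            clusters[i].append(p)
        # Move each center to the mean of its cluster.
        centers = [(sum(p[0] for p in cl) / len(cl), sum(p[1] for p in cl) / len(cl))
                   if cl else centers[i]
                   for i, cl in enumerate(clusters)]
    return centers, clusters

# Toy scores on the two derived dimensions
# ('Routine Activities' x 'Work-out Activities'); two clearly separated groups.
pts = [(0.1, 0.2), (0.9, 0.8), (0.2, 0.1), (0.15, 0.15), (0.8, 0.9), (0.85, 0.85)]
centers, clusters = kmeans(pts, k=2)
print(sorted(len(c) for c in clusters))  # [3, 3]
```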

Target-Aspect-Sentiment Joint Detection with CNN Auxiliary Loss for Aspect-Based Sentiment Analysis (CNN 보조 손실을 이용한 차원 기반 감성 분석)

  • Jeon, Min Jin;Hwang, Ji Won;Kim, Jong Woo
    • Journal of Intelligence and Information Systems
    • /
    • v.27 no.4
    • /
    • pp.1-22
    • /
    • 2021
  • Aspect-Based Sentiment Analysis (ABSA), which analyzes sentiment with respect to the aspects that appear in a text, is drawing attention because it can be applied across business domains. ABSA analyzes sentiment by aspect for the multiple aspects a text contains, and is studied in various forms depending on the purpose, such as analyzing targets together with aspects and sentiments, or aspects and sentiments alone. Here, an aspect is a property of a target, and a target is the text span that causes the sentiment. For restaurant reviews, for example, the aspects could be food taste, food price, quality of service, and the mood of the restaurant; in a review that says, "The pasta was delicious, but the salad was not," the words "pasta" and "salad," which are directly mentioned in the sentence, are the targets. So far, most ABSA studies have analyzed sentiment based only on aspects or only on targets. However, even with the same aspects or targets, sentiment analysis can be inaccurate when aspects or sentiments are divided, or when sentiment exists without a target. Consider "Pizza and the salad were good, but the steak was disappointing": although the aspect is limited to "food," conflicting sentiments coexist. In "Shrimp was delicious, but the price was extravagant," the target is "shrimp," yet opposite sentiments coexist depending on the aspect. Finally, in "The food arrived too late and is cold now," there is no target (NULL), but the sentence conveys negative sentiment toward the aspect "service." Failing to consider both aspects and targets in such cases creates a dual dependency problem. To address this problem, this research analyzes sentiment by considering aspects and targets jointly (Target-Aspect-Sentiment Detection, hereafter TASD). We identified two limitations of existing TASD models: local context is not fully captured, and small numbers of epochs and small batch sizes dramatically lower the F1-score. The current models excel at capturing overall context and relations between words, but struggle with phrases in the local context and learn relatively slowly. To improve performance, we add an auxiliary loss for aspect-sentiment classification by constructing CNN (Convolutional Neural Network) layers parallel to the existing model. Where existing models analyze aspect-sentiment through BERT encoding, a Pooler, and Linear layers, we add CNN layers with adaptive average pooling, and train by adding the auxiliary aspect-sentiment loss to the existing loss. During training, the auxiliary loss computed through the CNN layers allows the local context to be captured more closely; after training, the model performs aspect-sentiment analysis through the existing path. To evaluate the model, two datasets, SemEval-2015 Task 12 and SemEval-2016 Task 5, were used, and the F1-score increased compared to existing models. With a batch size of 8 and 5 epochs, the gap was largest: F1-scores of 29 for existing models versus 45 for ours. Even when batch size and epochs were adjusted, the F1-scores remained higher than those of existing models, showing that the model learns effectively even with small batch and epoch counts and is therefore useful where resources are limited. Through this study, aspect-based sentiment can be analyzed more accurately. Through various business uses, such as product development or marketing strategy, both consumers and sellers will be able to make efficient decisions. Moreover, because the approach uses a pre-trained model and recorded a relatively high F1-score even with limited resources, it should also be usable by small businesses that do not have much data.
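The ingredients of the auxiliary branch can be sketched with plain Python lists rather than tensors. The 0.5 auxiliary-loss weight and the toy numbers are assumptions for illustration, not values from the paper:

```python
def conv1d(seq, kernel):
    """Valid 1-D convolution (cross-correlation) over per-token scores,
    capturing local (n-gram) context."""
    k = len(kernel)
    return [sum(seq[i + j] * kernel[j] for j in range(k))
            for i in range(len(seq) - k + 1)]

def adaptive_avg_pool(seq, out_len):
    """Average-pool a sequence of any length down to `out_len` values."""
    n = len(seq)
    return [sum(seq[i * n // out_len:(i + 1) * n // out_len])
            / ((i + 1) * n // out_len - i * n // out_len)
            for i in range(out_len)]

def total_loss(main_loss, aux_loss, aux_weight=0.5):
    """Existing aspect-sentiment loss plus the weighted CNN-branch loss."""
    return main_loss + aux_weight * aux_loss

feats = [0.1, 0.4, 0.3, 0.9, 0.2, 0.6]   # toy per-token scores from an encoder
local = conv1d(feats, [0.5, 0.5])        # local (bigram) context
pooled = adaptive_avg_pool(local, 2)     # fixed-size summary for the classifier
```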

Natural Language Processing Model for Data Visualization Interaction in Chatbot Environment (챗봇 환경에서 데이터 시각화 인터랙션을 위한 자연어처리 모델)

  • Oh, Sang Heon;Hur, Su Jin;Kim, Sung-Hee
    • KIPS Transactions on Computer and Communication Systems
    • /
    • v.9 no.11
    • /
    • pp.281-290
    • /
    • 2020
  • With the spread of smartphones, services that use personalized data are increasing. Healthcare-related services in particular deal with diverse data and use data visualization techniques to present it effectively, which naturally puts emphasis on interaction with the visualizations. In a PC environment, visualization interaction is performed with a mouse, so various data filters can be provided. In a mobile environment, by contrast, the screen is small and it is hard to tell whether interaction is even possible, so only the limited visualizations built into an app can be offered through button touches. To overcome this limitation, we enable data visualization interaction through conversation with a chatbot, so that users can examine their individual data through various visualizations. This requires converting the user's natural language request into a database query and retrieving the result data from a database that stores the data periodically. Many studies convert natural language into queries, but converting user requests into queries for visualization has not yet been studied. In this paper, we therefore focus on query generation in a situation where the data visualization technique has been determined in advance. The supported interactions are filtering on x-axis values and comparison between two groups. The test scenario used step-count data; filtering over an x-axis period was shown as a bar graph, and a comparison between two groups as a line graph. To develop a natural language processing model that can receive requests for information through visualization, about 15,800 training examples were collected through a survey of 1,000 people. Algorithm development and performance evaluation yielded about 89% accuracy for the classification model and 99% accuracy for the query generation model.
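The natural-language-to-query step can be sketched as a mapping from a classified intent plus extracted slots to SQL. The intent names, table, and columns (`steps`, `date`, `count`, `user_id`) below are illustrative assumptions, not the paper's actual schema:

```python
def build_query(intent, params):
    """Turn a classified visualization intent into SQL.

    Production code should bind every value as a parameter; string
    interpolation is used here only to keep the sketch short.
    """
    if intent == "filter_period":
        # Bar graph: step counts filtered to an x-axis date range.
        return ("SELECT date, count FROM steps "
                f"WHERE user_id = ? AND date BETWEEN '{params['start']}' AND '{params['end']}'")
    if intent == "compare_groups":
        # Line graph: average step counts compared between two users/groups.
        return ("SELECT user_id, AVG(count) FROM steps "
                f"WHERE user_id IN ({params['a']}, {params['b']}) GROUP BY user_id")
    raise ValueError(f"unknown intent: {intent}")

q = build_query("filter_period", {"start": "2020-01-01", "end": "2020-01-31"})
```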

Earthquake Monitoring : Future Strategy (지진관측 : 미래 발전 전략)

  • Chi, Heon-Cheol;Park, Jung-Ho;Kim, Geun-Young;Shin, Jin-Soo;Shin, In-Cheul;Lim, In-Seub;Jeong, Byung-Sun;Sheen, Dong-Hoon
    • Geophysics and Geophysical Exploration
    • /
    • v.13 no.3
    • /
    • pp.268-276
    • /
    • 2010
  • The Earthquake Hazard Mitigation Law entered into force in March 2009. Under the law, the obligation to monitor the effect of earthquakes on facilities was extended to many organizations, such as gas companies and local governments. Based on estimates by the National Emergency Management Agency (NEMA), the number of free-surface acceleration stations will expand to more than 400. Internet protocols and simplified operation now allow quick, easy installation of seismic stations, and the dynamic range of seismic instruments has improved enough to evaluate damage intensity and issue alarms directly for earthquake hazard mitigation. For direct visualization of damage intensity and affected area, Real Time Intensity COlor Mapping (RTICOM) is explained in detail; RTICOM retrieves the essential quantity for damage evaluation, Peak Ground Acceleration (PGA). Destructive earthquake damage is usually due to surface waves, which follow just behind the S wave, and the peak amplitude of the surface wave can be pre-estimated from the amplitude and frequency content of the first-arrival P wave. An Earthquake Early Warning (EEW) system is conventionally defined as estimating local magnitude from the P wave. The status of EEW is reviewed, and its application to the Odaesan earthquake is illustrated with ShakeMap. In terms of rapidity, the earthquake announcements of the Korea Meteorological Administration (KMA) could be dramatically improved by adopting EEW. To realize hazard mitigation, EEW should also be applied to crucial local facilities such as nuclear power plants and fragile semiconductor plants; distributed EEW is introduced with an application example from the Uljin earthquake. For both nation-wide and locally distributed EEW applications, all relevant information needs to be shared in real time. The plan to extend the Korea Integrated Seismic System (KISS) is briefly explained with a view to future cooperation in data sharing and utilization.
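The idea behind real-time intensity color mapping can be sketched as a lookup from PGA to a display color. The thresholds below are illustrative only (loosely inspired by published PGA-to-intensity scales), not RTICOM's actual color scale:

```python
def pga_to_color(pga_g):
    """Map peak ground acceleration (in g) to a display color band,
    in the spirit of RTICOM. Thresholds are hypothetical."""
    bands = [(0.0017, "white"),   # not felt
             (0.014, "green"),    # weak shaking
             (0.039, "yellow"),   # light shaking
             (0.092, "orange"),   # moderate shaking
             (0.34, "red")]       # strong shaking
    for limit, color in bands:
        if pga_g < limit:
            return color
    return "dark red"             # severe shaking

print(pga_to_color(0.05))  # orange
```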

Target Word Selection Disambiguation using Untagged Text Data in English-Korean Machine Translation (영한 기계 번역에서 미가공 텍스트 데이터를 이용한 대역어 선택 중의성 해소)

  • Kim Yu-Seop;Chang Jeong-Ho
    • The KIPS Transactions:PartB
    • /
    • v.11B no.6
    • /
    • pp.749-758
    • /
    • 2004
  • In this paper, we propose a new method that uses only a raw corpus, without additional human effort, to disambiguate target word selection in English-Korean machine translation. We use two data-driven techniques: Latent Semantic Analysis (LSA) and Probabilistic Latent Semantic Analysis (PLSA). These techniques can represent the complex semantic structure of a given context, such as a text passage. We construct linguistic semantic knowledge with the two techniques and use it for target word selection in English-Korean machine translation, utilizing grammatical relationships stored in a dictionary. To resolve the data sparseness problem in target word selection, we use the k-nearest neighbor learning algorithm and estimate the distance between instances based on these models. In experiments, we use AP news data from TREC to construct the latent semantic space and a Wall Street Journal corpus to evaluate target word selection. With the latent semantic analysis methods, the accuracy of target word selection improved by over 10%, and PLSA showed better accuracy than LSA. Finally, we show the relationship between accuracy and two important factors, the dimensionality of the latent space and the k value in k-NN learning, using correlation analysis.
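The k-nearest-neighbor step over latent-space vectors can be sketched as follows. The 2-D vectors and sense labels are invented stand-ins for the LSA/PLSA representations and Korean translation candidates:

```python
from math import sqrt

def cosine(u, v):
    """Cosine similarity between two vectors."""
    num = sum(a * b for a, b in zip(u, v))
    return num / (sqrt(sum(a * a for a in u)) * sqrt(sum(b * b for b in v)))

def knn_label(query, examples, k=3):
    """Majority label among the k most cosine-similar training vectors."""
    ranked = sorted(examples, key=lambda ex: cosine(query, ex[0]), reverse=True)
    labels = [lab for _, lab in ranked[:k]]
    return max(set(labels), key=labels.count)

# Toy latent vectors for contexts of the ambiguous word "bank",
# labeled with the translation sense chosen in each context.
examples = [([1.0, 0.1], "bank_river"), ([0.9, 0.2], "bank_river"),
            ([0.1, 1.0], "bank_finance"), ([0.2, 0.9], "bank_finance"),
            ([0.8, 0.3], "bank_river")]
print(knn_label([0.95, 0.15], examples, k=3))  # bank_river
```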

Anisotropic radar crosshole tomography and its applications (이방성 레이다 시추공 토모그래피와 그 응용)

  • Kim Jung-Ho;Cho Seong-Jun;Yi Myeong-Jong
    • Korean Society of Earth and Exploration Geophysicists: Conference Proceedings
    • /
    • 2005.09a
    • /
    • pp.21-36
    • /
    • 2005
  • Although the main geology of Korea consists of granite and gneiss, it is not uncommon to encounter anisotropy phenomena in crosshole radar tomography, even when the basement is crystalline rock. To solve the anisotropy problem, we have developed, and continuously upgraded, an anisotropic inversion algorithm assuming heterogeneous elliptic anisotropy that reconstructs three kinds of tomograms: maximum velocity, minimum velocity, and the direction of the symmetry axis. In this paper, we discuss the developed algorithm and introduce case histories of anisotropic radar tomography in Korea. The first two case histories were conducted for infrastructure construction, with the main objective of locating cavities in limestone; the last two were performed in a granite and gneiss area. The anisotropy in the granite area was caused by fine fissures aligned in the same direction, while that in the gneiss and limestone areas was caused by the alignment of the constituent minerals. Through these case histories, we show that the anisotropic characteristic itself provides additional important information for understanding the internal state of basement rock. In particular, the anisotropy ratio, defined as the normalized difference between the maximum and minimum velocities, together with the direction of maximum velocity, is helpful in interpreting borehole radar tomograms.
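The anisotropy ratio mentioned above can be written down directly. Normalizing by the mean velocity is one common convention, assumed here because the abstract does not give the exact formula; the velocities are toy numbers:

```python
def anisotropy_ratio(v_max, v_min):
    """Normalized difference between maximum and minimum velocities,
    normalized here by their mean (one common convention)."""
    return (v_max - v_min) / ((v_max + v_min) / 2.0)

# Toy radar velocities; a ratio near zero means nearly isotropic rock.
ratio = anisotropy_ratio(120.0, 100.0)
```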


Comparison of Characteristics on Electrolyzed Water Manufactured by Various Electrolytic Factors (전해인자에 따른 전기분해수의 특성 비교)

  • Kim, Myung-Ho;Jeong, Jin-Woong;Cho, Young-Je
    • Korean Journal of Food Science and Technology
    • /
    • v.36 no.3
    • /
    • pp.416-422
    • /
    • 2004
  • The surface-sterilization efficacy and physicochemical properties of electrolyzed water were investigated as functions of the electrolyte, materials, and type of electrolytic diaphragm used. Electrolyzed water from the diaphragm system was most effective with a 1.0 mm gap between diaphragms and a 20% NaCl supply rate of 6 mL/min; under these conditions the ORP, HClO (hypochlorous acid) content, and pH were 1,170 mV, 100 ppm, and 2.5, respectively. A two-stage electrolysis system was more effective than a one-stage one. Electrolyzed water from the non-diaphragm system at a 20% NaCl supply rate of 4 mL/min was similar to the most effective diaphragm system, with ORP, HClO content, and pH of 800 mV, 200 ppm, and 9, respectively. Sealed electrolyzed water could be preserved for more than one month at room temperature, with ORPs of 750 and 1,150 mV in the non-diaphragm and diaphragm systems, respectively, and an HClO content of 100 ppm. Electrolyzed water manufactured with electrolytic diaphragms of $IrO_{2}$ and Pt+Ir was more effective than that of Pt. The ORP and HClO content of electrolyzed water were highest with NaCl, followed by KCl and $CaCl_{2}$, whereas no differences in sterilization efficacy were observed among the electrolytes. All twelve microorganisms tested (initial total count $10^{5}-10^{6}CFU/mL$) were sterilized within 1-2 min by electrolyzed water.

Evaluation of the Simulated PM2.5 Concentrations using Air Quality Forecasting System according to Emission Inventories - Focused on China and South Korea (대기질 예보 시스템의 입력 배출목록에 따른 PM2.5 모의 성능 평가 - 중국 및 한국을 중심으로)

  • Choi, Ki-Chul;Lim, Yongjae;Lee, Jae-Bum;Nam, Kipyo;Lee, Hansol;Lee, Yonghee;Myoung, Jisu;Kim, Taehee;Jang, Limseok;Kim, Jeong Soo;Woo, Jung-Hun;Kim, Soontae;Choi, Kwang-Ho
    • Journal of Korean Society for Atmospheric Environment
    • /
    • v.34 no.2
    • /
    • pp.306-320
    • /
    • 2018
  • An emission inventory is an essential component for improving the performance of an air quality forecasting system. This study evaluated simulated daily mean $PM_{2.5}$ concentrations in South Korea and China over a one-year period (Sept. 2016~Aug. 2017) using an air quality forecasting system driven by the E2015 emission inventory (predicted CAPSS 2015 for South Korea and KORUS 2015 v1 for the other regions). To identify the impact of emissions on the simulated $PM_{2.5}$, the E2010 inventory (CAPSS 2010 and MIX 2010) was also applied under the same forecasting conditions. The simulated daily mean $PM_{2.5}$ concentrations showed generally suitable performance with both emission data sets for China (IOA>0.87, R>0.87) and South Korea (IOA>0.84, R>0.76). The impacts of the change in emission inventories on the simulated daily mean $PM_{2.5}$ concentrations were quantitatively estimated. In China, the normalized mean bias (NMB) was 5.5% under E2010 and 26.8% under E2015. The tendency to overestimate concentrations was larger in North Central and Southeast China than in other regions under both inventories, and seasonal NMB was higher in the non-winter seasons (28.3% (E2010)~39.3% (E2015)) than in winter (-0.5% (E2010)~8.0% (E2015)). In South Korea, NMB was -5.4% (E2010) and 2.8% (E2015) for all days, but -15.2% and -11.2% for days below $40{\mu}g/m^3$, a subset chosen to minimize the impact of long-range transport. For all days, simulated $PM_{2.5}$ concentrations were overestimated in Seoul, Incheon, the southern part of Gyeonggi, and Daejeon, and underestimated in other regions such as Jeonbuk, Ulsan, Busan, and Gyeongnam, regardless of which emission inventory was applied. Our results suggest that an updated emission inventory reflecting the current status of emission amounts and their spatio-temporal allocation is needed to improve the performance of air quality forecasting.
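The NMB and IOA statistics quoted above have standard definitions, sketched here with invented toy concentrations rather than the study's data:

```python
def nmb(model, obs):
    """Normalized mean bias in percent: 100 * sum(M - O) / sum(O)."""
    return 100.0 * sum(m - o for m, o in zip(model, obs)) / sum(obs)

def ioa(model, obs):
    """Willmott's index of agreement, ranging 0-1 (1 = perfect agreement)."""
    ob = sum(obs) / len(obs)
    num = sum((m - o) ** 2 for m, o in zip(model, obs))
    den = sum((abs(m - ob) + abs(o - ob)) ** 2 for m, o in zip(model, obs))
    return 1.0 - num / den

obs = [10.0, 20.0, 30.0]   # toy observed daily-mean PM2.5 (ug/m3)
mod = [12.0, 18.0, 33.0]   # toy simulated values
print(round(nmb(mod, obs), 1))  # 5.0 -> model biased 5% high on average
```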