• Title/Summary/Keyword: Technical performance evaluation system (기술적 성능평가시스템)

Search results: 1,336

Techniques for Acquisition of Moving Object Location in LBS (위치기반 서비스(LBS)를 위한 이동체 위치획득 기법)

  • Min, Gyeong-Uk;Jo, Dae-Su
    • The KIPS Transactions:PartD
    • /
    • v.10D no.6
    • /
    • pp.885-896
    • /
    • 2003
  • The types of services using location information are becoming more varied and their domain is extending as wireless internet technology develops and its application area widens, so LBS (Location-Based Services) is expected to be a killer application among wireless internet services. Location information is basic, high value-added information, and services built on it make the existing GIS (Geographic Information System) useful to anybody. Acquiring location information from moving objects is a very important part of LBS, and the interface for this acquisition between the MODB (moving-object database) and the telecommunication network is likewise a very important function. Once LBS becomes familiar to everybody, we can predict that the system load from acquiring the locations of so many subscribers and vehicles will be heavy; that is, LBS platform performance degrades as the overhead of acquiring moving-object locations between the MODB and the wireless telecommunication network increases. To keep the LBS platform stable, the MODB system should reduce the number of unnecessary moving-object location acquisitions. We study the problems in acquiring the locations of a huge number of moving objects, design acquisition models that use each object's past movement pattern to reduce telecommunication overhead, and, after implementing these models, evaluate the performance of each.
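The acquisition-reduction idea in the abstract above can be sketched in a few lines: skip a network location request whenever the position extrapolated from the object's recent movement pattern is still within tolerance. Everything here (the class name, the dead-reckoning predictor, the 100 m tolerance) is an illustrative assumption, not the paper's actual acquisition models.

```python
import math

class AcquisitionFilter:
    """Skip network location requests while the object's last known
    velocity still predicts its position within a tolerance."""

    def __init__(self, tolerance_m=100.0):
        self.tolerance_m = tolerance_m
        self.last_fix = None        # (time, x, y) of last acquired position
        self.velocity = (0.0, 0.0)  # (vx, vy) in metres per second

    def predict(self, t):
        # Dead-reckon from the last acquired fix.
        t0, x, y = self.last_fix
        dt = t - t0
        return (x + self.velocity[0] * dt, y + self.velocity[1] * dt)

    def should_acquire(self, t, reported_pos):
        # reported_pos stands in for the network-reported position; a real
        # MODB would use map-matched route knowledge rather than peeking.
        if self.last_fix is None:
            return True
        px, py = self.predict(t)
        return math.hypot(px - reported_pos[0], py - reported_pos[1]) > self.tolerance_m

    def update(self, t, pos):
        # Refresh the movement pattern after an actual acquisition.
        if self.last_fix is not None:
            t0, x, y = self.last_fix
            dt = t - t0
            if dt > 0:
                self.velocity = ((pos[0] - x) / dt, (pos[1] - y) / dt)
        self.last_fix = (t, pos[0], pos[1])
```

For an object moving in a straight line at constant speed, only the first two fixes trigger network acquisitions; later positions are covered by the prediction, which is exactly the overhead reduction the abstract describes.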

A Study on Knowledge Entity Extraction Method for Individual Stocks Based on Neural Tensor Network (뉴럴 텐서 네트워크 기반 주식 개별종목 지식개체명 추출 방법에 관한 연구)

  • Yang, Yunseok;Lee, Hyun Jun;Oh, Kyong Joo
    • Journal of Intelligence and Information Systems
    • /
    • v.25 no.2
    • /
    • pp.25-38
    • /
    • 2019
  • Selecting high-quality information that meets the interests and needs of users from the overflowing contents is becoming more important as information generation continues to grow. In this flood of information, efforts are being made to better reflect the user's intention in search results, rather than treating the information request as a simple string. Large IT companies such as Google and Microsoft also focus on developing knowledge-based technologies, including search engines, that provide users with satisfaction and convenience. In particular, finance is one of the fields expected to benefit from text data analysis because it constantly generates new information, and the earlier the information is, the more valuable it is. Automatic knowledge extraction can be effective in areas such as the financial sector, where the information flow is vast and new information continues to emerge. However, automatic knowledge extraction faces several practical difficulties. First, it is difficult to build corpora from different fields with the same algorithm and to extract good-quality triples. Second, producing labeled text data manually becomes harder as the extent and scope of knowledge increase and patterns are constantly updated. Third, performance evaluation is difficult due to the characteristics of unsupervised learning. Finally, defining the problem of automatic knowledge extraction is not easy because of the ambiguous conceptual characteristics of knowledge. To overcome the limits described above and improve the semantic performance of stock-related information search, this study extracts knowledge entities using a neural tensor network and evaluates its performance. Unlike other references, the purpose of this study is to extract knowledge entities related to individual stock items.
Various but relatively simple data processing methods are applied in the presented model to solve the problems of previous research and to enhance the model's effectiveness. These processes give this study three significances. First, it presents a practical and simple automatic knowledge extraction method. Second, it shows that performance evaluation is possible through a simple problem definition. Finally, the expressiveness of the knowledge is increased by generating input data on a sentence basis without complex morphological analysis. The results of the empirical analysis and an objective performance evaluation method are also presented. For the empirical study confirming the usefulness of the presented model, analysts' reports on 30 individual stocks, the top 30 items by publication frequency from May 30, 2017 to May 21, 2018, are used. The total number of reports is 5,600; 3,074 reports, about 55% of the total, are designated as the training set, and the remaining 45% as the testing set. Before constructing the model, all reports in the training set are classified by stock, and their entities are extracted using the KKMA named entity recognition tool. For each stock, the top 100 entities by appearance frequency are selected and vectorized using one-hot encoding. Then, using a neural tensor network, as many score functions as there are stocks are trained. Thus, when a new entity from the testing set appears, we calculate its score with every score function, and the stock whose function yields the highest score is predicted as the item related to the entity. To evaluate the presented model, we confirm its predictive power, and whether the score functions are well constructed, by calculating the hit ratio over all reports in the testing set.
As a result of the empirical study, the presented model shows 69.3% hit accuracy on the testing set of 2,526 reports. This hit ratio is meaningfully high despite some constraints on the research. Looking at the prediction performance for each stock, only three stocks, LG ELECTRONICS, KiaMtr, and Mando, show far lower performance than average. This result may be due to interference from other similar items and the generation of new knowledge. In this paper, we propose a methodology to find the key entities, or their combinations, needed to search related information in accordance with the user's investment intention. Graph data is generated using only the named entity recognition tool and applied to the neural tensor network, without a learning corpus or word vectors for the field. The empirical test confirms the effectiveness of the presented model as described above. However, some limitations remain; notably, the especially poor performance on a few stocks shows the need for further research. Finally, through the empirical study, we confirmed that the learning method presented here can be used to match new text information semantically with the related stocks.
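A minimal sketch of the scoring mechanics described above: one score function per stock, applied to a one-hot entity vector, with argmax prediction. The weights are random and untrained, the stock names are hypothetical, and the bilinear tensor term of a full neural tensor network is simplified to a single-layer form, so this only illustrates the prediction flow, not the paper's trained model.

```python
import numpy as np

def make_score_fn(vocab_size, hidden=16, seed=0):
    # One score function per stock: f(e) = u . tanh(W e + b).
    # Simplified single-layer form; a full neural tensor network
    # adds a bilinear tensor term between entity embeddings.
    r = np.random.default_rng(seed)
    W = r.normal(size=(hidden, vocab_size))
    b = r.normal(size=hidden)
    u = r.normal(size=hidden)
    return lambda e: float(u @ np.tanh(W @ e + b))

VOCAB = 100                                 # top-100 entities per stock, one-hot
STOCKS = ["STOCK_A", "STOCK_B", "STOCK_C"]  # hypothetical stock names
score_fns = {s: make_score_fn(VOCAB, seed=i) for i, s in enumerate(STOCKS)}

def predict_stock(entity_onehot):
    # Score a new entity with every stock's function; the stock whose
    # function yields the highest score is predicted as related.
    scores = {s: f(entity_onehot) for s, f in score_fns.items()}
    return max(scores, key=scores.get)

entity = np.zeros(VOCAB)
entity[7] = 1.0          # one-hot vector for a newly seen entity
predicted = predict_stock(entity)
```

The hit ratio reported in the abstract is then just the fraction of testing-set entities for which this argmax matches the stock the report actually discusses.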

Non-invasive Brain Stimulation and its Legal Regulation - Devices using Techniques of TMS and tDCS - (비침습적 뇌자극기술과 법적 규제 - TMS와 tDCS기술을 이용한 기기를 중심으로 -)

  • Choi, Min-Young
    • The Korean Society of Law and Medicine
    • /
    • v.21 no.2
    • /
    • pp.209-244
    • /
    • 2020
  • TMS and tDCS are non-invasive devices that treat the diseases of patients or individual users, and manage or improve their health, by applying magnetic and electrical stimulation to the brain. The effect and safety of these devices have been shown to be valid for several diseases, but research in this area is still ongoing. Despite increasing cases of their application, legislation directly regulating TMS and tDCS is hard to find. Legal regulation of TMS and tDCS in the United States, Germany, and Japan reveals that while TMS has been approved as a medical device with a moderate risk, tDCS has not yet earned approval as a medical device. However, the recent FDA guidance, European MDR changes, recalls in the US, relevant legal provisions of Germany and Japan, and recommendations from expert groups all show signs of tDCS growing closer to approval as a medical device. Of course, the safety and efficacy of tDCS can still be regulated as a general product instead of as a medical device. Considering the multiple potential impacts on the human brain, however, the need for independent regulation is urgent. South Korea also lacks legal provisions explicitly regulating TMS and tDCS, but they fall into the category of grade 3 medical devices according to the notifications of the Korean Ministry of Food and Drug Safety, and the safety and efficacy of TMS are to be evaluated in compliance with the US FDA guidance. No specific guidelines exist for tDCS yet. Given that tDCS devices are in fact used in some hospitals, and at home by individual buyers, such a regulatory gap must be addressed quickly. In the longer term, a legal system capable of independently regulating non-invasive brain stimulation devices needs to be in place.

Earthquake Monitoring : Future Strategy (지진관측 : 미래 발전 전략)

  • Chi, Heon-Cheol;Park, Jung-Ho;Kim, Geun-Young;Shin, Jin-Soo;Shin, In-Cheul;Lim, In-Seub;Jeong, Byung-Sun;Sheen, Dong-Hoon
    • Geophysics and Geophysical Exploration
    • /
    • v.13 no.3
    • /
    • pp.268-276
    • /
    • 2010
  • The Earthquake Hazard Mitigation Law entered into force in March 2009. Under the law, the obligation to monitor the effect of earthquakes on facilities was extended to many organizations, such as gas companies and local governments. Based on the estimation of the National Emergency Management Agency (NEMA), the number of free-surface acceleration stations would expand to more than 400. The advent of the internet protocol and simplified operation have allowed quick and easy installation of seismic stations. In addition, the dynamic range of seismic instruments has been continuously improved, enough to evaluate damage intensity and to issue alarms directly for earthquake hazard mitigation. For direct visualization of damage intensity and area, Real Time Intensity COlor Mapping (RTICOM) is explained in detail. RTICOM is used to retrieve the essential information for damage evaluation, Peak Ground Acceleration (PGA). Destructive earthquake damage is usually due to surface waves, which closely follow the S wave; the peak amplitude of the surface wave can be pre-estimated from the amplitude and frequency content of the first-arrival P wave. An Earthquake Early Warning (EEW) system is conventionally defined as one that estimates local magnitude from the P wave. The status of EEW is reviewed, and its application to the Odaesan earthquake is illustrated with ShakeMap. In terms of rapidity, the earthquake announcements of the Korea Meteorological Administration (KMA) might be dramatically improved by the adoption of EEW. To realize hazard mitigation, EEW should be applied to crucial local facilities such as nuclear power plants and fragile semiconductor plants. Distributed EEW is introduced with the application example of the Uljin earthquake. For both nationwide and locally distributed EEW applications, all relevant information needs to be shared in real time.
The plan for extending the Korea Integrated Seismic System (KISS) is briefly explained with a view to future cooperation in data sharing and utilization.

A Methodology of Customer Churn Prediction based on Two-Dimensional Loyalty Segmentation (이차원 고객충성도 세그먼트 기반의 고객이탈예측 방법론)

  • Kim, Hyung Su;Hong, Seung Woo
    • Journal of Intelligence and Information Systems
    • /
    • v.26 no.4
    • /
    • pp.111-126
    • /
    • 2020
  • Most industries have recently become aware of the importance of customer lifetime value as they are exposed to a competitive environment. As a result, preventing customer churn is becoming a more important business issue than securing new customers, because retaining existing customers is far more economical than acquiring new ones; in fact, the acquisition cost of a new customer is known to be five to six times the retention cost of an existing one. Companies that effectively prevent customer churn and improve customer retention rates are also known to benefit not only from increased profitability but also from an improved brand image through higher customer satisfaction. Predicting customer churn, long conducted as a sub-area of CRM research, has recently become more important as a big-data-based performance marketing theme due to the development of business machine learning technology. Until now, research on customer churn prediction has been carried out actively in sectors such as mobile telecommunications, finance, distribution, and games, which are highly competitive and where churn management is urgent. These churn prediction studies focused on improving the performance of the churn prediction model itself, by simply comparing the performance of various models, exploring features that are effective in forecasting churn, or developing new ensemble techniques, and they were limited in practical utility because most treated the entire customer base as a single group when developing a predictive model. As such, the main purpose of the existing related research was to improve the performance of the predictive model itself, and there was relatively little research on improving the overall customer churn prediction process.
In fact, customers in a business have different behavioral characteristics due to heterogeneous transaction patterns, and their churn rates differ accordingly, so it is unreasonable to treat the entire customer base as a single group. It is therefore desirable, for effective churn prediction in heterogeneous industries, to segment customers according to classification criteria such as loyalty and to operate an appropriate churn prediction model for each segment. Some studies do subdivide customers using clustering techniques and apply a churn prediction model to each customer group. Although this process can produce better predictions than a single model for the entire customer population, there is still room for improvement: clustering is a mechanical, exploratory grouping technique that calculates distances from inputs and does not reflect the strategic intent of the enterprise, such as loyalty. This study proposes a segment-based churn prediction process (CCP/2DL: Customer Churn Prediction based on Two-Dimensional Loyalty segmentation) built on two-dimensional customer loyalty, on the assumption that successful churn management is achieved better through improvements to the overall process than through the performance of the model itself. CCP/2DL is a series of churn prediction steps that segment customers along two dimensions of loyalty, quantitative and qualitative, perform secondary grouping of the customer segments according to churn patterns, and then independently apply heterogeneous churn prediction models to each churn pattern group. Performance comparisons were performed against the most commonly applied general churn prediction process and a clustering-based churn prediction process to assess the relative merit of the proposed process.
The general churn prediction process used in this study refers to applying a single machine learning churn model to the entire customer base, the most common approach, while the clustering-based churn prediction process first segments customers with clustering techniques and implements a churn prediction model for each group. In an empirical study conducted in cooperation with a global NGO, the proposed CCP/2DL showed better churn prediction performance than the other methodologies. This churn prediction process is not only effective in predicting churn but can also serve as a strategic basis for obtaining a variety of customer insights and carrying out other related performance marketing activities.
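The segment-then-model routing at the heart of CCP/2DL can be sketched as follows: customers fall into a 2×2 grid of quantitative/qualitative loyalty segments, and each segment gets its own model. The data, the 0.5 cut-off, and the per-segment "model" (a plain base churn rate) are all illustrative stand-ins for the heterogeneous models the paper trains per churn-pattern group.

```python
from statistics import mean

# Hypothetical customer records: (quant_loyalty, qual_loyalty, churned)
customers = [
    (0.9, 0.8, 0), (0.8, 0.9, 0), (0.2, 0.9, 0), (0.1, 0.8, 1),
    (0.9, 0.1, 1), (0.8, 0.2, 0), (0.1, 0.2, 1), (0.2, 0.1, 1),
]

def segment(quant, qual, cut=0.5):
    # Two-dimensional loyalty grid: four strategic segments.
    return ("high" if quant >= cut else "low",
            "high" if qual >= cut else "low")

# "Train": the per-segment base churn rate stands in for the
# heterogeneous per-segment models that CCP/2DL would fit.
groups = {}
for quant, qual, churned in customers:
    groups.setdefault(segment(quant, qual), []).append(churned)
models = {seg: mean(ys) for seg, ys in groups.items()}

def predict_churn(quant, qual):
    # Route the customer to its loyalty segment's own model.
    return models[segment(quant, qual)]
```

The point of the sketch is the routing: unlike a clustering-based split, the segment boundaries here encode a strategic notion (loyalty) chosen by the business rather than distances computed from inputs.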

Design and Implementation of Unified Index for Moving Objects Databases (이동체 데이타베이스를 위한 통합 색인의 설계 및 구현)

  • Park Jae-Kwan;An Kyung-Hwan;Jung Ji-Won;Hong Bong-Hee
    • Journal of KIISE:Databases
    • /
    • v.33 no.3
    • /
    • pp.271-281
    • /
    • 2006
  • Recently the need for Location-Based Services (LBS) has increased due to the development and widespread use of mobile devices (e.g., PDAs, cellular phones, laptop computers, GPS, and RFID). The core technology of LBS is a moving-objects database that stores and manages the positions of moving objects. To search for information quickly, the database needs an index that supports both real-time position tracking and management of large numbers of updates. As a result, the index requires a structure operating in main memory for real-time processing and a technique to migrate parts of the index between main memory and disk storage to manage large volumes of data. To satisfy these requirements, this paper suggests a unified index scheme that spans main memory and disk, together with migration policies for moving parts of the index from memory to disk when memory space is restricted. A migration policy determines a group of nodes, called the migration subtree, and migrates the group as a unit to reduce disk I/O; this method takes advantage of bulk operations and dynamic clustering. The unified index is created by applying various migration policies, whose performance this paper measures and compares through experimental evaluation.
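The migration-subtree idea can be sketched like this: when memory runs low, choose one subtree that frees enough nodes and flush it to disk as a single bulk write. The node structure and the "smallest sufficient subtree" policy below are assumptions for illustration; the paper designs and compares several migration policies.

```python
class Node:
    def __init__(self, key, children=()):
        self.key = key
        self.children = list(children)
        self.on_disk = False

    def size(self):
        return 1 + sum(c.size() for c in self.children)

def pick_migration_subtree(root, need):
    # Illustrative policy: the smallest subtree (excluding the root)
    # that frees at least `need` nodes, so hot upper levels of the
    # index stay resident in main memory.
    best = None
    def visit(n):
        nonlocal best
        for c in n.children:
            s = c.size()
            if s >= need and (best is None or s < best.size()):
                best = c
            visit(c)
    visit(root)
    return best

def migrate(subtree, disk):
    # One bulk write for the whole migration subtree: a single
    # simulated disk I/O instead of one I/O per node.
    batch = []
    def collect(n):
        n.on_disk = True
        batch.append(n.key)
        for c in n.children:
            collect(c)
    collect(subtree)
    disk.append(batch)   # each append = one simulated bulk I/O
```

Because the subtree's nodes are written together, they also land clustered on disk, which is the dynamic-clustering benefit the abstract mentions.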

A SoC Design Synthesis System for High Performance Vehicles (고성능 차량용 SoC 설계 합성 시스템)

  • Chang, Jeong-Uk;Lin, Chi-Ho
    • The Journal of the Institute of Internet, Broadcasting and Communication
    • /
    • v.20 no.3
    • /
    • pp.181-187
    • /
    • 2020
  • In this paper, we propose a register allocation algorithm and a resource allocation algorithm for the high-level synthesis process of an SoC design synthesis system for high-performance vehicles. We analyzed the operator characteristics and the datapath structure, which matter most in high-level synthesis, and introduced the concept of a virtual operator for scheduling multi-cycle operations. This hides the complexity of implementing multi-cycle operations, so the resource allocation algorithm can be applied uniformly regardless of operation type. The algorithm assigns the functional operators so that the number of connecting signal lines used repeatedly between operators is minimized. When registers are allocated, the algorithm prioritizes regional graphs according to their connection structure; registers with a shared connection structure are allocated to the maximum cluster generated by the minimum cluster partition algorithm. It also minimizes the connection structure by removing duplicate multiplexer inputs and arranging the inputs of multiplexers connected to the operators. To evaluate the scheduling performance of the described algorithm, we demonstrate its utility by scheduling the fifth-order digital wave filter, a standard benchmark model.

An Experimental Study of Synthesis and Characterization of Vanadium Oxide Thin Films Coated on Metallic Bipolar Plates for Cold-Start Enhancement of Fuel Cell Vehicles (연료전지 차량의 냉시동성 개선을 위한 금속 분리판 표면의 바나듐 산화물 박막 제조 및 특성 분석에 관한 연구)

  • Jung, Hye-Mi;Noh, Jung-Hun;Im, Se-Joon;Lee, Jong-Hyun;Ahn, Byung-Ki;Um, Suk-Kee
    • Transactions of the Korean Society of Mechanical Engineers B
    • /
    • v.35 no.6
    • /
    • pp.585-592
    • /
    • 2011
  • The enhancement of the cold-start capability of polymer electrolyte fuel cells is of great importance for the durability and reliability of fuel-cell vehicles. In this study, vanadium oxide films deposited onto the flat surface of metallic bipolar plates were synthesized to investigate their feasibility as an efficient self-heating source to expedite the temperature rise during startup at subzero temperatures. Samples were prepared by dip-coating via the hydrolytic sol-gel route, and the chemical compositions and microstructures of the films were characterized by X-ray diffraction, X-ray photoelectron spectroscopy, and field-emission scanning electron microscopy. In addition, the electrical resistance hysteresis loop of the films was measured over a temperature range from -20 °C to 80 °C using a four-terminal technique. Experimentally, it was found that the thermal energy (Joule heating) resulting from self-heating of the films was sufficient to provide the substantial amount of energy required for thawing at subzero temperatures.

Development of a method for urban flooding detection using unstructured data and deep learning (비정형 데이터와 딥러닝을 활용한 내수침수 탐지기술 개발)

  • Lee, Haneul;Kim, Hung Soo;Kim, Soojun;Kim, Donghyun;Kim, Jongsung
    • Journal of Korea Water Resources Association
    • /
    • v.54 no.12
    • /
    • pp.1233-1242
    • /
    • 2021
  • In this study, a model was developed to determine whether flooding has occurred using image data, a form of unstructured data. CNN-based VGG16 and VGG19 were used to develop the flood classification model. To build the model, images of flooded and non-flooded scenes were collected by web crawling. Since data collected this way contains noise, images irrelevant to this study were first deleted, and the remaining images were then resized to 224×224 for model input. In addition, image augmentation was performed by changing the angle of each image to increase diversity. Finally, training was performed using 2,500 flooding and 2,500 non-flooding images. In the model evaluation, the average classification performance was found to be 97%. In the future, if the model developed in this study is mounted on a CCTV control center system, the response to flood damage could be carried out quickly.
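The preprocessing steps described above (resize to 224×224, angle-based augmentation) can be sketched with plain NumPy; the nearest-neighbour resize and the 90° rotations are simplified stand-ins for whatever image library and rotation angles the study actually used.

```python
import numpy as np

def augment_by_rotation(img):
    # Angle-based augmentation: each collected image yields rotated
    # variants to diversify the training set (here, 0/90/180/270 deg).
    return [np.rot90(img, k) for k in range(4)]

def resize_to_224(img):
    # Nearest-neighbour resize to the 224x224 input expected by
    # VGG16/VGG19 (a minimal stand-in for a real image-library call).
    h, w = img.shape[:2]
    rows = np.arange(224) * h // 224
    cols = np.arange(224) * w // 224
    return img[rows][:, cols]

# A dummy crawled image; real inputs would come from the web-crawled set.
img = np.zeros((480, 640, 3), dtype=np.uint8)
batch = [resize_to_224(a) for a in augment_by_rotation(img)]
```

Each crawled image thus contributes four fixed-size training samples before being fed to the CNN.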

Review of Erosion and Piping in Compacted Bentonite Buffers Considering Buffer-Rock Interactions and Deduction of Influencing Factors (완충재-근계암반 상호작용을 고려한 압축 벤토나이트 완충재 침식 및 파이핑 연구 현황 및 주요 영향인자 도출)

  • Hong, Chang-Ho;Kim, Ji-Won;Kim, Jin-Seop;Lee, Changsoo
    • Tunnel and Underground Space
    • /
    • v.32 no.1
    • /
    • pp.30-58
    • /
    • 2022
  • The deep geological repository for high-level radioactive waste disposal is a multi barrier system comprised of engineered barriers and a natural barrier. The long-term integrity of the deep geological repository is affected by the coupled interactions between the individual barrier components. Erosion and piping phenomena in the compacted bentonite buffer due to buffer-rock interactions results in the removal of bentonite particles via groundwater flow and can negatively impact the integrity and performance of the buffer. Rapid groundwater inflow at the early stages of disposal can lead to piping in the bentonite buffer due to the buildup of pore water pressure. The physiochemical processes between the bentonite buffer and groundwater lead to bentonite swelling and gelation, resulting in bentonite erosion from the buffer surface. Hence, the evaluation of erosion and piping occurrence and its effects on the integrity of the bentonite buffer is crucial in determining the long-term integrity of the deep geological repository. Previous studies on bentonite erosion and piping failed to consider the complex coupled thermo-hydro-mechanical-chemical behavior of bentonite-groundwater interactions and lacked a comprehensive model that can consider the complex phenomena observed from the experimental tests. In this technical note, previous studies on the mechanisms, lab-scale experiments and numerical modeling of bentonite buffer erosion and piping are introduced, and the future expected challenges in the investigation of bentonite buffer erosion and piping are summarized.