• Title/Summary/Keyword: Data scalability problem (데이터 확장성 문제)


A study on the estimation of the credibility in an extended Buhlmann-Straub model (확장된 뷸만-스트라웁 모형에서 신뢰도 추정 연구)

  • Yi, Min-Jeong; Go, Han-Na; Choi, Seung-Kyoung; Lee, Eui-Yong
    • Journal of the Korean Data and Information Science Society / v.21 no.6 / pp.1181-1190 / 2010
  • When an insurer develops an insurance product, it is critical to determine reasonable premiums, which directly affect the insurer's profits. There are three methods to determine premiums. First, the insurer uses the premiums charged in cases similar to the current one. Second, the insurer calculates premiums based on the policyholder's past records. The last method combines the first with the second. Based on these methods, there are two major theories for determining premiums: Limited Fluctuation Credibility Theory, which is not based on statistical models, and Greatest Accuracy Credibility Theory, which is. Well-known methods derived from Greatest Accuracy Credibility Theory include the Buhlmann model and the Buhlmann-Straub model. In this paper, we extend the Buhlmann-Straub model to accommodate the fact that, in practice, variability grows with the number of data, and suggest a new non-parametric method to estimate the premiums. The suggested estimation method is applied to simulated data and compared with the existing estimation method.
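As a baseline for the extension described above, the classical non-parametric Buhlmann-Straub estimators can be sketched in plain Python. This is a minimal sketch of the standard model only; the paper's extension, in which variability grows with the number of data, is not reproduced here.

```python
def buhlmann_straub(X, m):
    """Classical non-parametric Buhlmann-Straub estimation.

    X[i][j]: loss ratio of policyholder i in period j
    m[i][j]: exposure of policyholder i in period j
    Returns credibility factors Z and credibility premiums.
    """
    r = len(X)
    m_i = [sum(row) for row in m]          # total exposure per policyholder
    m_tot = sum(m_i)
    # exposure-weighted individual and overall means
    Xbar_i = [sum(mij * xij for mij, xij in zip(m[i], X[i])) / m_i[i]
              for i in range(r)]
    Xbar = sum(m_i[i] * Xbar_i[i] for i in range(r)) / m_tot
    # within-risk variance estimate s^2
    num_s = sum(m[i][j] * (X[i][j] - Xbar_i[i]) ** 2
                for i in range(r) for j in range(len(X[i])))
    den_s = sum(len(X[i]) - 1 for i in range(r))
    s2 = num_s / den_s
    # between-risk variance estimate a (truncated at zero)
    num_a = sum(m_i[i] * (Xbar_i[i] - Xbar) ** 2 for i in range(r)) - (r - 1) * s2
    den_a = m_tot - sum(mi ** 2 for mi in m_i) / m_tot
    a = max(num_a / den_a, 0.0)
    # credibility factors Z_i = m_i / (m_i + s^2/a) and premiums
    k = s2 / a if a > 0 else float("inf")
    Z = [mi / (mi + k) for mi in m_i]
    premiums = [Z[i] * Xbar_i[i] + (1 - Z[i]) * Xbar for i in range(r)]
    return Z, premiums
```

A policyholder with stable, well-exposed experience gets a credibility factor near 1, so its premium stays close to its own mean rather than the collective mean.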

Handoff Algorithm based on One-way Trip Time for Wireless Mesh Network (Wireless Mesh Network을 위한 OTT 기반 핸드오프 알고리즘)

  • Park, Cha-Hee; Yoo, Myung-Sik
    • Journal of the Institute of Electronics Engineers of Korea TC / v.44 no.5 / pp.16-24 / 2007
  • A wireless mesh network extends the limited coverage of a conventional one-hop access point to a wider area. Providing seamless handoff across this wide coverage is recognized as an important issue: a long handoff delay may cause high packet loss and service disconnection, degrading network performance. The handoff delay comes mainly from the channel scanning process that gathers the information needed for the handoff decision, and from unnecessary handoffs caused by inaccurate decisions. In this paper, we propose a handoff algorithm based on one-way trip time (OTT) to expedite the handoff procedure. Compared to handoff algorithms based on received signal power, the proposed algorithm exploits OTT to obtain the necessary handoff information in a relatively shorter time and to reduce unnecessary handoffs. The performance of the proposed algorithm is evaluated through simulations, which verify that it effectively enhances handoff accuracy and therefore reduces the handoff delay and unnecessary handoffs.
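The decision logic described above can be sketched as follows. The relative hysteresis margin and the dictionary-based router representation are illustrative assumptions, not values or structures from the paper.

```python
def choose_router(ott, current, margin=0.2):
    """Pick the serving mesh router from smoothed OTT measurements.

    ott: dict mapping router id -> smoothed one-way trip time (ms)
    current: id of the currently serving router
    Hand off only if some candidate beats the current router's OTT by a
    relative `margin`, which suppresses ping-pong (unnecessary) handoffs.
    """
    best = min(ott, key=ott.get)
    if best != current and ott[best] < (1 - margin) * ott[current]:
        return best   # handoff to the clearly better router
    return current    # otherwise stay put
```

A small OTT improvement alone does not trigger a handoff; only a candidate that is better by the margin does, which is the anti-ping-pong behavior the abstract attributes to OTT-based decisions.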

Speed Optimization Design of 3D Medical Image Reconstruction System Based on PC (PC 기반의 3차원 의료영상 재구성 시스템의 고속화 설계)

  • Bae, Su-Hyeon; Kim, Seon-Ho; Yu, Seon-Guk
    • Journal of Biomedical Engineering Research / v.19 no.2 / pp.189-198 / 1998
  • 3D medical image reconstruction techniques are useful for figuring out complex 3D structures from a set of 2D sections. In this paper, a 3D medical image reconstruction system is built in a PC environment and implemented with modular programming in Visual C++ 4.2. The whole procedure consists of data preparation, gradient estimation, classification, shading, transformation, and ray casting & compositing. Three speed optimization techniques are used to accelerate the reconstruction: one reduces the number of rays cast when reconstructing the 3D image, another reduces the number of voxels to be calculated, and the last applies early ray termination. These techniques are experimented with and applied to implement the PC-based 3D medical image reconstruction system.
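The third speed-up above, early ray termination, can be sketched with standard front-to-back compositing; the sampling and classification stages are simplified away, and the opacity cutoff value is an illustrative assumption.

```python
def composite_ray(samples, opacity_cutoff=0.95):
    """Front-to-back compositing along one ray with early termination.

    samples: list of (color, alpha) pairs along the ray, front to back.
    Once accumulated opacity is near 1, samples behind are invisible,
    so the loop stops early instead of compositing them.
    """
    color, alpha = 0.0, 0.0
    for c, a in samples:
        color += (1.0 - alpha) * a * c   # add contribution of this sample
        alpha += (1.0 - alpha) * a       # accumulate opacity
        if alpha >= opacity_cutoff:      # early ray termination
            break
    return color, alpha
```

For a volume with many opaque voxels near the surface, most rays terminate after a few samples, which is why this optimization pays off on a PC-class machine.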


Improvements of Unit System for nationwide expansion of Early Warning Service for Agrometeorological Disaster (농업기상재해 조기경보시스템의 전국 확대를 위한 단위 시스템의 개선)

  • Park, Joo Hueon; Shin, Yong Soon; Shim, Kyo-Moon
    • Korean Journal of Agricultural and Forest Meteorology / v.23 no.4 / pp.356-365 / 2021
  • The nationwide expansion of the early warning service for agrometeorological disaster would require assessment of geographical and agricultural environmental characteristics by individual region. An efficient computing environment would facilitate such services across the study regions, which must handle various crops and varieties for many farms. In particular, the design of the computing environment has a considerable impact on the quality of agrometeorological services as the computing environment scales up with the extended service areas. The objectives of this study were to identify the issues with the current computing environment, under which services are provided region by region, and to seek solutions to these problems. Self-evaluation through experimental operation for about a year indicated that integrating the early warning service systems distributed over different regions would reduce redundant computing procedures and ensure efficient storage and comprehensive management of data. This suggests that the early warning service for agrometeorological disaster would remain stable even when the service areas are expanded to the national scale, contributing to higher-quality services for individual farmers.

An Efficient Clustering Protocol with Mode Selection (모드 선택을 이용한 효율적 클러스터링 프로토콜)

  • Aries, Kusdaryono; Lee, Young Han; Lee, Kyoung Oh
    • Annual Conference of KIPS / 2010.11a / pp.925-928 / 2010
  • Wireless sensor networks are composed of a large number of sensor nodes with limited energy resources. One critical issue in wireless sensor networks is how to gather sensed information in an energy-efficient way, since energy is limited. Clustering is a technique used to reduce energy consumption; it can improve the scalability and lifetime of a wireless sensor network. In this paper, we introduce a clustering protocol with mode selection (CPMS) for wireless sensor networks. Our scheme improves on the BCDCP (Base Station Controlled Dynamic Clustering Protocol) and BIDRP (Base Station Initiated Dynamic Routing Protocol) routing protocols. In CPMS, the base station constructs clusters and makes the head node with the highest residual energy send data to the base station. Furthermore, the mode selection method saves the energy of head nodes. The simulation results show that CPMS achieves a longer lifetime and more data message transmissions than existing clustering protocols in wireless sensor networks.
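The head-selection step described above, where the base station picks the node with the highest residual energy in each cluster, can be sketched as follows; the node and energy representations are illustrative assumptions.

```python
def select_heads(clusters):
    """Base-station-side cluster head selection by residual energy.

    clusters: list of dicts mapping node_id -> residual energy (joules).
    Returns one head per cluster: the node with the most remaining energy,
    so the costly head role rotates away from depleted nodes over rounds.
    """
    return [max(cluster, key=cluster.get) for cluster in clusters]
```

Re-running this each round as energies drain naturally rotates the head role, which is the mechanism behind the longer network lifetime reported above.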

Design and Development of Personal Healthcare System Based on IEEE 11073/HL7 Standards Using Smartphone (스마트폰을 이용한 IEEE 11073/HL7 기반의 개인 건강관리 시스템 설계 및 구현)

  • Nam, Jae-Choong; Seo, Won-Kyeong; Bae, Jae-Seung; Cho, You-Ze
    • The Journal of Korean Institute of Communications and Information Sciences / v.36 no.12B / pp.1556-1564 / 2011
  • The increased human life expectancy due to advances in medical techniques has led to social problems such as rapidly aging populations, increased medical expenses, and a lack of medical specialists. Thus, studies on improving the quality of life at the least expense have been conducted by incorporating advanced technologies, especially Personal Health Devices (PHDs), into the medical service market. However, compatibility and extensibility among manufacturers of PHDs have rarely been taken into account in their development, because most PHDs have been supported by individual medical organizations. Interoperability among medical organizations cannot be guaranteed because each organization uses a different message format. Therefore, in this paper, we developed an expansion module that enables commercially available non-standard PHDs to support IEEE 11073, and a smartphone-based manager that provides easy and comprehensive management of the data received and transmitted from each PHD over the IEEE 11073 standard. In addition, we designed and developed a u-health system that transmits the data collected in the manager, in the standard HL7 format, to medical centers for real-time medical service from every medical institution that supports this standard.

NoSQL-based Sensor Web System for Fine Particles Analysis Services (미세먼지 분석 서비스를 위한 NoSQL 기반 센서 웹 시스템)

  • Kim, Jeong-Joon; Kwak, Kwang-Jin; Park, Jeong-Min
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.19 no.2 / pp.119-125 / 2019
  • Fine particles have recently become a social problem: more people wear masks, and weather alerts and disaster notices have increased, while research and policy work are actively under way. Meteorologically, the biggest damage caused by fine particles comes with the inversion layer phenomenon. In this study, we designed a system that warns of fine particles by analyzing the inversion layer and wind direction. The proposed weather information system achieves scalability and efficient parallel processing by using the OGC Sensor Web Enablement framework for sensor control and data exchange together with NoSQL storage.
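The inversion-layer check implied above can be sketched as follows. Under normal conditions temperature falls with altitude, so a rise across adjacent layers signals an inversion that can trap fine particles near the ground; the profile representation is an illustrative assumption.

```python
def has_inversion(profile):
    """Detect a temperature inversion in a vertical sounding.

    profile: list of (altitude_m, temperature_c), sorted by altitude.
    Returns True if temperature increases across any adjacent pair of
    layers, i.e. an inversion layer is present.
    """
    return any(t_upper > t_lower
               for (_, t_lower), (_, t_upper) in zip(profile, profile[1:]))
```

In the system described above, a check like this on incoming sensor profiles, combined with the wind direction, would drive the fine-particle warning.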

Current Status of ID Management Technology along the Evolution of the Metaverse (메타버스의 진화에 따른 ID 관리 기술 현황)

  • Jeong, Soo Yong; Seo, Chang Ho; Cho, In-Man; Jin, Seung-Hun; Kim, Soo Hyung
    • Review of KIISC / v.32 no.4 / pp.49-59 / 2022
  • The metaverse, a compound of "meta" (virtual, transcendent) and "universe," can be defined as a digital world that transcends the real one. Starting from the construction of a digital world parallel to reality, the metaverse is evolving into a digital world that interacts with reality, offering high immersion based on technologies such as blockchain and artificial intelligence (AI) and on high-performance wearable devices. Accordingly, the present metaverse is expanding into a concept that encompasses the various services built on existing digital worlds, and it will ultimately develop into a hyper-real world with no boundary between reality and the digital world. Behind this development, many security technologies are required, and concerns about personal privacy and security threats are growing. In particular, to provide high immersion, personal information including a wider variety of biometric information than before will be used, and such data can serve as an identity (ID) that identifies an individual. Security threats to personal information will therefore become more diverse, and the need for ID management technology that enables the safe use of personal information will grow. Accordingly, this paper presents the concept and evolution of the metaverse, and surveys the current status of ID management technology by analyzing the ID-related security threats and countermeasures that diversify as the metaverse evolves.

Performance analysis of Frequent Itemset Mining Technique based on Transaction Weight Constraints (트랜잭션 가중치 기반의 빈발 아이템셋 마이닝 기법의 성능분석)

  • Yun, Unil; Pyun, Gwangbum
    • Journal of Internet Computing and Services / v.16 no.1 / pp.67-74 / 2015
  • In recent years, frequent itemset mining that considers the importance of each item has been intensively studied as one of the important issues in the data mining field. According to the strategy for utilizing item importance, itemset mining approaches are classified as follows: weighted frequent itemset mining, frequent itemset mining using transactional weights, and utility itemset mining. In this paper, we perform an empirical analysis of frequent itemset mining algorithms based on transactional weights. These algorithms compute transactional weights from the weight of each item in large databases, and discover weighted frequent itemsets on the basis of item frequency and the weight of each transaction. Consequently, database analysis can reveal the importance of a certain transaction, because a transaction's weight is high if it contains many items with high weights. We analyze the advantages and disadvantages, and compare the performance, of the best-known algorithms in the field of frequent itemset mining based on transactional weights. As a representative of frequent itemset mining using transactional weights, WIS introduced the concept and strategies of transactional weights. In addition, there are various state-of-the-art algorithms, WIT-FWIs, WIT-FWIs-MODIFY, and WIT-FWIs-DIFF, for extracting itemsets with weight information. To mine weighted frequent itemsets efficiently, these three algorithms use a special lattice-like data structure called the WIT-tree. They do not need an additional database scan after the WIT-tree is constructed, since each node of the WIT-tree holds item information such as the item and transaction IDs.
In particular, traditional algorithms perform many database scans to mine weighted itemsets, whereas the WIT-tree-based algorithms avoid this overhead by reading the database only once. Additionally, the algorithms generate each new itemset of length N+1 from two different itemsets of length N. To discover new weighted itemsets, WIT-FWIs combines itemsets by using the information of transactions that contain all of them; WIT-FWIs-MODIFY has a unique feature that reduces the operations for calculating the frequency of a new itemset; and WIT-FWIs-DIFF utilizes the difference of two itemsets. To compare and analyze the performance of the algorithms in various environments, we use real datasets of two types (dense and sparse) and measure runtime and maximum memory usage. Moreover, a scalability test evaluates the stability of each algorithm as the database size changes. As a result, WIT-FWIs and WIT-FWIs-MODIFY show the best performance on the dense dataset, while on the sparse dataset WIT-FWIs-DIFF is more efficient than the other algorithms. Compared to the WIT-tree-based algorithms, WIS, based on the Apriori technique, has the worst efficiency because it requires far more computations than the others on average.
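The transactional-weight idea above can be sketched with a brute-force baseline (this is not the WIT-tree structure itself): each transaction's weight is the mean of its item weights, and an itemset's weighted support is the sum of the weights of the transactions containing it.

```python
from itertools import combinations

def weighted_frequent_itemsets(transactions, item_weights, min_wsup):
    """Brute-force mining of weighted frequent itemsets.

    transactions: list of item sets
    item_weights: dict item -> weight
    min_wsup: minimum weighted support threshold
    """
    # transaction weight = mean of the weights of its items
    tw = [sum(item_weights[i] for i in t) / len(t) for t in transactions]
    items = sorted({i for t in transactions for i in t})
    result = {}
    for k in range(1, len(items) + 1):
        found = False
        for itemset in combinations(items, k):
            # weighted support: total weight of transactions containing itemset
            wsup = sum(w for t, w in zip(transactions, tw) if set(itemset) <= t)
            if wsup >= min_wsup:
                result[itemset] = wsup
                found = True
        if not found:
            break  # weighted support can only shrink for supersets
    return result
```

The WIT-tree-based algorithms compute the same answer but avoid both the repeated containment scans and the candidate enumeration that make this baseline (like Apriori-style WIS) slow on large databases.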

A comparison of imputation methods using nonlinear models (비선형 모델을 이용한 결측 대체 방법 비교)

  • Kim, Hyein; Song, Juwon
    • The Korean Journal of Applied Statistics / v.32 no.4 / pp.543-559 / 2019
  • Data often include missing values for various reasons. If the missing data mechanism is not MCAR, analysis based only on fully observed cases may cause estimation bias and decrease the precision of the estimates, since partially observed cases are excluded. Missing values cause especially serious problems when data include many variables. Many imputation techniques have been suggested to overcome this difficulty. However, imputation methods using parametric models may fit poorly to real data that do not satisfy the model assumptions. In this study, we review imputation methods using nonlinear models, such as kernel, resampling, and spline methods, which are robust to model assumptions. In addition, we suggest utilizing imputation classes to improve imputation accuracy, or adding random errors to correctly estimate the variance of the estimates, in nonlinear imputation models. The performance of imputation methods using nonlinear models is compared under various simulated data settings. Simulation results indicate that the performance of the imputation methods changes with the data settings; however, imputation based on kernel regression or the penalized spline performs better in most situations. Utilizing imputation classes or adding random errors improves the performance of imputation methods using nonlinear models.
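The kernel-regression imputation reviewed above can be sketched with a Nadaraya-Watson estimator; the Gaussian kernel and the fixed bandwidth are illustrative assumptions, and the random-error and imputation-class refinements from the abstract are omitted.

```python
import math

def kernel_impute(x, y, bandwidth=1.0):
    """Impute missing responses by Nadaraya-Watson kernel regression.

    x: fully observed covariate values
    y: responses, with None marking a missing value
    Each missing y is replaced by a Gaussian-kernel weighted average of
    the observed responses, weighted by closeness in x.
    """
    obs = [(xi, yi) for xi, yi in zip(x, y) if yi is not None]

    def nw(x0):
        # Gaussian kernel weights centered at the target covariate x0
        w = [math.exp(-0.5 * ((x0 - xi) / bandwidth) ** 2) for xi, _ in obs]
        return sum(wi * yi for wi, (_, yi) in zip(w, obs)) / sum(w)

    return [yi if yi is not None else nw(xi) for xi, yi in zip(x, y)]
```

Because the fit is purely local, this remains sensible when the true relation between x and y is nonlinear, which is the robustness-to-model-assumptions point the abstract makes.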