• Title/Summary/Keyword: Amount of Information


Performance Analysis of Intelligence Pain Nursing Intervention U-health System (지능형 통증 간호중재 유헬스 시스템 성능분석)

  • Jung, Hoill;Hyun, Yoo;Chung, Kyung-Yong;Lee, Young-Ho
    • The Journal of the Korea Contents Association
    • /
    • v.13 no.4
    • /
    • pp.1-7
    • /
    • 2013
  • A personalized recommendation system recommends goods matched to each user's taste using automated information-filtering technology. Collaborative filtering, one such method, identifies groups of users that exhibit similar patterns. It is therefore possible to estimate pain intensity from the data of patients with similar past patterns, and to extract related conditions according to the similarity among classified patients. A representative approach, which uses the Pearson correlation coefficient to extract similarity weights, may produce inaccurate results when the sample data is small; when the sample data is large, it cannot produce results quickly because the amount of computation grows quadratically. In this paper, the effectiveness of the intelligence pain nursing intervention u-health system, implemented by comparing the scale and similarity groups of the sample data to extract significant data, is verified through MAE and Rank-scoring evaluation. Based on these results, basic data and guidelines on patients' pain as recognized by nurses can be provided, which leads to improved patient welfare.
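As a rough illustration of the similarity weight the abstract discusses, the sketch below computes a Pearson correlation between two patients' co-rated score vectors; the function name and scores are hypothetical, not from the paper:

```python
import math

def pearson_similarity(a, b):
    """Pearson correlation between two co-rated score vectors of equal length."""
    n = len(a)
    mean_a = sum(a) / n
    mean_b = sum(b) / n
    num = sum((x - mean_a) * (y - mean_b) for x, y in zip(a, b))
    den = (math.sqrt(sum((x - mean_a) ** 2 for x in a)) *
           math.sqrt(sum((y - mean_b) ** 2 for y in b)))
    return num / den if den else 0.0  # flat vectors get zero weight

# Two patients whose pain scores move in lockstep get maximal similarity.
print(round(pearson_similarity([3, 5, 7], [4, 6, 8]), 6))  # → 1.0
```

Computing this weight over every pair of patients is what produces the quadratic growth in computation that the paper sets out to mitigate.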

An Analysis of Global Solar Radiation using the GWNU Solar Radiation Model and Automated Total Cloud Cover Instrument in Gangneung Region (강릉 지역에서 자동 전운량 장비와 GWNU 태양 복사 모델을 이용한 지표면 일사량 분석)

  • Park, Hye-In;Zo, Il-Sung;Kim, Bu-Yo;Jee, Joon-Bum;Lee, Kyu-Tae
    • Journal of the Korean earth science society
    • /
    • v.38 no.2
    • /
    • pp.129-140
    • /
    • 2017
  • Global solar radiation was calculated in this research using ground-based measurement data, meteorological satellite data, and the GWNU (Gangneung-Wonju National University) solar radiation model. We also analyzed the accuracy of the GWNU model by comparing it with the observed solar radiation according to the total cloud cover. Our analysis was based on the global solar radiation at the GWNU radiation site in 2012; observation data such as temperature, pressure, humidity, and aerosol; total ozone amount data from the Ozone Monitoring Instrument (OMI) sensor; and Skyview data used for evaluating the cloud mask and total cloud cover. On clear days when the total cloud cover was 0 tenths, the global solar radiation calculated with the GWNU model showed a high correlation coefficient of 0.98 against the observed solar radiation, but the root mean square error (RMSE) was relatively high at $36.62Wm^{-2}$. The Skyview equipment was unable to determine meteorological conditions such as thin cloud, mist, and haze. On cloudy days, regression equations were applied to the radiation model to correct for the effect of clouds; the correlation coefficient was 0.92, but the RMSE was high at $99.50Wm^{-2}$. For more accurate analysis, additional study of various elements, including shielding of the direct radiation component and cloud optical thickness, is required. The results of this study can be useful in areas where global solar radiation is not observed, by calculating global solar radiation on a per-minute or hourly basis.
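The model-versus-observation comparison above rests on the standard RMSE statistic; a minimal sketch, using made-up irradiance values rather than the paper's data, is:

```python
import math

def rmse(model, observed):
    """Root mean square error between modeled and observed irradiance (W m^-2)."""
    return math.sqrt(sum((m - o) ** 2 for m, o in zip(model, observed)) / len(model))

# Hypothetical per-minute global solar radiation values (W m^-2).
modeled  = [500.0, 620.0, 710.0]
measured = [510.0, 600.0, 700.0]
print(rmse(modeled, measured))
```

A low RMSE alongside a high correlation coefficient, as reported for clear days, indicates that the model tracks both the shape and the magnitude of the observed radiation.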

Accelerated Convolution Image Processing by Using Look-Up Table and Overlap Region Buffering Method (Loop-Up Table과 필터 중첩영역 버퍼링 기법을 이용한 컨벌루션 영상처리 고속화)

  • Kim, Hyun-Woo;Kim, Min-Young
    • Journal of the Institute of Electronics Engineers of Korea SC
    • /
    • v.49 no.4
    • /
    • pp.17-22
    • /
    • 2012
  • Convolution filtering methods have been widely applied in various digital signal processing fields for image blurring, sharpening, edge detection, noise reduction, etc. According to the application purpose, the filter mask size or shape and the mask values are selected in advance, and the designed filter is applied to the input image for convolution processing. In this paper, we propose an acceleration method for convolution processing that uses a two-dimensional look-up table (LUT) and an overlap-region buffering technique. First, based on the fixed convolution mask values, the multiplications between the 8- or 10-bit pixel values of the input image and the filter mask values are performed a priori, and the results stored in the LUT are referenced during the convolution process. Second, based on the symmetric structure of convolution filters, the inherently duplicated operation regions are analyzed, and the operation results saved in a predefined memory buffer at the previous step are recalled and reused at the current step. Through this buffering, unnecessary repetition of filter operations on the same regions is minimized in a sequential manner. Because the proposed algorithms minimize the amount of computation needed for the convolution operation, they work well in operating environments that use embedded systems with limited computational resources, as well as in environments using general personal computers. A series of experiments under various conditions verifies the effectiveness and usefulness of the proposed methods.
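The first idea, precomputing pixel-times-coefficient products so the inner loop becomes pure table lookups, can be sketched in one dimension as follows (the kernel and signal are illustrative; the paper works on 2-D images):

```python
def build_lut(mask):
    """Precompute coefficient * pixel products for all 256 possible 8-bit pixel values."""
    return [[c * p for p in range(256)] for c in mask]

def convolve1d(signal, mask, lut):
    """Valid-mode 1-D convolution where each multiply is replaced by a LUT lookup."""
    half = len(mask) // 2
    out = []
    for i in range(half, len(signal) - half):
        acc = 0
        for k in range(len(mask)):
            acc += lut[k][signal[i - half + k]]  # table lookup instead of multiply
        out.append(acc)
    return out

mask = [1, 2, 1]            # simple blurring kernel (hypothetical choice)
lut = build_lut(mask)
print(convolve1d([10, 20, 30, 40], mask, lut))  # → [80, 120]
```

The LUT costs 256 entries per mask coefficient, which is cheap even on embedded targets, and the overlap-buffering idea would further skip accumulations already computed for the previous output pixel.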

Call-Site Tracing-based Shared Memory Allocator for False Sharing Reduction in DSM Systems (분산 공유 메모리 시스템에서 거짓 공유를 줄이는 호출지 추적 기반 공유 메모리 할당 기법)

  • Lee, Jong-Woo
    • Journal of KIISE:Computer Systems and Theory
    • /
    • v.32 no.7
    • /
    • pp.349-358
    • /
    • 2005
  • False sharing results from the co-location of unrelated data in the same unit of memory coherency, and is one source of unnecessary overhead that does not help maintain memory coherency in multiprocessor systems. Moreover, the damage caused by false sharing grows in proportion to the granularity of memory coherency. To reduce false sharing in a page-based DSM system, unrelated data objects with different access patterns must be allocated to separate shared pages. In this paper we propose a call-site tracing-based shared memory allocator, CSTallocator for short. CSTallocator expects that data objects requested from different call-sites may have different access patterns in the future. It therefore places data objects requested from different call-sites into separate shared pages, so that data objects requested from the same call-site tend to be gathered into the same shared pages. We use execution-driven simulation of real parallel applications to evaluate the effectiveness of CSTallocator. Our observations show that, by using CSTallocator, a considerable number of false sharing misses can be additionally reduced in comparison with existing techniques.

Page Logging System for Web Mining Systems (웹마이닝 시스템을 위한 페이지 로깅 시스템)

  • Yun, Seon-Hui;O, Hae-Seok
    • The KIPS Transactions:PartC
    • /
    • v.8C no.6
    • /
    • pp.847-854
    • /
    • 2001
  • The Web continues to grow at a fast rate in both the volume of traffic and the size and complexity of Web sites. Along with this growth, the complexity of tasks such as Web site design, Web server design, and even simple navigation through a Web site has increased. An important input to these design tasks is the analysis of how a Web site is being used. This paper proposes a Page Logging System (PLS) that reliably identifies the user sessions required by Web mining systems. PLS consists of a Page Logger that acquires all of a user's page accesses, a Log Processor that produces user sessions from these data, and statements that incorporate a call to the Page Logger applet. The proposed PLS eliminates several preprocessing tasks that consume a great deal of time and effort in Web mining systems. In particular, it simplifies the transaction identification phase by directly acquiring the amount of time a user stays on a page. PLS also solves the problems that local cache hits and proxy IPs create when identifying user sessions from Web server logs.
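For contrast, the conventional server-log approach that PLS replaces infers sessions with a timeout heuristic, roughly as sketched below (the 30-minute cutoff and data are illustrative); PLS avoids this guesswork by measuring page stay time directly in the browser:

```python
from datetime import datetime, timedelta

SESSION_TIMEOUT = timedelta(minutes=30)  # common heuristic cutoff

def sessionize(page_hits):
    """Split (timestamp, url) hits into sessions: a gap longer than
    SESSION_TIMEOUT between consecutive hits starts a new session."""
    sessions, current, last = [], [], None
    for ts, url in sorted(page_hits):
        if last is not None and ts - last > SESSION_TIMEOUT:
            sessions.append(current)
            current = []
        current.append(url)
        last = ts
    if current:
        sessions.append(current)
    return sessions

hits = [(datetime(2001, 1, 1, 9, 0), "/a"),
        (datetime(2001, 1, 1, 9, 5), "/b"),
        (datetime(2001, 1, 1, 10, 0), "/c")]
print(sessionize(hits))  # → [['/a', '/b'], ['/c']]
```

Cached pages never reach the server log and proxies collapse many users into one IP, which is why this heuristic misidentifies sessions and why client-side logging is attractive.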

A Dynamic Buffer Allocation Scheme in Video-on-Demand System (주문형 비디오 시스템에서의 동적 버퍼 할당 기법)

  • Lee, Sang-Ho;Moon, Yang-Sae;Whang, Kyu-Young;Cho, Wan-Sup
    • Journal of KIISE:Computer Systems and Theory
    • /
    • v.28 no.9
    • /
    • pp.442-460
    • /
    • 2001
  • In video-on-demand (VOD) systems it is important to minimize initial latency and memory requirements. Minimizing initial latency enables the system to provide services with short response times, and minimizing memory requirements enables the system to serve more concurrent user requests with the same amount of memory. In VOD systems, since initial latency and memory requirements increase with the buffer size allocated to user requests, the allocated buffer size must be minimized. The existing static buffer allocation scheme, however, determines the buffer size on the assumption that the system is in a fully loaded state. Thus, when the system is only partially loaded, the scheme allocates unnecessarily large buffers to user requests. This paper proposes a dynamic buffer allocation scheme that allocates the minimum buffer size to user requests in the partially loaded state as well as in the fully loaded state. The scheme dynamically determines the buffer size based on the number of user requests in service and the number of user requests arriving while current requests are being serviced. In addition, through analyses and simulations, this paper validates that dynamic buffer allocation outperforms static buffer allocation in initial latency and in the number of concurrent user requests that can be supported. Our simulation results show that, compared with the static buffer allocation scheme, the dynamic buffer allocation scheme reduces average initial latency by 29%~65% and, in systems with several disks, increases the average number of concurrent user requests by 48%~68%. These results show that the dynamic buffer allocation scheme significantly improves the performance and reduces the capacity requirements of VOD systems.
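The contrast between the two schemes can be sketched with a deliberately simplified sizing rule; the formula below is illustrative only and is not the paper's actual derivation:

```python
def static_buffer_size(total_memory, max_requests):
    """Static scheme: size buffers for the fully loaded state, always."""
    return total_memory // max_requests

def dynamic_buffer_size(total_memory, active, expected_arrivals):
    """Toy dynamic scheme: share memory only among requests in service plus
    those expected to arrive during the current service round."""
    return total_memory // max(active + expected_arrivals, 1)

MEM = 1 << 30  # 1 GiB of buffer memory (hypothetical)
print(static_buffer_size(MEM, 64))            # sized for 64 streams regardless of load
print(dynamic_buffer_size(MEM, 10, 6))        # 16 streams actually expected
```

Under light load the dynamic rule hands out buffers sized for the streams actually present, which is the intuition behind the reported latency and concurrency gains.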

Topographical Analysis of Landslide in Mt. Woomyeon Using DSM (DSM 자료를 이용한 우면산 산사태 지형 분석)

  • Kim, Gihong;Choi, Hyun
    • Journal of the Korea Academia-Industrial cooperation Society
    • /
    • v.21 no.12
    • /
    • pp.60-66
    • /
    • 2020
  • Torrential rain causes landslide damage every year. In particular, the 2011 downpour caused landslides at numerous points throughout Mt. Woomyeon, resulting in considerable damage to people and property. Because it occurred in an urban area, the case became a major social issue and received public attention, and measures for multilateral investigation and recovery were quickly implemented. Landslides caused by heavy rain are greatly affected by the rainfall at the time; landslides starting in the upper part erode the flow path and grow in size, causing severe damage to the lower part. This study selected the rural village area among the damaged areas of Mt. Woomyeon and analyzed the change in terrain profile before and after the landslide using DSM data obtained from airborne LiDAR. The area can be divided into three hydrological basins. For each basin, we analyzed the average slope of each part of the flow path as well as the erosion and deposition caused by soil flow. From the analysis, the total amount of soil discharged from Jeonwon village was estimated at 15,300㎥. Such GIS-based field data can serve as basic information for predicting damage in similar disasters and can help in analyzing the results of various debris-flow simulations.
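The erosion/deposition analysis amounts to differencing the before and after DSM grids cell by cell; a minimal sketch with tiny hypothetical grids (2 m cells, elevations in meters) is:

```python
def volume_change(dsm_before, dsm_after, cell_area):
    """Sum per-cell elevation change times cell area: cells that lost
    elevation contribute to erosion, cells that gained contribute to
    deposition (volumes in m^3 for elevations in m and area in m^2)."""
    eroded = deposited = 0.0
    for row_b, row_a in zip(dsm_before, dsm_after):
        for zb, za in zip(row_b, row_a):
            dv = (za - zb) * cell_area
            if dv < 0:
                eroded += -dv
            else:
                deposited += dv
    return eroded, deposited

before = [[10.0, 11.0], [12.0, 13.0]]   # pre-event DSM (hypothetical)
after  = [[ 9.5, 11.0], [12.0, 13.4]]   # post-event DSM (hypothetical)
print(volume_change(before, after, cell_area=4.0))  # eroded 2 m^3, deposited ~1.6 m^3
```

Summing the eroded volume along a basin's flow path is how a figure like the 15,300㎥ total for Jeonwon village would be assembled from the LiDAR DSMs.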

Analysis of Intrinsic Patterns of Time Series Based on Chaos Theory: Focusing on Roulette and KOSPI200 Index Future (카오스 이론 기반 시계열의 내재적 패턴분석: 룰렛과 KOSPI200 지수선물 데이터 대상)

  • Lee, HeeChul;Kim, HongGon;Kim, Hee-Woong
    • Knowledge Management Research
    • /
    • v.22 no.4
    • /
    • pp.119-133
    • /
    • 2021
  • As large amounts of data are produced in every industry, many time series pattern prediction studies are being conducted to support quick business decisions. However, the uncertainty inherent in nonlinear time series data limits the prediction of specific patterns, which makes strategic decision-making in corporate management difficult. In recent decades, various studies have attempted to predict time series data following irregular random-walk models, using data suited to industrial purposes such as demand/supply and financial markets, but it remains difficult to identify specific rules and thereby achieve sustainable corporate objectives. In this study, prediction results for roulette data and financial market data were compared and analyzed using chaos analysis, and meaningful results were derived. The study also confirmed that chaos analysis is useful for finding new approaches to analyzing time series data. By comparing the characteristics of roulette games with the time series of KOSPI200 index futures, we found that predictive power can be improved once a trend is confirmed, which is meaningful for determining whether nonlinear time series data with high uncertainty contain a specific pattern.
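A standard tool in chaos analysis of this kind is the largest Lyapunov exponent, which separates chaotic from regular dynamics; the textbook logistic-map example below illustrates the idea (it is a generic demonstration, not the paper's method or data):

```python
import math

def logistic_lyapunov(r, x0=0.4, n=10000, burn=100):
    """Largest Lyapunov exponent of the logistic map x -> r*x*(1-x),
    estimated as the average log of the map's derivative along the orbit.
    Positive => chaotic (sensitive to initial conditions), negative => regular."""
    x = x0
    for _ in range(burn):          # discard transient behavior
        x = r * x * (1 - x)
    s = 0.0
    for _ in range(n):
        # |f'(x)| = |r*(1-2x)|; tiny epsilon guards against log(0)
        s += math.log(abs(r * (1 - 2 * x)) + 1e-300)
        x = r * x * (1 - x)
    return s / n

print(logistic_lyapunov(4.0) > 0)  # chaotic regime
print(logistic_lyapunov(2.5) < 0)  # stable fixed point
```

A positive exponent means nearby trajectories diverge exponentially, so long-horizon point prediction is hopeless even though short-term structure (a confirmed trend) may still be exploitable, matching the abstract's conclusion.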

Policies and Measures for Managing Personal Digital Legacy (개인의 사후 디지털 기록관리를 위한 정책과 방안)

  • Kim, Jinhong;Rieh, Hae-young
    • The Korean Journal of Archival Studies
    • /
    • no.72
    • /
    • pp.165-203
    • /
    • 2022
  • Many people create records in digital space, and the amount of digital records left behind after an individual dies has increased. The digital records left by the deceased differ from documentary heritage that has physical substance. In many cases these records do not belong solely to the deceased, and many of the deceased did not explicitly disclose their online accounts or how their digital records should be disposed of during their lifetime, which can create inheritance problems for the bereaved family. In addition, digital records may be neglected or deleted after a person's death due to software problems, a platform's terms of use, account deletion by the bereaved family, and so on. As a result, everyday records, which are important clues to the social conditions of the time, are easily lost. Several studies have revealed that individuals are interested in preserving their digital records but, not knowing how to do so, leave them in benign neglect. For these reasons, attention must be paid to personal digital records and personal digital legacy, and related policies and plans must be prepared. Accordingly, this study analyzes problems related to the management of digital records after an individual's death in terms of laws and systems, the status and policies of platforms and industries, and the status of personal records management. Various solutions are suggested, including the enactment of a personal digital records management act, explicit platform policies for individuals' post-mortem records, digital records management plans for archival institutions, preemptive management of one's own records by individuals, and a method for writing wills that cover digital account information.

Deep-Learning Seismic Inversion using Laplace-domain wavefields (라플라스 영역 파동장을 이용한 딥러닝 탄성파 역산)

  • Jun Hyeon Jo;Wansoo Ha
    • Geophysics and Geophysical Exploration
    • /
    • v.26 no.2
    • /
    • pp.84-93
    • /
    • 2023
  • The supervised learning-based deep-learning seismic inversion techniques have demonstrated successful performance in synthetic data examples targeting small-scale areas. The supervised learning-based deep-learning seismic inversion uses time-domain wavefields as input and subsurface velocity models as output. Because the time-domain wavefields contain various types of wave information, the data size is considerably large. Therefore, research applying supervised learning-based deep-learning seismic inversion trained with a significant amount of field-scale data has not yet been conducted. In this study, we predict subsurface velocity models using Laplace-domain wavefields as input instead of time-domain wavefields to apply a supervised learning-based deep-learning seismic inversion technique to field-scale data. Using Laplace-domain wavefields instead of time-domain wavefields significantly reduces the size of the input data, thereby accelerating the neural network training, although the resolution of the results is reduced. Additionally, a large grid interval can be used to efficiently predict the velocity model of the field data size, and the results obtained can be used as the initial model for subsequent inversions. The neural network is trained using only synthetic data by generating a massive synthetic velocity model and Laplace-domain wavefields of the same size as the field-scale data. In addition, we adopt a towed-streamer acquisition geometry to simulate a marine seismic survey. Testing the trained network on numerical examples using the test data and a benchmark model yielded appropriate background velocity models.
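The data reduction that makes this approach feasible comes from the Laplace transform itself: damping and summing a whole time-domain trace collapses it to one scalar per damping constant. A minimal numerical sketch (trace, sampling, and damping values are hypothetical) is:

```python
import math

def laplace_wavefield(trace, dt, s):
    """Discrete approximation of the Laplace-transformed wavefield,
    U(s) = integral of u(t) * exp(-s*t) dt ~= sum_n u[n] * exp(-s*n*dt) * dt.
    One scalar per damping constant s replaces the entire trace."""
    return sum(u * math.exp(-s * n * dt) for n, u in enumerate(trace)) * dt

# Hypothetical 1 s trace sampled every 4 ms with constant unit amplitude.
trace = [1.0] * 250
dt = 0.004
print(laplace_wavefield(trace, dt, s=5.0))  # close to (1 - e^-5)/5 analytically
```

Because each shot gather reduces to a handful of damped sums, the network input shrinks by orders of magnitude relative to time-domain wavefields, at the cost of resolution, exactly the trade-off the abstract describes.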