• Title/Summary/Keyword: Personal Data Storage (개인 데이터 저장)


Building a Log Framework for Personalization Based on a Java Open Source (JAVA 오픈소스 기반의 개인화를 지원하는 Log Framework 구축)

  • Sin, Choongsub;Park, Seog
    • KIISE Transactions on Computing Practices
    • /
    • v.21 no.8
    • /
    • pp.524-530
    • /
    • 2015
  • A log is used to monitor a system and detect its issues during the development and operation of a program. Based on the log, system developers and operators can trace the cause of an issue. In the development phase, tracing a log is relatively simple because only a small number of people, such as developers and testers, use the system. However, it becomes difficult once many people use the system in the operation phase; in many cases, tracing is abandoned because the relevant log entries cannot be isolated. This study proposes a simplified way of tracing a log during system operation. The purpose is to create a log at run time based on an ID/IP, using features provided by Logback. The system saves the IDs/IPs of tracked users in a DB and loads them into memory once the WAS starts running. Before an online service runs, an Interceptor is executed to decide whether to write to a separate log file; the service requested by a tracked user is then recorded in its own log file. Although every service must pass through the Interceptor, the overhead is insignificant since the check is a simple in-JVM operation.
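The Interceptor's per-request decision described above can be sketched in a few lines. This is a hypothetical Python analogue, not the authors' Logback/Java implementation: the `TRACED` set stands in for the ID/IP table loaded from the DB at WAS startup, and `log_target` stands in for the Interceptor's routing decision.

```python
# Traced IDs/IPs, loaded once into memory at startup
# (the paper loads these from a DB when the WAS boots).
TRACED = {"user42", "10.0.0.7"}

def log_target(user_id: str, ip: str, traced: set) -> str:
    """Per-request decision made by the interceptor: route a traced
    user's requests to a separate log file, everyone else to the
    shared application log."""
    if user_id in traced or ip in traced:
        return f"trace_{user_id}.log"   # separate file for this user
    return "application.log"

print(log_target("user42", "192.168.0.1", TRACED))  # traced by ID
print(log_target("guest", "10.0.0.7", TRACED))      # traced by IP
print(log_target("guest", "192.168.0.1", TRACED))   # not traced
```

Because the lookup is a set-membership test in memory, running it on every request adds negligible load, which matches the paper's observation about the Interceptor.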

Hardware Design of High Performance HEVC Deblocking Filter for UHD Videos (UHD 영상을 위한 고성능 HEVC 디블록킹 필터 설계)

  • Park, Jaeha;Ryoo, Kwangki
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.19 no.1
    • /
    • pp.178-184
    • /
    • 2015
  • This paper proposes a hardware architecture for a high-performance deblocking filter (DBF) in High Efficiency Video Coding for UHD (Ultra High Definition) videos. The proposed architecture reduces processing time with a 4-stage pipeline containing two filters and a parallel boundary strength module, and supports low-power operation through clock gating in the pipeline. A segmented memory architecture resolves the hazard that arises when a single-port SRAM is accessed, and the proposed filtering order shortens the delay incurred when storing data into the single-port SRAM at the pre-processing stage. The proposed DBF hardware was designed in Verilog HDL and implemented with 22k logic gates as a result of synthesis using the TSMC 0.18um CMOS standard cell library. It can process UHD 8K (7680×4320) samples at 60 fps at an operating frequency of 150 MHz, and the maximum operating frequency is 285 MHz. Analysis shows that the proposed architecture improves the operation cycle count for one coding unit by 32% over the previous design.

Implementation of An Automatic Authentication System Based on Patient's Situations and Its Performance Evaluation (환자상황 기반의 자동인증시스템 구축 및 성능평가)

  • Ham, Gyu-Sung;Joo, Su-Chong
    • Journal of Internet Computing and Services
    • /
    • v.21 no.4
    • /
    • pp.25-34
    • /
    • 2020
  • In the current medical information system, biometric data generated by IoT devices or medical equipment connected to a patient can be stored in a medical information server and monitored at the same time. In addition, the patient's biometric data, medical information, and personal information are easily accessible after simple ID/PW authentication via the mobile terminal of the medical staff. However, this way of accessing medical information needs improvement, both to protect the patient's personal information and to provide fast authentication for first aid. In this paper, we implemented an automatic authentication system based on the patient's situation and evaluated its performance. The patient's situation is graded into normal and emergency, and is determined in real time from the biometric data arriving from the ward. If the situation is an emergency, an emergency message including an emergency code is sent to the mobile terminals of the medical staff, who then attempt automatic authentication to access the patient's upper-grade medical information. Automatic authentication combines user authentication (ID/PW plus the emergency code) with mobile terminal authentication (the staff member's role, working hours, and work location). After user authentication, mobile terminal authentication proceeds automatically without additional intervention by the medical staff. Once all authentication is complete, the medical staff are authorized according to their role and the patient's situation, and can access the patient's graded medical information and personal information through the mobile terminal. We protected the patient's medical information by limiting staff access according to the patient's situation, and provided automatic authentication without additional intervention in an emergency. We performed a performance evaluation to verify the implemented automatic authentication system.
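As a rough illustration of the two-stage flow (user authentication followed by automatic terminal authentication), here is a minimal Python sketch. The staff registry, field names, and access grade are invented for illustration; the abstract does not specify the system's actual data model or checks.

```python
# Hypothetical staff registry; the real system stores roles, working
# hours, and work locations used for terminal authentication.
STAFF = {"nurse01": {"pw": "s3cret", "role": "nurse",
                     "hours": range(8, 20), "ward": "ICU"}}

def automatic_auth(user_id, pw, code, sent_code, hour, ward):
    """Stage 1: user authentication (ID/PW plus the emergency code
    from the alert message). Stage 2: terminal authentication (role
    registered, within working hours, in the right ward), which runs
    with no further input from the staff member."""
    staff = STAFF.get(user_id)
    if staff is None or staff["pw"] != pw or code != sent_code:
        return None                     # user authentication failed
    if hour not in staff["hours"] or ward != staff["ward"]:
        return None                     # terminal authentication failed
    # access level would follow the staff role and patient situation
    return "graded_medical_info"

grant = automatic_auth("nurse01", "s3cret", "E-17", "E-17", 9, "ICU")
```

The point of the sketch is that the second stage consumes only data already known to the system (role, schedule, location), so it can run without any extra input from the staff member, as the paper describes.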

X-tree Diff: An Efficient Change Detection Algorithm for Tree-structured Data (X-tree Diff: 트리 기반 데이터를 위한 효율적인 변화 탐지 알고리즘)

  • Lee, Suk-Kyoon;Kim, Dong-Ah
    • The KIPS Transactions:PartC
    • /
    • v.10C no.6
    • /
    • pp.683-694
    • /
    • 2003
  • We present X-tree Diff, a change detection algorithm for tree-structured data. Our work is motivated by the need to monitor a massive volume of web documents and detect suspicious changes, called defacement attacks, on web sites. In this context, the algorithm must be very efficient in both speed and memory use. X-tree Diff uses a special ordered labeled tree, the X-tree, to represent XML/HTML documents. Each X-tree node has a special field, tMD, which stores a 128-bit hash value representing the structure and data of its subtree, so that identical subtrees from the old and new versions can be matched. During this process, X-tree Diff applies the Rule of Delaying Ambiguous Matchings: it performs exact matching where a node in the old version has a one-to-one correspondence with a node in the new version, and delays all other matchings. This drastically reduces the possibility of wrong matchings. X-tree Diff propagates such exact matchings upwards in Step 2 and obtains further matchings downwards from the roots in Step 3. In Step 4, the nodes to be inserted or deleted are decided. We also show that X-tree Diff runs in O(n), where n is the number of nodes in the X-trees, in the worst case as well as the average case. This result is even better than that of the BULD Diff algorithm, which is O(n log(n)) in the worst case. We experimented with X-tree Diff on real data, about 11,000 home pages from about 20 web sites, instead of synthetic documents manipulated for experimentation. Currently, the X-tree Diff algorithm is used in a commercial hacking detection system called WIDS (Web-Document Intrusion Detection System), which finds changes in registered websites and reports suspicious changes to users.
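The tMD idea can be illustrated compactly: compute a digest bottom-up over each node's label, data, and children's digests, so that identical subtrees in the old and new trees share a digest and can be matched in one pass. A minimal Python sketch follows, with MD5 as a stand-in 128-bit hash and an invented tuple layout for nodes; the real X-tree and matching steps are of course richer than this.

```python
import hashlib

def tmd(label: str, text: str, child_digests) -> bytes:
    """128-bit digest over a node's label, text, and children's
    digests -- identical subtrees yield identical digests."""
    h = hashlib.md5()
    h.update(label.encode() + b"\x00" + text.encode() + b"\x00")
    for d in child_digests:
        h.update(d)
    return h.digest()

def annotate(node):
    """node = (label, text, [children]); returns (digest, annotated
    tree) with every subtree's digest computed bottom-up."""
    label, text, children = node
    ann = [annotate(c) for c in children]
    digest = tmd(label, text, [d for d, _ in ann])
    return digest, (label, text, ann)

old = ("div", "", [("p", "hello", []), ("p", "world", [])])
new = ("div", "", [("p", "hello", []), ("p", "changed", [])])
```

Here `annotate(old)` and `annotate(new)` produce equal digests for the unchanged `("p", "hello", [])` subtree and different digests at the root, which is exactly the signal the matching phase needs.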

Development of Intelligent Job Classification System based on Job Posting on Job Sites (구인구직사이트의 구인정보 기반 지능형 직무분류체계의 구축)

  • Lee, Jung Seung
    • Journal of Intelligence and Information Systems
    • /
    • v.25 no.4
    • /
    • pp.123-139
    • /
    • 2019
  • The job classification systems of major job sites differ from site to site, and also differ from the job classification system of the SQF (Sectoral Qualifications Framework) proposed for the SW field. Therefore, a new job classification system that SW companies, SW job seekers, and job sites can all understand is needed. The purpose of this study is to establish a standard job classification system that reflects market demand by analyzing the SQF, based on job posting information from major job sites and the NCS (National Competency Standards). For this purpose, we conduct association analysis between the occupations of major job sites and derive association rules between the SQF and those occupations. Using these rules, we propose an intelligent job classification system based on data that maps the job classification systems of major job sites to the SQF. First, major job sites are selected to obtain information on the job classification systems of the SW market. We then identify ways to collect job information from each site and collect the data through open APIs. Focusing on the relationships between the data, we keep only the job postings published on multiple job sites at the same time and delete the rest. Next, we map the job classification systems between job sites using the association rules derived from the association analysis. We complete the mapping between these market systems, discuss it with experts, further map the SQF, and finally propose a new job classification system. As a result, more than 30,000 job postings were collected in XML format through the open APIs of WORKNET, JOBKOREA, and saramin, the main job sites in Korea. After filtering down to about 900 job postings simultaneously published on multiple job sites, 800 association rules were derived by applying the Apriori algorithm, a frequent pattern mining method.
Based on the 800 association rules, the job classification systems of WORKNET, JOBKOREA, and saramin and the SQF job classification system were mapped and organized into first through fourth levels. In the new job taxonomy, the first primary class, covering IT consulting, computer systems, networks, and security, consists of three secondary, five tertiary, and five quaternary classifications. The second primary class, covering databases and system operation, consists of three secondary, three tertiary, and four quaternary classifications. The third primary class, covering web planning, web programming, web design, and games, consists of four secondary, nine tertiary, and two quaternary classifications. The last primary class, covering ICT management and computer and communication engineering technology, consists of three secondary and six tertiary classifications. In particular, the new system has a relatively flexible depth of classification, unlike existing systems: WORKNET divides jobs down to a third level, JOBKOREA down to a second level with further subdivision into keywords, and saramin likewise down to a second level with keyword subdivision. The newly proposed standard system accepts some keyword-based jobs and treats some product names as jobs. In the new system, some jobs stop at the second level while others are subdivided down to the fourth level, reflecting the idea that not all jobs can be broken down into the same number of steps. We also combined the rules derived from the collected market data with experts' opinions.
Therefore, the newly proposed job classification system can be regarded as a data-based intelligent job classification system that reflects market demand, unlike existing systems. This study is meaningful in that it suggests a new job classification system reflecting market demand by mapping between occupations based on data, through association analysis, rather than on the intuition of a few experts. However, this study has the limitation that it cannot fully reflect market demand as it changes over time, because the data were collected at a single point in time. As market demand changes over time, including seasonal factors and the timing of major corporate recruitment, continuous data monitoring and repeated experiments are needed to achieve more accurate matching. The results of this study can be used to suggest directions for improving the SQF in the SW industry, and the approach is expected to transfer to other industries after this success in the SW industry.
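The counting step at the heart of the Apriori run described above can be sketched as follows. The postings and category labels are invented for illustration, and a real Apriori implementation additionally prunes candidate itemsets level by level, which this toy version skips.

```python
from collections import Counter
from itertools import combinations

def frequent_pairs(transactions, min_support):
    """Count label pairs that co-occur in one transaction (e.g. the
    same posting carrying both a job-site category and an SQF
    category) and keep those meeting minimum support."""
    counts = Counter()
    for t in transactions:
        for pair in combinations(sorted(set(t)), 2):
            counts[pair] += 1
    n = len(transactions)
    return {p: c / n for p, c in counts.items() if c / n >= min_support}

# Each "transaction" is the set of labels attached to one posting
# that appeared on several sites at once (labels are hypothetical).
postings = [
    {"worknet:web_dev", "sqf:web_programming"},
    {"worknet:web_dev", "sqf:web_programming"},
    {"worknet:dba", "sqf:db_operation"},
]
rules = frequent_pairs(postings, min_support=0.5)
```

Pairs that clear the support threshold become candidate mappings between a site's category and an SQF category, which the study then refined with expert review.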

Very short-term rainfall prediction based on radar image learning using deep neural network (심층신경망을 이용한 레이더 영상 학습 기반 초단시간 강우예측)

  • Yoon, Seongsim;Park, Heeseong;Shin, Hongjoon
    • Journal of Korea Water Resources Association
    • /
    • v.53 no.12
    • /
    • pp.1159-1172
    • /
    • 2020
  • This study applied deep convolutional neural networks based on U-Net and SegNet, trained on a long period of weather radar data, to very short-term rainfall prediction, and compared the results with a translation model. For training and validation of the deep neural networks, Mt. Gwanak and Mt. Gwangdeoksan radar data from 2010 to 2016 were collected and converted to gray-scale image files in HDF5 format with 1 km spatial resolution. The deep neural network model was trained to predict precipitation 10 minutes ahead from four consecutive radar images, and a recursive method of repeated forecasting was applied to reach a lead time of 60 minutes with the pretrained model. To evaluate the prediction performance, 24 rain cases in 2017 were forecast up to 60 minutes in advance. Evaluating the mean absolute error (MAE) and critical success index (CSI) at thresholds of 0.1, 1, and 5 mm/hr, the deep neural network model performed better in terms of MAE at the 0.1 and 1 mm/hr thresholds, and better than the translation model in terms of CSI up to a lead time of 50 minutes. In particular, although the deep neural network model generally outperformed the translation model for weak rainfall of 5 mm/hr or less, it showed limitations in predicting distinct high-intensity precipitation at the 5 mm/hr threshold: the longer the lead time, the more spatial smoothing increased, reducing the accuracy of rainfall prediction. The translation model proved superior in predicting exceedance of higher-intensity thresholds (> 5 mm/hr) because it preserves distinct precipitation features, but it tends to shift the rainfall position incorrectly.
This study is expected to help improve radar rainfall prediction models using deep neural networks. In addition, the massive weather radar dataset established in this study will be provided through open repositories for use in subsequent studies.
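The recursive forecasting scheme (predict 10 minutes ahead, feed the prediction back as input, repeat six times to reach a 60-minute lead time) can be sketched independently of the network itself. In this Python sketch, a toy stand-in model replaces the trained U-Net/SegNet, and each "frame" is a scalar rather than a radar image; the window mechanics are the point.

```python
def recursive_forecast(frames, model, steps=6):
    """Roll a one-step (t+10 min) model forward: each prediction is
    appended to the input window and fed back, so a model trained
    only for 10-minute forecasts yields a 60-minute sequence."""
    window = list(frames[-4:])          # four consecutive radar images
    out = []
    for _ in range(steps):
        nxt = model(window)             # predict rainfall 10 min ahead
        out.append(nxt)
        window = window[1:] + [nxt]     # slide: drop oldest, append forecast
    return out

# Toy stand-in model: "advects" the latest frame by adding 1.
preds = recursive_forecast([0, 1, 2, 3], model=lambda w: w[-1] + 1)
```

Feeding predictions back in this way is also what drives the smoothing effect noted above: each step consumes increasingly blurred inputs, so detail decays with lead time.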

Implementing an Integrated System for R&D Results Management (연구성과물 통합 관리 시스템 구현)

  • Shin, Sung-Ho;Um, Jung-Ho;Seo, Dong-Min;Lee, Seung-Woo;Choi, Sung-Pil;Jung, Han-Min
    • The Journal of the Korea Contents Association
    • /
    • v.12 no.8
    • /
    • pp.411-419
    • /
    • 2012
  • When R&D results from R&D projects are well managed and archived, research institutes can transfer the valuable technologies behind them to corporations for a fee. However, it is still difficult to maintain and reuse R&D results because they are managed by individual people or departments and are not integrated. Therefore, the government should undertake to manage R&D results as a whole, collecting their metadata and distributing information analyzed from that metadata, and each research institute should also make an effort to manage R&D results with reuse in mind. For this purpose, this paper presents a process for managing R&D results: inserting the metadata of R&D results into the system, uploading result files to the system's database, and querying and using the metadata. Based on this process, we design a system architecture for managing R&D results. A key design consideration is a global schema for integrating R&D results into one database. The system shows detailed information on R&D results and provides them conveniently to users. We expect that an efficiently designed process and global schema for the R&D results management system can reduce the cost of reusing R&D results and improve their quality.

A Study on the Current Status and Tasks of Medical Records Management: Focused on Applying the KS X ISO 15489 to the Y Hospital (의무기록관리의 현황과 개선방안: KS X ISO 15489표준의 Y병원 적용 중심으로)

  • Lee, Eun-Mi;Kim, Myeong;Hee, Jin
    • Journal of the Korean Society for information Management
    • /
    • v.29 no.3
    • /
    • pp.257-285
    • /
    • 2012
  • As electronic medical record systems (EMRs) have been introduced into hospitals in Korea and the needs of the chief stakeholders of medical records have changed, the environment for creating and managing medical records has changed dynamically. At this moment it is meaningful to examine medical records based on records management principles rather than information management principles. The purpose of this paper is to apply the KS X ISO 15489 standard, which covers the principles of records management, to hospital medical records management, assess the current quality of that management, and define tasks for improvement. To achieve this goal, this study performed the following activities. First, principles applicable to medical records management were prepared for each records management step described in the standard, such as capture, registration, classification, storage, access, tracing, and disposition; 22 principles were selected across these 7 steps. Second, the Y hospital, which is affiliated with a medical school in Seoul, was chosen to evaluate the current state of medical records management: the head of the hospital's medical records management team was interviewed, and the present status was evaluated against each principle. Third, tasks for improvement were suggested in stages such as access, tracing, and disposition. With this study as a cornerstone, useful implications are expected from future studies applying standards for records metadata, records management systems, and recordkeeping systems to hospital medical records management.

Development and Application of SITES (부지환경종합관리시스템 개발과 적용)

  • Park, Joo-Wan;Yoon, Jeong-Hyoun;Kim, Chank-Lak;Cho, Sung-Il
    • Journal of Nuclear Fuel Cycle and Waste Technology(JNFCWT)
    • /
    • v.6 no.3
    • /
    • pp.205-215
    • /
    • 2008
  • SITES (Site Information and Total Environmental Data Management System) has been developed to systematically manage the site characteristics and environmental data produced during the pre-operational, operational, and post-closure phases of a radioactive waste disposal facility. SITES is an integrated system of four modules supporting the maintenance of site characteristics data, safety assessment, and site/environment monitoring: a site environmental data management module (SECURE), an integrated safety assessment module (SAINT), a site/environment monitoring module (SUDAL), and a geological information module for geological data management (SITES-GIS). Each module has its own database with functions for browsing, storing, and reporting data and information. Data from SECURE and SUDAL are interconnected so they can be used as inputs to SAINT. SAINT allows multiple users to access it simultaneously via a client-server system, and its safety assessment results can be managed with an embedded quality assurance feature. Comparisons between assessment results and environmental monitoring data can be made and visualized in SUDAL and SITES-GIS. SUDAL is also designed so that periodic monitoring data and information can be opened to the public via an internet homepage. SITES has been applied to the Wolsong low- and intermediate-level radioactive waste disposal center in Korea, and is expected to enhance site/environment monitoring in other nuclear-related facilities as well as industrial facilities handling hazardous materials.


A 3-D Measuring System of Thermoluminescence Spectra and Thermoluminescence of CaSO4 : Dy, P (열자극발광 스펙트럼의 3차원 측정 장치와 CaSO4 : Dy, P의 열자극발광)

  • Lee, Jung-Il;Moon, Jung-Hak;Kim, Douk-Hoon
    • Journal of Korean Ophthalmic Optics Society
    • /
    • v.6 no.2
    • /
    • pp.71-75
    • /
    • 2001
  • In this paper, a three-dimensional measuring system for thermoluminescence (TL) spectra, covering temperature, wavelength, and luminescence intensity, is introduced. The system is composed of a spectrometer, a temperature control unit for thermal stimulation, a photon detector, and a personal computer controlling the entire system. Temperature control is achieved using feedback to ensure a linear rise in the sample temperature: a digital multimeter (KEITHLEY 195A) measures the electromotive force of a copper-constantan thermocouple and transmits the data to the computer through a GPIB card; the computer converts this signal to temperature using an electromotive-force-to-temperature table and then controls the power supply through a D/A converter. The spectrometer (SPEX 1681) is controlled by a CD-2A, which the computer controls through an RS-232 communication port. To measure the luminescence intensity during the heating run, an electrometer (KEITHLEY 617) measures the anode current of a photomultiplier tube (HAMAMATSU R928) and transmits the data to the computer through an A/D converter. Using this system, we measured and analyzed the thermoluminescence of CaSO4 : Dy, P. The measuring range of the thermoluminescence spectra was 300-575 K and 300-800 nm. The CaSO4 : Dy, P was fabricated by Yamashita's method at the Korea Atomic Energy Research Institute (KAERI) for use as a radiation dosimeter. The thermoluminescence spectra of CaSO4 : Dy, P consist of two main peaks at a temperature of 205°C, at wavelengths of 476 nm and 572 nm, with minor peaks at 658 nm and 749 nm.
