• Title/Summary/Keyword: Computer Studies


A method of improving the quality of 3D images acquired from RGB-depth camera (깊이 영상 카메라로부터 획득된 3D 영상의 품질 향상 방법)

  • Park, Byung-Seo;Kim, Dong-Wook;Seo, Young-Ho
    • Journal of the Korea Institute of Information and Communication Engineering / v.25 no.5 / pp.637-644 / 2021
  • In the fields of computer vision, robotics, and augmented reality, 3D space and 3D object detection and recognition technologies have become increasingly important. In particular, since RGB and depth images can be acquired in real time through image sensors such as the Microsoft Kinect, studies on object detection, tracking, and recognition have changed considerably. In this paper, we propose a method to improve the quality of 3D reconstructed images by processing images acquired through an RGB-Depth camera in a multi-view camera system. Specifically, we propose a method of removing noise outside the object by applying a mask acquired from the color image, and a method of applying a combined filtering operation based on the differences in depth information between pixels inside the object. The experimental results confirm that the proposed methods effectively remove noise and improve the quality of the 3D reconstructed images.
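
The two-step pipeline described above can be sketched as follows. This is a minimal stand-in, not the paper's implementation: the mask is assumed to be already extracted from the color image, and a plain 3x3 median filter stands in for the combined filtering operation.

```python
def apply_color_mask(depth, mask):
    """Step 1: zero out depth samples the color-image mask marks as background."""
    return [[d if m else 0 for d, m in zip(drow, mrow)]
            for drow, mrow in zip(depth, mask)]

def smooth_inside(depth, mask):
    """Step 2: 3x3 median over in-object pixels only, a simple stand-in for
    the combined filtering of depth differences between neighboring pixels."""
    h, w = len(depth), len(depth[0])
    out = [row[:] for row in depth]
    for y in range(h):
        for x in range(w):
            if not mask[y][x]:
                continue
            window = sorted(depth[ny][nx]
                            for ny in range(max(0, y - 1), min(h, y + 2))
                            for nx in range(max(0, x - 1), min(w, x + 2))
                            if mask[ny][nx])
            out[y][x] = window[len(window) // 2]
    return out

mask  = [[0, 1, 1], [0, 1, 1], [0, 0, 0]]   # 1 = object pixel (from color image)
depth = [[90, 50, 52], [87, 51, 99], [86, 50, 49]]
cleaned  = apply_color_mask(depth, mask)     # background depth noise removed
smoothed = smooth_inside(cleaned, mask)      # the 99 outlier is pulled toward its neighbors
```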

A meta-analysis on advantages of peripheral nerve block post-total knee arthroplasty

  • You, Di;Qin, Lu;Li, Kai;Li, Di;Zhao, Guoqing;Li, Longyun
    • The Korean Journal of Pain / v.34 no.3 / pp.271-287 / 2021
  • Background: Postoperative pain management is crucial for patients undergoing total knee arthroplasty (TKA). There have been many recent clinical trials on post-TKA peripheral nerve block; however, they have reported inconsistent findings. In this meta-analysis, we aimed to comprehensively analyze studies on post-TKA analgesia to provide evidence-based clinical suggestions. Methods: We performed a computer-based query of PubMed, Embase, the Cochrane Library, and the Web of Science to retrieve related articles using the following search terms: nerve block, nerve blockade, chemodenervation, chemical neurolysis, peridural block, epidural anesthesia, extradural anesthesia, total knee arthroplasty, total knee replacement, partial knee replacement, and others. After quality evaluation and data extraction, we analyzed the complications, visual analogue scale (VAS) score, patient satisfaction, perioperative opioid dosage, and rehabilitation indices. Evidence was rated using the Grading of Recommendations Assessment, Development, and Evaluation approach. Results: We included 16 randomized controlled trials involving 981 patients (511 receiving peripheral nerve block and 470 receiving epidural block) in the final analysis. Compared with an epidural block, a peripheral nerve block significantly reduced complications. There were no significant between-group differences in the postoperative VAS score, patient satisfaction, perioperative opioid dosage, or rehabilitation indices. Conclusions: Our findings demonstrate that a peripheral nerve block is superior to an epidural block in reducing complications without compromising the analgesic effect or patient satisfaction. Therefore, a peripheral nerve block is a safe and effective postoperative analgesic method with encouraging clinical prospects.

The Improvement of Information Protection Service Cost Model in Public Institution (공공기관 정보보호서비스 대가 모델의 개선 방안)

  • Oh, Sangik;Park, Namje
    • The Journal of Korean Institute of Information Technology / v.17 no.7 / pp.123-131 / 2019
  • In this paper, related studies were investigated in three categories: cost-benefit analysis, security continuity services, and SW-centric cost calculation. Case analyses were conducted on institutions in the United States, Japan, and South Korea, and an improvement model was derived by comparing them with the current system. We propose an information protection service cost calculation model based on an SCS (Security Continuity Service) performance evaluation system. This model applies a service level agreement (SLA) and the NIST Cybersecurity Framework, which were found to be highly effective in the cost-effectiveness analysis, and calculates the cost of each information protection service based on its characteristics, performance criteria, and weight. The model can be used as a tool to objectively calculate the cost of information protection services at public institutions. We also expect that the system can be established by strengthening the current statutory level from recommended to enforceable, improving the evaluation system of state agencies and public institutions, introducing verification of information protection services by national certification bodies, and expanding its scope to all systems.
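
The weighted per-service calculation described above might look roughly like the following. The service names, base costs, performance scores, and weights are purely illustrative assumptions, not values from the paper.

```python
def information_protection_cost(services):
    """Total cost = sum over services of base cost x performance grade x SLA weight."""
    return sum(s["base_cost"] * s["performance"] * s["weight"] for s in services)

# hypothetical services with SLA weights and performance-evaluation grades
services = [
    {"name": "security monitoring", "base_cost": 100.0, "performance": 0.9, "weight": 0.5},
    {"name": "incident response",   "base_cost": 200.0, "performance": 1.0, "weight": 0.3},
    {"name": "vuln assessment",     "base_cost": 150.0, "performance": 0.8, "weight": 0.2},
]
total_cost = information_protection_cost(services)  # 45 + 60 + 24 = 129.0
```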

Cyber threat Detection and Response Time Modeling (사이버 위협 탐지대응시간 모델링)

  • Han, Choong-Hee;Han, ChangHee
    • Journal of Internet Computing and Services / v.22 no.3 / pp.53-58 / 2021
  • There is little research on the actual business activities of security control. Therefore, in this paper, we present a practical research methodology that models the threat information detection and response time of security control, which can contribute to calculating appropriate staffing levels and to analyzing the effectiveness of the latest security solutions. The total threat information detection and response time performed by the security control center is defined as TIDRT (Total Intelligence Detection & Response Time), and is the sum of the internal intelligence detection & response time (IIDRT) and the external intelligence detection & response time (EIDRT). The IIDRT, in turn, can be calculated as the sum of the time required for its five constituent steps. The ultimate goal of this study is to model the major business activities of the security control center as an equation for calculating its cyber threat information detection and response time. Chapter 2 examines previous studies, Chapter 3 models the calculation formula of the total threat information detection and response time, and Chapter 4 presents the conclusion.
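
The timing model can be written directly as code. Only the structure TIDRT = IIDRT + EIDRT, with IIDRT as a five-step sum, comes from the abstract; the step durations and units below are illustrative.

```python
def iidrt(step_times):
    """Internal intelligence detection & response time: sum of the five
    required steps (the individual durations here are illustrative)."""
    assert len(step_times) == 5, "the internal process is modeled as 5 steps"
    return sum(step_times)

def tidrt(internal_step_times, eidrt):
    """Total time = internal detection/response time + external detection/response time."""
    return iidrt(internal_step_times) + eidrt

# five internal step durations (e.g. minutes) plus an external detection/response time
total = tidrt([2, 5, 3, 10, 4], 12)  # 24 + 12 = 36
```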

Video Camera Model Identification System Using Deep Learning (딥 러닝을 이용한 비디오 카메라 모델 판별 시스템)

  • Kim, Dong-Hyun;Lee, Soo-Hyeon;Lee, Hae-Yeoun
    • The Journal of Korean Institute of Information Technology / v.17 no.8 / pp.1-9 / 2019
  • With the development of imaging information and communication technology in modern society, image acquisition and mass production technologies have developed rapidly. However, crimes exploiting these technologies have also increased, and forensic studies are being conducted to counter them. Identification techniques for image acquisition devices have been studied extensively, but the field has been limited to still images. In this paper, a camera model identification technique for video, rather than still images, is proposed. We analyzed video frames using a model trained on still images. Through training and analysis that consider the frame characteristics of video, we showed the superiority of the model using P frames. We then present a video camera model identification system that applies a majority-based decision algorithm. In an experiment using 5 video camera models, we obtained a maximum accuracy of 96.18% for individual frame identification, and the proposed video camera model identification system achieved a 100% identification rate for each camera model.
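
The majority-based decision algorithm mentioned above reduces to a vote over per-frame predictions. A minimal sketch, in which the camera model names are made up and the per-frame labels would in practice come from the trained CNN:

```python
from collections import Counter

def identify_camera(frame_predictions):
    """Majority-based decision: the camera model predicted for the most
    frames becomes the prediction for the whole video."""
    return Counter(frame_predictions).most_common(1)[0][0]

# hypothetical per-frame classifier outputs for one video
frame_predictions = ["GalaxyS9", "GalaxyS9", "iPhoneX", "GalaxyS9", "iPhoneX"]
video_model = identify_camera(frame_predictions)  # "GalaxyS9"
```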

Camera Model Identification Using Modified DenseNet and HPF (변형된 DenseNet과 HPF를 이용한 카메라 모델 판별 알고리즘)

  • Lee, Soo-Hyeon;Kim, Dong-Hyun;Lee, Hae-Yeoun
    • The Journal of Korean Institute of Information Technology / v.17 no.8 / pp.11-19 / 2019
  • To counter advanced image-related crimes, a high level of digital forensics is required. However, feature-based methods, which rely on human-designed features, have difficulty responding to new device characteristics, and deep learning-based methods still need to improve in accuracy. This paper proposes a deep learning model that identifies camera models based on DenseNet, a recent architecture in the deep learning field. To extract camera sensor features, an HPF (high-pass filter) feature extraction filter was applied. For camera model identification, we modified the number of hierarchical iterations and eliminated the bottleneck layer and compression processing that are used to reduce computation. The proposed model was analyzed using the Dresden database and achieved an accuracy of 99.65% for 14 camera models. We achieved higher accuracy than previous studies and overcame their disadvantage of low accuracy for cameras from the same manufacturer.
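
An HPF front end suppresses image content so that the sensor-specific high-frequency residue dominates what the network sees. A simple stand-in that subtracts a 3x3 local mean (the paper's actual HPF kernel may differ):

```python
def high_pass_residual(img):
    """Subtract the 3x3 local mean from each pixel, leaving only the
    high-frequency content where sensor-specific noise lives."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            win = [img[ny][nx]
                   for ny in range(max(0, y - 1), min(h, y + 2))
                   for nx in range(max(0, x - 1), min(w, x + 2))]
            out[y][x] = img[y][x] - sum(win) / len(win)
    return out

flat_patch = [[10, 10, 10], [10, 10, 10], [10, 10, 10]]
residual = high_pass_residual(flat_patch)  # a flat patch has no high-frequency content
```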

Fault Diagnosis of Bearing Based on Convolutional Neural Network Using Multi-Domain Features

  • Shao, Xiaorui;Wang, Lijiang;Kim, Chang Soo;Ra, Ilkyeun
    • KSII Transactions on Internet and Information Systems (TIIS) / v.15 no.5 / pp.1610-1629 / 2021
  • Failures frequently occur in manufacturing machines due to complex and changeable manufacturing environments, increasing downtime and maintenance costs. This manuscript develops a novel deep learning-based method named Multi-Domain Convolutional Neural Network (MDCNN) to deal with this challenging task using vibration signals. The proposed MDCNN consists of time-domain, frequency-domain, and statistical-domain feature channels. The time-domain channel models the hidden patterns of signals in the time domain. The frequency-domain channel uses the Discrete Wavelet Transformation (DWT) to obtain rich feature representations of signals in the frequency domain. The statistical-domain channel contains six statistical variables that reflect the signals' macro statistical features. Firstly, the time-domain and frequency-domain channels are processed individually by CNNs with various filters. Secondly, the CNN-extracted features from the time and frequency domains are merged into time-frequency features. Lastly, the time-frequency features are fused with the six statistical variables to form comprehensive features for identifying the fault. Thereby, the proposed method makes full use of the three domains' features for fault diagnosis while keeping high distinguishability thanks to the CNNs. The authors designed extensive experiments with 10-fold cross-validation to validate the proposed method's effectiveness on the CWRU bearing data set, with results reported as ten-time averaged accuracy. The experiments confirmed that the proposed MDCNN can intelligently, accurately, and promptly detect faults under complex manufacturing environments, with an accuracy of nearly 100%.
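
The statistical-domain channel can be illustrated with a small sketch. The six variables chosen here (mean, standard deviation, RMS, max, min, peak-to-peak) are an assumption, since the abstract does not list them; they are merely common choices for vibration signals.

```python
import math

def statistical_features(signal):
    """Six macro statistics of a vibration signal, feeding the MDCNN's
    statistical-domain channel (the actual six variables may differ)."""
    n = len(signal)
    mean = sum(signal) / n
    std = math.sqrt(sum((x - mean) ** 2 for x in signal) / n)
    rms = math.sqrt(sum(x * x for x in signal) / n)
    return [mean, std, rms, max(signal), min(signal), max(signal) - min(signal)]

feats = statistical_features([0.0, 1.0, 0.0, -1.0])
```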

Metadata Log Management for Full Stripe Parity in Flash Storage Systems (플래시 저장 시스템의 Full Stripe Parity를 위한 메타데이터 로그 관리 방법)

  • Lim, Seung-Ho
    • The Journal of Korean Institute of Information Technology / v.17 no.11 / pp.17-26 / 2019
  • RAID-5 technology is one of the choices for flash storage devices to enhance reliability. However, RAID-5 has an inherent parity update overhead; in particular, the parity overhead for partial stripe writes is one of the crucial issues for flash-based RAID-5 technologies. In this paper, we design an efficient parity log architecture for RAID-5 that eliminates the runtime partial parity overhead. During runtime, partial parity is retained in buffer memory until a full stripe write is completed, and the parity is written together with the full stripe write. In addition, the parity log is maintained in memory until the whole stripe group has been used for data writes. With this parity log, partial parity can be recovered after a power loss. In the experiments, the parity log method eliminated the partial parity write overhead at the cost of a small number of parity log writes, and hence reduced write amplification while maintaining the same reliability.
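
The underlying RAID-5 parity is a bytewise XOR across the data chunks of a stripe. A minimal sketch of how the full-stripe parity (the only parity written out in the scheme above) is computed, and how a lost chunk is recovered:

```python
from functools import reduce

def stripe_parity(chunks):
    """Bytewise XOR parity over a stripe's equally sized data chunks."""
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*chunks))

stripe = [b"\x0f\x00", b"\xf0\x0f", b"\x00\xf0"]   # three data chunks of one stripe
parity = stripe_parity(stripe)                     # written once, on the full stripe write

# XOR-ing the parity with the surviving chunks rebuilds a lost chunk
rebuilt = stripe_parity([parity, stripe[1], stripe[2]])
```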

Emergency Rescue Guidance Scheme Using Wireless Sensor Networks (재난 상황 시 센서 네트워크 기반 구조자 진입 경로 탐색 방안)

  • Joo, Yang-Ick
    • Journal of the Korea Institute of Information and Communication Engineering / v.23 no.10 / pp.1248-1253 / 2019
  • With current evacuation methods, a crew member describes the physical location of an accident and guides evacuation using alarms and emergency guide lights. However, in the case of an accident in a large and complex building, an intelligent and effective emergency evacuation system is required to ensure the safety of evacuees. Therefore, several studies have been performed on intelligent path finding and emergency evacuation algorithms, which are centralized guidance methods that use data gathered from distributed sensor nodes. However, another important aspect is effective rescue guidance in an emergency situation, and so far no consideration has been given to an efficient rescue guidance scheme. Therefore, this paper proposes a genetic algorithm based emergency rescue guidance method using distributed wireless sensor networks. Performance evaluation using a computer simulation shows that the proposed scheme guarantees efficient path finding: the fitness converges to its minimum value in reasonable time, and the density at each exit node is remarkably decreased as well.
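
The rescue-path fitness such a scheme minimizes can be sketched as traversal cost plus a congestion penalty at the entry point. Here, brute-force evaluation over a toy graph stands in for the genetic algorithm search; the node names, edge costs, and densities are illustrative.

```python
def fitness(path, edge_cost, congestion):
    """Lower is better: total edge traversal cost plus the evacuee
    congestion at the entry/exit node the path uses."""
    cost = sum(edge_cost[(a, b)] for a, b in zip(path, path[1:]))
    return cost + congestion.get(path[-1], 0)

edge_cost = {("A", "B"): 1, ("B", "E1"): 2, ("A", "C"): 1, ("C", "E2"): 1}
congestion = {"E1": 0, "E2": 5}  # evacuee density near each entry/exit node

candidate_paths = [["A", "B", "E1"], ["A", "C", "E2"]]
best = min(candidate_paths, key=lambda p: fitness(p, edge_cost, congestion))
```

A GA would evolve a population of such paths toward the same minimum instead of enumerating them, which matters once the building graph is too large for exhaustive search.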

Design of Adaptive Deduplication Algorithm Based on File Type and Size (파일 유형과 크기에 따른 적응형 중복 제거 알고리즘 설계)

  • Hwang, In-Cheol;Kwon, Oh-Young
    • Journal of the Korea Institute of Information and Communication Engineering / v.24 no.2 / pp.149-157 / 2020
  • Today, due to the large amount of duplicated data caused by the growth of user data, various deduplication studies have been conducted. However, research on personal storage is relatively scarce. Personal storage, unlike high-performance computers, needs to perform deduplication while limiting CPU and memory usage. In this paper, we propose an adaptive algorithm that selectively applies fixed size chunking (FSC) and whole file chunking (WFH) according to the file type and size, in order to maintain the deduplication rate while minimizing the load on personal storage. The experimental results show that the proposed file system is more than 1.3 times slower on the first write operation, but uses up to 3 times less memory than LessFS and is 2.5 times faster on rewrite operations.
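
The adaptive selection between FSC and WFH might look like the following sketch. The 4 KiB chunk size, 1 MiB threshold, and compressed-type list are assumptions for illustration, not the paper's parameters.

```python
import hashlib

FIXED_CHUNK = 4096  # FSC chunk size; an assumption, not the paper's value

def chunk_hashes(data, filename, size_threshold=1 << 20):
    """Pick whole file chunking for small or already-compressed files
    (one chunk per file) and fixed size chunking otherwise, then hash
    each chunk for the deduplication index."""
    compressed = filename.lower().endswith((".jpg", ".mp4", ".zip"))
    if compressed or len(data) < size_threshold:
        chunks = [data]                                      # whole file chunking
    else:
        chunks = [data[i:i + FIXED_CHUNK]                    # fixed size chunking
                  for i in range(0, len(data), FIXED_CHUNK)]
    return [hashlib.sha256(c).hexdigest() for c in chunks]
```

Identical chunk hashes are stored once, which is where the deduplication saving comes from; hashing whole files skips the per-chunk CPU and index-memory cost for files unlikely to share partial content.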