• Title/Summary/Keyword: FUSION Software


Real-Time Stock Price Prediction using Apache Spark (Apache Spark를 활용한 실시간 주가 예측)

  • Dong-Jin Shin;Seung-Yeon Hwang;Jeong-Joon Kim
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.23 no.4 / pp.79-84 / 2023
  • Apache Spark, one of the fastest of the recent distributed and parallel processing technologies, provides both real-time streaming and machine learning functionality. Although official documentation describes each function, no guide explains how to combine them to predict a specific value in real time. In this paper, we therefore study real-time prediction of data values by fusing these functions. The overall pipeline first collects stock price data downloaded via the Python programming language, then builds a regression model with the machine learning function, and finally predicts the adjusted closing price in real time by combining the real-time streaming function with the trained model.
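The fit-then-stream idea described above can be sketched in plain Python (not Spark itself; the data and the linear model are hypothetical stand-ins for the paper's MLlib regression and streaming pipeline):

```python
# Minimal sketch: fit a regression model on historical adjusted
# closing prices, then apply it to points arriving in a simulated
# real-time stream.

def fit_linear(xs, ys):
    """Ordinary least squares for y = a*x + b."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var = sum((x - mx) ** 2 for x in xs)
    a = cov / var
    b = my - a * mx
    return a, b

# Historical adjusted closing prices (hypothetical data).
history = [(0, 100.0), (1, 101.0), (2, 102.0), (3, 103.0)]
a, b = fit_linear([t for t, _ in history], [p for _, p in history])

# "Streaming" phase: predict each newly arriving day's price.
for t in [4, 5]:
    print(f"day {t}: predicted adjusted close = {a * t + b:.1f}")
```

In Spark proper, the trained model would be applied inside a Structured Streaming query instead of a loop.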

Access Control Policy of Data Considering Varying Context in Sensor Fusion Environment of Internet of Things (사물인터넷 센서퓨전 환경에서 동적인 상황을 고려한 데이터 접근제어 정책)

  • Song, You-jin;Seo, Aria;Lee, Jaekyu;Kim, Yei-chang
    • KIPS Transactions on Software and Data Engineering / v.4 no.9 / pp.409-418 / 2015
  • To deliver correct information in an IoT environment, it is important to infer from the collected information according to the user's situation and to create new information. In this paper, we propose a context-aware access control scheme to protect sensitive information in the IoT environment. It focuses on managing access rights so that access is granted in consideration of the user's situation, and it constrains unauthorized users' access to data stored in the network through an access control policy. To this end, we analyze the existing CP-ABE-based access control scheme using context information, extend the range of status information to include dynamic conditions, and then propose an access control policy reflecting the extended multi-dimensional context attributes. The proposed policy, which considers dynamic conditions, is designed to suit IoT sensor fusion environments. Compared with existing studies, it therefore has the advantages of ensuring the variety and accuracy of data and of extending the existing context properties.
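The core decision the policy makes can be illustrated with a toy attribute check (this is a hypothetical illustration of combining static and dynamic context attributes, not the paper's CP-ABE construction):

```python
# Access decision combining static user attributes with dynamic
# context attributes such as location and time of day.

def allowed(policy, attributes):
    """Grant access only if every policy condition is satisfied."""
    return all(attributes.get(k) == v for k, v in policy.items())

# Hypothetical policy and contexts.
policy = {"role": "doctor", "location": "ward", "shift": "day"}
ctx_day = {"role": "doctor", "location": "ward", "shift": "day"}
ctx_night = {"role": "doctor", "location": "ward", "shift": "night"}

print(allowed(policy, ctx_day))    # same user, matching context
print(allowed(policy, ctx_night))  # same user, dynamic context changed
```

In CP-ABE the policy would be embedded in the ciphertext and enforced cryptographically rather than by a lookup.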

Pose-invariant Face Recognition using a Cylindrical Model and Stereo Camera (원통 모델과 스테레오 카메라를 이용한 포즈 변화에 강인한 얼굴인식)

  • Jin-Woo Noh;Jeong-Hwa Hong;Hanseok Ko
    • Journal of KIISE: Software and Applications / v.31 no.7 / pp.929-938 / 2004
  • This paper proposes a pose-invariant face recognition method using a cylindrical model and a stereo camera. The work is divided into two parts: the single-input-image case and the stereo-input-image case. In the single-image case, we normalize the face's yaw pose using the cylindrical model; in the stereo case, we additionally normalize the pitch pose with the cylindrical model, using the pitch angle previously estimated from the stereo geometry. Moreover, because two images are acquired at the same time, overall recognition performance can be increased by decision-level fusion. In representative experiments, the yaw pose transform increased the recognition rate from 61.43% to 94.76%, and the proposed method performed as well as a more complicated 3D face model. The stereo camera system further increased the recognition rate by 5.24% for upper face poses and by 3.34% through decision-level fusion.
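The yaw normalization step can be sketched geometrically: if the face is approximated by a cylinder of radius r, an image column maps to an angle on the cylinder surface, and undoing the yaw is a shift in that angle (a minimal sketch with hypothetical coordinates, not the paper's full warping procedure):

```python
import math

def column_to_angle(x, cx, r):
    """Angle on the cylinder for image column x (cx = cylinder axis)."""
    return math.asin(max(-1.0, min(1.0, (x - cx) / r)))

def normalize_yaw(x, cx, r, yaw_rad):
    """Undo the estimated yaw on the cylinder, then reproject."""
    theta = column_to_angle(x, cx, r) - yaw_rad
    return cx + r * math.sin(theta)

# A point at the cylinder axis of a face rotated 30 degrees reprojects
# toward where it would appear in a frontal view.
print(round(normalize_yaw(50.0, 50.0, 40.0, math.radians(30)), 1))  # 30.0
```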

Effective Multi-Modal Feature Fusion for 3D Semantic Segmentation with Multi-View Images (멀티-뷰 영상들을 활용하는 3차원 의미적 분할을 위한 효과적인 멀티-모달 특징 융합)

  • Hye-Lim Bae;Incheol Kim
    • KIPS Transactions on Software and Data Engineering / v.12 no.12 / pp.505-518 / 2023
  • 3D point cloud semantic segmentation is a computer vision task that divides a point cloud into objects and regions by predicting the class label of each point. Existing 3D semantic segmentation models are limited in their ability to fuse multi-modal features sufficiently while preserving both the 2D visual features extracted from RGB images and the 3D geometric features extracted from the point cloud. In this paper, we therefore propose MMCA-Net, a novel 3D semantic segmentation model using 2D-3D multi-modal features. The proposed model effectively fuses the heterogeneous 2D visual and 3D geometric features through an intermediate fusion strategy and a multi-modal cross-attention-based fusion operation. It also extracts context-rich 3D geometric features from input point clouds of irregularly distributed points by adopting PTv2 as the 3D geometric encoder. We conducted both quantitative and qualitative experiments on the ScanNetv2 benchmark dataset to analyze the performance of the proposed model. In terms of mIoU, it improved on the PTv2 model, which uses only 3D geometric features, by 9.2%, and on the MVPNet model, which uses 2D-3D multi-modal features, by 12.12%, demonstrating its effectiveness and usefulness.
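The cross-attention fusion operation can be sketched in miniature: a 3D point feature acts as a query and attends over 2D visual features serving as keys and values (toy dimensions and hypothetical feature vectors, not MMCA-Net's actual layers):

```python
import math

def softmax(xs):
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def cross_attend(query, keys, values):
    """Weighted sum of values; weights from query-key similarity."""
    scores = [sum(q * k for q, k in zip(query, key)) for key in keys]
    weights = softmax(scores)
    dim = len(values[0])
    return [sum(w * v[d] for w, v in zip(weights, values))
            for d in range(dim)]

q = [1.0, 0.0]                    # one 3D point's geometric feature
ks = [[1.0, 0.0], [0.0, 1.0]]     # 2D visual feature keys
vs = [[10.0, 0.0], [0.0, 10.0]]   # 2D visual feature values
fused = cross_attend(q, ks, vs)
print([round(f, 2) for f in fused])
```

The query pulls most weight from the 2D feature it is most similar to, which is how the 2D appearance information is injected into each 3D point.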

Deep Learning-Based User Emergency Event Detection Algorithms Fusing Vision, Audio, Activity and Dust Sensors (영상, 음성, 활동, 먼지 센서를 융합한 딥러닝 기반 사용자 이상 징후 탐지 알고리즘)

  • Jung, Ju-ho;Lee, Do-hyun;Kim, Seong-su;Ahn, Jun-ho
    • Journal of Internet Computing and Services / v.21 no.5 / pp.109-118 / 2020
  • Recently, people have been spending much of their time at home because of various diseases. For a single-person household whose occupant is injured in the house or infected with a disease, it is difficult to ask others for help. In this study, we propose algorithms to detect emergency events: situations, such as injuries or disease infections, in which single-person households need help from others in their homes. We propose a vision pattern detection algorithm using home CCTV, an audio pattern detection algorithm using artificial intelligence speakers, an activity pattern detection algorithm using smartphone acceleration sensors, and a dust pattern detection algorithm using air purifiers. When home CCTV is difficult to use because of security concerns, we propose a fusion method combining the audio, activity, and dust pattern sensors. Data for each algorithm were collected through YouTube and experiments to measure accuracy.
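The fallback fusion described above can be sketched as a simple late-fusion vote over the remaining sensors (a hypothetical majority-vote combiner, not the paper's deep-learning fusion network):

```python
# When the vision channel is unavailable, combine audio, activity and
# dust detections into a single emergency decision by majority vote.

def fuse(detections):
    """detections: dict sensor -> bool; majority vote over sensors."""
    votes = list(detections.values())
    return sum(votes) * 2 > len(votes)

no_cctv = {"audio": True, "activity": True, "dust": False}
print(fuse(no_cctv))  # two of three sensors fired -> emergency
```

A learned fusion model would weight the channels by reliability instead of counting them equally.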

Prediction of Local Tumor Progression after Radiofrequency Ablation (RFA) of Hepatocellular Carcinoma by Assessment of Ablative Margin Using Pre-RFA MRI and Post-RFA CT Registration

  • Yoon, Jeong Hee;Lee, Jeong Min;Klotz, Ernst;Woo, Hyunsik;Yu, Mi Hye;Joo, Ijin;Lee, Eun Sun;Han, Joon Koo
    • Korean Journal of Radiology / v.19 no.6 / pp.1053-1065 / 2018
  • Objective: To evaluate the clinical impact of using registration software for ablative margin assessment on pre-radiofrequency ablation (RFA) magnetic resonance imaging (MRI) and post-RFA computed tomography (CT), compared with conventional side-by-side MR-CT visual comparison. Materials and Methods: In this Institutional Review Board-approved prospective study, 68 patients with 88 hepatocellular carcinomas (HCCs) who had undergone pre-RFA MRI were enrolled. Informed consent was obtained from all patients. Pre-RFA MRI and post-RFA CT images were analyzed to evaluate the presence of a sufficient safety margin (≥ 3 mm) in two separate sessions, using either side-by-side visual comparison or non-rigid registration software. Patients with an insufficient ablative margin on either one or both methods underwent additional treatment depending on technical feasibility and the patient's condition. Ablative margins were then re-assessed using both methods. Local tumor progression (LTP) rates were compared between the sufficient- and insufficient-margin groups for each method. Results: The two methods showed 14.8% (13/88) discordance in estimating sufficient ablative margins. On registration software-assisted inspection, patients with insufficient ablative margins showed a significantly higher 5-year LTP rate than those with sufficient ablative margins (66.7% vs. 27.0%, p = 0.004). However, classification by visual inspection alone did not reveal a significant difference in 5-year LTP between the two groups (28.6% vs. 30.5%, p = 0.79). Conclusion: Registration software provided better ablative margin assessment than visual inspection in patients with HCCs who had undergone pre-RFA MRI and post-RFA CT for prediction of LTP after RFA, and may provide more precise risk stratification of those treated with RFA.
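The margin criterion reduces to a simple rule once the images are registered: the ablative margin is the smallest tumor-to-ablation-boundary distance, and at least 3 mm counts as sufficient (hypothetical per-direction distances below, not measured data):

```python
# Sufficient-margin check after registration: the ablative margin is
# the minimum distance from the tumor outline to the ablation-zone
# boundary, evaluated over all directions.

def sufficient_margin(margins_mm, threshold_mm=3.0):
    """margins_mm: per-direction tumor-to-ablation distances in mm."""
    return min(margins_mm) >= threshold_mm

print(sufficient_margin([4.1, 3.5, 5.0]))  # True
print(sufficient_margin([4.1, 2.2, 5.0]))  # False: one thin direction
```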

Performance evaluation using BER/SNR of wearable fabric reconfigurable beam-steering antenna for On/Off-body communication systems (On/Off-body 통신시스템을 위한 직물소재 웨어러블 재구성 빔 스티어링 안테나의 BER/SNR 성능 검증)

  • Kang, Seonghun;Jeong, Sangsoo;Jung, Chang Won
    • Journal of the Korea Academia-Industrial cooperation Society / v.16 no.7 / pp.4842-4848 / 2015
  • This paper presents a comparison of communication performance between a reconfigurable beam-steering antenna and an omni-directional (loop) antenna during standstill and walking motion. Both antennas were manufactured on the same fabric substrate (εr = 1.35, tan δ = 0.02) and operated around the 5 GHz band. The reconfigurable antenna was designed to steer the beam direction, using two PIN diodes to implement the beam-steering capability. The measured peak gains were 5.9-6.6 dBi and the overall half-power beam width (HPBW) was 102°. To compare communication efficiency, both the bit error rate (BER) and the signal-to-noise ratio (SNR) were measured using the GNU Radio Companion software tool and universal software radio peripheral (USRP) devices. The measurements were performed with both antennas in standstill and walking-motion states, in an antenna chamber as well as in a smart home environment. The results show that the reconfigurable beam-steering antenna outperformed the loop antenna, that communication efficiency was better in the antenna chamber than in the smart home environment, and that the standstill state yielded better results than the walking-motion state.
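The BER metric itself is just the fraction of flipped bits between the transmitted and received streams (hypothetical bit sequences below, not the USRP capture):

```python
# BER = number of bit errors / total bits compared.

def bit_error_rate(sent, received):
    errors = sum(1 for s, r in zip(sent, received) if s != r)
    return errors / len(sent)

sent = [0, 1, 1, 0, 1, 0, 0, 1]
received = [0, 1, 0, 0, 1, 0, 1, 1]  # two flipped bits
print(bit_error_rate(sent, received))  # 0.25
```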

Real-Virtual Fusion Hologram Generation System using RGB-Depth Camera (RGB-Depth 카메라를 이용한 현실-가상 융합 홀로그램 생성 시스템)

  • Song, Joongseok;Park, Jungsik;Park, Hanhoon;Park, Jong-Il
    • Journal of Broadcast Engineering / v.19 no.6 / pp.866-876 / 2014
  • Generating a digital hologram of video content with computer graphics (CG) requires natural fusion of real and virtual 3D information. In this paper, we propose a system that naturally fuses real and virtual 3D information and quickly generates a digital hologram of the fused result using a multiple-GPU-based computer-generated hologram (CGH) computing module. The system calculates the camera projection matrix of an RGB-Depth camera and estimates the 3D information of the virtual object. The 3D information of the virtual object, obtained from the projection matrix, and that of the real space are transmitted to a Z-buffer, which fuses the two naturally. The fused result in the Z-buffer is then passed to the multiple-GPU-based CGH computing module, which calculates the digital hologram quickly. In experiments, the 3D information of the virtual object produced by the proposed system showed a mean relative error (MRE) of about 0.5138% relative to the real 3D information, i.e., about 99% accuracy. We also verify that the proposed system quickly generates the digital hologram of the fused result using the multiple-GPU-based CGH calculation.
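The Z-buffer fusion step amounts to a per-pixel depth comparison: whichever source, real or virtual, is closer to the camera wins (a minimal sketch with hypothetical 1-D depth rows standing in for full depth maps):

```python
# Z-buffer fusion: at each pixel keep the color of whichever source
# (real or virtual) has the smaller depth value.

def z_fuse(real_depth, real_col, virt_depth, virt_col):
    fused = []
    for rd, rc, vd, vc in zip(real_depth, real_col, virt_depth, virt_col):
        fused.append(rc if rd <= vd else vc)  # smaller depth wins
    return fused

real_depth = [1.0, 2.0, 3.0]
virt_depth = [2.0, 1.5, 4.0]
fused = z_fuse(real_depth, ["R"] * 3, virt_depth, ["V"] * 3)
print(fused)  # ['R', 'V', 'R']
```

The fused color-plus-depth buffer is what the CGH stage would then turn into a hologram.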

Multimodality and Application Software (다중영상기기의 응용 소프트웨어)

  • Im, Ki-Chun
    • Nuclear Medicine and Molecular Imaging / v.42 no.2 / pp.153-163 / 2008
  • Medical imaging modalities to image either anatomical structure or functional processes have developed along somewhat independent paths. Functional imaging with single photon emission computed tomography (SPECT) and positron emission tomography (PET) plays an increasingly important role in the diagnosis and staging of malignant disease, image-guided therapy planning, and treatment monitoring. SPECT and PET complement the more conventional anatomic imaging modalities of computed tomography (CT) and magnetic resonance (MR) imaging. When a functional imaging modality is combined with an anatomic imaging modality, the combination can help both identify and localize functional abnormalities. Combining PET with a high-resolution anatomical modality such as CT can resolve the localization issue, as long as the images from the two modalities are accurately coregistered. Software-based registration techniques have difficulty accounting for differences in patient positioning and involuntary movement of internal organs, often necessitating labor-intensive nonlinear mapping that may not converge to a satisfactory result. These challenges have recently been addressed by the introduction of the combined PET/CT scanner and SPECT/CT scanner, a hardware-oriented approach to image fusion. Combined PET/CT and SPECT/CT devices play an increasingly important role in the diagnosis and staging of human disease. This paper reviews the development of multimodality instrumentation for clinical use, from conception to present-day technology, together with its application software.

Development of SW Education Convergence Science Curriculum-linked Experimental Automation Teaching Tool (SW교육 융합 과학교과 연계형 실험 자동화 교구 개발)

  • Son, Min-Woo;Kim, Jin-ha;Ju, Yeong-Tae;Kim, Jong-Sil;Kim, Eung-Kon
    • The Journal of the Korea institute of electronic communication sciences / v.15 no.5 / pp.967-972 / 2020
  • Most experimental tools currently in use apply sensors to experiments in the physics field, and only MBLs suited to specific experiments have been developed. However, these lack an experimental design stage using SW convergence, their application to the various chemistry experiments in textbooks is limited, and, in the case of Arduino, it is difficult for students to learn and understand the language when programming. In this paper, we design and develop an SW education convergence science experiment apparatus that includes a learner's active experiment design process, overcoming the shortcomings of existing microcomputer experiments and the limitations of software education.