• Title/Summary/Keyword: unsupervised deep learning

Search Results: 102

Unlabeled Wi-Fi RSSI Indoor Positioning by Using IMU

  • Ju, Chanyeong;Yoo, Jaehyun
    • Journal of Positioning, Navigation, and Timing / v.12 no.1 / pp.37-42 / 2023
  • Wi-Fi Received Signal Strength Indicator (RSSI) is considered one of the most important sensor data types for indoor localization. However, collecting an RSSI fingerprint, which consists of pairs of an RSSI measurement set and a corresponding location, is costly and time-consuming. In this paper, we propose a Wi-Fi RSSI learning technique that requires no true location data, overcoming the limitations of static database construction. Instead of true reference positions, inertial measurement unit (IMU) data are used to generate pseudo locations, which allows the trainer to keep moving during data collection and improves collection efficiency dramatically. Experimental results show that the proposed algorithm successfully learns the unsupervised Wi-Fi RSSI positioning model, achieving 2 m accuracy at the 0.8 level of the cumulative distribution function (CDF).
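
A minimal sketch of the pseudo-labeling idea, with hypothetical step-length and heading inputs standing in for a real pedestrian dead-reckoning pipeline (not the authors' implementation): IMU-integrated pseudo locations are paired with RSSI scans to train an off-the-shelf regressor.

```python
# Sketch: train an RSSI -> position regressor on IMU-derived pseudo
# locations. Step lengths, headings, and RSSI scans below are random
# placeholders; a real pipeline would take them from IMU processing.
import numpy as np
from sklearn.neighbors import KNeighborsRegressor

def pseudo_locations(step_lengths, headings, start=(0.0, 0.0)):
    """Integrate per-step displacements into (x, y) pseudo locations."""
    dx = step_lengths * np.cos(headings)
    dy = step_lengths * np.sin(headings)
    return np.cumsum(np.stack([dx, dy], axis=1), axis=0) + np.asarray(start)

rng = np.random.default_rng(0)
steps = rng.uniform(0.6, 0.8, size=200)            # ~0.7 m strides
headings = np.cumsum(rng.normal(0.0, 0.05, 200))   # slowly turning walk
labels = pseudo_locations(steps, headings)         # pseudo ground truth
rssi = rng.uniform(-90, -40, size=(200, 6))        # 6 APs (placeholder)

model = KNeighborsRegressor(n_neighbors=5).fit(rssi, labels)
print(model.predict(rssi[:1]))  # (x, y) estimate for one scan
```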

Forecasting the Precipitation of the Next Day Using Deep Learning

  • Ha, Ji-Hun;Lee, Yong Hee;Kim, Yong-Hyuk
    • Journal of the Korean Institute of Intelligent Systems / v.26 no.2 / pp.93-98 / 2016
  • For accurate precipitation forecasts, the choice of weather factors and prediction method is very important. Machine learning has recently been widely used for precipitation forecasting, and artificial neural networks, one such technique, have shown good performance. In this paper, we suggest a new method for forecasting precipitation using a deep belief network (DBN). A DBN has the advantage that its initial weights are set by unsupervised learning, which compensates for the weaknesses of conventional artificial neural networks. We used past precipitation, temperature, and parameters of the sun's and moon's motion as features. The dataset consists of 40 years of observations from automatic weather stations (AWS) in Seoul, and experiments were based on 8-fold cross validation. The model outputs precipitation probabilities for the test dataset, and a threshold was applied to decide precipitation occurrence. The critical success index (CSI) and bias were used to measure forecast precision. Our experimental results showed that the DBN performed better than a multilayer perceptron (MLP).
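
As a rough illustration of the unsupervised-initialization idea (a shallow stand-in for the paper's DBN, with random placeholder data rather than the Seoul AWS features): a restricted Boltzmann machine learns a feature layer without labels, and a supervised classifier is trained on top.

```python
# Sketch: RBM feature learning feeding a logistic-regression head, a
# shallow stand-in for DBN-style unsupervised pre-training. Inputs
# must be scaled to [0, 1] for BernoulliRBM; data here are placeholders.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import BernoulliRBM
from sklearn.pipeline import Pipeline

rng = np.random.default_rng(0)
X = rng.random((500, 20))                  # placeholder weather features
y = (X[:, 0] + X[:, 1] > 1.0).astype(int)  # placeholder rain label

model = Pipeline([
    ("rbm", BernoulliRBM(n_components=32, learning_rate=0.05,
                         n_iter=20, random_state=0)),  # unsupervised layer
    ("clf", LogisticRegression(max_iter=1000)),        # supervised head
])
model.fit(X, y)
print("train accuracy:", model.score(X, y))
```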

Application and Potential of Artificial Intelligence in Heart Failure: Past, Present, and Future

  • Minjae Yoon;Jin Joo Park;Taeho Hur;Cam-Hao Hua;Musarrat Hussain;Sungyoung Lee;Dong-Ju Choi
    • International Journal of Heart Failure / v.6 no.1 / pp.11-19 / 2024
  • The prevalence of heart failure (HF) is increasing, necessitating accurate diagnosis and tailored treatment. The accumulation of clinical information from patients with HF generates big data, which poses challenges for traditional analytical methods. To address this, big data approaches and artificial intelligence (AI) have been developed that can effectively predict future observations and outcomes, enabling precise diagnoses and personalized treatments of patients with HF. Machine learning (ML) is a subfield of AI that allows computers to analyze data, find patterns, and make predictions without explicit instructions. ML can be supervised, unsupervised, or semi-supervised. Deep learning is a branch of ML that uses artificial neural networks with multiple layers to find complex patterns. These AI technologies have shown significant potential in various aspects of HF research, including diagnosis, outcome prediction, classification of HF phenotypes, and optimization of treatment strategies. In addition, integrating multiple data sources, such as electrocardiography, electronic health records, and imaging data, can enhance the diagnostic accuracy of AI algorithms. Currently, wearable devices and remote monitoring aided by AI enable the earlier detection of HF and improved patient care. This review focuses on the rationale behind utilizing AI in HF and explores its various applications.

Opera Clustering: K-means on librettos datasets

  • Jeong, Harim;Yoo, Joo Hun
    • Journal of Internet Computing and Services / v.23 no.2 / pp.45-52 / 2022
  • With the development of artificial intelligence analysis methods, especially machine learning, many fields are widely expanding their ranges of application. In classical music, however, some difficulties remain in applying machine learning techniques. Genre classification and music recommendation systems built on deep learning algorithms are actively used for popular music, but not for classical music. In this paper, we attempt to cluster operas within classical music. To this end, an experiment was conducted to determine which criterion is most suitable among composer, period of composition, and emotional atmosphere, the basic features of music. To generate emotional labels, we adopted zero-shot classification with four basic emotions: 'happiness', 'sadness', 'anger', and 'fear'. After embedding the opera librettos with a doc2vec model, the optimal number of clusters was computed with the elbow method. The resulting four centroids were then used in k-means clustering to group the unlabeled libretto dataset, and clustering quality was assessed with adjusted Rand index scores. Comparing the clusters against the annotated musical variables, we found that the four machine-generated clusters were most similar to grouping by period, while emotional atmosphere showed no significant alignment with composer or period. By identifying period as the most suitable criterion, we hope to make it easier for listeners to find operas that suit their tastes.
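
A minimal sketch of the embedding-and-clustering pipeline described above, with a toy placeholder corpus (tokenization, vector size, and epochs are assumptions, not the paper's settings):

```python
# Sketch: doc2vec embeddings -> elbow method -> k-means clustering.
# The four-line corpus is a toy placeholder for tokenized librettos.
from gensim.models.doc2vec import Doc2Vec, TaggedDocument
from sklearn.cluster import KMeans

corpus = [
    "love death betrayal night".split(),
    "war hero glory battle".split(),
    "laughter wedding joy dance".split(),
    "storm ghost fear curse".split(),
] * 5

tagged = [TaggedDocument(words=w, tags=[i]) for i, w in enumerate(corpus)]
model = Doc2Vec(tagged, vector_size=32, min_count=1, epochs=50, seed=0)
X = [model.infer_vector(w) for w in corpus]

# Elbow method: pick the k where inertia stops dropping sharply.
for k in range(2, 7):
    print(k, round(KMeans(n_clusters=k, n_init=10,
                          random_state=0).fit(X).inertia_, 2))

labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(X)
print(labels)  # cluster assignment per libretto
```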

Unsupervised Vortex-induced Vibration Detection Using Data Synthesis

  • Sunho Lee;Sunjoong Kim
    • Journal of the Computational Structural Engineering Institute of Korea / v.36 no.5 / pp.315-321 / 2023
  • Long-span bridges are flexible structures with low natural frequencies and damping ratios, making them susceptible to vibrational serviceability problems. However, the current design guideline of South Korea assumes a uniform threshold of wind speed or vibrational amplitude to assess the occurrence of harmful vibrations, potentially overlooking the complex vibrational patterns observed in long-span bridges. In this study, we propose a pointwise vortex-induced vibration (VIV) detection method using a deep-learning-based signal-segmentation model. Departing from conventional supervised approaches of data acquisition and manual labeling, we synthesize training data by generating sinusoidal waves with an envelope to accurately represent VIV. A Fourier synchrosqueezed transform is leveraged to extract time-frequency features, which serve as input data for training a bidirectional long short-term memory model. The effectiveness of the model trained on synthetic VIV data is demonstrated through a comparison with its counterpart trained on manually labeled real datasets from an actual cable-supported bridge.
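
A minimal sketch of the data-synthesis step, with assumed frequency, envelope, and noise parameters rather than the paper's exact settings: a sinusoid under a smooth envelope stands in for a VIV event, and every sample carries a pointwise label.

```python
# Sketch: synthesize a VIV-like signal with pointwise labels. The
# 0.4 Hz frequency, Hanning envelope, and noise level are assumptions.
import numpy as np

fs = 100.0                                   # sampling rate (Hz)
t = np.arange(0, 60, 1 / fs)                 # 60 s record
signal = 0.05 * np.random.randn(t.size)      # ambient noise floor
labels = np.zeros(t.size, dtype=int)         # 1 where VIV is present

start, dur = int(15 * fs), int(20 * fs)      # one 20 s VIV event
envelope = np.hanning(dur)
signal[start:start + dur] += envelope * np.sin(
    2 * np.pi * 0.4 * t[start:start + dur])
labels[start:start + dur] = 1

# (signal, labels) pairs like this train the pointwise segmentation model.
print(signal.shape, int(labels.sum()), "VIV-labeled samples")
```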

Deep Learning Approach for Automatic Discontinuity Mapping on 3D Model of Tunnel Face

  • Chuyen Pham;Hyu-Soung Shin
    • Tunnel and Underground Space / v.33 no.6 / pp.508-518 / 2023
  • This paper presents a new approach for the automatic mapping of discontinuities in a tunnel face based on its 3D digital model reconstructed by LiDAR scanning or photogrammetry. The main idea is to identify discontinuity areas in the 3D digital model of a tunnel face by segmenting its 2D projected images with a deep-learning semantic segmentation model called U-Net. The proposed model integrates various features, including the projected RGB image, the depth map image, and local surface-property images, i.e., normal vector and curvature images, to effectively segment areas of discontinuity. Subsequently, the segmentation results are projected back onto the 3D model using depth maps and projection matrices to obtain an accurate representation of the location and extent of discontinuities in 3D space. The performance of the segmentation model is evaluated by comparing the segmented results with their corresponding ground truths, demonstrating high accuracy with an intersection-over-union metric of approximately 0.8. Although training data are still limited, this method shows promising potential to address the limitations of conventional approaches, which rely only on normal vectors and unsupervised machine learning algorithms to group points in the 3D model into distinct sets of discontinuities.
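
A back-of-the-envelope sketch of the 2D-to-3D projection step, assuming a simple pinhole camera with hypothetical intrinsics (a real pipeline would use the calibrated projection matrices): segmented discontinuity pixels are lifted back to 3D points via the depth map.

```python
# Sketch: lift segmented discontinuity pixels into 3D with a depth map
# and a pinhole intrinsic matrix K. K, depth, and mask are placeholders.
import numpy as np

K = np.array([[800.0,   0.0, 320.0],    # fx,  0, cx
              [  0.0, 800.0, 240.0],    #  0, fy, cy
              [  0.0,   0.0,   1.0]])
K_inv = np.linalg.inv(K)

depth = np.full((480, 640), 5.0)            # placeholder depth map (m)
mask = np.zeros((480, 640), dtype=bool)     # U-Net discontinuity mask
mask[200:240, 300:360] = True               # pretend segmented region

v, u = np.nonzero(mask)                     # pixel coordinates in mask
pix = np.stack([u, v, np.ones_like(u)])     # homogeneous pixels, (3, N)
points = ((K_inv @ pix) * depth[v, u]).T    # back-projected, (N, 3)
print(points.shape, points[0])              # 3D discontinuity points
```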

Malware Detection Using Deep Recurrent Neural Networks with no Random Initialization

  • Amir Namavar Jahromi;Sattar Hashemi
    • International Journal of Computer Science & Network Security / v.23 no.8 / pp.177-189 / 2023
  • Malware detection is an increasingly important operational focus in cyber security, particularly given the fast pace of such threats (e.g., new malware variants introduced every day). There has been great interest in exploring the use of machine learning techniques to automate and enhance the effectiveness of malware detection and analysis. In this paper, we present a deep recurrent neural network solution, a stacked Long Short-Term Memory (LSTM) network with pre-training as a regularization method to avoid random network initialization. Our proposal exploits both global and short-range dependencies of the inputs. With pre-training, we avoid random initialization and improve the accuracy and robustness of malware threat hunting. The proposed method speeds up convergence (in comparison to a stacked LSTM) by reducing the length of malware OpCode or bytecode sequences, which also reduces the complexity of the final method. This leads to better accuracy, a higher Matthews Correlation Coefficient (MCC), and a higher Area Under the Curve (AUC) in comparison to a standard LSTM with similar detection time. Our proposed method can be applied in real-time malware threat hunting, particularly for safety-critical systems such as eHealth or the Internet of Military Things, where poor convergence of the model could lead to catastrophic consequences. We evaluate the effectiveness of our proposed method on Windows, ransomware, Internet of Things (IoT), and Android malware datasets using both static and dynamic analysis. For IoT malware detection, we also present a comparative summary of the performance of our proposed method and the standard stacked LSTM method on an IoT-specific dataset. More specifically, our proposed method achieves an accuracy of 99.1% in detecting IoT malware samples, with an AUC of 0.985 and an MCC of 0.95, thus outperforming standard LSTM-based methods on these key metrics.
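
A minimal stacked-LSTM sketch for opcode-sequence classification; vocabulary size, sequence length, and layer widths are assumptions, and the paper's unsupervised pre-training of the initial weights is not reproduced here.

```python
# Sketch: stacked LSTM over opcode-index sequences for binary malware
# classification. Shapes are placeholders; the pre-training step that
# replaces random initialization in the paper is omitted.
import numpy as np
import tensorflow as tf

vocab, seq_len = 256, 400   # assumed opcode vocabulary / sequence length
model = tf.keras.Sequential([
    tf.keras.layers.Embedding(vocab, 64),              # opcode embeddings
    tf.keras.layers.LSTM(128, return_sequences=True),  # lower LSTM layer
    tf.keras.layers.LSTM(64),                          # upper LSTM layer
    tf.keras.layers.Dense(1, activation="sigmoid"),    # malware probability
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=[tf.keras.metrics.AUC()])

X = np.random.randint(0, vocab, size=(32, seq_len))    # placeholder batch
y = np.random.randint(0, 2, size=(32, 1))
model.fit(X, y, epochs=1, verbose=0)
print(model.predict(X[:1], verbose=0))                 # malware score
```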

Respiratory Motion Correction on PET Images Based on 3D Convolutional Neural Network

  • Hou, Yibo;He, Jianfeng;She, Bo
    • KSII Transactions on Internet and Information Systems (TIIS) / v.16 no.7 / pp.2191-2208 / 2022
  • Motion blur in PET (positron emission tomography) images induced by respiratory motion reduces imaging quality. Although existing methods perform well for respiratory motion correction in medical practice, there are still many aspects that can be improved. In this paper, an improved unsupervised 3D framework, Res-Voxel, based on the U-Net architecture, is proposed for motion correction. Res-Voxel's multiple residual structures improve its ability to predict the deformation field, and its smaller convolution kernels reduce the number of model parameters and the amount of computation required. The proposed method is tested on simulated PET imaging data and on clinical data. Experimental results demonstrate that it achieved Dice indices of 93.81%, 81.75%, and 75.10% on the simulated geometric phantom data, the voxel phantom data, and the clinical data, respectively, showing that the proposed method can improve the registration and correction performance of PET images.
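
A conceptual sketch of the two loss terms such unsupervised registration frameworks typically minimize: similarity between the warped moving image and the fixed image, plus smoothness of the predicted deformation field (the weighting and exact forms here are assumptions, not the paper's loss).

```python
# Sketch: standard unsupervised deformable-registration objective:
# intensity similarity + deformation smoothness. Arrays are random
# placeholders; lam is an assumed regularization weight.
import numpy as np

def similarity_loss(warped, fixed):
    """Mean squared intensity difference."""
    return np.mean((warped - fixed) ** 2)

def smoothness_loss(flow):
    """Mean squared spatial gradient over the x/y/z displacement maps."""
    total = 0.0
    for c in range(flow.shape[-1]):
        total += sum(np.mean(g ** 2) for g in np.gradient(flow[..., c]))
    return total / flow.shape[-1]

fixed = np.random.rand(32, 32, 32)
warped = np.random.rand(32, 32, 32)          # moving image after warping
flow = np.random.rand(32, 32, 32, 3) * 0.1   # predicted deformation field

lam = 0.01
loss = similarity_loss(warped, fixed) + lam * smoothness_loss(flow)
print(round(float(loss), 4))
```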

Comparative Study of Keyword Extraction Models in Biomedical Domain

  • Donghee Lee;Soonchan Kwon;Beakcheol Jang
    • Journal of Internet Computing and Services / v.24 no.4 / pp.77-84 / 2023
  • Given the growing volume of biomedical papers, the ability to extract keywords efficiently has become crucial for accessing and responding to important information in the literature. In this study, we conduct a comprehensive evaluation of unsupervised learning-based models and BERT-based models for keyword extraction in the biomedical field. Our experimental findings reveal that the BioBERT model, trained on biomedical-specific data, achieves the highest performance. By establishing a suitable experimental framework and thoroughly comparing and analyzing diverse models, this study offers precise and dependable insights to guide future research in biomedical keyword extraction. We further anticipate extending these contributions to other domains through comparative experiments and practical guidelines for effective keyword extraction.
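
An embedding-based extraction sketch in the spirit of the BERT-based approaches compared here (the encoder name and candidate generation are assumptions, not the study's exact setup; a biomedical pipeline would swap in a domain-specific encoder such as BioBERT): candidate phrases are ranked by cosine similarity to the document embedding.

```python
# Sketch: rank candidate n-grams by cosine similarity to the document
# embedding. The encoder model is an assumed general-purpose one.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.metrics.pairwise import cosine_similarity
from sentence_transformers import SentenceTransformer

doc = ("Deep learning models improve protein structure prediction "
       "and accelerate drug discovery in biomedical research.")

candidates = CountVectorizer(ngram_range=(1, 2), stop_words="english")\
    .fit([doc]).get_feature_names_out()

encoder = SentenceTransformer("all-MiniLM-L6-v2")   # assumed encoder
doc_vec = encoder.encode([doc])
cand_vecs = encoder.encode(list(candidates))

scores = cosine_similarity(doc_vec, cand_vecs)[0]
top = sorted(zip(scores, candidates), reverse=True)[:5]
print([phrase for _, phrase in top])                # top-5 keyphrases
```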

Lossy Compression and Loss Correction Technique of 3D Point Cloud Data

  • Shin, Kwang-seong;Shin, Seong-yoon
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2021.05a / pp.351-352 / 2021
  • The recent rapid changes in the social environment caused by COVID-19 have created an urgent need for non-face-to-face, contact-free information exchange technology, and with it the development of alternative systems that provide a sense of immersion and presence. In this study, toward implementing such a video conferencing system, we developed a technique for transmitting large-volume 3D data in real time without delay. For this, we applied a generative adversarial network (GAN), an unsupervised deep learning algorithm.
