• Title/Summary/Keyword: learning transfer

742 search results

A Novel Approach to COVID-19 Diagnosis Based on Mel Spectrogram Features and Artificial Intelligence Techniques

  • Alfaidi, Aseel;Alshahrani, Abdullah;Aljohani, Maha
    • International Journal of Computer Science & Network Security / v.22 no.9 / pp.195-207 / 2022
  • COVID-19 has remained one of the most serious health crises in recent history, resulting in the tragic loss of lives and significant economic impacts on the entire world. The difficulty of controlling COVID-19 poses a threat to the global health sector. Considering that Artificial Intelligence (AI) has contributed to improving research methods and solving problems in diverse fields of study, AI algorithms have also proven effective in disease detection and early diagnosis. In particular, acoustic features offer a promising prospect for the early detection of respiratory diseases. Motivated by these observations, this study conceptualized a speech-based diagnostic model to aid in COVID-19 diagnosis. The proposed methodology uses speech signals from confirmed positive and negative cases of COVID-19 and extracts features from Mel spectrogram images with the pre-trained Visual Geometry Group (VGG-16) model. The K-means algorithm then selects the effective features, and a Genetic Algorithm-Support Vector Machine (GA-SVM) classifier classifies the cases. The experimental findings indicate that the proposed methodology can separate COVID-19 from non-COVID-19 cases across speakers of varying ages and different languages, as demonstrated in the simulations. Because the methodology relies on deep features followed by dimensionality reduction, it produces better and more consistent performance than the handcrafted features used in previous studies.
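
The following is a minimal Python sketch of the kind of pipeline this abstract describes (Mel spectrogram images, VGG-16 deep features, K-means-based feature reduction, SVM classification). File paths, hyperparameters, and the simplified stand-ins for the K-means feature selection and the GA-tuned SVM are assumptions, not the authors' implementation.

```python
# Sketch only: Mel spectrogram -> VGG-16 features -> K-means reduction -> SVM.
import numpy as np
import librosa
from sklearn.cluster import KMeans
from sklearn.svm import SVC
from tensorflow.keras.applications.vgg16 import VGG16, preprocess_input

def mel_image(path, sr=16000, n_mels=128):
    """Load speech and convert it to a 3-channel Mel-spectrogram 'image' for VGG-16."""
    y, _ = librosa.load(path, sr=sr)
    mel = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=n_mels)
    mel_db = librosa.power_to_db(mel, ref=np.max)
    img = np.resize(mel_db, (224, 224))          # crude resize for illustration
    return np.stack([img, img, img], axis=-1)

backbone = VGG16(weights="imagenet", include_top=False, pooling="avg")

def deep_features(paths):
    batch = preprocess_input(np.stack([mel_image(p) for p in paths]))
    return backbone.predict(batch)               # (N, 512) deep features

def reduce_features(X, k=64):
    """Rough stand-in for the paper's K-means feature-selection step:
    cluster the feature dimensions and keep one representative per cluster."""
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X.T)
    keep = [np.where(km.labels_ == c)[0][0] for c in range(k)]
    return X[:, keep]

# A genetic algorithm would normally search C and gamma; fixed values here for brevity.
clf = SVC(kernel="rbf", C=10.0, gamma="scale")
```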

Price Prediction of Fractional Investment Products Using LSTM Algorithm: Focusing on Musicow (LSTM 모델을 이용한 조각투자 상품의 가격 예측: 뮤직카우를 중심으로)

  • Jung, Hyunjo;Lee, Jaehwan;Suh, Jihae
    • Journal of Intelligence and Information Systems / v.28 no.4 / pp.81-94 / 2022
  • Despite their long investment history, real estate and artworks have been considered challenging investment targets for individual investors because of their relatively high average transaction prices. Recently, so-called fractional investment, generally understood as investing in a share of the ownership rights to real-world assets, has been gaining popularity; most investors perceive that they actually own a piece (fraction) of the ownership rights through their investments. Founded in 2016, Musicow launched the first service that allows users to invest in the royalties generated by music distribution. Using the LSTM algorithm, one of the deep learning algorithms, this study predicts the price of the rights to participate in copyright royalties traded on Musicow. In addition to variables related to the claims themselves, such as transfer price, transaction volume, and royalty income, the model uses comprehensive indicators of market conditions for music-royalty participation, exchange rates reflecting economic conditions, KTB interest rates, and the Korea Composite Stock Price Index. The results confirm that the LSTM algorithm accurately predicts the transaction price even for fractional investment products, which have relatively low transaction volumes.
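
A minimal sketch of a windowed multivariate LSTM price predictor in Keras, assuming a hypothetical 20-day window, six illustrative input features, and a small network; it shows the general setup rather than the authors' exact model.

```python
# Sketch only: multivariate sliding-window LSTM for next-day price prediction.
import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dense

WINDOW = 20        # number of past days fed to the model (assumption)
N_FEATURES = 6     # e.g. price, volume, royalty income, FX rate, KTB rate, KOSPI

def make_windows(series, window=WINDOW):
    """Turn a (T, N_FEATURES) array into (samples, window, N_FEATURES) plus next-day price."""
    X, y = [], []
    for t in range(len(series) - window):
        X.append(series[t:t + window])
        y.append(series[t + window, 0])   # column 0 assumed to be the price
    return np.array(X), np.array(y)

model = Sequential([
    LSTM(64, input_shape=(WINDOW, N_FEATURES)),
    Dense(1),                              # next-day transaction price
])
model.compile(optimizer="adam", loss="mse")

# X_train, y_train = make_windows(scaled_training_data)
# model.fit(X_train, y_train, epochs=100, batch_size=32, validation_split=0.1)
```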

Short-Term Crack in Sewer Forecasting Method Based on CNN-LSTM Hybrid Neural Network Model (CNN-LSTM 합성모델에 의한 하수관거 균열 예측모델)

  • Jang, Seung-Ju;Jang, Seung-Yup
    • Journal of the Korean Geosynthetics Society / v.21 no.2 / pp.11-19 / 2022
  • In this paper, we propose a method that combines GoogleNet transfer learning with a CNN-LSTM model to improve time-series prediction performance for crack detection, using crack data captured inside sewer pipes. Because the LSTM compensates for the CNN's difficulty with long-term dependencies, spatial and temporal characteristics can be considered at the same time. Comparing the RMSE (Root Mean Square Error) over time-series sections of the in-pipe crack data shows that the predictive performance of the proposed method is excellent for all test variables. In addition, an examination of the prediction performance at the time of data generation verified that the proposed method is more effective at predicting crack detection than the existing CNN-only model. The proposed method and the experimental results obtained in this study can be applied not only to crack data of concrete structures but also to various fields, such as the environment and the humanities, where time-series data occur frequently.
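
The CNN-LSTM idea can be sketched as per-image deep features from a pretrained CNN followed by an LSTM over the time axis. Since GoogLeNet is not shipped with keras.applications, InceptionV3 stands in for it here; shapes and hyperparameters are assumptions, not the authors' settings.

```python
# Sketch only: pretrained CNN per frame, then an LSTM over the sequence of features.
from tensorflow.keras.applications.inception_v3 import InceptionV3, preprocess_input
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dense

cnn = InceptionV3(weights="imagenet", include_top=False, pooling="avg")  # 2048-d features

def frame_features(frames):
    """frames: (T, 299, 299, 3) crack images for one pipe section -> (T, 2048)."""
    return cnn.predict(preprocess_input(frames.astype("float32")))

seq_model = Sequential([
    LSTM(128, input_shape=(None, 2048)),   # variable-length sequence of image features
    Dense(1, activation="sigmoid"),        # probability that a crack appears next
])
seq_model.compile(optimizer="adam", loss="binary_crossentropy")
```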

Privacy-preserving Proptech using Domain Adaptation in Metaverse (메타버스 내 원격 부동산 중계 시스템을 위한 부동산 매물 영상 내 민감정보 삭제 기술)

  • Junho Kim;Jinhong Kim;Byeongjun Kang;Jaewon Choi;Jihoon Kim;Dongwoo Kang
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 2022.11a / pp.187-190 / 2022
  • This paper targets the research and development of AI techniques that detect, remove, and restore personal and sensitive information in real estate videos, because video-based property listing systems on AI-enabled augmented/virtual reality real estate brokerage platforms such as the metaverse carry the risk that private and personal information is captured in the footage. We define sensitive objects for Korean real estate settings, develop a sensitive-object detection algorithm based on state-of-the-art deep learning, and also develop an image restoration technique that reconstructs the deleted regions into realistic, object-free views of the space. To build AI models suited to the Korean real estate environment (shooting illumination, display styles, surrounding furniture arrangements, etc.), we constructed our own database of Korean videos and performed transfer learning for target domain adaptation. The proposed algorithm achieved 98% accuracy in typical environments and 81% accuracy in challenging environments (occlusion, light reflection, low illumination, etc.). This technology was designed to promote metaverse-based online brokerage services, which are attracting attention in the proptech field; in particular, it uses AI to provide a key privacy-protection capability needed to activate metaverse real estate brokerage platforms.
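
The two stages described above (fine-tuning a pretrained detector on sensitive object classes via transfer learning, then blanking and restoring the detected regions) can be sketched as follows. The class names, score threshold, Faster R-CNN backbone, and the use of classical OpenCV inpainting in place of the paper's learned restoration model are all illustrative assumptions.

```python
# Sketch only: detector fine-tuning plus redaction of detected sensitive regions.
import cv2
import numpy as np
import torch
from torchvision.models.detection import fasterrcnn_resnet50_fpn
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

SENSITIVE_CLASSES = ["face", "photo_frame", "document", "mail_label"]  # hypothetical

def build_detector(num_classes=len(SENSITIVE_CLASSES) + 1):   # +1 for background
    model = fasterrcnn_resnet50_fpn(weights="DEFAULT")
    in_features = model.roi_heads.box_predictor.cls_score.in_features
    model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes)
    return model   # fine-tune this head on the Korean real-estate dataset

@torch.no_grad()
def redact(model, frame_bgr, score_thresh=0.7):
    """Detect sensitive regions in one video frame and inpaint them."""
    model.eval()
    img = torch.from_numpy(frame_bgr[:, :, ::-1].copy()).permute(2, 0, 1).float() / 255
    out = model([img])[0]
    mask = np.zeros(frame_bgr.shape[:2], dtype=np.uint8)
    for box, score in zip(out["boxes"], out["scores"]):
        if score >= score_thresh:
            x1, y1, x2, y2 = box.int().tolist()
            mask[y1:y2, x1:x2] = 255
    # Classical inpainting stands in for the learned restoration model in the paper.
    return cv2.inpaint(frame_bgr, mask, 3, cv2.INPAINT_TELEA)
```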


Method of Automatically Generating Metadata through Audio Analysis of Video Content (영상 콘텐츠의 오디오 분석을 통한 메타데이터 자동 생성 방법)

  • Sung-Jung Young;Hyo-Gyeong Park;Yeon-Hwi You;Il-Young Moon
    • Journal of Advanced Navigation Technology / v.25 no.6 / pp.557-561 / 2021
  • Metadata has become an essential element for recommending video content to users. However, it is currently generated manually by video content providers. This paper studies a method for automatically generating metadata to replace the existing manual input process. In addition to the emotion-tag extraction method of our previous study, we investigated a method for automatically generating genre and country-of-production metadata from movie audio. The genre was extracted from the audio spectrogram using a ResNet34 artificial neural network with transfer learning, and the language spoken in the movie was detected through speech recognition. Through this, we confirmed the possibility of automatically generating metadata with artificial intelligence.
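
A minimal sketch of the two metadata paths mentioned in the abstract: a ResNet34 backbone fine-tuned to classify genre from spectrogram images, and a speech recognizer used to detect the spoken language. Whisper appears here purely as an illustrative stand-in, since the paper does not name its speech-recognition tool; the number of genres is an assumption.

```python
# Sketch only: ResNet34 transfer learning for genre, ASR-based language detection.
import torch.nn as nn
from torchvision.models import resnet34, ResNet34_Weights
import whisper   # pip install openai-whisper (illustrative choice, not the paper's)

N_GENRES = 8     # assumption

def build_genre_model():
    model = resnet34(weights=ResNet34_Weights.DEFAULT)       # pretrained backbone
    model.fc = nn.Linear(model.fc.in_features, N_GENRES)     # new classification head
    return model  # fine-tune on (spectrogram image, genre) pairs

def detect_language(audio_path):
    asr = whisper.load_model("base")
    result = asr.transcribe(audio_path)
    return result["language"]      # e.g. "en", "ko"
```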

Convolutional neural networks for automated tooth numbering on panoramic radiographs: A scoping review

  • Ramadhan Hardani Putra;Eha Renwi Astuti;Aga Satria Nurrachman;Dina Karimah Putri;Ahmad Badruddin Ghazali;Tjio Andrinanti Pradini;Dhinda Tiara Prabaningtyas
    • Imaging Science in Dentistry / v.53 no.4 / pp.271-281 / 2023
  • Purpose: The objective of this scoping review was to investigate the applicability and performance of various convolutional neural network (CNN) models in tooth numbering on panoramic radiographs, achieved through classification, detection, and segmentation tasks. Materials and Methods: An online search was performed of the PubMed, Science Direct, and Scopus databases. Based on the selection process, 12 studies were included in this review. Results: Eleven studies utilized a CNN model for detection tasks, 5 for classification tasks, and 3 for segmentation tasks in the context of tooth numbering on panoramic radiographs. Most of these studies revealed high performance of various CNN models in automating tooth numbering. However, several studies also highlighted limitations of CNNs, such as the presence of false positives and false negatives in identifying decayed teeth, teeth with crown prosthetics, teeth adjacent to edentulous areas, dental implants, root remnants, wisdom teeth, and root canal-treated teeth. These limitations can be overcome by ensuring both the quality and quantity of datasets, as well as optimizing the CNN architecture. Conclusion: CNNs have demonstrated high performance in automated tooth numbering on panoramic radiographs. Future development of CNN-based models for this purpose should also consider different stages of dentition, such as the primary and mixed dentition stages, as well as the presence of various tooth conditions. Ultimately, an optimized CNN architecture can serve as the foundation for an automated tooth numbering system and for further artificial intelligence research on panoramic radiographs for a variety of purposes.

Multicontents Integrated Image Animation within Synthesis for High Quality Multimodal Video (고화질 멀티 모달 영상 합성을 통한 다중 콘텐츠 통합 애니메이션 방법)

  • Jae Seung Roh;Jinbeom Kang
    • Journal of Intelligence and Information Systems / v.29 no.4 / pp.257-269 / 2023
  • There is currently a burgeoning demand for image synthesis from photos and videos using deep learning models. Existing video synthesis models extract only motion information from the provided video to generate animation effects on photos. However, these models encounter challenges in achieving accurate lip synchronization with the audio and in maintaining the image quality of the synthesized output. To tackle these issues, this paper introduces a novel framework based on an image animation approach. Given a photo, a video, and an audio input, the framework produces an output that retains the unique characteristics of the individuals in the photo while synchronizing their movements with the provided video and their lips with the audio. Furthermore, a super-resolution model is employed to enhance the quality and resolution of the synthesized output.
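
The three stages named in the abstract (motion transfer from a driving video, audio-driven lip synchronization, and super-resolution) can be wired together schematically as below. The stage functions are deliberately simple placeholders that only show the data flow, since the abstract does not name the underlying models.

```python
# Sketch only: data flow of the three-stage synthesis framework with placeholder stages.
from dataclasses import dataclass
from typing import List
import numpy as np

Frame = np.ndarray   # H x W x 3 image

@dataclass
class AnimationInputs:
    source_photo: Frame
    driving_video: List[Frame]
    audio: np.ndarray          # mono waveform

def transfer_motion(photo: Frame, video: List[Frame]) -> List[Frame]:
    """Placeholder: animate the photo with the motion of the driving video."""
    return [photo.copy() for _ in video]

def sync_lips(frames: List[Frame], audio: np.ndarray) -> List[Frame]:
    """Placeholder: re-render the mouth region so it matches the audio."""
    return frames

def upscale(frames: List[Frame], factor: int = 2) -> List[Frame]:
    """Placeholder super-resolution stage; here a naive nearest-neighbour upscale."""
    return [f.repeat(factor, axis=0).repeat(factor, axis=1) for f in frames]

def synthesize(inputs: AnimationInputs) -> List[Frame]:
    animated = transfer_motion(inputs.source_photo, inputs.driving_video)
    synced = sync_lips(animated, inputs.audio)
    return upscale(synced)
```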

Resolving Memory Bottlenecks in Hardware Accelerators with Data Prefetch

  • Hyein Lee;Jinoo Joung
    • Journal of the Korea Society of Computer and Information / v.29 no.6 / pp.1-12 / 2024
  • Faster and more accurate deep learning requires large amounts of storage space and computation. Accordingly, many studies use hardware accelerators for quick and accurate calculation. However, a performance bottleneck arises from data movement between the hardware accelerator and the CPU. In this paper, we propose a data prefetch strategy that can efficiently reduce this operational bottleneck. The core idea of the strategy is to predict the data needed for the next task and upload it to local memory while the hardware accelerator (Matrix Multiplication Unit, MMU) performs the current task. The strategy can be enhanced by using a dual buffer to perform read and write operations simultaneously, which reduces the latency and execution time of data transfers. Through simulations, we demonstrate a 24% improvement in accelerator performance by maximizing parallel processing with dual buffers and reducing the memory bottleneck with data prefetch.
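
A minimal sketch of the double-buffered prefetch idea, with a NumPy matrix multiplication standing in for the MMU and a thread-based copy standing in for the DMA transfer; tile sizes and shapes are arbitrary.

```python
# Sketch only: overlap the next tile's "DMA" copy with the current tile's compute.
import threading
import numpy as np

def run_pipeline(tiles, weights):
    buffers = [None, None]                 # the dual local-memory buffers
    results = []

    def prefetch(slot, tile):
        buffers[slot] = tile.copy()        # stands in for the DMA transfer

    prefetch(0, tiles[0])                  # fill buffer 0 before the first compute step
    for i, _ in enumerate(tiles):
        nxt = threading.Thread(
            target=prefetch, args=((i + 1) % 2, tiles[i + 1])
        ) if i + 1 < len(tiles) else None
        if nxt:
            nxt.start()                    # overlap: load tile i+1 ...
        results.append(buffers[i % 2] @ weights)   # ... while the MMU computes on tile i
        if nxt:
            nxt.join()
    return results

# Example: 8 input tiles of shape (64, 64) multiplied by a (64, 64) weight matrix.
tiles = [np.random.rand(64, 64) for _ in range(8)]
out = run_pipeline(tiles, np.random.rand(64, 64))
```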

Visual Model of Pattern Design Based on Deep Convolutional Neural Network

  • Jingjing Ye;Jun Wang
    • KSII Transactions on Internet and Information Systems (TIIS) / v.18 no.2 / pp.311-326 / 2024
  • The rapid development of neural network technology enables big-data-driven neural network models to overcome the texture effects of complex objects. Because of the limitations of complex scenes, it is necessary to establish custom template matching and apply it across many fields of computer vision. Such approaches depend only weakly on small, high-quality labeled sample databases, and machine learning systems that connect deep features to perform texture-effect inference remain relatively weak. A neural-network-based style transfer algorithm collects and preserves pattern data, then extracts and modernizes pattern features; through this algorithmic model, it becomes easier to present the texture and color of patterns and to display them digitally. In this paper, based on the texture-effect reasoning of custom template matching, the 3D visualization of the target is transformed into a 3D model. The similarity between the scene to be inferred and the user-defined template is calculated from the template's multi-dimensional external feature labels. A convolutional neural network is adopted to optimize the outer region of the object, improving the sampling quality and computational performance of the sample pyramid structure. The results indicate that the proposed algorithm can accurately capture the salient target, remove more noise, and improve the visualization results. The proposed deep convolutional neural network optimization algorithm offers good speed, accuracy, and robustness. It can adapt to a wider range of task scenes, expose the redundant vision-related information in image conversion, and further improve the computational efficiency and accuracy of convolutional networks, which is of high significance for research on image information conversion.
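
As one concrete reference point for the "style transfer algorithm based on neural network" mentioned in the abstract, the following is a generic Gram-matrix style loss built on a pretrained VGG-19. It is not the authors' model; the chosen layers and normalization are conventional defaults.

```python
# Sketch only: generic Gram-matrix style loss over pretrained VGG-19 features.
import torch
import torch.nn.functional as F
from torchvision.models import vgg19, VGG19_Weights

features = vgg19(weights=VGG19_Weights.DEFAULT).features.eval()
STYLE_LAYERS = {1, 6, 11, 20, 29}     # ReLU layers commonly used for style statistics

def layer_activations(x):
    acts = []
    for i, layer in enumerate(features):
        x = layer(x)
        if i in STYLE_LAYERS:
            acts.append(x)
    return acts

def gram(feat):
    b, c, h, w = feat.shape
    f = feat.view(b, c, h * w)
    return f @ f.transpose(1, 2) / (c * h * w)

def style_loss(generated, style_image):
    """Sum of MSE distances between Gram matrices of generated and style images."""
    g_acts = layer_activations(generated)
    s_acts = layer_activations(style_image)
    return sum(F.mse_loss(gram(g), gram(s)) for g, s in zip(g_acts, s_acts))
```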

Nonlinear Vector Alignment Methodology for Mapping Domain-Specific Terminology into General Space (전문어의 범용 공간 매핑을 위한 비선형 벡터 정렬 방법론)

  • Kim, Junwoo;Yoon, Byungho;Kim, Namgyu
    • Journal of Intelligence and Information Systems / v.28 no.2 / pp.127-146 / 2022
  • Recently, as word embedding has shown excellent performance in various tasks of deep learning-based natural language processing, research on the advancement and application of word, sentence, and document embedding is being actively conducted. Among these directions, cross-language transfer, which enables semantic exchange between different languages, is growing alongside the development of embedding models. Academic interest in vector alignment is increasing with the expectation that it can be applied to various embedding-based analyses. In particular, vector alignment is expected to enable mapping between specialized domains and general domains. In other words, it should become possible to map the vocabulary of specialized fields such as R&D, medicine, and law into the space of a pre-trained language model learned from a huge volume of general-purpose documents, or to provide clues for mapping vocabulary between different specialized fields. However, the linear vector alignment that has mainly been studied assumes statistical linearity and therefore tends to oversimplify the vector space. It essentially assumes that different vector spaces are geometrically similar, which inevitably introduces distortion in the alignment process. To overcome this limitation, we propose a deep learning-based vector alignment methodology that effectively learns the nonlinearity of the data. The proposed methodology consists of sequentially training a skip-connected autoencoder and a regression model to align the specialized word embeddings with the general embedding space. Finally, through inference with the two trained models, the specialized vocabulary can be aligned in the general space. To verify the performance of the proposed methodology, an experiment was performed on a total of 77,578 documents in the field of health care among national R&D tasks performed from 2011 to 2020. The results confirm that the proposed methodology shows superior performance in terms of cosine similarity compared with existing linear vector alignment.
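
A minimal sketch of the two components the abstract describes: a skip-connected autoencoder over the specialized embeddings and a nonlinear regression network mapping them into the general space. The dimensions, depths, and training recipe are assumptions, not the authors' exact configuration.

```python
# Sketch only: skip-connected autoencoder plus nonlinear alignment regressor.
import torch.nn as nn

DIM = 300   # embedding dimensionality (assumption)

class SkipAutoencoder(nn.Module):
    def __init__(self, dim=DIM, hidden=128):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(dim, hidden), nn.ReLU())
        self.decoder = nn.Linear(hidden, dim)

    def forward(self, x):
        z = self.encoder(x)
        return self.decoder(z) + x          # skip connection from input to output

class Aligner(nn.Module):
    """Nonlinear regression from the specialized space to the general space."""
    def __init__(self, dim=DIM, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim, hidden), nn.ReLU(), nn.Linear(hidden, dim)
        )

    def forward(self, x):
        return self.net(x)

# Training would proceed sequentially: (1) fit SkipAutoencoder on specialized vectors
# with an MSE reconstruction loss, (2) fit Aligner on anchor word pairs
# (specialized vector -> general vector), then map unseen specialized vocabulary
# through both trained models to place it in the general space.
```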