Title/Summary/Keyword: Deep Features


PharmacoNER Tagger: a deep learning-based tool for automatically finding chemicals and drugs in Spanish medical texts

  • Armengol-Estape, Jordi; Soares, Felipe; Marimon, Montserrat; Krallinger, Martin
    • Genomics & Informatics / v.17 no.2 / pp.15.1-15.7 / 2019
  • Automatically detecting mentions of pharmaceutical drugs and chemical substances is key for the subsequent extraction of relations of chemicals with other biomedical entities such as genes, proteins, diseases, adverse reactions or symptoms. The identification of drug mentions is also a prior step for complex event types such as drug dosage recognition, duration of medical treatments or drug repurposing. Formally, this task is known as named entity recognition (NER), meaning automatically identifying mentions of predefined entities of interest in running text. In the domain of medical texts, for chemical entity recognition (CER), techniques based on hand-crafted rules and graph-based models can provide adequate performance. In recent years, the field of natural language processing has mainly pivoted to deep learning, and state-of-the-art results for most tasks involving natural language are usually obtained with artificial neural networks. Competitive resources for drug name recognition in English medical texts are already available and heavily used, while for other languages such as Spanish these tools, although clearly needed, were missing. In this work, we adapt an existing neural NER system, NeuroNER, to the particular domain of Spanish clinical case texts, and extend the neural network to take into account additional features apart from the plain text. The adapted NeuroNER can be considered a competitive baseline system for Spanish drug recognition and CER, promoted by the Spanish national plan for the advancement of language technologies (Plan TL).
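For illustration, the following minimal PyTorch sketch shows the general idea of extending a neural NER tagger to consume features beyond plain text, as the adapted NeuroNER does: per-token hand-crafted feature vectors are concatenated with the word embeddings before the BiLSTM. All names and layer sizes here are illustrative assumptions, not the NeuroNER implementation.

```python
# Illustrative sketch (not the NeuroNER codebase): a BiLSTM tagger whose
# input concatenates word embeddings with extra per-token feature vectors.
import torch
import torch.nn as nn

class FeatureAugmentedTagger(nn.Module):
    def __init__(self, vocab_size, embed_dim, extra_feat_dim, hidden_dim, num_tags):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        # The BiLSTM consumes [word embedding ; hand-crafted features] per token.
        self.lstm = nn.LSTM(embed_dim + extra_feat_dim, hidden_dim,
                            bidirectional=True, batch_first=True)
        self.out = nn.Linear(2 * hidden_dim, num_tags)  # per-token tag scores

    def forward(self, token_ids, extra_feats):
        x = torch.cat([self.embed(token_ids), extra_feats], dim=-1)
        h, _ = self.lstm(x)
        return self.out(h)  # (batch, seq_len, num_tags); a CRF layer could follow

# Toy usage: batch of 2 sentences, 5 tokens each, 3 extra features per token.
model = FeatureAugmentedTagger(vocab_size=1000, embed_dim=50,
                               extra_feat_dim=3, hidden_dim=64, num_tags=5)
scores = model(torch.randint(0, 1000, (2, 5)), torch.rand(2, 5, 3))
print(scores.shape)  # torch.Size([2, 5, 5])
```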

Deep Learning based Human Recognition using Integration of GAN and Spatial Domain Techniques

  • Sharath, S; Rangaraju, HG
    • International Journal of Computer Science & Network Security / v.21 no.8 / pp.127-136 / 2021
  • Real-time human recognition is a challenging task, as images are captured in an unconstrained environment with different poses, makeups, and styles. This limitation is addressed by generating several facial images with varied poses, makeup, and styles from a single reference image of a person using Generative Adversarial Networks (GAN). In this paper, we propose deep learning-based human recognition using an integration of GAN and spatial domain techniques. A novel concept of human recognition based on a face depiction approach is presented: several dissimilar face images are generated from a single reference face image using Domain Transfer Generative Adversarial Networks (DT-GAN), combined with feature extraction techniques such as Local Binary Pattern (LBP) and Histogram. The Euclidean Distance (ED) is used in the matching section to compare features and test the performance of the method. A database of millions of people with a single reference face image per person, instead of multiple reference face images, is created and saved on the centralized server, which helps to reduce its memory load. The recognition accuracy is 100% for smaller datasets and slightly lower for larger datasets; the results are also compared with existing methods to show the superiority of the proposed method.
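As a rough sketch of the matching stage described above, the snippet below computes LBP histogram features with scikit-image and matches a probe against a gallery by Euclidean distance. The GAN generation step is omitted, and all function names and parameter values are illustrative assumptions, not the paper's implementation.

```python
# Hedged sketch of LBP-histogram features compared by Euclidean distance.
import numpy as np
from skimage.feature import local_binary_pattern

def lbp_histogram(gray_img, P=8, R=1.0):
    # Uniform LBP produces P + 2 distinct codes; the normalized histogram
    # of codes serves as the per-image feature vector.
    codes = local_binary_pattern(gray_img, P, R, method="uniform")
    hist, _ = np.histogram(codes, bins=int(P + 2), range=(0, P + 2), density=True)
    return hist

def match(probe, gallery):
    # Return the gallery index with the smallest Euclidean distance.
    dists = [np.linalg.norm(probe - g) for g in gallery]
    return int(np.argmin(dists))

# Toy usage with random arrays standing in for reference/probe face images.
rng = np.random.default_rng(0)
gallery = [lbp_histogram(rng.random((64, 64))) for _ in range(3)]
probe = lbp_histogram(rng.random((64, 64)))
print("best match:", match(probe, gallery))
```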

A Computer-Aided Diagnosis of Brain Tumors Using a Fine-Tuned YOLO-based Model with Transfer Learning

  • Montalbo, Francis Jesmar P.
    • KSII Transactions on Internet and Information Systems (TIIS) / v.14 no.12 / pp.4816-4834 / 2020
  • This paper proposes transfer learning and fine-tuning techniques for a deep learning model to detect three distinct brain tumors from Magnetic Resonance Imaging (MRI) scans. In this work, the recent YOLOv4 model was trained using a collection of 3064 T1-weighted Contrast-Enhanced (CE)-MRI scans that were pre-processed and labeled for the task. The partial 29-layer YOLOv4-Tiny was trained and fine-tuned to work optimally and run efficiently on most platforms with reliable performance. With the help of transfer learning, the model had initial leverage to train faster with pre-trained weights from the COCO dataset, generating a robust set of features required for brain tumor detection. The results yielded the highest mean average precision of 93.14%, a 90.34% precision, 88.58% recall, and 89.45% F1-Score, outperforming previous versions of the YOLO detection models and other studies that used bounding box detection for the same task, such as Faster R-CNN. In conclusion, YOLOv4-Tiny can detect brain tumors automatically at a rapid pace with the help of proper fine-tuning and transfer learning. This work mainly contributes to assisting medical experts in the diagnostic process of brain tumors.
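The transfer-learning recipe described above can be illustrated with a short PyTorch sketch: load pretrained weights, freeze the generic feature layers, and retrain a replaced head for the three tumor classes. This is an assumption-laden illustration using an ImageNet-pretrained classifier, not the paper's Darknet YOLOv4-Tiny detection setup.

```python
# Generic transfer-learning sketch (illustrative only; the actual work
# fine-tuned YOLOv4-Tiny from COCO weights in the Darknet framework).
import torch
import torch.nn as nn
from torchvision import models

model = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1)

# Freeze the pretrained convolutional features so the early, generic
# filters keep their pretrained representations.
for p in model.features.parameters():
    p.requires_grad = False

# Replace the final layer with a 3-way head for the three tumor classes.
model.classifier[6] = nn.Linear(model.classifier[6].in_features, 3)

# Only the unfrozen parameters are handed to the optimizer.
optimizer = torch.optim.SGD(
    (p for p in model.parameters() if p.requires_grad), lr=1e-3, momentum=0.9)
```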

Fast and Accurate Single Image Super-Resolution via Enhanced U-Net

  • Chang, Le; Zhang, Fan; Li, Biao
    • KSII Transactions on Internet and Information Systems (TIIS) / v.15 no.4 / pp.1246-1262 / 2021
  • Recent studies have demonstrated the strong ability of deep convolutional neural networks (CNNs) to significantly boost performance in single image super-resolution (SISR). The key concern is how to efficiently recover and utilize diverse information frequencies across multiple network layers, which is crucial to satisfactory super-resolution reconstruction. Hence, previous work made great efforts to incorporate hierarchical frequencies through various sophisticated architectures. Nevertheless, economical SISR also requires a capable structural design that balances restoration accuracy against computational complexity, which remains a challenge for existing techniques. In this paper, we tackle this problem by proposing a competent architecture called the Enhanced U-Net Network (EUN), which can yield ready-to-use features in miscellaneous frequencies and combine them comprehensively. In particular, the proposed building block for EUN is enhanced from U-Net, which can extract abundant information via multiple skip concatenations. The network configuration allows the pipeline to propagate information from lower layers to higher ones, while the block itself grows quite deep in layers, enabling different types of information to spring from a single block. Furthermore, due to its strong advantage in distilling effective information, promising results are achieved with comparatively fewer filters. Comprehensive experiments show that our model achieves favorable performance over state-of-the-art methods, especially in terms of computational efficiency.
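A minimal sketch of the core structural idea, assuming PyTorch and illustrative layer sizes rather than the paper's exact EUN block: a small U-Net-style module whose decoder concatenates encoder features through a skip connection, propagating information from lower layers to higher ones.

```python
# Toy U-Net-style block with one skip concatenation (not the EUN block).
import torch
import torch.nn as nn

class TinyUNetBlock(nn.Module):
    def __init__(self, ch):
        super().__init__()
        self.enc = nn.Conv2d(ch, ch, 3, padding=1)
        self.down = nn.Conv2d(ch, ch, 3, stride=2, padding=1)  # halve resolution
        self.mid = nn.Conv2d(ch, ch, 3, padding=1)
        self.up = nn.ConvTranspose2d(ch, ch, 2, stride=2)      # restore resolution
        self.fuse = nn.Conv2d(2 * ch, ch, 3, padding=1)        # after skip concat
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        e = self.act(self.enc(x))
        m = self.act(self.mid(self.down(e)))
        u = self.up(m)
        # Skip concatenation: merge encoder features with upsampled ones.
        return self.act(self.fuse(torch.cat([e, u], dim=1)))

block = TinyUNetBlock(16)
print(block(torch.rand(1, 16, 32, 32)).shape)  # torch.Size([1, 16, 32, 32])
```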

Connection stiffness reduction analysis in steel bridge via deep CNN and modal experimental data

  • Dang, Hung V.; Raza, Mohsin; Tran-Ngoc, H.; Bui-Tien, T.; Nguyen, Huan X.
    • Structural Engineering and Mechanics / v.77 no.4 / pp.495-508 / 2021
  • This study devises a novel approach, namely a quadruple 1D convolutional neural network, for detecting connection stiffness reduction in steel truss bridge structures using experimental and numerical modal data. The method is developed based on expertise in two domains: firstly, in Structural Health Monitoring, mode shapes and their high-order derivatives, including the second, third, and fourth derivatives, are accurate indicators for assessing damage; secondly, in the Machine Learning literature, deep convolutional neural networks are able to extract relevant features from input data and then perform classification tasks with high accuracy and reduced time complexity. The efficacy and effectiveness of the present method are supported through an extensive case study with the Nam O railway bridge. It delivers highly accurate results in assessing damage localization and damage severity for single as well as multiple damage scenarios. In addition, the robustness of this method is tested in the presence of white noise, reflecting unavoidable uncertainties in signal processing and modeling in reality. The proposed approach provides stable results with data corrupted by noise of up to 10%.
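The branch layout can be sketched as follows, under the assumption of illustrative layer sizes (not the paper's exact architecture): the mode shape and its successive finite-difference derivatives each feed an independent 1D CNN branch, and the branch features are concatenated for classification.

```python
# Hedged sketch: derivative inputs plus a four-branch ("quadruple") 1D CNN.
import numpy as np
import torch
import torch.nn as nn

def derivative_stack(mode_shape):
    # Successive numerical derivatives of a 1D mode shape.
    d1 = np.gradient(mode_shape)
    d2 = np.gradient(d1)
    d3 = np.gradient(d2)
    d4 = np.gradient(d3)
    return np.stack([mode_shape, d2, d3, d4])  # (4, n_points)

class QuadBranchCNN(nn.Module):
    def __init__(self, n_classes):
        super().__init__()
        # One independent branch per input signal.
        self.branches = nn.ModuleList(
            nn.Sequential(nn.Conv1d(1, 8, 5, padding=2), nn.ReLU(),
                          nn.AdaptiveAvgPool1d(1))
            for _ in range(4))
        self.head = nn.Linear(4 * 8, n_classes)

    def forward(self, x):                      # x: (batch, 4, n_points)
        feats = [b(x[:, i:i+1]).flatten(1) for i, b in enumerate(self.branches)]
        return self.head(torch.cat(feats, dim=1))

sig = derivative_stack(np.sin(np.linspace(0, np.pi, 100)))
out = QuadBranchCNN(n_classes=5)(torch.tensor(sig, dtype=torch.float32).unsqueeze(0))
print(out.shape)  # torch.Size([1, 5])
```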

Novel Image Classification Method Based on Few-Shot Learning in Monkey Species

  • Wang, Guangxing; Lee, Kwang-Chan; Shin, Seong-Yoon
    • Journal of information and communication convergence engineering / v.19 no.2 / pp.79-83 / 2021
  • This paper proposes a novel image classification method based on few-shot learning, which is mainly used to solve model overfitting and non-convergence in image classification tasks on small datasets and to improve classification accuracy. The method uses model structure optimization to extend the basic convolutional neural network (CNN) model, extracting more image features by adding convolutional layers and thereby improving classification accuracy. We incorporated several measures to improve the performance of the model. First, we used general methods such as setting a lower learning rate and shuffling to promote rapid convergence of the model. Second, we used data augmentation to preprocess the small dataset, increasing the number of training samples and suppressing over-fitting. We applied the model to 10 monkey species and achieved outstanding performance. Experiments indicated that our proposed method achieved an accuracy of 87.92%, which is 26.1% higher than that of the traditional CNN method and 1.1% higher than that of the deep convolutional neural network ResNet50.
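A minimal sketch of the two measures highlighted above, assuming PyTorch/torchvision and a hypothetical dataset path ("data/monkeys/train", one subdirectory per species): data augmentation to expand the small training set, shuffling, and a low learning rate.

```python
# Illustrative training setup; the CNN stub stands in for the extended model.
import torch
import torch.nn as nn
from torchvision import datasets, transforms

augment = transforms.Compose([
    transforms.RandomHorizontalFlip(),                    # flips...
    transforms.RandomRotation(15),                        # ...rotations...
    transforms.RandomResizedCrop(224, scale=(0.8, 1.0)),  # ...and crops
    transforms.ToTensor(),
])

train_set = datasets.ImageFolder("data/monkeys/train", transform=augment)
loader = torch.utils.data.DataLoader(train_set, batch_size=32, shuffle=True)

# Stand-in CNN with a 10-way head for the 10 monkey species.
model = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, 10))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)  # low learning rate
```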

Establishment of Priority Update Area for Land Coverage Classification Using Orthoimages and Serial Cadastral Maps

  • Song, Junyoung; Won, Taeyeon; Jo, Su Min; Eo, Yang Dam; Park, Jin Sue
    • Korean Journal of Remote Sensing / v.37 no.4 / pp.763-776 / 2021
  • This paper introduces a method of selecting priority update areas for subdivided land cover maps by training a deep learning model on orthoimages and serial cadastral maps. For the experiment, orthoimages and serial cadastral maps were obtained from the National Spatial Data Infrastructure Portal. Based on the VGG-16 model, 51,470 images were trained on 33 subdivided classifications within the experimental area, and an accuracy evaluation was conducted; the overall accuracy was 61.42%. In addition, cases were classified using two signals: the difference in classification prediction probabilities for each misclassified polygon, and the cosine similarity that numerically expresses how similar the land category features are to the original subdivided land cover class. Areas in which the boundary setting was incorrect, or in which the image itself was determined to have a problem, were identified as priority update polygons to be checked by operators.
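The two screening signals can be sketched as follows, with illustrative thresholds that are assumptions rather than the paper's settings: the gap between the top-two predicted class probabilities of a misclassified polygon, and the cosine similarity between its features and the mean features of its original land cover class.

```python
# Hedged sketch of the priority-polygon screening logic.
import numpy as np

def cosine_similarity(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def is_priority_polygon(class_probs, feat, class_mean_feat,
                        prob_gap_thresh=0.1, cos_thresh=0.5):
    top2 = np.sort(class_probs)[-2:]
    prob_gap = top2[1] - top2[0]        # small gap = uncertain prediction
    cos = cosine_similarity(feat, class_mean_feat)
    # Uncertain prediction and dissimilar features -> flag for operator check.
    return prob_gap < prob_gap_thresh and cos < cos_thresh

probs = np.array([0.05, 0.48, 0.44, 0.03])  # toy VGG-16 softmax output
print(is_priority_polygon(probs, np.array([1., 0., 0.]),
                          np.array([0., 1., 0.])))  # True
```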

A Robust Energy Consumption Forecasting Model using ResNet-LSTM with Huber Loss

  • Albelwi, Saleh
    • International Journal of Computer Science & Network Security / v.22 no.7 / pp.301-307 / 2022
  • Energy consumption has grown alongside dramatic population increases. Statistics show that buildings in particular consume a significant amount of energy worldwide. Because of this, building energy prediction is crucial for optimizing utilities' energy plans and for creating predictive models for consumers. To improve energy prediction performance, this paper proposes a ResNet-LSTM model that combines residual networks (ResNets) and long short-term memory (LSTM) for energy consumption prediction. ResNets are utilized to extract complex and rich features, while LSTM learns temporal correlations; a dense layer is used as the regressor to forecast energy consumption. To make the model more robust, we employed Huber loss during optimization. Huber loss penalizes small errors quadratically, which is efficient, and large errors by their absolute value, which increases robustness and makes the model less sensitive to outliers. The proposed system was trained on historical data to forecast energy consumption for different time series. To evaluate the proposed model, we compared its performance with several popular machine learning and deep learning methods, including linear regression, neural networks, decision trees, and convolutional neural networks. The results show that our proposed model predicted energy consumption most accurately.
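A small sketch of the loss choice and model shape, assuming PyTorch and illustrative layer sizes (the `ResNetLSTMSketch` stand-in is not the paper's architecture): Huber loss is quadratic for residuals below a threshold delta and linear beyond it, which damps the influence of outliers.

```python
import torch
import torch.nn as nn

class ResNetLSTMSketch(nn.Module):
    # Minimal stand-in: a 1D residual block feeding an LSTM,
    # with a dense layer as the regressor.
    def __init__(self, hidden=32):
        super().__init__()
        self.conv = nn.Conv1d(1, 1, 3, padding=1)
        self.lstm = nn.LSTM(1, hidden, batch_first=True)
        self.fc = nn.Linear(hidden, 1)

    def forward(self, x):                 # x: (batch, steps, 1)
        r = self.conv(x.transpose(1, 2)).transpose(1, 2)
        h, _ = self.lstm(x + r)           # residual connection
        return self.fc(h[:, -1])          # forecast from the last time step

model = ResNetLSTMSketch()
criterion = nn.HuberLoss(delta=1.0)       # quadratic below delta, linear above
x, y = torch.rand(8, 24, 1), torch.rand(8, 1)
loss = criterion(model(x), y)
loss.backward()
```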

Hybrid Learning-Based Cell Morphology Profiling Framework for Classifying Cancer Heterogeneity

  • Min, Chanhong; Jeong, Hyuntae; Yang, Sejung; Shin, Jennifer Hyunjong
    • Journal of Biomedical Engineering Research / v.42 no.5 / pp.232-240 / 2021
  • Heterogeneity in cancer is a major obstacle to precision medicine and has become a critical issue in the field of cancer diagnosis. Many attempts have been made to disentangle this complexity by molecular classification. However, the multi-dimensional information in the dynamic responses of cancer poses fundamental limitations on conventional approaches based on biomolecular markers. Cell morphology, which reflects the physiological state of the cell, can conveniently be used to track the temporal behavior of cancer cells. Here, we present a hybrid learning-based platform that extracts cell morphology in a time-dependent manner using a deep convolutional neural network to incorporate multivariate data. Feature selection from more than 200 morphological features is conducted, filtering out less significant variables to enhance interpretation. Our platform then performs unsupervised clustering to unveil dynamic behavior patterns hidden in the high-dimensional dataset. As a result, we visualize the morphology state-space by two-dimensional embedding, along with representative morphology clusters and trajectories. This hybrid learning-based cell morphology profiling strategy enables simplification of the heterogeneous population of cancer.
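A minimal pipeline sketch under assumed tooling (scikit-learn, with synthetic data; the paper's exact methods may differ): variance-based feature filtering, unsupervised clustering, and a two-dimensional embedding of the morphology state-space.

```python
# Illustrative feature-selection / clustering / embedding pipeline.
import numpy as np
from sklearn.feature_selection import VarianceThreshold
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
features = rng.random((500, 200))   # 500 cells x 200 morphological features

selected = VarianceThreshold(0.05).fit_transform(features)  # drop weak variables
labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(selected)
embedding = PCA(n_components=2).fit_transform(selected)     # 2D state-space view

print(selected.shape, np.bincount(labels), embedding.shape)
```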

Trends of Compiler Development for AI Processor

  • Kim, J.K.; Kim, H.J.; Cho, Y.C.P.; Kim, H.M.; Lyuh, C.G.; Han, J.; Kwon, Y.
    • Electronics and Telecommunications Trends / v.36 no.2 / pp.32-42 / 2021
  • The rapid growth of deep-learning applications has driven the R&D of artificial intelligence (AI) processors. A dedicated software framework, such as a compiler and runtime APIs, is required to achieve maximum processor performance. There are various compilers and frameworks for AI training and inference. In this study, we present the features and characteristics of AI compilers, training frameworks, and inference engines. In addition, we focus on the internals of compiler frameworks, which are based on either basic linear algebra subprograms or an intermediate representation. For in-depth insight, we present the compiler infrastructure, internal components, and operation flow of ETRI's "AI-Ware." The software framework's significant role is evidenced by the optimized neural processing unit code produced by the compiler after various optimization passes, such as scheduling, architecture-aware optimization, schedule selection, and power optimization. We conclude the study with thoughts on the future of state-of-the-art AI compilers.
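As a toy illustration of the pass-pipeline idea, the sketch below chains IR-rewriting passes (here, a simple multiply-add fusion over a list-of-ops IR). It is entirely illustrative and bears no relation to AI-Ware's actual IR or passes.

```python
# Toy compiler pass pipeline: each pass rewrites the IR and hands it on.
def fuse_mul_add(ir):
    # Rewrite adjacent mul+add into a single fused multiply-add op.
    out, i = [], 0
    while i < len(ir):
        if i + 1 < len(ir) and ir[i]["op"] == "mul" and ir[i + 1]["op"] == "add":
            out.append({"op": "fma", "args": ir[i]["args"] + ir[i + 1]["args"]})
            i += 2
        else:
            out.append(ir[i])
            i += 1
    return out

def run_pipeline(ir, passes):
    for p in passes:           # each optimization pass transforms the IR
        ir = p(ir)
    return ir

ir = [{"op": "mul", "args": ["a", "b"]}, {"op": "add", "args": ["c"]}]
print(run_pipeline(ir, [fuse_mul_add]))
# [{'op': 'fma', 'args': ['a', 'b', 'c']}]
```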