• Title/Summary/Keyword: Learning data set

Search Results: 1,101

CBIR-based Data Augmentation and Its Application to Deep Learning (CBIR 기반 데이터 확장을 이용한 딥 러닝 기술)

  • Kim, Sesong;Jung, Seung-Won
    • Journal of Broadcast Engineering / v.23 no.3 / pp.403-408 / 2018
  • Generally, a large data set is required for deep learning. However, since it is not easy to create large data sets, many techniques enlarge small data sets through simple transformations such as rotation, flipping, and filtering. These simple techniques, however, offer limited extensibility because the generated images cannot escape the features already present in the original data. To solve this problem, we propose a method to acquire new image data by using existing data: similar images are retrieved and acquired by using existing image data as queries for content-based image retrieval (CBIR). Finally, we compare the performance of the base model with that of the model trained with CBIR-augmented data.
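
To make the augmentation idea concrete, here is a minimal Python sketch of the retrieval step, assuming a pre-collected external image pool and a deliberately simple pixel-based descriptor (the paper's actual CBIR features are not specified in the abstract):

```python
# Sketch of CBIR-based augmentation: every training image is used as a
# query, and its nearest neighbours in an external image pool are added
# to the training set. The descriptor below is a simple placeholder,
# not the paper's actual CBIR feature.
import numpy as np
from sklearn.neighbors import NearestNeighbors

def extract_features(images):
    # Flattened, L2-normalised pixel vectors as a stand-in descriptor.
    flat = images.reshape(len(images), -1).astype(np.float32)
    return flat / (np.linalg.norm(flat, axis=1, keepdims=True) + 1e-8)

def cbir_augment(train_images, pool_images, k=5):
    """Return the pool images retrieved with each training image as a query."""
    index = NearestNeighbors(n_neighbors=k, metric="cosine")
    index.fit(extract_features(pool_images))
    _, idx = index.kneighbors(extract_features(train_images))
    return pool_images[np.unique(idx.ravel())]
```

The retrieved images would then be concatenated with the original training set before training the final model.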

A Deep Learning based IOT Device Recognition System (딥러닝을 이용한 IOT 기기 인식 시스템)

  • Chu, Yeon Ho;Choi, Young Kyu
    • Journal of the Semiconductor & Display Technology / v.18 no.2 / pp.1-5 / 2019
  • As the number of IoT devices grows rapidly, various 'see-thru connection' techniques have been reported for efficient communication with them. In this paper, we propose a deep learning based IoT device recognition system for interaction with these devices. The overall system consists of a TensorFlow-based deep learning server and two Android apps for data collection and recognition. As the basic neural network model, we adopted Google's Inception-v3 and modified the output stage to classify 20 types of IoT devices. After creating a data set of 1000 images across 20 categories, we trained our deep learning network using transfer learning. In the experiments, we achieved 94.5% top-1 accuracy and 98.1% top-2 accuracy.
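
A hedged sketch of the transfer-learning setup described above, with Inception-v3's output stage replaced by a 20-way classifier; the optimizer, training schedule, and data pipeline are assumptions:

```python
# Sketch of the transfer-learning setup: Inception-v3 pre-trained on
# ImageNet, with a new 20-way output stage for the IoT device classes.
import tensorflow as tf

base = tf.keras.applications.InceptionV3(
    weights="imagenet", include_top=False, pooling="avg",
    input_shape=(299, 299, 3))
base.trainable = False  # reuse pre-trained features; train only the new head

model = tf.keras.Sequential([
    base,
    tf.keras.layers.Dense(20, activation="softmax"),  # 20 IoT device types
])
model.compile(
    optimizer="adam",
    loss="sparse_categorical_crossentropy",
    metrics=["accuracy",
             tf.keras.metrics.SparseTopKCategoricalAccuracy(k=2, name="top2")])
# model.fit(train_ds, epochs=10)  # train_ds: the 1000 labelled device images
```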

A Tolerant Rough Set Approach for Handwritten Numeral Character Classification

  • Kim, Daijin;Kim, Chul-Hyun
    • Proceedings of the Korean Institute of Intelligent Systems Conference / 1998.06a / pp.288-295 / 1998
  • This paper proposes a new data classification method based on the tolerant rough set, which extends the existing equivalence-based rough set. The similarity between two data items is described by a distance function over all constituent attributes, and two items are defined to be tolerant when their similarity exceeds a similarity threshold. Determining the optimal similarity threshold is very important for accurate classification, so we determine it with a genetic algorithm (GA), where the goal of evolution is to balance two requirements, one being that tolerant objects should be included in the same class as much as possible. After finding the optimal similarity threshold, a tolerant set of each object is obtained, and the data set is divided into lower and upper approximation sets depending on the coincidence of their classes. We propose a two-stage classification method in which all data are first classified using the lower approximation, and the data left unclassified at the first stage are then classified using rough membership functions obtained from the upper approximation set. We apply the proposed method to the handwritten numeral character classification problem and compare its classification performance and learning time with those of a feedforward neural network trained with the backpropagation algorithm.
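
The tolerance relation and the two approximation sets can be sketched as follows; the similarity function is a simplified placeholder, and the GA-based threshold search is omitted:

```python
# Sketch of the tolerance relation and the resulting approximation sets.
import numpy as np

def similarity(x, y):
    # Similarity as an inverse function of attribute-wise distance.
    return 1.0 / (1.0 + np.linalg.norm(x - y))

def tolerant_set(X, i, threshold):
    """Indices of all objects tolerant with object i."""
    return {j for j in range(len(X)) if similarity(X[i], X[j]) >= threshold}

def approximations(X, labels, threshold):
    """Objects whose tolerant set is class-pure form the lower approximation;
    the rest fall into the upper approximation of their class."""
    lower, upper = [], []
    for i in range(len(X)):
        classes = {labels[j] for j in tolerant_set(X, i, threshold)}
        (lower if classes == {labels[i]} else upper).append(i)
    return lower, upper
```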


Fuzzy Neural Network Using a Fuzzy Learning Rule (퍼지 학습 규칙을 이용한 퍼지 신경회로망)

  • Kim, Yong Soo
    • Proceedings of the Korean Institute of Intelligent Systems Conference / 1997.11a / pp.180-184 / 1997
  • This paper presents a fuzzy neural network that utilizes a fuzzified Kohonen learning rule, which uses a fuzzy membership value, a function of the iteration count, and an intra-membership value instead of a learning rate. The IRIS data set is used to test the fuzzy neural network. The test results show that the performance of the fuzzy neural network depends on k and the vigilance parameter T.
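
A rough sketch of a fuzzified Kohonen update in which a fuzzy membership value, sharpened as iterations proceed, plays the role of the learning rate; the paper's exact rule and its parameters (e.g. k and T) may differ:

```python
# Sketch of a fuzzified Kohonen update with an FCM-style membership value.
import numpy as np

def fuzzy_kohonen_step(weights, x, t, max_iter, m=2.0):
    """Update all prototype vectors `weights` for one input vector `x`."""
    dists = np.linalg.norm(weights - x, axis=1) + 1e-8
    # Fuzzy membership of x in each prototype (fuzzy c-means formula).
    ratio = (dists[:, None] / dists[None, :]) ** (2.0 / (m - 1.0))
    memberships = 1.0 / ratio.sum(axis=1)
    # Iteration-dependent exponent replaces a fixed learning rate.
    rate = memberships ** (m * (t + 1) / max_iter)
    return weights + rate[:, None] * (x - weights)
```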


Subgroup Discovery Method with Internal Disjunctive Expression

  • Kim, Seyoung;Ryu, Kwang Ryel
    • Journal of the Korea Society of Computer and Information / v.22 no.1 / pp.23-32 / 2017
  • We can obtain useful knowledge from data by using a subgroup discovery algorithm. Subgroup discovery is a rule-model learning method that finds data subgroups containing specific information and expresses them in rule form. Subgroups are meaningful when they account for a high percentage of the total data and differ significantly from the overall data. Previously, a subgroup was expressed only as a conjunction of literals, which limited the scope of the rules that could be derived from the learning process. In this paper, we propose a method to increase the expressiveness of rules through an internal disjunctive representation of attribute values. We also analyze the characteristics of existing subgroup discovery algorithms and propose an improved algorithm that complements their defects while retaining their advantages. Experiments are conducted on traffic accident data provided by the Busan metropolitan government. The results show that the proposed method performs better than existing methods, and the rule set it learns contains more interesting and more general rules.
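
A small sketch of the internal disjunctive representation: a rule condition maps an attribute to a set of allowed values, and subgroups are scored here with weighted relative accuracy (WRAcc), a standard subgroup discovery measure (the paper's exact quality measure is not stated in the abstract):

```python
# Internal disjunction: "weather is rain OR snow" is a single condition.
def matches(row, condition):
    """condition: {attribute: set of allowed values} (internal disjunction)."""
    return all(row[attr] in allowed for attr, allowed in condition.items())

def wracc(data, target_attr, target_value, condition):
    n = len(data)
    covered = [row for row in data if matches(row, condition)]
    p_cond = len(covered) / n
    p_target = sum(row[target_attr] == target_value for row in data) / n
    p_both = sum(row[target_attr] == target_value for row in covered) / n
    return p_both - p_cond * p_target  # = P(cond) * (P(target|cond) - P(target))

accidents = [{"weather": "rain", "severe": True},
             {"weather": "clear", "severe": False},
             {"weather": "snow", "severe": True}]
print(wracc(accidents, "severe", True, {"weather": {"rain", "snow"}}))  # 0.222...
```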

Image-based rainfall prediction from a novel deep learning method

  • Byun, Jongyun;Kim, Jinwon;Jun, Changhyun
    • Proceedings of the Korea Water Resources Association Conference / 2021.06a / pp.183-183 / 2021
  • Deep learning methods and their applications have become an essential part of prediction and modeling in water-related research areas, including hydrological processes, climate change, etc. It is known that deep learning increases the availability of data sources in hydrology, which shows its usefulness in the analysis of precipitation, runoff, groundwater level, evapotranspiration, and so on. However, microclimate analysis and prediction with deep learning methods remain limited because of the deficiency of gauge-based data and the shortcomings of existing technologies. In this study, a real-time rainfall prediction model was developed from a sky-image data set with convolutional neural networks (CNNs). The daily image data were collected at Chung-Ang University and Korea University. For high accuracy, the proposed model considers data classification, image processing, and ratio adjustment of no-rain data. Rainfall predictions were compared with minutely rainfall data from rain gauge stations close to the image sensors. The results indicate that the proposed model could complement the current rainfall observation system and has large potential to fill observation gaps. Information from small-scale areas can advance accurate weather forecasting and hydrological modeling at the micro scale.
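
As an illustration only, a compact CNN regressor from a sky image to a rainfall estimate; the architecture, input size, and loss are assumptions, since the abstract does not specify the network:

```python
# Illustrative CNN mapping a sky image to a non-negative rainfall value.
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.Input(shape=(128, 128, 3)),               # sky image
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(64, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(1, activation="relu"),        # rainfall >= 0
])
model.compile(optimizer="adam", loss="mse")
```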


Machine Learning Model to Predict Osteoporotic Spine with Hounsfield Units on Lumbar Computed Tomography

  • Nam, Kyoung Hyup;Seo, Il;Kim, Dong Hwan;Lee, Jae Il;Choi, Byung Kwan;Han, In Ho
    • Journal of Korean Neurosurgical Society / v.62 no.4 / pp.442-449 / 2019
  • Objective: Bone mineral density (BMD) is an important consideration during fusion surgery. Although dual X-ray absorptiometry is considered the gold standard for assessing BMD, quantitative computed tomography (QCT) provides more accurate data on spinal osteoporosis. However, QCT has the disadvantages of additional radiation exposure and cost. The present study aimed to demonstrate the utility of artificial intelligence and a machine learning algorithm for assessing osteoporosis using Hounsfield units (HU) from preoperative lumbar CT coupled with QCT data. Methods: We reviewed 70 patients undergoing both QCT and conventional lumbar CT for spine surgery. The T-scores of 198 lumbar vertebrae were assessed with QCT, and the HU of the vertebral body at the same level were measured on conventional CT through the picture archiving and communication system (PACS). A multiple regression algorithm was applied to predict the T-score using three independent variables (age, sex, and HU of the vertebral body on conventional CT) coupled with the T-score from QCT. Next, a logistic regression algorithm was applied to classify vertebrae as osteoporotic or non-osteoporotic. TensorFlow and Python were used as the machine learning tools, and a TensorFlow user interface developed at our institute was used for easy code generation. Results: The predictive model with the multiple regression algorithm estimated T-scores similar to the QCT data. HU demonstrated results similar to QCT, with discordance in only one non-osteoporotic vertebra that was indicated as osteoporotic. On the training set, the predictive model classified the lumbar vertebrae into two groups (osteoporotic vs. non-osteoporotic spine) with 88.0% accuracy. On a test set of 40 vertebrae, classification accuracy was 92.5% when the learning rate was 0.0001 (precision, 0.939; recall, 0.969; F1 score, 0.954; area under the curve, 0.900). Conclusion: This study presents a simple machine learning model applicable to the spine research field. The model can predict the T-score and identify osteoporotic vertebrae solely by measuring the HU of conventional CT, which would help spine surgeons avoid underestimating the osteoporotic spine preoperatively. If applied to a bigger data set, we believe the predictive accuracy of our model will further increase. We propose that machine learning is an important modality in medical research.
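
The two modeling steps can be sketched as follows, using scikit-learn for brevity although the authors used TensorFlow; the data below is synthetic and the coefficients are invented for illustration:

```python
# Sketch: (1) multiple regression of the QCT T-score on age, sex, and HU;
# (2) logistic classification of osteoporotic vertebrae. Synthetic data.
import numpy as np
from sklearn.linear_model import LinearRegression, LogisticRegression

rng = np.random.default_rng(0)
age = rng.uniform(40, 85, 198)
sex = rng.integers(0, 2, 198)          # 0 = male, 1 = female (assumed coding)
hu = rng.uniform(50, 250, 198)         # vertebral-body Hounsfield units
t_score = 0.02 * hu - 0.03 * age - 4.0 + rng.normal(0, 0.3, 198)  # synthetic

X = np.column_stack([age, sex, hu])
t_model = LinearRegression().fit(X, t_score)         # step 1: predict T-score
osteo = (t_score <= -2.5).astype(int)                # WHO osteoporosis threshold
cls_model = LogisticRegression(max_iter=1000).fit(X, osteo)  # step 2: classify
print(t_model.predict([[70, 1, 110]]), cls_model.predict([[70, 1, 110]]))
```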

Automated Segmentation of Left Ventricular Myocardium on Cardiac Computed Tomography Using Deep Learning

  • Hyun Jung Koo;June-Goo Lee;Ji Yeon Ko;Gaeun Lee;Joon-Won Kang;Young-Hak Kim;Dong Hyun Yang
    • Korean Journal of Radiology / v.21 no.6 / pp.660-669 / 2020
  • Objective: To evaluate the accuracy of deep learning-based automated segmentation of the left ventricular (LV) myocardium using cardiac CT. Materials and Methods: To develop a fully automated algorithm, 100 subjects with coronary artery disease were randomly selected as a development set (50 training / 20 validation / 30 internal test). An experienced cardiac radiologist generated the manual segmentations of the development set. The trained model was evaluated on a validation set of 1000 cases generated by an experienced technician. Visual assessment was performed to compare the manual and automatic segmentations. In a quantitative analysis, sensitivity and specificity were calculated from the number of pixels where the two three-dimensional masks of the manual and deep learning segmentations overlapped. Similarity indices, such as the Dice similarity coefficient (DSC), were used to evaluate the margins of the segmented masks. Results: The sensitivity and specificity of automated segmentation for each segment (segments 1-16) were high (85.5-100.0%). The DSC was 88.3 ± 6.2%. Among 100 randomly selected cases, all manual and deep learning masks assessed visually were classified as very accurate to mostly accurate, with no inaccurate cases (manual vs. deep learning: very accurate, 31 vs. 53; accurate, 64 vs. 39; mostly accurate, 15 vs. 8). The number of very accurate cases was greater for the deep learning masks than for the manually segmented masks. Conclusion: We present deep learning-based automatic segmentation of the LV myocardium; the results are comparable to manual segmentation data, with high sensitivity, high specificity, and high similarity scores.
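
For reference, the overlap metrics reported above (sensitivity, specificity, and DSC) can be computed from two binary 3D masks as follows:

```python
# Overlap metrics between a manual and an automatic segmentation mask.
import numpy as np

def overlap_metrics(manual, auto):
    """manual, auto: boolean arrays of identical shape."""
    tp = np.logical_and(manual, auto).sum()
    tn = np.logical_and(~manual, ~auto).sum()
    fp = np.logical_and(~manual, auto).sum()
    fn = np.logical_and(manual, ~auto).sum()
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    dsc = 2 * tp / (2 * tp + fp + fn)   # Dice similarity coefficient
    return sensitivity, specificity, dsc

rng = np.random.default_rng(0)
mask = rng.random((16, 16, 16)) > 0.5
print(overlap_metrics(mask, mask))      # identical masks give DSC = 1.0
```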

Learning Deep Representation by Increasing ConvNets Depth for Few Shot Learning

  • Fabian, H.S. Tan;Kang, Dae-Ki
    • International Journal of Advanced Smart Convergence / v.8 no.4 / pp.75-81 / 2019
  • Although recent advancements in deep learning methods have provided satisfactory results in large-data domains, they yield poor performance on few-shot classification tasks. Training a model with strong performance, such as a deep convolutional neural network, depends heavily on a huge dataset, and the number of labeled classes can be extremely large. The cost of human annotation and the scarcity of data among the classes have drastically limited the capability of current image classification models. On the contrary, humans are excellent at learning or recognizing new unseen classes from merely a small set of labeled examples. Few-shot learning aims to train a classification model with limited labeled samples to recognize new classes that have never been seen during training. In this paper, we increase the backbone depth of the embedding network in order to learn intra-class variation. By increasing the network depth of the embedding module, we achieve competitive performance due to the minimized intra-class variation.
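
A minimal sketch of the idea, assuming a prototypical-style nearest-class-mean classifier on top of an embedding network whose depth is a parameter (the paper's exact head, block sizes, and input resolution are not given in the abstract):

```python
# Sketch: deeper embedding backbone + nearest-prototype few-shot classifier.
import numpy as np
import tensorflow as tf

def embedding_network(depth=4, width=64):
    """Stack `depth` conv blocks; the paper's point is that deeper backbones
    learn embeddings with smaller intra-class variation."""
    model = tf.keras.Sequential([tf.keras.Input(shape=(84, 84, 3))])
    for _ in range(depth):
        model.add(tf.keras.layers.Conv2D(width, 3, padding="same",
                                         activation="relu"))
        model.add(tf.keras.layers.MaxPooling2D())
    model.add(tf.keras.layers.GlobalAveragePooling2D())
    return model

def nearest_prototype(embed, support_x, support_y, query_x):
    """Assign each query image to the class with the closest mean embedding."""
    emb_s = embed(support_x).numpy()
    emb_q = embed(query_x).numpy()
    classes = np.unique(support_y)
    protos = np.stack([emb_s[support_y == c].mean(axis=0) for c in classes])
    dists = np.linalg.norm(emb_q[:, None, :] - protos[None, :, :], axis=-1)
    return classes[dists.argmin(axis=1)]
```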

Human-like sign-language learning method using deep learning

  • Ji, Yangho;Kim, Sunmok;Kim, Young-Joo;Lee, Ki-Baek
    • ETRI Journal / v.40 no.4 / pp.435-445 / 2018
  • This paper proposes a human-like sign-language learning method that uses a deep-learning technique. Inspired by the fact that humans can learn sign language from just a set of pictures in a book, the proposed method pre-processes the input data into an image. In addition, the network is partially pre-trained to imitate the preliminarily obtained knowledge of humans. The learning process is implemented with a well-known network, namely a convolutional neural network. Twelve sign actions are learned in 10 situations and can be recognized with an accuracy of 99% in scenarios with low-cost equipment and limited data. The results show that the system is highly practical, as well as accurate and robust.
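
A hedged sketch of the two ideas in the abstract: tiling sampled video frames into one picture-book-style image, and reusing a partially pre-trained convolutional front end. MobileNetV2 with frozen ImageNet weights stands in for the unnamed backbone, and all details below are assumptions:

```python
# (1) Tile sampled frames into a single composite image; (2) train a
# 12-way classifier (one class per sign action) on a frozen backbone.
import numpy as np
import tensorflow as tf

def frames_to_image(frames, grid=(2, 2)):
    """Tile evenly sampled frames (H x W x 3 each) into one composite image."""
    rows, cols = grid
    idx = np.linspace(0, len(frames) - 1, rows * cols).astype(int)
    tiles = [frames[i] for i in idx]
    return np.concatenate(
        [np.concatenate(tiles[r * cols:(r + 1) * cols], axis=1)
         for r in range(rows)], axis=0)

base = tf.keras.applications.MobileNetV2(
    weights="imagenet", include_top=False, pooling="avg",
    input_shape=(224, 224, 3))
base.trainable = False  # "partially pre-trained": keep generic visual features
model = tf.keras.Sequential([base,
                             tf.keras.layers.Dense(12, activation="softmax")])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
```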