• Title/Summary/Keyword: Feature-based classification


Comparison of cone beam computed tomography and conventional panoramic radiography in assessing the topographic relationship between the mandibular canal and impacted third molars (하악 제3대구치와 하악관과의 위치관계에 대한 파노라마 방사선사진과 cone beam형 전산화단층촬영상의 비교)

  • Choi, Hyung-Soo;Kim, Gyu-Tae;Choi, Yong-Suk;Hwang, Eui-Hwan
    • Imaging Science in Dentistry / v.38 no.3 / pp.169-176 / 2008
  • Purpose: To assess the diagnostic accuracy and value of cone beam computed tomography (CBCT) compared with conventional panoramic radiography in evaluating the topographic relationship between the mandibular canal and impacted third molars. Materials and Methods: Participants consisted of 100 patients who underwent both cone beam computed tomography and panoramic radiography. A PSR-9000™ Dental CT system (Asahi Roentgen Ind. Co., Ltd, Japan) was used for cone beam computed tomography, and CE-II (Asahi Roentgen Ind. Co., Ltd, Japan) and Pro Max (Planmeca Oy, Finland) units were used for panoramic radiography. The panoramic images were classified into 3 types according to the distance between the mandibular canal and the root of the mandibular third molar, and into 4 types according to radiographic signs of proximity. The corresponding CBCT images were then classified into 4 types according to the position of the mandibular canal relative to the root, namely buccal, inferior, lingual, and between the roots, and analyzed. The data were statistically analyzed with the chi-square (χ²) test. Results: 1. The 3 distance types (type I, type II, type III) showed no statistically significant association with the CBCT findings. 2. The 4 proximity types (type A, type B, type C, type D) showed statistically significant associations with the CBCT findings, with CBCT 1 most prevalent in type A, CBCT 2 in type B, CBCT 3 in type C, and CBCT 1 in type D (P value = 0.03). 3. On CBCT, the mandibular canal was located buccal, inferior, lingual, and between the roots in 49, 25, 17, and 9 cases, respectively. Conclusion: Panoramic radiography alone makes it difficult to reach a reliable judgment of the relationship between the mandibular canal and the roots. An accurate diagnostic approach using CBCT, which can determine the position of the mandibular canal relative to the roots, is therefore required.
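
A minimal sketch of how such a chi-square test of independence between the panoramic proximity types and the CBCT classifications could be run; the contingency-table counts below are hypothetical, not the study's data:

```python
# Hypothetical contingency table: rows = panoramic proximity types (A-D),
# columns = CBCT classifications (CBCT 1-4). Counts are made up for illustration.
import numpy as np
from scipy.stats import chi2_contingency

table = np.array([
    [18, 4, 2, 1],    # type A
    [5, 14, 3, 2],    # type B
    [2, 3, 12, 4],    # type C
    [9, 5, 6, 10],    # type D
])

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.3f}")
```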


Cascade CNN with CPU-FPGA Architecture for Real-time Face Detection (실시간 얼굴 검출을 위한 Cascade CNN의 CPU-FPGA 구조 연구)

  • Nam, Kwang-Min;Jeong, Yong-Jin
    • Journal of IKEEE / v.21 no.4 / pp.388-396 / 2017
  • Face detection involves many sources of variation, such as pose, illumination, and occlusion, so a high-performance detection system is required. Although CNNs excel at image classification, CNN operations demand high-performance hardware resources, whereas small and mobile systems must run in low-cost, low-power environments. In this paper, a CPU-FPGA integrated system is therefore designed around a 3-stage cascade CNN architecture using a small FPGA. An adaptive Region of Interest (ROI) is applied to reduce the number of CNN operations by using face information from the previous frame. A Field Programmable Gate Array (FPGA) is used to accelerate the CNN computations: the accelerator reads multiple feature maps at once on the FPGA and performs Multiply-Accumulate (MAC) operations in parallel for the convolution operation. The system is implemented on an Altera Cyclone V FPGA with an embedded ARM Cortex-A9 and on-chip SRAM, and runs at 30 FPS on HD-resolution input images. The CPU-FPGA integrated system showed 8.5 times the power efficiency of a CPU-only system.
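
A minimal sketch of the adaptive-ROI idea described above: the detector's search window is restricted to a margin around the face box found in the previous frame. The function name, margin value, and box format are my own illustrative choices:

```python
# Adaptive ROI: crop a window around the previous frame's face box so the
# cascade CNN only scans that region. Margin and box format are illustrative.
import numpy as np

def adaptive_roi(frame: np.ndarray, prev_box, margin: float = 0.5):
    """Return the crop around prev_box = (x, y, w, h) and its offset in the frame.
    Falls back to the full frame when no previous detection exists."""
    if prev_box is None:
        return frame, (0, 0)
    x, y, w, h = prev_box
    dx, dy = int(w * margin), int(h * margin)
    x0, y0 = max(0, x - dx), max(0, y - dy)
    x1 = min(frame.shape[1], x + w + dx)
    y1 = min(frame.shape[0], y + h + dy)
    return frame[y0:y1, x0:x1], (x0, y0)

# usage: roi, offset = adaptive_roi(frame, last_face_box)
# then run the cascade CNN on roi and shift any detections by offset
```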

Fire Detection Approach using Robust Moving-Region Detection and Effective Texture Features of Fire (강인한 움직임 영역 검출과 화재의 효과적인 텍스처 특징을 이용한 화재 감지 방법)

  • Nguyen, Truc Kim Thi;Kang, Myeongsu;Kim, Cheol-Hong;Kim, Jong-Myon
    • Journal of the Korea Society of Computer and Information / v.18 no.6 / pp.21-28 / 2013
  • This paper proposes an effective fire detection approach that combines the following heterogeneous algorithms: moving-region detection using grey-level histograms, color segmentation using fuzzy c-means clustering (FCM), feature extraction using a grey-level co-occurrence matrix (GLCM), and fire classification using a support vector machine (SVM). The proposed approach determines optimal threshold values based on grey-level histograms in order to detect moving regions, and then performs color segmentation in the CIE LAB color space by applying FCM. These steps specify candidate fire regions. We then extract fire features using the GLCM, and these features are used as inputs to an SVM to classify fire and non-fire. We evaluate the proposed approach by comparing it with two state-of-the-art fire detection algorithms in terms of the fire detection rate (percentage of true positives, PTP) and the false fire detection rate (percentage of true negatives, PTN). Experimental results indicate that the proposed approach outperforms conventional fire detection algorithms, yielding 97.94% for PTP and 4.63% for PTN.
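
A minimal sketch (not the authors' exact pipeline) of the GLCM-feature-plus-SVM stage, using scikit-image and scikit-learn; candidate-region extraction is assumed to have been done already, and the chosen distances, angles, and properties are illustrative:

```python
# GLCM texture features for candidate fire regions, fed to an SVM classifier.
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.svm import SVC

def glcm_features(gray_patch: np.ndarray) -> np.ndarray:
    """Compute a small GLCM feature vector for an 8-bit grayscale patch."""
    glcm = graycomatrix(gray_patch, distances=[1], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    props = ["contrast", "homogeneity", "energy", "correlation"]
    return np.hstack([graycoprops(glcm, p).ravel() for p in props])

# X: stacked feature vectors of candidate regions, y: 1 = fire, 0 = non-fire
# clf = SVC(kernel="rbf").fit(X, y); clf.predict(glcm_features(new_patch)[None, :])
```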

A Study on the Ship Sale and Purchase Brokers' Liability as Agent in English Maritime Law (영국 해사법상 선박매매 브로커의 대리인 책임에 관한 일고찰)

  • Jeong, Seon-Cheol
    • Journal of Navigation and Port Research / v.37 no.6 / pp.617-625 / 2013
  • "Sale and purchase brokers" are independent contractors who act as agents for principals intending to seller or buy ships in English Maritime Law. The essential feature is that legal position of shipbroker is largely one of agency. They can be obtained by a study of the Lloyd's Register or the equivalent registers of other Classification Societies, the American Bureau of Shipping and Korean Registers. Such a broker is of valuable assistance to the prospective seller or purchaser. And the broker's liability normally arises in the context of a contract. But, expressed in general terms, those contractual obligations are, in absence of contrary agreement, to act with reasonable care and skilled to obtain the cover requested by his client not to guarantee that such will be concluded and to ensure that the scope of the policy, its essential terms and relevant exclusions are made known to the insured. Acting in this professional capacity, the broker's liability are such that the facts upon which an action for breach of contract may be based may also found an action for the trot of negligence provided that there is shown to be the necessary 'assumption of responsibility' by the broker conveyed directly or indirectly to the insured. This thesis deals with liability of S&P Brokers, the legal problems of ship broking, commission, conflicts of interest and secret commissions in English Maritime Law and the Cases.

Automatic Video Editing Technology based on Matching System using Genre Characteristic Patterns (장르 특성 패턴을 활용한 매칭시스템 기반의 자동영상편집 기술)

  • Mun, Hyejun;Lim, Yangmi
    • Journal of Broadcast Engineering / v.25 no.6 / pp.861-869 / 2020
  • We introduce an application that automatically turns several images stored on a user's device into one video by using the different climax patterns that appear in each film genre. For the classification of genre characteristics, a climax pattern model was created by analyzing domestic and foreign films in the drama, action, and horror genres. The climax pattern was characterized by the change in shot size, the length of shots, and the frequency of insert use in a specific scene of the movie, and the result was visualized. The model visualized for each genre was developed into a template using Firebase DB. Images stored on the user's device are selected and matched with the climax pattern template for each genre. Although the output is a short video, a distinguishing feature of the proposed application is that it can create an emotional story video that reflects the characteristics of the genre. Recently, platform operators such as YouTube and Naver have been upgrading applications that automatically generate video from pictures or clips taken by the user with a smartphone. However, applications that carry genre characteristics like movies, or that include video-generation technology for telling stories, are still lacking. The proposed automatic video editing is expected to develop into a video editing application capable of conveying emotions.
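
A small, hypothetical sketch of the matching idea: a genre climax pattern is stored as a template of shot lengths and shot sizes, and the user's selected images are assigned to the template slots in order. The template values, data types, and function names are illustrative, not the paper's Firebase template format:

```python
# Toy genre climax template: shots get shorter toward the climax (illustrative values).
from dataclasses import dataclass

@dataclass
class Shot:
    length_s: float   # how long the image is held in the output video
    size: str         # "long", "medium", "close-up", ...

ACTION_TEMPLATE = [Shot(2.0, "long"), Shot(1.5, "medium"), Shot(1.0, "medium"),
                   Shot(0.7, "close-up"), Shot(0.5, "close-up")]

def build_timeline(images: list[str], template: list[Shot]) -> list[tuple[str, Shot]]:
    """Assign each selected image to a slot of the genre template, in order."""
    return [(img, template[i % len(template)]) for i, img in enumerate(images)]

# usage: build_timeline(["img01.jpg", "img02.jpg", "img03.jpg"], ACTION_TEMPLATE)
```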

Comparative Analysis by Batch Size when Diagnosing Pneumonia on Chest X-Ray Image using Xception Modeling (Xception 모델링을 이용한 흉부 X선 영상 폐렴(pneumonia) 진단 시 배치 사이즈별 비교 분석)

  • Kim, Ji-Yul;Ye, Soo-Young
    • Journal of the Korean Society of Radiology / v.15 no.4 / pp.547-554 / 2021
  • In order to quickly and accurately diagnose pneumonia on chest X-ray images, batch sizes of 4, 8, 16, and 32 were applied to the same Xception deep learning model, and modeling was performed three times for each batch size. In the performance evaluation of the deep learning modeling, the modeling with batch size 32 showed the best accuracy, loss function value, mean squared error, and learning time per epoch. In the accuracy evaluation on the test metrics, the modeling with batch size 8 showed the best results, and the precision evaluation showed excellent results for all batch sizes. In the recall and F1-score evaluations, the modeling with batch size 16 showed the best results, and the AUC score was the same for all batch sizes. Based on these results, the deep learning modeling with batch size 32 showed high accuracy, stable artificial neural network learning, and excellent speed. Accurate and rapid lesion detection is therefore expected to be possible if a batch size of 32 is applied in future studies on automatic feature extraction and classification of pneumonia in chest X-ray images using deep learning.
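
A minimal sketch of the kind of setup described, using Keras' built-in Xception and a configurable batch size; the head layers, optimizer, image size, and dataset objects (train_ds, val_ds) are assumptions for illustration, not the authors' exact configuration:

```python
# Fine-tuning Xception for binary pneumonia classification at several batch sizes.
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.applications import Xception

def build_model(input_shape=(299, 299, 3)) -> tf.keras.Model:
    base = Xception(weights="imagenet", include_top=False, input_shape=input_shape)
    x = layers.GlobalAveragePooling2D()(base.output)
    out = layers.Dense(1, activation="sigmoid")(x)   # pneumonia vs. normal
    model = models.Model(base.input, out)
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
    return model

# train_ds / val_ds are assumed tf.data.Dataset objects of (image, label) pairs
# for batch_size in (4, 8, 16, 32):
#     model = build_model()
#     model.fit(train_ds.batch(batch_size), validation_data=val_ds.batch(32), epochs=10)
```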

Denoising Self-Attention Network for Mixed-type Data Imputation (혼합형 데이터 보간을 위한 디노이징 셀프 어텐션 네트워크)

  • Lee, Do-Hoon;Kim, Han-Joon;Chun, Joonghoon
    • The Journal of the Korea Contents Association / v.21 no.11 / pp.135-144 / 2021
  • Recently, data-driven decision-making has become a key technology leading the data industry, and the machine learning it relies on requires high-quality training datasets. However, real-world data contain missing values for various reasons, which degrades the performance of prediction models learned from such poor training data. Therefore, in order to build high-performance models from real-world datasets, many studies on automatically imputing missing values in the initial training data have been conducted. Many conventional machine-learning-based imputation techniques are time-consuming and cumbersome because they apply only to numerical columns or build an individual predictive model for each column. This paper therefore proposes a new data imputation technique called the 'Denoising Self-Attention Network (DSAN)', which can be applied to mixed-type datasets containing both numerical and categorical columns. DSAN learns robust feature representation vectors by combining self-attention and denoising techniques, and can automatically impute multiple missing variables in parallel through multi-task learning. To verify the validity of the proposed technique, data imputation experiments were performed after arbitrarily generating missing values in several mixed-type training datasets. We then show the validity of the proposed technique by comparing the performance of binary classification models trained on the imputed data, together with the errors between the original and imputed values.
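
A rough sketch of the general idea, under my own assumptions rather than the paper's exact DSAN architecture: each column (numerical or categorical) becomes an embedded token, self-attention lets columns attend to one another, and multi-task heads reconstruct every column in parallel. All layer sizes and names (build_imputer, d, etc.) are illustrative:

```python
# Self-attention imputer sketch for mixed-type data (missing entries assumed
# pre-masked: numeric -> 0, categorical -> a reserved "missing" index).
import tensorflow as tf
from tensorflow.keras import layers, Model

def build_imputer(n_numeric: int, cat_cardinalities: list[int], d: int = 32) -> Model:
    num_in = layers.Input(shape=(n_numeric,), name="numeric")
    cat_in = layers.Input(shape=(len(cat_cardinalities),), dtype="int32", name="categorical")

    # one d-dimensional token per column
    num_tok = layers.Dense(d)(layers.Reshape((n_numeric, 1))(num_in))
    cat_toks = layers.Concatenate(axis=1)(
        [layers.Embedding(card, d)(cat_in[:, i:i + 1])
         for i, card in enumerate(cat_cardinalities)]
    )
    tokens = layers.Concatenate(axis=1)([num_tok, cat_toks])

    # self-attention lets every column attend to every other column
    attn = layers.MultiHeadAttention(num_heads=4, key_dim=d)(tokens, tokens)
    h = layers.LayerNormalization()(tokens + attn)

    # multi-task reconstruction heads: regression per numeric column,
    # softmax per categorical column
    num_out = layers.Dense(1, name="num_recon")(h[:, :n_numeric, :])
    cat_outs = [layers.Dense(card, activation="softmax", name=f"cat_{i}")(h[:, n_numeric + i, :])
                for i, card in enumerate(cat_cardinalities)]
    return Model([num_in, cat_in], [num_out] + cat_outs)

# model = build_imputer(n_numeric=5, cat_cardinalities=[4, 7])
```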

A Study on Analyzing Sentiments on Movie Reviews by Multi-Level Sentiment Classifier (영화 리뷰 감성분석을 위한 텍스트 마이닝 기반 감성 분류기 구축)

  • Kim, Yuyoung;Song, Min
    • Journal of Intelligence and Information Systems / v.22 no.3 / pp.71-89 / 2016
  • Sentiment analysis is used to identify emotions or sentiments embedded in user-generated data such as customer reviews from blogs, social network services, and so on. Various research fields such as computer science and business management can take advantage of this to analyze customer-generated opinions. In previous studies, the star rating of a review was regarded as equivalent to the sentiment embedded in the text; however, the rating does not always correspond to the sentiment polarity, and this assumption limits the accuracy of those studies. To address this issue, the present study uses a supervised sentiment classification model to measure sentiment polarity more accurately. This study aims to propose an advanced sentiment classifier and to discover the correlation between movie reviews and box-office success. The advanced sentiment classifier is based on two supervised machine learning techniques, Support Vector Machines (SVM) and a Feedforward Neural Network (FNN). The sentiment scores of the movie reviews are measured by the sentiment classifier and analyzed through statistical correlations between movie reviews and box-office success. Movie reviews were collected along with their star ratings. The dataset used in this study consists of 1,258,538 reviews of 175 films gathered from the Naver Movie website (movie.naver.com). The results show that the proposed sentiment classifier outperforms the Naive Bayes (NB) classifier, with accuracy about 6% higher than NB. Furthermore, the results indicate positive correlations between the star rating and the audience count, which can be regarded as the box-office success of a movie. The study also shows a mild positive correlation between the sentiment scores estimated by the classifier and the audience count. To verify the applicability of the sentiment scores, an independent-sample t-test was conducted: the movies were divided into two groups using the average of the sentiment scores, and the two groups differed significantly in terms of their star-rating scores.
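
A minimal sketch of a supervised sentiment classifier of the SVM variety mentioned above, using TF-IDF features and scikit-learn; the tiny corpus, labels, and n-gram range are made up for illustration and are not the authors' pipeline:

```python
# TF-IDF + linear SVM sentiment classifier on toy review data.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC
from sklearn.pipeline import make_pipeline

reviews = ["Great acting and a moving story", "Boring plot and flat characters"]
labels = [1, 0]  # 1 = positive, 0 = negative (illustrative labels)

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LinearSVC())
clf.fit(reviews, labels)
print(clf.predict(["A moving story with great characters"]))
```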

A Deep Learning Based Approach to Recognizing Accompanying Status of Smartphone Users Using Multimodal Data (스마트폰 다종 데이터를 활용한 딥러닝 기반의 사용자 동행 상태 인식)

  • Kim, Kilho;Choi, Sangwoo;Chae, Moon-jung;Park, Heewoong;Lee, Jaehong;Park, Jonghun
    • Journal of Intelligence and Information Systems / v.25 no.1 / pp.163-177 / 2019
  • As smartphones have become widely used, human activity recognition (HAR) tasks that recognize the personal activities of smartphone users from multimodal data have been actively studied. The research area is expanding from the recognition of an individual user's simple body movements to the recognition of low-level and high-level behavior. However, HAR tasks for recognizing interaction behavior with other people, such as whether the user is accompanying or communicating with someone else, have received less attention so far. Previous research on recognizing interaction behavior has usually depended on audio, Bluetooth, and Wi-Fi sensors, which are vulnerable to privacy issues and require much time to collect enough data, whereas physical sensors, including the accelerometer, magnetic field, and gyroscope sensors, are less vulnerable to privacy issues and can collect a large amount of data in a short time. In this paper, a deep-learning-based method for detecting accompanying status using only multimodal physical sensor data (accelerometer, magnetic field, and gyroscope) is proposed. The accompanying status was defined as a redefinition of part of the user's interaction behavior: whether the user is accompanying an acquaintance at a close distance, and whether the user is actively communicating with that acquaintance. A framework based on convolutional neural networks (CNN) and long short-term memory (LSTM) recurrent networks for classifying accompanying and conversation was proposed. First, a data preprocessing method consisting of time synchronization of the multimodal data from different physical sensors, data normalization, and sequence data generation was introduced. We applied nearest-neighbor interpolation to synchronize the timestamps of the data collected from different sensors. Normalization was performed for each x, y, and z axis value of the sensor data, and the sequence data were generated using the sliding window method. The sequence data then became the input of the CNN, where feature maps representing local dependencies of the original sequence are extracted. The CNN consisted of 3 convolutional layers and had no pooling layer, in order to maintain the temporal information of the sequence data. Next, LSTM recurrent networks received the feature maps, learned long-term dependencies from them, and extracted features. The LSTM recurrent networks consisted of two layers, each with 128 cells. Finally, the extracted features were used for classification by a softmax classifier. The loss function of the model was the cross-entropy function, and the weights of the model were randomly initialized from a normal distribution with a mean of 0 and a standard deviation of 0.1. The model was trained using the adaptive moment estimation (ADAM) optimization algorithm with a mini-batch size of 128. We applied dropout to the input values of the LSTM recurrent networks to prevent overfitting. The initial learning rate was set to 0.001 and decreased exponentially by a factor of 0.99 at the end of each training epoch. An Android smartphone application was developed and released to collect data, and smartphone data were collected from a total of 18 subjects. Using these data, the model classified accompanying and conversation with 98.74% and 98.83% accuracy, respectively. Both the F1 score and the accuracy of the model were higher than those of a majority-vote classifier, a support vector machine, and a deep recurrent neural network.
In future research, we will focus on more rigorous multimodal sensor data synchronization methods that minimize timestamp differences. In addition, we will further study transfer learning methods that enable a model trained on the training data to be transferred to evaluation data that follows a different distribution. We expect to obtain a model capable of robust recognition performance against changes in the data that were not considered during model training.
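
A sketch of the described CNN + LSTM pipeline in Keras. The stated details (3 convolutional layers without pooling, two 128-cell LSTM layers, dropout on the LSTM inputs, softmax output, cross-entropy loss, ADAM with initial learning rate 0.001 decaying by 0.99, mini-batch size 128, normal(0, 0.1) weight initialization) follow the abstract; the window length, channel count, filter sizes, and decay schedule granularity are my assumptions:

```python
# CNN + LSTM classifier for accompanying/conversation status from physical sensors.
import tensorflow as tf
from tensorflow.keras import layers, models, initializers

WINDOW, CHANNELS, N_CLASSES = 128, 9, 2   # assumed window length; 3 sensors x 3 axes

def build_model() -> tf.keras.Model:
    init = initializers.RandomNormal(mean=0.0, stddev=0.1)
    m = models.Sequential([
        layers.Input(shape=(WINDOW, CHANNELS)),
        # 3 convolutional layers, no pooling, to keep the temporal resolution
        layers.Conv1D(64, 5, padding="same", activation="relu", kernel_initializer=init),
        layers.Conv1D(64, 5, padding="same", activation="relu", kernel_initializer=init),
        layers.Conv1D(64, 5, padding="same", activation="relu", kernel_initializer=init),
        # dropout on the LSTM inputs to reduce overfitting
        layers.Dropout(0.5),
        # two LSTM layers with 128 cells each
        layers.LSTM(128, return_sequences=True, kernel_initializer=init),
        layers.LSTM(128, kernel_initializer=init),
        layers.Dense(N_CLASSES, activation="softmax", kernel_initializer=init),
    ])
    # decay_steps approximates one epoch of mini-batches (an assumption)
    lr = tf.keras.optimizers.schedules.ExponentialDecay(1e-3, decay_steps=1000, decay_rate=0.99)
    m.compile(optimizer=tf.keras.optimizers.Adam(lr),
              loss="sparse_categorical_crossentropy", metrics=["accuracy"])
    return m

# model = build_model(); model.fit(x_train, y_train, batch_size=128, epochs=20)
```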

PCA-based Waveform Classification of Rabbit Retinal Ganglion Cell Activity (주성분분석을 이용한 토끼 망막 신경절세포의 활동전위 파형 분류)

  • 진계환;조현숙;이태수;구용숙
    • Progress in Medical Physics / v.14 no.4 / pp.211-217 / 2003
  • Principal component analysis (PCA) is a well-known data analysis method that is useful for linear feature extraction and data compression. PCA is a linear transformation that applies an orthogonal rotation to the original data so as to maximize the retained variance, and it is a classical technique for obtaining an optimal overall mapping of linearly dependent patterns of correlation between variables (e.g. neurons). PCA provides, in the mean-squared-error sense, an optimal linear mapping of the signals that are spread across a group of variables: these signals are concentrated into the first few components, while the noise, i.e. variance that is uncorrelated across variables, is sequestered in the remaining components. PCA has been used extensively to resolve temporal patterns in neurophysiological recordings. Because the retinal signal is a stochastic process, PCA can be used to identify retinal spikes. The retina was isolated from an excised rabbit eye, and a piece of retina was attached, ganglion cell side down, to the surface of the microelectrode array (MEA). The MEA consisted of a glass plate with 60 substrate-integrated and insulated gold connection lanes terminating in an 8×8 array (spacing 200 μm, electrode diameter 30 μm) in the center of the plate. The MEA 60 system was used for recording retinal ganglion cell activity. The action potentials of each channel were sorted with an off-line analysis tool: spikes were detected with a threshold criterion and sorted according to their principal component composition. The first (PC1) and second (PC2) principal component values were calculated using all the waveforms of each channel and all n time points in the waveform, and several clusters could be separated clearly in two dimensions. We verified that PCA-based waveform detection is effective as an initial approach to spike sorting.
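
A minimal, generic sketch (not the authors' off-line analysis tool) of PCA-based spike sorting: detected waveforms are projected onto PC1 and PC2 and clustered in that two-dimensional space. The placeholder data and the choice of three clusters are illustrative:

```python
# PCA projection of spike waveforms followed by clustering in PC1-PC2 space.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

# spikes: array of shape (n_spikes, n_time_points), one detected waveform per row
spikes = np.random.randn(500, 40)          # placeholder data for illustration

pca = PCA(n_components=2)
pcs = pca.fit_transform(spikes)            # columns: PC1, PC2 for each spike

labels = KMeans(n_clusters=3, n_init=10).fit_predict(pcs)  # assumed 3 units
print(pcs.shape, np.bincount(labels))
```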
