• Title/Summary/Keyword: Support Features

1,568 search results

The Effect of Consumer Characteristics on Mobile Fashion Shopping -Focusing on Market Mavenship, Innovativeness, Purchase Experience- (모바일 패션 쇼핑에 대한 소비자 특성의 효과 -시장 전문성, 혁신성, 구매경험을 중심으로-)

  • Ryou, Eunjeong;Ahn, Soo-Kyoung
    • Journal of Fashion Business
    • /
    • v.23 no.1
    • /
    • pp.89-102
    • /
    • 2019
  • The objective of this study was to investigate the influence of consumer characteristics, including market mavenship, innovativeness, and mobile purchase experience, on mobile fashion shopping. The data were collected from nationwide consumer panels through an online survey. A total of 306 subjects, aged 20 to 39 and with experience purchasing fashion goods on mobile devices, completed a self-administered questionnaire. A series of exploratory and confirmatory factor analyses identified four dimensions of mobile fashion shopping features: tangibility, ubiquity, security, and personalization. A structural equation modeling test was employed to examine the relationships among consumer characteristics, mobile fashion shopping features, and consumer behavior. Market mavenship had a positive influence on the perceived features of mobile fashion shopping, whereas innovativeness negatively influenced tangibility, ubiquity, and personalization. Each construct of mobile shopping features positively affected satisfaction, while security had only a direct negative impact on purchase intention. Satisfaction had a significantly positive impact on purchase intention. Purchase experience with mobile fashion shopping partially moderated the relationship between consumer characteristics and perceived features of mobile fashion shopping. These results provide practical implications and theoretical support for increasing consumer satisfaction with mobile fashion shopping in terms of consumer characteristics.
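
A minimal sketch of the exploratory factor analysis step only, run on synthetic Likert-style responses; the item set, the four-factor assumption, and the subsequent SEM stage are not reproduced here and the column counts are illustrative.

```python
# Sketch: extracting four latent feature dimensions from survey items with EFA.
# The data below are random stand-ins, not the study's questionnaire responses.
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
n_respondents, n_items = 306, 16
responses = rng.integers(1, 8, size=(n_respondents, n_items)).astype(float)  # 7-point items

# four factors, e.g. tangibility, ubiquity, security, personalization
fa = FactorAnalysis(n_components=4, random_state=0)
factor_scores = fa.fit_transform(responses)
print("loadings shape:", fa.components_.shape)  # (4 factors, 16 items)
```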

A Recognition Algorithm of Handwritten Numerals based on Structure Features (구조적 특징기반 자유필기체 숫자인식 알고리즘)

  • Song, Jeong-Young
    • The Journal of the Institute of Internet, Broadcasting and Communication
    • /
    • v.18 no.6
    • /
    • pp.151-156
    • /
    • 2018
  • Because of large differences in writing style, context independence, and high recognition accuracy requirements, free handwritten digit recognition is still a very difficult problem. Analyzing the characteristics of handwritten digits, this paper proposes a new handwritten digit recognition method based on combining structural features. Given a handwritten digit, a variety of structural features, including end points, bifurcation points, and horizontal lines, are identified automatically and robustly by a proposed extended structural feature identification algorithm, and a decision tree based on those structural features is constructed to support automatic recognition of the digit. Experimental results demonstrate that the proposed method is superior to other common methods in recognition rate and robustness.
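
A minimal sketch of the general idea, not the paper's exact algorithm: counting end points and bifurcation points on a skeletonized digit and feeding such structural features to a decision tree. The "horizontal line" feature below is a crude stand-in, and the input skeletons and labels are assumed to be prepared elsewhere.

```python
# Sketch: structural features of a thinned digit image + decision tree classifier.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def structural_features(skeleton: np.ndarray) -> list:
    """skeleton: binary (0/1) array of a thinned digit image."""
    padded = np.pad(skeleton, 1)
    ends, forks = 0, 0
    ys, xs = np.nonzero(padded)
    for y, x in zip(ys, xs):
        n = padded[y - 1:y + 2, x - 1:x + 2].sum() - 1  # 8-neighbourhood count
        if n == 1:
            ends += 1       # end point
        elif n >= 3:
            forks += 1      # bifurcation point
    # crude proxy for "horizontal lines": longest run of pixels in any row
    longest_row = skeleton.sum(axis=1).max() if skeleton.any() else 0
    return [ends, forks, longest_row]

# X_skeletons / y_labels are assumed to exist (thinned images and digit labels):
# X = np.array([structural_features(s) for s in X_skeletons])
# clf = DecisionTreeClassifier(max_depth=8).fit(X, y_labels)
```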

A Video Expression Recognition Method Based on Multi-mode Convolution Neural Network and Multiplicative Feature Fusion

  • Ren, Qun
    • Journal of Information Processing Systems
    • /
    • v.17 no.3
    • /
    • pp.556-570
    • /
    • 2021
  • Existing video expression recognition methods mainly focus on spatial feature extraction from video expression images but tend to ignore the dynamic features of video sequences. To solve this problem, a multi-mode convolutional neural network method is proposed to effectively improve the performance of facial expression recognition in video. First, OpenFace 2.0 is used to detect face images in video, and two deep convolutional neural networks are used to extract spatiotemporal expression features. A spatial convolutional neural network extracts the spatial information features of each static expression image, and the dynamic information features are extracted from the optical flow of multiple expression images by a temporal convolutional neural network. Then, the spatiotemporal features learned by the two deep convolutional neural networks are fused by multiplication. Finally, the fused features are input into a support vector machine to perform facial expression classification. Experimental results show that the recognition accuracy of the proposed method reaches 64.57% and 60.89% on the RML and BAUM-1s datasets, respectively, which is better than that of other comparison methods.
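
A minimal sketch of the multiplicative fusion and SVM classification stages only; the per-clip feature vectors are random stand-ins for the outputs of the spatial and optical-flow CNNs, and all shapes are hypothetical.

```python
# Sketch: element-wise (multiplicative) fusion of two feature streams, then SVM.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_clips, dim, n_classes = 200, 128, 6
spatial_feats = rng.normal(size=(n_clips, dim))   # stand-in for spatial CNN features
temporal_feats = rng.normal(size=(n_clips, dim))  # stand-in for optical-flow CNN features
labels = rng.integers(0, n_classes, size=n_clips)

fused = spatial_feats * temporal_feats            # multiplicative feature fusion

clf = SVC(kernel="rbf").fit(fused[:150], labels[:150])
print("held-out accuracy:", clf.score(fused[150:], labels[150:]))
```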

Damage detection of bridges based on spectral sub-band features and hybrid modeling of PCA and KPCA methods

  • Bisheh, Hossein Babajanian;Amiri, Gholamreza Ghodrati
    • Structural Monitoring and Maintenance
    • /
    • v.9 no.2
    • /
    • pp.179-200
    • /
    • 2022
  • This paper proposes a data-driven methodology for online early damage identification under changing environmental conditions. The proposed method relies on two data analysis approaches: a feature-based method and a hybrid of principal component analysis (PCA) and kernel PCA (KPCA) to separate damage from environmental influences. First, spectral sub-band features, namely spectral sub-band centroids (SSCs) and log spectral sub-band energies (LSSEs), are proposed as damage-sensitive features to extract damage information from measured structural responses. Second, hybrid modeling integrating PCA and kernel PCA is performed on the spectral sub-band feature matrix for data normalization, extracting both linear and nonlinear features for nonlinear process monitoring. After feature normalization suppresses environmental effects, control charts (Hotelling T2 and SPE statistics) are applied for novelty detection to distinguish damage in structures. The hybrid PCA-KPCA technique is compared with KPCA, applying a support vector machine (SVM) to evaluate its effectiveness in detecting damage. The proposed method is verified through numerical and full-scale studies (a Bridge Health Monitoring (BHM) benchmark problem and a cable-stayed bridge in China). The results demonstrate that the proposed method can detect structural damage accurately and reduce false alarms by suppressing the effects and interference of environmental variations.
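
A minimal sketch of PCA/KPCA-based monitoring with Hotelling T2 and SPE statistics, under the simplifying assumption that the spectral sub-band feature vectors are already extracted; the data are random stand-ins and control limits are omitted.

```python
# Sketch: fit PCA on baseline features, then score new data with T2 and SPE statistics;
# a kernel PCA reconstruction error is computed as a nonlinear counterpart.
import numpy as np
from sklearn.decomposition import PCA, KernelPCA

rng = np.random.default_rng(0)
X_train = rng.normal(size=(300, 20))       # baseline (healthy) feature matrix
X_test = rng.normal(size=(50, 20)) + 0.5   # possibly damaged condition

pca = PCA(n_components=5).fit(X_train)
scores = pca.transform(X_test)

# Hotelling T2: distance in the retained principal subspace
t2 = np.sum(scores ** 2 / pca.explained_variance_, axis=1)

# SPE (Q statistic): squared reconstruction error in the residual subspace
spe = np.sum((X_test - pca.inverse_transform(scores)) ** 2, axis=1)

# nonlinear counterpart via kernel PCA
kpca = KernelPCA(n_components=5, kernel="rbf", fit_inverse_transform=True).fit(X_train)
kpca_spe = np.sum((X_test - kpca.inverse_transform(kpca.transform(X_test))) ** 2, axis=1)

print(t2[:5], spe[:5], kpca_spe[:5])
```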

Prediction of Chronic Hepatitis Susceptibility using Single Nucleotide Polymorphism Data and Support Vector Machine (Single Nucleotide Polymorphism(SNP) 데이타와 Support Vector Machine(SVM)을 이용한 만성 간염 감수성 예측)

  • Kim, Dong-Hoi;Uhmn, Saang-Yong;Hahm, Ki-Baik;Kim, Jin
    • Journal of KIISE:Computer Systems and Theory
    • /
    • v.34 no.7
    • /
    • pp.276-281
    • /
    • 2007
  • In this paper, we use a support vector machine (SVM) to predict susceptibility to chronic hepatitis from single nucleotide polymorphism (SNP) data. Our data set consists of 28 SNPs for 328 patients together with their class labels (chronic hepatitis, healthy). We use leave-one-out cross validation to estimate accuracy. The experimental results show that an SVM on the SNP data can classify chronic hepatitis susceptibility with an accuracy of 67.1%. When health-related features (sex, age) are added to all SNPs, accuracy improves by more than 7% (to 74.9%). This result shows that the accuracy of predicting susceptibility can be improved with health-related features. With more SNPs and other health-related features, SVM prediction from SNP data is a potential tool for assessing chronic hepatitis susceptibility.
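
A minimal sketch of SVM classification with leave-one-out cross validation, comparing SNPs alone against SNPs plus sex and age; the genotype matrix and labels below are synthetic stand-ins, not the paper's data set.

```python
# Sketch: leave-one-out evaluation of an SVM on SNP features, with and without sex/age.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import LeaveOneOut, cross_val_score

rng = np.random.default_rng(0)
n_patients, n_snps = 328, 28
X_snp = rng.integers(0, 3, size=(n_patients, n_snps))           # genotypes coded 0/1/2
sex_age = np.column_stack([rng.integers(0, 2, n_patients),       # sex (0/1)
                           rng.integers(20, 80, n_patients)])    # age in years
y = rng.integers(0, 2, size=n_patients)                          # 0 = healthy, 1 = chronic hepatitis

for name, X in [("SNPs only", X_snp), ("SNPs + sex/age", np.hstack([X_snp, sex_age]))]:
    acc = cross_val_score(SVC(kernel="rbf"), X, y, cv=LeaveOneOut()).mean()
    print(f"{name}: leave-one-out accuracy = {acc:.3f}")
```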

Study on the Development of Auto-classification Algorithm for Ginseng Seedling using SVM (Support Vector Machine) (SVM(Support Vector Machine)을 이용한 묘삼 자동등급 판정 알고리즘 개발에 관한 연구)

  • Oh, Hyun-Keun;Lee, Hoon-Soo;Chung, Sun-Ok;Cho, Byoung-Kwan
    • Journal of Biosystems Engineering
    • /
    • v.36 no.1
    • /
    • pp.40-47
    • /
    • 2011
  • An image analysis algorithm for the quality evaluation of ginseng seedlings was investigated. Images of ginseng seedlings were acquired with a color CCD camera and processed with image analysis methods such as binary conversion, labeling, and thinning. The processed images were used to calculate the length and weight of the ginseng seedlings. The length and weight of the samples could be predicted with standard errors of 0.343 mm and 0.0214 g, and R² values of 0.8738 and 0.9835, respectively. For the evaluation of the three quality grades of Gab, Eul, and abnormal ginseng seedlings, features were extracted from the processed images. The features, combined with the ratios of the lengths and areas of the ginseng seedlings, efficiently differentiate abnormal shapes from normal ones. The grade levels were evaluated with an efficient pattern recognition method, support vector machine analysis. The quality grade of ginseng seedlings could be evaluated with accuracies of 95% and 97% for training and validation, respectively. The results indicate that color image analysis with a support vector machine algorithm has good potential for the development of an automatic sorting system for ginseng seedlings.
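
A minimal sketch of the kind of pipeline described above (binarization, labeling, thinning) feeding an SVM grade classifier; image loading, the exact feature set, and the grade labels are assumptions.

```python
# Sketch: binary conversion + labeling + skeleton-based length features, then SVM grading.
# Assumes each grayscale image contains exactly one seedling.
import numpy as np
from skimage.filters import threshold_otsu
from skimage.measure import label, regionprops
from skimage.morphology import skeletonize
from sklearn.svm import SVC

def seedling_features(gray: np.ndarray) -> list:
    binary = gray > threshold_otsu(gray)              # binary conversion
    largest = max(regionprops(label(binary)), key=lambda r: r.area)  # labeling
    skeleton_len = skeletonize(binary).sum()           # thinning -> approximate length
    return [largest.area, largest.major_axis_length,
            skeleton_len, skeleton_len / max(largest.area, 1)]

# images / grades ("Gab", "Eul", "abnormal") are assumed to be loaded elsewhere:
# X = np.array([seedling_features(img) for img in images])
# clf = SVC(kernel="rbf").fit(X, grades)
```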

Voice-Based Gender Identification Employing Support Vector Machines (음성신호 기반의 성별인식을 위한 Support Vector Machines의 적용)

  • Lee, Kye-Hwan;Kang, Sang-Ick;Kim, Deok-Hwan;Chang, Joon-Hyuk
    • The Journal of the Acoustical Society of Korea
    • /
    • v.26 no.2
    • /
    • pp.75-79
    • /
    • 2007
  • We propose an effective voice-based gender identification method using a support vector machine (SVM). The SVM is a binary classification algorithm that separates two groups by finding an optimal nonlinear boundary in a feature space and is known to yield high classification performance. In the present work, we compare the identification performance of the SVM with that of a Gaussian mixture model (GMM) using mel-frequency cepstral coefficients (MFCCs). A novel feature fusion scheme based on a combination of the MFCCs and pitch is proposed with the aim of improving the performance of gender identification using the SVM. Experimental results indicate that the gender identification performance of the SVM is significantly better than that of the GMM. Moreover, the performance is substantially improved when the proposed feature fusion technique is applied.
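
A minimal sketch of fusing MFCC and pitch statistics for an SVM gender classifier; the file paths, labels, and the specific fusion and frame settings are placeholders and may differ from the paper's scheme.

```python
# Sketch: utterance-level MFCC statistics concatenated with pitch statistics, then SVM.
import numpy as np
import librosa
from sklearn.svm import SVC

def mfcc_pitch_features(path: str) -> np.ndarray:
    y, sr = librosa.load(path, sr=16000)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)         # spectral envelope features
    f0 = librosa.yin(y, fmin=60, fmax=400, sr=sr)              # frame-wise pitch estimate
    return np.concatenate([mfcc.mean(axis=1),                  # MFCC means
                           [np.nanmean(f0), np.nanstd(f0)]])   # fused with pitch statistics

# wav_paths / genders are assumed to be available:
# X = np.vstack([mfcc_pitch_features(p) for p in wav_paths])
# clf = SVC(kernel="rbf").fit(X, genders)
```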

Pedestrian Classification using CNN's Deep Features and Transfer Learning (CNN의 깊은 특징과 전이학습을 사용한 보행자 분류)

  • Chung, Soyoung;Chung, Min Gyo
    • Journal of Internet Computing and Services
    • /
    • v.20 no.4
    • /
    • pp.91-102
    • /
    • 2019
  • In autonomous driving systems, the ability to classify pedestrians in images captured by cameras is very important for pedestrian safety. In the past, features of pedestrians were extracted with HOG (Histogram of Oriented Gradients) or SIFT (Scale-Invariant Feature Transform) and then classified using an SVM (Support Vector Machine). However, extracting pedestrian characteristics in such a handcrafted manner has many limitations. Therefore, this paper proposes a method to classify pedestrians reliably and effectively using CNN (Convolutional Neural Network) deep features and transfer learning. We have experimented with both the fixed feature extractor and fine-tuning methods, which are two representative transfer learning techniques. In particular, for the fine-tuning method, we have added a new scheme, called M-Fine (Modified Fine-tuning), which divides layers into transferred and non-transferred parts in three different sizes and adjusts weights only for the layers belonging to the non-transferred parts. Experiments on the INRIA Person data set with five CNN models (VGGNet, DenseNet, Inception V3, Xception, and MobileNet) showed that CNN deep features perform better than handcrafted features such as HOG and SIFT, and that the accuracy of Xception (threshold = 0.5) is the highest at 99.61%. MobileNet, which achieved performance similar to Xception while learning 80% fewer parameters, was the best in terms of efficiency. Among the three transfer learning schemes tested, the fine-tuning method performed best. The performance of the M-Fine method was comparable to or slightly lower than that of the fine-tuning method, but higher than that of the fixed feature extractor method.
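
A minimal sketch of the "fixed feature extractor" transfer-learning scheme using MobileNetV2 from torchvision (>= 0.13); the data loader, learning rate, and training loop are assumptions rather than the paper's exact setup, and the M-Fine scheme is not reproduced.

```python
# Sketch: freeze a pretrained backbone and train only a new binary classifier head.
import torch
import torch.nn as nn
from torchvision import models

model = models.mobilenet_v2(weights=models.MobileNet_V2_Weights.IMAGENET1K_V1)

for param in model.parameters():          # freeze the pretrained backbone
    param.requires_grad = False

# replace the classifier head for pedestrian / non-pedestrian classification
model.classifier[1] = nn.Linear(model.last_channel, 2)

optimizer = torch.optim.Adam(model.classifier.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

# train_loader is assumed to yield (images, labels) batches of 224x224 RGB crops:
# for images, labels in train_loader:
#     optimizer.zero_grad()
#     loss = criterion(model(images), labels)
#     loss.backward()
#     optimizer.step()
```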

A Development of Feature-based Wire Harness Drawing System (특징형상 기반 자동차 전장도면설계 시스템 개발 연구)

  • 이상준;이수홍
    • Korean Journal of Computational Design and Engineering
    • /
    • v.1 no.3
    • /
    • pp.177-188
    • /
    • 1996
  • An approach to providing computational support with an expert shell is discussed within the scope of industrial wire harness design, especially at the manufacturing stage. Key issues include the development of an architecture that supports frequent design changes among engineers associated with different parts of the wiring design process, and the development of hierarchical representations that capture the different characteristics (e.g., connectivity, configuration) of the harnesses. The abstraction of design information results in features, while the abstraction of drawing elements leads to the definition of objects. These abstractions are essential for efficient transactions among people and computer tools in a domain that involves numerous interacting constraints. In this paper, the strategy for problem decomposition, the definition of features, and the ways in which features are shared by various operations and design changes are discussed. We conclude with a discussion of some of the issues raised by the project and the steps underway to address them.
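
A schematic sketch of how harness design information might be abstracted into features and drawing-level objects, in the spirit of the feature/object distinction above; every class and field name here is hypothetical and purely illustrative.

```python
# Sketch: hypothetical feature classes (connectivity, configuration) aggregated by a drawing object.
from dataclasses import dataclass, field
from typing import List

@dataclass
class ConnectorFeature:           # design feature capturing connectivity
    name: str
    cavity_count: int

@dataclass
class SegmentFeature:             # design feature capturing configuration
    start: str
    end: str
    length_mm: float

@dataclass
class HarnessDrawing:             # drawing-level object assembled from features
    connectors: List[ConnectorFeature] = field(default_factory=list)
    segments: List[SegmentFeature] = field(default_factory=list)

    def total_wire_length(self) -> float:
        return sum(s.length_mm for s in self.segments)

drawing = HarnessDrawing(
    connectors=[ConnectorFeature("C1", 4), ConnectorFeature("C2", 2)],
    segments=[SegmentFeature("C1", "C2", 350.0)],
)
print(drawing.total_wire_length())
```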


Prominence Detection Using Feature Differences of Neighboring Syllables for English Speech Clinics (영어 강세 교정을 위한 주변 음 특징 차를 고려한 강조점 검출)

  • Shim, Sung-Geon;You, Ki-Sun;Sung, Won-Yong
    • Phonetics and Speech Sciences
    • /
    • v.1 no.2
    • /
    • pp.15-22
    • /
    • 2009
  • Prominence of speech, often called 'accent,' greatly affects the fluency of spoken American English. In this paper, we present an accurate prominence detection method that can be utilized in computer-aided language learning (CALL) systems. We employed pitch movement, overall syllable energy, 300-2200 Hz band energy, syllable duration, and spectral and temporal correlation as features to model the prominence of speech. After the features for vowel syllables were extracted, prominent syllables were classified by an SVM (Support Vector Machine). To further improve accuracy, the differences in the characteristics of neighboring syllables were added as additional features. We also applied a speech recognizer to extract more precise syllable boundaries. The performance of our prominence detector was measured on the Intonational Variation in English (IViE) speech corpus. We obtained 84.9% accuracy, which is about 10% higher than previous research.
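
A minimal sketch of prominence classification from per-syllable features augmented with differences to neighboring syllables; syllable segmentation and feature extraction are assumed to have been done upstream, and the matrices below are random stand-ins.

```python
# Sketch: append differences to the previous and next syllable's features, then classify with SVM.
import numpy as np
from sklearn.svm import SVC

def add_neighbor_differences(feats: np.ndarray) -> np.ndarray:
    """feats: (n_syllables, n_features) per-syllable features such as pitch movement,
    syllable energy, 300-2200 Hz band energy, and duration."""
    prev_diff = np.vstack([np.zeros(feats.shape[1]), np.diff(feats, axis=0)])
    next_diff = np.vstack([-np.diff(feats, axis=0), np.zeros(feats.shape[1])])
    return np.hstack([feats, prev_diff, next_diff])   # original + neighbor differences

rng = np.random.default_rng(0)
syllable_feats = rng.normal(size=(500, 6))            # stand-in syllable feature matrix
prominent = rng.integers(0, 2, size=500)              # 1 = prominent syllable

X = add_neighbor_differences(syllable_feats)
clf = SVC(kernel="rbf").fit(X[:400], prominent[:400])
print("held-out accuracy:", clf.score(X[400:], prominent[400:]))
```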
