• Title/Summary/Keyword: Software classification


Model Adaptation Using Discriminative Noise Adaptive Training Approach for New Environments

  • Jung, Ho-Young;Kang, Byung-Ok;Lee, Yun-Keun
    • ETRI Journal / v.30 no.6 / pp.865-867 / 2008
  • Conventional environment adaptation for robust speech recognition is usually conducted using transform-based techniques. Here, we present a discriminative adaptation strategy based on a multi-condition-trained model and propose a new method that provides universal application to a new environment using the environment's specific conditions. Experimental results show that a speech recognition system adapted using the proposed method works successfully not only for the conditions of the new environment but also for other conditions.


The effect of switching costs on resistance to change in the use of software

  • Perera, Nipuna;Kim, Hee-Woong
    • Korea Society of Management Information Systems Conference Proceedings / 2007.06a / pp.539-544 / 2007
  • People tend to resist changing their software even when alternatives are better than the current one. This study examines resistance to change in the use of software from the switching-costs perspective, based on status quo bias theory. For this study, we select Web browsers as the target software. Based on the classification of switching costs into three groups (psychological, procedural, and loss), this study identifies six types of switching costs (uncertainty, commitment, learning, setup, lost performance, and sunk costs). This study tests the effects of the six switching costs on user resistance to change based on a survey of 204 Web browser users. The results indicate that lost performance costs and emotional costs have significant effects on user resistance to change. This research contributes toward an understanding of switching costs and their effects on user resistance to change. This study also offers suggestions to software vendors for retaining their users and to organizations for managing user resistance in switching and adopting software.


The Classification of the Software Quality by the Rough Tolerance Class

  • Choi, Wan-Kyoo;Lee, Sung-Joo
    • International Journal of Fuzzy Logic and Intelligent Systems / v.4 no.2 / pp.249-253 / 2004
  • When we decide software quality on the basis of software measurement, the transitive property, which is a requirement for an equivalence relation, is not always satisfied. Therefore, we propose a scheme for classifying software quality that employs a tolerance relation instead of an equivalence relation. Given an experimental data set, the proposed scheme generates the tolerance classes for elements in the data set and generates the tolerant ranges for classifying software quality by clustering the means of the tolerance classes. Through the experiment, we showed that the proposed scheme could produce very useful and valid results. That is, the tolerant ranges generated by the proposed scheme can be used without problems as criteria for classifying software quality.
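The tolerance-relation idea in this abstract can be illustrated with a short sketch. This is not the authors' code: it assumes a single scalar quality metric and a fixed threshold `eps` (both invented for the example), and only shows why tolerance classes (reflexive and symmetric, but not transitive) differ from equivalence classes, and how their means could then be clustered into quality ranges:

```python
def tolerance_classes(values, eps):
    """Tolerance relation: x ~ y iff |x - y| <= eps.
    Reflexive and symmetric, but not transitive, so the resulting
    classes may overlap (unlike equivalence classes)."""
    return [[y for y in values if abs(x - y) <= eps] for x in values]

def class_means(values, eps):
    """Mean of each element's tolerance class; these means are what
    would be clustered into tolerant quality ranges."""
    return [sum(c) / len(c) for c in tolerance_classes(values, eps)]

# Toy metric values: the first three are mutually tolerant, 5.0 is not.
metrics = [1.0, 1.2, 1.3, 5.0]
print(class_means(metrics, eps=0.5))
```

With `eps=0.25`, `1.0 ~ 1.2` and `1.2 ~ 1.3` hold while `1.0 ~ 1.3` fails, which is exactly the broken transitivity that motivates the tolerance-based scheme.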

Deep Learning-Based Brain Tumor Classification in MRI images using Ensemble of Deep Features

  • Kang, Jaeyong;Gwak, Jeonghwan
    • Journal of the Korea Society of Computer and Information / v.26 no.7 / pp.37-44 / 2021
  • Automatic classification of brain MRI images plays an important role in the early diagnosis of brain tumors. In this work, we present a deep learning-based brain tumor classification model for MRI images using an ensemble of deep features. In our proposed framework, three different deep features are extracted from a brain MR image using three different pre-trained models. The extracted deep features are then fed to the classification module, where each of the three deep features is first fed into a fully-connected layer individually to reduce its dimension. The outputs of these fully-connected layers are then concatenated and fed into a final fully-connected layer to predict the output. To evaluate our proposed model, we use an openly accessible brain MRI dataset from the web. Experimental results show that our proposed model outperforms other machine learning-based models.
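The reduce-concatenate-predict flow described above can be sketched in a few lines. This is only an illustrative numpy sketch, not the paper's implementation: the three pre-trained extractors are replaced by random 512-, 1024-, and 2048-dimensional vectors (dimensions chosen arbitrarily), the weights are random, and training is omitted; it shows only how the dimensions flow through the ensemble:

```python
import numpy as np

rng = np.random.default_rng(0)

def dense(x, w, b):
    """A fully-connected layer: x @ w + b."""
    return x @ w + b

# Stand-ins for deep features from three different pre-trained CNNs.
f1, f2, f3 = rng.normal(size=512), rng.normal(size=1024), rng.normal(size=2048)

# Per-branch fully-connected layers reduce each feature to 128 dims.
r1 = dense(f1, rng.normal(size=(512, 128)), np.zeros(128))
r2 = dense(f2, rng.normal(size=(1024, 128)), np.zeros(128))
r3 = dense(f3, rng.normal(size=(2048, 128)), np.zeros(128))

# Concatenate the reduced features and predict (4 classes assumed here).
fused = np.concatenate([r1, r2, r3])                            # 384-dim
logits = dense(fused, rng.normal(size=(384, 4)), np.zeros(4))
probs = np.exp(logits - logits.max())                           # stable softmax
probs = probs / probs.sum()
```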

A Study on Improving Performance of Software Requirements Classification Models by Handling Imbalanced Data (불균형 데이터 처리를 통한 소프트웨어 요구사항 분류 모델의 성능 개선에 관한 연구)

  • Jong-Woo Choi;Young-Jun Lee;Chae-Gyun Lim;Ho-Jin Choi
    • KIPS Transactions on Software and Data Engineering / v.12 no.7 / pp.295-302 / 2023
  • Software requirements written in natural language may be understood differently by different stakeholders. When designing an architecture based on quality attributes, it is necessary to classify quality attribute requirements accurately, because an efficient design is possible only when appropriate architectural tactics are selected for each quality attribute. As a result, although many natural language processing models have been studied for the classification of requirements, which is a high-cost task, few studies have addressed improving classification performance on imbalanced quality attribute datasets. In this study, we first show through experiments that a classification model can automatically classify a Korean requirement dataset. Based on these results, we explain that data augmentation through EDA (Easy Data Augmentation) techniques and undersampling strategies can mitigate the imbalance of quality attribute datasets, and show that they are effective for classifying requirements. The F1-score improved by 5.24 percentage points, indicating that handling imbalanced data helps classification models classify Korean requirements. Furthermore, detailed experiments with EDA illustrate which operations help improve classification performance.
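EDA and undersampling, as used in the study above, are simple enough to sketch. This is a generic illustration rather than the authors' code (the example sentences and labels are invented): two of the four EDA operations, plus a per-class undersampler that trims every class to the smallest class size:

```python
import random

def random_swap(words, n=1):
    """EDA operation: swap two random word positions n times."""
    words = words[:]
    for _ in range(n):
        i, j = random.randrange(len(words)), random.randrange(len(words))
        words[i], words[j] = words[j], words[i]
    return words

def random_deletion(words, p=0.1):
    """EDA operation: drop each word with probability p, keeping at least one."""
    kept = [w for w in words if random.random() > p]
    return kept or [random.choice(words)]

def undersample(dataset):
    """Trim every class to the size of the smallest class."""
    by_label = {}
    for text, label in dataset:
        by_label.setdefault(label, []).append(text)
    n = min(len(texts) for texts in by_label.values())
    return [(t, lab) for lab, texts in by_label.items() for t in texts[:n]]

# Toy imbalanced requirement dataset: 5 performance vs 2 usability items.
data = [("the system shall respond within one second", "performance")] * 5 + \
       [("the ui shall be easy to use", "usability")] * 2
balanced = undersample(data)
```

After undersampling, both classes contribute two items each; in the study, augmentation (EDA) grows the minority class instead of, or in addition to, shrinking the majority class.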

Purposes, Results, and Types of Software Post Life Cycle Changes

  • Koh, Seokha;Han, Man Pil
    • Journal of Information Technology Applications and Management / v.22 no.3 / pp.143-167 / 2015
  • This paper addresses how the total life cycle cost may be minimized and how the cost should be allocated between the acquirer and the developer. This paper differentiates post life cycle change (PLCC) endeavors from PLCC activities, rigorously classifies PLCC endeavors according to their results, and rigorously defines the life cycle cost of a software product. This paper also reviews classical definitions of software 'maintenance' types and proposes a new typology of PLCC activities. The proposed classification schemes are exhaustive and mutually exclusive, and provide a new paradigm for reviewing the existing literature on software cost estimation, software 'maintenance,' software evolution, and software architecture from a new perspective. This paper argues that the long-term interest of the acquirer is not protected properly, because the warranty period is typically too short and because the main concern of warranty service is removing the defects that are detected easily. Based on the observation that software defects are caused solely by errors the developer has committed, whereas hardware defects are often induced by use (hence this paper cautiously proposes not to use the term 'maintenance' at all for software), this paper argues that the cost of removing software defects should not be borne by the acquirer.

An Activity-Centric Quality Model of Software

  • Koh, Seokha
    • Journal of Information Technology Applications and Management / v.26 no.2 / pp.111-123 / 2019
  • In this paper, software activity, software activity instance, and the quality of an activity instance are defined as the 'activity which is performed on the software product by a person or a group of persons,' the 'distinctive and individual performance of a software activity,' and the 'performer's evaluation of how good or bad his/her own activity instance is,' respectively. The representative values of the instance quality population associated with a product and its sub-populations are defined as the (software) activity quality and the activity quality characteristics of the product, respectively. The activity quality model in this paper classifies activity quality characteristics according to the classification hierarchy of software activities by goal. In the model, a quality characteristic can have two types of sub-characteristics: a special sub-characteristic, which is simultaneously also its super-characteristic, and a component sub-characteristic, which is not its super-characteristic but a part of it. The activity quality model is parsimonious, coherent, and easy to understand and use. It can serve as a cornerstone on which a software quality body of knowledge can be built, constituted of a set of models that are parsimonious, coherent, and easy to understand and use, together with theories explaining the cause-and-effect relationships among the models. This body of knowledge can be called the (grand) activity-centric quality model of software.

An Application of Canonical Correlation Analysis Technique to Land Cover Classification of LANDSAT Images

  • Lee, Jong-Hun;Park, Min-Ho;Kim, Yong-Il
    • ETRI Journal / v.21 no.4 / pp.41-51 / 1999
  • This research is an attempt to obtain more accurate land cover information from LANDSAT images. Canonical correlation analysis, which has not been widely used in the image classification community, was applied to the classification of LANDSAT images. It was found that training areas are easier to select for classification using canonical correlation analysis than for the maximum likelihood classifier of $ERDAS^{(R)}$ software. In other words, the selected positions of training areas hardly affect the classification results when canonical correlation analysis is used. When the same training areas are used, the mapping accuracy of the canonical correlation classification results, compared with the ground truth data, is not lower than that of the maximum likelihood classifier. A kappa analysis of the canonical correlation classifier and the maximum likelihood classifier showed that the two methods are alike in classification accuracy. However, the canonical correlation classifier has advantages over the maximum likelihood classifier in its classification characteristics. Therefore, classification using canonical correlation analysis as applied in this research is effective for the extraction of land cover information from LANDSAT images and can be put to practical use.
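Canonical correlation analysis itself is compact enough to sketch. The function below is a generic CCA computation in numpy, not the paper's code (which applies CCA to LANDSAT band data with selected training areas): it whitens each variable block with a Cholesky factor of its covariance and reads the canonical correlations off the singular values of the whitened cross-covariance:

```python
import numpy as np

def canonical_correlations(X, Y):
    """Return the canonical correlations between the columns of
    X (n x p) and Y (n x q), in decreasing order."""
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    n = len(X)
    Sxx, Syy = X.T @ X / n, Y.T @ Y / n              # within-block covariances
    Sxy = X.T @ Y / n                                # cross-covariance
    Lx, Ly = np.linalg.cholesky(Sxx), np.linalg.cholesky(Syy)
    K = np.linalg.solve(Lx, Sxy) @ np.linalg.inv(Ly).T   # whitened cross-cov
    return np.linalg.svd(K, compute_uv=False)

# Toy check: one column of Y is a near-copy of a column of X,
# so the first canonical correlation should be close to 1.
rng = np.random.default_rng(1)
X = rng.normal(size=(500, 3))
Y = np.column_stack([X[:, 0] + 0.01 * rng.normal(size=500),
                     rng.normal(size=500)])
rho = canonical_correlations(X, Y)
```

Canonical correlations always lie in [0, 1]; in a classification setting the canonical variates derived this way relate pixel band values to class membership indicators.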


The Impact of Software and Medical Industry on Korea Economy (소프트웨어산업과 의료산업이 한국경제에 미치는 파급효과)

  • Yun, Eungyeong;Moon, Jun Hwan;Choi, Hangsok
    • Journal of Information Technology Services / v.17 no.2 / pp.49-67 / 2018
  • This study compares the economic impact of the software and medical industries using the Input-Output Table published by the Bank of Korea. We classify the software and medical industries according to the 9th Korean Standard Industrial Classification and use linkage effects, the value added inducement coefficient, and the labor inducement coefficient to analyze economic impact. First, the software and medical industries differ in their backward and forward linkage effects. Second, both have a higher value added inducement coefficient than the average of all industries. Third, both have a higher labor inducement coefficient than the average of all industries and a similar effect on labor induction. According to the results of this study, the software and medical industries have a high economic impact on the Korean economy and should therefore be intensively fostered through policy support.
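The linkage-effect machinery the study applies is standard input-output analysis and easy to sketch. The coefficient matrix below is a made-up 3-sector example, not the Bank of Korea table; it shows how backward and forward linkage indices fall out of the Leontief inverse:

```python
import numpy as np

# Hypothetical input coefficient matrix for 3 sectors:
# A[i, j] = input required from sector i per unit of output of sector j.
A = np.array([[0.10, 0.20, 0.05],
              [0.15, 0.10, 0.25],
              [0.05, 0.30, 0.10]])

L = np.linalg.inv(np.eye(3) - A)     # Leontief inverse (I - A)^-1

col_sums = L.sum(axis=0)             # total output induced per unit final demand
row_sums = L.sum(axis=1)             # total output a sector supplies economy-wide
mean_all = L.sum() / 3               # economy-wide average per sector

backward_linkage = col_sums / mean_all   # > 1 means strong backward linkage
forward_linkage = row_sums / mean_all    # > 1 means strong forward linkage
```

By construction both indices average to 1 across sectors, so values above 1 mark sectors (like software or medical in the study) whose demand or supply pull is stronger than the economy-wide average.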

A Study on TensorFlow based Image Processing: Focusing by Pill Classification (텐서플로우 기반 이미지 프로세싱에 대한 연구: 알약분류 중심으로)

  • Joe, Soo-Hyoung;Kang, Jin-Goo;Kim, Jung-Hoon;Lee, Sung-Jun;Kim, Gyeyoung;Kim, Youngjong
    • Proceedings of the Korea Information Processing Society Conference / 2019.05a / pp.559-561 / 2019
  • Image processing is the series of operations by which a computer creates a new image from, or modifies, an existing image. We take an image of a pill, modify it so that a machine can recognize it, and study a TensorFlow-based image processing method that can distinguish the pill captured in the photograph and provide the user with information about that pill.
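As an illustration of the kind of preprocessing this study describes (modifying pill photos so a model can consume them), here is a minimal numpy sketch. It is not the study's TensorFlow code; the center-crop, stride-based downsampling, and 32 x 32 target size are arbitrary choices for the example:

```python
import numpy as np

def preprocess(img, target=32):
    """Center-crop a square region, downsample by striding, and scale
    pixel values to [0, 1] so a classifier can consume the image."""
    h, w = img.shape[:2]
    s = min(h, w)
    top, left = (h - s) // 2, (w - s) // 2
    crop = img[top:top + s, left:left + s]      # square center crop
    step = max(1, s // target)
    small = crop[::step, ::step][:target, :target]  # crude stride resize
    return small.astype(np.float32) / 255.0         # normalize to [0, 1]

# Fake 48x64 RGB pill photo stands in for a real image file.
pill = np.random.randint(0, 256, size=(48, 64, 3), dtype=np.uint8)
x = preprocess(pill)
```

In the actual pipeline a proper interpolating resize (e.g. TensorFlow's image resizing ops) would replace the stride trick used here for brevity.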