• Title/Summary/Keyword: New Category

Search Results: 835

The Text Analysis of Plasticity Expressed in the Modern Art to Wear (Part I) - Focused on the West Art Works since 1980s - (현대 예술의상에 표현된 조형성의 텍스트 분석 (제1보) - 1980년대 이후 서구작가 작품을 중심으로 -)

  • Seo Seung Mi;Yang Sook Hi
    • Journal of the Korean Society of Clothing and Textiles
    • /
    • v.29 no.6
    • /
    • pp.793-804
    • /
    • 2005
  • The new paradigm of the 21st century demands an openly different world of formative ideologies with respect to art and design. The purpose of this study is to comprehend the aesthetic essence of clothing as art, through an investigation of the artistic theories put forward by philosophers of art. Art to Wear was categorized into styles in order to understand its artistic meaning as well as to analyze its character. On the foundation of semiotic theory, the features of Art to Wear and its analysis categories were discussed in the context of Charles Morris's three dimensions of semiotic analysis. The conclusions of the research are as follows. The features and analysis categories of Art to Wear from a semiotic perspective were divided into the syntactic, semantic, and pragmatic dimensions. The analytical categorization from the syntactic dimension fell into the categories of topology, shape, and color. The semantic dimension of Art to Wear was divided into the categories of denotation and connotation. In addition, the analytical categorization of the pragmatic dimension was divided into a delivering function and a common function.

Creation of Market Categories through Product Strategy: A Text-Mining Approach

  • IMAI, Marina
    • The Journal of Asian Finance, Economics and Business
    • /
    • v.8 no.2
    • /
    • pp.439-451
    • /
    • 2021
  • The study aims to investigate the process by which companies intentionally create market categories through product strategy. Much of the research on market category formation focuses on the spontaneous emergence of market categories, with only a few studies addressing their intentional creation. In this study, I therefore sought to understand the logic by which companies intentionally create market categories, by treating the process through which market categories are formed as a sensemaking process, and by treating the behavior of a company intentionally forming a market category as an effort to manage that sensemaking process. For the empirical study, we conducted an exploratory case analysis through content analysis of company press releases and consumer reviews. Market categories can potentially be formed or changed if the way in which they are shared among market participants can be changed. In this study, we identified two sense-giving activities through which firms create market categories: (1) reorganizing the market categories that flat-panel TV manufacturers in the North American market have attempted to form into subcategories of smart TVs, and (2) connecting them to surrounding categories through strategic labeling to establish new categories.

An expanded Matrix Factorization model for real-time Web service QoS prediction

  • Hao, Jinsheng;Su, Guoping;Han, Xiaofeng;Nie, Wei
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.15 no.11
    • /
    • pp.3913-3934
    • /
    • 2021
  • Real-time prediction of Web service quality of service (QoS) provides more convenience for Web services in cloud environments, but real-time QoS prediction faces severe challenges, especially in cold-start situations. The existing literature on real-time QoS prediction ignores the fact that the QoS of a user/service is related to the QoS of other users/services; for example, users/services belonging to the same category group will have similar QoS values. These methods ignore the group relationship because of the complexity it adds to the model. Based on this, we propose a real-time Matrix Factorization based Clustering model (MFC), which uses category information as a new regularization term in the loss function. Specifically, to preserve the real-time character of the prediction model and to minimize its complexity, we first map the QoS values of a large number of users/services to a lower-dimensional space with PCA, then use the K-means algorithm to calculate user/service category information, averaging the results to obtain a stable final clustering. Extensive experiments on real-world datasets demonstrate that MFC outperforms other state-of-the-art prediction algorithms.
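
The PCA-then-K-means clustering step and the cluster regularization term described in the abstract can be sketched roughly as follows. This is a minimal illustration on a toy dense QoS matrix, not the paper's implementation: the hyperparameters, the gradient-descent update, and the way the centroid term enters the loss are all assumptions.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Toy QoS matrix: rows = users, columns = services (e.g. response times).
Q = rng.random((20, 15))

# Step 1: map user QoS vectors to a lower-dimensional space (PCA),
# then cluster them to obtain user category information (K-means).
user_low = PCA(n_components=3, random_state=0).fit_transform(Q)
user_cat = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(user_low)

# Step 2: matrix factorization with an extra regularization term that
# pulls each user's latent vector toward its cluster centroid.
k, lr, lam, beta = 5, 0.01, 0.05, 0.05   # illustrative hyperparameters
U = rng.normal(scale=0.1, size=(Q.shape[0], k))   # user latent factors
S = rng.normal(scale=0.1, size=(Q.shape[1], k))   # service latent factors
for _ in range(200):
    E = Q - U @ S.T                                # reconstruction error
    centroids = np.vstack([U[user_cat == c].mean(axis=0) for c in range(4)])
    U += lr * (E @ S - lam * U - beta * (U - centroids[user_cat]))
    S += lr * (E.T @ U - lam * S)

rmse = np.sqrt(np.mean((Q - U @ S.T) ** 2))
```

The centroid term (`beta * (U - centroids[user_cat])`) is what encodes the abstract's observation that users in the same category group have similar QoS profiles.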

A Multi-category Task for Bitrate Interval Prediction with the Target Perceptual Quality

  • Yang, Zhenwei;Shen, Liquan
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.15 no.12
    • /
    • pp.4476-4491
    • /
    • 2021
  • Video service providers tend to face user network problems in the process of transmitting video streams, and they strive to provide users with superior video quality in a limited-bitrate environment. It is therefore necessary to accurately determine the target bitrate range of a video under different quality requirements. Several schemes have recently been proposed to meet this requirement; however, they do not take visual perception into account. In this paper, we propose a new multi-category model that accurately predicts the target bitrate range for a target visual quality by machine learning. First, a dataset is constructed to generate multi-category models by machine learning; quality score ladders and the corresponding bitrate-interval categories are defined in the dataset. Second, several types of spatial-temporal features related to the VMAF evaluation metric and visual factors are extracted and processed statistically for classification. Finally, bitrate prediction models trained on the dataset with a random forest classifier can be used to accurately predict the target bitrate of input videos at the target video quality. The classification accuracy of the model reaches 0.705, and video compressed at the bitrate predicted by the model achieves the target perceptual quality.
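
The overall shape of this approach, features in, bitrate-interval class out via a random forest, can be sketched as below. The features and the labeling rule are synthetic stand-ins invented for illustration; the paper's actual spatial-temporal features, quality ladders, and dataset are not reproduced here.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)

# Synthetic stand-ins for per-video spatial-temporal features
# (e.g. texture energy, motion magnitude) -- names are assumptions.
n = 600
X = rng.random((n, 4))

# Toy labeling rule: more complex content (higher feature sum) needs a
# higher bitrate interval to reach the same perceptual-quality target.
y = np.digitize(X.sum(axis=1), bins=[1.2, 2.0, 2.8])  # 4 bitrate-interval classes

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
accuracy = clf.score(X_te, y_te)
```

Given a new video's feature vector, `clf.predict` would return the bitrate-interval category expected to hit the quality target.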

Quality Control Program and Its Results of Korean Society for Cytopathologists (대한세포병리학회 정도관리 현황 및 결과)

  • Lee, Hye-Kyung;Kim, Sung-Nam;Khang, Shin-Kwang;Kang, Chang-Suk;Yoon, Hye-Kyoung
    • The Korean Journal of Cytopathology
    • /
    • v.19 no.2
    • /
    • pp.65-71
    • /
    • 2008
  • In Korea, the quality control (QC) program for cytopathology was introduced in 1995. The program consists of a checklist for cytopathology departments; analysis of the participating institutions' QC data, including annual data on cytologic examinations, the distribution of gynecological cytologic diagnoses based on The Bethesda System 2001, and data on cytologic-histological correlation in the gynecological field; and an evaluation of diagnostic accuracy. The diagnostic accuracy program has been performed three times per year using gynecological, body fluid, and fine needle aspiration cytologic slides. We report here the institutional QC data and the evaluation of diagnostic accuracy since 2004, as well as a new strategy for quality control and assurance in the cytologic field. The diagnostic accuracy results of both the participating institutions and the QC committee were as follows: Categories 0 and A, about 94%; Category B, 4-5%; Category C, less than 2%. As a whole, the cytologic diagnostic accuracy is relatively satisfactory. In 2008, on-site evaluation of pathology and cytology laboratories based on the "Quality Assurance Program for Pathology Services" is under way, and a new method using virtual slides or image files to determine diagnostic accuracy will be introduced in November 2008.

Seismic Design of Reinforced Concrete Structures of Limited Ductility in New Zealand Standard (뉴질랜드 기준에서의 제한된 연성의 RC 구조물 내진설계)

  • 이한선
    • Proceedings of the Earthquake Engineering Society of Korea Conference
    • /
    • 2000.10a
    • /
    • pp.288-295
    • /
    • 2000
  • As the level of earthquake intensity in Korea is considered moderate, some structures or structural elements may be subject to a reduced ductility demand, in contrast to structures in regions of high seismicity, owing to the large inherent strength induced by gravity loads. The New Zealand Standard (NZS) deals with these structures within the category of structures of limited ductility. This paper briefly reviews the concept of structures of limited ductility in the NZS and its applicability to the Korean case. A structural wall system, used as the structural system for typical apartments, is taken as an example for illustration.


An Experimental Study on the Modal Test of Gas Turbine Blade Integrity (가스터빈 블레이드 MODAL TEST를 위한 실험적 방법에 관한 연구)

  • 조철환;양경현;김성휘
    • Proceedings of the Korean Society for Noise and Vibration Engineering Conference
    • /
    • 2001.11b
    • /
    • pp.1388-1392
    • /
    • 2001
  • In this paper, an experimental method combining several modal analyses was devised to identify the vibration characteristics of gas turbine (G/T) blades in power plants. This method is also being applied to establish a standard category for the natural frequencies of newly developed blades, so that an acceptance margin avoiding resonance due to the nozzle wake force can be established for new blades. The results of this study are expected to improve the availability of G/T blades.


Aspect-Based Sentiment Analysis Using BERT: Developing Aspect Category Sentiment Classification Models (BERT를 활용한 속성기반 감성분석: 속성카테고리 감성분류 모델 개발)

  • Park, Hyun-jung;Shin, Kyung-shik
    • Journal of Intelligence and Information Systems
    • /
    • v.26 no.4
    • /
    • pp.1-25
    • /
    • 2020
  • Sentiment Analysis (SA) is a Natural Language Processing (NLP) task that analyzes the sentiments consumers or the public feel about an arbitrary object from written texts. Furthermore, Aspect-Based Sentiment Analysis (ABSA) is a fine-grained analysis of the sentiments towards each aspect of an object. Since it has more practical value in terms of business, ABSA is drawing attention from both academic and industrial organizations. When there is a review that says "The restaurant is expensive but the food is really fantastic", for example, general SA evaluates the overall sentiment towards the 'restaurant' as 'positive', while ABSA identifies the restaurant's 'price' aspect as 'negative' and its 'food' aspect as 'positive'. Thus, ABSA enables a more specific and effective marketing strategy. In order to perform ABSA, it is necessary to identify the aspect terms or aspect categories included in the text and to judge the sentiments towards them. Accordingly, there are four main areas in ABSA: aspect term extraction, aspect category detection, Aspect Term Sentiment Classification (ATSC), and Aspect Category Sentiment Classification (ACSC). ABSA is usually conducted by extracting aspect terms and then performing ATSC to analyze sentiments for the given aspect terms, or by extracting aspect categories and then performing ACSC to analyze sentiments for the given aspect categories. Here, an aspect category is expressed in one or more aspect terms, or indirectly inferred from other words. In the preceding example sentence, 'price' and 'food' are both aspect categories, and the aspect category 'food' is expressed by the aspect term 'food' included in the review. If the review sentence includes 'pasta', 'steak', or 'grilled chicken special', these can all be aspect terms for the aspect category 'food'. As such, an aspect category referred to by one or more specific aspect terms is called an explicit aspect.
On the other hand, an aspect category like 'price', which has no specific aspect term but can be indirectly inferred from an emotional word such as 'expensive', is called an implicit aspect. So far, the term 'aspect category' has been used to avoid confusion with 'aspect term'; from now on, we will treat 'aspect category' and 'aspect' as the same concept and use the word 'aspect' for convenience. One thing to note is that ATSC analyzes the sentiment towards given aspect terms, so it deals only with explicit aspects, whereas ACSC treats both explicit and implicit aspects. This study seeks answers to the following issues, ignored in previous studies, when applying the BERT pre-trained language model to ACSC, and derives superior ACSC models. First, is it more effective to reflect the output vectors of the aspect category tokens than to use only the final output vector of the [CLS] token as the classification vector? Second, is there any performance difference between QA (Question Answering) and NLI (Natural Language Inference) types in the sentence-pair configuration of the input data? Third, is there any performance difference according to the order of the sentence containing the aspect category in the QA- or NLI-type sentence-pair input? To achieve these research objectives, we implemented 12 ACSC models and conducted experiments on 4 English benchmark datasets. As a result, ACSC models that outperform existing studies without expanding the training dataset were derived. In addition, it was found that it is more effective to reflect the output vector of the aspect category token than to use only the output vector of the [CLS] token as the classification vector. It was also found that QA-type input generally provides better performance than NLI, and that the order of the sentence containing the aspect category in the QA type does not affect performance.
There may be some differences depending on the characteristics of the dataset, but when using NLI type sentence-pair input, placing the sentence containing the aspect category second seems to provide better performance. The new methodology for designing the ACSC model used in this study could be similarly applied to other studies such as ATSC.
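
As a rough illustration of the QA- versus NLI-type sentence-pair configurations the study compares, the two input layouts can be sketched as below. The templates and the plain-string rendering of BERT's segment layout are assumptions for illustration, not the paper's exact formats.

```python
# Sentence-pair input construction for ACSC (templates are assumptions).
review = "The restaurant is expensive but the food is really fantastic"
aspect = "price"

# QA-style auxiliary sentence: pose the aspect as a question.
qa_pair = (review, f"what do you think of the {aspect} ?")
# NLI-style auxiliary sentence: state the aspect as a bare pseudo-hypothesis.
nli_pair = (review, aspect)

def to_bert_input(pair):
    """Render BERT's two-segment layout: [CLS] sent_a [SEP] sent_b [SEP]."""
    a, b = pair
    return f"[CLS] {a} [SEP] {b} [SEP]"

qa_input = to_bert_input(qa_pair)
nli_input = to_bert_input(nli_pair)
```

Swapping the two elements of each pair yields the alternative sentence orders whose effect on performance the study examines; the sentiment classifier then reads either the [CLS] output vector or the output vectors at the aspect tokens.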

A Study on Developing Evaluation Indicators of University Libraries in Digital Environment (디지털 환경에서 대학도서관 평가지표 개발에 관한 연구)

  • 곽병희
    • Proceedings of the Korean Society for Information Management Conference
    • /
    • 2002.11a
    • /
    • pp.23-65
    • /
    • 2002
  • This study considers varying factors of the internal and external informational and operational environments of libraries, and develops new evaluation indicators for university libraries in digital environments. To this end, previous works were investigated, and a Delphi study and our own analysis were performed. The main research results are as follows. First, the Delphi method, adopted to verify the evaluation items and indicators determined by the literature review, shows that the values for each evaluation category are greater than 3.00, the overall average is 4.02, and the standard deviation ranges from 0.40 to 0.62 for each category. This means that the evaluation indicators are valid. Second, a factor analysis was performed to verify the construct validity of the evaluation indicators. As a result, the cumulative variance of the evaluation indicators, consisting of 11 dimensions per factor, is 72.733%; this result shows that the validity of these indicators is very reliable. Third, a t-test and one-way ANOVA were performed at a significance level of 0.05 to verify differences among the librarians' views on the degree of importance of the evaluation indicators. The results show that the evaluation indicators are appropriate, since there is no significant difference. Based on the Delphi study and our own analysis, we developed new evaluation indicators consisting of 7 evaluation categories, 35 evaluation items, and 92 evaluation indicators.
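
The third validation step, a one-way ANOVA checking that librarian groups do not differ significantly in their importance ratings, can be sketched as follows. The data here are illustrative random draws, not the study's survey responses.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Hypothetical importance ratings (on a 1-5 scale) of one evaluation
# indicator from three librarian groups -- the numbers are illustrative.
group_a = rng.normal(4.0, 0.5, 30)
group_b = rng.normal(4.1, 0.5, 30)
group_c = rng.normal(3.9, 0.5, 30)

# One-way ANOVA at significance level 0.05: a p-value above 0.05 means
# no significant difference between the groups' views on this indicator.
f_stat, p_value = stats.f_oneway(group_a, group_b, group_c)
no_significant_difference = p_value > 0.05
```

In the study, an indicator passing this check for every group comparison is taken as evidence that the indicator is appropriate across librarian perspectives.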


Diagnostic Factor Analysis for Objective Assessment of Cleft Lip Nose Deformity (구순열 환자 코변형(cleft lip nose deformity)의 정량적 평가를 위한 진단 요인 분석)

  • Nam, Ki-Chang;Kim, Soo-Chan;Kim, Sung-Woo;Ji, Hyo-Chul;Rah, Dong-Kyun;Kim, Deok-Won
    • Proceedings of the KIEE Conference
    • /
    • 2004.11c
    • /
    • pp.3-5
    • /
    • 2004
  • Cleft lip is one of the most common congenital deformities of the craniofacial region. Despite the many reports on the outcomes of various surgical techniques from individual medical centers, evaluation of the outcome has been based on subjective observation because of the lack of an objective evaluation system. Therefore, a new technique for objective, scientific evaluation of the nasal deformity in secondary cleft lip and nose deformity is critical to improving the management of cleft patients, including the decision on the optimal age of operation and the surgical technique, as well as the evaluation of the outcome. In this study, a new method was proposed to evaluate the nasal deformity using the nostril angle, distance, and area measured from patient images. The images were also evaluated by three expert plastic surgeons and rated on a five-point scale. Measurement results were compared between each category and the surgeons' evaluations, and the coefficients of each category were statistically tested. As a result, the normalized overlap area of the right and left nostrils and the distance ratio between the two nostril centers showed high correlation with the evaluations of the plastic surgeons.
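
The "normalized overlap area of the right and left nostrils" can be understood as a symmetry index: mirror one nostril region across the facial midline and measure how much it overlaps the other. The binary masks below are illustrative; the actual nostril regions would come from segmented patient images.

```python
import numpy as np

# Toy binary masks standing in for segmented nostril regions.
left = np.zeros((10, 10), dtype=bool)
left[2:7, 1:4] = True            # left nostril region
right = np.zeros((10, 10), dtype=bool)
right[2:8, 6:9] = True           # right nostril region (slightly larger)

# Mirror the right nostril across the vertical midline, then compute the
# overlap normalized by the union (1.0 = perfectly symmetric nostrils).
mirrored = right[:, ::-1]
overlap = np.logical_and(left, mirrored).sum()
union = np.logical_or(left, mirrored).sum()
symmetry = overlap / union
```

A lower `symmetry` value indicates greater asymmetry, which is expected to track the surgeons' deformity ratings.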
