• Title/Summary/Keyword: Document Frequency


Groupware Current Status Analysis Ⅰ (그룹웨어의 현황 분석 Ⅰ)

  • Kim, Sun-Uk;Gim, Bong-Jin
    • IE interfaces
    • /
    • v.10 no.3
    • /
    • pp.75-93
    • /
    • 1997
  • Unlike individual applications, it is extremely hard to obtain user requirements for group systems, since very complicated dynamics exist within groups. This may result in a proliferation of products with a broad range of contents. Thus, this study presents a comparative analysis of groupware products. As a result, these products have been categorized into three areas: cooperation/document management systems, collaborative writing systems, and decision-making/meeting systems. The systems reviewed here focus on cooperation/document management systems; the other two areas will be dealt with in detail in Part Ⅱ. The first area falls into two large categories, proprietary groupware products and intranet groupware products; however, a natural convergence between these two categories has been observed. Consequently, the comparative analysis has been performed in terms of the functions provided in the two categories and in a combined category. Each group of functions has been divided into three parts: basic functions, quasi-basic functions, and others. This division has been made based on the frequency with which the functions are provided in the products. Under a stricter rule, the basic functions comprise electronic mail, sanction, bulletin board, document management, scheduling, security, Web browser, and Internet connectivity. This study also provides a framework for an integrated functional model of groupware systems. The basic functions are merged into the model; however, the model is flexible enough to partially include the quasi-basic functions in addition to the basic functions. In the future, it is expected that a large number of products will stem from modifications of this functional model.


A Study on Analysis of Topic Modeling using Customer Reviews based on Sharing Economy: Focusing on Sharing Parking (공유경제 기반의 고객리뷰를 이용한 토픽모델링 분석: 공유주차를 중심으로)

  • Lee, Taewon
    • Journal of Korea Society of Industrial Information Systems
    • /
    • v.25 no.3
    • /
    • pp.39-51
    • /
    • 2020
  • This study examines social issues and consumer awareness of shared parking through text mining. Topics and their keywords were extracted and analyzed using TF-IDF (term frequency-inverse document frequency) and LDA (latent Dirichlet allocation) techniques. As a result of categorization by topic, citizens' concerns such as local government agreements, parking space negotiations, parking culture improvement, and citizen participation were found to play an important role in implementing shared parking services. This study is strongly differentiated from previous studies, which conducted exploratory research using corporate and regional cases, and can be said to make a high academic contribution. In addition, based on the results obtained through the LDA analysis, the study offers a practical contribution in that its findings can be applied to establishing sharing economy policies for revitalizing the local economy.
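
The TF-IDF and LDA pipeline named in this abstract is a standard one; a minimal sketch, assuming scikit-learn and invented placeholder review strings (not the paper's data), might look like the following.

```python
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# Hypothetical review texts standing in for the crawled shared-parking reviews.
reviews = [
    "parking space near the station is always full",
    "the shared parking app made finding a space easy",
    "residents worry about strangers parking in the neighborhood",
    "the city should negotiate more shared parking agreements",
    "cheap parking but the app crashed during payment",
]

# LDA works on raw term counts; TF-IDF is computed separately to weight keywords.
count_vec = CountVectorizer(stop_words="english")
doc_term = count_vec.fit_transform(reviews)

tfidf = TfidfVectorizer(stop_words="english").fit_transform(reviews)  # keyword weights per review

lda = LatentDirichletAllocation(n_components=3, random_state=0)  # 3 topics, an illustrative choice
lda.fit(doc_term)

# Show the top words of each topic.
terms = count_vec.get_feature_names_out()
for k, weights in enumerate(lda.components_):
    top = [terms[i] for i in weights.argsort()[::-1][:5]]
    print(f"topic {k}: {', '.join(top)}")
```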

A Study of the Influence of Choice of Record Fields on Retrieval Performance in the Bibliographic Database (서지 데이터베이스에서의 레코드 필드 선택이 검색 성능에 미치는 영향에 관한 연구)

  • Heesop Kim
    • Journal of the Korean Society for Library and Information Science
    • /
    • v.35 no.4
    • /
    • pp.97-122
    • /
    • 2001
  • This empirical study investigated the effect of the choice of record field(s) upon which to search on retrieval performance for a large operational bibliographic database. The query terms used in the study were identified algorithmically from each target set in four different ways: (1) controlled terms derived from index term frequency weights, (2) uncontrolled terms derived from index term frequency weights, (3) controlled terms derived from inverse document frequency weights, and (4) uncontrolled terms derived from inverse document frequency weights. Six possible choices of record field were recognised. Using INSPEC terminology, these were the fields: (1) Abstract, (2) 'Anywhere' (i.e., all fields), (3) Descriptors, (4) Identifiers, (5) 'Subject' (i.e., 'Descriptors' plus 'Identifiers'), and (6) Title. The study was undertaken in an operational web-based IR environment using the INSPEC bibliographic database. Retrieval performance was evaluated using the D measure (bivariate in recall and precision). The main findings were that: (1) there exist significant differences in search performance arising from the choice of field, using the mean performance measure as the criterion statistic; (2) the rankings of field choices for each of these performance measures are sensitive to the choice of query; and (3) the optimal choice of field for the D measure is Title.

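Since document frequency and its inverse are the recurring notions across these results, a short illustrative computation may help; it is a pure-Python sketch on invented toy documents, counting in how many documents each term occurs (DF) and deriving the weight idf(t) = log(N / df(t)).

```python
import math
from collections import Counter

# Toy corpus; the documents are invented for illustration only.
docs = [
    "information retrieval with inverse document frequency",
    "document frequency weights for query term selection",
    "title and abstract fields in a bibliographic database",
]

tokenized = [doc.split() for doc in docs]
n_docs = len(tokenized)

# Document frequency: the number of documents containing the term at least once.
df = Counter(term for tokens in tokenized for term in set(tokens))

# Inverse document frequency: rarer terms receive larger weights.
idf = {term: math.log(n_docs / freq) for term, freq in df.items()}

for term in sorted(idf, key=idf.get, reverse=True):
    print(f"{term:12s} df={df[term]} idf={idf[term]:.3f}")
```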

Evaluation Model for Gap Analysis Between NCS Competence Unit Element and Traditional Curriculum (NCS 능력단위 요소와 기존 교육과정 간 갭 분석을 위한 평가모델)

  • Kim, Dae-kyung;Kim, Chang-Bok
    • Journal of Advanced Navigation Technology
    • /
    • v.19 no.4
    • /
    • pp.338-344
    • /
    • 2015
  • The National Competency Standards (NCS) systematize and standardize the skills required to perform a job. For the NCS, learning modules have been developed and standardized by competence unit element, the unit of a specific job competency. To use an existing curriculum in education and training, a gap analysis against the competence unit elements is required. Such gap analyses have so far been carried out subjectively by experts, which raises problems of subjective judgment, limited accuracy, and temporal and spatial inefficiency caused by psychological factors. This paper proposes an automated evaluation model to resolve the problem of subjective evaluation. The model uses index term extraction, term frequency-inverse document frequency (TF-IDF) for feature value extraction, and a cosine similarity algorithm for the gap analysis between the existing curriculum and the competence unit elements. The paper presents a similarity mapping table between the existing curriculum and the competence unit elements. The evaluation model should be further complemented by an algorithm improved in terms of structural characteristics and speed.
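
The combination the abstract describes (TF-IDF feature values plus cosine similarity between curriculum and competence unit texts) can be sketched roughly as follows; scikit-learn is assumed, and the two text snippets are placeholders rather than actual NCS or curriculum content.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Placeholder texts standing in for a curriculum subject and NCS competence unit elements.
curriculum = ["database design, SQL programming and normalization practice"]
competence_units = [
    "design relational schemas and write SQL queries",
    "configure network routers and switches",
]

vectorizer = TfidfVectorizer()
matrix = vectorizer.fit_transform(curriculum + competence_units)

# Row 0 is the curriculum; the remaining rows are competence unit elements.
scores = cosine_similarity(matrix[0], matrix[1:])[0]
for unit, score in zip(competence_units, scores):
    print(f"{score:.3f}  {unit}")
```

A similarity mapping table of the kind the paper presents would simply tabulate these scores for every curriculum/competence-unit pair.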

Comparison of Product and Customer Feature Selection Methods for Content-based Recommendation in Internet Storefronts (인터넷 상점에서의 내용기반 추천을 위한 상품 및 고객의 자질 추출 성능 비교)

  • Ahn Hyung-Jun;Kim Jong-Woo
    • The KIPS Transactions:PartD
    • /
    • v.13D no.2 s.105
    • /
    • pp.279-286
    • /
    • 2006
  • One of the widely used methods for product recommendation in Internet storefronts is matching product features against target customer profiles. When using this method, it is very important to choose a suitable subset of features for recommendation efficiency and performance, which, however, has not been rigorously researched so far. In this paper, we utilize a dataset collected from a virtual shopping experiment in a Korean Internet book shopping mall to compare several popular feature selection methods from other disciplines for product recommendation: the vector-space model, TF-IDF (term frequency-inverse document frequency), the mutual information method, and singular value decomposition (SVD). The application of SVD showed the best performance in the analysis results.
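
Of the methods compared above, SVD is the least obvious to set up; a rough sketch using scikit-learn's TruncatedSVD on TF-IDF features is shown below, with invented product descriptions and an illustrative component count rather than the paper's book data.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD

# Hypothetical product descriptions; in the paper these would be books from the shopping mall.
products = [
    "mystery novel about a detective in seoul",
    "cookbook of traditional korean recipes",
    "introduction to machine learning with python",
    "travel guide for jeju island hiking",
]

tfidf = TfidfVectorizer().fit_transform(products)

# Reduce the sparse TF-IDF matrix to a small number of latent dimensions.
svd = TruncatedSVD(n_components=2, random_state=0)
latent = svd.fit_transform(tfidf)

print(latent.shape)                    # (4, 2): one 2-dimensional profile per product
print(svd.explained_variance_ratio_)   # how much variance each latent dimension keeps
```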

WCTT: Web Crawling System based on HTML Document Formalization (WCTT: HTML 문서 정형화 기반 웹 크롤링 시스템)

  • Kim, Jin-Hwan;Kim, Eun-Gyung
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.26 no.4
    • /
    • pp.495-502
    • /
    • 2022
  • Web crawlers, which are mainly used to collect text on the web today, are difficult to maintain and extend because researchers must implement different collection logic for each collection channel after analyzing the tags and styles of its HTML documents. To solve this problem, a web crawler should be able to collect text by formalizing HTML documents into the same structure. In this paper, we designed and implemented WCTT (Web Crawling system based on Tag path and Text appearance frequency), a web crawling system that collects text with a single collection logic by formalizing HTML documents based on tag path and text appearance frequency. Because WCTT collects text with the same logic for all collection channels, it is easy to maintain and to add collection channels. In addition, it provides a preprocessing function that removes stopwords and extracts only nouns, for uses such as keyword network analysis.
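
The idea of describing an HTML document by tag paths and text appearance frequency can be illustrated roughly as follows; BeautifulSoup is assumed, and both the HTML snippet and the "most frequent path wins" heuristic are simplifications for the example, not the WCTT implementation.

```python
from collections import Counter
from bs4 import BeautifulSoup

html = """
<html><body>
  <div class="nav"><a>Home</a><a>Login</a></div>
  <div class="post">
    <p>First paragraph of the article.</p>
    <p>Second paragraph of the article.</p>
    <p>Third paragraph of the article.</p>
  </div>
</body></html>
"""

soup = BeautifulSoup(html, "html.parser")

# Map each text node to the path of tag names from the root down to its parent.
path_counts = Counter()
texts_by_path = {}
for node in soup.find_all(string=True):
    text = node.strip()
    if not text:
        continue
    path = "/".join(p.name for p in reversed(list(node.parents))
                    if p.name not in (None, "[document]"))
    path_counts[path] += 1
    texts_by_path.setdefault(path, []).append(text)

# Simple heuristic: the tag path under which text appears most often holds the main content.
main_path, _ = path_counts.most_common(1)[0]
print(main_path)
print(texts_by_path[main_path])
```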

Reviews Analysis of Korean Clinics Using LDA Topic Modeling (토픽 모델링을 활용한 한의원 리뷰 분석과 마케팅 제언)

  • Kim, Cho-Myong;Jo, A-Ram;Kim, Yang-Kyun
    • The Journal of Korean Medicine
    • /
    • v.43 no.1
    • /
    • pp.73-86
    • /
    • 2022
  • Objectives: In the health care industry, the influence of online reviews is growing. Because medical services are provided mainly by providers, those services have been managed by hospitals and clinics, and direct promotion of medical services by providers is legally forbidden. For this reason, consumers such as patients and clients search through many reviews on the Internet to get information about hospitals, treatments, prices, etc. Online reviews can therefore be taken to indicate the quality of hospitals, and they should be analyzed for sustainable hospital marketing. Method: Using a Python-based crawler, we collected more than 14,000 reviews written by real patients who had experienced Korean medicine. To extract the most representative words, the reviews were divided into positive and negative sets; they were then pre-processed to keep only nouns and adjectives, and TF (term frequency), DF (document frequency), and TF-IDF (term frequency-inverse document frequency) were computed. Finally, to derive topics from the reviews, the aggregated extracted words were analyzed using LDA (latent Dirichlet allocation). To avoid overlapping topics, the number of topics was set using LDAvis visualization. Results and Conclusions: Six topics were extracted from the positive reviews and three from the negative reviews by the LDA topic model. The main factors constituting the topics were 1) response to patients and customers, 2) customized treatment (consultation) and management, and 3) the hospital/clinic environment.
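
The step of splitting reviews into positive and negative sets and ranking representative words by TF-IDF can be sketched as below; scikit-learn is assumed, the review strings are invented placeholders, and real use would first reduce the Korean text to nouns and adjectives with a morphological analyzer.

```python
from sklearn.feature_extraction.text import TfidfVectorizer

# Invented placeholder reviews; the paper's data are Korean clinic reviews.
positive = ["kind staff and detailed consultation", "clean clinic and effective treatment"]
negative = ["long waiting time and crowded lobby", "expensive price and short consultation"]

def top_terms(texts, k=5):
    vec = TfidfVectorizer()
    tfidf = vec.fit_transform(texts)
    # Sum TF-IDF weights over all documents in the class and rank the terms.
    scores = tfidf.sum(axis=0).A1
    terms = vec.get_feature_names_out()
    return sorted(zip(terms, scores), key=lambda t: t[1], reverse=True)[:k]

print("positive:", top_terms(positive))
print("negative:", top_terms(negative))
```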

An Ensemble Approach for Cyber Bullying Text messages and Images

  • Zarapala Sunitha Bai;Sreelatha Malempati
    • International Journal of Computer Science & Network Security
    • /
    • v.23 no.11
    • /
    • pp.59-66
    • /
    • 2023
  • Text mining (TM) is widely used to find patterns in various text documents. Cyber-bullying is the term used for abusing a person on an online or offline platform. Nowadays cyber-bullying has become more dangerous to people who use social networking sites (SNS). Cyber-bullying takes many forms, such as text messages, morphed images, morphed videos, etc. Preventing this type of abuse on online SNS is a very difficult task. Finding accurate text mining patterns gives better results in detecting cyber-bullying on any platform. Cyber-bullying arises when online SNS are used to send defamatory statements, to verbally bully other persons, or to abuse them in front of other SNS users. Deep learning (DL) is one of the significant domains used to dynamically extract and learn quality features from low-level text input. In this scenario, convolutional neural networks (CNN) are used for training on text data, images, and videos; CNN is a very powerful approach for training on these types of data and achieves better text classification. In this paper, an ensemble model is introduced that integrates term frequency-inverse document frequency (TF-IDF) and a deep neural network (DNN) with advanced feature-extraction techniques to classify bullying text, images, and videos. The proposed approach also focuses on reducing training time and memory usage, which helps improve classification.
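
As a rough illustration of the text branch only (TF-IDF features feeding a neural classifier), the sketch below uses scikit-learn's MLPClassifier as a stand-in for the paper's DNN; the toy messages and labels are invented, and the image/video components of the ensemble are not shown.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline

# Invented toy messages: 1 = bullying, 0 = benign.
messages = [
    "you are worthless and everyone hates you",
    "see you at the study group tonight",
    "nobody wants you here, just leave",
    "great job on the presentation today",
]
labels = [1, 0, 1, 0]

# TF-IDF features feeding a small feed-forward network.
model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),
    MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0),
)
model.fit(messages, labels)

print(model.predict(["everyone hates you, just leave"]))
```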

A Methodology for Automatic Multi-Categorization of Single-Categorized Documents (단일 카테고리 문서의 다중 카테고리 자동확장 방법론)

  • Hong, Jin-Sung;Kim, Namgyu;Lee, Sangwon
    • Journal of Intelligence and Information Systems
    • /
    • v.20 no.3
    • /
    • pp.77-92
    • /
    • 2014
  • Recently, numerous documents including unstructured data and text have been created due to the rapid increase in the usage of social media and the Internet. Each document is usually provided with a specific category for the convenience of the users. In the past, this categorization was performed manually. However, with manual categorization, not only is the accuracy of the categorization not guaranteed, but the categorization also requires a large amount of time and cost. Many studies have been conducted towards the automatic creation of categories to overcome the limitations of manual categorization. Unfortunately, most of these methods cannot be applied to categorizing complex documents with multiple topics because they assume that one document can be categorized into one category only. To overcome this limitation, some studies have attempted to categorize each document into multiple categories. However, they are also limited in that their learning process requires training on a multi-categorized document set, so they cannot be applied to multi-categorization of most documents unless multi-categorized training sets are provided. To remove this requirement of traditional multi-categorization algorithms, we propose a new methodology that can extend the category of a single-categorized document to multiple categories by analyzing the relationships among categories, topics, and documents. First, we find the relationship between documents and topics by using the results of topic analysis on single-categorized documents. Second, we construct a correspondence table between topics and categories by investigating the relationship between them. Finally, we calculate matching scores for each document against multiple categories. A document is classified into a certain category if and only if its matching score is higher than a predefined threshold; for example, a document can be classified into three categories whose matching scores all exceed the threshold. The main contribution of our study is that the methodology improves the applicability of traditional multi-category classifiers by generating multi-categorized documents from single-categorized documents. Additionally, we propose a module for verifying the accuracy of the proposed methodology. For performance evaluation, we performed intensive experiments with news articles. News articles are clearly categorized by theme, and the use of vulgar language and slang is lower than in other typical text documents. We collected news articles from July 2012 to June 2013. The number of articles varies greatly across categories, because readers have different levels of interest in each category and because events occur with different frequencies in each category. In order to minimize the distortion caused by the differing number of articles per category, we extracted 3,000 articles equally from each of the eight categories, so the total number of articles used in our experiments was 24,000. The eight categories were "IT Science," "Economy," "Society," "Life and Culture," "World," "Sports," "Entertainment," and "Politics."
By using the news articles that we collected, we calculated document/category correspondence scores by combining the topic/category and document/topic correspondence scores. The document/category correspondence score indicates the degree of correspondence of each document to a certain category. As a result, we could present two additional categories for each of the 23,089 documents. Precision, recall, and F-score were 0.605, 0.629, and 0.617 respectively when only the top 1 predicted category was evaluated, and 0.838, 0.290, and 0.431 when the top 1-3 predicted categories were considered. Interestingly, there was a large variation among the eight categories in precision, recall, and F-score.
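
The scoring step described above (document-topic distributions combined with a topic-category correspondence table, then thresholded) can be sketched numerically as follows; the matrices are tiny invented examples, not values from the paper's news data.

```python
import numpy as np

# Invented example: 2 documents x 3 topics (e.g. from LDA on single-categorized documents).
doc_topic = np.array([
    [0.7, 0.2, 0.1],
    [0.1, 0.5, 0.4],
])

# Invented topic/category correspondence table: 3 topics x 4 categories.
topic_category = np.array([
    [0.8, 0.1, 0.1, 0.0],   # topic 0 corresponds mostly to category 0
    [0.0, 0.6, 0.3, 0.1],
    [0.1, 0.1, 0.2, 0.6],
])

# Document/category matching score: document-topic weights combined with topic-category weights.
doc_category = doc_topic @ topic_category

threshold = 0.3  # illustrative threshold
for d, scores in enumerate(doc_category):
    assigned = np.where(scores >= threshold)[0]
    print(f"document {d}: scores={np.round(scores, 2)}, categories={assigned.tolist()}")
```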

Design and Implementation of IVR Server Using VoiceXML (VoiceXML을 이용한 IVR 서버 설계 및 구현)

  • Lee, Chang-Ho;Jang, Won-Jo;Kang, Sun-Mee
    • Speech Sciences
    • /
    • v.9 no.3
    • /
    • pp.47-59
    • /
    • 2002
  • New and attractive services using the human voice and DTMF (dual-tone multi-frequency) techniques are expected nowadays, so that valuable information on the Internet can be obtained more easily. VoiceXML (Voice eXtensible Markup Language) is the right choice to make such services possible. In this paper, the design and implementation of an IVR (Interactive Voice Response) server using VoiceXML is described, which connects the Internet and the IVR server efficiently. The IVR server using VoiceXML is composed of two parts: VoiceXML document handling and VoiceXML execution. The scenario part of the IVR server corresponds to the VoiceXML documents, and the execution is performed by the VoiceXML execution part.
