• Title/Summary/Keyword: textual information

Application of Standard of Review for Safeguard Measure (세이프가드조치의 적법성 평가를 위한 심사기준의 적용에 관한 연구)

  • Lee, Eun-Sup;Kim, Sun-Ok
    • International Commerce and Information Review
    • /
    • v.9 no.2
    • /
    • pp.307-325
    • /
    • 2007
  • An examination of the standards of review applied by the WTO dispute settlement body in its decisions on safeguard measures shows that the Appellate Body offers no coherent guidance or theory as to the legitimacy of safeguard measures adopted by domestic authorities. It faults the lack of a reasoned and adequate explanation in national authorities' decisions to impose safeguard measures, yet its own explanation of the permissible role for safeguard measures could hardly be less instructive. The Appellate Body has consistently emphasized fidelity to text in its decisions, but that approach cannot work properly when the text is fundamentally deficient, in that neither Article XIX nor the Safeguards Agreement establishes a coherent foundation for safeguard measures, owing to their vague and abstract provisions. Without any coherent theory or guidance as to the legitimacy of safeguard measures, it would be absurd to expect WTO members to produce a reasoned and adequate explanation of how their safeguard measures comply with WTO rules. In the absence of a thorough renegotiation for the proper operation of the WTO safeguard system, which seems quite unlikely for the foreseeable future, perhaps the only way out of the current predicament is for the Appellate Body to lead a movement to establish a sensible common law of safeguards, drawing on extra-textual guidance, including the standards of review, about their proper role in the WTO safeguard mechanism.

Exploration of Fit Reviews and its Impact on Ratings of Rental Dresses

  • Shin, Eonyou;McKinney, Ellen
    • Fashion, Industry and Education
    • /
    • v.15 no.2
    • /
    • pp.1-10
    • /
    • 2017
  • The purposes of this study were to explore (1) how fit reviews differ among height groups and (2) how overall numerical ratings differ depending on height groups and different types of fit reviews. Content analysis was used to analyze systematically sampled online consumer reviews (OCRs) of formalwear dresses rented online. In part 1, 201 OCRs were analyzed to develop the coding scheme, which included three aspects of fit (physical, aesthetic, and functional), valence (negative, neutral, positive), and overall numerical rating. In part 2, 600 OCRs were coded and statistically analyzed. Differences in frequency were not found among height groups for any type of mention (negative, neutral, or positive) of the three aspects of fit in the OCRs. Differences in overall mean ratings were not found among height groups. Interestingly, the valence of each aspect of the fit reviews affected mean numeric ratings. This study is new in examining relationships among textual information (i.e., fit reviews), numerical information (i.e., numerical rating), and a reviewer characteristic (i.e., height). The results offer practical implications for e-tailers and marketers: they should pay attention to the three aspects of fit reviews and monitor garments with negative fit evaluations for lower ratings, and they may attempt to increase ratings by providing customers with recommendations for getting a better fit.
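
To make the two comparisons above concrete, a minimal pandas sketch of the part-2 analysis follows; the column names, valence labels, and ratings are invented placeholders, not data from the study.

```python
import pandas as pd

# Toy illustration of the part-2 analysis: overall ratings grouped by the
# valence of physical-fit mentions. All values below are invented.
ocrs = pd.DataFrame({
    "height_group": ["short", "average", "tall", "average", "short", "tall"],
    "physical_fit_valence": ["positive", "negative", "positive",
                             "neutral", "negative", "positive"],
    "rating": [5, 3, 5, 4, 2, 4],
})

# Mean overall rating by valence of the fit mention (the effect the study reports).
print(ocrs.groupby("physical_fit_valence")["rating"].mean())

# Frequency of each valence by height group (the comparison that showed
# no significant differences among height groups).
print(pd.crosstab(ocrs["height_group"], ocrs["physical_fit_valence"]))
```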

Clustering Representative Annotations for Image Browsing (이미지 브라우징 처리를 위한 전형적인 의미 주석 결합 방법)

  • Zhou, Tie-Hua;Wang, Ling;Lee, Yang-Koo;Ryu, Keun-Ho
    • Proceedings of the Korean Information Science Society Conference
    • /
    • 2010.06c
    • /
    • pp.62-65
    • /
    • 2010
  • Image annotations allow users to access a large image database with textual queries. However, since the surrounding text of Web images is generally noisy, an efficient image annotation and retrieval system, which requires effective image search techniques, is highly desired. Data mining techniques can be adopted to de-noise and extract salient terms or phrases from the search results. Clustering algorithms make it possible to represent the visual features of images with finite symbols. Annotation-based image search engines can obtain thousands of images for a given query, but their results also contain visual noise. In this paper, we present a new algorithm, Double-Circles, that allows a user to remove noisy results and characterize more precise representative annotations. We demonstrate our approach on images collected from Flickr image search. Experiments conducted on real Web images show the effectiveness and efficiency of the proposed model.
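
The abstract does not spell out the Double-Circles algorithm itself, so the sketch below only illustrates the general idea of filtering noisy annotations by how often they recur across retrieved images; the tags and frequency threshold are hypothetical.

```python
from collections import Counter

# Generic sketch (not the Double-Circles algorithm itself): keep annotations
# that recur across the retrieved images, drop rare, noisy ones.
image_tags = [
    {"beach", "sunset", "sea"},
    {"beach", "sea", "holiday"},
    {"beach", "sunset", "advertisement"},   # "advertisement" is noise here
    {"sea", "sunset", "beach"},
]

tag_freq = Counter(tag for tags in image_tags for tag in tags)

MIN_FREQ = 2                                # hypothetical noise threshold
representative = {t for t, c in tag_freq.items() if c >= MIN_FREQ}
print("representative annotations:", representative)
```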

Ethical Conducts in Qualitative Research Methodology :Participant Observation and Interview Process

  • KANG, Eungoo;HWANG, Hee-Joong
    • Journal of Research and Publication Ethics
    • /
    • v.2 no.2
    • /
    • pp.5-10
    • /
    • 2021
  • Purpose: Ethical behaviors become more salient when researchers conduct face-to-face interviews and observation with vulnerable groups or communities, which may be unable to express their emotions during the sessions. The present research aims to investigate the ethical behaviors required when conducting research, which have particular resonance given the in-depth nature of observation and interview data collection methods. Research design, data and methodology: The present research obtained non-numeric (textual) data from a review of prior literature to investigate ethical conduct in qualitative research. Non-numeric data differ from numeric data in how the data are collected, analyzed, and presented. It is important to formulate written questions and adapt them to what the method requires so that the researcher can understand the studied phenomenon. Results: Our findings show that, while conducting qualitative research, researchers must adhere to the following ethical conducts: upholding informed consent, maintaining confidentiality and privacy, adhering to the principle of beneficence, and practicing honesty and integrity. Each ethical conduct is discussed in detail to provide more information on how it affects the researcher and research participants. Conclusions: The current authors conclude that these five ethical conducts are important for eliciting extensive and rich information during qualitative research and may be applied in implementing research policies for researchers using observation and interview methods of data collection.

Ethical Issues on Environmental Health Study

  • Hyein WOO
    • Journal of Research and Publication Ethics
    • /
    • v.4 no.1
    • /
    • pp.9-14
    • /
    • 2023
  • Purpose: Adequate public input and participation in environmental health research must be provided to ensure accurate results from studies involving human exposure to potentially hazardous substances. By addressing the ethical issues associated with environmental health research, this study can help reduce risks both for individuals participating in studies and for whole communities affected by their findings. Research design, data and methodology: The current research followed the approach of qualitative textual research, searching and exploring adequate prior resources such as books and peer-reviewed journal articles so that the author could screen previous works suitable for content analysis. Results: The current research identified four ethical issues for improving environmental health studies, as follows: (1) Lack of Guidance for Collecting and Utilizing Data Ethically, (2) Insufficient Consideration Given to Vulnerable Populations When Conducting Studies, (3) Unclear Standards for Protecting the Privacy of Participants' Personal Information, and (4) Conducting Socially and Religiously Acceptable Research in Various Communities. Conclusions: This research concludes that future researchers should consider implementing anonymization techniques where possible so that findings remain accessible while the risk posed by disclosing identifying information is minimized during the analysis and publication stages.

Extracting User-Specific Advertising Keywords Based on Textual Data Mining from KakaoTalk (카카오톡에서의 텍스트 데이터 마이닝 기반의 사용자별 적합 광고 키워드 도출 )

  • Yerim Jeon;Dayeong So;Jimin Lee;Eunjin (Jinny) Jo;Jihoon Moon
    • Proceedings of the Korea Information Processing Society Conference
    • /
    • 2023.05a
    • /
    • pp.368-369
    • /
    • 2023
  • Conversation-data-based advertisement recommendation is drawing attention as an important technology in advertising marketing for delivering customer-tailored advertisements and maximizing marketing effectiveness. In this paper, we analyze conversation content based on text data generated in the chat window of KakaoTalk, a mobile instant messenger, and propose appropriate advertising keywords for each conversation topic. To this end, we divide the conversation content by topic into beauty, food and beverages, and commerce; perform text preprocessing using KoNLPy's Okt; extract frequency counts for each keyword; and present word clouds. In addition, we subdivide the conversation topics based on Latent Dirichlet Allocation (LDA) and analyze the conversation keywords for each topic through labeling. Experimental results show that the conversation topics could be divided into online shopping, hair, beauty care, and food, and that by using Word2Vec to derive keywords similar to specific words among the top keywords of each topic, appropriate advertising keywords could be presented.
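
A minimal sketch of the described pipeline (noun extraction with KoNLPy's Okt, keyword frequencies, LDA topics, and Word2Vec neighbors) might look as follows; the sample messages, topic count, and query word are placeholders, and KoNLPy and gensim are assumed to be available.

```python
from collections import Counter
from konlpy.tag import Okt                      # Korean morphological analyzer
from gensim import corpora
from gensim.models import LdaModel, Word2Vec

# Placeholder chat messages; the study used real KakaoTalk conversation logs.
messages = [
    "오늘 머리 자르고 염색했는데 미용실 추천해줘",
    "점심에 커피랑 샌드위치 먹었어",
    "온라인 쇼핑으로 원피스 주문했어",
    "저녁에 파스타 만들어 먹을까",
]

okt = Okt()
tokenized = [okt.nouns(msg) for msg in messages]   # preprocessing: noun extraction

# Keyword frequencies (the basis for the word clouds mentioned above).
print(Counter(word for doc in tokenized for word in doc).most_common(5))

# Topic modeling with LDA.
dictionary = corpora.Dictionary(tokenized)
corpus = [dictionary.doc2bow(doc) for doc in tokenized]
lda = LdaModel(corpus=corpus, id2word=dictionary, num_topics=3, passes=10)
print(lda.print_topics(num_words=3))

# Word2Vec to find keywords similar to a topic's top word ("커피" is a placeholder).
w2v = Word2Vec(sentences=tokenized, vector_size=50, min_count=1, window=3)
print(w2v.wv.most_similar("커피", topn=3))
```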

A Method for Evaluating News Value based on Supply and Demand of Information Using Text Analysis (텍스트 분석을 활용한 정보의 수요 공급 기반 뉴스 가치 평가 방안)

  • Lee, Donghoon;Choi, Hochang;Kim, Namgyu
    • Journal of Intelligence and Information Systems
    • /
    • v.22 no.4
    • /
    • pp.45-67
    • /
    • 2016
  • Given the recent development of smart devices, users are producing, sharing, and acquiring a variety of information via the Internet and social network services (SNSs). Because users tend to use multiple media simultaneously according to their goals and preferences, domestic SNS users use around 2.09 media concurrently on average. Since the information provided by such media is usually textually represented, recent studies have been actively conducting textual analysis in order to understand users more deeply. Earlier studies using textual analysis focused on analyzing a document's contents without substantive consideration of the diverse characteristics of the source medium. However, current studies argue that analytical and interpretive approaches should be applied differently according to the characteristics of a document's source. Documents can be classified into the following types: informative documents for delivering information, expressive documents for expressing emotions and aesthetics, operational documents for inducing the recipient's behavior, and audiovisual media documents for supplementing the above three functions through images and music. Further, documents can be classified according to their contents, which comprise facts, concepts, procedures, principles, rules, stories, opinions, and descriptions. Documents have unique characteristics according to the source media by which they are distributed. In terms of newspapers, only highly trained people tend to write articles for public dissemination. In contrast, with SNSs, various types of users can freely write any message and such messages are distributed in an unpredictable way. Again, in the case of newspapers, each article exists independently and does not tend to have any relation to other articles. However, messages (original tweets) on Twitter, for example, are highly organized and regularly duplicated and repeated through replies and retweets. There have been many studies focusing on the different characteristics between newspapers and SNSs. However, it is difficult to find a study that focuses on the difference between the two media from the perspective of supply and demand. We can regard the articles of newspapers as a kind of information supply, whereas messages on various SNSs represent a demand for information. By investigating traditional newspapers and SNSs from the perspective of supply and demand of information, we can explore and explain the information dilemma more clearly. For example, there may be superfluous issues that are heavily reported in newspaper articles despite the fact that users seldom have much interest in these issues. Such overproduced information is not only a waste of media resources but also makes it difficult to find valuable, in-demand information. Further, some issues that are covered by only a few newspapers may be of high interest to SNS users. To alleviate the deleterious effects of information asymmetries, it is necessary to analyze the supply and demand of each information source and, accordingly, provide information flexibly. Such an approach would allow the value of information to be explored and approximated on the basis of the supply-demand balance. Conceptually, this is very similar to the price of goods or services being determined by the supply-demand relationship. Adopting this concept, media companies could focus on the production of highly in-demand issues that are in short supply. 
In this study, we selected Internet news sites and Twitter as representative media for investigating information supply and demand, respectively. We present the notion of a News Value Index (NVI), which evaluates the value of news information in terms of the magnitude of Twitter messages associated with it. In addition, we visualize the change in information value over time using the NVI. We conducted an analysis using 387,014 news articles and 31,674,795 Twitter messages. The analysis revealed interesting patterns: most issues show an NVI lower than the average across all issues, whereas a few issues show a steadily higher NVI than the average.
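
The abstract does not give the exact form of the NVI, so the sketch below simply treats the per-issue ratio of Twitter messages (demand) to news articles (supply) as the index; the issue labels and counts are invented.

```python
from collections import Counter

# Invented per-issue counts standing in for the 387,014 articles and
# 31,674,795 tweets analyzed in the study.
article_issues = ["election"] * 30 + ["economy"] * 5 + ["sports"] * 15
tweet_issues = ["election"] * 60 + ["economy"] * 90 + ["sports"] * 10

supply = Counter(article_issues)   # news articles per issue (information supply)
demand = Counter(tweet_issues)     # Twitter messages per issue (information demand)

def news_value_index(issue: str) -> float:
    """Assumed form of the NVI: demand relative to supply for one issue."""
    return demand[issue] / supply[issue] if supply[issue] else float("inf")

average_nvi = sum(demand.values()) / sum(supply.values())
for issue in supply:
    nvi = news_value_index(issue)
    flag = "under-supplied" if nvi > average_nvi else "over-supplied"
    print(f"{issue}: NVI={nvi:.2f} ({flag})")
```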

Comparison of External Information Performance Predicting Subcellular Localization of Proteins (단백질의 세포내 위치를 예측하기 위한 외부정보의 성능 비교)

  • Chi, Sang-Mun
    • Journal of KIISE:Software and Applications
    • /
    • v.37 no.11
    • /
    • pp.803-811
    • /
    • 2010
  • Since protein subcellular location and biological function are highly correlated, the prediction of protein subcellular localization can provide information about the function of a protein. In order to enhance prediction performance, external information other than amino acid sequence information has been actively exploited in many studies. This paper compares the prediction capabilities contained in amino acid sequence similarity, protein profiles, gene ontology, motifs, and textual information. In experiments using the PLOC dataset, whose proteins share less than 80% sequence similarity, sequence similarity information and gene ontology prove effective, achieving a classification accuracy of 94.8%. In experiments using the BaCelLo IDS dataset, with low sequence similarity of less than 30%, gene ontology gives the best prediction accuracies: 93.2% for animals and 86.6% for fungi.
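
As an illustration of using one of the external-information sources, the sketch below encodes proteins by their Gene Ontology term sets and trains a simple classifier; the GO identifiers, location labels, and classifier choice are placeholders rather than the paper's actual setup.

```python
from sklearn.preprocessing import MultiLabelBinarizer
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

# Toy example: each protein is represented by its set of Gene Ontology terms
# (one of the external-information sources compared in the paper). The GO
# identifiers and location labels are placeholders, not data from the study.
go_terms = [
    {"GO:0005634", "GO:0003677"},   # nucleus-associated terms
    {"GO:0005739", "GO:0016491"},   # mitochondrion-associated terms
    {"GO:0005634"},
    {"GO:0005739"},
]
locations = ["nucleus", "mitochondrion", "nucleus", "mitochondrion"]

X = MultiLabelBinarizer().fit_transform(go_terms)   # binary GO-term feature vectors
clf = SVC(kernel="linear")
print("CV accuracy:", cross_val_score(clf, X, locations, cv=2).mean())
```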

A Method of Context based Free-form Annotation in XML Documents (XML문서 환경에서의 내용기반 자유형 Annotation 생성 기법)

  • 손원성;김재경;임순범;최윤철
    • Journal of KIISE:Software and Applications
    • /
    • v.30 no.9
    • /
    • pp.850-861
    • /
    • 2003
  • When creating annotation information in a free-form environment, ambiguity arises during the analysis stage between the geometric information and the annotations. This ambiguity needs to be resolved so that annotation information can be created accurately in a free-form annotation environment. This paper identifies and analyzes the ambiguities, specifying methods tailored to each of the various contexts that can cause conflicts with free-form marking in an XML-based annotation environment. The proposed general method is based on context, which includes various textual and structural information relating the free-form marking to the annotations themselves. In this paper, the context information used is expressed in an XML-based DTD. The results are printed and shared through a system specifically implemented for this study. The results from the implementation of the proposed method show that the annotated areas included in the free-form marking information are identified more accurately, enabling more accurate exchange of results among multiple users in a heterogeneous document environment.
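
A minimal sketch of a context-based anchor, combining structural context (an element path) with textual context (a quoted span), might look like the following; the element names, attributes, and fields are assumptions for illustration, not the paper's actual DTD.

```python
import xml.etree.ElementTree as ET

# Minimal illustration of a context-based anchor: the annotation records both
# structural context (an element path) and textual context (a quoted span).
# Element names and fields here are assumptions, not the paper's DTD.
doc = ET.fromstring(
    "<doc><sec id='s1'><p>Free-form marking is resolved using context.</p></sec></doc>"
)

annotation = {
    "path": ".//sec[@id='s1']/p",          # structural context
    "quote": "resolved using context",     # textual context inside the element
    "note": "clarify which context information is used",
}

# Re-anchoring: locate the element by its path, then verify the quoted text
# so the free-form marking maps to the intended annotated area.
target = doc.find(annotation["path"])
assert target is not None and annotation["quote"] in target.text
print("annotation re-anchored at:", annotation["path"])
```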

Annotation Modeling and System Implementation for Hand-held Environment (휴대용 단말기 환경을 위한 Annotation 모델링 및 시스템 구현)

  • Sohn, Won-Sung
    • Journal of The Korean Association of Information Education
    • /
    • v.10 no.2
    • /
    • pp.219-226
    • /
    • 2006
  • For the accurate creation of annotation information in a free-form annotation environment, the ambiguity that arises in the analysis stage between the geometric information and the annotations needs to be resolved. Therefore, this paper identifies, analyzes, and presents solutions for the ambiguity that can occur between free-form marking and various contexts in an XML-based annotation environment. The proposed method is based on context, which includes various textual and structural information relating the free-form marking to the annotated part. The results show that the annotated areas included in the free-form marking information are identified more accurately, enabling more accurate exchange of results among multiple users in a heterogeneous document environment. This study can be effectively applied to e-Learning, cyber-classes, and IETM.
