• Title/Summary/Keyword: Web Search Service (웹 검색 서비스)


Construction of Internet Public Library Asia (아시아 인터넷 공공 도서관(Internet Public Library Asia) 구축에 관한 연구)

  • 이원숙 et al. (with four Japanese co-authors)
    • Journal of the Korean BIBLIA Society for Library and Information Science / v.13 no.2 / pp.59-73 / 2002
  • Libraries, not only research libraries but also public libraries, have been fundamentally affected by the immense spread of the Internet and the World Wide Web. Many public libraries have their own Web pages, through which they provide both new and conventional services. There are also web sites that provide library-like services. This paper presents an experimental project named Internet Public Library Asia, which provides, in multiple languages, information about resources published in Chinese, Japanese, and Korean. The paper first gives an overview of how traditional public libraries have been affected by the Internet. It then describes several aspects from the viewpoint of crucial library functions on the Internet and from the viewpoint of Asian resources and users. The paper then proposes a model for serving information about valuable resources published in multiple Asian languages, and presents the metadata schema and several software tools developed for IPL-Asia. The name IPL is borrowed from the Internet Public Library based at the University of Michigan, since the project is, in part, a collaborative activity with the IPL in Michigan. The metadata schema is defined based on both Dublin Core and IEEE LOM and adapted for parallel description in the four languages, i.e., Chinese, Japanese, Korean, and English. The software tools provide functions that support collaboration among the people engaged in developing the metadata database and in metadata editing. These tools have been developed based on XML technologies.

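By way of illustration only, the sketch below builds a tiny parallel-language record in the spirit of the Dublin Core-based schema described above; the element names, namespace handling, and sample titles are assumptions for the example, not the actual IPL-Asia schema.

```python
# Illustrative sketch only: a parallel-language metadata record in the spirit of
# a Dublin Core-based schema. Element names and the namespace usage are
# assumptions, not the actual IPL-Asia schema.
import xml.etree.ElementTree as ET

DC = "http://purl.org/dc/elements/1.1/"
ET.register_namespace("dc", DC)

record = ET.Element("record")
# One <dc:title> per language, distinguished by xml:lang, so the same resource
# can be described in parallel in Chinese, Japanese, Korean and English.
for lang, title in [("ko", "아시아 인터넷 공공 도서관"),
                    ("en", "Internet Public Library Asia")]:
    el = ET.SubElement(record, f"{{{DC}}}title")
    el.set("{http://www.w3.org/XML/1998/namespace}lang", lang)
    el.text = title

print(ET.tostring(record, encoding="unicode"))
```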

Reliable Image-Text Fusion CAPTCHA to Improve User-Friendliness and Efficiency (사용자 편의성과 효율성을 증진하기 위한 신뢰도 높은 이미지-텍스트 융합 CAPTCHA)

  • Moon, Kwang-Ho;Kim, Yoo-Sung
    • The KIPS Transactions: Part C / v.17C no.1 / pp.27-36 / 2010
  • In Web registration pages and online polling applications, a CAPTCHA (Completely Automated Public Turing test to tell Computers and Humans Apart) is used to distinguish human users from automated programs. Text-based CAPTCHAs, in which distorted text is used, have been widely adopted in many popular Web sites. However, because advanced optical character recognition techniques can recognize the distorted text, their reliability has become low. Image-based CAPTCHAs have been proposed to improve on the reliability of text-based CAPTCHAs, but these systems are also known to have drawbacks. First, image-based CAPTCHA systems with a small number of image files in their image dictionary are not very reliable, since an attacker can recognize the images through repeated executions of machine learning programs. Second, users may feel uncomfortable because they have to retry CAPTCHA tests repeatedly when they fail to input the correct keyword. Third, some image-based CAPTCHAs incur high communication cost because several image files must be sent for one CAPTCHA. To solve these problems, this paper proposes a new CAPTCHA based on both image and text. In this system, an image and keywords are integrated into one CAPTCHA image to give the user a hint for the answer keyword. The proposed CAPTCHA helps users easily input the answer keyword with the hint in the fused image. The proposed system also reduces communication cost, since it uses only one fused image file per CAPTCHA. To improve the reliability of the image-text fusion CAPTCHA, we also propose a method for dynamically building a large image dictionary by gathering a huge number of images from the Internet, with a filtering phase that preserves the correctness of the CAPTCHA images. Experiments show that the proposed image-text fusion CAPTCHA provides users with more convenience and higher reliability than image-based CAPTCHAs.
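A minimal sketch of the fusion idea, assuming Pillow is available: a hint image and a jittered answer keyword are drawn into a single output file, so only one image needs to be sent per CAPTCHA. The function name, jitter parameters, and file names are illustrative, not the authors' implementation.

```python
# Rough sketch (not the authors' implementation): fuse a hint image and a
# distorted answer keyword into a single CAPTCHA image using Pillow.
from PIL import Image, ImageDraw, ImageFont
import random

def fuse_captcha(image_path: str, keyword: str, out_path: str) -> None:
    base = Image.open(image_path).convert("RGB")
    draw = ImageDraw.Draw(base)
    font = ImageFont.load_default()
    x, y = 10, base.height - 30
    for ch in keyword:
        # Jitter each character vertically so simple OCR has a harder time.
        draw.text((x, y + random.randint(-5, 5)), ch, fill=(255, 255, 0), font=font)
        x += 14
    # A single fused file is sent, instead of several separate image files.
    base.save(out_path)

# Example (hypothetical file names):
# fuse_captcha("cat.jpg", "고양이", "captcha.png")
```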

Development of Intelligent Learning Tool based on Human eyeball Movement Analysis for Improving Foreign Language Competence (외국어 능력 향상을 위한 사용자 안구운동 분석 기반의 지능형 학습도구 개발)

  • Shin, Jihye;Jang, Young-Min;Kim, Sangwook;Mallipeddi, Rammohan;Bae, Jungok;Choi, Sungmook;Lee, Minho
    • Journal of the Institute of Electronics and Information Engineers / v.50 no.11 / pp.153-161 / 2013
  • Recently, there has been a tremendous increase in the availability of educational materials for foreign language learning, including a growing amount of electronically mediated material. However, conventional educational content developed using computer technology has typically provided one-way information, which is not the most helpful for users. Serving the user's needs requires additional off-line analysis to diagnose an individual user's learning. To improve the user's comprehension of texts written in a foreign language, we propose an intelligent learning tool based on the analysis of the user's eyeball movements, which diagnoses and improves foreign language reading ability by providing supplementary aid exactly when it is needed. To determine the user's learning state, we correlate eye movements with findings from research in cognitive psychology and neurophysiology. Based on this, the learning tool can distinguish whether users know or do not know a word while they are reading foreign language sentences. If the learning tool judges a word to be unknown, it immediately provides the student with the meaning of the word by extracting it from an on-line dictionary. The proposed model provides a tool that empowers independent learning and makes access to the meanings of unknown words automatic. In this way, it can enhance a user's reading achievement as well as satisfaction with text comprehension in a foreign language.
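A simplified sketch of the word-level decision, under the assumption that the eye tracker reports per-word fixation durations: words whose accumulated fixation time exceeds a threshold are treated as unknown and looked up. The threshold value and the tiny glossary standing in for the on-line dictionary are assumptions, not the authors' model.

```python
# Simplified sketch, not the authors' model: flag words whose total fixation
# time exceeds a threshold as "unknown" and look them up in a local glossary.
# The threshold and glossary are illustrative assumptions.

FIXATION_THRESHOLD_MS = 600
glossary = {"tremendous": "엄청난", "convenience": "편의"}

def unknown_words(fixations):
    """fixations: list of (word, fixation_duration_ms) from an eye tracker."""
    totals = {}
    for word, ms in fixations:
        totals[word] = totals.get(word, 0) + ms
    return {w: glossary.get(w, "(not in glossary)")
            for w, total in totals.items() if total >= FIXATION_THRESHOLD_MS}

sample = [("tremendous", 420), ("tremendous", 310), ("trend", 180)]
print(unknown_words(sample))  # -> {'tremendous': '엄청난'}
```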

Design and Implementation of Blockchain Network Based on Domain Name System (블록체인 네트워크 기반의 도메인 네임 시스템 설계 및 구현)

  • Heo, Jae-Wook;Kim, Jeong-Ho;Jun, Moon-Seog
    • Journal of the Korea Academia-Industrial cooperation Society / v.20 no.5 / pp.36-46 / 2019
  • The dramatic increase in the number of hosts connected to the Internet led to the introduction of the Domain Name System (DNS) in 1984. DNS is now a key service for all Internet users, allowing them to use convenient character-based addresses without memorizing complex numeric IP addresses. However, relative to its importance, DNS still has many problems, such as authority allocation issues, disputes over public registration, security vulnerabilities including DNS cache poisoning, DNS spoofing, man-in-the-middle attacks, and DNS amplification attacks, and the need for many domain names in the age of hyper-connected networks. In this paper, to effectively address these problems of existing DNS, we propose a method of implementing DNS using blockchain, a distributed ledger technology, and implement it on an Ethereum-based platform. In addition, a qualitative comparative evaluation against existing domain name registration and domain name servers was conducted, along with security assessments of the proposed system with respect to the security problems of existing DNS. In conclusion, it was shown that DNS services can be provided with high security and high efficiency using blockchain.
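As a conceptual model of the registry logic such a blockchain-based DNS typically enforces (first-come-first-served registration, owner-only updates), the plain-Python sketch below mirrors the idea without any Ethereum specifics; the class and method names are assumptions, not the authors' contract.

```python
# Conceptual model only (plain Python, not the authors' Ethereum contract):
# the core rules a blockchain name registry usually enforces -- first come,
# first served registration and owner-only updates.

class NameRegistry:
    def __init__(self):
        self._records = {}  # name -> (owner_address, ip_address)

    def register(self, name: str, owner: str, ip: str) -> None:
        if name in self._records:
            raise ValueError("name already registered")
        self._records[name] = (owner, ip)

    def update(self, name: str, caller: str, new_ip: str) -> None:
        owner, _ = self._records[name]
        if caller != owner:
            raise PermissionError("only the owner may update this name")
        self._records[name] = (owner, new_ip)

    def resolve(self, name: str) -> str:
        return self._records[name][1]

reg = NameRegistry()
reg.register("example.kr", owner="0xabc", ip="203.0.113.7")
print(reg.resolve("example.kr"))  # 203.0.113.7
```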

A Study on Automated Fake News Detection Using Verification Articles (검증 자료를 활용한 가짜뉴스 탐지 자동화 연구)

  • Han, Yoon-Jin;Kim, Geun-Hyung
    • KIPS Transactions on Software and Data Engineering / v.10 no.12 / pp.569-578 / 2021
  • Thanks to the development of the web, today we can easily access online news via various media. As easy as it is to access online news, we often encounter fake news that pretends to be true. As fake news has become a global problem, fact-checking services are now provided domestically as well. However, these rely on manual detection by experts, and research on technologies that automate fake news detection is being actively conducted. In existing research, detection is based on the contextual characteristics of an article and on comparison between the title and the body. However, such approaches are limited, since detection becomes difficult when the manipulation is highly precise. Therefore, this study suggests using verification articles to decide whether a news item is genuine, so that the decision is not affected by manipulation of the article itself. In addition, to improve the precision of fake news detection, the study adds a step that summarizes the subject article and the verification article with a summarization model. To validate the suggested algorithm, this study evaluated the document summarization method, the retrieval method for verification articles, and the precision of fake news detection of the final algorithm. The algorithm suggested in this study can help identify the truth of an article before it is published and made available online via various media.
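A rough sketch of the comparison step, assuming scikit-learn: the subject and verification articles are first summarized (here by a trivial leading-sentences stub standing in for the paper's summarization model) and then compared with TF-IDF cosine similarity. The function names and threshold are assumptions, not the authors' pipeline.

```python
# Illustrative pipeline sketch, not the authors' model: compare a (summarized)
# subject article against a (summarized) verification article with TF-IDF
# cosine similarity. The summarizer stub and threshold are assumptions.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def summarize(text: str, max_sentences: int = 3) -> str:
    # Placeholder for the paper's summarization model: keep leading sentences.
    return " ".join(text.split(". ")[:max_sentences])

def is_likely_fake(subject: str, verification: str, threshold: float = 0.3) -> bool:
    docs = [summarize(subject), summarize(verification)]
    tfidf = TfidfVectorizer().fit_transform(docs)
    similarity = cosine_similarity(tfidf[0], tfidf[1])[0, 0]
    return similarity < threshold  # low agreement with the verifying article

print(is_likely_fake("The moon is made of cheese. Experts agree.",
                     "The moon is composed of rock and dust, studies show."))
```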

Twitter Issue Tracking System by Topic Modeling Techniques (토픽 모델링을 이용한 트위터 이슈 트래킹 시스템)

  • Bae, Jung-Hwan;Han, Nam-Gi;Song, Min
    • Journal of Intelligence and Information Systems / v.20 no.2 / pp.109-122 / 2014
  • People nowadays create a tremendous amount of data on Social Network Services (SNS). In particular, the incorporation of SNS into mobile devices has resulted in massive amounts of data generation, thereby greatly influencing society. This is an unmatched phenomenon in history, and now we live in the Age of Big Data. SNS data satisfies the definition of Big Data in that the amount of data (volume), the speed of data input and output (velocity), and the variety of data types (variety) are all met. If someone intends to discover the trend of an issue in SNS Big Data, this information can be used as an important new source for the creation of new value, because it covers the whole of society. In this study, a Twitter Issue Tracking System (TITS) is designed and built to meet the need for analyzing SNS Big Data. TITS extracts issues from Twitter texts and visualizes them on the web. The proposed system provides the following four functions: (1) provide the topic keyword set that corresponds to the daily ranking; (2) visualize the daily time series graph of a topic over the duration of a month; (3) provide the importance of a topic through a treemap based on the score system and frequency; (4) visualize the daily time series graph of keywords by searching for a keyword. The present study analyzes the Big Data generated by SNS in real time. SNS Big Data analysis requires various natural language processing techniques, including the removal of stop words and noun extraction, to process various unrefined forms of unstructured data. In addition, such analysis requires the latest big data technology to rapidly process a large amount of real-time data, such as the Hadoop distributed system or NoSQL, an alternative to relational databases. We built TITS on Hadoop to optimize the processing of big data, because Hadoop is designed to scale from single-node computing up to thousands of machines. Furthermore, we use MongoDB, which is classified as a NoSQL database. MongoDB is an open source, document-oriented database that provides high performance, high availability, and automatic scaling. Unlike existing relational databases, MongoDB has no schema or tables, and its most important goals are data accessibility and data processing performance. In the Age of Big Data, the visualization of Big Data is attractive to the Big Data community because it helps analysts examine such data easily and clearly. Therefore, TITS uses the d3.js library as a visualization tool. This library is designed for creating Data Driven Documents that bind the document object model (DOM) to data; interaction with the data is easy and useful for managing a real-time data stream with smooth animation. In addition, TITS uses Bootstrap, a set of pre-configured style sheets and JavaScript plug-ins, to build the web system. The TITS Graphical User Interface (GUI) is designed using these libraries, and it is capable of detecting issues on Twitter in an easy and intuitive manner. The proposed work demonstrates the superiority of our issue detection techniques by matching detected issues with corresponding online news articles. The contributions of the present study are threefold. First, we suggest an alternative approach to real-time big data analysis, which has become an extremely important issue. Second, we apply a topic modeling technique that is used in various research areas, including Library and Information Science (LIS); based on this, we can confirm the utility of storytelling and time series analysis. Third, we develop a web-based system and make it available for the real-time discovery of topics. The present study conducted experiments with nearly 150 million tweets collected in Korea during March 2013.
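A minimal sketch of the topic-extraction step alone, assuming gensim's LDA implementation; the Hadoop/MongoDB storage layer and d3.js visualization described above are omitted, and the naive whitespace tokenization and toy tweets are assumptions for brevity.

```python
# Minimal sketch of the topic-extraction step only (gensim LDA); preprocessing,
# Hadoop/MongoDB storage and d3.js visualization from the paper are omitted.
from gensim import corpora, models

tweets = [
    "subway fare hike announced in seoul",
    "commuters angry about the subway fare increase",
    "new smartphone model released this week",
]
tokens = [t.split() for t in tweets]          # naive whitespace tokenization
dictionary = corpora.Dictionary(tokens)
corpus = [dictionary.doc2bow(doc) for doc in tokens]

lda = models.LdaModel(corpus, num_topics=2, id2word=dictionary, passes=10)
for topic_id, keywords in lda.print_topics(num_words=4):
    print(topic_id, keywords)  # ranked topic keyword sets
```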

A Collaborative Video Annotation and Browsing System using Linked Data (링크드 데이터를 이용한 협업적 비디오 어노테이션 및 브라우징 시스템)

  • Lee, Yeon-Ho;Oh, Kyeong-Jin;Sean, Vi-Sal;Jo, Geun-Sik
    • Journal of Intelligence and Information Systems / v.17 no.3 / pp.203-219 / 2011
  • Previously, common users just wanted to watch video content without any specific requirements or purposes. Today, however, while watching a video, users attempt to know and discover more about the things that appear in it. Therefore, the demand for finding multimedia or browsing information about objects of interest is spreading with the increasing use of multimedia such as video, which is available not only on Internet-capable devices such as computers but also on smart TVs and smartphones. In order to meet users' requirements, labor-intensive annotation of objects in video content is inevitable, and for this reason many researchers have actively studied methods of annotating the objects that appear in video. In keyword-based annotation, related information about an object appearing in the video content is added directly, and annotation data including all related information about the object must be managed individually; users have to input all information related to the object themselves. Consequently, when a user browses for information related to an object, the user can only find the limited resources that exist in the annotated data. Also, annotating objects requires a huge workload from users. To reduce this workload and minimize the work involved in annotation, existing object-based annotation approaches attempt automatic annotation using computer vision techniques such as object detection, recognition, and tracking. With such techniques, however, the wide variety of objects that appear in video content must all be detected and recognized, which remains a difficult problem for automated annotation. To overcome these difficulties, we propose a system that consists of two modules. The first is an annotation module that enables many annotators to collaboratively annotate the objects in video content and access semantic data through Linked Data. Annotation data managed by the annotation server is represented using an ontology so that the information can easily be shared and extended. Since the annotation data does not include all the relevant information about an object, objects that appear in the video content are simply linked to existing objects in Linked Data in order to obtain all related information. In other words, annotation data containing only a URI and metadata such as position, time, and size is stored on the annotation server; when a user needs other related information about the object, that information is retrieved from Linked Data through the relevant URI. The second module enables viewers to browse interesting information about an object using the annotation data collaboratively generated by many users while watching the video. With this system, a query is automatically generated through simple user interaction, the related information is retrieved from Linked Data, and the additional information about the object is offered to the user. In the future Semantic Web environment, our proposed system is expected to establish a better video content service environment by offering users relevant information about the objects that appear on the screen of any Internet-capable device, such as a PC, smart TV, or smartphone.
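A small sketch of the browsing idea, assuming the SPARQLWrapper library and DBpedia as the Linked Data source: an annotation holds only a URI plus position/time metadata, and related information is fetched on demand. The annotation fields, endpoint, and example URI are assumptions, not the authors' system.

```python
# Sketch of the browsing idea only (not the authors' system): an annotation
# stores just a URI plus position/time metadata, and related facts are pulled
# from Linked Data on demand. The annotation fields are assumptions.
from SPARQLWrapper import SPARQLWrapper, JSON

annotation = {
    "uri": "http://dbpedia.org/resource/Eiffel_Tower",
    "time": 12.5, "x": 320, "y": 180, "width": 80, "height": 120,
}

sparql = SPARQLWrapper("https://dbpedia.org/sparql")
sparql.setQuery(f"""
    SELECT ?abstract WHERE {{
      <{annotation['uri']}> <http://dbpedia.org/ontology/abstract> ?abstract .
      FILTER (lang(?abstract) = 'en')
    }} LIMIT 1
""")
sparql.setReturnFormat(JSON)
results = sparql.query().convert()
for row in results["results"]["bindings"]:
    print(row["abstract"]["value"])  # shown to the viewer as related information
```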