• Title/Summary/Keyword: PAGES

1,058 search results

Empirical Analysis on the Effect of Design Pattern of Web Page, Perceived Risk and Media Richness to Customer Satisfaction (콘텐츠 제작방식, 지각된 위험, 미디어 풍부성이 고객만족에 미치는 영향 분석)

  • Park, Bong-Won;Lee, Jung-Mann;Lee, Jong-Won
    • The Journal of the Korea Contents Association
    • /
    • v.11 no.6
    • /
    • pp.385-396
    • /
    • 2011
  • Internet web pages can be classified into three major types: text only, text with images, and text with videos. The purpose of this paper is to analyze how customers perceive and respond to these web page design patterns from the perspectives of perceived risk and media richness. Additionally, we examine the extent to which these factors affect customer satisfaction. Analyses of perceived risk revealed that customers feel less personal risk, including performance, psychological, and time/convenience risk, when using text-image and text-video web pages than when using text-only web pages. Customers also rated image-text and video-text web pages higher in terms of symbolism and social presence, two dimensions of media richness, than text-only web pages. Finally, we showed that personal risk and text-only web pages negatively affect customer satisfaction, while symbolism and social presence positively affect it. This study therefore suggests a clue as to why video-based web content did not grow as many people expected.

Effective Web Crawling Orderings from Graph Search Techniques (그래프 탐색 기법을 이용한 효율적인 웹 크롤링 방법들)

  • Kim, Jin-Il;Kwon, Yoo-Jin;Kim, Jin-Wook;Kim, Sung-Ryul;Park, Kun-Soo
    • Journal of KIISE:Computer Systems and Theory
    • /
    • v.37 no.1
    • /
    • pp.27-34
    • /
    • 2010
  • Web crawlers are fundamental programs that iteratively download web pages by following links, starting from a small set of initial URLs. Several web crawling orderings have previously been proposed to crawl popular web pages in preference to other pages, but some graph search techniques whose characteristics and efficient implementations have been studied in the graph theory community have not yet been applied to web crawling orderings. In this paper we consider various graph search techniques, including lexicographic breadth-first search, lexicographic depth-first search and maximum cardinality search as well as the well-known breadth-first search and depth-first search, and then choose effective web crawling orderings that have linear time complexity and crawl popular pages early. In particular, for maximum cardinality search and lexicographic breadth-first search, whose implementations are non-trivial, we propose linear-time web crawling orderings by applying the partition refinement method. Experimental results show that maximum cardinality search has desirable properties in both time complexity and the quality of crawled pages.
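The orderings compared in the abstract above can be illustrated with a small sketch. The toy graph, seed set, and the naive O(V²) selection loop below are illustrative assumptions only; the paper's contribution is linear-time implementations via partition refinement, which this sketch does not reproduce.

```python
from collections import deque

def bfs_order(graph, seeds):
    """Breadth-first crawling order from a set of seed URLs."""
    order, seen, queue = [], set(seeds), deque(seeds)
    while queue:
        page = queue.popleft()
        order.append(page)
        for link in graph.get(page, []):
            if link not in seen:
                seen.add(link)
                queue.append(link)
    return order

def mcs_order(graph, seeds):
    """Maximum cardinality search: always crawl next the discovered page
    with the most already-crawled neighbours.  Links are treated as
    undirected here for simplicity; this naive loop is O(V^2), whereas
    the paper obtains linear time via partition refinement."""
    nbrs = {}
    for src, dsts in graph.items():
        for dst in dsts:
            nbrs.setdefault(src, set()).add(dst)
            nbrs.setdefault(dst, set()).add(src)
    crawled, order, frontier = set(), [], set(seeds)
    while frontier:
        page = max(frontier, key=lambda p: len(nbrs.get(p, set()) & crawled))
        frontier.remove(page)
        crawled.add(page)
        order.append(page)
        frontier |= nbrs.get(page, set()) - crawled
    return order
```

On a real crawl the graph is discovered incrementally, so a production crawler would interleave fetching with this ordering decision rather than precomputing it.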

Coupling Metrics for Web Pages Clustering in Restructuring of Web Applications (웹 어플리케이션 재구조화를 위한 클러스터링에 사용되는 결합도 메트릭)

  • Lee, En-Joo;Park, Gen-Duk
    • Journal of the Korea Society of Computer and Information
    • /
    • v.12 no.3
    • /
    • pp.75-84
    • /
    • 2007
  • Due to the increasing complexity and shorter life cycles of web applications, web applications need to be restructured to improve flexibility and extensibility. Recently, approaches have been used in which systems are understood and restructured through clustering techniques. In this paper, coupling metrics are proposed for clustering web pages more effectively. To achieve this, web application models are defined that include the relationships between web pages and the numbers of parameters passed between them. Based on these models, coupling metrics are defined that consider both direct and indirect coupling strength. The more direct relations two pages have and the more parameters they pass, the stronger their direct coupling is. The higher the indirect coupling strength between two pages is, the more similar their patterns of relationships with other web pages are. We verify the suggested metrics against a well-known verification framework and provide a case study to show that our metrics complement some existing metrics.
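As a hedged illustration of the two kinds of coupling the abstract describes (the paper's actual metric definitions and weights are not reproduced here), direct coupling can be made to grow with the number of relations and parameters between two pages, and indirect coupling with the overlap of their relation patterns:

```python
from collections import defaultdict

def direct_coupling(links):
    """links: (src_page, dst_page, n_params) triples.
    Hypothetical weighting: each relation contributes 1 plus the
    number of parameters it passes."""
    coupling = defaultdict(float)
    for src, dst, n_params in links:
        coupling[frozenset((src, dst))] += 1 + n_params
    return dict(coupling)

def indirect_coupling(neighbours, p, q):
    """Jaccard similarity of the two pages' neighbour sets, excluding
    each other -- one plausible reading of 'indirect connectivity
    strength', not the paper's exact formula."""
    a = neighbours.get(p, set()) - {q}
    b = neighbours.get(q, set()) - {p}
    return len(a & b) / len(a | b) if (a | b) else 0.0
```

A clustering step would then merge the pages with the highest combined coupling first.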


Extracting Specific Information in Web Pages Using Machine Learning (머신러닝을 이용한 웹페이지 내의 특정 정보 추출)

  • Lee, Joung-Yun;Kim, Jae-Gon
    • Journal of Korean Society of Industrial and Systems Engineering
    • /
    • v.41 no.4
    • /
    • pp.189-195
    • /
    • 2018
  • With the advent of the digital age, the production and distribution of web pages has exploded, and internet users frequently need to extract specific information from these vast numbers of pages. However, it takes considerable time and effort for users to find a specific piece of information across many web pages. While commonly used search engines provide users with web pages containing the information they are looking for, additional time and effort is required to find the specific information among extensive search results. Therefore, it is necessary to develop algorithms that can automatically extract specific information from web pages. Every year, thousands of international conferences are held all over the world. Each international conference has a website providing general information such as the date of the event, the venue, a greeting, the abstract submission deadline, the registration date, etc. It is not easy for researchers to catch the abstract submission deadline quickly because it is displayed in various formats from conference to conference and is frequently updated. This study focuses on extracting abstract submission deadlines from international conference websites. We use three machine learning models, SVM, decision trees, and artificial neural networks, to develop algorithms that extract the abstract submission deadline from an international conference website. The performance of the suggested algorithms is evaluated using 2,200 conference websites.
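A minimal stdlib sketch of the extraction task described above: find date-like strings and score the surrounding line by keyword features. The regex, keyword list, and hand-set weights are illustrative assumptions; the paper trains SVM, decision tree, and neural network models on features like these instead of fixing weights by hand.

```python
import re

MONTHS = r'Jan|Feb|Mar|Apr|May|Jun|Jul|Aug|Sep|Oct|Nov|Dec'
DATE_RE = re.compile(rf'\b(?:{MONTHS})[a-z]*\.?\s+\d{{1,2}},?\s+\d{{4}}', re.I)

# Hypothetical keyword weights; a trained model would learn these.
KEYWORDS = {'abstract': 3, 'submission': 3, 'deadline': 2, 'paper': 1}

def find_deadline(lines):
    """Return the date on the highest-scoring date-bearing line, or None."""
    scored = []
    for line in lines:
        match = DATE_RE.search(line)
        if match:
            score = sum(w for kw, w in KEYWORDS.items() if kw in line.lower())
            scored.append((score, match.group(0)))
    return max(scored)[1] if scored else None
```

In practice the line-level features would be fed to the classifier, which labels each candidate date as deadline or not.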

Effect of Rule Identification in Acquiring Rules from Web Pages (웹 페이지의 내재 규칙 습득 과정에서 규칙식별 역할에 대한 효과 분석)

  • Kang, Ju-Young;Lee, Jae-Kyu;Park, Sang-Un
    • Journal of Intelligence and Information Systems
    • /
    • v.11 no.1
    • /
    • pp.123-151
    • /
    • 2005
  • In the world of Web pages, there are oceans of documents in natural language texts and tables. To extract rules from Web pages and maintain consistency between them, we have developed the framework of XRML (eXtensible Rule Markup Language). XRML allows the identification of rules on Web pages and generates the identified rules automatically. For this purpose, we have designed the Rule Identification Markup Language (RIML), which is similar to the formal Rule Structure Markup Language (RSML), both as parts of XRML. RIML is designed to identify rules not only in texts but also in tables on Web pages, and to transform them automatically into formal rules in RSML syntax. While designing RIML, we considered the features of shared variables and values, omitted terms, and synonyms. Using these features, rules can be identified or changed once, automatically generating their corresponding RSML rules. We conducted an experiment to evaluate the effect of the RIML approach with real-world Web pages of Amazon.com, BarnesandNoble.com, and Powells.com. We found that 97.7% of the rules can be detected on the Web pages, and the completeness of the generated rule components is 88.5%. This is good evidence that XRML can facilitate the extraction and maintenance of rules from Web pages while building expert systems in the Semantic Web environment.


Evaluating the Quality of Basic Life Support Information for Primary Korean-Speaking Individuals on the Internet (국내 인터넷 웹 페이지에 나타난 기본심폐소생술 정보의 질 평가)

  • Kang, Hee Do;Moon, Hyung Jun;Lee, Jung Won;Choi, Jae Hyung;Lee, Dong Wook;Kim, Hyun Su;Kang, In Gu;Kim, Doh Eui;Lee, Hyung Jung;Lee, Han You
    • Health Communication
    • /
    • v.13 no.2
    • /
    • pp.125-132
    • /
    • 2018
  • Purpose: The aim of this study is to investigate the quality of basic life support (BLS) information available on the internet to primary Korean-speaking individuals. Methods: Using the Google search engine, we searched for the terms 'CPR', 'cardiopulmonary resuscitation (in Korean)' and 'cardiac arrest (in Korean)'. The accuracy, reliability and accessibility of web pages were evaluated based on the 2015 American Heart Association (AHA) guidelines for CPR & emergency cardiovascular care, the Health on the Net Foundation code of conduct, and the Korean web content accessibility guidelines 2.1, respectively. Results: Of the 178 web pages screened, 50 met the criteria for inclusion. The overall quality of BLS information was insufficient (median 5/7, IQR 4.75-6). 23 (36%) pages were created in accordance with the 2010 AHA guidelines. Only 24 (48%) web pages taught how to use an automated external defibrillator. The attribution and transparency scores for page reliability were relatively low, at 20 (40%) and 16 (32%) respectively. The web accessibility score was relatively high. Conclusion: Only a small proportion of the internet web pages found through Google provide high-quality BLS information for a Korean-speaking population. Web pages based on outdated guidelines were still being returned in searches. The citation of sources for CPR information and the transparency of authorship should be improved. Continuous verification and evaluation of the quality of BLS information on the Internet is needed.

A Basic Thinking of Pansori Reading Text Appearance -A study on versions of <Chunhyangjun>- (판소리 독서물 탄생의 기반 사유 -<춘향전> 필사본을 통한 고찰-)

  • Cha, Chounghwan
    • (The) Research of the performance art and culture
    • /
    • no.23
    • /
    • pp.313-346
    • /
    • 2011
  • This thesis investigated the underlying thinking behind the appearance of Pansori reading texts. Among Pansori reading texts, some versions include contents and scenes unfamiliar from the base text. These were created by the writers of the Pansori reading texts. Why did these writers create them? First, some writers created new contents and scenes in order to show their knowledge. Reading texts with this feature include the 28-page version of Chunhyangjun belonging to Kim Kwang-sun, the 87-page version belonging to Sa Jae-dong, and the 154-page version belonging to Hong Yun-pyo. These reading texts were affected by the knowledge culture of the late Chosun period. Second, some writers created new contents and scenes in order to reenact the festive field. Reading texts with this feature include the 75-page version of Chunhyangjun held by Kyungsang university and the 52-page version held by Keimyung university; the former shows the story field and the Pansori field, the latter the play field of Walja. Third, some writers created new contents and scenes in order to lampoon yangban authority. Reading texts with this feature include the 72-page version of Chunhyangjun held by Chungnam university and its affiliation, and the 59-page version belonging to Park Sun-ho and its affiliation.

Analysis User Action in Web Pages using Ajax technique (Ajax 를 이용한 사용자의 웹 페이지 이용 행태 분석)

  • Lee, Dong-Hoon;Yoon, Tae-Bok;Kim, Kun-Su;Lee, Jee-Hyong
    • Proceedings of the HCI Society of Korea Conference
    • /
    • 2008.02a
    • /
    • pp.528-533
    • /
    • 2008
  • Web page evaluation is an important issue on the Internet, where the number of web pages is increasing extremely fast. Evaluating web pages based on frequency measures such as page view (PV) counts is not sufficient, even though such measures are widely used, because users do not stay long on unnecessary or irrelevant web pages. We therefore concentrate on users' visit duration for evaluating web pages, and collect user actions as well. Users perform actions while using a web page in the browser: mouse pointer movements, mouse button clicks, page scrolling and so on. JavaScript can collect these user actions, and Ajax can send the collected data to the server while the user browses, without any user notification.
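On the server side, the duration signal the abstract emphasises can be recovered from the posted event stream. A sketch under the assumption that events arrive as time-ordered (timestamp_ms, page_url, event_type) triples; the paper's actual payload format is not specified here.

```python
def dwell_times(events):
    """Seconds between the first and last recorded action on each page.
    events: (timestamp_ms, page_url, event_type) triples, time-ordered,
    as a JavaScript collector might POST them via Ajax."""
    first, last = {}, {}
    for ts, page, _event in events:
        first.setdefault(page, ts)   # keep the earliest timestamp per page
        last[page] = ts              # overwrite, keeping the latest
    return {page: (last[page] - first[page]) / 1000.0 for page in first}
```

Pages with long dwell times and many actions would then score higher than pages users abandon immediately, regardless of raw page-view counts.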


Architecture of XRML-based Comparison Shopping Mall and Its Performance on Delivery Cost Estimation (XRML 기반 비교쇼핑몰의 구조와 배송비 산정에 관한 실증분석)

  • Lee Jae Kyu;Kang Juyoung
    • Journal of the Korean Operations Research and Management Science Society
    • /
    • v.30 no.2
    • /
    • pp.185-199
    • /
    • 2005
  • With the growth of internet shopping malls, there is increasing interest in comparison shopping malls. However, most comparison sites compare only book prices by collecting simple XML data, and do not provide exact comparisons including precise shipping costs. Shipping costs vary depending on each customer's address, the delivery method, and the category of the selected goods, so a rule-based system is required to calculate exact shipping costs. Therefore, we designed and implemented a comparison shopping mall that compares not only book prices but also shipping costs using rule-based inference. By adopting the eXtensible Rule Markup Language (XRML) approach, we propose a methodology for extracting delivery rules from the Web pages of each shopping mall. The XRML approach can facilitate nearly automatic rule extraction from Web pages and consistency maintenance between Web pages and the rule base. We developed the ConsiderD system, which applies our XRML-based rule acquisition methodology. The objective of the ConsiderD system is to compare the exact total cost of books, including the delivery cost, across Amazon.com, BarnesandNoble.com, and Powells.com. With this prototype, we conducted an experiment to show the potential of automatic rule acquisition from Web pages and to illustrate the effect of delivery cost.
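The kind of delivery rule base a system like ConsiderD acquires can be sketched as a first-match rule list. The conditions and costs below are invented for illustration only; the real rules are extracted from each mall's Web pages via XRML.

```python
# Hypothetical delivery rules; the first matching condition wins.
RULES = [
    (lambda order: order['country'] != 'US', 12.00),     # international flat rate
    (lambda order: order['method'] == 'express', 8.00),  # express surcharge
    (lambda order: order['category'] == 'book', 3.00),   # standard book rate
]

def delivery_cost(order, default=5.00):
    """Evaluate the rule list against one order."""
    for condition, cost in RULES:
        if condition(order):
            return cost
    return default

def total_cost(price, order):
    """The figure a comparison site would display: item price + delivery."""
    return price + delivery_cost(order)
```

Because the rules depend on address, method, and category, two malls with the same book price can rank differently once delivery is included, which is the effect the experiment measures.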