Search results for Title/Summary/Keyword: "web page" (668 results)

A Study Web Server tuning about Preventing for XSS In Web Page using DLL (DLL 를 이용한 웹페이지 에서의 XSS 대응에 대한 웹 서버 성능 향상 방안)

  • Lee, Nae-Hong; Lee, Heejo
    • Proceedings of the Korea Information Processing Society Conference / 2007.11a / pp.1234-1237 / 2007
  • In IT environments built on web services, CGI (Common Gateway Interface) is what makes dynamic web pages work, and web pages that use CGI are vulnerable to XSS (Cross Site Scripting). There have been many cases of damage from malicious actions that exploit XSS vulnerabilities, such as web page tampering and cookie hijacking. To solve this problem, previous studies countered XSS attacks by checking bulletin-board input values and filtering meta characters. However, because this approach places a filtering script on every page, it imposes a heavy load on the web server. To reduce this load, this paper proposes improving web server performance by packaging the filtering script as a DLL (Dynamic Link Library) and having each page call the modularized functions.
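
A minimal sketch of the shared-filter idea, with hypothetical names: the paper's implementation is a Windows DLL called from CGI pages, while this Python fragment only illustrates replacing per-page filtering script with one reusable, modularized function.

```python
# Hypothetical sketch, not the paper's DLL: the point is that every page
# calls one shared, modularized filter function instead of carrying its
# own inline filtering script.
import html

# Meta characters commonly filtered to block script injection (illustrative).
META_CHARS = set('<>"\'&();')

def contains_meta(value: str) -> bool:
    """Reject bulletin-board input outright if it carries raw meta characters."""
    return any(ch in META_CHARS for ch in value)

def sanitize(value: str) -> str:
    """Escape XSS meta characters in a user-supplied field."""
    return html.escape(value, quote=True)

if __name__ == "__main__":
    payload = "<script>document.cookie</script>"
    print(contains_meta(payload))  # True
    print(sanitize(payload))       # &lt;script&gt;document.cookie&lt;/script&gt;
```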

Ranking Quality Evaluation of PageRank Variations (PageRank 변형 알고리즘들 간의 순위 품질 평가)

  • Pham, Minh-Duc; Heo, Jun-Seok; Lee, Jeong-Hoon; Whang, Kyu-Young
    • Journal of the Institute of Electronics Engineers of Korea CI / v.46 no.5 / pp.14-28 / 2009
  • The PageRank algorithm is an important component for ranking Web pages in Google and other search engines. While many improvements for the original PageRank algorithm have been proposed, it is unclear which variations (and their combinations) provide the "best" ranked results. In this paper, we evaluate the ranking quality of the well-known variations of the original PageRank algorithm and their combinations. In order to do this, we first classify the variations into link-based approaches, which exploit the link structure of the Web, and knowledge-based approaches, which exploit the semantics of the Web. We then propose algorithms that combine the ranking algorithms in these two approaches and implement both the variations and their combinations. For our evaluation, we perform extensive experiments using a real data set of one million Web pages. Through the experiments, we find the algorithms that provide the best ranked results from either the variations or their combinations.
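
For reference, the baseline that the surveyed variations modify is the original PageRank recurrence; below is a minimal, illustrative power-iteration sketch, not the paper's implementation.

```python
# Power-iteration PageRank over an adjacency list: the original algorithm
# whose damping, link weights, etc. the surveyed variations adjust.
def pagerank(links, d=0.85, iters=50):
    """links: {page: [outgoing pages]}; returns {page: score}."""
    pages = set(links) | {q for outs in links.values() for q in outs}
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iters):
        nxt = {p: (1.0 - d) / n for p in pages}
        for p in pages:
            outs = links.get(p, [])
            if outs:
                share = d * rank[p] / len(outs)
                for q in outs:
                    nxt[q] += share
            else:  # dangling page: spread its mass evenly
                for q in pages:
                    nxt[q] += d * rank[p] / n
        rank = nxt
    return rank

print(pagerank({"a": ["b"], "b": ["a", "c"], "c": ["a"]}))
```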

An Improved Approach to Ranking Web Documents

  • Gupta, Pooja; Singh, Sandeep K.; Yadav, Divakar; Sharma, A.K.
    • Journal of Information Processing Systems / v.9 no.2 / pp.217-236 / 2013
  • Ranking thousands of web documents so that the most relevant ones are returned in response to a user query is a challenging task. For this purpose, search engines apply different ranking mechanisms to the apparently related resultant web documents to decide the order in which they should be displayed. Existing ranking mechanisms order a web page based on the number and popularity of the links pointing to and emerging from it, and search engines sometimes place less relevant documents in the top positions in response to a user query, so there is a strong need to improve the ranking strategy. In this paper, a novel ranking mechanism is proposed that ranks web documents by considering both the HTML structure of a page and the contextual senses of the keywords present within it and its back-links. The approach has been tested on data sets of URLs and their back-links for different topics. The experimental results show that the overall search results returned in response to user queries are improved. The ordering of the obtained links is compared with the ordering produced by the PageRank score, and the comparison shows that the proposed mechanism places more contextually related web pages in the top positions.
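
A hedged sketch of the structure-aware part of this idea: weight keyword hits by the HTML element they occur in, so a match in the title outweighs one in body text. The tag weights below are hypothetical placeholders, not the paper's calibrated values.

```python
# Illustrative structure-aware scoring: keyword occurrences are weighted by
# the enclosing HTML element. Weights are assumptions for demonstration.
from html.parser import HTMLParser

TAG_WEIGHTS = {"title": 5.0, "h1": 3.0, "h2": 2.0, "a": 1.5}  # body text: 1.0

class WeightedScorer(HTMLParser):
    def __init__(self, keyword):
        super().__init__()
        self.keyword = keyword.lower()
        self.stack = []   # currently open tags
        self.score = 0.0

    def handle_starttag(self, tag, attrs):
        self.stack.append(tag)

    def handle_endtag(self, tag):
        if self.stack:
            self.stack.pop()

    def handle_data(self, data):
        hits = data.lower().count(self.keyword)
        if hits:
            weight = max((TAG_WEIGHTS.get(t, 1.0) for t in self.stack), default=1.0)
            self.score += hits * weight

scorer = WeightedScorer("ranking")
scorer.feed("<title>Ranking web documents</title><p>ranking matters</p>")
print(scorer.score)  # 6.0: title hit (5.0) + body hit (1.0)
```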

Real time Rent-A-Car reservation service based on Flex (Flex 기반의 실시간 렌트카 예약서비스)

  • Kwon, Hoon; Han, Dong-gyun; Lee, Hye-sun; Kwak, Ho-young
    • Proceedings of the Korea Contents Association Conference / 2007.11a / pp.853-856 / 2007
  • With the growth of the Internet, user demand for convenient web pages has been increasing, which led to the recent emergence of the Web 2.0 concept. The basic idea of Web 2.0 is to implement a comfortable UI (User Interface) and a more interactive web than before. RIA (Rich Internet Application) is a group of technologies for implementing Web 2.0, and interactive web pages can be built with the various technologies that fall under RIA. In this paper, we rebuild a rent-a-car reservation service page with Flex, which has recently become popular for its rich animation effects and convenient development environment. The result is an efficient, interactive page with faster real-time information updates and no page-to-page reloads.


Web Page Segmentation

  • Ahmad, Mahmood; Lee, Sungyoung
    • Proceedings of the Korea Information Processing Society Conference / 2014.11a / pp.1087-1090 / 2014
  • This paper presents an overview of research work related to web page segmentation. Over time, various techniques have been used and proposed to extract meaningful information from web pages automatically. Because of the voluminous amount of data, this extraction demands state-of-the-art techniques that segment web pages just as, or nearly as well as, humans do. The motivation is to facilitate applications that rely on meaningful data acquired from multiple web pages. Information extraction, search engines, and reorganized web display for small-screen devices are a few strong candidate areas where web page segmentation has considerable potential and utility.

A Web Surfing Assistant for Improved Web Accessibility (웹 접근성 향상을 위한 웹 서핑 도우미)

  • Lee, SooCheol; Lee, Sieun; Hwang, Eenjun
    • Journal of KIISE: Software and Applications / v.31 no.9 / pp.1180-1195 / 2004
  • Due to the exponential increase of information, searching for and accessing web information or services takes much time. Web information is represented across several web pages connected by hyperlinks, and each web page contains several topics. However, most existing web tools do not reflect such web authoring tendencies and treat each page as an independent information unit. This inconsistency causes inherent problems in web browsing and searching. In this paper, we propose a web surfing assistant called LinkBroker that provides integrated summary pages, composed of relevant information extracted from several web pages with table and frame structures, in order to improve accessibility to web information. Specifically, the system extracts a set of logically connected web pages and groups those pages using table and frame tags. Then, the essential information blocks in each page of a group are extracted to construct an integrated summary page, which gives the user a comprehensive view and a shortcut to distributed information. Experimental results show the effectiveness and usefulness of the LinkBroker system.
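
As a rough illustration of the block-extraction step (not the LinkBroker implementation itself), the following sketch collects the contents of table cells as candidate information blocks for an integrated summary page.

```python
# Hedged sketch: gather text inside <td>/<th> cells as candidate info
# blocks, loosely following the idea of mining table-structured pages.
from html.parser import HTMLParser

class TableBlockExtractor(HTMLParser):
    """Collects text found inside table cells as candidate info blocks."""
    def __init__(self):
        super().__init__()
        self.in_cell = False
        self.blocks = []

    def handle_starttag(self, tag, attrs):
        if tag in ("td", "th"):
            self.in_cell = True

    def handle_endtag(self, tag):
        if tag in ("td", "th"):
            self.in_cell = False

    def handle_data(self, data):
        if self.in_cell and data.strip():
            self.blocks.append(data.strip())

extractor = TableBlockExtractor()
extractor.feed("<table><tr><td>News</td><td>Sports</td></tr></table>")
print(extractor.blocks)  # ['News', 'Sports']
```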

Implementation of big web logs analyzer in estimating preferences for web contents (웹 컨텐츠 선호도 측정을 위한 대용량 웹로그 분석기 구현)

  • Choi, Eun Jung; Kim, Myuhng Joo
    • Journal of Korea Society of Digital Industry and Information Management / v.8 no.4 / pp.83-90 / 2012
  • With the rapid growth of Internet infrastructure, the World Wide Web has recently evolved beyond simple information sharing into services such as E-business, remote control and management, and virtual services, and more recently into cloud computing and social network services. Communication through the World Wide Web has shifted toward user-centric customized services rather than provider-centric information. In these environments, it is very important to check and analyze user requests to a website, and estimating user preferences is the most important part. For this reason, web logs are analyzed, but most existing analyses are limited to page-unit statistics. Statistics on specific pages alone are not enough to evaluate user preferences, because the main contents of recent web pages are made of media files such as images and of dynamic pages built with techniques such as CSS, Div, and iFrame. In this paper, a large-scale log analyzer was designed and implemented to analyze web server logs and estimate users' preferences for web contents. Using MapReduce on Hadoop, large logs were analyzed and preferences for media files such as images, sounds, and videos were estimated.
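
As a hedged illustration of the MapReduce step, the following Hadoop Streaming-style mapper/reducer pair counts requests per media file from common-format access log lines; the field positions and extension list are assumptions, not the paper's exact analyzer.

```python
# Illustrative mapper/reducer pair: count requests per media file from
# combined-format access logs. Field index 6 (the request path) and the
# extension list are assumptions for this sketch.
from itertools import groupby

MEDIA_EXT = (".jpg", ".png", ".gif", ".mp3", ".mp4", ".avi")

def mapper(lines):
    for line in lines:
        parts = line.split()
        if len(parts) > 6:
            url = parts[6]  # request path in combined log format
            if url.lower().endswith(MEDIA_EXT):
                yield url, 1

def reducer(pairs):
    # pairs must arrive sorted by key, as Hadoop guarantees between phases
    for url, group in groupby(pairs, key=lambda kv: kv[0]):
        yield url, sum(count for _, count in group)

log = ['1.2.3.4 - - [10/Oct/2024:13:55:36 +0900] "GET /img/banner.jpg HTTP/1.1" 200 512',
       '1.2.3.4 - - [10/Oct/2024:13:55:37 +0900] "GET /img/banner.jpg HTTP/1.1" 200 512']
print(list(reducer(sorted(mapper(log)))))  # [('/img/banner.jpg', 2)]
```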

Development of Web Contents for Statistical Analysis Using Statistical Package and Active Server Page (통계패키지와 Active Server Page를 이용한 통계 분석 웹 컨텐츠 개발)

  • Kang, Tae-Gu; Lee, Jae-Kwan; Kim, Mi-Ah; Park, Chan-Keun; Heo, Tae-Young
    • Journal of Korea Society of Industrial Information Systems / v.15 no.1 / pp.109-114 / 2010
  • In this paper, we developed web content for statistical analysis using a statistical package and Active Server Pages (ASP). Statistical packages are difficult for non-statisticians to learn and use; however, non-statisticians want to analyze data without learning packages such as SAS, S-PLUS, and R. Therefore, we developed web-based statistical analysis content using S-PLUS, a popular statistical package, together with ASP. As a real application, we developed web content for various statistical analyses, such as exploratory data analysis, analysis of variance, and time series analysis, using water quality data. The developed web content is very useful for non-statisticians such as public servants and researchers. Consequently, by combining web-based content with a statistical package, users can access the site quickly and analyze data easily.
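
A minimal sketch of the same pattern under stated assumptions: the original uses ASP pages invoking S-PLUS, while here a Python web handler shells out to Rscript (assumed to be on the PATH) as a stand-in statistics engine and returns its output.

```python
# Hedged sketch of "web front-end calls a statistics package": the handler
# runs the engine as a subprocess and returns its text output. Rscript is
# an assumed stand-in for the paper's S-PLUS backend.
import subprocess
from http.server import BaseHTTPRequestHandler, HTTPServer

class StatsHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Run a one-line summary in the statistics engine (illustrative).
        result = subprocess.run(
            ["Rscript", "-e", "print(summary(c(1, 2, 3, 4, 5)))"],
            capture_output=True, text=True,
        )
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.end_headers()
        self.wfile.write(result.stdout.encode())

if __name__ == "__main__":
    HTTPServer(("localhost", 8000), StatsHandler).serve_forever()
```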

Automatic Reconstruction of Web Pages for Mobile Devices (무선 단말기를 위한 웹 페이지의 자동 재구성)

  • Song, Dong-Rhee; Hwang, Een-Jun
    • The KIPS Transactions: Part B / v.9B no.5 / pp.523-532 / 2002
  • Recently, with the wide spread of the Internet and the development of wireless network technology, it has become possible to access web pages anytime, anywhere through devices with small displays such as PDAs. But since most existing web pages are optimized for desktop computers, browsing them on a small screen over a wireless network requires more scrolling and longer loading times. In this paper, we propose a page reconstruction scheme called PageMap that makes it feasible to navigate existing web pages on small-screen devices even over wireless connections. Reconstructed pages reduce the file and page size and thus eventually reduce resource requirements. We have implemented a prototype system, performed several experiments on typical web sites, and report some of the results.
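
As an illustration of the reconstruction idea (not the PageMap system itself), the sketch below splits a large page at top-level headings into small sub-pages plus an index page of links, trading one heavy page load for several light ones.

```python
# Illustrative page reconstruction: cut a page at <h2> headings into
# sub-pages and build an index page linking to them. The heading level and
# file names are assumptions for this sketch.
import re

def split_page(html_text):
    """Split on <h2> headings; return (index_html, [subpage_html, ...])."""
    sections = re.split(r"(?=<h2>)", html_text)
    intro, subpages = sections[0], sections[1:]
    links = []
    for i, sec in enumerate(subpages):
        title = re.search(r"<h2>(.*?)</h2>", sec)
        name = title.group(1) if title else f"Section {i + 1}"
        links.append(f'<li><a href="part{i}.html">{name}</a></li>')
    index = intro + "<ul>" + "".join(links) + "</ul>"
    return index, subpages

idx, parts = split_page("<p>intro</p><h2>News</h2><p>...</p><h2>Links</h2><p>...</p>")
print(idx)         # intro plus a two-link menu
print(len(parts))  # 2
```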

Web Page Similarity based on Size and Frequency of Tokens (토큰 크기 및 출현 빈도에 기반한 웹 페이지 유사도)

  • Lee, Eun-Joo; Jung, Woo-Sung
    • Journal of Information Technology Services / v.11 no.4 / pp.263-275 / 2012
  • Web applications are becoming hard to maintain because of the high complexity and duplication of web pages. However, most research on code clones focuses on code hunks and is limited to a specific language. Thus, we propose GSIM, a language-independent statistical approach that detects similar pages based on the scarcity and frequency of customized tokens. The tokens, obtained by splitting pages on a set of given separators, are defined as the atomic elements for calculating the similarity between two pages. In this paper, we give the domain definition for web applications and the algorithms for collecting tokens, building matrices, and calculating similarity. For evaluation, we also conducted experiments on open source code with our GSIM tool. The results show the applicability of the proposed method and the effects of parameters such as the threshold, toughness, and token length on quality and performance.
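
A hedged sketch in the spirit of GSIM: split pages on a separator set, weight tokens by scarcity so rare tokens count more, and compare pages by weighted cosine similarity. The separator set and weighting formula below are illustrative assumptions, not the paper's exact metric.

```python
# Illustrative token-based page similarity: separator-split tokens, an
# idf-like scarcity weight, and weighted cosine similarity between pages.
import math
import re
from collections import Counter

SEPARATORS = r"[<>\s=\"'/(){};,]+"  # assumed separator set

def tokens(page: str) -> Counter:
    return Counter(t for t in re.split(SEPARATORS, page) if t)

def similarity(page_a: str, page_b: str) -> float:
    a, b = tokens(page_a), tokens(page_b)
    # Scarcity weight: frequent tokens are dampened by their total count.
    weight = {t: 1.0 / math.log(2 + a[t] + b[t]) for t in set(a) | set(b)}
    dot = sum(a[t] * b[t] * weight[t] ** 2 for t in set(a) & set(b))
    norm_a = math.sqrt(sum((a[t] * weight[t]) ** 2 for t in a))
    norm_b = math.sqrt(sum((b[t] * weight[t]) ** 2 for t in b))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

print(similarity("<div class='x'>hello</div>", "<div class='y'>hello</div>"))
```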