• Title/Summary/Keyword: Web Segmentation (웹 분할)


Urinary Stones Segmentation Model and AI Web Application Development in Abdominal CT Images Through Machine Learning (기계학습을 통한 복부 CT영상에서 요로결석 분할 모델 및 AI 웹 애플리케이션 개발)

  • Lee, Chung-Sub;Lim, Dong-Wook;Noh, Si-Hyeong;Kim, Tae-Hoon;Park, Sung-Bin;Yoon, Kwon-Ha;Jeong, Chang-Won
    • KIPS Transactions on Computer and Communication Systems
    • /
    • v.10 no.11
    • /
    • pp.305-310
    • /
    • 2021
  • Artificial intelligence technology in the medical field initially focused on analysis and algorithm development, but it is gradually shifting toward web application development to deliver services as products. This paper describes a urinary stone segmentation model for abdominal CT images and an artificial intelligence web application based on it. The model was implemented using U-Net, an end-to-end fully convolutional network proposed for image segmentation in the medical imaging field. The web service was developed on the AWS cloud using Flask, a Python-based micro web framework. Finally, the result predicted by the urolithiasis segmentation model through model serving is shown as the output of the AI web application service. We expect that our proposed AI web application service will be utilized for screening tests.
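As a rough sketch of the model-serving setup this abstract describes, the Flask route below wires a stand-in segmentation function into a JSON endpoint. The route name, payload shape, and the thresholding placeholder are illustrative assumptions, not the paper's actual U-Net deployment.

```python
# Minimal model-serving sketch with Flask (hypothetical endpoint; a real
# deployment would load trained U-Net weights instead of the placeholder).
from flask import Flask, jsonify, request

app = Flask(__name__)

def segment_stones(pixels):
    # Stand-in for the U-Net model: plain intensity thresholding.
    return [[1 if p > 128 else 0 for p in row] for row in pixels]

@app.route("/predict", methods=["POST"])
def predict():
    pixels = request.get_json()["pixels"]  # 2-D list of CT intensities
    return jsonify({"mask": segment_stones(pixels)})
```

The same route structure would apply once a trained model replaces the placeholder: the endpoint receives an image, runs inference, and returns the predicted mask.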

Caret Unit Generation Method from PC Web for Mobile Device (캐럿 단위를 이용한 PC 웹 컨텐츠를 모바일 단말기에 서비스 하는 방법)

  • Park, Dae-Hyuck;Kang, Eui-Sun;Lim, Young-Hwan
    • The KIPS Transactions:PartD
    • /
    • v.14D no.3 s.113
    • /
    • pp.339-346
    • /
    • 2007
  • The objective of this study is to satisfy the requirements of a variety of terminals for playing wired web page contents in a ubiquitous environment constantly connected to the network. In other words, this study aims to automatically transcode wired web pages into mobile web pages so that the contents of Internet web pages can be served on mobile devices. To achieve this, we suggest a method in which the URL of a web page is entered directly on the mobile device to view the contents of that page. The web page is converted into an image and configured into a mobile web page suitable for the personal terminal. Users obtain the effect of using web services on a computer, with interfaces to enlarge, reduce, and move the web page as desired. This is a caret-unit play method, in which the contents of a web page are transcoded and played to suit each user. With the method proposed in this study, the contents of a wired web page can be played on a mobile device. This study confirms that a single content item can be served to suit users of various terminals, which will make it possible to reuse numerous wired web contents as mobile web contents.

Performance Isolation Technique Considering Service Level Agreement in Web Servers (SLA를 고려한 웹 서버 성능 분할 기법)

  • 고현주;박기진;박미선
    • Proceedings of the Korean Information Science Society Conference
    • /
    • 2004.10a
    • /
    • pp.652-654
    • /
    • 2004
  • To satisfy the SLA (Service Level Agreement), a service-level contract between clients and a service provider, a technique is needed that classifies client requests into priority classes and serves clients requesting a high service level ahead of those requesting a low one. In this paper, we study static and dynamic methods for partitioning the service provider's web server nodes according to priority, and analyze response-time performance with respect to the SLA through simulation.

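The static and dynamic node-partitioning idea in the abstract above can be sketched as follows; the fifty-fifty fallback and the queue-length-based rebalancing rule are illustrative assumptions, not the paper's exact scheme.

```python
# Hedged sketch of SLA-aware web server partitioning between two
# priority classes ("gold" and "silver" are hypothetical names).
def partition_nodes(total_nodes, gold_share):
    """Static split: reserve a fixed fraction of nodes for the gold class."""
    gold = round(total_nodes * gold_share)
    return {"gold": gold, "silver": total_nodes - gold}

def repartition(total_nodes, gold_queue, silver_queue):
    """Dynamic split: shift nodes toward the class with the longer request
    queue, always keeping at least one node per class."""
    total_queue = gold_queue + silver_queue
    if total_queue == 0:
        return partition_nodes(total_nodes, 0.5)
    gold = max(1, min(total_nodes - 1,
                      round(total_nodes * gold_queue / total_queue)))
    return {"gold": gold, "silver": total_nodes - gold}
```

A dynamic policy like this trades isolation for utilization: idle silver nodes can absorb gold-class bursts, which is what lets the provider honor high-level SLAs under load.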

A Design and Performance Analysis of Web Cache Replacement Policy Based on the Size Heterogeneity of the Web Object (웹 객체 크기 이질성 기반의 웹 캐시 대체 기법의 설계와 성능 평가)

  • Na Yun Ji;Ko Il Seok;Cho Dong Uk
    • The KIPS Transactions:PartC
    • /
    • v.12C no.3 s.99
    • /
    • pp.443-448
    • /
    • 2005
  • Efficient use of the web cache has become an important factor in determining system management efficiency in web-based systems. Cache performance depends heavily on the replacement algorithm, which dynamically selects a suitable subset of objects for caching in a finite cache space. In this paper, a web caching algorithm is proposed for the efficient operation of web-based systems. The algorithm is designed on the basis of a divided scope that considers the size reference characteristics and heterogeneity of web objects. In our experimental environment, the algorithm was compared with conventional replacement algorithms, and we confirmed a performance improvement of more than 15%.
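The "divided scope by object size" idea could look something like the sketch below, where the cache space is split into a small-object and a large-object scope with LRU replacement inside each. The boundary value, the even capacity split, and LRU as the per-scope policy are assumptions for illustration, not the paper's measured design.

```python
from collections import OrderedDict

# Illustrative size-partitioned web cache: objects are routed to a scope
# by size, and each scope evicts its own least-recently-used entries.
class PartitionedCache:
    def __init__(self, capacity_bytes, boundary=1024):
        self.boundary = boundary  # objects below this size use the small scope
        half = capacity_bytes // 2
        self.scopes = {
            "small": {"cap": half, "used": 0, "items": OrderedDict()},
            "large": {"cap": capacity_bytes - half, "used": 0, "items": OrderedDict()},
        }

    def _scope(self, size):
        return self.scopes["small" if size < self.boundary else "large"]

    def get(self, key):
        for scope in self.scopes.values():
            if key in scope["items"]:
                scope["items"].move_to_end(key)  # refresh recency
                return scope["items"][key]
        return None

    def put(self, key, size):
        scope = self._scope(size)
        if size > scope["cap"]:
            return  # object larger than its whole scope: never cached
        while scope["used"] + size > scope["cap"]:
            _, old_size = scope["items"].popitem(last=False)  # evict LRU
            scope["used"] -= old_size
        scope["items"][key] = size
        scope["used"] += size
```

The point of the partition is that one huge object can no longer flush many small, frequently referenced objects out of the cache, which is the size-heterogeneity problem the abstract targets.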

The Structure of a Web site and Navigability (웹 사이트의 구조와 항해가능성)

  • Min, Kyung-Sil;Chun, Sung-Kyu;Jang, Gi-Ho;Jung, Hyo-Sook;Park, Seong-Bin
    • The Journal of Korean Association of Computer Education
    • /
    • v.14 no.3
    • /
    • pp.51-62
    • /
    • 2011
  • Navigability refers to how easily a user can find desired information in a web site and is influenced by the site's structure. In this paper, we created three types of web sites: one whose structure forms a small world, one whose structure forms a semi-matroid, and one based on an ontology, and measured the navigability of each using two criteria: the number of hyperlinks clicked by users to find the desired information, and the elapsed time to find it. We selected these three structures because, in each of them, hyperlinks can be created in a way that helps a user find desired information. From the experiments, the average number of hyperlinks a user clicked to find the desired information was ordered as follows: the semi-matroid web site (100.37 hyperlinks) < the ontology-based web site (117.63 hyperlinks) < the small-world web site (236.17 hyperlinks). The average elapsed time to find the desired information was ordered as follows: the ontology-based web site (20 min 26 sec) < the semi-matroid web site (23 min 6 sec) < the small-world web site (30 min 47 sec). Therefore, we can consider a web site built on a semi-matroid or an ontology to be relatively more navigable than one with the small-world property. We also propose a way in which these experimental results can be reflected in the design of educational web sites.


A Focused Crawler by Segmentation of Context Information (주변정보 분할을 이용한 주제 중심 웹 문서 수집기)

  • Cho, Chang-Hee;Lee, Nam-Yong;Kang, Jin-Bum;Yang, Jae-Young;Choi, Joong-Min
    • The KIPS Transactions:PartB
    • /
    • v.12B no.6 s.102
    • /
    • pp.697-702
    • /
    • 2005
  • A focused crawler is a topic-driven document-collecting crawler that has been suggested as a promising alternative for maintaining up-to-date web document indices in search engines. A major problem inherent in previous focused crawlers is the liability of missing highly relevant documents that are linked from off-topic documents. This problem mainly originates from the lack of consideration of structural information within a document; traditional weighting methods such as TF-IDF employed in document classification can lead to it. To improve the performance of focused crawlers, this paper proposes a scheme of locality-based document segmentation to determine the relevance of a document to a specific topic. We segment a document into a set of sub-documents using contextual features around the hyperlinks, and use this information to decide whether the crawler should fetch the documents linked from hyperlinks in an off-topic document.
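The locality idea above, scoring each link by the text around its anchor rather than by the whole page, can be sketched as follows. The window size, the simple keyword count, and the anchor-tag regex are illustrative assumptions; the paper's actual segmentation features are not reproduced here.

```python
import re

# Sketch of locality-based link relevance: each hyperlink is scored by
# topic keywords found in a window of words around its anchor.
def link_contexts(html, window=5):
    tokens = re.findall(r'<a\s+href="([^"]+)">|(\w+)', html)
    plain, positions = [], []
    for href, word in tokens:
        if href:
            positions.append((len(plain), href))  # link position in word stream
        else:
            plain.append(word.lower())
    return {href: plain[max(0, pos - window):pos + window]
            for pos, href in positions}

def relevant_links(html, topic_words, threshold=1):
    """Return links whose surrounding context mentions enough topic words,
    even if the page as a whole is off-topic."""
    topic = {w.lower() for w in topic_words}
    return [url for url, ctx in link_contexts(html).items()
            if sum(1 for w in ctx if w in topic) >= threshold]
```

This is exactly the case the abstract highlights: an off-topic page whose single on-topic link would be discarded by whole-page TF-IDF scoring can still be followed when only the link's local context is examined.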

Effective Web Crawling Orderings from Graph Search Techniques (그래프 탐색 기법을 이용한 효율적인 웹 크롤링 방법들)

  • Kim, Jin-Il;Kwon, Yoo-Jin;Kim, Jin-Wook;Kim, Sung-Ryul;Park, Kun-Soo
    • Journal of KIISE:Computer Systems and Theory
    • /
    • v.37 no.1
    • /
    • pp.27-34
    • /
    • 2010
  • Web crawlers are fundamental programs that iteratively download web pages by following links, starting from a small set of initial URLs. Several web crawling orderings have previously been proposed to crawl popular web pages in preference to other pages, but some graph search techniques whose characteristics and efficient implementations have been studied in the graph theory community have not yet been applied to web crawling orderings. In this paper we consider various graph search techniques, including lexicographic breadth-first search, lexicographic depth-first search, and maximum cardinality search, as well as the well-known breadth-first and depth-first searches, and then choose effective web crawling orderings that have linear time complexity and crawl popular pages early. In particular, for maximum cardinality search and lexicographic breadth-first search, whose implementations are non-trivial, we propose linear-time web crawling orderings by applying the partition refinement method. Experimental results show that maximum cardinality search has desirable properties in both time complexity and the quality of crawled pages.
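For readers unfamiliar with maximum cardinality search (MCS), a naive version as a crawl ordering is sketched below: the next page visited is the discovered page with the most already-visited neighbours. Note this sketch is quadratic; the linear-time version the abstract refers to requires the partition refinement method, which is not shown here.

```python
# Naive maximum cardinality search over a link graph (adjacency lists).
# Ties are broken by discovery order via Python's dict iteration order.
def mcs_order(graph, start):
    visited, order = set(), []
    weight = {start: 0}  # visited-neighbour count per discovered page
    while weight:
        page = max(weight, key=weight.get)  # page with maximum cardinality
        del weight[page]
        visited.add(page)
        order.append(page)
        for nb in graph.get(page, []):
            if nb not in visited:
                weight[nb] = weight.get(nb, 0) + 1
    return order
```

Because a page linked from many crawled pages accumulates weight quickly, MCS tends to pull popular pages forward in the crawl, which matches the quality result reported in the abstract.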

Integrated Sentence Preprocessing System for Web Indexing (웹 인덱싱을 위한 통합 전처리 시스템의 개발)

  • Shim, Jun-Hyuk;Cha, Jong-Won;Lee, Geun-Bae
    • Proceedings of the Korean Society for Cognitive Science Conference
    • /
    • 2000.06a
    • /
    • pp.216-223
    • /
    • 2000
  • Unlike ordinary documents, web documents are written in free form and contain much unnecessary content such as tags and code, making them unsuitable for direct use in language processing. This paper introduces the architecture and preprocessing methods of Web Tagger, an integrated preprocessing system that automatically collects web documents used as indexing targets and produces and manages them as documents aligned in sentence units. Web Tagger extracts standardized information from web documents through document cleaning, sentence segmentation, and word spacing, and automatically generates and manages source corpora in XML format to suit the purposes of application systems, including morphological analyzers. Various preprocessing techniques, such as regular expressions (Regexp), heuristics, part-of-speech index lookup, and learned rules using C4.5, contribute to improving morphological analysis accuracy and ensuring system stability.


Integrated Sentence Preprocessing System for Web Indexing (웹 인덱싱을 위한 통합 전처리 시스템의 개발)

  • Shim, Jun-Hyuk;Cha, Jong-Won;Lee, Geun-Bae
    • Annual Conference on Human and Language Technology
    • /
    • 2000.10d
    • /
    • pp.216-223
    • /
    • 2000
  • Unlike ordinary documents, web documents are written in free form and contain much unnecessary content such as tags and code, making them unsuitable for direct use in language processing. This paper introduces the architecture and preprocessing methods of Web Tagger, an integrated preprocessing system that automatically collects web documents used as indexing targets and produces and manages them as documents aligned in sentence units. Web Tagger extracts standardized information from web documents through document cleaning, sentence segmentation, and word spacing, and automatically generates and manages source corpora in XML format to suit the purposes of application systems, including morphological analyzers. Various preprocessing techniques, such as regular expressions (Regexp), heuristics, part-of-speech index lookup, and learned rules using C4.5, contribute to improving morphological analysis accuracy and ensuring system stability.

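The cleaning, sentence segmentation, and XML corpus stages of a Web Tagger-style pipeline could be sketched as below. The regexes, the naive punctuation-based sentence splitter, and the XML layout are illustrative assumptions; the actual system also uses heuristics, part-of-speech index lookup, and C4.5-learned rules not reproduced here.

```python
import re

# Sketch of a web-indexing preprocessing pipeline: clean markup,
# split into sentences, and emit a sentence-aligned XML corpus.
def clean(html):
    html = re.sub(r"<(script|style)\b[^>]*>.*?</\1>", " ", html,
                  flags=re.S | re.I)              # drop script/style blocks
    html = re.sub(r"<[^>]+>", " ", html)          # drop remaining tags
    return re.sub(r"\s+", " ", html).strip()      # normalize whitespace

def split_sentences(text):
    # Naive splitter on sentence-final punctuation.
    return [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]

def to_xml_corpus(html):
    body = "".join(f"<s>{s}</s>" for s in split_sentences(clean(html)))
    return f"<doc>{body}</doc>"
```

The sentence-aligned XML output is what makes the corpus directly consumable by a downstream morphological analyzer, which is the role the abstracts describe for Web Tagger.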

Implementation of Map Service based on Splitting Method of GeoRSS (GeoRSS 분할기법 기반의 지도 서비스 구현)

  • Lee, Bum-Suk;Hwang, Byung-Yeon
    • Journal of KIISE:Computing Practices and Letters
    • /
    • v.14 no.7
    • /
    • pp.728-732
    • /
    • 2008
  • GeoRSS is RSS that contains geospatial information, and it makes it possible to supply diverse kinds of data through mash-up map services such as Google Maps. It enables efficient web services to be implemented with simple JavaScript, but initial page loading can take too long and cause heavy traffic when the feed includes a lot of multimedia data. In this paper, we propose a splitting method for GeoRSS to enable fast access. Experimental results show improved browsing speed and reduced initial data traffic.
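The splitting idea in this abstract amounts to serving the feed in chunks so the map renders the first chunk immediately and fetches the rest on demand. A minimal sketch, assuming count-based chunking (splitting by geographic bounds would be an equally plausible choice, and the entry fields shown are hypothetical):

```python
# Illustrative GeoRSS feed splitting: fixed-size chunks of entries,
# with only the first chunk sent on initial page load.
def split_feed(entries, chunk_size):
    """Split a list of GeoRSS entries into fixed-size chunks."""
    return [entries[i:i + chunk_size]
            for i in range(0, len(entries), chunk_size)]

def initial_payload(entries, chunk_size):
    """Only the first chunk is delivered with the initial page."""
    chunks = split_feed(entries, chunk_size)
    return chunks[0] if chunks else []
```

Deferring all but the first chunk is what cuts the initial data traffic the abstract reports, at the cost of extra round trips as the user pans the map.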