• Title/Summary/Keyword: In-depth search (심층검색)

Search Results: 76, Processing Time: 0.021 seconds

Image Super Resolution Using Neural Architecture Search (심층 신경망 검색 기법을 통한 이미지 고해상도화)

  • Ahn, Joon Young;Cho, Nam Ik
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 2019.11a / pp.102-105 / 2019
  • In this paper, we implement a method of designing a deep neural network for image super-resolution using neural architecture search. Conventionally, deep network architectures for image super-resolution, denoising, and deblurring have been designed by hand. Recently, methods for searching deep network architectures have been studied for other image-processing tasks such as image classification. This paper proposes a method of searching a deep network architecture for image super-resolution using reinforcement learning. The proposed method trains a controller RNN (recurrent neural network) that outputs network architectures using the REINFORCE algorithm, a kind of policy-gradient method, and finally searches for and designs a deep network architecture that performs super-resolution well. Image super-resolution implemented with the searched architecture achieved a peak signal-to-noise ratio (PSNR) of about 36.54 dB.

  • PDF
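The controller-plus-REINFORCE loop the abstract describes can be sketched in miniature. Everything below is illustrative: the two-choice search space, the `fake_psnr` reward, and the tabular softmax policy (standing in for the controller RNN) are assumptions for the sketch, not the paper's actual setup.

```python
import math
import random

random.seed(0)

# Hypothetical search space standing in for a full SR architecture
# (kernel size, channel width); the paper's controller emits a longer
# sequence of such choices.
SEARCH_SPACE = {"kernel": [3, 5, 7], "channels": [32, 64, 128]}

def fake_psnr(arch):
    # Toy stand-in for the validation-PSNR reward: in the paper, the
    # sampled network is trained and then evaluated on a validation set.
    return (30.0
            + 0.5 * SEARCH_SPACE["kernel"].index(arch["kernel"])
            + 0.8 * SEARCH_SPACE["channels"].index(arch["channels"]))

# Tabular softmax policy per choice (the paper uses a controller RNN;
# a table keeps this sketch dependency-free).
logits = {k: [0.0] * len(v) for k, v in SEARCH_SPACE.items()}

def sample(key):
    exps = [math.exp(x) for x in logits[key]]
    z = sum(exps)
    probs = [e / z for e in exps]
    r, acc = random.random(), 0.0
    for i, p in enumerate(probs):
        acc += p
        if r < acc:
            return i, probs
    return len(probs) - 1, probs

baseline, lr = None, 0.1
for _ in range(300):
    arch, picks = {}, []
    for key in SEARCH_SPACE:
        i, probs = sample(key)
        arch[key] = SEARCH_SPACE[key][i]
        picks.append((key, i, probs))
    reward = fake_psnr(arch)
    baseline = reward if baseline is None else 0.9 * baseline + 0.1 * reward
    adv = reward - baseline
    # REINFORCE: gradient of log-softmax w.r.t. logit_j is 1[j==i] - p_j.
    for key, i, probs in picks:
        for j, p in enumerate(probs):
            logits[key][j] += lr * adv * ((1.0 if j == i else 0.0) - p)

best = {k: SEARCH_SPACE[k][max(range(3), key=lambda j: logits[k][j])]
        for k in SEARCH_SPACE}
print(best)
```

With the advantage-weighted update, choices that raise the reward accumulate probability mass, which is the mechanism by which the controller learns to emit better architectures.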

Linkage Expansion in Linked Open Data Cloud using Link Policy (연결정책을 이용한 개방형 연결 데이터 클라우드에서의 연결성 확충)

  • Kim, Kwangmin;Sohn, Yonglak
    • Journal of KIISE / v.44 no.10 / pp.1045-1061 / 2017
  • This paper suggests a method to expand linkages in a Linked Open Data (LOD) cloud, a practical consequence of the semantic web. Contrary to initial expectations, the LOD cloud has not been used actively because of its lack of linkages. The current method of establishing links, which creates explicit links and attaches them to LODs, has difficulty reflecting changes in target LODs in a timely manner and maintaining the links periodically. Instead of attaching links, this paper suggests that each LOD prepare a link policy and publish it together with the LOD. The link policy specifies target LODs, predicate pairs, and similarity degrees for deciding whether to establish links. We implemented a system that performs in-depth searching through LODs using their link policies and published its APIs on Github. Experiments on the in-depth searching system with similarity degrees of 1.0 ~ 0.8 and a depth level of 4 produced searching results that include 91% ~ 98% of the trustworthy links and expand the triples by about 170%.
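The policy-driven link decision described above can be sketched as follows; the predicate pair, the 0.8 threshold, and the difflib-based string similarity are illustrative assumptions, not the paper's actual vocabulary or similarity measure.

```python
import difflib

# Hypothetical link policy: predicate pairs to compare between a source
# LOD and a target LOD, plus a similarity threshold for linking.
LINK_POLICY = {
    "target": "http://dbpedia.org",
    "predicate_pairs": [("foaf:name", "rdfs:label")],
    "similarity": 0.8,
}

def object_similarity(a, b):
    """Similarity in [0, 1] between two RDF object literals."""
    return difflib.SequenceMatcher(None, a.lower(), b.lower()).ratio()

def establish_link(source_entity, target_entity, policy):
    """Return True when every predicate pair in the policy meets the
    similarity threshold, i.e. the two entities are judged identical."""
    for src_pred, tgt_pred in policy["predicate_pairs"]:
        a = source_entity.get(src_pred)
        b = target_entity.get(tgt_pred)
        if a is None or b is None:
            return False
        if object_similarity(a, b) < policy["similarity"]:
            return False
    return True

src = {"foaf:name": "Seoul Metropolitan City"}
tgt = {"rdfs:label": "Seoul Metropolitan city"}
print(establish_link(src, tgt, LINK_POLICY))
```

Because the policy is evaluated at search time, a newly added target entity is linkable immediately, without waiting for an explicit link triple to be published.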

Crawling Algorithm Design for Deep Web Document Collection (심층 웹 문서 수집을 위한 크롤링 알고리즘 설계)

  • Won, Dong-Hyun;Kang, Yun-Jeong;Park, Hyuk-Gyu
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2022.10a / pp.367-369 / 2022
  • With the development of web technology, the web provides customized information that meets users' needs. Such information is returned according to an input form and the user's query, and web services whose information is difficult to reach with a search engine are called the deep web. The deep web contains more information than the surface web, but it is difficult to collect with general crawling, which only gathers the information present at the time of the visit. The deep web delivers server-side information to users by running script languages such as JavaScript in the browser. In this paper, we propose an algorithm that explores dynamically changing websites and collects information by analyzing their scripts for deep web collection. For the experiments, the scripts of the bulletin board of the Korea Centers for Disease Control and Prevention were analyzed.

  • PDF

Implementation of Policy based In-depth Searching for Identical Entities and Cleansing System in LOD Cloud (LOD 클라우드에서의 연결정책 기반 동일개체 심층검색 및 정제 시스템 구현)

  • Kim, Kwangmin;Sohn, Yonglak
    • Journal of Internet Computing and Services / v.19 no.3 / pp.67-77 / 2018
  • This paper suggests that each LOD establish its own link policy and publish it to the LOD cloud to provide identity among entities in different LODs. For specifying the link policy, we also propose a vocabulary set founded on the RDF model. We implemented the Policy-based In-depth Searching and Cleansing (PISC for short) system, which proceeds with in-depth searching across LODs by referencing their link policies; PISC has been published on Github. Because LODs participate in the LOD cloud voluntarily, the degree of entity identity needs to be evaluated. PISC therefore evaluates identities and cleanses the searched entities, confining them to those that exceed the user's criterion of entity identity level. As searching results, PISC provides an entity's detailed contents collected from diverse LODs, along with an ontology customized to the contents. A simulation of PISC was performed on five of DBpedia's LODs. We found that a similarity of 0.9 between the objects of source and target RDF triples provides an appropriate expansion ratio and inclusion ratio in the searching results. For sufficient identity of searched entities, three or more target LODs should be specified in the link policy.

Crawling algorithm design and experiment for automatic deep web document collection (심층 웹 문서 자동 수집을 위한 크롤링 알고리즘 설계 및 실험)

  • Kang, Yun-Jeong;Lee, Min-Hye;Won, Dong-Hyun
    • Journal of the Korea Institute of Information and Communication Engineering / v.27 no.1 / pp.1-7 / 2023
  • Deep web collection means entering a query into a search form and collecting the response results. The deep web is estimated to hold about 450 to 550 times more information than the statically constructed surface web. A static page does not show changed information until it is refreshed, whereas a dynamic page updates the necessary information in real time without reloading; a crawler, however, has difficulty accessing such updated information. There is therefore a need for a way to collect deep web information automatically with a crawler. This paper proposes a method of treating scripts as ordinary links; to this end, an algorithm that can use client-side scripts like regular URLs is proposed and tested. The proposed algorithm focuses on collecting web information through menu navigation and script execution instead of the usual method of entering data into search forms.
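Treating client scripts as regular URLs, as proposed above, can be illustrated with a minimal sketch; the goView(n) call and its mapping to /board/view?id=n are hypothetical stand-ins for the bulletin-board scripts analyzed in the paper.

```python
import re

# Sample markup like a dynamic bulletin board: items are reached by
# executing scripts rather than by following plain hrefs.
HTML = """
<a href="javascript:goView(101);">Notice 101</a>
<a href="#" onclick="goView(102); return false;">Notice 102</a>
<a href="/static/about.html">About</a>
"""

# Assumption for illustration: goView(n) fetches /board/view?id=n.
SCRIPT_PATTERN = re.compile(r"goView\((\d+)\)")

def script_links(html, base="/board/view?id="):
    """Treat client-side script calls as ordinary links by mapping each
    goView(n) call to the URL the script would request."""
    return [base + m for m in SCRIPT_PATTERN.findall(html)]

print(script_links(HTML))  # ['/board/view?id=101', '/board/view?id=102']
```

Once script calls are rewritten into fetchable URLs, an ordinary crawler frontier can enqueue them alongside static links, which is the core of the proposed collection loop.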

Technology Trends and Patents Analysis of Auger bit for Deep Cement Mixing (DCM) Method (심층혼합처리 공법용 오거비트의 기술동향 및 특허분석)

  • Min, Kyongnam;Lee, Dongwon;Lee, Jaewon;Kim, Keeseok;Yu, Jihyung;Jung, Chanmuk;Hoang, Truong Xuan;Kwon, Yong Kyu
    • The Journal of Engineering Geology / v.28 no.3 / pp.431-441 / 2018
  • To set the future research and development direction for auger bits, this study analyzed published patent trends for the Deep Cement Mixing (DCM) method in Korea, the USA, Japan, and Europe. The DCM method was first classified by wing shape and number of rods according to the technical scope, and second into eight types according to the type of screw and rotation axis. A total of 2,815 patents were retrieved, and 448 valid patents were selected through de-duplication and filtering. A portfolio analysis based on patent counts and growth stages identified the auger as a core technology with high growth potential, and where a patent barrier analysis finds patents similar to the core technology, the results provide basic data for developing design-around and differentiated technologies.

Deep Analysis on Index Terms Using Bayesian Inference Network (베이지안 추론망 기반 색인어의 심층 분석 방법)

  • Song, Sa-Kwang;Lee, Seungwoo;Jung, Hanmin
    • Annual Conference on Human and Language Technology / 2012.10a / pp.84-87 / 2012
  • In most search engines, the extraction of index terms and the assignment of their weights are important research topics that greatly affect search engine performance. In general, performance is improved by removing index terms that do not contribute positively, using a stoplist, or by emphasizing relatively important index terms such as keywords or technical terms. However, no technique has been presented that analyzes the influence of the individual index terms produced at each processing step of a search engine, such as word segmentation, morphological analysis, and stopword processing, and reflects that influence to improve performance. This study therefore introduces a methodology that quantifies the influence of the index terms generated at each processing step and classifies them as positive or negative, and shows that search engine performance can also be improved by adjusting index term weights on that basis.

  • PDF

Change Acceptable In-Depth Searching in LOD Cloud for Efficient Knowledge Expansion (효과적인 지식확장을 위한 LOD 클라우드에서의 변화수용적 심층검색)

  • Kim, Kwangmin;Sohn, Yonglak
    • Journal of Intelligence and Information Systems / v.24 no.2 / pp.171-193 / 2018
  • The LOD (Linked Open Data) cloud is a practical implementation of the semantic web. We suggest a new method that conveniently provides identity links in the LOD cloud and allows changes in LODs to be reflected in searching results without omissions. An LOD publishes detailed descriptions of entities in RDF triple form; an RDF triple is composed of a subject, a predicate, and an object. Links in the LOD cloud, named identity links, are realized by asserting that entities of different RDF triples are identical. Currently, an identity link is provided by explicitly creating a link triple that associates its subject and object with the source and target entities, and appending the link triple to the LOD. With identity links, knowledge obtained from one LOD can be expanded with knowledge from other LODs; providing this opportunity for knowledge expansion is the goal of the LOD cloud. Appending link triples, however, requires discovering identity links between entities one by one, a serious difficulty given the enormous scale of LODs, and newly added entities cannot be reflected in searching results until identity links pointing to them are serialized and published. Instead of creating enormous numbers of identity links, we propose that each LOD prepare its own link policy. A link policy specifies a set of target LODs to link to and the constraints necessary to discover identity links to entities in those LODs. By referencing link policies while searching, it becomes possible to access newly added entities and reflect them in searching results without omissions. A link policy specifies a set of predicate pairs for discovering identity between associated entities in the source and target LODs; for the link policy specification, we suggest a set of vocabularies that conform to RDFS and OWL.
Identity between entities is evaluated according to the similarity of the source and target entities' objects associated with the predicate pairs in the link policy. We implemented a system called the Change Acceptable In-Depth Searching System (CAIDS). With CAIDS, a user's searching request starts from the depth_0 LOD, i.e. surface searching. Referencing the link policies of LODs, CAIDS proceeds with in-depth searching into the LODs of the next depths, using explicit link triples as well to supplement the identity links derived from the link policies. Following the identity links, CAIDS's in-depth searching progresses: the content of an entity obtained from the depth_0 LOD expands with the contents of entities in other LODs that have been discovered to be identical to it. Expanding the content of a depth_0 LOD entity without the user being aware of those other LODs is the implementation of knowledge expansion, the goal of the LOD cloud; the more identity links in the cloud, the wider the content expansion. We have thus suggested a new way to create identity links abundantly and supply them to the LOD cloud. Experiments with CAIDS were performed against the DBpedia LODs of Korea, France, Italy, Spain, and Portugal. They show that CAIDS provides appropriate expansion and inclusion ratios as long as the degree of similarity between source and target objects is 0.8 ~ 0.9. The expansion ratio, for each depth, is the ratio of the entities discovered at that depth to the entities of the depth_0 LOD; the inclusion ratio, for each depth, is the ratio of the entities discovered only with explicit links to the entities discovered only with link policies. With similarity degrees under 0.8, expansion becomes excessive and contents become distorted, while a similarity degree of 0.8 ~ 0.9 also yields an appropriate number of searched RDF triples. The experiments also evaluated the confidence degree of the contents expanded by in-depth searching.
The confidence degree of content is directly coupled with the identity ratio of an entity, which means its degree of identity to the entity of the depth_0 LOD. The identity ratio of an entity is obtained by multiplying the source LOD's confidence by the source entity's identity ratio. By tracing the identity links in advance, an LOD's confidence is evaluated according to the number of identity links incoming to its entities. In evaluating the identity ratio, the concept of identity agreement, in which multiple identity links head to a common entity, is considered. With identity agreement, the experimental results show that the identity ratio decreases as depth deepens but rebounds as the depth deepens further. For each entity, as the number of identity links increases, the identity ratio rebounds earlier and finally reaches 1. We found that more than 8 identity links per entity would lead users to trust the expanded contents. The proposed link-policy-based in-depth searching method is expected to contribute abundant identity links to the LOD cloud.
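The identity-ratio arithmetic described above can be sketched numerically. The multiplication of source confidence by source identity ratio is taken from the abstract; the noisy-OR combination used here for identity agreement is an assumption about how agreeing links are aggregated, not the paper's stated formula.

```python
# Each incoming identity link contributes the product of the source
# LOD's confidence and the source entity's own identity ratio.
def link_identity(source_confidence, source_identity_ratio):
    return source_confidence * source_identity_ratio

# Identity agreement (assumption: agreeing links are treated as
# independent evidence and combined as a noisy-OR, so more links
# push the ratio toward 1).
def agreed_identity(link_ratios):
    p = 1.0
    for r in link_ratios:
        p *= (1.0 - r)
    return 1.0 - p

# A depth_0 entity has identity ratio 1.0; one depth_1 link through an
# LOD of confidence 0.7 yields 0.7, while eight agreeing links of that
# strength rebound the ratio back toward 1.
one = link_identity(0.7, 1.0)
eight = agreed_identity([one] * 8)
print(round(one, 3), round(eight, 3))
```

Under this aggregation, single-link paths decay multiplicatively with depth while many agreeing links recover the ratio, which matches the decrease-then-rebound behavior the experiments report.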

In-depth Diagnosis: Software for Editing (심층 진단-편집용 소프트웨어)

  • Jang, Hong-Il
    • Printing Korea (프린팅코리아) / s.42 / pp.158-161 / 2005
  • With Google strengthening its search engine and Linux, armed with the powerful weapon of open-source software, threatening Microsoft's dominance, a major upheaval is expected in the domestic software sector as well. However, with the domestic market already near saturation, consumers unable to shake their one-sided preference for foreign products, and shortcomings in maintenance and technical capability, long-term adjustments are needed. This article examines how the Ministry of Information and Communication, companies, and users will untangle the software market now under scrutiny.

  • PDF

An Exploratory Investigation on Multimedia Information Needs and Searching Behavior among College Students (멀티미디어 정보요구와 검색행태에 관한 탐색적 연구)

  • Chung, Eun-Kyung
    • Journal of the Korean Society for Library and Information Science / v.46 no.3 / pp.251-270 / 2012
  • Multimedia needs and searching have become important in everyday life, especially for the younger generation. The characteristics of multimedia needs and searching behaviors differ from those of textual information needs and searching behaviors in a wide variety of ways. By interviewing and observing the multimedia needs and searching behaviors of college students from 20 areas in Seoul, this study aims to improve the understanding of users' multimedia needs and of how users search for multimedia. The findings are presented in terms of searching sources, multimedia needs, relevance criteria, and searching barriers. The primary searching sources for multimedia are found to be Naver and Google, with distinguishing features depending on the individual multimedia types. With multimedia needs categorized as generic, specific, and abstract, most needs are classified as specific rather than generic, though differences exist across types of multimedia. In addition, the relevance criteria and searching barriers reflect the characteristics of the individual multimedia types. The findings demonstrate that distinct indexing and searching environments for each type of multimedia may be necessary to improve the quality of multimedia searching.