• Title/Summary/Keyword: Search software

Search results: 782

HIGH-SPEED SOFTWARE FRAME SYNCHRONIZER USING SSE2 TECHNOLOGY

  • Koo, In-Hoi;Ahn, Sang-Il;Kim, Tae-Hoon;Sakong, Young-Ho
    • Proceedings of the KSRS Conference / 2007.10a / pp.522-525 / 2007
  • Frame synchronization is applied not only to digital data transmission, for synchronizing data between transmitter and receiver, but also to satellite data communication. When high-resolution, high-volume satellite image data are transmitted, either a hardware frame synchronizer for real-time processing or a software frame synchronizer for post-processing is used. Hardware processes at high speed but may lose data during the search for frame synchronization; software avoids data loss but is comparatively slow. In this paper, a Pending Buffer concept is proposed to cope with data loss according to the processing status of frame synchronization, together with a high-speed frame synchronization algorithm that combines a bit-threshold search, pattern-search techniques, and SIMD instructions.
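
The bit-threshold pattern search maps naturally onto vectorized popcount operations. Below is a minimal NumPy sketch of the idea under stated assumptions: the 32-bit CCSDS-style sync marker 0x1ACFFC1D and the error threshold are illustrative, the paper's own implementation uses SSE2 intrinsics, and the search here is byte-aligned for brevity (a full bit-level search would also test the seven intermediate bit shifts).

```python
import numpy as np

# Hypothetical 32-bit sync marker (the CCSDS attached sync marker) and
# bit-error threshold; the paper's actual values are not in the abstract.
SYNC_MARKER = 0x1ACFFC1D
BIT_ERROR_THRESHOLD = 3

def find_sync_offsets(data: bytes) -> np.ndarray:
    """Return byte offsets whose 32-bit window matches the sync marker
    within the bit-error threshold (a vectorized pattern search; the
    paper does the same with SSE2 intrinsics)."""
    buf = np.frombuffer(data, dtype=np.uint8).astype(np.uint32)
    if buf.size < 4:
        return np.empty(0, dtype=np.int64)
    # Assemble every byte-aligned 32-bit window in one shot.
    words = (buf[:-3] << 24) | (buf[1:-2] << 16) | (buf[2:-1] << 8) | buf[3:]
    # XOR against the marker, then count differing bits per window.
    diff = (words ^ np.uint32(SYNC_MARKER)).view(np.uint8).reshape(-1, 4)
    bit_errors = np.unpackbits(diff, axis=1).sum(axis=1)
    return np.nonzero(bit_errors <= BIT_ERROR_THRESHOLD)[0]

# Example: a marker embedded at offset 5 of an otherwise zeroed stream.
stream = b"\x00" * 5 + (0x1ACFFC1D).to_bytes(4, "big") + b"\x00" * 7
print(find_sync_offsets(stream))  # -> [5]
```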


Improving Accuracy over Parameter through Channel Pruning based on Neural Architecture Search in Object Detection (물체 탐지에서 Neural Architecture Search 기반 Channel Pruning 을 통한 Parameter 수 대비 정확도 개선)

  • Jaehyeon Roh;Seunghyun Yu;Seungwook Son;Yongwha Chung
    • Proceedings of the Korea Information Processing Society Conference / 2023.11a / pp.512-513 / 2023
  • In CNN-based deep learning, object detection models use many parameters to raise detection accuracy. Using many parameters raises the minimum hardware requirements and lowers processing speed, so various pruning techniques are used to reduce the parameter count with minimal loss of accuracy. In this study, we used the Artificial Bee Colony (ABC) algorithm, a Neural Architecture Search (NAS)-based channel pruning method. Unlike previous NAS-based channel pruning papers, which experimented only on classification tasks, we applied NAS-based channel pruning to an object detection task and confirmed that accuracy per parameter count improves compared with conventional uniform pruning.
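
A rough sketch of how an ABC-style search over per-layer pruning ratios might look is given below. Everything in it is a mock illustration rather than the paper's implementation: the layer sizes and the fitness function are invented stand-ins for pruning and re-evaluating a detector, and the onlooker phase of ABC is omitted for brevity.

```python
import random

LAYERS = [64, 128, 256, 512]      # assumed channel counts per conv layer
FOOD_SOURCES, ITERATIONS, LIMIT = 10, 50, 5

def fitness(ratios):
    """Mock stand-in for 'accuracy per parameter count'; the real search
    would prune the detector with these ratios and evaluate it."""
    params = sum(c * (1.0 - r) for c, r in zip(LAYERS, ratios))
    accuracy = 1.0 - 0.5 * max(ratios)        # fake accuracy model
    return accuracy / params

def neighbor(solution):
    """ABC neighborhood move: perturb one randomly chosen layer ratio."""
    i = random.randrange(len(solution))
    out = list(solution)
    out[i] = min(0.9, max(0.0, out[i] + random.uniform(-0.1, 0.1)))
    return out

sources = [[random.uniform(0.0, 0.9) for _ in LAYERS] for _ in range(FOOD_SOURCES)]
trials = [0] * FOOD_SOURCES
for _ in range(ITERATIONS):
    for i in range(FOOD_SOURCES):             # employed-bee phase
        candidate = neighbor(sources[i])
        if fitness(candidate) > fitness(sources[i]):
            sources[i], trials[i] = candidate, 0
        else:
            trials[i] += 1
    for i in range(FOOD_SOURCES):             # scout-bee phase
        if trials[i] > LIMIT:                 # abandon stale food sources
            sources[i] = [random.uniform(0.0, 0.9) for _ in LAYERS]
            trials[i] = 0

best = max(sources, key=fitness)
print("per-layer pruning ratios:", [round(r, 2) for r in best])
```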

Multi-level Scheduling Algorithm Based on Storm

  • Wang, Jie;Hang, Siguang;Liu, Jiwei;Chen, Weihao;Hou, Gang
    • KSII Transactions on Internet and Information Systems (TIIS) / v.10 no.3 / pp.1091-1110 / 2016
  • Hybrid deployment in current cloud data centers combines online and offline services, which improves the utilization of cluster resources. However, the performance of the cluster is often degraded by the online services in a hybrid deployment environment. To improve the response time of online services (e.g., a search engine), an effective scheduling algorithm based on Storm is proposed. At the component level, the algorithm dispatches the components with greater influence to the best-performing nodes; inside a component, a reasonable resource-allocation strategy is used. By searching a compressed index first and then filtering against the complete index, the execution speed of the component is improved with similar accuracy. Experiments show that the algorithm can guarantee a search accuracy of 95.94% while increasing response speed by 68.03%.
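
The coarse-then-fine lookup can be illustrated in a few lines: query a small compressed index to obtain candidates, then verify only those candidates against the complete index. The index layouts below are invented for illustration and are not the paper's format.

```python
# Two-phase lookup: coarse candidates from a compressed index,
# then exact filtering against the complete index.
compressed_index = {            # term -> candidate document ids (coarse)
    "storm": {1, 2, 7},
    "scheduler": {2, 5, 7},
}
complete_index = {              # doc id -> full token set (exact)
    1: {"storm", "tutorial"},
    2: {"storm", "scheduler", "cluster"},
    5: {"scheduler"},
    7: {"storm", "scheduler"},
}

def search(terms):
    # Phase 1: intersect compressed postings to get a small candidate set.
    candidates = set.intersection(*(compressed_index.get(t, set()) for t in terms))
    # Phase 2: verify only the candidates against the complete index.
    return sorted(d for d in candidates if all(t in complete_index[d] for t in terms))

print(search(["storm", "scheduler"]))  # -> [2, 7]
```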

AI Chatbot Providing Real-Time Public Transportation and Route Information

  • Lee, So Young;Kim, Hye Min;Lee, Si Hyun;Ha, Jung Hyun;Lee, Soowon
    • Journal of the Korea Society of Computer and Information / v.24 no.7 / pp.9-17 / 2019
  • As artificial intelligence technology has developed in recent years, research on chatbots that provide information and content desired by users through a conversational interface has become active. Since chatbots require a variety of natural language processing technologies and domain knowledge, including the handling of typos and slang, it remains difficult to develop chatbots that can carry on daily conversation in a general-purpose domain. In this study, we propose an artificial intelligence chatbot that provides real-time public transportation and route information. The proposed chatbot has the advantage that it can understand the intention and requirements of the user through conversation on a messenger platform, without a separate map application.
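
As a toy illustration of intent understanding for transit queries, the sketch below extracts an intent and its slots with hand-written patterns; the paper's chatbot operates on a messenger platform with far richer NLP, so these patterns and slot names are purely hypothetical.

```python
import re

# Toy intent/slot patterns; purely illustrative, not the paper's pipeline.
PATTERNS = [
    ("route", re.compile(r"from\s+(?P<src>\w+)\s+to\s+(?P<dst>\w+)", re.IGNORECASE)),
    ("arrival", re.compile(r"when\b.*\bbus\s+(?P<line>\w+)", re.IGNORECASE)),
]

def parse(utterance):
    """Return (intent, slots) for the first matching pattern."""
    for intent, pattern in PATTERNS:
        match = pattern.search(utterance)
        if match:
            return intent, match.groupdict()
    return "fallback", {}

print(parse("How do I get from CityHall to Airport?"))
# -> ('route', {'src': 'CityHall', 'dst': 'Airport'})
print(parse("When does bus 470 arrive?"))
# -> ('arrival', {'line': '470'})
```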

An Analysis on the Functions of the Next Generation Library Catalog: With a Focus on SearchWorks (차세대 도서관 목록의 제반 기능에 관한 분석 - SearchWorks를 중심으로 -)

  • Yoon, Cheong-Ok
    • Journal of the Korean Society for Library and Information Science / v.47 no.4 / pp.5-23 / 2013
  • The purpose of this study is to examine the functions and features of SearchWorks, developed as the next-generation library catalog of the Stanford University Libraries. It was designed to fully reflect the needs and search behaviors of users and is built on Blacklight, an open-source software package. Its main features do not differ from the standard functions supplied by commercial next-generation library catalog packages, and continuous improvement, including the addition and expansion of useful functions and the removal of unnecessary ones, has been observed since the introduction of a beta version in 2010.

Design and Implementation of a Search Engine based on Apache Spark (아파치 스파크 기반 검색엔진의 설계 및 구현)

  • Park, Ki-Sung;Choi, Jae-Hyun;Kim, Jong-Bae;Park, Jae-Won
    • Journal of the Korea Institute of Information and Communication Engineering / v.21 no.1 / pp.17-28 / 2017
  • Recently, research on data has been actively conducted because the value of data has become more apparent. Web crawlers, programs for collecting data, have drawn attention because they can be applied in various fields. A web crawler can be defined as a tool that analyzes web pages and collects URLs by traversing web servers in an automated manner. For processing big data, distributed web crawlers based on Hadoop MapReduce are widely used, but they are difficult to use and constrained in performance. Apache Spark, an in-memory computing platform, is an alternative to MapReduce. A search engine, one of the main applications of a web crawler, displays information matching the user's keywords from the data the crawler has gathered. If a search engine adopts a Spark-based web crawler instead of a traditional MapReduce-based one, data collection becomes more rapid.
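
The indexing side of such a pipeline can be sketched with the standard PySpark API: pages already fetched by the crawler are tokenized in parallel and reduced into an inverted index. The input pairs and URLs below are assumptions for illustration; the fetching step itself is omitted.

```python
from pyspark import SparkContext

sc = SparkContext("local[*]", "mini-search-index")

# Assumed input: (url, page_text) pairs already fetched by the crawler.
pages = sc.parallelize([
    ("http://example.com/a", "spark makes the index build fast"),
    ("http://example.com/b", "crawlers feed the search index"),
])

# word -> sorted list of urls containing it (a tiny inverted index)
inverted = (pages
            .flatMap(lambda kv: [(w, kv[0]) for w in set(kv[1].lower().split())])
            .groupByKey()
            .mapValues(sorted))

for word, urls in sorted(inverted.collect()):
    print(word, "->", urls)
sc.stop()
```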

A Search Method for Components Based-on XML Component Specification (XML 컴포넌트 명세서 기반의 컴포넌트 검색 기법)

  • Park, Seo-Young;Shin, Yoeng-Gil;Wu, Chi-Su
    • Journal of KIISE: Software and Applications / v.27 no.2 / pp.180-192 / 2000
  • Recently, component technology has played a central role in software reuse. It has shifted code-based reuse toward binary-code-based reuse, because components can easily be combined into software under development purely through their interfaces. Since components and component users have increased rapidly, users need to search for the most appropriate components among the enormous number of components on the Internet, and it is desirable to use web-document-style specifications for component specifications there. This paper proposes using XML component specifications instead of HTML specifications, because HTML cannot represent the semantics of contexts. We also propose an XML context-based search method built on XML component specifications. In their queries, component users use contexts for the component properties and terms for the values of those properties. The index structure for the context-based search method is an inverted file of term-context-component-specification entries. For the convenience of users, not only the XML context-based search method but also a variety of search methods built on it, such as keyword search, faceted search, and browsing, are provided. The search engine uses a three-layer architecture, with an interface layer, a query-expansion layer, and an XML search-engine layer, for an efficient index scheme. In this paper, an XML DTD (Document Type Definition) for component specifications is defined, and experimental results comparing the search performance of XML with HTML are discussed.
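
The term-context-component inverted file can be illustrated with a small sketch: each term is indexed under the XML element (context) it appears in, so a query can qualify a term with a property. The element names and component IDs below are placeholders; the paper defines its own DTD.

```python
import xml.etree.ElementTree as ET
from collections import defaultdict

# Hypothetical component specs; the paper defines its own DTD,
# so these element names are placeholders.
SPECS = {
    "comp-1": "<component><name>Sorter</name><platform>JavaBeans</platform></component>",
    "comp-2": "<component><name>Parser</name><platform>COM</platform></component>",
}

# Inverted file keyed by (context, term) -> component ids, where the
# 'context' is the XML element (property) in which the term appears.
index = defaultdict(set)
for cid, xml_text in SPECS.items():
    for elem in ET.fromstring(xml_text):
        for term in elem.text.lower().split():
            index[(elem.tag, term)].add(cid)

def context_search(context, term):
    """Find components whose given property contains the term."""
    return sorted(index.get((context, term.lower()), set()))

print(context_search("platform", "JavaBeans"))  # -> ['comp-1']
```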


A VIDEO GEOGRAPHIC INFORMATION SYSTEM FOR SUPPORTING BI-DIRECTIONAL SEARCH FOR VIDEO DATA AND GEOGRAPHIC INFORMATION

  • Yoo, Jea-Jun;Joo, In-Hak;Park, Jong-Huyn;Lee, Jong-Hun
    • Proceedings of the KSRS Conference / 2002.10a / pp.151-156 / 2002
  • Recently, as geographic information systems (GIS), which search and manage geographic information, have come into wider use, there are growing requests for systems that can search and display more realistic information. In response, video geographic information systems, which connect video data obtained from cameras with geographic information and display that video as-is, are becoming more popular. However, because most existing video geographic information systems treat video data as an attribute of geographic information, or use simple one-way links from geographic information to video data, they support only the display of video data reached by searching geographic information. In this paper, we design and implement a video geographic information system that connects video data with geographic information and supports bi-directional search: searching geographic information through video data and searching video data through geographic information. To do this, we 1) propose an ER data model to represent the connection information relating video data and geographic information, 2) propose a process to extract and construct that connection information from video data and geographic information, and 3) present a component-based system architecture for organizing the video geographic information system.
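
A toy sketch of the bi-directional connection information follows: each link row ties a video segment to a geographic feature, and two maps support lookup in either direction. The schema is illustrative and far simpler than the paper's ER model.

```python
from collections import defaultdict

# Connection information rows: (video_id, start_frame, end_frame, feature_id).
# The paper's ER model carries more attributes; this is a toy schema.
links = [
    ("clip-01", 0, 120, "road-77"),
    ("clip-01", 121, 300, "bridge-3"),
    ("clip-02", 0, 90, "road-77"),
]

video_to_geo = defaultdict(list)   # video segment -> geographic features
geo_to_video = defaultdict(list)   # feature -> video segments
for vid, start, end, feat in links:
    video_to_geo[vid].append((start, end, feat))
    geo_to_video[feat].append((vid, start, end))

def features_at(vid, frame):
    """Geographic features visible at a given frame of a video."""
    return [f for s, e, f in video_to_geo[vid] if s <= frame <= e]

print(features_at("clip-01", 150))   # -> ['bridge-3']
print(geo_to_video["road-77"])       # all segments showing feature road-77
```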


A Fast Intra Skip Detection Algorithm for H.264/AVC Video Encoding

  • Kim, Byung-Gyu;Kim, Jong-Ho;Cho, Chang-Sik
    • ETRI Journal / v.28 no.6 / pp.721-731 / 2006
  • A fast intra skip detection algorithm based on the rate-distortion (RD) cost of an inter frame (P-slices) is proposed for H.264/AVC video encoding. In the H.264/AVC coding standard, a robust rate-distortion optimization technique is used to select the best coding mode and reference frame for each macroblock (MB). There are three types of intra prediction, according to profile: 16×16 and 4×4 intra prediction for luminance and 8×8 intra prediction for chroma. For the high profile, 8×8 intra prediction has been added for luminance. The 4×4 prediction mode has 9 prediction directions, with 4 directions for 16×16 and 8×8 luma and for 8×8 chrominance. In addition to the inter mode search procedure, the intra mode search significantly increases the complexity and computational load of an inter frame. To reduce the computational load of the intra mode search in inter frames, the RD costs of the MBs neighboring the current MB are used, and we propose an adaptive thresholding scheme for intra skip extraction. We verified the performance of the proposed scheme through comparative analysis of experimental results using the joint model reference software. The overall encoding time was reduced by up to 32% for the IPPP sequence type and 35% for the IBBPBBP sequence type.
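
The decision logic can be shown in miniature: derive a threshold from the RD costs of already-coded neighboring MBs and skip the intra mode search when the best inter RD cost is already below it. The simple averaging and the scaling factor below are assumptions; the paper derives its own adaptive threshold.

```python
def intra_skip(inter_rd_cost, neighbor_rd_costs, alpha=1.0):
    """Skip the intra mode search for a P-slice macroblock when the best
    inter RD cost is already below an adaptive threshold built from the
    neighboring MBs. alpha is an assumed tuning factor, not the paper's
    derived value."""
    if not neighbor_rd_costs:          # no coded neighbors: search intra
        return False
    threshold = alpha * sum(neighbor_rd_costs) / len(neighbor_rd_costs)
    return inter_rd_cost < threshold

# Example: neighbors (left, top, top-right) average 1200; inter cost 950.
print(intra_skip(950.0, [1100.0, 1250.0, 1250.0]))  # -> True, skip intra
```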


Software Effort Estimation Using Artificial Intelligence Approaches (인공지능 접근방법에 의한 S/W 공수예측)

  • Jun, Eung-Sup
    • Proceedings of the Korea Society of IT Services Conference / 2003.11a / pp.616-623 / 2003
  • Since the computing environment changes very rapidly, estimating software effort is difficult: it is not easy to collect a sufficient number of relevant cases from historical data. If we pinpoint the cases too narrowly, the number of cases becomes too small; if we adopt too many cases, relevance declines. In this paper we therefore attempt to balance the number of cases against their relevance. Since much research on software effort estimation has shown that neural network models perform at least as well as other approaches, we selected the neural network model as the basic estimator. We propose a search method that finds the right level of relevant cases for the neural network model. For the selected case set, eliminating qualitative input factors with identical values can reduce the scale of the neural network model. Since there is a multitude of possible case sets, we need to search for the optimal reduced neural network model and its corresponding case set. To find a quasi-optimal model within the hierarchy of reduced neural network models, we adopted the beam search technique and devised the Case-Set Selection Algorithm, which can be adopted in case-adaptive software effort estimation systems.
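
A schematic of the beam search over case sets is sketched below: each state drops one more qualitative input factor, and only the top-scoring states survive each level. The factor names and the scoring function are placeholders for what the paper actually does, namely training and validating a reduced neural network per candidate case set.

```python
FACTORS = ["language", "team_size_band", "domain", "platform"]
BEAM_WIDTH = 2

def score(kept_factors):
    """Placeholder: in the paper, each candidate case set trains a reduced
    neural network estimator and is scored on estimation accuracy."""
    return -abs(len(kept_factors) - 2)        # mock: pretend 2 factors is ideal

# A state is the set of qualitative factors still used to select cases.
beam = [frozenset(FACTORS)]
best = max(beam, key=score)
while beam:
    children = {state - {f} for state in beam for f in state}  # drop one factor
    if not children:
        break
    beam = sorted(children, key=score, reverse=True)[:BEAM_WIDTH]
    best = max([best] + beam, key=score)

print("selected factors:", sorted(best))
```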
