• Title/Summary/Keyword: Locating system

Highly efficient CMP surveying with ground-penetrating radar utilising real-time kinematic GPS (실시간 GPS를 이용한 고효율 GPR CMP 탐사)

  • Onishi Kyosuke;Yokota Toshiyuki;Maekawa Satoshi;Toshioka Tetsuma;Rokugawa Shuichi
    • Geophysics and Geophysical Exploration
    • /
    • v.8 no.1
    • /
    • pp.59-66
    • /
    • 2005
  • The main purpose of this paper is to describe a highly efficient common mid-point (CMP) data acquisition method for ground-penetrating radar (GPR) surveying, which is intended to widen the application of GPR. The most important innovation for increasing the efficiency of CMP data acquisition is continuous monitoring of the GPR antenna positions using a real-time kinematic Global Positioning System (RTK-GPS). Survey time efficiency is improved because the automatic antenna locating system that we propose frees us from the most time-consuming process: deployment of the antennas at specified positions. Numerical experiments predicted that the data density and the CMP fold would be increased by the improved efficiency of data acquisition, resulting in improved signal-to-noise ratios in the data. A field experiment confirmed this prediction. The proposed method makes GPR surveys using the CMP method more practical and accessible. Furthermore, the method has the potential to supply detailed groundwater information, because the spatially dense dielectric constant distribution obtained with the CMP method we describe can be converted into a dense distribution of physical values closely related to groundwater properties such as water saturation.
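The conversion chain the abstract alludes to, from a CMP-derived velocity to a dielectric constant and on to a groundwater-related quantity, can be sketched as follows. This is a minimal illustration using the widely cited Topp et al. (1980) empirical relation for volumetric water content; the paper's actual petrophysical model may differ, and the velocity value below is hypothetical.

```python
C_LIGHT = 0.3  # speed of light in m/ns

def dielectric_from_velocity(v):
    """CMP-derived EM wave velocity (m/ns) -> relative dielectric constant."""
    return (C_LIGHT / v) ** 2

def topp_water_content(k):
    """Topp et al. (1980) empirical fit: dielectric constant -> volumetric
    water content (dimensionless). One common petrophysical choice; the
    paper may use a different relation."""
    return -5.3e-2 + 2.92e-2 * k - 5.5e-4 * k ** 2 + 4.3e-6 * k ** 3

# Hypothetical example: a velocity picked from CMP semblance analysis.
v = 0.06                         # m/ns, typical of a wet soil
k = dielectric_from_velocity(v)  # 25.0
theta = topp_water_content(k)    # about 0.40, i.e. near saturation
```

A spatially dense set of such velocity picks, as the RTK-GPS acquisition provides, yields a correspondingly dense water-content map.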

Ontology-based User Customized Search Service Considering User Intention (온톨로지 기반의 사용자 의도를 고려한 맞춤형 검색 서비스)

  • Kim, Sukyoung;Kim, Gunwoo
    • Journal of Intelligence and Information Systems
    • /
    • v.18 no.4
    • /
    • pp.129-143
    • /
    • 2012
  • Recently, the rapid progress of standardized web technologies and the proliferation of web users worldwide have brought an explosive increase in the production and consumption of information documents on the web. In addition, most companies produce, share, and manage a huge number of information documents needed to perform their businesses. They also collect, store, and manage, at their own discretion, many web documents published on the web. Along with this increase in the documents that companies must manage, the need for a solution that locates information documents accurately among a huge number of information sources has grown, and to satisfy it the search engine solution market is expanding. The most important function a search engine provides is locating accurate information documents within huge information sources. The major metric for evaluating the accuracy of a search engine is relevance, which consists of two measures: precision and recall. Precision is a measure of exactness, that is, what percentage of the information retrieved as true answers actually is such, whereas recall is a measure of completeness, that is, what percentage of the true answers are retrieved. These two measures are weighted differently according to the applied domain. If we need to search exhaustively, as with patent documents and research papers, it is better to increase recall. On the other hand, when the amount of information is small, it is better to increase precision. Most existing web search engines use a keyword search method that returns web documents containing the keywords entered by a user. This method has the virtue of locating all matching web documents quickly, even when many search words are entered.
However, it has a fundamental limitation: it does not consider the search intention of the user, and so retrieves irrelevant results along with relevant ones. It therefore takes additional time and effort to sort the relevant results out of everything a search engine returns. That is, the keyword search method can increase recall, but it makes it difficult to locate the web documents a user actually wants, because it provides no means of understanding the user's intention and reflecting it in the search process. This research therefore suggests a new method that combines an ontology-based search solution with the core search functionality of existing search engine solutions. The method enables a search engine to provide optimal search results by inferring the search intention of the user. To that end, we build an ontology containing the concepts, and the relationships among them, in a specific domain. The ontology is used to infer synonyms of the search keywords entered by a user, so that the user's search intention is reflected in the search process more actively than in existing search engines. Based on the proposed method, we implement a prototype search system and test it in the patent domain, where we experiment on retrieving documents relevant to a patent. The experiment shows that our system increases both recall and precision, and improves search productivity through an improved user interface that lets a user interact with the search system effectively. In future research, we will validate the performance of our prototype by comparing it with other search engine solutions, and will extend the applied domain to other information search settings such as portals.
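The precision/recall definitions in the abstract can be made concrete with a small sketch; the document IDs below are invented for illustration.

```python
def precision_recall(retrieved, relevant):
    """Relevance measures as defined in the abstract:
    precision = exactness, recall = completeness."""
    retrieved, relevant = set(retrieved), set(relevant)
    tp = len(retrieved & relevant)  # true answers actually retrieved
    precision = tp / len(retrieved) if retrieved else 0.0
    recall = tp / len(relevant) if relevant else 0.0
    return precision, recall

# Invented IDs: 4 results returned, 3 documents actually relevant.
p, r = precision_recall({"d1", "d2", "d3", "d4"}, {"d1", "d2", "d5"})
# p = 0.5 (2 of 4 results are relevant), r = 2/3 (2 of 3 relevant docs found)
```

The trade-off described in the abstract amounts to tuning a system toward one measure or the other depending on whether exhaustiveness (patents, papers) or exactness (small collections) matters more.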

Evaluation on the Accuracy of Targeting Error Correction Through the Application of Target Locating System in Robotic CyberKnife (로봇 사이버나이프에서 위치인식시스템을 이용한 Targeting Error값 보정의 정확성 평가)

  • Jeong, Young-Joon;Jung, Jae-Hong;Lim, Kwang-Chae;Cho, Eun-Ju
    • The Journal of Korean Society for Radiation Therapy
    • /
    • v.21 no.1
    • /
    • pp.1-7
    • /
    • 2009
  • Purpose: The purpose is to evaluate the accuracy of correcting the targeting error through the Target Locating System (TLS) for the reference-point location error that arises from the movement or motion of the patient during treatment with the CyberKnife. Materials and Methods: Gafchromic MD-55 film was inserted into a head-and-neck phantom to analyze the targeting accuracy, and the 6 MV X-ray beam of the CyberKnife (CyberKnife Robotic Radiosurgery System G4, Accuray, US) was delivered. The End-to-End (E2E) program provided by Accuray was used to analyze the targeting accuracy. To compute the targeting error, the test was carried out with films irradiated 12 times while keeping the displacement within 0±0.2 mm along x, y, z from the reference point and the angle within 0±0.2° in roll, pitch, and yaw, and then with films irradiated 6 times with intentional movement applied. The correlation between the average values of the reference film and the test film was analyzed with an independent-samples t-test. In addition, the consistency of the dose distributions was quantified and compared by the gamma-index method (dose difference: 3%), varying the distance to agreement (DTA) over 1 mm, 1.5 mm, and 2 mm. Results: The E2E test showed that the average error of the reference film was 0.405 mm with a standard deviation of 0.069 mm. The average error of the test film was 0.413 mm with a standard deviation of 0.121 mm. The independent-samples t-test on the two averages gave a significance probability of P=0.836 (confidence level: 95%).
Comparing the dose-distribution consistency at DTAs of 1 mm, 1.5 mm, and 2 mm, the average agreement of the axial reference film was 95.04%, 97.56%, and 98.13%, respectively, at 3,314 locations, and that of the sagittal reference film was 95.47%, 97.68%, and 98.47%. For the test film, the average agreement of the axial film was 96.38%, 97.57%, and 98.04%, respectively, at 3,323 locations, and that of the sagittal film was 95.50%, 97.87%, and 98.36%. Conclusion: The robotic CyberKnife traces and compensates in real time for the reference-point location error caused by patient motion or movement during treatment, providing dose-distribution consistency above 95% and a targeting error below 1 mm.
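The gamma-index criterion used in the abstract (3% dose difference, DTA of 1 to 2 mm) can be illustrated with a simplified 1-D sketch. Clinical implementations work on 2-D or 3-D dose grids with interpolation, so this is only a schematic of the pass/fail rule, not the E2E analysis itself; the positions and doses in the usage example are invented.

```python
import math

def gamma_index_1d(ref_pos, ref_dose, eval_pos, eval_dose,
                   dta_mm=1.0, dd_frac=0.03):
    """1-D gamma index: for each reference point, take the minimum combined
    distance/dose discrepancy over all evaluated points; a point passes
    when gamma <= 1."""
    d_max = max(ref_dose)  # dose difference is normalised to the maximum dose
    gammas = []
    for xr, dr in zip(ref_pos, ref_dose):
        g = min(
            math.sqrt(((xr - xe) / dta_mm) ** 2
                      + ((dr - de) / (dd_frac * d_max)) ** 2)
            for xe, de in zip(eval_pos, eval_dose)
        )
        gammas.append(g)
    passing = sum(g <= 1.0 for g in gammas) / len(gammas)
    return gammas, passing

# Identical distributions trivially pass everywhere (gamma = 0).
pos = [0.0, 1.0, 2.0, 3.0]
dose = [10.0, 20.0, 30.0, 20.0]
gammas, frac = gamma_index_1d(pos, dose, pos, dose)
```

Loosening the DTA (1 mm to 2 mm) shrinks the distance term, which is why the passing rates in the abstract rise monotonically with DTA.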

A Study on Improving the Position Accuracy of the Magnetic North used in Surveillance Imaging Equipments (통합형 구조의 감시정찰 영상장비에서 자북의 위치 정확도 개선에 관한 연구)

  • Shin, Young-Don;Lee, Jae-Chon
    • Journal of the Institute of Electronics and Information Engineers
    • /
    • v.50 no.4
    • /
    • pp.219-228
    • /
    • 2013
  • Surveillance imaging equipment functions to observe the shape of a target in real time or to measure its location precisely, and its role is becoming more important in today's weapon systems. Such imaging equipment can be classified by mode of operation: fixed, vehicle-mounted, or a composite of the two. Also, according to the concept of sensor operation, a separate type uses an independent housing for each sensor, whereas in a composite type a set of multiple sensors is housed together in one unit. The sensors generally have magnetism, which can negatively affect, particularly in composite types, the location of the reference position performed by the digital compass. Shielding material or housing could be an option, but it increases weight and reduces portability, restricting its use in composite-type equipment. The objective of this paper is therefore to study how to reduce such magnetic effects on position location. To do so, in the absence of magnetic shielding, a variety of sensor positions were first modeled. Combining this result with the fact that pan and tilt functions are available in the equipment, a new position-location algorithm is proposed. The new algorithm automates the position-location process, which the existing approach performs manually. In the developed algorithm, twelve locations are measured over both the azimuth and elevation angles, in contrast to the six locations measured around the azimuth angle alone. As a result, the measurement range is widened while the measurement time is reduced, and the effect of errors that operators may make during measurement is also reduced.

A Study on the Parking Supply and Management Strategies for Multi-Family Housing Sites (공동주택 주차공급 및 관리방안 연구)

  • 안정근
    • Journal of Korean Society of Transportation
    • /
    • v.17 no.2
    • /
    • pp.41-53
    • /
    • 1999
  • The rate of automobile ownership has increased significantly in multi-family housing sites, so the government has introduced new parking regulations that raise the required parking supply to meet the high demand for parking in such sites. However, the new parking-supply regulation has several problems: it applies only to new multi-family housing sites and disregards the locational distinctions around sites. It has also reduced the open space in sites and increased the price of housing units, especially small units, by increasing the number of underground parking lots. Furthermore, residents have not had equal opportunity to access their parking lots even though they have been charged an equal share of the cost of constructing the underground parking. This research aims to relieve these problems by analyzing parking supply and demand-management strategies in both domestic and foreign countries, and suggests a new parking management system for multi-family housing sites in the 21st century. The research reveals that most multi-family housing sites want 1) diverse parking-supply regulations that consider the locational distinctions of sites, 2) parking-lot ownership programs, 3) parking fees charged to second vehicles, 4) more parking lots both within and around the sites, and 5) police enforcement against vehicles parked in violation within their sites. In particular, compared to other multi-family housing sites, sites consisting of small and medium-sized units and located in small and medium-sized cities strongly want new parking regulations that reflect their locational and social distinctions, and police enforcement against parking violations in their sites.

RFID Indoor Location Recognition Using Neural Network (신경망을 이용한 RFID 실내 위치 인식)

  • Lee, Myeong-hyeon;Heo, Joon-bum;Hong, Yeon-chan
    • Journal of the Korea Academia-Industrial cooperation Society
    • /
    • v.19 no.3
    • /
    • pp.141-146
    • /
    • 2018
  • Recently, location recognition technology has attracted much attention, especially for locating people or objects in an indoor environment without being influenced by the surroundings. GPS is widely used to recognize the position of an object or a person. GPS is very efficient, but it cannot determine the positions of objects or people indoors. RFID is a technology that identifies the location information of a tagged object or person using radio-frequency information. In this study, an RFID system is constructed and positions are measured using tags. An error arises between the actual and measured positions. To overcome this, a neural network is trained on the measured and actual position data to reduce the error. Because the number of read tags is not constant, the raw readings are unsuitable as training inputs for the neural network, so they are converted into center-of-gravity inputs and median-value inputs before training. This allows the neural network to reduce the position error. In addition, different amounts of training data are used, viz. 50, 100, 200, and 300 samples, and the correlation between the amount of training data and the error is checked. When training the neural network, the errors of the center-of-gravity input and the median-value input are compared. It was found that the more training data used, the lower the error, and that the error is lower with the median-value input than with the center-of-gravity input.
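The fixed-size input conversion the abstract describes, collapsing a variable number of read-tag coordinates into a centre-of-gravity or median input for the network, might look like the sketch below; the tag coordinates are invented for illustration.

```python
import statistics

def centroid_input(tag_positions):
    """Collapse a variable number of read-tag (x, y) coordinates into a
    fixed-size centre-of-gravity input for the neural network."""
    xs = [p[0] for p in tag_positions]
    ys = [p[1] for p in tag_positions]
    return (sum(xs) / len(xs), sum(ys) / len(ys))

def median_input(tag_positions):
    """Same idea, but using the per-axis median, which is less sensitive
    to a single badly placed or misread tag."""
    xs = [p[0] for p in tag_positions]
    ys = [p[1] for p in tag_positions]
    return (statistics.median(xs), statistics.median(ys))

# Three tags read on this scan; the next scan might read five.
tags = [(0.0, 0.0), (2.0, 0.0), (2.0, 2.0)]
c = centroid_input(tags)  # (1.333..., 0.666...)
m = median_input(tags)    # (2.0, 0.0)
```

Either conversion yields a two-value input regardless of how many tags were read, which is what makes it usable as a fixed-width network input.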

Efficient and Privacy-Preserving Near-Duplicate Detection in Cloud Computing (클라우드 환경에서 검색 효율성 개선과 프라이버시를 보장하는 유사 중복 검출 기법)

  • Hahn, Changhee;Shin, Hyung June;Hur, Junbeom
    • Journal of KIISE
    • /
    • v.44 no.10
    • /
    • pp.1112-1123
    • /
    • 2017
  • As content providers further offload content-centric services to the cloud, data retrieval over the cloud typically returns many redundant items, because near-duplication of content is prevalent on the Internet. Simply fetching all data from the cloud severely degrades efficiency in terms of resource utilization and bandwidth, and data may be encrypted by multiple content providers under different keys to preserve privacy. Thus, locating near-duplicate data in a privacy-preserving way depends on the ability to deduplicate redundant search results and return the best matches without decrypting data. To this end, we propose an efficient near-duplicate detection scheme for encrypted data in the cloud. Our scheme has the following benefits. First, a single query is enough to locate near-duplicate data even if they are encrypted under different keys by multiple content providers. Second, storage, computation, and communication costs are reduced compared to existing schemes, while achieving the same level of search accuracy. Third, scalability is significantly improved by a novel and efficient two-round detection that locates near-duplicate candidates over large quantities of data in the cloud. An experimental analysis with real-world data demonstrates the applicability of the proposed scheme to a practical cloud system. Finally, the proposed scheme is on average 70.6% faster than an existing scheme.
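The paper's scheme operates over encrypted data, which a listing like this cannot reproduce. As a plaintext illustration of what "near-duplicate" means for search results, a character-shingle Jaccard similarity check is a common baseline; the threshold, shingle length, and result strings below are arbitrary choices, not the paper's method.

```python
def shingles(text, k=3):
    """Set of overlapping k-character substrings (character shingles)."""
    text = text.lower()
    return {text[i:i + k] for i in range(len(text) - k + 1)}

def jaccard(a, b):
    """Jaccard similarity of two shingle sets."""
    return len(a & b) / len(a | b) if a | b else 1.0

def deduplicate(results, threshold=0.8):
    """Keep only the first representative of each cluster of results whose
    pairwise similarity exceeds the threshold."""
    kept = []
    for r in results:
        s = shingles(r)
        if all(jaccard(s, shingles(k)) < threshold for k in kept):
            kept.append(r)
    return kept

docs = ["cloud data retrieval", "cloud data retrieval!", "an unrelated result"]
unique = deduplicate(docs)  # the second entry collapses into the first
```

The paper's contribution is doing this kind of pruning without decrypting anything, and in two rounds so that candidates are narrowed cheaply before the expensive comparison.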

A Semantic-Based Mashup Development Tool Supporting Various Open API Types (다양한 Open API 타입들을 지원하는 시맨틱 기반 매쉬업 개발 툴)

  • Lee, Yong-Ju
    • Journal of Internet Computing and Services
    • /
    • v.13 no.3
    • /
    • pp.115-126
    • /
    • 2012
  • Mashups have become very popular over the last few years, and their use extends to IT convergence services. In spite of their popularity, several challenges arise when combining Open APIs into mashups. First, since portal sites may have a large number of APIs available for mashups, manually searching for and finding compatible APIs can be a tedious and time-consuming task. Second, none of the existing portal sites provides a way to leverage the semantic techniques that have been developed to assist users in locating and integrating APIs, as seen in traditional SOAP-based web services. Third, even after suitable APIs have been discovered, integrating them requires in-depth programming knowledge. To solve these issues, we first show that existing techniques and algorithms for finding and matching SOAP-based web services can be reused with only minor changes. Next, we show how the characteristics of APIs can be syntactically defined and semantically described, and how these syntactic and semantic descriptions aid the discovery and composition of Open APIs. Finally, we propose a goal-directed interactive approach for the dynamic composition of APIs, in which the final mashup is generated gradually by forward chaining of APIs; at each step, a new API is added to the composition.
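The forward-chaining composition step described at the end of the abstract can be sketched as follows. This is a generic greedy forward-chainer over (name, inputs, outputs) triples, not the paper's tool; the API names and data types are invented.

```python
def forward_chain(apis, initial, goal):
    """Greedy forward chaining: repeatedly append any API whose inputs are
    already available, extending the set of available data, until the goal
    type is produced or no API can fire."""
    available = set(initial)
    plan = []
    progress = True
    while goal not in available and progress:
        progress = False
        for name, ins, outs in apis:
            if name not in plan and set(ins) <= available:
                plan.append(name)        # add one API per step, as in the paper
                available |= set(outs)
                progress = True
                break
    return plan if goal in available else None

# Invented APIs: an address is geocoded, then the coordinates feed a
# weather lookup; the goal is a forecast.
apis = [
    ("geocode", ["address"], ["latlng"]),
    ("weather", ["latlng"], ["forecast"]),
]
plan = forward_chain(apis, ["address"], "forecast")  # ['geocode', 'weather']
```

The interactive aspect of the proposed approach would correspond to letting the user pick among the fireable APIs at each step instead of taking the first one.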

An Adaptive Multi-Level Thresholding and Dynamic Matching Unit Selection for IC Package Marking Inspection (IC 패키지 마킹검사를 위한 적응적 다단계 이진화와 정합단위의 동적 선택)

  • Kim, Min-Ki
    • The KIPS Transactions:PartB
    • /
    • v.9B no.2
    • /
    • pp.245-254
    • /
    • 2002
  • An IC package marking inspection system using machine vision locates and identifies target elements in an input image, and judges the marking quality by comparing the extracted elements with standard patterns. This paper proposes an adaptive multi-level thresholding (AMLT) method suitable for a series of operations: locating the target IC package, extracting the characters, and detecting the Pin 1 dimple. It also proposes a dynamic matching unit selection (DMUS) method that is robust to noise and effective at catching local marking errors. The main idea of the AMLT method is to restrict the input of Otsu's thresholding algorithm to a specified area and a partial range of gray values, so that it adapts to the specific domain. The DMUS method dynamically selects the matching unit according to the results of character extraction and layout analysis. Therefore, despite the various error conditions that occur during character extraction and layout analysis, it can select a minimal matching unit in any environment. In an experiment with 280 IC package images of eight types, the correct extraction rate for the IC package and Pin 1 dimple was 100%, and the correct marking-quality decision rate was 98.8%. This result shows that the proposed methods are effective for IC package marking inspection.
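The core AMLT idea, running Otsu's algorithm only over a restricted gray-value range (and, in the full method, a restricted image region), can be sketched as follows. This is plain Otsu with a range restriction, written from the standard between-class-variance formulation; the pixel values in the example are synthetic.

```python
def otsu_threshold(pixels, lo=0, hi=255):
    """Otsu's threshold computed only over pixels whose gray value lies in
    [lo, hi] -- the range restriction at the heart of the AMLT idea.
    Returns t such that values <= t form one class."""
    vals = [p for p in pixels if lo <= p <= hi]
    hist = [0] * 256
    for v in vals:
        hist[v] += 1
    total = len(vals)
    sum_all = sum(i * h for i, h in enumerate(hist))
    w_b = sum_b = 0
    best_t, best_var = lo, -1.0
    for t in range(lo, hi + 1):
        w_b += hist[t]                      # background weight
        if w_b == 0:
            continue
        w_f = total - w_b                   # foreground weight
        if w_f == 0:
            break
        sum_b += t * hist[t]
        m_b = sum_b / w_b                   # background mean
        m_f = (sum_all - sum_b) / w_f       # foreground mean
        var_between = w_b * w_f * (m_b - m_f) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t

# Synthetic bimodal image: dark package body vs bright marking.
t = otsu_threshold([10] * 100 + [200] * 100)
```

In the full AMLT pipeline, the same routine would be called repeatedly with different region/range restrictions for the package, the characters, and the Pin 1 dimple.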

k-Interest Places Search Algorithm for Location Search Map Service (위치 검색 지도 서비스를 위한 k관심지역 검색 기법)

  • Cho, Sunghwan;Lee, Gyoungju;Yu, Kiyun
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography
    • /
    • v.31 no.4
    • /
    • pp.259-267
    • /
    • 2013
  • GIS-based web map services are increasingly accessible to the public. Among them, location query services are the most frequently used, but they are currently restricted to single-keyword searches. Although demand is growing for a service that queries multiple keywords corresponding to sequential activities (banking, having lunch, watching a movie, and other activities) at various POI locations, no such service is yet provided. The objective of this paper is to develop the k-IPS algorithm for quickly and accurately querying the multiple POIs that internet users input, and for locating the search results on a web map. The algorithm utilizes the hierarchical tree structure of R*-tree indexing to produce overlapped geometric regions. By using a recursive R*-tree index-based spatial join, the performance of the existing spatial join operation is improved. The performance of the algorithm is tested on spatial queries of 2, 3, and 4 POIs selected from a set of 159 keywords. About 90% of the test queries are answered within 0.1 second. The algorithm proposed in this paper is expected to support a variety of location-based query services, for which demand is growing, and thereby conveniently support citizens' daily activities.
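The query the k-IPS algorithm answers, picking one POI per keyword so that the whole group is spatially close, can be stated as a brute-force sketch; the paper's contribution is accelerating exactly this with a recursive R*-tree spatial join, which the exhaustive version below does not attempt. The coordinates and the `max_span` closeness criterion are invented for illustration.

```python
from itertools import product

def k_interest_places(poi_by_keyword, max_span):
    """Brute-force stand-in for the k-IPS query: choose one POI per keyword
    so the group fits in a bounding box whose width and height do not
    exceed max_span; return the tightest such group."""
    best, best_size = None, float("inf")
    for combo in product(*poi_by_keyword.values()):
        xs = [p[0] for p in combo]
        ys = [p[1] for p in combo]
        size = max(max(xs) - min(xs), max(ys) - min(ys))
        if size <= max_span and size < best_size:
            best, best_size = combo, size
    return best

# Invented POIs for three sequential activities on a toy grid.
pois = {
    "bank":   [(0, 0), (9, 9)],
    "lunch":  [(1, 0), (8, 8)],
    "cinema": [(0, 1), (9, 8)],
}
group = k_interest_places(pois, max_span=2)
```

The search space grows as the product of the per-keyword POI counts, which is why the paper prunes it with overlapped R*-tree regions instead of enumerating every combination.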