• Title/Summary/Keyword: large-scale systems


Reduction in Sample Size Using Topological Information for Monte Carlo Localization

  • Yang, Ju-Ho;Song, Jae-Bok;Chung, Woo-Jin
    • Institute of Control, Robotics and Systems (ICROS): Conference Proceedings
    • /
    • 2005.06a
    • /
    • pp.901-905
    • /
    • 2005
  • Monte Carlo localization (MCL) is known to be one of the most reliable methods for pose estimation of a mobile robot, and much research has been done so far to improve its performance. Although MCL is capable of estimating the robot pose even for a completely unknown initial pose in a known environment, it takes considerable time to give an initial estimate because the number of random samples is usually very large, especially for a large-scale environment. For practical implementation of MCL, therefore, a reduction in sample size is desirable. This paper presents a novel approach to reducing the number of samples used in the particle filter for efficient implementation of MCL. To this end, topological information generated off-line using a thinning method, which is commonly used in image processing, is employed. The topological map is first created from the given grid map of the environment. The robot scans the local environment using a laser rangefinder and generates a local topological map. The robot then navigates only on this local topological edge, which is likely to be the same as the one obtained off-line from the given grid map. Since the robot traverses along the edge, random samples are drawn near the off-line topological edge instead of being taken from a uniform distribution. In this way, the sample size required for MCL can be drastically reduced, leading to a shorter initial operation time. Experimental results using the proposed method show that the number of samples can be reduced considerably, and the time required for robot pose estimation can also be substantially decreased.
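
The following is a minimal sketch (plain NumPy, with a toy map standing in for the paper's grid and topological maps) of the sampling idea in the abstract above: instead of drawing the initial particle set uniformly over free space, samples are drawn only near the off-line topological edges, so far fewer particles are needed for global localization.

```python
import numpy as np

def uniform_samples(free_cells, n):
    """Baseline MCL initialization: particles drawn uniformly over the free space."""
    idx = np.random.choice(len(free_cells), size=n)
    xy = free_cells[idx].astype(float)
    theta = np.random.uniform(-np.pi, np.pi, size=n)
    return np.column_stack([xy, theta])

def topological_samples(edge_points, n, pos_sigma=0.2):
    """Reduced initialization: particles drawn near the off-line topological edges.

    edge_points : (m, 2) points on the thinned (skeleton) edges of the grid map.
    pos_sigma   : assumed spread (in map units) around each edge point.
    """
    idx = np.random.choice(len(edge_points), size=n)
    xy = edge_points[idx] + np.random.normal(0.0, pos_sigma, size=(n, 2))
    theta = np.random.uniform(-np.pi, np.pi, size=n)
    return np.column_stack([xy, theta])

# Toy example: free space of 10,000 cells, but a skeleton of only 300 edge points.
free_cells = np.argwhere(np.ones((100, 100), dtype=bool))
edge_points = free_cells[np.random.choice(len(free_cells), 300, replace=False)].astype(float)

baseline = uniform_samples(free_cells, 5000)      # large sample set needed for global localization
reduced  = topological_samples(edge_points, 500)  # far fewer samples, concentrated on the edges
```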


Monitoring-Evaluation System for Lifting Heavy Structures using 3D Location Data (3차원 위치좌표를 이용한 대형 구조물 양중을 위한 계측 - 평가 시스템)

  • Lee, Myung Ho;Chun, Sung Chul;Oh, Bohwan
    • Journal of Korean Society of Steel Construction
    • /
    • v.21 no.4
    • /
    • pp.413-420
    • /
    • 2009
  • Heavy structures such as large roof structures and super-beams have conventionally been lifted using hydraulic jacks, with member verticality and horizontality checked at every construction stage for structural safety by reading measuring tapes over CCTV. When the relative displacement exceeded a predetermined limit, the hydraulic systems were stopped, the displacement was adjusted manually, and lifting was resumed. This approach proved unreliable because the displacement was checked by eye over CCTV, the manual adjustment took a long time, and real-time monitoring was impossible. To address these problems, a monitoring-evaluation system for the stable lifting of heavy structures is proposed, using an automatic-target-recognition total station, laser distance-measuring devices, a data logger, strain gauges, and other instruments. After a program for operating the system and acquiring the data was developed, a mock-up test was conducted in a large-scale structural laboratory to evaluate the accuracy and applicability of the system; the test confirmed stable data acquisition and the applicability of the system.
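
As a rough illustration of the evaluation rule described above (the coordinates and displacement limit below are made-up numbers, not values from the paper), the system stops lifting whenever the measured relative displacement between lifting points exceeds a predetermined limit:

```python
# Hypothetical 3D coordinates (in mm) of two lifting points reported by the
# automatic-target-recognition total station at one construction stage.
point_a = (12031.4, 8540.2, 15220.7)
point_b = (12028.9, 8542.0, 15203.1)

DISPLACEMENT_LIMIT_MM = 20.0  # assumed allowable relative vertical displacement

relative_dz = abs(point_a[2] - point_b[2])
if relative_dz > DISPLACEMENT_LIMIT_MM:
    print(f"STOP lifting: relative displacement {relative_dz:.1f} mm exceeds the limit")
else:
    print(f"Continue lifting: relative displacement {relative_dz:.1f} mm within the limit")
```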

NVST DATA ARCHIVING SYSTEM BASED ON FASTBIT NOSQL DATABASE

  • Liu, Ying-Bo;Wang, Feng;Ji, Kai-Fan;Deng, Hui;Dai, Wei;Liang, Bo
    • Journal of The Korean Astronomical Society
    • /
    • v.47 no.3
    • /
    • pp.115-122
    • /
    • 2014
  • The New Vacuum Solar Telescope (NVST) is a 1-meter vacuum solar telescope that aims to observe the fine structures of active regions on the Sun. The main tasks of the NVST are high-resolution imaging and spectral observations, including measurements of the solar magnetic field. The NVST has collected more than 20 million FITS files since it began routine observations in 2012 and produces up to 120 thousand observational records per day. Given this large number of files, effective archiving and retrieval becomes a critical and urgent problem. In this study, we implement a new data archiving system for the NVST based on the Fastbit Not Only Structured Query Language (NoSQL) database. Compared with a relational database (i.e., MySQL, My Structured Query Language), the Fastbit database shows distinct advantages in indexing and querying performance. In a large-scale database of 40 million records, the multi-field combined query response time of the Fastbit database is about 15 times faster and fully meets the requirements of the NVST. Our study brings a new idea for massive astronomical data archiving and would contribute to the design of data management systems for other astronomical telescopes.
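
Fastbit's query speed comes from bitmap indexing of the queried fields. The sketch below is library-independent (plain NumPy with invented metadata fields, not the actual Fastbit API or the NVST schema) and only illustrates how a multi-field combined query can be answered by AND-ing per-field bitmaps instead of scanning records row by row:

```python
import numpy as np

rng = np.random.default_rng(0)
n_records = 1_000_000  # stand-in for the tens of millions of FITS-file records

# Two hypothetical metadata fields of the archive records.
instrument = rng.integers(0, 8, size=n_records)      # e.g., channel/instrument code
exposure_ms = rng.integers(1, 2000, size=n_records)  # e.g., exposure time

# "Index build": one bitmap (boolean array) per distinct value or bin of each field.
instrument_bitmaps = {v: instrument == v for v in range(8)}
exposure_bins = {b: (exposure_ms >= b * 500) & (exposure_ms < (b + 1) * 500) for b in range(4)}

# Multi-field combined query: instrument == 3 AND 500 <= exposure_ms < 1000.
hits = instrument_bitmaps[3] & exposure_bins[1]
print("matching records:", int(hits.sum()))
```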

Design and Implementation of A Scan Detection Management System with real time Incidence Response (실시간 e-mail 대응 침입시도탐지 관리시스템의 설계 및 구현)

  • Park, Su-Jin;Park, Myeong-Chan;Lee, Sae-Sae;Choe, Yong-Rak
    • The KIPS Transactions:PartC
    • /
    • v.9C no.3
    • /
    • pp.359-366
    • /
    • 2002
  • Hacking techniques are developing rapidly along with the wide use of the Internet. Recent scanning attacks are directed against multiple target systems spread over large-scale domains rather than against the single network of one organization. To counter such attacks effectively at a central site, a scan detection management system that can detect and analyze scan activities is needed. Such a system makes effective use of the various pieces of detection information received from scan detection agents. In this paper, a real-time scan detection management system is designed and implemented that performs higher-level integrated analysis and has an architecture suitable for large-scale network environments.
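
A toy sketch of the kind of analysis such a system can perform on agent reports (the log format and thresholds are hypothetical): a source that probes many distinct ports or hosts within one window is flagged for the real-time response.

```python
from collections import defaultdict

# Hypothetical (timestamp, src_ip, dst_ip, dst_port) connection attempts
# reported by a scan detection agent during one time window.
events = [
    (0.1, "10.0.0.5", "192.168.1.1", 22),
    (0.2, "10.0.0.5", "192.168.1.1", 23),
    (0.3, "10.0.0.5", "192.168.1.1", 80),
    (0.4, "10.0.0.5", "192.168.1.2", 22),
    (0.5, "10.0.0.9", "192.168.1.7", 443),
]

PORT_SCAN_THRESHOLD = 3   # assumed: distinct ports per source before alerting
HOST_SCAN_THRESHOLD = 2   # assumed: distinct target hosts per source before alerting

ports_by_src = defaultdict(set)
hosts_by_src = defaultdict(set)
for _, src, dst, port in events:
    ports_by_src[src].add(port)
    hosts_by_src[src].add(dst)

for src in ports_by_src:
    if len(ports_by_src[src]) >= PORT_SCAN_THRESHOLD or len(hosts_by_src[src]) >= HOST_SCAN_THRESHOLD:
        # In the paper's system, an alert like this would trigger the real-time e-mail response.
        print(f"ALERT: possible scan from {src} "
              f"({len(ports_by_src[src])} ports, {len(hosts_by_src[src])} hosts)")
```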

Development plan for a persistent 1.3 GHz NMR magnet in a new MIRAI project on joint technology for HTS wires/cables in Japan

  • Yanagisawa, Y.;Suetomi, Y.;Piao, R.;Yamagishi, K.;Takao, T.;Hamada, M.;Saito, K.;Ohki, K.;Yamaguchi, T.;Nagaishi, T.;Kitaguchi, H.;Ueda, H.;Shimoyama, J.;Ishii, Y.;Tomita, M.;Maeda, H.
    • Superconductivity and Cryogenics
    • /
    • v.20 no.2
    • /
    • pp.15-22
    • /
    • 2018
  • The present article briefly overviews the plan for a new project on joint technology for HTS wires/cables and describes the development plan for the world's highest-field NMR magnet, which is a major development item in the project. For full-fledged social implementation of superconducting devices, high temperature superconducting (HTS) wires are a key technology, since they can be cooled by liquid nitrogen and can generate super-high magnetic fields of >>24 T at liquid helium temperatures. However, one of the major drawbacks of HTS wires is that a single piece of wire is available only in short lengths. This necessitates a number of joints being installed in superconducting devices, resulting in a difficult manufacturing process and a large joint resistance. In Japan, a large-scale project has commenced, including two technical demonstration items: (i) development of superconducting joints between HTS wires, to be used in the world's highest-field 1.3 GHz (30.5 T) NMR magnet operated in persistent current mode, with the joint performance evaluated on the basis of NMR spectra for proteins; and (ii) development of ultra-low-resistance joints between DC superconducting feeder cables for railway systems. The project launches a new initiative for next-generation super-high-field NMR development as well as for the realization of better superconducting power cables.


Fuzzy control of hybrid base-isolator with magnetorheological damper and friction pendulum system (MR 감쇠기와 FPS를 이용한 하이브리드 면진장치의 퍼지제어)

  • Kim, Hyun-Su;Roschke, P.N.;Lin, P.Y.
    • Journal of the Earthquake Engineering Society of Korea
    • /
    • v.9 no.1 s.41
    • /
    • pp.61-70
    • /
    • 2005
  • Shaking table tests are carried out on a single-degree-of-freedom mass that is equipped with a hybrid base isolation system. The isolator consists of a set of four specially-designed friction pendulum systems (FPS) and a magnetorheological (MR) damper. The structure and its hybrid isolation system are subjected to various intensities of near- and far-fault earthquakes on a large shake table. The proposed fuzzy controller uses feedback from displacement or acceleration transducers attached to the structure to modulate resistance of the semi-active damper to motion. Results from several types of passive and semi-active control strategies are summarized and compared. The study shows that a combination of FPS isolators and an adjustable MR damper can effectively provide robust control of vibration for a large full-scale structure undergoing a wide variety of seismic loads.
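
A minimal sketch of the semi-active fuzzy control idea described above: triangular membership functions over the measured structural response are combined by weighted-average defuzzification into an MR damper command voltage. The membership shapes, rule weights, and 5 V maximum are illustrative assumptions, not the tuned controller of the paper.

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function with peak at b and support [a, c]."""
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

def fuzzy_damper_voltage(displacement, velocity, v_max=5.0):
    """Map normalized displacement/velocity feedback to an MR damper command voltage.

    Inputs are assumed to be scaled to [-1, 1]; the rule table below is illustrative.
    """
    # Fuzzify the response magnitude into SMALL / MEDIUM / LARGE sets.
    mag = max(abs(displacement), abs(velocity))
    mu_small  = tri(mag, -0.5, 0.0, 0.5)
    mu_medium = tri(mag,  0.0, 0.5, 1.0)
    mu_large  = tri(mag,  0.5, 1.0, 1.5)
    # Rule outputs (fraction of maximum voltage) per set, defuzzified by weighted average.
    weights = np.array([mu_small, mu_medium, mu_large])
    outputs = np.array([0.0, 0.5, 1.0]) * v_max
    return float(np.dot(weights, outputs) / (weights.sum() + 1e-9))

print(fuzzy_damper_voltage(0.1, 0.05))   # small response -> low damping command
print(fuzzy_damper_voltage(0.9, 0.8))    # large response -> near-maximum command
```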

Geolocation Spectrum Database Assisted Optimal Power Allocation: Device-to-Device Communications in TV White Space

  • Xue, Zhen;Shen, Liang;Ding, Guoru;Wu, Qihui
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.9 no.12
    • /
    • pp.4835-4855
    • /
    • 2015
  • TV white space (TVWS) is showing promise to become the first widespread practical application of cognitive technology. In fact, regulators worldwide are beginning to allow secondary users access to the TV band, provided that they consult a geolocation database. Device-to-device (D2D) communication can improve spectrum efficiency, but large-scale D2D communications that underlay TVWS may generate undesirable interference to TV receivers and cause severe mutual interference. In this paper, we use an established geolocation database to investigate the power allocation problem, in order to maximize the total sum throughput of D2D links in TVWS while guaranteeing the quality-of-service (QoS) requirements of both the D2D links and the TV receivers. Firstly, we formulate an optimization problem based on the system model, which is nonconvex and intractable. Secondly, we use an effective approach to convert the original problem into a series of convex problems, and we solve these problems using interior point methods, which have polynomial computational complexity. Additionally, we propose an iterative algorithm based on the barrier method to locate the optimal solution. Simulation results show that the proposed algorithm performs well, with high approximation accuracy for both small- and large-dimensional problems, and that it is superior to both the active set algorithm and the genetic algorithm.
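
The sketch below illustrates the barrier-method idea on a toy version of this power allocation problem: maximize a concave sum-rate surrogate over D2D transmit powers subject to per-link power limits and an aggregate interference limit at the TV receiver. The channel gains, limits, and the use of a generic SciPy minimizer for the inner step are assumptions for illustration, not the paper's algorithm.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
K = 4                                  # number of D2D links
g = rng.uniform(0.5, 1.5, size=K)      # direct channel gains (assumed)
h = rng.uniform(0.01, 0.1, size=K)     # cross gains toward the TV receiver (assumed)
noise = 0.1
P_MAX = 1.0                            # per-link power limit
I_MAX = 0.15                           # aggregate interference limit at the TV receiver

def sum_rate(p):
    # Interference among D2D links is ignored here to keep the surrogate concave.
    return np.sum(np.log1p(g * p / noise))

def barrier_objective(p, t):
    # Log barrier enforcing 0 < p < P_MAX and h.p < I_MAX.
    slack = np.concatenate([p, P_MAX - p, [I_MAX - h @ p]])
    if np.any(slack <= 0):
        return np.inf
    return -t * sum_rate(p) - np.sum(np.log(slack))

p = np.full(K, 0.1)                    # strictly feasible starting point
for t in [1, 10, 100, 1000]:           # outer barrier iterations: tighten the barrier
    p = minimize(barrier_objective, p, args=(t,), method="Nelder-Mead",
                 options={"xatol": 1e-8, "fatol": 1e-10, "maxiter": 5000}).x

print("power allocation:", np.round(p, 3), " interference:", float(h @ p))
```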

Analysis of massive data in astronomy (천문학에서의 대용량 자료 분석)

  • Shin, Min-Su
    • The Korean Journal of Applied Statistics
    • /
    • v.29 no.6
    • /
    • pp.1107-1116
    • /
    • 2016
  • Recent astronomical survey observations have produced substantial amounts of data and have completely changed conventional methods of analyzing astronomical data. Both classical statistical inference and modern machine learning methods have been used in every step of data analysis, ranging from data calibration to inference of physical models. Machine learning methods are growing in popularity for classical problems of astronomical data analysis because of low-cost data acquisition using cheap large-scale detectors and fast computer networks that allow large volumes of data to be shared. It is now common to consider the effects of inhomogeneous spatial and temporal coverage in the analysis of big astronomical data. The growing size of the data requires parallel distributed computing environments as well as machine learning algorithms, although distributed data analysis systems have not yet been widely adopted for the general analysis of massive astronomical data. Gathering adequate training data is observationally expensive, and learning data in astronomy are generally collected from multiple sources; therefore, semi-supervised and ensemble machine learning methods will become important for the analysis of big astronomical data.
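
As a concrete illustration of the last point, the scikit-learn sketch below wraps an ensemble classifier in self-training so that a small labelled set plus many unlabelled objects are used together; the synthetic data and hyperparameters are placeholders, not a real survey catalogue.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.semi_supervised import SelfTrainingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Synthetic stand-in for survey objects: most "observations" lack class labels.
X, y = make_classification(n_samples=5000, n_features=10, n_informative=6, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

y_semi = y_train.copy()
unlabeled = np.random.default_rng(0).random(len(y_semi)) < 0.95   # hide 95% of the training labels
y_semi[unlabeled] = -1                                            # -1 marks "unlabelled" for sklearn

# Ensemble base learner wrapped in self-training: confident unlabelled objects get pseudo-labels.
model = SelfTrainingClassifier(RandomForestClassifier(n_estimators=200, random_state=0), threshold=0.9)
model.fit(X_train, y_semi)
print("test accuracy with 5% labels:", round(accuracy_score(y_test, model.predict(X_test)), 3))
```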

Knocking-in of the Human Thrombopoietin Gene on Beta-casein Locus in Bovine Fibroblasts

  • Chang, Mira;Lee, Jeong-Woong;Koo, Deog-Bon;Shin, Sang Tae;Han, Yong-Mahn
    • Asian-Australasian Journal of Animal Sciences
    • /
    • v.23 no.6
    • /
    • pp.806-813
    • /
    • 2010
  • Animal bioreactors have been regarded as alternative tools for the production of scarce human therapeutic proteins. The mammary glands of cattle are optimal tissues for producing therapeutic proteins that cannot be produced in large amounts in traditional systems based on microorganisms and eukaryotic cells. In this study, two knock-in vectors, pBCTPOKI-6 and pBCTPOKI-10, which target the hTPO gene to the bovine beta-casein locus, were designed for the development of cloned transgenic cattle. The pBCTPOKI-6 and pBCTPOKI-10 vectors expressed hTPO protein in culture medium at concentrations of 774 pg/ml and 1,867 pg/ml, respectively. Two targeted cell clones were successfully obtained from bovine fibroblasts transfected with the pBCTPOKI-6 vector. Cloned embryos reconstructed with the targeted nuclei showed lower in vitro developmental competence than those with wild-type nuclei. After transfer of the cloned embryos into recipients, 7 pregnancies were detected at 40 to 60 days of gestation, but they failed to develop to term. These results represent the first attempt to target a human gene to a bovine milk protein gene locus, and they demonstrate the potential for large-scale production of therapeutic proteins in an animal bioreactor system.

A Study on Distributed Parallel SWRL Inference in an In-Memory-Based Cluster Environment (인메모리 기반의 클러스터 환경에서 분산 병렬 SWRL 추론에 대한 연구)

  • Lee, Wan-Gon;Bae, Seok-Hyun;Park, Young-Tack
    • Journal of KIISE
    • /
    • v.45 no.3
    • /
    • pp.224-233
    • /
    • 2018
  • Recently, there have been many studies on SWRL reasoning engines that apply user-defined rules to a large-scale ontology in a distributed environment. Unlike schema-based axiom rules, efficient inference orders cannot be defined for SWRL rules, and a large volume of data is shuffled over the network by unnecessary iterative processing. To solve these problems, in this study we propose a method that uses a Map-Reduce algorithm on a distributed in-memory framework to evaluate multiple rules simultaneously and to minimize the volume of data shuffled between the distributed machines in the cluster. For the experiments, we use the WiseKB ontology, composed of 200 million triples, and 36 user-defined rules. The proposed reasoner completes inference in 16 minutes and is 2.7 times faster than previous reasoning systems evaluated on the LUBM benchmark dataset.
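
A small PySpark-style sketch of the in-memory Map-Reduce rule evaluation described above, using one hypothetical SWRL-like rule, worksFor(?x,?y) ∧ locatedIn(?y,?z) → basedIn(?x,?z); the triples and the rule are invented for illustration, and distinct() stands in for the duplicate elimination that limits redundant shuffling.

```python
from pyspark import SparkContext

sc = SparkContext("local[*]", "swrl-rule-sketch")

# Toy triple store: (subject, predicate, object).
triples = sc.parallelize([
    ("alice",  "worksFor",  "acme"),
    ("bob",    "worksFor",  "globex"),
    ("acme",   "locatedIn", "seoul"),
    ("globex", "locatedIn", "busan"),
]).cache()                                   # kept in memory, as in an in-memory cluster framework

# Rule: worksFor(?x, ?y) ^ locatedIn(?y, ?z) -> basedIn(?x, ?z)
works_for  = triples.filter(lambda t: t[1] == "worksFor").map(lambda t: (t[2], t[0]))   # key = ?y
located_in = triples.filter(lambda t: t[1] == "locatedIn").map(lambda t: (t[0], t[2]))  # key = ?y

inferred = (works_for.join(located_in)                       # shuffle/join on the shared variable ?y
            .map(lambda kv: (kv[1][0], "basedIn", kv[1][1]))
            .distinct())                                     # avoid re-deriving duplicate triples

print(inferred.collect())   # e.g. [('alice', 'basedIn', 'seoul'), ('bob', 'basedIn', 'busan')]
sc.stop()
```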