• Title/Summary/Keyword: 3D information model

Search Results: 2,543

DEVELOPMENT OF AN AMPHIBIOUS ROBOT FOR VISUAL INSPECTION OF APR1400 NPP IRWST STRAINER ASSEMBLY

  • Jang, You Hyun;Kim, Jong Seog
    • Nuclear Engineering and Technology
    • /
    • v.46 no.3
    • /
    • pp.439-446
    • /
    • 2014
  • An amphibious inspection robot system (hereafter AIROS) is being developed to visually inspect the in-containment refueling water storage tank (hereafter IRWST) strainers in APR1400 instead of a human diver. Four IRWST strainers are located in the IRWST, which is filled with boric acid water. Each strainer has 108 sub-assembly strainer fin modules that should be inspected with the VT-3 method according to Reg. Guide 1.82 and the operation manual. AIROS has six thrusters for underwater navigation and four legs for walking on top of the strainer. An inverse kinematics algorithm was implemented in the robot controller for exact walking on top of the IRWST strainer. The IRWST strainer has several top cross braces that protrude from the top of the strainer to maintain its frame, and these can be obstacles when walking on the strainer. Therefore, a robot leg should arrive at a position beside the top cross brace. For this reason, we use an image processing technique to find the top cross brace in the sole camera image. The sole camera image is processed in real time to detect the top cross brace using a cross edge detection algorithm. A 5-DOF robot arm that has multiple camera modules for simultaneous inspection of both sides can penetrate narrow gaps. For intuitive presentation of inspection results and for management of inspection data, inspection images are stored in the control PC together with camera angles and positions so that the images can be synthesized and merged. The synthesized images are then mapped onto a 3D CAD model of the IRWST strainer using the location information. An IRWST strainer mock-up was fabricated to teach the robot arm scanning and gaiting. It is important to arrive at the designated position for inserting the robot arm into all of the gaps, but exact position control under water without an anchor is not easy. Therefore, we designed the multi-leg robot to serve the roles of anchoring and positioning. The quadruped design with sole cameras is a new approach for exact and stable position control on the IRWST strainer, unlike traditional robots for underwater facility inspection. The developed robot will be used in practice to enhance the efficiency and reliability of the inspection of nuclear power plant components.
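The abstract does not describe the cross edge detection step in detail; as a rough illustration of how a protruding, roughly straight cross brace might be located in a sole-camera frame, the following sketch uses standard OpenCV edge and line detection. The thresholds, kernel size, and input file name are illustrative assumptions, not the parameters used by AIROS.

```python
import cv2
import numpy as np

# Minimal sketch: locate a straight cross brace in a sole-camera frame.
# File name and all thresholds are illustrative assumptions.
frame = cv2.imread("sole_camera_frame.png", cv2.IMREAD_GRAYSCALE)

# Suppress noise, then extract edges.
blurred = cv2.GaussianBlur(frame, (5, 5), 0)
edges = cv2.Canny(blurred, 50, 150)

# Look for long, roughly straight segments that could be the top cross brace.
lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180,
                        threshold=80, minLineLength=100, maxLineGap=10)

if lines is not None:
    # Report the longest detected segment as the candidate brace edge.
    longest = max(lines[:, 0, :],
                  key=lambda l: np.hypot(l[2] - l[0], l[3] - l[1]))
    print("Candidate cross-brace edge (x1, y1, x2, y2):", longest)
else:
    print("No cross brace detected in this frame.")
```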

Label Embedding for Improving Classification Accuracy Using AutoEncoder with Skip-Connections (다중 레이블 분류의 정확도 향상을 위한 스킵 연결 오토인코더 기반 레이블 임베딩 방법론)

  • Kim, Museong;Kim, Namgyu
    • Journal of Intelligence and Information Systems
    • /
    • v.27 no.3
    • /
    • pp.175-197
    • /
    • 2021
  • Recently, with the development of deep learning technology, research on unstructured data analysis is being actively conducted, and it is showing remarkable results in various fields such as classification, summarization, and generation. Among the various text analysis fields, text classification is the most widely used technology in academia and industry. Text classification includes binary classification with one label from two classes, multi-class classification with one label from several classes, and multi-label classification with multiple labels from several classes. In particular, multi-label classification requires a different training method from binary and multi-class classification because of its characteristic of having multiple labels. In addition, since the number of labels to be predicted increases as the number of labels and classes increases, performance improvement is difficult due to the increased prediction difficulty. To overcome these limitations, research on label embedding is being actively conducted, which (i) compresses the initially given high-dimensional label space into a low-dimensional latent label space, (ii) trains a model to predict the compressed label, and (iii) restores the predicted label to the high-dimensional original label space. Typical label embedding techniques include Principal Label Space Transformation (PLST), Multi-Label Classification via Boolean Matrix Decomposition (MLC-BMaD), and Bayesian Multi-Label Compressed Sensing (BML-CS). However, since these techniques consider only linear relationships between labels or compress the labels by random transformation, they have difficulty capturing non-linear relationships between labels, and thus cannot create a latent label space that sufficiently contains the information of the original labels. Recently, there have been increasing attempts to improve performance by applying deep learning technology to label embedding. Label embedding using an autoencoder, a deep learning model that is effective for data compression and restoration, is representative. However, traditional autoencoder-based label embedding has a limitation in that a large amount of information is lost when compressing a high-dimensional label space with a myriad of classes into a low-dimensional latent label space. This is related to the vanishing-gradient problem that occurs during backpropagation. To solve this problem, skip connections were devised: by adding a layer's input to its output, gradient vanishing during backpropagation is prevented, and efficient learning is possible even when the network is deep. Skip connections are mainly used for image feature extraction in convolutional neural networks, but studies using skip connections in autoencoders or in the label embedding process are still lacking. Therefore, in this study, we propose an autoencoder-based label embedding methodology in which skip connections are added to both the encoder and the decoder to form a low-dimensional latent label space that reflects the information of the high-dimensional label space well. In addition, the proposed methodology was applied to actual paper keywords to derive the high-dimensional keyword label space and the low-dimensional latent label space. Using this, we conducted an experiment to predict the compressed keyword vector in the latent label space from the paper abstract and to evaluate the multi-label classification by restoring the predicted keyword vector back to the original label space. As a result, the accuracy, precision, recall, and F1 score used as performance indicators showed far superior performance for multi-label classification based on the proposed methodology compared to traditional multi-label classification methods. This shows that the low-dimensional latent label space derived through the proposed methodology reflects the information of the high-dimensional label space well, which ultimately led to the improvement of the performance of multi-label classification itself. In addition, the utility of the proposed methodology was confirmed by comparing its performance according to the domain characteristics and the number of dimensions of the latent label space.
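The paper's exact architecture and layer sizes are not given in the abstract; the following is a minimal sketch of an autoencoder whose encoder and decoder each carry a skip connection, with all dimensions chosen purely for illustration.

```python
import torch
import torch.nn as nn

class SkipConnectionLabelAutoencoder(nn.Module):
    """Sketch of a label-embedding autoencoder with skip connections.
    Dimensions are illustrative; the paper's actual architecture may differ."""

    def __init__(self, label_dim=1000, hidden_dim=256, latent_dim=64):
        super().__init__()
        # Encoder: high-dimensional label vector -> latent label space.
        self.enc_in = nn.Linear(label_dim, hidden_dim)
        self.enc_hidden = nn.Linear(hidden_dim, hidden_dim)
        self.enc_out = nn.Linear(hidden_dim, latent_dim)
        # Decoder: latent label space -> reconstructed label vector.
        self.dec_in = nn.Linear(latent_dim, hidden_dim)
        self.dec_hidden = nn.Linear(hidden_dim, hidden_dim)
        self.dec_out = nn.Linear(hidden_dim, label_dim)
        self.act = nn.ReLU()

    def encode(self, y):
        h = self.act(self.enc_in(y))
        # Skip connection: add the block input back to its output.
        h = self.act(self.enc_hidden(h) + h)
        return self.enc_out(h)

    def decode(self, z):
        h = self.act(self.dec_in(z))
        # Skip connection in the decoder as well.
        h = self.act(self.dec_hidden(h) + h)
        return torch.sigmoid(self.dec_out(h))  # multi-label reconstruction

    def forward(self, y):
        z = self.encode(y)
        return self.decode(z), z

# Usage sketch: reconstruct a batch of binary keyword-label vectors.
model = SkipConnectionLabelAutoencoder()
labels = torch.randint(0, 2, (8, 1000)).float()
reconstruction, latent = model(labels)
loss = nn.functional.binary_cross_entropy(reconstruction, labels)
```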

Application of GIS to Select Viewpoints for Landscape Analysis (경관분석 조망점 선정을 위한 GIS의 적용방안)

  • Kang, Tae-Hyun;Leem, Youn-Taik;Lee, Sang-Ho
    • Journal of the Korean Association of Geographic Information Studies
    • /
    • v.16 no.2
    • /
    • pp.101-113
    • /
    • 2013
  • Growing concern about environmental quality has made landscape analysis more important than ever before. In landscape analysis, viewpoint selection is one of the most important stages. Because of its subjectivity, the conventional viewpoint selection method often misses important viewpoints. The purpose of this study is to develop a viewpoint selection method for landscape analysis using GIS data and techniques. During the viewpoint selection process, spatial and attribute data from several GIS systems were used. Query and overlay methods were mainly adopted to find meaningful viewpoints. 3D simulation analysis on a DEM (Digital Elevation Model) was used for every selected viewpoint to examine whether the view target is screened out or not. An application study at a sample site identified good, unscreened viewpoints that the conventional method had omitted. It also showed the potential to reduce the time and cost of the viewpoint selection process in landscape analysis. To improve applicability, the GIS data analysis process has to be improved, and more modules, such as an automatic screening analysis system for selected viewpoints, have to be developed.
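The paper's GIS workflow itself is not reproduced here; as a rough illustration of the screening test on a DEM (checking whether a view target is blocked by terrain), the following is a minimal line-of-sight sketch. The DEM values, observer height, and grid coordinates are all illustrative assumptions; production viewshed tools also account for earth curvature, refraction, and cell size.

```python
import numpy as np

def line_of_sight(dem, viewpoint, target, observer_height=1.6):
    """Check whether `target` is visible from `viewpoint` over a DEM grid.

    dem: 2D array of elevations; viewpoint/target: (row, col) cells.
    Simplified screening test for illustration only.
    """
    r0, c0 = viewpoint
    r1, c1 = target
    eye = dem[r0, c0] + observer_height
    target_elev = dem[r1, c1]

    steps = int(max(abs(r1 - r0), abs(c1 - c0)))
    for i in range(1, steps):
        t = i / steps
        r = int(round(r0 + t * (r1 - r0)))
        c = int(round(c0 + t * (c1 - c0)))
        # Elevation of the sight line at this fraction of the path.
        sight_line = eye + t * (target_elev - eye)
        if dem[r, c] > sight_line:
            return False  # terrain screens out the view target
    return True

# Usage sketch with a synthetic DEM (values are illustrative).
dem = np.zeros((100, 100))
dem[40:60, 40:60] = 50.0  # a hill between observer and target
print(line_of_sight(dem, (10, 10), (90, 90)))  # False: blocked by the hill
```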

Design and Performance Analysis of ENUM Directory Service (ENUM 디렉터리 서비스 설계 및 성능 평가)

  • 이혜원;윤미연;신용태;신성우;송관우
    • Journal of KIISE:Information Networking
    • /
    • v.30 no.4
    • /
    • pp.559-571
    • /
    • 2003
  • ENUM (tElephone NUmber Mapping) is a protocol that brings convergence between PSTN networks and IP networks by using the unique worldwide E.164 telephone number as an identifier across different communication infrastructures. The mechanism provides a bridge between two completely different environments via the E.164 number: IP-based application services used in PSTN networks, and PSTN-based application services used in IP networks. We propose a new way to organize and handle ENUM Tier 2 name servers to improve the performance of the name resolution process in ENUM-based application services. We build an ENUM-based network model in which NAPTR (Naming Authority PoinTeR) resource records are registered and managed by area code at the initial registration step. ENUM promises convenience and flexibility to both PSTN and IP users, yet there is no evidence of how much delay users must tolerate when they decide to use ENUM instead of non-ENUM-based applications. We estimated the ENUM response time and showed that performance can be improved by up to three times when resources are managed by the proposed mechanism. The proposal of this paper benefits users and helps establish a policy for Tier 2 name server management.
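The Tier 2 management scheme proposed in the paper is not reproduced here; as background, the sketch below shows the standard E.164-to-ENUM domain mapping (RFC 6116 style) that precedes any NAPTR lookup. The helper name and example number are illustrative.

```python
# Minimal sketch of the E.164-to-ENUM domain mapping used for NAPTR lookups.
def e164_to_enum_domain(e164_number: str, suffix: str = "e164.arpa") -> str:
    """Convert an E.164 number into the domain queried for NAPTR records."""
    digits = [ch for ch in e164_number if ch.isdigit()]
    # Reverse the digits, separate them with dots, and append the ENUM suffix.
    return ".".join(reversed(digits)) + "." + suffix

# Usage sketch: +82-2-1234-5678 -> "8.7.6.5.4.3.2.1.2.2.8.e164.arpa"
print(e164_to_enum_domain("+82-2-1234-5678"))
```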

Study on the LOWTRAN7 Simulation of the Atmospheric Radiative Transfer Using CAGEX Data (CAGEX 관측자료를 이용한 LOWTRAN7의 대기 복사전달 모의에 대한 조사)

  • 장광미;권태영;박경윤
    • Korean Journal of Remote Sensing
    • /
    • v.13 no.2
    • /
    • pp.99-120
    • /
    • 1997
  • Solar radiation is scattered and absorbed by atmospheric constituents before it reaches the surface and, after being reflected at the surface, until it reaches the satellite sensor. Therefore, consideration of radiative transfer through the atmosphere is essential for the quantitative analysis of satellite-sensed data, especially in the shortwave region. This study examined the feasibility of using a radiative transfer code to estimate the atmospheric effects on satellite remote sensing data. To do this, the flux simulated by LOWTRAN7 was compared with CAGEX data in the shortwave region. The CAGEX (CERES/ARM/GEWEX Experiment) data provide (1) atmospheric soundings, aerosol optical depth, and albedo, (2) ARM (Atmospheric Radiation Measurement) radiation fluxes measured by pyrgeometers, a pyrheliometer, and a shadow pyranometer, and (3) broadband shortwave fluxes simulated by Fu-Liou's radiative transfer code. To simulate the aerosol effect with the radiative transfer model, the aerosol optical characteristics were derived from the observed aerosol column optical depth, Spinhirne's experimental vertical distribution of the scattering coefficient, and D'Almeida's statistical radiative characteristics of atmospheric aerosols. LOWTRAN7 simulations were performed on 31 samples from completely clear days. LOWTRAN7 results and CAGEX data were compared for the upward, downward direct, and downward diffuse solar flux at the surface and the upward solar flux at the top of the atmosphere (TOA). The standard errors of the LOWTRAN7 simulations for the above components are within 5%, except for the downward diffuse solar flux at the surface (6.9%). The results show that a large part of the error in the LOWTRAN7 flux simulation appears in the diffuse component, mainly due to scattering by atmospheric aerosols. To improve the accuracy of radiative transfer simulation by the model, better information about the radiative characteristics of atmospheric aerosols needs to be provided.
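The abstract does not specify exactly how the percent standard errors were defined; as one plausible reading, the sketch below expresses model-versus-observation agreement for a flux component as an RMSE relative to the observed mean. The flux values are synthetic placeholders, not CAGEX data.

```python
import numpy as np

def relative_standard_error(simulated, observed):
    """RMSE between simulated and observed flux, as a percent of the observed mean.
    One common convention; the paper's exact definition is not given in the abstract."""
    simulated = np.asarray(simulated, dtype=float)
    observed = np.asarray(observed, dtype=float)
    rmse = np.sqrt(np.mean((simulated - observed) ** 2))
    return 100.0 * rmse / np.mean(observed)

# Illustrative values only (W m^-2), e.g. downward direct flux on clear days.
sim = [812.0, 640.0, 705.0]
obs = [800.0, 655.0, 698.0]
print(f"{relative_standard_error(sim, obs):.1f}% relative standard error")
```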

Twitter Issue Tracking System by Topic Modeling Techniques (토픽 모델링을 이용한 트위터 이슈 트래킹 시스템)

  • Bae, Jung-Hwan;Han, Nam-Gi;Song, Min
    • Journal of Intelligence and Information Systems
    • /
    • v.20 no.2
    • /
    • pp.109-122
    • /
    • 2014
  • People nowadays create a tremendous amount of data on Social Network Services (SNS). In particular, the incorporation of SNS into mobile devices has resulted in massive amounts of data generation, thereby greatly influencing society. This is an unmatched phenomenon in history, and now we live in the Age of Big Data. SNS data satisfies the definition of Big Data in terms of the amount of data (volume), the speed of data input and output (velocity), and the variety of data types (variety). If the trend of an issue can be discovered in SNS Big Data, this information can be used as an important new source for the creation of value because it covers the whole of society. In this study, a Twitter Issue Tracking System (TITS) is designed and established to meet the needs of analyzing SNS Big Data. TITS extracts issues from Twitter texts and visualizes them on the web. The proposed system provides the following four functions: (1) provide the topic keyword set that corresponds to the daily ranking; (2) visualize the daily time-series graph of a topic over the duration of a month; (3) provide the importance of a topic through a treemap based on the score system and frequency; (4) visualize the daily time-series graph of keywords found by keyword search. The present study analyzes the Big Data generated by SNS in real time. SNS Big Data analysis requires various natural language processing techniques, including stop-word removal and noun extraction, to process various unrefined forms of unstructured data. In addition, such analysis requires the latest big data technology to rapidly process a large amount of real-time data, such as the Hadoop distributed system or NoSQL, an alternative to relational databases. We built TITS on Hadoop to optimize the processing of big data because Hadoop is designed to scale up from single-node computing to thousands of machines. Furthermore, we use MongoDB, which is classified as a NoSQL database. MongoDB is an open-source, document-oriented database that provides high performance, high availability, and automatic scaling. Unlike existing relational databases, MongoDB has no schemas or tables, and its most important goals are data accessibility and data processing performance. In the Age of Big Data, visualization of Big Data is attractive to the Big Data community because it helps analysts examine such data easily and clearly. Therefore, TITS uses the d3.js library as a visualization tool. This library is designed for creating Data-Driven Documents that bind the document object model (DOM) to data; the interaction between data is easy and useful for managing real-time data streams with smooth animation. In addition, TITS uses Bootstrap, a set of pre-configured style sheets and JavaScript plug-ins, to build the web system. The TITS Graphical User Interface (GUI) is designed using these libraries and is capable of detecting issues on Twitter in an easy and intuitive manner. The proposed work demonstrates the superiority of our issue detection techniques by matching detected issues with corresponding online news articles. The contributions of the present study are threefold. First, we suggest an alternative approach to real-time big data analysis, which has become an extremely important issue. Second, we apply a topic modeling technique that is used in various research areas, including Library and Information Science (LIS). Based on this, we confirm the utility of storytelling and time series analysis. Third, we develop a web-based system and make it available for the real-time discovery of topics. The present study conducted experiments with nearly 150 million tweets in Korea during March 2013.
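The Hadoop, MongoDB, and d3.js pipeline described above is not reproduced here; the sketch below illustrates only the topic-extraction step with gensim's LDA, assuming tweets that have already been tokenized (stop-word removal and noun extraction done). The tweet tokens and topic count are illustrative.

```python
from gensim import corpora
from gensim.models import LdaModel

# Tokenized tweets are assumed to be available; these examples are invented.
tokenized_tweets = [
    ["election", "candidate", "debate"],
    ["subway", "fare", "increase", "seoul"],
    ["election", "poll", "candidate"],
    ["fare", "bus", "increase"],
]

dictionary = corpora.Dictionary(tokenized_tweets)
corpus = [dictionary.doc2bow(doc) for doc in tokenized_tweets]

# Fit an LDA model; the number of topics is an illustrative choice.
lda = LdaModel(corpus=corpus, id2word=dictionary, num_topics=2,
               random_state=1, passes=10)

# A daily ranking of topic keywords could be built from these distributions.
for topic_id in range(lda.num_topics):
    print(topic_id, lda.show_topic(topic_id, topn=3))
```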

A Comparative Study on the Improvement of Curriculum in the Junior College for the Industrial Design Major (2년제 대학 산업디자인전공의 교육과정 개선방안에 관한 비교연구)

  • 강사임
    • Archives of design research
    • /
    • v.13 no.1
    • /
    • pp.209-218
    • /
    • 2000
  • The purpose of this study was to improve the curriculum of industrial design departments in junior colleges. To achieve this purpose, two methods were carried out: first, a job analysis of industrial designers who have worked in small and medium-sized manufacturing companies; second, a survey of the opinions of professors in junior colleges. Some results were as follows: 1. The junior college period for industrial designers remains two years, as at present, but an optional one-year advanced course can be established. 2. Practical subjects, such as the computational formative techniques needed for product development, have to be increased. In addition, elective subjects such as foreign language, manufacturing processes, new product information, and consumer behavior research have to be extended. 3. The following subjects need adjustments in title, content, and hours: (1) The need for 3D-related subjects such as computer modeling, computer rendering, and 3D modeling was high; the use of computers is required in design presentation subjects. (2) The need for advertising- and sales-related subjects such as printing, merchandising, packaging, typography, and photography was low, while the need for presentation techniques for new product development was high. (3) The need for subjects related to field practice, special lectures on practice, and reading original texts was the same as at present, but importance should not be attached to them as a mere formality. As designers keenly feel the necessity of using foreign languages, the need for language subjects was high.


A Proposal of a Keyword Extraction System for Detecting Social Issues (사회문제 해결형 기술수요 발굴을 위한 키워드 추출 시스템 제안)

  • Jeong, Dami;Kim, Jaeseok;Kim, Gi-Nam;Heo, Jong-Uk;On, Byung-Won;Kang, Mijung
    • Journal of Intelligence and Information Systems
    • /
    • v.19 no.3
    • /
    • pp.1-23
    • /
    • 2013
  • To discover significant social issues, such as unemployment, economic crisis, and social welfare, that urgently need to be solved in modern society, the existing approach is for researchers to collect opinions from experts and scholars through online or offline surveys. However, such a method is not always effective. Typically, because of cost, only a small number of survey replies are gathered. In some cases, it is also hard to find experts dealing with specific social issues. Thus, the sample set is often small and may be biased. Furthermore, regarding a social issue, several experts may draw totally different conclusions because each expert has a subjective point of view and a different background. In this case, it is considerably hard to figure out what the current social issues are and which social issues are really important. To surmount the shortcomings of the current approach, in this paper, we develop a prototype system that semi-automatically detects social issue keywords representing social issues and problems from about 1.3 million news articles issued by about 10 major domestic presses in Korea from June 2009 until July 2012. Our proposed system consists of (1) collecting and extracting texts from the collected news articles, (2) identifying only news articles related to social issues, (3) analyzing the lexical items of Korean sentences, (4) finding a set of topics regarding social keywords over time based on probabilistic topic modeling, (5) matching relevant paragraphs to a given topic, and (6) visualizing social keywords for easy understanding. In particular, we propose a novel matching algorithm relying on generative models. The goal of our matching algorithm is to best match paragraphs to each topic. Technically, using a topic model such as Latent Dirichlet Allocation (LDA), we can obtain a set of topics, each of which has relevant terms and their probability values. In our problem, given a set of text documents (e.g., news articles), LDA produces a set of topic clusters, and each topic cluster is labeled by human annotators, where each topic label stands for a social keyword. For example, suppose there is a topic (e.g., Topic1 = {(unemployment, 0.4), (layoff, 0.3), (business, 0.3)}) and a human annotator labels Topic1 as "Unemployment Problem". In this example, it is non-trivial to understand what happened to the unemployment problem in our society. In other words, looking only at social keywords, we have no idea of the detailed events occurring in our society. To tackle this matter, we develop a matching algorithm that computes the probability of a paragraph given a topic, relying on (i) the topic terms and (ii) their probability values. For instance, given a set of text documents, we segment each text document into paragraphs. In the meantime, using LDA, we extract a set of topics from the text documents. Based on our matching process, each paragraph is assigned to the topic it best matches. Finally, each topic has several best-matched paragraphs. Furthermore, suppose there are a topic (e.g., Unemployment Problem) and its best-matched paragraph (e.g., "Up to 300 workers lost their jobs at XXX company in Seoul"). In this case, we can grasp the detailed information of the social keyword, such as "300 workers", "unemployment", "XXX company", and "Seoul". In addition, our system visualizes social keywords over time. Therefore, through our matching process and keyword visualization, researchers will be able to detect social issues easily and quickly. Through this prototype system, we have detected various social issues appearing in our society and also shown the effectiveness of our proposed methods in our experimental results. Note that you can also use our proof-of-concept system at http://dslab.snu.ac.kr/demo.html.
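The abstract states only that the matching algorithm relies on topic terms and their probability values; the generative model itself is not specified. As one minimal reading, the sketch below scores each paragraph by the log-probability of its tokens under a topic's word distribution, with a small smoothing constant for out-of-topic words. The topics and paragraph are invented for illustration.

```python
import math

# Illustrative topic-word distributions (mirroring the Topic1 example above).
topics = {
    "Unemployment Problem": {"unemployment": 0.4, "layoff": 0.3, "business": 0.3},
    "Housing Problem": {"rent": 0.5, "housing": 0.3, "price": 0.2},
}

def topic_log_likelihood(paragraph_tokens, topic_terms, smoothing=1e-6):
    # Sum of log word probabilities under the topic; unseen words get `smoothing`.
    return sum(math.log(topic_terms.get(tok, smoothing)) for tok in paragraph_tokens)

def best_topic(paragraph_tokens):
    # Assign the paragraph to the topic under which it is most probable.
    return max(topics,
               key=lambda name: topic_log_likelihood(paragraph_tokens, topics[name]))

paragraph = ["layoff", "business", "unemployment", "seoul"]
print(best_topic(paragraph))  # "Unemployment Problem"
```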

A Study on the Precise Lineament Recovery of Alluvial Deposits Using Satellite Imagery and GIS

  • 이수진;황종선;이동천;김정우;석동우
    • Proceedings of the Korean Association of Geographic Information Studies Conference
    • /
    • 2003.04a
    • /
    • pp.62-62
    • /
    • 2003
  • Using Landsat TM imagery, we developed an algorithm that extracts lineaments in areas of relatively low contrast and widely distributed alluvial deposits, improving an algorithm that had previously been applied to high-contrast mountainous areas. Flat areas were selected from a digital elevation model (DEM) using local enhancement, and alluvial deposits were extracted from them. The maximum slope direction and slope were computed with Zevenbergen & Thorne's method over a 3×3 moving window to extract lineament elements crossing the alluvial deposits, and first-order lineaments were then extracted using the Hough transform. Using these, the slope of topographic cross-sections perpendicular to the alluvial deposits was inferred, the lowest point of each cross-section was obtained by spline interpolation, and the final lineaments were extracted by applying the Hough transform to these points. The algorithm used in this study shows lineaments clearly where the topographic slope converges in areas of alluvial deposits much larger than the small windows used in existing algorithms. The directions of the first-order lineaments obtained from the maximum slope direction and slope differ somewhat from those of the final lineaments. In using the topographic cross-section data perpendicular to the first-order lineaments, the start and end points of each cross-section were not chosen arbitrarily; instead, data across the alluvial deposit up to the highest point, or until the next alluvial deposit appeared, were used for interpolation, so errors can arise from differences in the amount of data to be interpolated depending on the width of the alluvial deposit. While lineaments were extracted well in wide alluvial deposits, in areas with narrow alluvial deposits or valleys, lineaments that did not coincide with slope convergence zones were extracted. This should be supplemented through continued research.
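The DEM, cell size, and the subsequent Hough-transform stage are not reproduced here; the sketch below illustrates only the 3×3 slope/aspect step using Zevenbergen & Thorne style central differences. The synthetic surface, cell size, and aspect convention are illustrative assumptions.

```python
import numpy as np

def zevenbergen_thorne_slope(dem, cell_size=30.0):
    """Return slope and aspect (radians) for interior DEM cells via central differences."""
    z = np.asarray(dem, dtype=float)
    # Central differences over the 3x3 neighbourhood of each interior cell.
    dz_dx = (z[1:-1, 2:] - z[1:-1, :-2]) / (2.0 * cell_size)
    dz_dy = (z[2:, 1:-1] - z[:-2, 1:-1]) / (2.0 * cell_size)
    slope = np.arctan(np.hypot(dz_dx, dz_dy))
    aspect = np.arctan2(dz_dy, -dz_dx)  # one common aspect convention
    return slope, aspect

# Usage sketch with a tilted synthetic surface.
dem = np.add.outer(np.arange(5) * 3.0, np.arange(5) * 1.5)
slope, aspect = zevenbergen_thorne_slope(dem)
print(np.degrees(slope[0, 0]), np.degrees(aspect[0, 0]))
```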


Two-Stage Evolutionary Algorithm for Path-Controllable Virtual Creatures (경로 제어가 가능한 가상생명체를 위한 2단계 진화 알고리즘)

  • Shim Yoon-Sik;Kim Chang-Hun
    • Journal of KIISE:Computer Systems and Theory
    • /
    • v.32 no.11_12
    • /
    • pp.682-691
    • /
    • 2005
  • We present a two-stage evolution system that produces controllable virtual creatures in a physically simulated 3D environment. Previous evolutionary methods for virtual creatures did not allow any user intervention during the evolution process, because they generated a creature's shape, locomotion, and high-level behaviors such as target following and obstacle avoidance simultaneously in a single evolution process. In this work, we divide the single system into two manageable sub-systems, which makes user interaction more feasible. In the first stage, a body structure and low-level motor controllers of a creature for straight movement are generated by an evolutionary algorithm. Next, high-level control to follow a given path is achieved by a neural network. The connection weights of the neural network are optimized by a genetic algorithm. The evolved controller could follow any given path fairly well. Moreover, users can choose or abort creatures according to their taste before the entire evolution process is finished. This paper also presents a new sinusoidal controller and a simplified hydrodynamics model for a capped cylinder, which is the basic body primitive of a creature.
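The paper's simulation environment and controllers are not reproduced here; the sketch below illustrates only the second-stage idea of a genetic algorithm optimizing the connection weights of a small feedforward controller. The network sizes, GA parameters, and the fitness function are stand-ins; in the paper, fitness would come from simulating how well the creature follows a given path.

```python
import numpy as np

rng = np.random.default_rng(0)
N_WEIGHTS = 3 * 4 + 4 * 2          # 3 inputs -> 4 hidden -> 2 outputs (illustrative)
POP, GENERATIONS, MUT_STD = 30, 50, 0.1

def controller(weights, x):
    # Tiny feedforward controller whose weights are evolved by the GA.
    w1 = weights[:12].reshape(3, 4)
    w2 = weights[12:].reshape(4, 2)
    return np.tanh(np.tanh(x @ w1) @ w2)

def fitness(weights):
    # Placeholder objective: steer toward a fixed output from sample inputs.
    inputs = rng.standard_normal((16, 3))
    target = np.array([0.5, -0.5])
    return -np.mean((controller(weights, inputs) - target) ** 2)

population = rng.standard_normal((POP, N_WEIGHTS))
for _ in range(GENERATIONS):
    scores = np.array([fitness(ind) for ind in population])
    parents = population[np.argsort(scores)[-POP // 2:]]           # truncation selection
    children = parents + rng.normal(0.0, MUT_STD, parents.shape)   # Gaussian mutation
    population = np.vstack([parents, children])

best = population[np.argmax([fitness(ind) for ind in population])]
print("best fitness:", fitness(best))
```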