• Title/Abstract/Keyword: Network mapping


DECODE: A Novel Method of DEep CNN-based Object DEtection using Chirps Emission and Echo Signals in Indoor Environment (실내 환경에서 Chirp Emission과 Echo Signal을 이용한 심층신경망 기반 객체 감지 기법)

  • Nam, Hyunsoo;Jeong, Jongpil
    • The Journal of the Institute of Internet, Broadcasting and Communication / Vol. 21, No. 3 / pp.59-66 / 2021
  • Humans mainly recognize surrounding objects through sight and hearing among the five senses (sight, hearing, smell, touch, taste), yet most recent object recognition research focuses on analyzing image sensor information. In this paper, various chirp audio signals were emitted into the observation space, the echoes were collected with a 2-channel receiving sensor and converted into spectral images, and an object recognition experiment in 3D space was conducted using a deep-learning-based image classification algorithm. The experiment was carried out under the noise and reverberation of an ordinary indoor environment rather than the ideal conditions of an anechoic chamber, and recognition from the echoes estimated the position of the object with 83% accuracy. In addition, by mapping the inference result onto the observation space as a 3D sound spatial signal and playing it back, visual information could be conveyed through sound. This suggests that object recognition research should exploit diverse echo information alongside image information, and that the technique could be applied to augmented reality through 3D sound.
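
A minimal sketch of the kind of pipeline the abstract describes: emit a chirp, turn a 2-channel echo recording into a log-spectrogram "image", and classify it with a small CNN. This is not the authors' code; the sample rate, chirp band, window sizes, and network shape are all illustrative assumptions.

```python
import numpy as np
from scipy.signal import chirp, spectrogram
import torch
import torch.nn as nn

FS = 44100                                   # assumed sampling rate (Hz)
t = np.linspace(0, 0.1, int(FS * 0.1), endpoint=False)
emitted = chirp(t, f0=2000, f1=18000, t1=0.1, method="linear")  # assumed chirp band

def echo_to_image(echo_2ch: np.ndarray) -> torch.Tensor:
    """Convert a (2, N) echo recording into a 2-channel spectrogram tensor."""
    chans = []
    for ch in echo_2ch:
        _, _, Sxx = spectrogram(ch, fs=FS, nperseg=256, noverlap=128)
        chans.append(np.log1p(Sxx))
    return torch.tensor(np.stack(chans), dtype=torch.float32).unsqueeze(0)

class EchoCNN(nn.Module):
    """Small CNN over spectrogram images; the head predicts a coarse
    object-position class (the number of classes is an assumption)."""
    def __init__(self, n_classes: int = 8):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(2, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, n_classes)

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

# usage (hypothetical recording): logits = EchoCNN()(echo_to_image(recorded_echo))
```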

MLP-based 3D Geotechnical Layer Mapping Using Borehole Database in Seoul, South Korea (MLP 기반의 서울시 3차원 지반공간모델링 연구)

  • Ji, Yoonsoo;Kim, Han-Saem;Lee, Moon-Gyo;Cho, Hyung-Ik;Sun, Chang-Guk
    • Journal of the Korean Geotechnical Society / Vol. 37, No. 5 / pp.47-63 / 2021
  • Recently, the demand for three-dimensional (3D) underground maps, and for linking them with other data, has been increasing from the perspective of digital twins. However, the sheer volume of national geotechnical survey data and the uncertainty involved in applying geostatistical techniques make it difficult to model regional subsurface geotechnical characteristics. In this study, an optimal learning model based on a multi-layer perceptron (MLP) was constructed for 3D subsurface lithological and geotechnical classification in Seoul, South Korea. First, the geotechnical layers and 3D spatial coordinates of each borehole dataset in the Seoul area were compiled into a geotechnical database in a standardized format, and pre-processing for machine learning, such as correcting missing values and normalization, was performed. An optimal model was then designed through hyperparameter optimization of the MLP and model performance evaluation, including precision and accuracy tests. Next, a 3D grid network was constructed by applying the MLP-based best-fitting model to assign a geotechnical layer class to each unit cell. The resulting 3D geotechnical layer map was evaluated by comparison with the results of a geostatistical interpolation technique and with the topsoil properties of the geological map.
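
As a rough illustration of the workflow summarized above, the sketch below fits an MLP classifier on borehole (x, y, z) coordinates with grid-searched hyperparameters and then labels a regular 3D grid. The column layout, grid spacing, and parameter ranges are assumptions, not values from the paper.

```python
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def fit_layer_model(coords: np.ndarray, layers: np.ndarray):
    """coords: (n, 3) borehole sample coordinates; layers: (n,) class labels."""
    pipe = make_pipeline(StandardScaler(), MLPClassifier(max_iter=2000))
    grid = {
        "mlpclassifier__hidden_layer_sizes": [(32,), (64, 32), (128, 64)],
        "mlpclassifier__alpha": [1e-4, 1e-3, 1e-2],
    }
    return GridSearchCV(pipe, grid, cv=5, scoring="accuracy").fit(coords, layers)

def predict_grid(model, x_rng, y_rng, z_rng, step=50.0):
    """Assign a layer class to every node of a regular 3D grid (step in metres)."""
    xs, ys, zs = (np.arange(*r, step) for r in (x_rng, y_rng, z_rng))
    gx, gy, gz = np.meshgrid(xs, ys, zs, indexing="ij")
    nodes = np.column_stack([gx.ravel(), gy.ravel(), gz.ravel()])
    return nodes, model.predict(nodes).reshape(gx.shape)
```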

Automatic gasometer reading system using selective optical character recognition (관심 문자열 인식 기술을 이용한 가스계량기 자동 검침 시스템)

  • Lee, Kyohyuk;Kim, Taeyeon;Kim, Wooju
    • Journal of Intelligence and Information Systems / Vol. 26, No. 2 / pp.1-25 / 2020
  • In this paper, we suggest an application system architecture that provides accurate, fast, and efficient automatic gasometer reading. The system captures a gasometer image with a mobile device camera, transmits the image to a cloud server over a private LTE network, and analyzes the image to extract the device ID and gas usage amount using selective optical character recognition based on deep learning. In general, an image contains many types of characters, and optical character recognition extracts all of them, but some applications need to ignore character types that are not of interest and focus only on specific ones. For example, an automatic gasometer reading system only needs to extract the device ID and gas usage amount from gasometer images in order to bill users; strings that are not of interest, such as the device type, manufacturer, manufacturing date, and specifications, carry no value for the application. The application therefore has to analyze only the region of interest and the specific character types to extract valuable information. We adopted CNN (Convolutional Neural Network) based object detection and CRNN (Convolutional Recurrent Neural Network) technology for selective optical character recognition that analyzes only the region of interest. We built three neural networks for the application system: the first is a convolutional neural network that detects the regions of interest containing the gas usage amount and device ID character strings, the second is another convolutional neural network that transforms the spatial information of a region of interest into a sequence of feature vectors, and the third is a bi-directional long short-term memory network that converts the sequential information into character strings by time-series mapping from feature vectors to characters. In this work, the character strings of interest are the device ID, which consists of 12 Arabic numerals, and the gas usage amount, which consists of 4-5 Arabic numerals. All system components are implemented on Amazon Web Services with an Intel Xeon E5-2686 v4 CPU and an NVIDIA Tesla V100 GPU. The architecture adopts a master-slave processing structure for efficient and fast parallel handling of about 700,000 requests per day. A mobile device captures a gasometer image and transmits it to the master process in the AWS cloud. The master process runs on the Intel Xeon CPU and pushes each reading request into an input queue with FIFO (First In First Out) structure. The slave process, which comprises the three deep neural networks performing character recognition, runs on the NVIDIA GPU and continuously polls the input queue for recognition requests. When a request arrives, the slave process converts the image into the device ID string, the gas usage amount string, and the position information of those strings, returns the result to an output queue, and goes back to polling the input queue. The master process takes the final information from the output queue and delivers it to the mobile device. A total of 27,120 gasometer images were used for training, validation, and testing of the three deep neural networks: 22,985 images for training and validation, and 4,135 images for testing. The 22,985 images were randomly split at an 8:2 ratio into training and validation sets for each training epoch. The 4,135 test images were categorized into five types (normal, noise, reflex, scale, and slant): normal data are clean images, noise means images with a noise signal, reflex means images with light reflection in the gasometer region, scale means images with a small object size due to long-distance capture, and slant means images that are not horizontally level. The final character string recognition accuracies for the device ID and gas usage amount on normal data are 0.960 and 0.864, respectively.
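
The recognition stage described above (a CNN that turns the cropped region of interest into a width-wise feature sequence, followed by a bidirectional LSTM) is commonly implemented as a CRNN. The sketch below is a generic CRNN of that shape, not the authors' network; layer sizes and the 11-way output (10 digits plus a CTC blank) are assumptions.

```python
import torch
import torch.nn as nn

class DigitCRNN(nn.Module):
    def __init__(self, n_symbols: int = 11, hidden: int = 128):
        super().__init__()
        self.cnn = nn.Sequential(                       # input: (B, 1, 32, W)
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d((2, 1)),
            nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d((1, None)),
        )                                               # output: (B, 128, 1, W')
        self.rnn = nn.LSTM(128, hidden, bidirectional=True, batch_first=True)
        self.fc = nn.Linear(2 * hidden, n_symbols)

    def forward(self, x):
        f = self.cnn(x).squeeze(2).permute(0, 2, 1)     # (B, W', 128) feature sequence
        seq, _ = self.rnn(f)
        return self.fc(seq)                             # per-step symbol logits

# usage: logits = DigitCRNN()(torch.randn(1, 1, 32, 160))  # then CTC decode
```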

The PRISM-based Rainfall Mapping at an Enhanced Grid Cell Resolution in Complex Terrain (복잡지형 고해상도 격자망에서의 PRISM 기반 강수추정법)

  • Chung, U-Ran;Yun, Kyung-Dahm;Cho, Kyung-Sook;Yi, Jae-Hyun;Yun, Jin-I.
    • Korean Journal of Agricultural and Forest Meteorology / Vol. 11, No. 2 / pp.72-78 / 2009
  • The demand for rainfall data in gridded digital formats has increased in recent years due to the close linkage between hydrological models and decision support systems using geographic information systems. One of the most widely used tools for digital rainfall mapping is PRISM (parameter-elevation regressions on independent slopes model), which uses point data (rain gauge stations), a digital elevation model (DEM), and other spatial datasets to generate repeatable estimates of monthly and annual precipitation. In PRISM, rain gauge stations are assigned weights that account for climatically important factors besides elevation, and aspect and topographic exposure are simulated by dividing the terrain into topographic facets. The facet size, or grid cell resolution, is determined by the density of rain gauge stations, and a 5 × 5 km grid cell is considered the lower limit under the station density available in Korea. The PRISM algorithms were implemented for a 270 m DEM of South Korea in a scripting environment (Python), and the relevant weights for each 270 m grid cell were derived from the monthly data of 432 official rain gauge stations. For each grid cell, weighted monthly precipitation data from at least 5 nearby stations were regressed against elevation, and the selected linear regression equations, combined with the 270 m DEM, were used to generate a digital precipitation map of South Korea at 270 m resolution. Among the 1.25 million grid cells, precipitation estimates at 166 cells where measurements were made by the Korea Water Corporation rain gauge network were extracted, and the monthly estimation errors were evaluated. An average 10% reduction in root mean square error (RMSE), compared with the RMSE of the original 5 km PRISM estimates, was found for months with more than 100 mm of precipitation. This modified PRISM can therefore be used for rainfall mapping in the rainy season (May to September) at a much higher spatial resolution than the original PRISM without losing accuracy.
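
A minimal sketch of the per-cell regression step described above, under simplifying assumptions: stations are weighted here only by inverse distance (PRISM combines several weighting factors), and a weighted linear regression of monthly precipitation on elevation is applied to the cell's DEM elevation.

```python
import numpy as np

def cell_precipitation(cell_xy, cell_elev, stn_xy, stn_elev, stn_precip, k=5):
    """Estimate precipitation for one grid cell from the k nearest stations."""
    d = np.hypot(*(stn_xy - cell_xy).T)          # distances to all stations
    idx = np.argsort(d)[:k]                      # k nearest stations
    w = 1.0 / np.maximum(d[idx], 1e-6)           # assumed inverse-distance weights
    # weighted least squares: precip ~ a * elevation + b
    A = np.column_stack([stn_elev[idx], np.ones(k)])
    W = np.diag(w)
    a, b = np.linalg.solve(A.T @ W @ A, A.T @ W @ stn_precip[idx])
    return a * cell_elev + b
```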

A Microgravity for Mapping and Monitoring the Subsurface Cavities (지하 공동의 탐지와 모니터링을 위한 고정밀 중력탐사)

  • Park, Yeong-Sue;Rim, Hyoung-Rae;Lim, Mu-Taek;Koo, Sung-Bon
    • Geophysics and Geophysical Exploration / Vol. 10, No. 4 / pp.383-392 / 2007
  • Karstic features and mining-related cavities not only severely restrict land utilization but also raise serious concerns about geohazards and groundwater contamination. A microgravity survey was applied to detect, map, and monitor karstic cavities at the test site in Muan prepared by KIGAM. The gravity data were collected with an AutoGrav CG-3 gravimeter at about 800 stations at 5 m intervals along paddy paths. The density distribution beneath the profiles was obtained by two-dimensional inversion based on the minimum support stabilizing functional, which produced better-focused images of density discontinuities. We also imaged the three-dimensional density distribution by growing-body inversion, using solutions from Euler deconvolution as a priori information. The density image showed that the cavities had been dissolved, enlarged, and connected into a cavity network system, which was supported by drill-hole logs. A time-lapse microgravity survey was carried out on the road in the test site to monitor the change in the subsurface density distribution before and after grouting. The data were adjusted to reduce the effects of the differing conditions of each survey and inverted to density distributions; the resulting change in density structure over the elapsed time reflects the effects of grouting. This case history at the Muan test site shows that microgravity surveying with μGal-level accuracy and precision is an effective and practical tool for detecting, mapping, and monitoring subsurface cavities.
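
For scale, the sketch below is a back-of-the-envelope forward model (not the survey's inversion): the peak surface gravity deficit of a spherical cavity, in μGal, under an assumed density contrast. It only illustrates why μGal-level precision is needed for cavity detection.

```python
import numpy as np

G = 6.674e-11                       # gravitational constant (m^3 kg^-1 s^-2)

def cavity_anomaly_ugal(radius_m, depth_m, density_contrast=-2000.0):
    """Peak surface anomaly of a sphere with the given density contrast (kg/m^3)."""
    mass = density_contrast * 4.0 / 3.0 * np.pi * radius_m ** 3
    dg = G * mass / depth_m ** 2    # vertical attraction directly above the sphere (m/s^2)
    return dg * 1e8                 # 1 m/s^2 = 1e8 microGal

# e.g. cavity_anomaly_ugal(2.0, 10.0) -> roughly -4.5 microGal for an assumed
# 2 m radius air-filled cavity at 10 m depth
```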

Eye Gaze Tracking System Under Natural Head Movements (머리 움직임이 자유로운 안구 응시 추정 시스템)

  • ;Matthew, Sked;Qiang, Ji
    • Journal of the Institute of Electronics Engineers of Korea SC / Vol. 41, No. 5 / pp.57-64 / 2004
  • We propose an eye gaze tracking system that works under natural head movements. It consists of one narrow-field-of-view CCD camera, two mirrors whose reflection angles are controlled, and active infrared illumination. The mirror angles are computed with geometric and linear-algebra calculations so that the pupil image stays on the optical axis of the camera. The system allows the subject's head to move 90 cm horizontally and 60 cm vertically, with spatial resolutions of about 6° and 7°, respectively, and estimates gaze points at 10-15 frames/sec. As the gaze mapping function, we use hierarchical generalized regression neural networks (H-GRNN) based on a two-pass GRNN. Gaze accuracy reached 94% with H-GRNN, a 9-point improvement over the 85% of a plain GRNN, even when the head or face was slightly rotated. The system does not have a high spatial gaze resolution, but it allows natural head movements while maintaining robust and accurate gaze tracking, and it does not need to be re-calibrated when the subject changes.
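
A GRNN gaze mapping can be sketched as a single-pass Nadaraya-Watson estimator: calibration samples of eye features are stored, and a new feature vector is mapped to screen coordinates by a Gaussian-weighted average. The hierarchical two-pass (H-GRNN) scheme of the paper is not reproduced here; the smoothing parameter and feature layout are assumptions.

```python
import numpy as np

class GRNN:
    """Generalized regression neural network (kernel regression) gaze mapper."""
    def __init__(self, sigma: float = 0.1):
        self.sigma = sigma

    def fit(self, features: np.ndarray, targets: np.ndarray):
        """features: (n, d) calibration eye features; targets: (n, 2) screen coords."""
        self.X, self.Y = features, targets
        return self

    def predict(self, x: np.ndarray) -> np.ndarray:
        d2 = np.sum((self.X - x) ** 2, axis=1)          # squared distances to samples
        w = np.exp(-d2 / (2.0 * self.sigma ** 2))       # Gaussian kernel weights
        return (w[:, None] * self.Y).sum(axis=0) / (w.sum() + 1e-12)
```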

A Study on the Multiple Texture Rendering System for 3D Image Signal Recognition (3차원 영상인식을 위한 다중영상매핑 시스템에 대한 연구)

  • Kim, Sangjune;Park, Chunseok
    • Journal of the Society of Disaster Information / Vol. 12, No. 1 / pp.47-53 / 2016
  • The technique to be developed in this study is intended to be applied to existing integrated control systems, so that the multiple texture rendering technology for 3D image signal recognition makes real-time video the center of a building control system. If the planned multi-image mapping system is developed, CCTV camera and network technology alone, without building a separate linked system or hiring additional security personnel, can provide control services equivalent to an actual patrol, and, where necessary, it can be linked with other systems so that the intention of the patrol officer (Ranger) is reflected.

2D Human Pose Estimation based on Object Detection using RGB-D information

  • Park, Seohee;Ji, Myunggeun;Chun, Junchul
    • KSII Transactions on Internet and Information Systems (TIIS) / Vol. 12, No. 2 / pp.800-816 / 2018
  • In recent years, video surveillance research has combined image analysis technology and deep learning methods to recognize various pedestrian behaviors and analyze the overall situation of objects. Human Activity Recognition (HAR), an important issue in video surveillance research, aims to detect abnormal pedestrian behavior in CCTV environments. To recognize human behavior, it is necessary to detect the human in the image and to estimate the pose of the detected human. In this paper, we propose a novel approach to 2D human pose estimation based on object detection using RGB-D information. By adding depth information to RGB information, which on its own has limited ability to detect objects due to the lack of topological information, we can improve detection accuracy. Subsequently, the rescaled region of the detected object is fed to Convolutional Pose Machines (CPM), a sequential prediction structure based on a convolutional neural network. We use CPM to generate belief maps that predict the positions of keypoints representing human body parts, and estimate the human pose by detecting 14 key body points. The experimental results show that the proposed method detects target objects robustly under occlusion, and that providing an accurately detected region as the input of the CPM enables 2D human pose estimation. As future work, we will estimate the 3D human pose by mapping the 2D coordinates of the body parts onto 3D space, which can provide useful human behavior information for HAR research.
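
A small sketch of the final step described above (assumed shapes, not the authors' code): turning CPM-style belief maps for 14 body parts into 2D keypoints in the original image, given the detected person's bounding box.

```python
import numpy as np

def keypoints_from_belief_maps(belief: np.ndarray, box_xywh, thresh=0.1):
    """belief: (14, H, W) maps for the rescaled person crop;
    box_xywh: (x, y, w, h) of the detected person in the full image."""
    x0, y0, bw, bh = box_xywh
    n, H, W = belief.shape
    keypoints = []
    for k in range(n):
        flat = belief[k].argmax()
        py, px = divmod(flat, W)                 # peak location in map coordinates
        conf = float(belief[k, py, px])
        if conf < thresh:
            keypoints.append(None)               # part not visible / occluded
            continue
        # rescale the peak back into full-image coordinates
        keypoints.append((x0 + px / W * bw, y0 + py / H * bh, conf))
    return keypoints
```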

A Study on the Structures and Characteristics of National Policy Knowledge (국가 정책지식의 구조와 특성에 관한 연구)

  • Lee, Ji-Sue;Chung, Young-Mee
    • Journal of Information Management / Vol. 41, No. 2 / pp.1-30 / 2010
  • This study analyzed the research output of 19 national research institutions in their dominant research areas. The policy knowledge produced by these institutions over the past 5 years mainly concerned 10 policies dealing with economic and social issues. Similarities between the institutions' research subjects were visualized with MDS mapping. The study also identified the issue-attention cycles of 5 selected policies and examined the correlation between those cycles and the volume of policy knowledge produced. The knowledge structure of each policy was mapped using co-word analysis and Ward's clustering. It was also found that institutions researching similar subjects showed citation preferences for one another.
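
A minimal sketch of the co-word step mentioned above (not the study's code): build a term co-occurrence matrix from per-document keyword lists and group terms with Ward's hierarchical clustering. The keyword input and the number of clusters are assumptions.

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.spatial.distance import squareform

def coword_clusters(docs_keywords, n_clusters=5):
    """docs_keywords: list of keyword lists, one per policy document."""
    terms = sorted({t for doc in docs_keywords for t in doc})
    idx = {t: i for i, t in enumerate(terms)}
    co = np.zeros((len(terms), len(terms)))
    for doc in docs_keywords:
        for a in doc:
            for b in doc:
                if a != b:
                    co[idx[a], idx[b]] += 1      # count keyword co-occurrences
    dist = 1.0 - co / (co.max() + 1e-12)         # turn similarity into a distance
    np.fill_diagonal(dist, 0.0)
    labels = fcluster(linkage(squareform(dist, checks=False), method="ward"),
                      n_clusters, criterion="maxclust")
    return dict(zip(terms, labels))
```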

Performance Improvement of Radial Basis Function Neural Networks Using Adaptive Feature Extraction (적응적 특징추출을 이용한 Radial Basis Function 신경망의 성능개선)

  • 조용현
    • Journal of Korea Multimedia Society / Vol. 3, No. 3 / pp.253-262 / 2000
  • This paper proposes a new RBF neural network that determines the number and the centers of its hidden neurons through adaptive feature extraction from the input data. Principal component analysis is applied to extract features adaptively by reducing the dimension of the input data, so that the network simultaneously gains the benefit of principal component analysis, which maps the input data into a set of statistically independent features, and of the RBF network itself. The proposed network was applied to two-class classification of 200 cases from a breast cancer database. The simulation results show that the proposed network achieves shorter learning time and better classification of test data than a network using the k-means clustering algorithm, and that it is less sensitive than k-means to the initial weight setting and the range of the smoothing factor.
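
A rough sketch of the general idea, under stated assumptions: a PCA front end reduces the inputs to decorrelated features, and an RBF layer built on those features feeds a linear output layer solved by least squares. How the paper derives the hidden-unit centres and their number from the PCA features is not reproduced; here the centres are simply an evenly spaced subset of the projected training samples.

```python
import numpy as np
from sklearn.decomposition import PCA

class PCARBFNet:
    def __init__(self, n_components=5, n_centers=10, gamma=1.0):
        self.pca = PCA(n_components=n_components)   # assumed feature dimension
        self.n_centers, self.gamma = n_centers, gamma

    def _hidden(self, Z):
        # Gaussian RBF activations of the projected samples against the centres
        d2 = ((Z[:, None, :] - self.centers[None, :, :]) ** 2).sum(-1)
        return np.exp(-self.gamma * d2)

    def fit(self, X, y):
        Z = self.pca.fit_transform(X)
        pick = np.linspace(0, len(Z) - 1, self.n_centers, dtype=int)
        self.centers = Z[pick]                      # assumed centre selection
        H = self._hidden(Z)
        self.w, *_ = np.linalg.lstsq(H, y, rcond=None)  # linear output weights
        return self

    def predict(self, X):
        return self._hidden(self.pca.transform(X)) @ self.w
```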
