• Title/Summary/Keyword: software sensor

Search Results: 1,191

Design of Hardware(Hacker Board) for IoT Security Education Utilizing Dual MCUs (이중 MCU를 활용한 IoT 보안 교육용 하드웨어(해커보드) 설계)

  • Dong-Won Kim
    • Convergence Security Journal
    • /
    • v.24 no.1
    • /
    • pp.43-49
    • /
    • 2024
  • The convergence of education and technology has been emphasized, leading to the application of educational technology (EdTech) in the field of education. EdTech provides learner-centered, customized learning environments through various media and learning situations. In this paper, we designed hardware for EdTech-based educational tools for IoT security education in the field of cybersecurity education. The hardware is based on a dual microcontroller unit (MCU) within a single board, allowing both attack and defense to be performed. To leverage various sensors in the Internet of Things (IoT), the hardware is modularly designed. From an educational perspective, utilizing EdTech in cybersecurity education enhances engagement by incorporating tangible physical teaching aids. The proposed research suggests that this IoT security education hardware design can serve as a reference for simplifying the creation of a security education environment for embedded hardware, software, sensor networks, and other areas that are challenging to address in traditional education.

Building Fire Monitoring and Escape Navigation System Based on AR and IoT Technologies (AR과 IoT 기술을 기반으로 한 건물 화재 모니터링 및 탈출 내비게이션 시스템)

  • Wentao Wang;Seung-Yong Lee;Sanghun Park;Seung-Hyun Yoon
    • Journal of the Korea Computer Graphics Society
    • /
    • v.30 no.3
    • /
    • pp.159-169
    • /
    • 2024
  • This paper proposes a new real-time fire monitoring and evacuation navigation system by integrating Augmented Reality (AR) technology with Internet of Things (IoT) technology. The proposed system collects temperature data through IoT temperature measurement devices installed in buildings and automatically transmits it to a MySQL cloud database via an IoT platform, enabling real-time and accurate data monitoring. Subsequently, the real-time IoT data is visualized on a 3D building model generated through Building Information Modeling (BIM), and the model is represented in the real world using AR technology, allowing intuitive identification of the fire origin. Furthermore, by utilizing the Vuforia engine's Device Tracking and Area Targets features, the system tracks the user's real-time location and employs an enhanced A* algorithm to find the optimal evacuation route among multiple exits. The paper evaluates the proposed system's practicality and demonstrates its effectiveness for rapid and safe evacuation through user experiments based on various virtual fire scenarios.
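The abstract mentions an enhanced A* algorithm for choosing among multiple exits; the paper's specific enhancements are not described here, but the baseline idea can be sketched as standard A* on an occupancy grid, treating cells affected by fire as blocked and targeting the nearest of several exits. The grid, unit move costs, and layout below are illustrative assumptions, not the paper's data.

```python
import heapq

def astar(grid, start, exits):
    """Standard A* over a 2D grid (0 = free, 1 = blocked by fire).

    Returns the shortest path from `start` to the closest of several
    `exits`, using Manhattan distance to the nearest exit as the heuristic.
    """
    rows, cols = len(grid), len(grid[0])
    exits = set(exits)
    h = lambda p: min(abs(p[0] - e[0]) + abs(p[1] - e[1]) for e in exits)
    open_q = [(h(start), 0, start, [start])]   # (f, g, node, path)
    best_g = {}
    while open_q:
        f, g, node, path = heapq.heappop(open_q)
        if node in exits:
            return path
        if best_g.get(node, float("inf")) <= g:
            continue                            # already expanded cheaper
        best_g[node] = g
        r, c = node
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                nxt = (nr, nc)
                heapq.heappush(open_q, (g + 1 + h(nxt), g + 1, nxt, path + [nxt]))
    return None

# A small floor plan: 1 marks cells blocked by fire.
floor = [
    [0, 0, 0, 0],
    [0, 1, 1, 0],
    [0, 0, 0, 0],
]
route = astar(floor, start=(0, 0), exits=[(2, 3)])
```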

A Basic Guide to Network Simulation Using OMNeT++ (OMNeT++을 이용한 네크워크 시뮬레이션 기초 가이드)

  • Sooyeon Park
    • Journal of Internet Computing and Services
    • /
    • v.25 no.4
    • /
    • pp.1-6
    • /
    • 2024
  • OMNeT++ (Objective Modular Network Testbed in C++) is an extensible and modular C++ simulation library and framework for building network simulators. OMNeT++ provides independently developed simulation models for various fields, including sensor networks and Internet protocols, enabling researchers to use the tools and features required for their desired simulations. OMNeT++ uses the NED (Network Description) language to define nodes and network topologies, and the creation and behavior of the defined network objects are implemented in C++. Moreover, the INET framework is an open-source model library for the OMNeT++ simulation environment containing models for various networking protocols and components, making it convenient for designing and validating new network protocols. This paper explains the concepts of OMNeT++ and the procedures for network simulation using the INET framework, to assist novice researchers in modeling and analyzing various network scenarios.
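At the core of OMNeT++ (and discrete-event network simulators generally) is a future-event queue ordered by timestamp. OMNeT++ itself is written in C++ with NED topology files; the Python toy below is only a conceptual sketch of that event loop, with illustrative "modules" and delays, not OMNeT++ code.

```python
import heapq

class Sim:
    """Toy discrete-event kernel: a clock advanced by a timestamp-ordered
    future-event queue, the core idea behind OMNeT++-style simulators."""
    def __init__(self):
        self.now = 0.0
        self._events = []   # heap of (time, seq, callback)
        self._seq = 0       # tie-breaker for events at the same time
        self.log = []

    def schedule(self, delay, callback):
        heapq.heappush(self._events, (self.now + delay, self._seq, callback))
        self._seq += 1

    def run(self):
        while self._events:
            self.now, _, cb = heapq.heappop(self._events)
            cb(self)

# Two illustrative "modules": a source sends a packet, a sink receives it.
def send(sim):
    sim.log.append((sim.now, "src: packet sent"))
    sim.schedule(0.5, recv)        # assumed 0.5 s propagation delay

def recv(sim):
    sim.log.append((sim.now, "sink: packet received"))

sim = Sim()
sim.schedule(1.0, send)
sim.run()
```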

T-Cache: a Fast Cache Manager for Pipeline Time-Series Data (T-Cache: 시계열 배관 데이타를 위한 고성능 캐시 관리자)

  • Shin, Je-Yong;Lee, Jin-Soo;Kim, Won-Sik;Kim, Seon-Hyo;Yoon, Min-A;Han, Wook-Shin;Jung, Soon-Ki;Park, Se-Young
    • Journal of KIISE:Computing Practices and Letters
    • /
    • v.13 no.5
    • /
    • pp.293-299
    • /
    • 2007
  • Intelligent pipeline inspection gauges (PIGs) are inspection vehicles that move along within a (gas or oil) pipeline and acquire signals (also called sensor data) from their surrounding rings of sensors. By analyzing the signals captured by intelligent PIGs, we can detect pipeline defects, such as holes and curvatures, and other potential causes of gas explosions. Two major data access patterns are apparent when an analyzer accesses the pipeline signal data. The first is a sequential pattern, where an analyzer reads the sensor data only once, in sequential fashion. The second is a repetitive pattern, where an analyzer repeatedly reads the signal data within a fixed range; this is the dominant pattern in analyzing the signal data. The existing PIG software reads signal data directly from the server at every user's request, incurring network transfer and disk access costs. It works well only for the sequential pattern, not for the more dominant repetitive pattern. This problem becomes very serious in a client/server environment where several analysts analyze the signal data concurrently. To tackle this problem, we devised a fast in-memory cache manager, called T-Cache, that treats pipeline sensor data as multiple time-series and caches the time-series data efficiently. To the best of the authors' knowledge, this is the first research on caching pipeline signals on the client side. We propose the new concept of the signal cache line as the caching unit: a set of time-series signal data covering a fixed distance. We also provide the various data structures, including smart cursors, and the algorithms used in T-Cache. Experimental results show that T-Cache performs much better for the repetitive pattern in terms of disk I/Os and elapsed time. Even with the sequential pattern, T-Cache shows almost the same performance as a system without caching, indicating that the caching overhead in T-Cache is negligible.
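The signal-cache-line idea described above can be sketched as an LRU cache keyed by fixed-distance line numbers, so that repeated reads over a range hit memory instead of the server. This is a minimal illustration under assumed parameters (line length, fetch interface), not the paper's T-Cache implementation, which adds smart cursors and other structures.

```python
from collections import OrderedDict

LINE_LEN = 100  # one "signal cache line" spans 100 distance units (assumed)

class TCacheSketch:
    """Sketch: cache fixed-distance lines of time-series sensor data,
    evicting the least-recently-used line when capacity is reached."""
    def __init__(self, capacity, fetch):
        self.capacity = capacity      # max cache lines held in memory
        self.fetch = fetch            # server fetch: line_no -> list of values
        self.lines = OrderedDict()    # line_no -> data, in LRU order
        self.hits = self.misses = 0

    def read(self, distance):
        line_no = distance // LINE_LEN
        if line_no in self.lines:
            self.hits += 1
            self.lines.move_to_end(line_no)       # mark most-recently used
        else:
            self.misses += 1
            if len(self.lines) >= self.capacity:  # evict least-recently used
                self.lines.popitem(last=False)
            self.lines[line_no] = self.fetch(line_no)
        return self.lines[line_no][distance % LINE_LEN]

# Fake server where the signal value equals the distance itself.
cache = TCacheSketch(capacity=2,
                     fetch=lambda n: [n * LINE_LEN + i for i in range(LINE_LEN)])
first = [cache.read(d) for d in range(150, 160)]   # repetitive-range reads
again = [cache.read(d) for d in range(150, 160)]   # served from cache
```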

The Individual Discrimination Location Tracking Technology for Multimodal Interaction at the Exhibition (전시 공간에서 다중 인터랙션을 위한 개인식별 위치 측위 기술 연구)

  • Jung, Hyun-Chul;Kim, Nam-Jin;Choi, Lee-Kwon
    • Journal of Intelligence and Information Systems
    • /
    • v.18 no.2
    • /
    • pp.19-28
    • /
    • 2012
  • After the internet era, we are moving toward a ubiquitous society. People are now interested in multimodal interaction technology, which enables an audience to interact naturally with the computing environment at exhibition venues such as galleries, museums, and parks. There are also attempts to provide additional services based on the location of the audience, or to improve and deploy interaction between exhibits and audience by analyzing people's usage patterns. In order to provide a multimodal interaction service to the audience at an exhibition, it is important to distinguish individuals and trace their locations and routes. For outdoor location tracking, GPS is widely used: it can obtain the real-time location of fast-moving subjects, making it one of the key technologies in fields requiring location tracking. However, because GPS locates users via satellites, it cannot be used indoors, where the satellite signal cannot be received. For this reason, studies on indoor location tracking use very-short-range communication technologies such as ZigBee, UWB, and RFID, as well as mobile communication networks and wireless LAN. These technologies have shortcomings, however: the audience must carry an additional sensor device, and the system becomes difficult and expensive as the density of the target area increases. In addition, a typical exhibition environment contains many obstacles for the network, which degrades system performance. Above all, the biggest problem is that interaction methods based on such device-centric technologies cannot provide a natural service to users. Moreover, because these systems rely on sensor recognition, every user must be equipped with a device, which limits the number of users who can use the system simultaneously.
In order to make up for these shortcomings, this study proposes a technology that obtains exact user location information through location mapping using Wi-Fi and the 3D cameras of smartphones. We applied the signal amplitude of wireless LAN access points to develop a low-cost indoor location tracking system. APs are cheaper than the devices used in other tracking techniques, and by installing software on the user's mobile device, the device itself can serve directly as part of the tracking system. We used the Microsoft Kinect sensor as the 3D camera. Kinect is equipped with functions for discriminating depth and human-body information within the shooting area, so it is appropriate for extracting the user's body, vector, and acceleration information at low cost. We determine the location of audience members using the cell ID obtained from the Wi-Fi signal. By using smartphones as the basic device for the location service, we remove the need for an additional tagging device and provide an environment in which multiple users can receive the interaction service simultaneously. The 3D cameras located in each cell area obtain the exact location and status information of the users. They are connected to the Camera Client, which calculates the mapping information aligned to each cell and obtains the exact location, status, and pattern information of the audience. The location mapping technique of the Camera Client decreases the error rate of indoor location services, increases the accuracy of individual discrimination within an area through body-information-based discrimination, and establishes a foundation for multimodal interaction technology at exhibitions. The calculated data and information enable users to receive the appropriate interaction service through the main server.
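The cell-ID step described above, locating a user from Wi-Fi signal amplitude, can be sketched as assigning the user to the cell of the strongest visible access point. The data model below (a BSSID-to-cell map and an RSSI scan) is an illustrative assumption, not the paper's actual system.

```python
def cell_from_scan(scan, ap_cells):
    """Assign the user to the cell of the strongest known AP.

    `scan` maps AP BSSID -> RSSI in dBm (higher = stronger signal);
    `ap_cells` maps BSSID -> cell ID. Unknown APs are ignored.
    """
    visible = {bssid: rssi for bssid, rssi in scan.items() if bssid in ap_cells}
    if not visible:
        return None                       # no known AP in range
    strongest = max(visible, key=visible.get)
    return ap_cells[strongest]

# Hypothetical site survey: which AP belongs to which exhibition cell.
ap_cells = {"ap-01": "cell-A", "ap-02": "cell-B", "ap-03": "cell-B"}
scan = {"ap-01": -71, "ap-02": -58, "unknown": -40}   # dBm readings
cell = cell_from_scan(scan, ap_cells)   # ap-02 is the strongest known AP
```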

An Implementation of OTB Extension to Produce TOA and TOC Reflectance of LANDSAT-8 OLI Images and Its Product Verification Using RadCalNet RVUS Data (Landsat-8 OLI 영상정보의 대기 및 지표반사도 산출을 위한 OTB Extension 구현과 RadCalNet RVUS 자료를 이용한 성과검증)

  • Kim, Kwangseob;Lee, Kiwon
    • Korean Journal of Remote Sensing
    • /
    • v.37 no.3
    • /
    • pp.449-461
    • /
    • 2021
  • Analysis Ready Data (ARD) for optical satellite images is a pre-processed product generated by applying sensor-specific spectral characteristics and viewing parameters. Atmospheric correction is one of the fundamental and complicated topics; it produces Top-of-Atmosphere (TOA) and Top-of-Canopy (TOC) reflectance from multi-spectral image sets. Most remote sensing software provides algorithms or processing schemes dedicated to these corrections for the Landsat-8 OLI sensor. Furthermore, Google Earth Engine (GEE) provides direct access to Landsat reflectance products, USGS-based ARD (USGS-ARD), in the cloud environment. We implemented an atmospheric correction extension for Orfeo ToolBox (OTB), an open-source remote sensing package for manipulating and analyzing high-resolution satellite images. To our knowledge this is the first such tool, as OTB itself has not provided calibration modules for any Landsat sensor. Using this extension, we conducted absolute atmospheric correction on Landsat-8 OLI images of Railroad Valley, United States (RVUS) and validated the resulting reflectance products against RVUS reflectance data sets from the RadCalNet portal. The results showed that the reflectance products from the OTB extension differed from RadCalNet RVUS data by less than 5%. In addition, we performed a comparative analysis with reflectance products obtained from other open-source tools, namely the QGIS semi-automatic classification plugin and SAGA, alongside USGS-ARD products. Compared to the other two open-source tools, the reflectance products from the OTB extension showed high consistency with USGS-ARD, within an acceptable level of the RadCalNet RVUS measurement range. This study verified the atmospheric correction processor in the OTB extension and demonstrated its potential applicability to other satellite sensors, such as the Compact Advanced Satellite (CAS)-500 or new optical satellites.
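For reference, the standard USGS rescaling used to derive Landsat-8 Level-1 TOA reflectance (the kind of computation such an extension performs) is a linear scaling of the quantized digital number followed by a solar-elevation correction. The scaling factors below are typical MTL-file values, used here only for illustration.

```python
import math

def toa_reflectance(dn, mult, add, sun_elev_deg):
    """USGS Landsat-8 Level-1 TOA reflectance:
    rho' = M_rho * Q_cal + A_rho, then rho = rho' / sin(sun elevation)."""
    rho_prime = mult * dn + add
    return rho_prime / math.sin(math.radians(sun_elev_deg))

# Typical OLI MTL scaling factors (REFLECTANCE_MULT/ADD_BAND_n).
MULT, ADD = 2.0e-5, -0.1
r = toa_reflectance(dn=20000, mult=MULT, add=ADD, sun_elev_deg=60.0)
```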

Acquisition of Subcentimeter GSD Images Using UAV and Analysis of Visual Resolution (UAV를 이용한 Subcentimeter GSD 영상의 취득 및 시각적 해상도 분석)

  • Han, Soohee;Hong, Chang-Ki
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography
    • /
    • v.35 no.6
    • /
    • pp.563-572
    • /
    • 2017
  • The purpose of this study is to investigate the effect of flight height, flight speed, camera shutter exposure time, and autofocusing on the visual resolution of images, in order to obtain ultra-high-resolution images with a GSD of less than 1 cm. It also aims to evaluate how easily various types of aerial targets can be recognized. For this purpose, we measured visual resolution using a 7952×5304-pixel 35 mm CMOS sensor and a 55 mm prime lens at 20 m intervals from 20 m to 120 m above ground. With autofocusing, the measured visual resolution was 1.1~1.6 times the theoretical GSD; without autofocusing, 1.5~3.5 times. Next, images were captured at 80 m above ground at a constant flight speed of 5 m/s while halving the exposure time from 1/60 s down to 1/2000 s. Assuming blur is allowed within one pixel, the visual resolution was 1.3~1.5 times the theoretical GSD when the exposure stayed within the longest allowable exposure time, and 1.4~3.0 times when it did not. When aerial targets printed on A4 paper are shot from within 80 m above ground, coded targets can be recognized automatically by commercial software, and both general and coded targets can be recognized manually with ease.
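The quantities in this abstract follow from the usual photogrammetric formulas: GSD = pixel pitch × height / focal length, and forward motion blur = ground speed × exposure time. The 36 mm sensor width below is an assumption consistent with a full-frame 35 mm sensor, used only to illustrate the arithmetic.

```python
def gsd(pixel_pitch_m, height_m, focal_len_m):
    """Ground sample distance for a nadir image: pitch * H / f."""
    return pixel_pitch_m * height_m / focal_len_m

def motion_blur_m(speed_ms, exposure_s):
    """Ground distance the camera travels during one exposure."""
    return speed_ms * exposure_s

# Assumed full-frame sensor: 36 mm wide over 7952 pixels (~4.53 um pitch).
pitch = 0.036 / 7952
g = gsd(pitch, height_m=80, focal_len_m=0.055)   # ~6.6 mm GSD at 80 m
blur_fast = motion_blur_m(5, 1 / 2000)           # 2.5 mm: under one pixel
blur_slow = motion_blur_m(5, 1 / 60)             # ~83 mm: many pixels of blur
```

This matches the abstract's finding that short exposures keep blur within one pixel at 5 m/s, while long exposures do not.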

Development of Autonomous Bio-Mimetic Ornamental Aquarium Fish Robotic (생체 모방형의 아쿠아리움 관상어 로봇 개발)

  • Shin, Kyoo Jae
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.4 no.5
    • /
    • pp.219-224
    • /
    • 2015
  • In this paper, the fish robot DOMI ver1.0 is designed and developed as an aquarium underwater robot. The presented fish robot consists of a head, first-stage body, second-stage body, and tail, connected by two driven joints. The robot model is analyzed to maximize the momentum of the robot fish, and its body is designed through analysis of biological fish swimming. Lighthill's model was applied to the kinematic analysis of the robot's swimming algorithm, using an approximate streamer-model method that mimics biological fish. The robot has two operating modes, manual and autonomous. In manual mode the fish robot is operated using an RF transceiver; in autonomous mode it is controlled by a microprocessor board equipped with a PSD sensor for object recognition and avoidance. To submerge and surface, the robot has a bladder device in its head. The robot's center of gravity is shifted by a one-axis sliding weight, allowing DOMI to submerge and surface by means of this buoyancy unit. The design was verified through performance tests of DOMI ver1.0; underwater field tests confirmed excellent performance in driving force, durability, and water resistance.
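Lighthill's kinematic model mentioned above is commonly written as a travelling wave whose amplitude envelope grows toward the tail: y(x,t) = (c1·x + c2·x²)·sin(kx + ωt). A minimal sketch with illustrative coefficients (not the paper's tuned values) shows how joint deflections could be derived from the body wave:

```python
import math

def body_wave(x, t, c1=0.05, c2=0.08, k=2 * math.pi, omega=2 * math.pi):
    """Lighthill-style travelling-wave body motion for a robotic fish:
    lateral displacement y(x,t) = (c1*x + c2*x^2) * sin(k*x + omega*t),
    with x from 0 (head) to 1 (tail). Coefficients are illustrative."""
    return (c1 * x + c2 * x ** 2) * math.sin(k * x + omega * t)

def joint_angles(t, joints=(0.5, 1.0), dx=1e-3):
    """Approximate each joint's deflection from the local body slope
    (finite difference of the wave), e.g. for a two-joint tail."""
    return [math.atan((body_wave(x + dx, t) - body_wave(x, t)) / dx)
            for x in joints]

head = body_wave(0.0, 0.0)     # the head stays on the midline
angles = joint_angles(0.25)    # commanded deflections at t = 0.25 s
```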

Development of Image Quality Enhancement of a Digital Camera with the Application of Exposure To The Right Exposure Method (ETTR 노출 방법을 활용한 디지털 카메라의 화질 향상)

  • Park, Hyung-Ju;Har, Dong-Hwan
    • The Journal of the Korea Contents Association
    • /
    • v.10 no.8
    • /
    • pp.95-103
    • /
    • 2010
  • Raw files record luminance values corresponding to each pixel of a digital camera sensor. In digital imaging, controlling exposure so as to capture the first highlight stop is important because of the linear distribution characteristic of raw files. This study sought to verify the efficiency of the ETTR (expose to the right) method and to find the optimum over-exposure amount that lets the first highlight stop hold the largest number of levels. This was achieved by over-exposing a scene in raw format and then pulling the exposure back down in raw-conversion software. Our paper verified the efficiency of ETTR by controlling the exposure range and ISO settings. The results show that as exposure is increased gradually over six steps, dynamic range also increases, and that the optimized exposure value is around +1 2/3 stops over the normal exposure at high ISOs. We compared the visual noise at +1 2/3 stops with the visual noise at normal exposure. Relative to the normal exposure's visual noise, the reduction in visual noise grows as ISO increases. From these experimental results, we confirm that over-exposure of about +1 2/3 stops is the optimum value for achieving the widest dynamic range and lower visual noise at high ISOs. Based on the study results, we can provide effective ETTR information to consumers and manufacturers. This method will contribute to optimum image performance by maximizing dynamic range and minimizing noise in digital imaging.
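The "first highlight stop" argument rests on the linearity of raw data: each stop down from clipping records half as many tonal levels as the stop above it, which is why pushing the histogram toward the right preserves the most levels. A small sketch of that arithmetic, assuming a 12-bit sensor:

```python
def levels_per_stop(bit_depth, n_stops):
    """In a linear raw file, the brightest stop holds half of all tonal
    levels, the next stop half of the remainder, and so on; this is the
    rationale behind ETTR ("expose to the right")."""
    total = 2 ** bit_depth
    out = []
    for _ in range(n_stops):
        half = total // 2
        out.append(half)     # levels recorded in this (brightest remaining) stop
        total -= half
    return out

stops = levels_per_stop(12, 5)   # 12-bit raw: brightest five stops
```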

The Design of Wide Angle Mobile Camera Corrected Optical Distortion for Peripheral Area (주변부 상의 왜곡을 보정한 모바일 광각 카메라의 광학적 설계)

  • Kim, Se-Jin;Jeong, Hye-Jung;Lim, Hyeon-Seon
    • Journal of Korean Ophthalmic Optics Society
    • /
    • v.18 no.4
    • /
    • pp.503-507
    • /
    • 2013
  • Purpose: This study designed a wide-angle mobile camera with corrected peripheral optical distortion, reducing optical distortion and TV distortion by using four aspherical lenses. Methods: The optical design satisfied ±1% optical distortion at a 95° viewing angle, and the total length of the optical system was kept under 4.5 mm in consideration of mobile camera thickness. A 1/3.2-inch (5M) CCD sensor was used in the optical system, and the design condition was set to satisfy an MTF of over 20% at 140 lp/mm. Results: The optimized wide-angle mobile camera showed ±1% optical distortion over the full 95° field of view, and TV distortion was 0.46%, so peripheral distortion was reduced. MTF exceeded 20% in every field. Ray aberration and astigmatism were small, indicating stable performance. Conclusions: A wider and clearer view with reduced peripheral image distortion was obtained by optical means in a wide-angle mobile camera with a wider viewing angle than current mobile cameras, fixing a drawback that arises with software-based correction. The design can also be applied to spectacle-related camera research.