• Title/Summary/Keyword: 센서 확장 (Sensor Expansion)

Counting and Localizing Occupants using IR-UWB Radar and Machine Learning

  • Ji, Geonwoo;Lee, Changwon;Yun, Jaeseok
    • Journal of the Korea Society of Computer and Information
    • /
    • v.27 no.5
    • /
    • pp.1-9
    • /
    • 2022
  • Localization systems can be used in various circumstances, such as measuring population movement, rescue operations, and even security applications (e.g., infiltration detection systems). Vision sensors such as cameras, which are often used for localization, are susceptible to light and temperature and can invade privacy. In this paper, we used ultra-wideband radar technology, which is not limited by these problems, together with machine learning techniques to measure the number and location of occupants in an indoor space behind a wall. We applied four different algorithms, including extremely randomized trees, and compared their results on four tasks: detecting the number of occupants in a classroom; splitting the classroom into 28 locations and identifying an occupant's position; selecting one of the 28 locations, dividing it into 16 fine-grained locations, and identifying an occupant's position; and identifying the positions of two occupants located in different places. All four algorithms performed well, and we verified that the number and location of occupants can be detected with high accuracy using machine learning. We also considered the possibility of service expansion using the oneM2M standard platform and expect more services and products to be developed if this technology is applied in various fields.
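
A minimal sketch of the classification setup described above, assuming radar range profiles flattened into feature vectors and scikit-learn's ExtraTreesClassifier as the extremely randomized trees implementation; the synthetic data, feature layout, and class labels are placeholders, not the paper's dataset:

```python
import numpy as np
from sklearn.ensemble import ExtraTreesClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_samples, n_range_bins = 1000, 128                 # hypothetical dataset size
X = rng.normal(size=(n_samples, n_range_bins))      # placeholder IR-UWB range-profile features
y = rng.integers(0, 4, size=n_samples)              # occupant count 0-3 (illustrative labels)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
clf = ExtraTreesClassifier(n_estimators=200, random_state=0)  # extremely randomized trees
clf.fit(X_tr, y_tr)
print("accuracy:", accuracy_score(y_te, clf.predict(X_te)))
```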

Development of an Ensemble-Based Multi-Region Integrated Odor Concentration Prediction Model (앙상블 기반의 악취 농도 다지역 통합 예측 모델 개발)

  • Seong-Ju Cho;Woo-seok Choi;Sang-hyun Choi
    • Journal of Intelligence and Information Systems
    • /
    • v.29 no.3
    • /
    • pp.383-400
    • /
    • 2023
  • Air pollution-related diseases are escalating worldwide, with the World Health Organization (WHO) estimating approximately 7 million annual deaths in 2022. The rapid expansion of industrial facilities, increased emissions from various sources, and uncontrolled release of odorous substances have brought air pollution to the forefront of societal concerns. In South Korea, odor is categorized as an independent environmental pollutant, alongside air and water pollution, directly impacting the health of local residents by causing discomfort and aversion. However, the current odor management system in Korea remains inadequate, necessitating improvements. This study aims to enhance the odor management system by analyzing 1,010,749 data points collected from odor sensors located in Osong, Chungcheongbuk-do, using an Ensemble-Based Multi-Region Integrated Odor Concentration Prediction Model. The research results demonstrate that the model based on the XGBoost algorithm exhibited superior performance, with an RMSE of 0.0096, significantly outperforming the single-region model (0.0146) with a 51.9% reduction in mean error size. This underscores the potential for increasing data volume, improving accuracy, and enabling odor prediction in diverse regions using a unified model through the standardization of odor concentration data collected from various regions.
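
A minimal sketch of a pooled multi-region regression model using XGBoost, in the spirit of the integrated model above; the column names, region identifier, and synthetic sensor readings are assumptions for illustration, not the Osong dataset:

```python
import numpy as np
import pandas as pd
import xgboost as xgb
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 5000
df = pd.DataFrame({
    "region_id": rng.integers(0, 5, n),        # hypothetical region label pooled into one model
    "temperature": rng.normal(20, 5, n),       # placeholder sensor readings
    "humidity": rng.uniform(30, 90, n),
    "h2s_ppm": rng.exponential(0.01, n),
})
# placeholder target: a noisy function of the synthetic features
df["odor_conc"] = 0.5 * df["h2s_ppm"] + 0.001 * df["humidity"] + rng.normal(0, 0.005, n)

X, y = df.drop(columns="odor_conc"), df["odor_conc"]
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

model = xgb.XGBRegressor(n_estimators=300, max_depth=6, learning_rate=0.05)
model.fit(X_tr, y_tr)
rmse = np.sqrt(mean_squared_error(y_te, model.predict(X_te)))
print(f"RMSE: {rmse:.4f}")
```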

Transparent Near-infrared Absorbing Dyes and Applications (투명 근적외선 흡수 염료 및 응용 분야)

  • Hyocheol Jung;Ji-Eun Jeong;Sang-Ho Lee;Jin Chul Kim;Young Il Park
    • Applied Chemistry for Engineering
    • /
    • v.34 no.3
    • /
    • pp.207-212
    • /
    • 2023
  • Near-infrared (NIR) absorbing dyes have been applied in various fields such as optical filters, biotechnology, energy storage and conversion, coating additives, and, traditionally, information-storage materials. Because the image sensors used in cellphones and digital cameras are sensitive in the NIR region, an NIR cut-off filter is essential for achieving clearer images. As energy storage and conversion have become increasingly important, diverse NIR-absorbing materials have been developed to extend the absorption region into the NIR, and research based on NIR-absorbing materials has been conducted to improve device performance. Adding an NIR-absorbing dye with a photo-thermal effect to a self-healable coating system has attracted attention for future mobility technology, and more effective self-healing properties have been reported. In this report, the chemical structures of representative NIR-absorbing dyes and state-of-the-art research based on them are introduced.

Automatic Collection of Production Performance Data Based on Multi-Object Tracking Algorithms (다중 객체 추적 알고리즘을 이용한 가공품 흐름 정보 기반 생산 실적 데이터 자동 수집)

  • Lim, Hyuna;Oh, Seojeong;Son, Hyeongjun;Oh, Yosep
    • The Journal of Society for e-Business Studies
    • /
    • v.27 no.2
    • /
    • pp.205-218
    • /
    • 2022
  • Recently, digital transformation in manufacturing has been accelerating, and as a result, technologies for collecting data from the shop floor are becoming increasingly important. Existing approaches focus primarily on obtaining specific manufacturing data using various sensors and communication technologies. In order to expand the channels of field data collection, this study proposes a method to automatically collect manufacturing data based on vision-based artificial intelligence, analyzing real-time image information with object detection and tracking technologies to obtain manufacturing data. The research team collects object motion information for each frame by applying YOLO (You Only Look Once) and DeepSORT as the object detection and tracking algorithms. Thereafter, the motion information is converted into two pieces of manufacturing data (production performance and time) through post-processing. A dynamically moving factory model is created to obtain training data for deep learning, and operating scenarios are proposed to reproduce real-world shop-floor situations. The operating scenario assumes a flow shop consisting of six facilities. When manufacturing data were collected according to the operating scenarios, the accuracy was 96.3%.
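
A minimal sketch of the post-processing step described above, converting per-frame tracker output into production counts and completion times; the record format, exit-line rule, and frame rate are assumptions, and the YOLO/DeepSORT detection and tracking stage itself is not reproduced here:

```python
from collections import defaultdict

FPS = 30               # hypothetical camera frame rate
EXIT_LINE_X = 900      # hypothetical x-coordinate marking the end of the line

# (track_id, frame_index, x_center) records, as a tracker might emit per frame
track_records = [
    (1, 10, 100), (1, 40, 500), (1, 70, 950),
    (2, 20, 120), (2, 55, 600), (2, 95, 980),
]

tracks = defaultdict(list)
for tid, frame, x in track_records:
    tracks[tid].append((frame, x))

completed = {}
for tid, points in tracks.items():
    for frame, x in sorted(points):
        if x >= EXIT_LINE_X:            # product crossed the exit line
            completed[tid] = frame / FPS
            break

print("production count:", len(completed))
print("completion times (s):", completed)
```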

Development of a Deep-Learning Model with Maritime Environment Simulation for Detection of Distress Ships from Drone Images (드론 영상 기반 조난 선박 탐지를 위한 해양 환경 시뮬레이션을 활용한 딥러닝 모델 개발)

  • Jeonghyo Oh;Juhee Lee;Euiik Jeon;Impyeong Lee
    • Korean Journal of Remote Sensing
    • /
    • v.39 no.6_1
    • /
    • pp.1451-1466
    • /
    • 2023
  • In the context of maritime emergencies, the utilization of drones has rapidly increased, with a particular focus on their application in search and rescue operations. Deep learning models utilizing drone images for the rapid detection of distressed vessels and other maritime drift objects are gaining attention. However, effective training of such models necessitates a substantial amount of diverse training data that considers various weather conditions and vessel states. The lack of such data can lead to a degradation in the performance of trained models. This study aims to enhance the performance of deep learning models for distress ship detection by developing a maritime environment simulator to augment the dataset. The simulator allows for the configuration of various weather conditions, vessel states such as sinking or capsizing, and specifications and characteristics of drones and sensors. Training the deep learning model with the dataset generated through simulation resulted in improved detection performance, including accuracy and recall, when compared to models trained solely on actual drone image datasets. In particular, the accuracy of distress ship detection in adverse weather conditions, such as rain or fog, increased by approximately 2-5%, with a significant reduction in the rate of undetected instances. These results demonstrate the practical and effective contribution of the developed simulator in simulating diverse scenarios for model training. Furthermore, the distress ship detection deep learning model based on this approach is expected to be efficiently applied in maritime search and rescue operations.
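
A minimal sketch of how simulator-generated images might be mixed with real drone images for training, using PyTorch's ConcatDataset; the dataset classes, sizes, and tensors are placeholders, not the paper's simulator output or detector:

```python
import torch
from torch.utils.data import ConcatDataset, DataLoader, Dataset

class DummyDroneDataset(Dataset):
    """Stands in for a real or simulated set of (image, annotation) pairs."""
    def __init__(self, n_items, source_tag):
        self.n_items, self.source_tag = n_items, source_tag
    def __len__(self):
        return self.n_items
    def __getitem__(self, idx):
        image = torch.zeros(3, 256, 256)          # placeholder image tensor
        label = {"source": self.source_tag}       # placeholder annotation
        return image, label

real_ds = DummyDroneDataset(1000, "real")         # actual drone images
sim_ds = DummyDroneDataset(4000, "simulated")     # simulator-generated weather/vessel scenes

train_ds = ConcatDataset([real_ds, sim_ds])       # augmented training set
loader = DataLoader(train_ds, batch_size=8, shuffle=True)
print("training samples:", len(train_ds))
```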

Perceptions on Microcomputer-Based Laboratory Experiments of Science Teachers that Participated in In-Service Training (연수에 참여한 교사들의 MBL실험에 대한 인식)

  • Park, Kum-Hong;Ku, Yang-Sam;Choi, Byung-Soon;Shin, Ae-Kyung;Lee, Kuk-Haeng;Ko, Suk-Beum
    • Journal of The Korean Association For Science Education
    • /
    • v.27 no.1
    • /
    • pp.59-69
    • /
    • 2007
  • The aim of this study was to investigate teachers' perceptions of an MBL (microcomputer-based laboratory) experiment training program, the expected effects of MBL experiments, and the application of MBL experiments in school science classes after the training. Most of the teachers who participated in the program thought that the MBL experiment training was very useful and instructive. Many teachers considered that MBL experiments using a computer could reduce the time spent on experiments through accurate and fast data collection and analysis. They also thought that the time saved could be used more effectively for analyzing experimental data and for discussion activities leading to correct concept formation, as well as for developing graphical analysis and science process skills. However, they thought that MBL experiments were ineffective for learning how to operate experimental apparatus. This study also revealed that most teachers intended to apply MBL experiments in real classroom contexts right after the training course, and they pointed out many obstacles to introducing MBL experiments into their classrooms, such as the budget needed to purchase equipment, poor laboratory conditions, and the scarcity of MBL experiment training opportunities. In order to apply MBL experiments in real classrooms, the following changes were suggested: development of technologies to reduce the unit cost of MBL equipment, production and supply of many kinds of sensors, development of MBL experiment materials, and expansion of the training program for teachers.

Intelligent Transportation System (ITS) research optimized for autonomous driving using edge computing (엣지 컴퓨팅을 이용하여 자율주행에 최적화된 지능형 교통 시스템 연구(ITS))

  • Sunghyuck Hong
    • Advanced Industrial SCIence
    • /
    • v.3 no.1
    • /
    • pp.23-29
    • /
    • 2024
  • In this scholarly investigation, the focus is placed on the transformative potential of edge computing in enhancing Intelligent Transportation Systems (ITS) for the facilitation of autonomous driving. The intrinsic capability of edge computing to process voluminous datasets locally and in a real-time manner is identified as paramount in meeting the exigent requirements of autonomous vehicles, encompassing expedited decision-making processes and the bolstering of safety protocols. This inquiry delves into the synergy between edge computing and extant ITS infrastructures, elucidating the manner in which localized data processing can substantially diminish latency, thereby augmenting the responsiveness of autonomous vehicles. Further, the study scrutinizes the deployment of edge servers, an array of sensors, and Vehicle-to-Everything (V2X) communication technologies, positing these elements as constituents of a robust framework designed to support instantaneous traffic management, collision avoidance mechanisms, and the dynamic optimization of vehicular routes. Moreover, this research addresses the principal challenges encountered in the incorporation of edge computing within ITS, including issues related to security, the integration of data, and the scalability of systems. It proffers insights into viable solutions and delineates directions for future scholarly inquiry.

Evaluation of Road and Traffic Information Use Efficiency on Changes in LDM-based Electronic Horizon through Microscopic Simulation Model (미시적 교통 시뮬레이션을 활용한 LDM 기반 도로·교통정보 활성화 구간 변화에 따른 정보 이용 효율성 평가)

  • Kim, Hoe Kyoung;Chung, Younshik;Park, Jaehyung
    • KSCE Journal of Civil and Environmental Engineering Research
    • /
    • v.43 no.2
    • /
    • pp.231-238
    • /
    • 2023
  • Since there is a limit to the physically visible horizon that sensors for autonomous driving can perceive, complementary use of digital map data such as a Local Dynamic Map (LDM) along the probable route of an Autonomous Vehicle (AV) has been proposed for safe and efficient driving. Although the amount of digital map data may be small compared to the amount of information collected from the sensors of an AV, efficient management of map data is essential for efficient information processing in AVs. The objective of this study is to analyze the efficiency of information use and the information processing time of an AV as the active section of LDM-based static road and traffic information expands. To this end, a microscopic traffic simulation model built with VISSIM and the VISSIM COM interface was employed, and an area of about 9 km × 13 km was selected in the Busan Metropolitan Area, which includes heterogeneous traffic flows (i.e., uninterrupted and interrupted flows) as well as various road geometries. In addition, the LDM information used by the AVs refers to a real high-definition map (HDM) built on the basis of ISO 22726-1. The analysis shows that, as the electronic horizon expands, many short links are recognized on interrupted urban roads and the sum of link lengths increases accordingly, whereas on uninterrupted roads relatively few links are recognized but the sum of link lengths is large because of a small number of long links. Therefore, this study suggests that an efficient electronic horizon for HDM data collection, processing, and management is about 600 m on interrupted urban roads, considering the 12 links corresponding to three downstream intersections, and about 700 m on uninterrupted roads, associated with a 10 km sum of link lengths.
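
A minimal sketch of the electronic-horizon bookkeeping discussed above: counting the HD-map links and total link length covered by a given horizon along a route; the link lengths are illustrative, and this is not the VISSIM COM evaluation used in the study:

```python
def links_in_horizon(link_lengths_m, horizon_m):
    """Return (number of links, total length in m) needed to cover the horizon."""
    covered, total = 0, 0.0
    for length in link_lengths_m:
        if total >= horizon_m:     # horizon already covered by preceding links
            break
        covered += 1
        total += length
    return covered, total

# hypothetical link lengths along an AV's probable route (meters)
urban_links = [45, 60, 30, 80, 55, 70, 40, 65, 50, 75, 60, 45, 90]   # many short urban links
freeway_links = [1200, 1500, 900, 2000, 1800, 1100, 1600, 950]       # few long uninterrupted links

print("urban, 600 m horizon:", links_in_horizon(urban_links, 600))
print("uninterrupted, 700 m horizon:", links_in_horizon(freeway_links, 700))
```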

Accelerometer-based Gesture Recognition for Robot Interface (로봇 인터페이스 활용을 위한 가속도 센서 기반 제스처 인식)

  • Jang, Min-Su;Cho, Yong-Suk;Kim, Jae-Hong;Sohn, Joo-Chan
    • Journal of Intelligence and Information Systems
    • /
    • v.17 no.1
    • /
    • pp.53-69
    • /
    • 2011
  • Vision- and voice-based technologies are commonly utilized for human-robot interaction. But it is widely recognized that the performance of vision- and voice-based interaction systems deteriorates by a large margin in real-world situations due to environmental and user variances. Human users need to be very cooperative to get reasonable performance, which significantly limits the usability of vision- and voice-based human-robot interaction technologies. As a result, touch screens are still the major medium of human-robot interaction in real-world applications. To improve the usability of robots for various services, alternative interaction technologies should be developed to complement the problems of vision- and voice-based technologies. In this paper, we propose the use of an accelerometer-based gesture interface as one such alternative, because accelerometers are effective in detecting the movements of the human body, while their performance is not limited by environmental contexts such as lighting conditions or a camera's field of view. Moreover, accelerometers are widely available nowadays in many mobile devices. We tackle the problem of classifying the acceleration signal patterns of the 26 English alphabet letters, one of the essential repertoires for realizing robot-based education services. Recognizing 26 English handwriting patterns from accelerometer data is a very difficult task to undertake because of the large number of pattern classes and the complexity of each pattern. The most difficult comparable problem addressed previously was recognizing the acceleration signal patterns of 10 handwritten digits. Most previous studies dealt with pattern sets of 8~10 simple and easily distinguishable gestures useful for controlling home appliances, computer applications, robots, etc. Good features are essential for the success of pattern recognition. To improve the discriminative power over complex English alphabet patterns, we extracted 'motion trajectories' from the input acceleration signal and used them as the main feature. Investigative experiments showed that trajectory-based classifiers performed 3%~5% better than those using raw features, e.g., the acceleration signal itself or statistical figures. To minimize the distortion of trajectories, we applied a simple but effective set of smoothing and band-pass filters. It is well known that the acceleration patterns for the same gesture vary greatly among different performers. To tackle this problem, online incremental learning is applied in our system to make it adaptive to each user's distinctive motion properties. Our system is based on instance-based learning (IBL), where each training sample is memorized as a reference pattern. Brute-force incremental learning in IBL continuously accumulates reference patterns, which is a problem because it not only slows down classification but also degrades recall performance. Regarding the latter phenomenon, we observed that as the number of reference patterns grows, some reference patterns contribute more to false positive classifications. Thus, we devised an algorithm for optimizing the reference pattern set based on the positive and negative contribution of each reference pattern. The algorithm is performed periodically to remove reference patterns that have a very low positive contribution or a high negative contribution.
Experiments were performed on 6,500 gesture patterns collected from 50 adults aged 30 to 50. Each alphabet letter was performed 5 times per participant using a Nintendo® Wii™ remote. The acceleration signal was sampled at 100 Hz on 3 axes. The mean recall rate over all letters was 95.48%. Some letters recorded a very low recall rate and exhibited a very high pairwise confusion rate. Major confusion pairs were D (88%) and P (74%), I (81%) and U (75%), and N (88%) and W (100%). Though W was recalled perfectly, it contributed much to the false positive classification of N. Comparing with major previous results from VTT (96% for 8 control gestures), CMU (97% for 10 control gestures), and Samsung Electronics (97% for 10 digits and a control gesture), we find that the performance of our system is superior with respect to the number of pattern classes and the complexity of the patterns. Using our gesture interaction system, we conducted 2 case studies of robot-based edutainment services. The services were implemented on various robot platforms and mobile devices, including the iPhone™. The participating children exhibited improved concentration and active reactions to the services with our gesture interface. To prove the effectiveness of the gesture interface, a test was taken by the children after experiencing an English teaching service. The test results showed that those who used the gesture-interface-based robot content scored 10% higher than those taught conventionally. We conclude that the accelerometer-based gesture interface is a promising technology for flourishing real-world robot-based services and content by complementing the limits of today's conventional interfaces, e.g., touch screens, vision, and voice.
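
A minimal sketch of the reference-pattern optimization idea described above, assuming a nearest-neighbor instance-based learner whose stored patterns track positive and negative contributions and are pruned periodically; the thresholds, distance measure, and feature format are assumptions, not the authors' implementation:

```python
import numpy as np

class PrunedIBL:
    """Instance-based learner with periodic pruning of low-value reference patterns."""
    def __init__(self, pos_min=1, neg_max=5):
        self.patterns = []                       # each entry: [features, label, pos, neg]
        self.pos_min, self.neg_max = pos_min, neg_max

    def add(self, x, label):
        self.patterns.append([np.asarray(x, float), label, 0, 0])

    def classify(self, x, true_label=None):
        x = np.asarray(x, float)
        idx = min(range(len(self.patterns)),
                  key=lambda i: np.linalg.norm(self.patterns[i][0] - x))
        ref = self.patterns[idx]
        if true_label is not None:               # online feedback updates the counters
            ref[2 if ref[1] == true_label else 3] += 1
        return ref[1]

    def prune(self):
        # drop patterns with very low positive or high negative contribution
        self.patterns = [p for p in self.patterns
                         if p[2] >= self.pos_min and p[3] <= self.neg_max]

ibl = PrunedIBL()
ibl.add([0.0, 0.1, 0.2], "A")                    # hypothetical trajectory features
ibl.add([0.9, 0.8, 0.7], "B")
print(ibl.classify([0.05, 0.10, 0.15], true_label="A"))   # -> "A"
print(ibl.classify([0.85, 0.80, 0.75], true_label="B"))   # -> "B"
ibl.prune()
```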

Study of Prediction Model Improvement for Apple Soluble Solids Content Using a Ground-based Hyperspectral Scanner (지상용 초분광 스캐너를 활용한 사과의 당도예측 모델의 성능향상을 위한 연구)

  • Song, Ahram;Jeon, Woohyun;Kim, Yongil
    • Korean Journal of Remote Sensing
    • /
    • v.33 no.5_1
    • /
    • pp.559-570
    • /
    • 2017
  • A partial least squares regression (PLSR) model was developed to map the internal soluble solids content (SSC) of apples using a ground-based hyperspectral scanner that can acquire outdoor data and capture images of large quantities of apples simultaneously. We evaluated various preprocessing techniques to construct an optimal prediction model and identified the optimal bands through variable importance in projection (VIP) scores. From the 515 bands of the hyperspectral images, acquired at wavelengths of 360-1019 nm, 70 reflectance spectra of apples were extracted, and the SSC (°Brix) was measured using a digital photometer. The optimal prediction model was selected considering the root-mean-square error of cross-validation (RMSECV), the root-mean-square error of prediction (RMSEP), and the coefficient of determination of prediction (r_p²). As a result, multiplicative scatter correction (MSC)-based preprocessing methods were better than the others. For example, when a combination of MSC and standard normal variate (SNV) was used, the RMSECV and RMSEP were the lowest at 0.8551 and 0.8561, and r_c² and r_p² were the highest at 0.8533 and 0.6546; the wavelength ranges of 360-380, 546-690, 760, 915, 931-939, 942, 953, 971, 978, 981, 988, and 992-1019 nm were most influential for SSC determination. A PLSR model built with the spectral values of these regions reduced the RMSEP to 0.6841 and increased r_p² to 0.7795 compared with the values obtained using the entire wavelength band. In this study, we confirmed the feasibility of using hyperspectral scanner images obtained outdoors for measuring the SSC of apples. These results indicate that the application of such field data and sensors could expand in the future.
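
A minimal sketch of SNV preprocessing followed by a cross-validated PLSR model, using scikit-learn's PLSRegression; the synthetic spectra, band count, and number of latent variables are placeholders, not the calibration set used in the study:

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import cross_val_predict

def snv(spectra):
    """Standard normal variate: center and scale each spectrum individually."""
    spectra = np.asarray(spectra, float)
    return (spectra - spectra.mean(axis=1, keepdims=True)) / spectra.std(axis=1, keepdims=True)

rng = np.random.default_rng(1)
X = rng.normal(1.0, 0.1, size=(70, 515))         # 70 spectra x 515 bands (illustrative)
y = 10 + 3 * X[:, 200] + rng.normal(0, 0.2, 70)  # placeholder SSC values in °Brix

X_snv = snv(X)                                   # preprocessing step
pls = PLSRegression(n_components=8)              # number of latent variables is an assumption
y_cv = cross_val_predict(pls, X_snv, y, cv=10).ravel()
print("RMSECV:", np.sqrt(mean_squared_error(y, y_cv)))
```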