• Title/Summary/Keyword: Edge-Computing


A Worker-Driven Approach for Opening Detection by Integrating Computer Vision and Built-in Inertia Sensors on Embedded Devices

  • Anjum, Sharjeel; Sibtain, Muhammad; Khalid, Rabia; Khan, Muhammad; Lee, Doyeop; Park, Chansik
    • International conference on construction engineering and project management / 2022.06a / pp.353-360 / 2022
  • Due to its dense and complicated working environment, the construction industry is susceptible to many accidents. Falls are a severe problem at construction sites, including falls into holes or openings that lack the coverings required by safety rules. During the construction or demolition of a building, openings and holes are formed in floors and roofs. Although safety rules require that holes and openings be covered to prevent falls, many workers neglect to cover them for ease of work, even while aware of the risks of holes, openings, and gaps at height. A safety inspector typically checks coverings by visiting the construction site, which is time-consuming and demands significant effort from safety managers. Therefore, this study presents a worker-driven approach, in which the worker is involved in the reporting process, to support safety managers: a mobile application integrating computer vision and inertial sensors to identify openings. A Convolutional Neural Network (CNN) designed with the TensorFlow framework is trained on a custom dataset for the binary classes "opening" and "covered" and deployed on an Android smartphone. When the application captures an image, the device also reads the accelerometer to determine inclination in parallel with the classification task, so that the final output is predicted as floor (opening/covered), wall (opening/covered), or roof (opening/covered). The proposed worker-driven approach will be extended to other case scenarios at the construction site.
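Below is a minimal sketch of the image-plus-inclination fusion described above, assuming a TensorFlow Lite binary classifier. The model file name, input preprocessing, class threshold, and tilt cutoffs are illustrative assumptions, not details taken from the paper.

```python
import numpy as np
import tensorflow as tf  # the TFLite interpreter ships with TensorFlow

# Hypothetical model file; the paper's trained artifacts are not public.
interpreter = tf.lite.Interpreter(model_path="opening_classifier.tflite")
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

def classify_surface(image: np.ndarray, accel_xyz) -> str:
    """Fuse a binary opening/covered prediction with accelerometer tilt."""
    # 1) CNN inference (image must already match the model's input size).
    x = image.astype(np.float32)[None, ...] / 255.0
    interpreter.set_tensor(inp["index"], x)
    interpreter.invoke()
    p_opening = float(interpreter.get_tensor(out["index"])[0, 0])
    state = "opening" if p_opening >= 0.5 else "covered"

    # 2) Surface type from the gravity direction (illustrative thresholds).
    ax, ay, az = accel_xyz
    tilt = np.degrees(np.arccos(abs(az) / np.linalg.norm([ax, ay, az])))
    if tilt < 30.0:   # device roughly horizontal: floor or roof
        surface = "floor" if az > 0 else "roof"
    else:             # device roughly vertical: wall
        surface = "wall"
    return f"{surface} ({state})"
```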


Performance Evaluation Using Neural Network Learning of Indoor Autonomous Vehicle Based on LiDAR

  • Yonghun Kwon; Inbum Jung
    • KIPS Transactions on Computer and Communication Systems / v.12 no.3 / pp.93-102 / 2023
  • Processing data through the cloud causes problems such as latency and increased communication costs. Therefore, many researchers study edge computing in the IoT, and autonomous driving is a representative application. In indoor self-driving, unlike outdoors, GPS and traffic information cannot be used, so the surrounding environment must be recognized using sensors. Because the vehicle is a mobile, resource-constrained platform, an efficient autonomous driving system is required. This paper proposes a machine-learning method using neural networks for autonomous driving in an indoor environment. The neural network model predicts the most appropriate driving command for the current location from the distance data measured by the LiDAR sensor. We designed six learning models that differ in the number of inputs fed to the proposed neural networks. In addition, we built a Raspberry Pi-based autonomous vehicle for driving and learning, and produced an indoor driving track for data collection and evaluation. Finally, we compared the six neural network models in terms of accuracy, response time, and battery consumption, confirming the effect of the number of inputs on performance.
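A toy sketch of the kind of model the paper describes: a small neural network that maps a LiDAR distance scan to a driving command. The scan length, layer sizes, and command set are assumptions; the paper's six variants differ in the number of inputs.

```python
import numpy as np
import tensorflow as tf

# Assumed scan length and command set for illustration only.
NUM_BEAMS = 360
COMMANDS = ["forward", "left", "right", "stop"]

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(NUM_BEAMS,)),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(len(COMMANDS), activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

def predict_command(scan: np.ndarray) -> str:
    """Pick the driving command with the highest predicted probability."""
    probs = model.predict(scan.reshape(1, -1), verbose=0)[0]
    return COMMANDS[int(np.argmax(probs))]
```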

Development of Crosswalk Situation Recognition Device

  • Yun, Tae-Jin; No, Mu-Ho; Yeo, Jeong-Hun; Kim, Jae-Yun; Lee, Yeong-Hoon; Hwang, Seung-Hyeok; Kim, Hyeon-Su; Kim, Hyeong-Jun; Park, Seung-Ryeol; Bae, Chang-Hui
    • Proceedings of the Korean Society of Computer Information Conference / 2020.01a / pp.143-144 / 2020
  • With the advent of the Fourth Industrial Revolution, big data and deep learning have established themselves as key technologies in many fields, and efforts are underway around the world to apply them to everyday life and industry. In Korea, they are being applied to areas such as smart factories and smart cities. In this paper, we developed an image-recognition device for crosswalk situation awareness, applicable to smart cities, using an Nvidia Jetson TX2 and the real-time object detection technology YOLO v3; the recognized situations can be used to produce big data for traffic control or to support efficient traffic control. The proposed techniques can be used in building smart cities, and the device is easy to extend because additional required objects can be detected in real time. In addition, technologies such as edge computing and space detection were employed to increase the efficiency of the implementation.
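As a hedged illustration of the detection pipeline, the sketch below runs YOLOv3 through OpenCV's DNN module. The config, weight, and class-name files are assumed to be available, and Jetson-specific GPU acceleration and non-maximum suppression are omitted for brevity.

```python
import cv2
import numpy as np

# Assumed file names; the paper's deployment details are not public.
net = cv2.dnn.readNetFromDarknet("yolov3.cfg", "yolov3.weights")
layer_names = net.getUnconnectedOutLayersNames()
classes = open("coco.names").read().splitlines()

def detect(frame: np.ndarray, conf_thresh: float = 0.5):
    """Return (label, confidence, box) tuples for objects in one frame."""
    blob = cv2.dnn.blobFromImage(frame, 1 / 255.0, (416, 416), swapRB=True)
    net.setInput(blob)
    h, w = frame.shape[:2]
    detections = []
    for output in net.forward(layer_names):
        for row in output:
            scores = row[5:]
            cls = int(np.argmax(scores))
            if scores[cls] > conf_thresh:
                cx, cy, bw, bh = row[:4] * np.array([w, h, w, h])
                box = (int(cx - bw / 2), int(cy - bh / 2), int(bw), int(bh))
                detections.append((classes[cls], float(scores[cls]), box))
    return detections  # NMS omitted for brevity
```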


Efficient Data Preprocessing Scheme for Audio Deep Learning in Solar-Powered IoT Edge Computing Environment

  • Yeon-Tae Yoo; Chang-Han Lee; Seok-Mun Heo; Na-Kyung You; Ki-Hoon Kim; Chan-Seo Lee; Dong-Kun Noh
    • Proceedings of the Korea Information Processing Society Conference / 2023.05a / pp.81-83 / 2023
  • Because solar-harvesting IoT devices are recharged periodically, it is more important to use the harvested energy as usefully as possible than to minimize energy consumption. Meanwhile, for reasons such as data confidentiality and privacy, response time, and cost, edge AI, which performs machine learning near the data source rather than in the cloud, is being actively studied; one application is to use audio data collected by multiple IoT devices to provide various AI services in an IoT edge computing environment. In much of the related work, however, energy-constrained IoT devices only transmit sensed data to the edge server (IoT server), and all AI processing, including data preprocessing, is performed on the edge server. In that case, not only does the edge server become overloaded, but data unnecessary for learning and inference is also transmitted as-is, overloading the network as well. Conversely, if data preprocessing is delegated entirely to each IoT device, another problem arises: the devices' blackout time increases due to energy shortage. This paper mitigates the increase in device blackout time while alleviating the problems of server-centric edge AI (edge server and network overload) by deciding whether to preprocess data according to the energy state of each IoT device. In the proposed scheme, an IoT device predicts the amount of surplus energy beyond what it needs for basic operation; only when surplus energy is available does the device itself perform preprocessing, i.e., determining whether a captured sound is a target sound and removing noise, before transmitting to the server. The location of data preprocessing (IoT device or edge server) is thus decided energy-adaptively, without affecting the device's blackout time.
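A minimal sketch of the energy-adaptive decision described above. The surplus-energy model, the per-clip preprocessing cost, and the placeholder detector/denoiser are assumptions standing in for the paper's actual prediction scheme.

```python
import numpy as np

def is_target_sound(clip: np.ndarray) -> bool:
    """Placeholder detector: keep clips whose energy exceeds a threshold."""
    return float(np.mean(clip ** 2)) > 1e-3

def denoise(clip: np.ndarray) -> np.ndarray:
    """Placeholder denoiser: simple moving-average smoothing."""
    return np.convolve(clip, np.ones(5) / 5.0, mode="same")

def surplus_energy(battery_j: float, harvest_forecast_j: float,
                   baseline_draw_j: float) -> float:
    """Energy left after guaranteeing basic operation until recharge."""
    return battery_j + harvest_forecast_j - baseline_draw_j

PREPROCESS_COST_J = 2.0  # assumed energy cost of on-device VAD + denoising

def handle_audio_clip(clip: np.ndarray, device_state: dict, send) -> None:
    if surplus_energy(**device_state) >= PREPROCESS_COST_J:
        # Surplus available: preprocess on-device, send only useful audio.
        if is_target_sound(clip):
            send(denoise(clip), preprocessed=True)
    else:
        # No surplus: forward the raw clip; the edge server preprocesses it.
        send(clip, preprocessed=False)
```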

Task offloading scheme based on the DRL of Connected Home using MEC

  • Ducsun Lim; Kyu-Seek Sohn
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.23 no.6 / pp.61-67 / 2023
  • The rise of 5G and the proliferation of smart devices have underscored the significance of multi-access edge computing (MEC). Amid this trend, interest in effectively processing computation-intensive and latency-sensitive applications has increased. To address these challenges, this study investigates a novel task offloading strategy for a probabilistic MEC environment. First, we consider the frequency of dynamic task requests and unstable wireless channel conditions to propose a method for minimizing vehicle power consumption and latency. Next, we examine a deep reinforcement learning (DRL) based offloading technique that balances local computation against offloading transmission power. We analyze the power consumption and queuing latency of vehicles using the deep deterministic policy gradient (DDPG) and deep Q-network (DQN) techniques. Finally, we derive and validate the optimal performance enhancement strategy in a vehicle-based MEC environment.
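A minimal DQN-flavored sketch of the offloading decision, under assumed state, action, and reward definitions; the paper additionally evaluates DDPG, which replaces the discrete action head with a continuous policy.

```python
import numpy as np
import tensorflow as tf

# Assumed state layout and action set for illustration only.
STATE_DIM = 3                      # [queue_len, channel_gain, cpu_load]
ACTIONS = ["local", "offload"]

q_net = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(STATE_DIM,)),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(len(ACTIONS)),   # one Q-value per action
])

def choose_action(state: np.ndarray, epsilon: float = 0.1) -> str:
    """Epsilon-greedy selection over the Q-network's action values."""
    if np.random.rand() < epsilon:
        return ACTIONS[np.random.randint(len(ACTIONS))]
    q_values = q_net(state.reshape(1, -1)).numpy()[0]
    return ACTIONS[int(np.argmax(q_values))]

def reward(power_w: float, latency_s: float,
           w1: float = 0.5, w2: float = 0.5) -> float:
    """Penalize power consumption and queuing latency jointly."""
    return -(w1 * power_w + w2 * latency_s)
```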

Cyber Threat Intelligence Traffic Through Black Widow Optimisation by Applying RNN-BiLSTM Recognition Model

  • Kanti Singh Sangher; Archana Singh; Hari Mohan Pandey
    • International Journal of Computer Science & Network Security / v.23 no.11 / pp.99-109 / 2023
  • The darknet is frequently referred to as the hub of illicit online activity. To keep track of real-time applications and activities taking place on the darknet, traffic on that network must be analysed. Recognising network traffic tied to unused Internet addresses is undoubtedly important for spotting and investigating malicious online activity. Because no genuine devices or hosts reside in an unused address block, any traffic observed there results from misconfiguration, spoofed source addresses, or other methods that probe the unused address space. Thanks to recent advances in artificial intelligence, digital systems can now detect and identify darknet activity on their own. In this paper, we offer a generalised method for deep learning-based detection and classification of darknet traffic. Furthermore, we analyse a cutting-edge, complex dataset that contains extensive information about darknet traffic. Next, we examine various feature selection strategies to choose the best attributes for detecting and classifying darknet traffic. To identify threats using network properties acquired from darknet traffic, we devised a hybrid deep learning (DL) approach that combines a Recurrent Neural Network (RNN) with a Bidirectional LSTM (BiLSTM). This technique can tell malicious traffic from legitimate traffic. The results show that the suggested strategy outperforms existing methods, producing the highest accuracy for categorising darknet traffic when using the Black Widow optimisation algorithm for feature selection and RNN-BiLSTM as the recognition model.
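A hedged sketch of an RNN-plus-BiLSTM classifier of the shape the abstract describes. The sequence length, feature count, and class count are assumptions; feature selection (e.g., Black Widow optimisation) is presumed to happen upstream.

```python
import tensorflow as tf

# Assumed input shape: sequences of 20 feature vectors, 32 features each.
TIMESTEPS, FEATURES, NUM_CLASSES = 20, 32, 2

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(TIMESTEPS, FEATURES)),
    tf.keras.layers.SimpleRNN(64, return_sequences=True),      # RNN stage
    tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(64)),   # BiLSTM stage
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```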

Optimizing User Experience While Interacting with IR Systems in Big Data Environments

  • Minsoo Park
    • International journal of advanced smart convergence / v.12 no.4 / pp.104-110 / 2023
  • In the user-centered design paradigm, information systems are tailored entirely to the users who will use them. When the functions of a complex system are paired with a simple user interface, users can operate the system conveniently. While web personalization services are emerging as a major trend in portal services, portal companies are competing over follow-on services such as 'integrated communication platforms'. Until now, the portal's role has been content and search; the new goal is to create and deliver the personalized services users want through a single platform. Personalization is a login-based cloud computing service, so users can enjoy the same experience at any time, anywhere with Internet access. Personalized web services like this attract highly loyal users, making them a new service trend that portal companies are watching closely. Researchers spend a great deal of time collecting research-related information from multiple sources. There is therefore a need to automatically build an interest profile for each researcher from personal materials (papers, research projects, patents) and to provide an advanced, customized service that regularly delivers the latest information matched against various sources. Continuous modification and supplementation of each researcher's interest profile is the most important factor in improving the relevance of search results. As researchers' interest gradually expands from standardized academic information such as patents to unstructured information such as technology markets and research trends, information sources must be broadened accordingly. This shortens the time required to find and obtain the latest information for research purposes. The interest profiles already established can later be used to determine the degree of relationship between researchers and to build a database. If such a customized information service continues to be provided, it will be useful for research activities.
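One plausible reading of the researcher-profile idea, sketched with TF-IDF and cosine similarity; the representation choice and all document text are assumptions, since the abstract does not specify a method.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Placeholder text standing in for a researcher's papers/projects/patents.
researcher_docs = [
    "edge computing task offloading latency energy",
    "federated learning on resource-constrained IoT devices",
]
incoming_items = [
    "new patent on MEC task offloading for vehicles",
    "survey of quantum error correction codes",
]

vectorizer = TfidfVectorizer(stop_words="english")
tfidf = vectorizer.fit_transform(researcher_docs + incoming_items)

# Interest profile = centroid of the researcher's own documents.
profile = np.asarray(tfidf[:len(researcher_docs)].mean(axis=0))
scores = cosine_similarity(profile, tfidf[len(researcher_docs):]).ravel()
ranking = scores.argsort()[::-1]  # most relevant incoming items first
print([incoming_items[i] for i in ranking])
```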

Design and Implementation of a Lightweight On-Device AI-Based Real-time Fault Diagnosis System using Continual Learning

  • Youngjun Kim; Taewan Kim; Suhyun Kim; Seongjae Lee; Taehyoun Kim
    • IEMEK Journal of Embedded Systems and Applications / v.19 no.3 / pp.151-158 / 2024
  • Although on-device artificial intelligence (AI) has gained attention for diagnosing machine faults in real time, most previous studies did not consider the model retraining and redeployment processes that must be performed in real-world industrial environments. Our study addresses this challenge by proposing an on-device AI-based real-time machine fault diagnosis system that utilizes continual learning. The proposed system comprises a lightweight convolutional neural network (CNN) model, a continual learning algorithm, and a real-time monitoring service. First, we developed a lightweight 1D CNN model to reduce the cost of model deployment and enable real-time inference on a target edge device with limited computing resources. We then compared the performance of five continual learning algorithms on three public bearing fault datasets and selected the most effective algorithm for our system. Finally, we implemented a real-time monitoring service using an open-source data visualization framework. In the comparison between continual learning algorithms, we found that the replay-based algorithms outperformed the regularization-based ones, with the experience replay (ER) algorithm achieving the best diagnostic accuracy. We further tuned the number and length of data samples in the ER algorithm's memory buffer to maximize performance, and confirmed that performance increases with data length. Consequently, the proposed system achieved an accuracy of 98.7% while storing only 16.5% of the previous data in the memory buffer. Our lightweight CNN model was also able to diagnose the fault type of a single data sample within 3.76 ms on a Raspberry Pi 4B.
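A minimal sketch of an experience-replay buffer of the kind the ER algorithm uses: a bounded memory of past samples mixed into each new training batch. The reservoir-sampling policy and capacity are assumptions (the paper reports storing about 16.5% of previous data).

```python
import random

class ReplayBuffer:
    """Bounded memory of past samples for replay-based continual learning."""

    def __init__(self, capacity: int):
        self.buffer = []
        self.capacity = capacity
        self.seen = 0

    def add(self, sample) -> None:
        """Reservoir sampling keeps a uniform subset of the whole stream."""
        self.seen += 1
        if len(self.buffer) < self.capacity:
            self.buffer.append(sample)
        else:
            j = random.randrange(self.seen)
            if j < self.capacity:
                self.buffer[j] = sample

    def draw(self, k: int):
        """Sample k stored items to mix into the current training batch."""
        return random.sample(self.buffer, min(k, len(self.buffer)))

# Usage sketch when training on a new task:
#   batch = new_samples + buffer.draw(len(new_samples))
```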

Privacy-Preserving Collection and Analysis of Medical Microdata

  • Jong Wook Kim
    • Journal of the Korea Society of Computer and Information / v.29 no.5 / pp.93-100 / 2024
  • With the advent of the Fourth Industrial Revolution, cutting-edge technologies such as artificial intelligence, big data, the Internet of Things, and cloud computing are driving innovation across industries. These technologies generate massive amounts of data that many companies leverage. However, users are notably reluctant to share sensitive information because of the privacy risks associated with collecting personal data. This is particularly evident in the healthcare sector, where collecting sensitive information such as patients' medical conditions poses significant challenges, with privacy concerns hindering data collection and analysis. This research presents a novel technique for collecting and analyzing medical data that not only preserves privacy but also effectively extracts statistical information. The method goes beyond basic data collection by incorporating a strategy to mine statistical data efficiently while maintaining privacy. Performance evaluations using real-world data show that the proposed technique outperforms existing methods in extracting meaningful statistical insights.
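The abstract does not specify its mechanism, so the sketch below uses generic randomized response only to illustrate the "collect privately, then recover statistics" pattern. The condition list and truth probability are invented for the example.

```python
import random
from collections import Counter

CONDITIONS = ["diabetes", "hypertension", "asthma", "none"]
p = 0.75  # probability that a patient reports their true condition

def perturb(true_value: str) -> str:
    """Report truthfully with probability p, else a uniformly random value."""
    if random.random() < p:
        return true_value
    return random.choice(CONDITIONS)

def estimate_frequencies(reports):
    """Invert the perturbation to get unbiased frequency estimates."""
    n, k = len(reports), len(CONDITIONS)
    counts = Counter(reports)
    q = (1 - p) / k  # chance any value is reported due to noise alone
    return {c: (counts[c] / n - q) / p for c in CONDITIONS}
```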

Research on Performance of Graph Algorithm using Deep Learning Technology

  • Giseop Noh
    • The Journal of the Convergence on Culture Technology / v.10 no.1 / pp.471-476 / 2024
  • With the spread of various smart and computing devices, big data generation is occurring widely. Machine learning is a family of algorithms that performs inference by learning data patterns. Among the various machine learning algorithms, deep learning based on neural networks is attracting the most attention, achieving rapid performance improvements across a range of applications. Recently, attempts to analyze data with graph structures using deep learning have been increasing. In this study, we present a graph generation method for transfer to a deep learning network. This paper proposes a method of generalizing node properties and edge weights during graph generation and converting them, through matricization, into a structure suitable as deep learning input. We present a linear transformation matrix that preserves attribute and weight information during graph generation. Finally, we present a deep learning input structure for general graphs and an approach for performance analysis.
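A small sketch of the matricization idea: node attributes become a feature matrix, edge weights an adjacency matrix, and a linear transformation projects attributes while the adjacency preserves weight information. Sizes and values are illustrative.

```python
import numpy as np

# Assumed toy dimensions: 4 nodes, 3 attributes, 8 hidden features.
num_nodes, feat_dim, hidden_dim = 4, 3, 8

X = np.random.rand(num_nodes, feat_dim)          # node attribute matrix
A = np.zeros((num_nodes, num_nodes))             # weighted adjacency matrix
for i, j, w in [(0, 1, 0.5), (1, 2, 1.0), (2, 3, 0.8)]:
    A[i, j] = A[j, i] = w                        # undirected weighted edges

W = np.random.randn(feat_dim, hidden_dim) * 0.1  # linear transformation

# One propagation step in the spirit of A @ X @ W: neighbors' transformed
# attributes are aggregated according to edge weights.
H = A @ X @ W
print(H.shape)  # (num_nodes, hidden_dim): ready as deep-learning input
```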