• Title/Summary/Keyword: IoT Applications


IoT-Based Automatic Water Quality Monitoring System with Optimized Neural Network

  • Anusha Bamini A M;Chitra R;Saurabh Agarwal;Hyunsung Kim;Punitha Stephan;Thompson Stephan
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.18 no.1
    • /
    • pp.46-63
    • /
    • 2024
  • Water contamination is one of the gravest dangers worldwide, and water is a necessity for human survival. In most cities the digging of borewells is restricted, and in some cities borewells are permitted only for drinking water, so the scarcity of drinking water is a vital issue for industries and residences. Most water sources in and around the cities are also polluted, which can cause significant health problems. Real-time quality observation is necessary to guarantee a secure supply of drinking water. To address this issue, we offer a low-cost system for real-time water quality monitoring using IoT. The potential for supporting the real world has expanded with the introduction of IoT and related sensors. The suggested system comprises multiple sensors used to identify the physical and chemical features of the water, measuring parameters such as temperature, pH, and turbidity. The values measured by the sensors are processed by a core controller, implemented on an Arduino board, and forwarded over a Wi-Fi link to a cloud-based database, where they are stored for further processing. Because manually analyzing water quality at every instant is impractical, an optimized neural network-based automation system identifies water quality from remote locations. The performance of the feed-forward neural network classifier is further enhanced with a hybrid GA-PSO algorithm. The optimized neural network outperforms existing water quality prediction approaches and yields 91% accuracy. Optimizing the network parameters increases the accuracy of the developed model by 20% compared to the traditional feed-forward neural network, with significant improvements in precision and recall as well.
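As a rough illustration of the classification stage described above, the forward pass of a small feed-forward network over (temperature, pH, turbidity) readings can be sketched as below. The weights here are arbitrary placeholders, not the paper's GA-PSO-tuned parameters.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def forward(features, w_hidden, b_hidden, w_out, b_out):
    """One hidden-layer feed-forward pass: features -> hidden -> quality score."""
    hidden = [sigmoid(sum(w * f for w, f in zip(row, features)) + b)
              for row, b in zip(w_hidden, b_hidden)]
    return sigmoid(sum(w * h for w, h in zip(w_out, hidden)) + b_out)

# Illustrative (untrained) weights; the paper tunes these with hybrid GA-PSO.
W_H = [[0.5, -0.8, 1.2], [-1.1, 0.9, 0.3]]
B_H = [0.1, -0.2]
W_O = [1.5, -0.7]
B_O = 0.05

# (temperature, pH, turbidity) readings normalized to roughly [0, 1]
sample = [0.4, 0.7, 0.1]
score = forward(sample, W_H, B_H, W_O, B_O)
print(round(score, 3))  # a water-quality score in (0, 1)
```

A GA-PSO hybrid would then search over `W_H`, `B_H`, `W_O`, `B_O` for the weights that minimize classification error on labeled samples.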

Modeling and Calibration of Wrist Magnetic Sensor for Measuring Wrist Gesture (손목운동 측정을 위한 손목 자기장 센서의 모델링 및 캘리브레이션)

  • Yeo, Hee-Joo
    • Journal of the Korea Academia-Industrial cooperation Society
    • /
    • v.21 no.4
    • /
    • pp.26-32
    • /
    • 2020
  • Recently, as various wearable devices and IoT technologies have emerged and been applied in practice, various sensors have been developed and deployed to suit their purposes. Even in medical applications, IoT technologies have gradually been adopted; in particular, magnets and magnetic sensors already play an important role in the medical industry. In wrist rehabilitation, this kind of sensor technology enables easy and convenient measurement of wrist movement and gestures because no wires are required between the magnet and the sensor. However, one drawback is that the characteristics of a magnetic field produce nonlinear output. Moreover, the movement of the wrist joint involves many small bones, so it is not easy to model simply. To resolve these issues and measure sensor data accurately, a calibration procedure is indispensable. This paper therefore proposes a practical model and simple calibration methods for measuring the distance between a magnet and a magnetic sensor.
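The nonlinear output mentioned above follows from the inverse-cube falloff of a dipole field. A minimal sketch of one calibration approach, assuming a single-axis far-field model |B| ≈ k/r³ (an illustration, not the paper's exact method):

```python
def calibrate_k(b_measured, r_known):
    """Fit the dipole constant k in |B| ~ k / r^3 from one known distance."""
    return b_measured * r_known ** 3

def distance_from_field(b_measured, k):
    """Invert the calibrated model to estimate magnet-sensor distance."""
    return (k / b_measured) ** (1.0 / 3.0)

# Hypothetical calibration point: 80 uT measured at 2 cm.
k = calibrate_k(80.0, 2.0)

# A later reading of 10 uT maps to twice the distance (inverse-cube law).
print(round(distance_from_field(10.0, k), 6))  # 4.0 cm
```

A practical procedure would fit k from several known distances (e.g. by least squares) to average out sensor noise.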

EXECUTION TIME AND POWER CONSUMPTION OPTIMIZATION IN FOG COMPUTING ENVIRONMENT

  • Alghamdi, Anwar;Alzahrani, Ahmed;Thayananthan, Vijey
    • International Journal of Computer Science & Network Security
    • /
    • v.21 no.1
    • /
    • pp.137-142
    • /
    • 2021
  • The Internet of Things (IoT) paradigm is at the forefront of present and future research activities. The huge amount of sensing data from IoT devices that needs to be processed is increasing dramatically in volume, variety, and velocity. In response, cloud computing has been used to handle the challenges of collecting, storing, and processing jobs. Fog computing is a model that supports cloud computing by performing pre-processing jobs close to the end user, realizing low latency, lower power consumption on the cloud side, and high scalability. However, some resources in a fog network may be unsuitable for certain kinds of jobs, or the number of requests may exceed capacity. In such cases it is more efficient to reduce the number of jobs sent to the cloud: when other fog resources are idle, it is better to federate them than to forward the jobs to the cloud server. This issue clearly affects the performance of the fog environment when dealing with big data applications or applications that are sensitive to processing time. This research aims to build a fog topology job scheduling (FTJS) scheme that schedules the incoming jobs generated by IoT devices and discovers all available fog nodes along with their capabilities. A fog topology job placement algorithm is also introduced to deploy jobs onto appropriate resources in the network effectively. Finally, compared with the state-of-the-art first-come-first-serve (FCFS) scheduling technique, the overall execution time is reduced significantly, by approximately 20%, and the energy consumption on the cloud side is reduced by 18%.
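To illustrate why capability-aware placement can beat FCFS on heterogeneous fog nodes, here is a toy makespan comparison. The node speeds and job sizes are invented, and the greedy earliest-finish heuristic is only a stand-in for the paper's FTJS algorithm:

```python
def makespan_fcfs(jobs, node_speeds):
    """FCFS baseline: assign jobs to fog nodes round-robin in arrival order."""
    finish = [0.0] * len(node_speeds)
    for i, work in enumerate(jobs):
        n = i % len(node_speeds)
        finish[n] += work / node_speeds[n]
    return max(finish)

def makespan_greedy(jobs, node_speeds):
    """Capability-aware: place each job on the node that finishes it earliest."""
    finish = [0.0] * len(node_speeds)
    for work in jobs:
        n = min(range(len(node_speeds)),
                key=lambda k: finish[k] + work / node_speeds[k])
        finish[n] += work / node_speeds[n]
    return max(finish)

jobs = [8, 4, 6, 2, 9, 3]     # job sizes in arbitrary work units
speeds = [1.0, 2.0, 4.0]      # heterogeneous fog node capabilities
print(makespan_fcfs(jobs, speeds), makespan_greedy(jobs, speeds))
```

On this toy input the greedy placement finishes in 5.75 time units versus 10.0 for round-robin FCFS, because it routes large jobs to the fastest node.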

Neural networks optimization for multi-dimensional digital signal processing in IoT devices (IoT 디바이스에서 다차원 디지털 신호 처리를 위한 신경망 최적화)

  • Choi, KwonTaeg
    • Journal of Digital Contents Society
    • /
    • v.18 no.6
    • /
    • pp.1165-1173
    • /
    • 2017
  • Deep learning, one of the most prominent machine learning approaches, has proven its applicability in various applications and is widely used in digital signal processing. However, it is difficult to apply deep learning to IoT devices with limited CPU performance and memory capacity, because a large number of training samples demands a lot of memory and computation time. In particular, if an Arduino with a very small memory capacity of 2 KB to 8 KB is used, there are many limitations in implementing such algorithms. In this paper, we propose a method to optimize the ELM algorithm, which has proven accurate and efficient in various fields, on an Arduino board. Experiments show that multi-class learning is possible up to 15-dimensional data on an Arduino UNO with 2 KB of memory, and up to 42-dimensional data on an Arduino MEGA with 8 KB of memory. To evaluate the experiment, we demonstrated the effectiveness of the proposed algorithm using data sets generated with Gaussian mixture modeling as well as public UCI data sets.
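The dimension limits quoted above are memory-driven. A back-of-the-envelope estimate of ELM weight storage, assuming float32 weights and a hypothetical hidden-layer size (the paper's exact on-board layout is not reproduced here):

```python
def elm_weight_bytes(n_inputs, n_hidden, n_classes, bytes_per_weight=4):
    """Rough memory for ELM weight matrices: input->hidden plus hidden->output."""
    return bytes_per_weight * (n_inputs * n_hidden + n_hidden * n_classes)

# Illustrative only: 20 hidden neurons, 3 classes, float32 weights assumed.
for dim, board, sram in [(15, "UNO", 2048), (42, "MEGA", 8192)]:
    need = elm_weight_bytes(dim, 20, 3)
    print(board, dim, need, need <= sram)
```

Even this crude count shows how input dimensionality dominates the footprint, which is why the feasible dimension scales with the board's SRAM.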

Design and Implementation of Walking Activity Prediction Service for Exercise Motive (운동 동기 부여를 위한 걷기 활동량 예측 서비스 설계 및 구현)

  • Kim, Bogyeong;Lee, Cheolhyo;Kim, DoHyeun
    • The Journal of the Institute of Internet, Broadcasting and Communication
    • /
    • v.16 no.5
    • /
    • pp.99-104
    • /
    • 2016
  • Walking exercise can alleviate stress and improve lifelong health. Recent developments in Information and Communication Technologies (ICT) have laid the foundation for the Internet of Things (IoT) to become a key future technology. IoT has many applications in industrial automation, security, smart homes and cities, education, health, and more. In the personal health-care domain, IoT is mainly used to monitor fitness by observing an individual's current activity. In this paper, we propose a novel IoT-based personal wellness care system. The proposed system not only keeps track of the current fitness level but also predicts future activity from historical data, along with standard recommendations. The predicted activity helps motivate the individual to achieve the desired fitness level. Initially, we consider only walking activity for testing purposes; other types of activities and exercises will be captured later for improved health-care support.
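One simple way to produce such a prediction from history data is a least-squares linear trend over recent daily step counts. This is an illustrative sketch, not necessarily the predictor used in the paper:

```python
def predict_next(history):
    """Fit a least-squares line over past daily step counts and
    extrapolate it one day ahead."""
    n = len(history)
    xs = range(n)
    x_mean = (n - 1) / 2.0
    y_mean = sum(history) / n
    num = sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, history))
    den = sum((x - x_mean) ** 2 for x in xs)
    slope = num / den
    return y_mean + slope * (n - x_mean)

steps = [4000, 4500, 5000, 5500, 6000]   # hypothetical daily step counts
print(predict_next(steps))  # 6500.0 (continues the upward trend)
```

Comparing the predicted count against a recommended daily target is then enough to generate the motivational feedback the abstract describes.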

A Standard Time Management Scheme in the Internet of Things (사물인터넷에서 표준 시각 관리 방안)

  • Hwang, Soyoung
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.22 no.6
    • /
    • pp.929-934
    • /
    • 2018
  • The Internet of Things (IoT) is the network of devices embedded with electronics, software, sensors, actuators, and connectivity, which enables these objects to connect and exchange data without any human intervention. Time information is increasingly important for imposing order on scattered sensor data streams, resolving conflicts through timestamp information, and so on. Time information and time synchronization are critical building blocks in the IoT: they allow devices to share a consistent notion of time, making it easier to build efficient and robust collaborative services. This paper proposes a standard time management scheme in the Internet of Things. Many IoT applications involve the collection and forwarding of event data, and it is useful to know when an event occurred for the purpose of triggering an action. To verify the feasibility of the proposed scheme, it is implemented and evaluated in the Arduino development environment.
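A common building block for sharing a consistent notion of time is the NTP-style offset estimate computed from a request/response timestamp exchange. This sketch shows the standard formula, not necessarily the exact scheme proposed in the paper:

```python
def clock_offset(t0, t1, t2, t3):
    """NTP-style clock offset from one timestamp exchange:
    t0 = client send, t1 = server receive, t2 = server send, t3 = client receive."""
    return ((t1 - t0) + (t2 - t3)) / 2.0

def round_trip_delay(t0, t1, t2, t3):
    """Network round-trip time of the same exchange, excluding server hold time."""
    return (t3 - t0) - (t2 - t1)

# Hypothetical exchange: the device clock runs 0.5 s behind the reference
# and each one-way trip takes 0.1 s.
t0, t1, t2, t3 = 10.0, 10.6, 10.7, 10.3
print(clock_offset(t0, t1, t2, t3), round_trip_delay(t0, t1, t2, t3))
```

The device can then correct its event timestamps by the estimated offset (0.5 s here) before forwarding them, so events from different sensors sort consistently.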

Design of a 6-Axis Inertial Sensor IC for Accurate Location and Position Recognition of M2M/IoT Devices (M2M / IoT 디바이스의 정밀 위치와 자세 인식을 위한 6축 관성 센서 IC 설계)

  • Kim, Chang Hyun;Chung, Jong-Moon
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.39C no.1
    • /
    • pp.82-89
    • /
    • 2014
  • Recently, inertial sensors have become popular for the location and position recognition of small M2M/IoT devices. In this paper, we designed a low-power, low-noise, small-sized 6-axis inertial sensor IC for mobile applications, which uses a 3-axis piezoelectric gyroscope sensor and a 3-axis piezoresistive accelerometer sensor. The proposed IC is composed of a 3-axis gyroscope readout circuit, two gyroscope sensor driving circuits, a 3-axis accelerometer readout circuit, a 16-bit sigma-delta ADC, a digital filter and control circuit, and memory. The TSMC 0.18 μm mixed-signal CMOS process was used. The proposed IC reduces current consumption by 27% compared to the LSM330.
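On the software side, the outputs of the two sensor types are typically fused for position recognition. A complementary filter is the textbook way to combine drift-prone gyro integration with noisy but drift-free accelerometer tilt; this is illustrative host-side code, not part of the IC described above:

```python
def complementary_filter(angle, gyro_rate, accel_angle, dt, alpha=0.98):
    """Fuse the gyro-integrated angle (smooth but drifting) with the
    accelerometer-derived tilt (noisy but drift-free)."""
    return alpha * (angle + gyro_rate * dt) + (1 - alpha) * accel_angle

angle = 0.0
# Hypothetical stream: constant 10 deg/s rotation sampled at 100 Hz,
# with the accelerometer agreeing with the true tilt.
for step in range(1, 101):
    true_angle = 10.0 * step * 0.01
    angle = complementary_filter(angle, 10.0, true_angle, 0.01)
print(round(angle, 2))  # 10.0
```

The blend factor `alpha` trades gyro smoothness against accelerometer correction; values near 0.98 are a common starting point at 100 Hz sampling.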

An Efficient Hardware Implementation of Lightweight Block Cipher LEA-128/192/256 for IoT Security Applications (IoT 보안 응용을 위한 경량 블록암호 LEA-128/192/256의 효율적인 하드웨어 구현)

  • Sung, Mi-Ji;Shin, Kyung-Wook
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.19 no.7
    • /
    • pp.1608-1616
    • /
    • 2015
  • This paper describes an efficient hardware implementation of the lightweight encryption algorithm LEA-128/192/256, which supports three master key lengths of 128, 192, and 256 bits. To achieve an area-efficient and low-power implementation of the LEA crypto-processor, the key scheduler block is optimized to share hardware resources across encryption/decryption key scheduling for the three master key lengths. In addition, a parallel register structure and a novel operating scheme for the key scheduler are devised to reduce the clock cycles required for key scheduling, increasing encryption/decryption speed by 20-30%. The designed LEA crypto-processor has been verified by FPGA implementation. The estimated performances for master key lengths of 128/192/256 bits are 181/162/109 Mbps, respectively, at a 113 MHz clock frequency.
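The reported throughputs imply approximate per-block cycle counts, assuming the simple relation throughput = block size × clock frequency / cycles per block (LEA uses 128-bit blocks):

```python
def cycles_per_block(block_bits, freq_mhz, throughput_mbps):
    """Back out cycles per block from throughput = bits * freq / cycles."""
    return block_bits * freq_mhz / throughput_mbps

# Reported figures: 128-bit blocks at 113 MHz, 181/162/109 Mbps
# for the 128/192/256-bit key lengths.
for rate in (181, 162, 109):
    print(round(cycles_per_block(128, 113, rate)))  # 80, 89, 133
```

The growing cycle count with key length is consistent with LEA's round count increasing for the longer master keys.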

A Study on the Application Trends of Next-Generation Solar Cells and the Future Prospects of Smart Textile Hybrid Energy Harvesting Devices : Focusing on Convergence with Industrial Materials (차세대 태양전지의 활용 동향 및 스마트 텍스타일 하이브리드 에너지 하베스팅 소자의 미래전망에 관한 연구 : 산업 소재와의 융합 중심)

  • Park, Boong-Ik
    • Journal of Convergence for Information Technology
    • /
    • v.11 no.11
    • /
    • pp.151-158
    • /
    • 2021
  • In this paper, we analyzed the latest research trends, challenges, and potential applications of next-generation solar cell materials in various industrial fields. In addition, we present the future prospects and possibilities of smart textile hybrid energy harvesting devices that will supply electricity in combination with wearable IoT devices. A hybrid textile energy harvesting device fusing next-generation solar cells with tribo-piezoelectric elements can develop into new 'convergence integrated smart wear' by combining the material itself with wearable IoT devices in the era of the Fourth Industrial Revolution. The next-generation nanotechnology and devices proposed in this paper can be applied to smart textiles with an energy harvesting function, and we hope they will drive a paradigm shift toward creative products that provide AI services such as medical and healthcare functions through convergence with the future smart wear industry.

Remote Control System using Face and Gesture Recognition based on Deep Learning (딥러닝 기반의 얼굴과 제스처 인식을 활용한 원격 제어)

  • Hwang, Kitae;Lee, Jae-Moon;Jung, Inhwan
    • The Journal of the Institute of Internet, Broadcasting and Communication
    • /
    • v.20 no.6
    • /
    • pp.115-121
    • /
    • 2020
  • With the spread of IoT technology, various IoT applications using facial recognition are emerging. This paper describes the design and implementation of a remote control system using deep learning-based face recognition and hand gesture recognition. In general, an application system using face recognition consists of a part that captures images in real time from a camera, a part that recognizes faces in the images, and a part that acts on the recognized result. A Raspberry Pi, a single-board computer that can be mounted anywhere, was used to capture images in real time; face recognition software was developed for the server computer using TensorFlow's FaceNet model, along with hand gesture recognition software using OpenCV. We classified users into three groups, Known, Danger, and Unknown, and designed and implemented an application that opens an automatic door lock only for Known users who have passed both face recognition and the hand gesture check.
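The three-group access policy can be summarized as a small decision function. The group labels follow the abstract, while the return values here are hypothetical:

```python
def door_decision(user_group, gesture_ok):
    """Open the lock only for a Known user who also passed the gesture check;
    flag Danger users, and deny everyone else."""
    if user_group == "Known" and gesture_ok:
        return "open"
    if user_group == "Danger":
        return "alert"
    return "deny"

print(door_decision("Known", True),    # open
      door_decision("Known", False),   # deny: gesture check failed
      door_decision("Danger", True),   # alert
      door_decision("Unknown", True))  # deny
```

Requiring both factors (face and gesture) means a recognized face alone is never sufficient to unlock the door.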