• Title/Summary/Keyword: 2D Convolutional Neural Network


ROV Manipulation from Observation and Exploration using Deep Reinforcement Learning

  • Jadhav, Yashashree Rajendra;Moon, Yong Seon
    • Journal of Advanced Research in Ocean Engineering / v.3 no.3 / pp.136-148 / 2017
  • The paper presents dual-arm ROV manipulation using deep reinforcement learning. The purpose of this underwater manipulator is to investigate and excavate natural resources in the ocean, find lost aircraft black boxes, and perform other extremely dangerous tasks without endangering humans. This research emphasizes a self-learning approach using Deep Reinforcement Learning (DRL). The DRL technique allows the ROV to learn the policy for performing a manipulation task directly from raw image data. Our proposed architecture maps visual inputs (images) to control actions (outputs) and receives a reward after each action, which allows an agent to learn manipulation skills through trial and error. We trained our network in simulation; the raw images and rewards are provided directly by our simple Lua simulator, which models dynamic underwater environmental conditions. The major goal of this research is to provide a smart, self-learning way to achieve manipulation in a highly dynamic underwater environment. The results show that a dual robotic arm trained for 3-DOF movement successfully achieved a target-reaching task in 2D space while accounting for real environmental factors.
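
As a rough illustration of the image-to-action mapping the abstract describes, the sketch below shows a small convolutional Q-network in PyTorch that takes a raw image and outputs one value per discrete arm action. The input size, number of actions, and layer widths are illustrative assumptions, not the authors' architecture.

```python
import torch
import torch.nn as nn

class ConvQNetwork(nn.Module):
    """Minimal CNN mapping a raw image to Q-values over discrete arm actions."""
    def __init__(self, num_actions: int = 6, in_channels: int = 3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_channels, 32, kernel_size=8, stride=4), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=4, stride=2), nn.ReLU(),
            nn.Conv2d(64, 64, kernel_size=3, stride=1), nn.ReLU(),
        )
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * 7 * 7, 512), nn.ReLU(),
            nn.Linear(512, num_actions),   # one value per control action
        )

    def forward(self, x):
        return self.head(self.features(x))

# Example: an 84x84 RGB frame from a simulator -> greedy action index.
frame = torch.rand(1, 3, 84, 84)
q_values = ConvQNetwork()(frame)
action = q_values.argmax(dim=1)
```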

2D and 3D Hand Pose Estimation Based on Skip Connection Form (스킵 연결 형태 기반의 손 관절 2D 및 3D 검출 기법)

  • Ku, Jong-Hoe;Kim, Mi-Kyung;Cha, Eui-Young
    • Journal of the Korea Institute of Information and Communication Engineering / v.24 no.12 / pp.1574-1580 / 2020
  • Traditional pose estimation methods rely either on special devices or on images processed with classical image processing. The disadvantage of using a device is that the environments in which it can be used are limited and the cost is high. Using cameras and image processing reduces environmental constraints and cost, but performance is lower. Convolutional Neural Networks (CNNs) have been studied for pose estimation using only a camera, without these disadvantages, and various techniques have been proposed to improve recognition performance. In this paper, the effect of skip connections on the network is examined by applying various forms of skip connection to hand joint recognition. The experiments confirm that adding skip connections beyond the basic ones improves performance, and that the network with downward skip connections performs best.
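
The paper's specific skip-connection variants are not described here, but the PyTorch sketch below shows, under assumed layer sizes, how an additional skip connection can be wired in alongside a basic identity skip.

```python
import torch
import torch.nn as nn

class SkipBlock(nn.Module):
    """Conv block with a basic identity skip plus an optional extra skip input."""
    def __init__(self, channels: int = 64):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1)
        self.relu = nn.ReLU()

    def forward(self, x, extra_skip=None):
        out = self.relu(self.conv1(x))
        out = self.conv2(out)
        out = out + x                      # basic skip connection
        if extra_skip is not None:
            out = out + extra_skip         # additional skip from an earlier layer
        return self.relu(out)

# Example: a feature map from an earlier stage reused as an additional skip.
early = torch.rand(1, 64, 32, 32)
x = torch.rand(1, 64, 32, 32)
y = SkipBlock()(x, extra_skip=early)
```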

Human Skeleton Keypoints based Fall Detection using GRU (PoseNet과 GRU를 이용한 Skeleton Keypoints 기반 낙상 감지)

  • Kang, Yoon Kyu;Kang, Hee Yong;Weon, Dal Soo
    • Journal of the Korea Academia-Industrial cooperation Society / v.22 no.2 / pp.127-133 / 2021
  • Recent studies of human falls have analyzed fall motions using recurrent neural networks (RNNs) and deep learning approaches that detect 2D human poses from a single color image. In this paper, we investigate a detection method that estimates the positions of the head and shoulder keypoints and the acceleration of their positional change, using skeletal keypoint information extracted with PoseNet from images obtained with a low-cost 2D RGB camera, to increase the accuracy of fall judgments. In particular, we propose a fall detection method based on the characteristics of post-fall posture within the fall motion-analysis framework. A public data set was used to extract human skeletal features, and in experiments to find a feature extraction method that achieves high classification accuracy, the proposed method showed a 99.8% success rate in detecting falls, outperforming a conventional method that uses raw skeletal data.
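
As a hedged sketch of the kind of keypoint-sequence classifier described above, the PyTorch model below feeds per-frame skeletal features (for example, head and shoulder coordinates plus their velocity and acceleration) through a GRU and a binary fall / no-fall head. The feature dimension and window length are illustrative assumptions.

```python
import torch
import torch.nn as nn

class FallGRU(nn.Module):
    """GRU over per-frame skeletal features, ending in a fall / no-fall logit pair."""
    def __init__(self, feat_dim: int = 12, hidden: int = 64):
        super().__init__()
        self.gru = nn.GRU(feat_dim, hidden, batch_first=True)
        self.classifier = nn.Linear(hidden, 2)

    def forward(self, x):                 # x: (batch, frames, feat_dim)
        _, h = self.gru(x)                # h: (1, batch, hidden)
        return self.classifier(h[-1])     # logits for [no_fall, fall]

# Example: a 30-frame window of keypoint positions, velocity, and acceleration.
window = torch.rand(1, 30, 12)
logits = FallGRU()(window)
```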

Multiple Sclerosis Lesion Detection using 3D Autoencoder in Brain Magnetic Resonance Images (3D 오토인코더 기반의 뇌 자기공명영상에서 다발성 경화증 병변 검출)

  • Choi, Wonjune;Park, Seongsu;Kim, Yunsoo;Gahm, Jin Kyu
    • Journal of Korea Multimedia Society / v.24 no.8 / pp.979-987 / 2021
  • Multiple Sclerosis (MS) can be diagnosed early by detecting lesions in brain magnetic resonance images (MRI). Unsupervised anomaly detection methods based on autoencoders have recently been proposed for automated detection of MS lesions. However, these autoencoder-based methods were developed only for 2D images of MRI (e.g., 2D cross-sectional slices), and thus do not utilize the full 3D information of MRI. In this paper, we therefore propose a novel 3D autoencoder-based framework for detecting the MS lesion volume in MRI. We first define a 3D convolutional neural network (CNN) for full MRI volumes and build each encoder and decoder layer of the 3D autoencoder from 3D CNN blocks. We also add skip connections between the encoder and decoder layers for effective data reconstruction. In the experiments, we compare the 3D autoencoder-based method with 2D autoencoder models, using training data from 80 healthy subjects in the Human Connectome Project (HCP) and test data from 25 MS patients in the Longitudinal Multiple Sclerosis Lesion Segmentation Challenge, and show that the proposed method improves MS lesion prediction by up to 15%.
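
A minimal sketch of a 3D convolutional autoencoder with an encoder-to-decoder skip connection is shown below; the volume size, channel counts, and single skip are simplifying assumptions rather than the authors' exact network.

```python
import torch
import torch.nn as nn

class AE3D(nn.Module):
    """Tiny 3D conv autoencoder with one encoder-to-decoder skip connection."""
    def __init__(self):
        super().__init__()
        self.enc1 = nn.Sequential(nn.Conv3d(1, 16, 3, stride=2, padding=1), nn.ReLU())
        self.enc2 = nn.Sequential(nn.Conv3d(16, 32, 3, stride=2, padding=1), nn.ReLU())
        self.dec1 = nn.Sequential(
            nn.ConvTranspose3d(32, 16, 4, stride=2, padding=1), nn.ReLU())
        self.dec2 = nn.ConvTranspose3d(16, 1, 4, stride=2, padding=1)

    def forward(self, x):
        e1 = self.enc1(x)                 # half-resolution features
        e2 = self.enc2(e1)                # quarter-resolution bottleneck
        d1 = self.dec1(e2) + e1           # skip connection aids reconstruction
        return self.dec2(d1)

# Example: a 64^3 MRI sub-volume; reconstruction error highlights anomalies.
vol = torch.rand(1, 1, 64, 64, 64)
recon = AE3D()(vol)
error_map = (vol - recon).abs()
```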

Towards Low Complexity Model for Audio Event Detection

  • Saleem, Muhammad;Shah, Syed Muhammad Shehram;Saba, Erum;Pirzada, Nasrullah;Ahmed, Masood
    • International Journal of Computer Science & Network Security / v.22 no.9 / pp.175-182 / 2022
  • In our daily lives, we come across different types of information, for example in the form of multimedia and text. We all need different types of information for common routines such as watching or reading the news, listening to the radio, and watching various kinds of videos. However, we sometimes run into problems when a particular type of content is required. For example, someone listening to the radio wants to hear jazz, but all the radio channels play pop music mixed with advertisements; the listener is stuck with pop music and gives up searching for jazz. This example can be solved with an automatic audio classification system. Deep Learning (DL) models can make this easier through audio classification, but such models are expensive and difficult to deploy on edge devices such as the Nano BLE Sense or Raspberry Pi because they require substantial computational power, such as a graphics processing unit (GPU). To address this problem, we propose a low-complexity DL model for Audio Event Detection (AED). We extract Mel-spectrograms of dimension 128×431×1 from the audio signals and apply normalization. Three data augmentation methods are applied: frequency masking, time masking, and mixup. In addition, we design a Convolutional Neural Network (CNN) with spatial dropout, batch normalization, and separable 2D convolutions, inspired by VGGNet [1], and reduce the model size by applying float16 quantization to the trained model. Experiments were conducted on the updated dataset provided by the Detection and Classification of Acoustic Scenes and Events (DCASE) 2020 challenge. Our model achieved a validation loss of 0.33 and an accuracy of 90.34% with a model size of 132.50 KB.
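
A hedged sketch of the kind of pipeline the abstract outlines is given below: SpecAugment-style masking on a 128×431 Mel-spectrogram, followed by a small PyTorch block using depthwise-separable 2D convolutions, batch normalization, and spatial dropout. The layer counts, masking parameters, and class count are assumptions, and mixup is omitted for brevity.

```python
import torch
import torch.nn as nn
import torchaudio.transforms as T

# SpecAugment-style masking on a (batch, 1, 128, 431) Mel-spectrogram.
augment = nn.Sequential(
    T.FrequencyMasking(freq_mask_param=16),
    T.TimeMasking(time_mask_param=32),
)

class SeparableBlock(nn.Module):
    """Depthwise-separable conv + batch norm + spatial dropout, VGG-style."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.block = nn.Sequential(
            nn.Conv2d(in_ch, in_ch, 3, padding=1, groups=in_ch),  # depthwise
            nn.Conv2d(in_ch, out_ch, 1),                          # pointwise
            nn.BatchNorm2d(out_ch),
            nn.ReLU(),
            nn.Dropout2d(0.2),             # spatial dropout over whole channels
            nn.MaxPool2d(2),
        )

    def forward(self, x):
        return self.block(x)

model = nn.Sequential(
    SeparableBlock(1, 16), SeparableBlock(16, 32),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, 10),
)

mel = torch.rand(4, 1, 128, 431)           # batch of normalized Mel-spectrograms
logits = model(augment(mel))
```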

A Sketch-based 3D Object Retrieval Approach for Augmented Reality Models Using Deep Learning

  • Ji, Myunggeun;Chun, Junchul
    • Journal of Internet Computing and Services / v.21 no.1 / pp.33-43 / 2020
  • Retrieving a 3D model from a 3D database and simultaneously augmenting the retrieved model in an Augmented Reality (AR) system has become an issue in conveniently building plausible AR environments. Sketch-based 3D object retrieval is considered an intuitive way to search for 3D objects, using human-drawn sketches as the query. In this paper, we propose a novel deep learning based approach to sketch-based 3D object retrieval for AR models. We introduce a new method that uses a Sketch CNN, a Wasserstein CNN, and a Wasserstein center loss for retrieval. In particular, the Wasserstein center loss learns the center of each object category and reduces the Wasserstein distance between that center and the features of the same category. The proposed 3D object retrieval and augmentation consists of three major steps. First, the Wasserstein CNN extracts features from 2D images taken of the 3D object from various directions and obtains features of the 3D data by computing the Wasserstein barycenter of the per-image features. Second, the features of the sketch are extracted using a separate Sketch CNN. Finally, we adopt a sketch-based object matching method to localize the natural marker in the images and register a 3D virtual object in the AR system; using the detected marker, the retrieved 3D virtual object is augmented automatically. The experiments show that the proposed method is efficient for retrieving and augmenting objects.
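
The center-loss idea in the abstract can be illustrated with the simplified sketch below, which keeps one learnable center per category and penalizes the distance between features and their class center. Note that it uses squared Euclidean distance as a stand-in; the paper's loss is defined with the Wasserstein distance, and the class and feature dimensions here are assumptions.

```python
import torch
import torch.nn as nn

class CenterLoss(nn.Module):
    """Simplified center loss: pull features toward their own class center."""
    def __init__(self, num_classes: int, feat_dim: int):
        super().__init__()
        self.centers = nn.Parameter(torch.randn(num_classes, feat_dim))

    def forward(self, features, labels):
        # Distance between each feature and the center of its own category.
        return ((features - self.centers[labels]) ** 2).sum(dim=1).mean()

# Example: 128-D features from the sketch or 3D-view branch of the network.
loss_fn = CenterLoss(num_classes=40, feat_dim=128)
feats = torch.randn(8, 128)
labels = torch.randint(0, 40, (8,))
loss = loss_fn(feats, labels)
```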

Deep learning based Person Re-identification with RGB-D sensors

  • Kim, Min;Park, Dong-Hyun
    • Journal of the Korea Society of Computer and Information / v.26 no.3 / pp.35-42 / 2021
  • In this paper, we propose a deep learning-based person re-identification method using a three-dimensional RGB-Depth Xtion2 camera that considers joint coordinates and dynamic features (velocity, acceleration). The main idea of the proposed methodology is to easily extract gait data, such as joint coordinates and dynamic features, with an RGB-D camera and to automatically identify gait patterns through a self-designed one-dimensional convolutional neural network classifier (1D-ConvNet). Accuracy was measured with the F1 score, and the influence of the dynamic features was measured by comparing against a classifier model (JC) that does not consider them. As a result, the proposed classifier model that considers the dynamic characteristics (JCSpeed) showed an F1 score about 8% higher than JC.
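
A minimal sketch of a 1D-ConvNet of the kind described above is given below, applied to a window of per-frame gait features (joint coordinates plus velocity and acceleration). The channel counts, window length, and number of identities are illustrative assumptions.

```python
import torch
import torch.nn as nn

class GaitConv1D(nn.Module):
    """1D CNN over a gait window: channels = features, length = frames."""
    def __init__(self, feat_dim: int = 45, num_ids: int = 10):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(feat_dim, 64, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv1d(64, 64, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(),
            nn.Linear(64, num_ids),
        )

    def forward(self, x):                 # x: (batch, feat_dim, frames)
        return self.net(x)

# Example: a 60-frame window of joint coordinates, velocity, and acceleration.
window = torch.rand(1, 45, 60)
identity_logits = GaitConv1D()(window)
```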

Investigation of image preprocessing and face covering influences on motion recognition by a 2D human pose estimation algorithm (모션 인식을 위한 2D 자세 추정 알고리듬의 이미지 전처리 및 얼굴 가림에 대한 영향도 분석)

  • Noh, Eunsol;Yi, Sarang;Hong, Seokmoo
    • Journal of the Korea Academia-Industrial cooperation Society / v.21 no.7 / pp.285-291 / 2020
  • In manufacturing, humans are being replaced by robots, but expert skills remain difficult to convert into data, which makes them difficult to transfer to industrial robots. One approach is visual motion recognition, but physical features may be judged differently depending on the image data. This study aimed to improve the accuracy of vision methods for estimating human posture. Three OpenPose vision models were applied: MPII, COCO, and COCO+foot. To identify the effects of face-covering accessories and image preprocessing on the Convolutional Neural Network (CNN) structure, the presence or absence of accessories, the image size, and the filtering were set as the parameters affecting the identification of a human's posture. For each parameter, image data were applied to the three models, and the errors between the actual and predicted values, as well as the percentage of correct keypoints (PCK), were calculated. The COCO+foot model showed the lowest sensitivity to all three parameters. A reduction in image size of up to 50% (from 3024×4032 to 1512×2016 pixels) was considered acceptable. Emboss filtering, in combination with MPII, provided the best results (reduced error of less than 60 pixels).
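
The preprocessing parameters studied here (image size and filtering) can be reproduced with standard OpenCV calls; the sketch below halves the image resolution and applies a common emboss kernel before the image would be passed to an OpenPose model. The file name and the kernel values are illustrative assumptions, not the paper's exact settings.

```python
import cv2
import numpy as np

# Hypothetical input image; 3024x4032 pixels in the paper's setting.
img = cv2.imread("subject.jpg")

# Reduce resolution by 50% per dimension (e.g., 3024x4032 -> 1512x2016).
half = cv2.resize(img, None, fx=0.5, fy=0.5, interpolation=cv2.INTER_AREA)

# A commonly used 3x3 emboss kernel; the paper's exact filter may differ.
emboss_kernel = np.array([[-2, -1, 0],
                          [-1,  1, 1],
                          [ 0,  1, 2]], dtype=np.float32)
embossed = cv2.filter2D(half, -1, emboss_kernel)

# `embossed` would then be fed to the OpenPose MPII / COCO / COCO+foot model.
```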

Visual Object Tracking Fusing CNN and Color Histogram based Tracker and Depth Estimation for Automatic Immersive Audio Mixing

  • Park, Sung-Jun;Islam, Md. Mahbubul;Baek, Joong-Hwan
    • KSII Transactions on Internet and Information Systems (TIIS) / v.14 no.3 / pp.1121-1141 / 2020
  • We propose a robust visual object tracking algorithm that fuses a convolutional neural network tracker, trained offline on a large number of video repositories, with a color histogram based tracker, in order to track objects for immersive audio mixing. Our algorithm addresses the problems of occlusion and large movements in the CNN-based GOTURN generic object tracker. The key idea is to train a binary classifier offline on the color histogram similarity values estimated from both trackers, to select the appropriate tracker for the target at each step, and to update both trackers with the predicted bounding box position of the target to continue tracking. Furthermore, a histogram similarity constraint is applied before updating the trackers to maximize tracking accuracy. Finally, we compute the depth (z) of the target object with a prominent unsupervised monocular depth estimation algorithm to obtain the 3D position needed to mix the immersive audio into that object. Our proposed algorithm demonstrates about 2% higher accuracy than the GOTURN algorithm on the VOT2014 tracking benchmark. Additionally, our tracker can track multiple objects by applying the single object tracker to each target, although it has not been evaluated on any MOT benchmark.
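
The color-histogram similarity used to choose between the two trackers can be illustrated with OpenCV, as in the hedged sketch below, which compares HSV histograms of the target template and a candidate bounding-box patch. The bin counts and the correlation metric are assumptions, not the paper's exact choices.

```python
import cv2
import numpy as np

def hsv_histogram(patch_bgr, bins=(16, 16)):
    """Normalized 2D hue-saturation histogram of an image patch."""
    hsv = cv2.cvtColor(patch_bgr, cv2.COLOR_BGR2HSV)
    hist = cv2.calcHist([hsv], [0, 1], None, list(bins), [0, 180, 0, 256])
    cv2.normalize(hist, hist, alpha=1.0, norm_type=cv2.NORM_L1)
    return hist

def histogram_similarity(template_bgr, candidate_bgr):
    """Correlation between template and candidate histograms (1.0 = identical)."""
    h1 = hsv_histogram(template_bgr)
    h2 = hsv_histogram(candidate_bgr)
    return cv2.compareHist(h1, h2, cv2.HISTCMP_CORREL)

# Example with random patches standing in for the target and a predicted box.
template = np.random.randint(0, 255, (64, 64, 3), dtype=np.uint8)
candidate = np.random.randint(0, 255, (64, 64, 3), dtype=np.uint8)
score = histogram_similarity(template, candidate)  # input to tracker selection
```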

Object Pose Estimation and Motion Planning for Service Automation System (서비스 자동화 시스템을 위한 물체 자세 인식 및 동작 계획)

  • Youngwoo Kwon;Dongyoung Lee;Hosun Kang;Jiwook Choi;Inho Lee
    • The Journal of Korea Robotics Society / v.19 no.2 / pp.176-187 / 2024
  • Recently, automated solutions using collaborative robots have been emerging in various industries. Their primary functions include Pick & Place, Peg-in-Hole, fastening and assembly, welding, and more, and they are being applied and researched in various fields. The application of these robots varies depending on the characteristics of the gripper attached to the end of the collaborative robot. To grasp a variety of objects, a gripper with a high degree of freedom is required. In this paper, we propose a service automation system using a multi-degree-of-freedom gripper, collaborative robots, and vision sensors. Assuming that various products are placed at a checkout counter, we use three cameras to recognize the objects, estimate their pose, and generate grasping points. The objects are grasped at these points by the multi-degree-of-freedom gripper, and experiments are conducted on barcode recognition, a key task in service automation. To recognize objects, we use a CNN (Convolutional Neural Network) based algorithm, and we estimate each object's 6D pose from the point cloud. Using the recognized object's 6D pose information, we generate grasping points for the multi-degree-of-freedom gripper and perform re-grasping in a direction that facilitates barcode scanning. The experiment was conducted with four selected objects, progressing through identification, 6D pose estimation, and grasping, and recording the success or failure of barcode recognition to demonstrate the effectiveness of the proposed system.
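
One common way to derive a rough 6D pose and grasp point from a segmented object point cloud is a centroid-plus-principal-axes fit; the numpy sketch below illustrates that general idea only and is not the authors' CNN and point-cloud pipeline.

```python
import numpy as np

def coarse_pose_from_points(points: np.ndarray):
    """Centroid + PCA axes of an Nx3 object point cloud -> rough 6D pose."""
    centroid = points.mean(axis=0)                     # translation estimate
    centered = points - centroid
    # Principal axes from the point distribution (rotation estimate).
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    rotation = vt.T                                    # columns are object axes
    if np.linalg.det(rotation) < 0:                    # keep a right-handed frame
        rotation[:, -1] *= -1
    return rotation, centroid

# Example: synthetic points standing in for a segmented product on the counter.
cloud = np.random.rand(500, 3)
R, t = coarse_pose_from_points(cloud)
grasp_point = t                                        # e.g., grasp at the centroid
approach_axis = R[:, 2]                                # smallest-variance axis
```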