• Title/Summary/Keyword: Color Broadcasting


Fall Detection Based on Human Skeleton Keypoints Using GRU

  • Kang, Yoon-Kyu;Kang, Hee-Yong;Weon, Dal-Soo
    • International Journal of Internet, Broadcasting and Communication
    • /
    • v.12 no.4
    • /
    • pp.83-92
    • /
    • 2020
  • Recent studies on fall determination focus on analyzing fall motions with a recurrent neural network (RNN) and use a deep learning approach to detect human poses in 2D from a monocular color image. In this paper, we investigate an improved detection method that estimates the positions of the head and shoulder keypoints and the acceleration of their position change, using skeletal keypoint information extracted with PoseNet from images obtained by a low-cost 2D RGB camera, in order to increase the accuracy of the fall judgment. In particular, we propose a fall detection method based on the characteristics of the post-fall posture, on the velocity of change of the body skeleton keypoints, and on the change in the ratio of the body bounding box's width to its height. A public data set was used to extract human skeletal features and to train a GRU deep learning model. In an experiment to find a feature extraction method that achieves high classification accuracy, the proposed method detected falls with a 99.8% success rate, more effectively than the conventional method that uses raw skeletal data.
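The two hand-crafted features the abstract names, keypoint velocity and the bounding box width/height ratio, can be sketched as follows. This is a minimal illustration, not the paper's code; the joint indexing (joint 0 as the head) and the frame rate are assumptions.

```python
import numpy as np

def fall_features(keypoints, fps=30.0):
    """Per-frame fall features from skeleton keypoints: vertical
    velocity of the head keypoint and the width/height ratio of the
    body bounding box. `keypoints` is (frames, joints, 2) in pixel
    coordinates, with joint 0 assumed to be the head."""
    keypoints = np.asarray(keypoints, dtype=float)
    head_y = keypoints[:, 0, 1]
    # velocity of the vertical head position (pixels per second)
    head_vel = np.gradient(head_y) * fps
    # bounding box over all joints, per frame
    w = keypoints[:, :, 0].max(axis=1) - keypoints[:, :, 0].min(axis=1)
    h = keypoints[:, :, 1].max(axis=1) - keypoints[:, :, 1].min(axis=1)
    ratio = w / np.maximum(h, 1e-6)  # ratio > 1 suggests a lying posture
    return np.stack([head_vel, ratio], axis=1)
```

A sequence of such feature vectors would then be fed to the GRU classifier.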

2D to 3D Conversion Using The Machine Learning-Based Segmentation And Optical Flow (학습기반의 객체분할과 Optical Flow를 활용한 2D 동영상의 3D 변환)

  • Lee, Sang-Hak
    • The Journal of the Institute of Internet, Broadcasting and Communication
    • /
    • v.11 no.3
    • /
    • pp.129-135
    • /
    • 2011
  • In this paper, we propose an algorithm using optical flow and machine-learning-based segmentation for the 3D conversion of 2D video. For segmentation suitable for 3D conversion, we design a new energy function in which color/texture features are incorporated through machine learning and optical flow is introduced in order to focus on regions with motion. The depth map is then calculated from the optical flow of the segmented regions, and left/right images for the 3D conversion are produced. Experiments on various videos show that the proposed method yields reliable segmentation results and depth maps for the 3D conversion of 2D video.
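The two final stages, assigning a depth per segmented region from its optical flow and synthesizing a stereo pair, can be sketched roughly as below. This is a simplified reading of the pipeline (region-mean flow magnitude as depth, naive column shifting without hole filling); the paper's actual formulation is not reproduced here.

```python
import numpy as np

def depth_from_flow(flow, labels):
    """Assign each segmented region a depth from its mean optical-flow
    magnitude (faster-moving regions treated as nearer).
    flow: (H, W, 2) flow vectors; labels: (H, W) region ids."""
    mag = np.linalg.norm(flow, axis=2)
    depth = np.zeros_like(mag)
    for r in np.unique(labels):
        depth[labels == r] = mag[labels == r].mean()
    rng = depth.max() - depth.min()
    return (depth - depth.min()) / rng if rng > 0 else depth

def stereo_pair(image, depth, max_disp=8):
    """Produce left/right views by shifting each pixel horizontally by a
    disparity proportional to its depth (no hole filling)."""
    h, w = depth.shape
    left, right = np.zeros_like(image), np.zeros_like(image)
    cols = np.arange(w)
    for y in range(h):
        d = (depth[y] * max_disp).astype(int)
        left[y, np.clip(cols + d, 0, w - 1)] = image[y]
        right[y, np.clip(cols - d, 0, w - 1)] = image[y]
    return left, right
```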

A Survey on User Interface Design of University Webzines (대학 웹진의 사용자 인터페이스 디자인 조사)

  • Lee, Joo-Hee
    • The Journal of the Institute of Internet, Broadcasting and Communication
    • /
    • v.14 no.6
    • /
    • pp.303-308
    • /
    • 2014
  • This paper examines the interface design of university webzines found through the internet portal site Naver. The following conclusions were obtained. First, university webzines use hypertext links to images, text, and video. Second, they mainly use block grids, module grids, and transformed two-tier grid layouts. Third, the webzines of Seoul Women's University, Kyungpook National University, and Korea Maritime University were found to have user-friendly layouts, colors, and access structures. Fourth, the webzines used text or image links, a search function, a site map, icons, favorites, quick menus, navigation bars, and rollover menus. Last, university webzines were shown to contribute to enhancing their value as a promotional medium.

H.264 Encoding Technique of Multi-view Video expressed by Layered Depth Image (계층적 깊이 영상으로 표현된 다시점 비디오에 대한 H.264 부호화 기술)

  • Shin, Jong-Hong;Jee, Inn-Ho
    • The Journal of the Institute of Internet, Broadcasting and Communication
    • /
    • v.14 no.2
    • /
    • pp.43-51
    • /
    • 2014
  • Because of its huge amount of data, multi-view video including depth images requires a new compression encoding technique for storage and transmission. The layered depth image is an efficient representation of multi-view video data: it builds a single data structure by synthesizing multi-view color and depth images. We suggest an efficient method for compressing such content that uses the layered depth image representation and applies video compression encoding via 3D warping. This paper proposes an enhanced compression method combining the layered depth image representation with H.264/AVC video coding. Experimental results confirm high compression performance and good quality of the reconstructed images.
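The layered depth image itself is a simple per-pixel structure: each pixel holds a front-to-back list of (depth, color) samples, so several aligned views can share one representation. A minimal sketch of that structure, assuming views have already been warped into a common camera (the merging tolerance `eps` is our own illustrative parameter):

```python
from dataclasses import dataclass

@dataclass
class LayeredDepthImage:
    """Minimal layered depth image: each pixel stores (depth, color)
    samples sorted front-to-back. Not the paper's full 3D-warping
    pipeline, just the representation it compresses."""
    width: int
    height: int
    layers: list = None

    def __post_init__(self):
        self.layers = [[[] for _ in range(self.width)]
                       for _ in range(self.height)]

    def insert(self, x, y, depth, color, eps=1e-3):
        """Insert a sample; samples closer than eps in depth are treated
        as the same surface seen from several views and merged."""
        px = self.layers[y][x]
        for i, (d, _) in enumerate(px):
            if abs(d - depth) < eps:
                return                      # already represented
            if depth < d:
                px.insert(i, (depth, color))
                return
        px.append((depth, color))

    def front(self, x, y):
        """Nearest sample at a pixel, or None if the pixel is empty."""
        px = self.layers[y][x]
        return px[0] if px else None
```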

Proposal of Image Segmentation Technique using Persistent Homology (지속적 호몰로지를 이용한 이미지 세그멘테이션 기법 제안)

  • Hahn, Hee Il
    • The Journal of the Institute of Internet, Broadcasting and Communication
    • /
    • v.18 no.1
    • /
    • pp.223-229
    • /
    • 2018
  • This paper proposes a robust image segmentation technique in which the topological persistence of each connected component is used as the feature vector for graph-based image segmentation. The topological persistence of the components, obtained from the super-level sets of the image, is computed from the Morse function associated with the gray-level or color value of each pixel. The process by which components are born and merged with other components is described in terms of the zero-dimensional homology group. Extensive experiments on a variety of images show that a more accurate segmentation can be obtained by merging components of small persistence into adjacent components of large persistence.
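The birth-and-merge process the abstract describes is the standard zero-dimensional persistence computation: sweep the threshold downward through the super-level sets and track connected components with union-find. A self-contained sketch (4-neighborhood, elder rule; the paper's exact graph construction may differ):

```python
import numpy as np

def superlevel_persistence(img):
    """0-dimensional persistence of the super-level filtration of a
    grayscale image. Sweeps the threshold downward, tracks connected
    components with union-find, and records a (birth, death) pair each
    time a component dies by merging into an older one."""
    img = np.asarray(img, dtype=float)
    h, w = img.shape
    parent, birth, pairs = {}, {}, []

    def find(p):
        while parent[p] != p:
            parent[p] = parent[parent[p]]   # path compression
            p = parent[p]
        return p

    # add pixels from brightest to darkest
    order = sorted(((y, x) for y in range(h) for x in range(w)),
                   key=lambda p: -img[p])
    for p in order:
        parent[p], birth[p] = p, img[p]
        y, x = p
        for q in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if q in parent:                 # neighbor already in the set
                rp, rq = find(p), find(q)
                if rp != rq:
                    # elder rule: the younger component dies here
                    elder, younger = ((rp, rq) if birth[rp] >= birth[rq]
                                      else (rq, rp))
                    pairs.append((birth[younger], img[p]))
                    parent[younger] = elder
    return pairs  # persistence of a pair = birth - death
```

Components whose persistence (birth minus death) falls below a threshold would then be merged into their adjacent, more persistent components.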

Smart Emotion Lighting Control System Based on Android Platform (안드로이드 플랫폼 기반의 스마트 감성조명 제어 시스템)

  • Jo, Eun-Ja;Lin, Chi-Ho
    • The Journal of the Institute of Internet, Broadcasting and Communication
    • /
    • v.14 no.3
    • /
    • pp.147-153
    • /
    • 2014
  • In this paper, we propose a smart emotional lighting control system based on the Android platform. The proposed system consists of the Android platform, emotional lighting equipment, an expansion device, and a ZigBee module. The system supports automatic control using an illuminance sensor, and it can be designed so that partial control is possible by selecting the desired lights. Experimental results show that, compared with a conventional lighting control system, the proposed system reduces power consumption and enables efficient lighting control. In an office, color and brightness can be controlled to provide suitable working conditions, improving concentration and the ability to work.
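The power saving comes from dimming the lights only enough to top up ambient illuminance toward a target. A toy sketch of that sensor-driven rule; the function name, the target of 500 lux, and the assumed linear lux-per-level response are all our own illustrative choices, not values from the paper:

```python
def lamp_level(ambient_lux, target_lux=500, max_level=255):
    """Choose a dimming level (0..max_level) that tops up the measured
    ambient illuminance toward a target, assuming a linear relation
    between dimming level and delivered lux."""
    deficit = max(target_lux - ambient_lux, 0)
    lux_per_level = target_lux / max_level   # assumed linear response
    return min(int(round(deficit / lux_per_level)), max_level)
```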

A Portable Micro-display Driver and Device for Vision Improvement (시력 향상을 위한 휴대형 마이크로디스플레이 구동 드라이버 및 장치)

  • Ryu, Young-Kee;Oh, Choonsuk
    • The Journal of the Institute of Internet, Broadcasting and Communication
    • /
    • v.16 no.3
    • /
    • pp.129-135
    • /
    • 2016
  • There are many vision enhancement devices for people with low vision, but most conventional devices offer only simple magnification at high cost. The symptoms of low vision vary widely, so improving visibility requires control of image magnification, brightness, and contrast. We developed a portable micro-display driver and device for vision enhancement. The device is based on four methods we suggest: image magnification, specific color control, backlight (BLU) brightness control, and visual-axis control using a prism. Basic clinical experiments with the proposed Head Mounted Visual Enhancement Device (HMVED) have been performed. The results show beneficial effects compared with conventional devices; its low weight, low cost, and easy portability can improve the quality of life of people with low vision.
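The magnification and brightness/contrast adjustments are standard point operations on the image before it reaches the micro-display. A minimal sketch for an 8-bit grayscale frame (parameter names and the mid-gray contrast pivot are our assumptions, not the driver's actual interface):

```python
import numpy as np

def enhance(image, zoom=2, brightness=0, contrast=1.0):
    """Integer nearest-neighbour magnification plus a linear
    brightness/contrast adjustment on an 8-bit grayscale frame."""
    img = np.asarray(image, dtype=float)
    # nearest-neighbour zoom by pixel repetition
    img = np.repeat(np.repeat(img, zoom, axis=0), zoom, axis=1)
    # linear contrast around mid-gray, then a brightness offset
    img = (img - 128.0) * contrast + 128.0 + brightness
    return np.clip(img, 0, 255).astype(np.uint8)
```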

Classification of Livestock Diseases Using GLCM and Artificial Neural Networks

  • Choi, Dong-Oun;Huan, Meng;Kang, Yun-Jeong
    • International Journal of Internet, Broadcasting and Communication
    • /
    • v.14 no.4
    • /
    • pp.173-180
    • /
    • 2022
  • With naked-eye observation, the health of livestock can be monitored through their range of activity, temperature, pulse, cough, snot, eye discharge, ears, and feces. To assess livestock health, this paper uses calf face image data to classify health status by image shape, color, and texture. A set of preprocessed images from which the health status of calves can be judged was used in the study: 177 images of normal calves and 130 images of abnormal calves. We used GLCM calculation and convolutional neural networks, extracting six GLCM texture attributes from the data set and training the network on the calf images. In our experiments, the GLCM-CNN achieved a classification rate of 91.3%, and subsequent research will further exploit the GLCM texture attributes. We hope this study helps assess aspects of livestock health that cannot be observed with the naked eye.
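A gray-level co-occurrence matrix counts how often pairs of gray levels co-occur at a fixed pixel displacement; texture attributes such as contrast are then moments of that matrix. A from-scratch sketch (the paper's exact displacements, quantization, and choice of six attributes are not given here):

```python
import numpy as np

def glcm(img, dx=1, dy=0, levels=8):
    """Gray-level co-occurrence matrix for one displacement (dx, dy),
    normalised to joint probabilities. `img` must hold integer gray
    levels in [0, levels)."""
    img = np.asarray(img)
    h, w = img.shape
    m = np.zeros((levels, levels), dtype=float)
    for y in range(max(0, -dy), min(h, h - dy)):
        for x in range(max(0, -dx), min(w, w - dx)):
            m[img[y, x], img[y + dy, x + dx]] += 1
    return m / max(m.sum(), 1)

def glcm_contrast(m):
    """Contrast attribute: expected squared gray-level difference."""
    i, j = np.indices(m.shape)
    return float(((i - j) ** 2 * m).sum())
```

Such attribute values (contrast, homogeneity, energy, and so on) form the feature vector passed to the classifier.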

Image Segmentation for Fire Prediction using Deep Learning (딥러닝을 이용한 화재 발생 예측 이미지 분할)

  • TaeHoon, Kim;JongJin, Park
    • The Journal of the Institute of Internet, Broadcasting and Communication
    • /
    • v.23 no.1
    • /
    • pp.65-70
    • /
    • 2023
  • In this paper, we used a deep learning model to detect and segment flame and smoke in real time from fire footage. To this end, the well-known U-Net was used with multi-class labels to separate flame and smoke. Training with the proposed technique produced very good loss and accuracy values of 0.0486 and 0.97996, respectively. The IoU value used in object detection is also very good at 0.849. When predicting fire images that were not used for training, the learned model detected and segmented flame and smoke well and distinguished smoke color correctly. The proposed method can be used to build fire prediction and detection systems.
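The IoU figure reported above is the standard intersection-over-union between predicted and ground-truth masks. A minimal per-class version for multi-class masks (the class indices, e.g. background / flame / smoke, are an assumed labeling):

```python
import numpy as np

def iou_per_class(pred, truth, num_classes):
    """Intersection-over-union per class for integer-labelled
    segmentation masks; NaN for classes absent from both masks."""
    pred, truth = np.asarray(pred), np.asarray(truth)
    ious = []
    for c in range(num_classes):
        p, t = pred == c, truth == c
        union = np.logical_or(p, t).sum()
        inter = np.logical_and(p, t).sum()
        ious.append(inter / union if union else float("nan"))
    return ious
```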

A Research of Ink and Wash Elements on the 3D Animation Film <Deep Sea>

  • Biying Guo;Xinyi Shan;Jeanhun Chung
    • International Journal of Internet, Broadcasting and Communication
    • /
    • v.15 no.3
    • /
    • pp.82-87
    • /
    • 2023
  • <Deep Sea> is a 3D animated film that stands out for its exceptional special effects and distinctive artistic style. The film employs a multitude of dazzling, vibrant ink particles, creating a strong sense of three-dimensionality and weightlessness while portraying a dreamlike, elegant deep-sea ink painting. Furthermore, through a fragmented stream-of-consciousness narrative technique, the film establishes a unique artistic effect infused with a Chinese atmosphere. By analyzing the film's unique particle-ink art style, its use of color, and its stream-of-consciousness narrative methods, this paper discusses the innovative art style created by combining traditional ink art with three-dimensional technology, and the integration of traditional ink aesthetics and artistic conception into animated films. The objective is to cultivate a new ink art style and demonstrate the importance of traditional cultural expression in animated films, while providing new perspectives for the future application of traditional art in animation.