• Title/Summary/Keywords: Video Identification


A Blind Video Watermarking Technique Using Luminance Masking and DC Modulus Algorithm (휘도 마스킹과 DC Modulus 알고리즘을 이용한 비디오 워터마킹)

  • Jang Yong-Won;Kim, In-Taek;Han, Seung-Soo
    • The Transactions of the Korean Institute of Electrical Engineers D
    • /
    • v.51 no.7
    • /
    • pp.302-307
    • /
    • 2002
  • Digital watermarking is a technique that embeds an invisible signal containing owner identification and copy-control information into multimedia data such as audio, video, and images for copyright protection. A new MPEG watermark embedding algorithm using the complex block effect based on the Human Visual System (HVS) is introduced in this paper. In this algorithm, 8×8 dark blocks are selected, and the watermark is embedded in the DC component of the discrete cosine transform (DCT) by using quantization and modulus calculation. This algorithm uses a blind watermark retrieval technique, which detects the embedded watermark without using the original image. The experimental results show that the proposed watermarking technique is robust against MPEG coding, bitrate changes, and various GOP (Group of Pictures) changes.
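The embedding step described above, quantization plus modulus calculation on the DC coefficient, can be sketched as follows. The quantization step `q` and the bin-centering strategy are assumptions for illustration; the abstract does not give the exact parameters:

```python
import math

def embed_bit(dc, bit, q=16.0):
    """Embed one watermark bit into the DC DCT coefficient of a dark
    8x8 block by forcing the parity of its quantization level."""
    level = int(math.floor(dc / q))
    if level % 2 != bit:
        level += 1                    # move to a bin with the right parity
    return level * q + q / 2.0        # centre of the selected bin

def extract_bit(dc, q=16.0):
    """Blind detection: recover the bit from the marked coefficient
    alone, without access to the original image."""
    return int(math.floor(dc / q)) % 2
```

Because the marked value sits at the centre of a quantization bin, the bit survives any perturbation smaller than q/2, which is the kind of robustness to recompression the abstract reports.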

Analysis on Video Image Effects in <Jang Han Ga>, China's Performing Arts Work of Cultural Tourism (중국의 문화관광 공연작품 <장한가>에 나타난 영상이미지 효과 분석)

  • Yook, Jung-Hak
    • The Journal of the Korea Contents Association
    • /
    • v.13 no.6
    • /
    • pp.77-85
    • /
    • 2013
  • This study analyzes the effects that video imagery in Seo-an's <Jang Han Ga>, claimed to be China's first large-scale historic dance drama, has on the performance; it focuses on investigating which video images are used to achieve these effects in presenting the work's specific themes and materials. 'Image' means a 'reflection of an object', as in movies, television, and dictionaries, and its coverage is extensive. The root of the word 'image' is the Latin imitari, signifying a concrete and mental visual representation. In other words, 'video image' can be considered a combination of two closely related words, 'video' and 'image'. Video is not just the sum of traditional art genres, such as the literary value, theatrical qualities, and artistry of a scenario, but a whole work that integrates the original functions of all kinds of art and connects them to the subtle image creation of human beings. The effects of video imagery represented in <Jang Han Ga> are as follows: first, the expressive effect of connotative meaning, reflecting the spirit of the age and its culture; second, imaginary identification; third, scene transformation; fourth, dramatic interest through immersion; and, last but not least, visual effect through the dimension of the performance.

Viewpoint Invariant Person Re-Identification for Global Multi-Object Tracking with Non-Overlapping Cameras

  • Gwak, Jeonghwan;Park, Geunpyo;Jeon, Moongu
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.11 no.4
    • /
    • pp.2075-2092
    • /
    • 2017
  • Person re-identification is the task of matching pedestrians observed from non-overlapping camera views. It has important applications in video surveillance, such as person retrieval, person tracking, and activity analysis. However, it is a very challenging problem due to illumination, pose, and viewpoint variations between non-overlapping camera views. In this work, we propose a viewpoint-invariant method for matching pedestrian images using the orientation of the pedestrian. First, the proposed method divides a pedestrian image into patches and assigns an angle to each patch using the orientation of the pedestrian, under the assumption that the human body has a cylindrical shape. The difference between angles is then used to compute the similarity between patches. We applied the proposed method to real-time global multi-object tracking across multiple disjoint cameras with non-overlapping fields of view. The re-identification algorithm builds global trajectories by connecting local trajectories obtained by different local trackers. The effectiveness of the viewpoint-invariant method for person re-identification was validated on the VIPeR dataset. In addition, we demonstrated the effectiveness of the proposed approach for inter-camera multiple object tracking on the MCT dataset with ground-truth data for local tracking.
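A minimal sketch of the angle assignment and angle-weighted patch comparison under the cylindrical-body assumption. The linear angle sweep across patches and the cosine weighting are illustrative choices, not the paper's exact formulation:

```python
import math

def patch_angles(num_patches, orientation_deg):
    """Assign a viewing angle to each horizontal patch, assuming the
    body is a cylinder seen from orientation_deg (hypothetical model:
    patches sweep linearly across the visible half of the cylinder)."""
    return [orientation_deg + (i / (num_patches - 1) - 0.5) * 180.0
            for i in range(num_patches)]

def angle_weight(a_deg, b_deg):
    """Down-weight patch pairs seen from very different angles."""
    d = abs((a_deg - b_deg + 180.0) % 360.0 - 180.0)  # wrapped difference
    return max(0.0, math.cos(math.radians(d)))        # 1 aligned, 0 at 90 deg

def patch_similarity(feat_a, feat_b, ang_a, ang_b):
    """Appearance similarity scaled by the angular weight."""
    dist = sum((x - y) ** 2 for x, y in zip(feat_a, feat_b)) ** 0.5
    return angle_weight(ang_a, ang_b) / (1.0 + dist)
```

Patches observed from similar viewpoints dominate the match score, which is what makes the comparison tolerant to viewpoint change between cameras.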

The Effects of Group Psychotherapy on Recovery of Self-identification with the Unemployed Homeless (실직 노숙자 자아정체감 회복을 위한 집단정신치료)

  • Lee, Jung-Sook;Kim, Yun-Hee
    • Korean Journal of Human Ecology
    • /
    • v.10 no.3
    • /
    • pp.237-251
    • /
    • 2001
  • The purpose of this study was to verify the effects of group psychotherapy on the recovery of self-identification among the unemployed homeless. To this end, 28 unemployed homeless persons attending welfare centers in the Seoul area were sampled and took part in 12 rounds of group psychotherapy over 6 weeks. To determine the effects, preliminary and post-program tests were conducted. Every round of the program activities was video-taped while being observed. The results of this study were as follows. First, the group psychotherapy had a positive effect; in particular, the unemployed homeless had an opportunity for self-comprehension, self-insight, catharsis, etc. Second, during group psychotherapy, individual characteristics of the unemployed homeless were identified. Third, during group psychotherapy, the unemployed homeless complained about family problems, health, alcoholism, etc.

A Search Model Using Time Interval Variation to Identify Face Recognition Results

  • Choi, Yun-seok;Lee, Wan Yeon
    • International journal of advanced smart convergence
    • /
    • v.11 no.3
    • /
    • pp.64-71
    • /
    • 2022
  • Various types of attendance management systems are being introduced in remote working environments, and research on using face recognition is in progress. To ensure an accurate record of workers' attendance, a face recognition-based attendance management system must analyze every frame of video; however, face recognition is a heavy task, so the number of recognition tasks should be minimized without affecting accuracy. In this paper, we propose a search model that uses time interval variation to minimize the number of face recognition tasks performed on recorded videos for an attendance management system. The proposed model widens the frame identification interval while there is no change in attendance status for a certain period. When a change in face recognition status occurs, it moves in the reverse direction and checks frames for more accurate attendance time checking. The implementation of the proposed model performed at least 4.5 times faster than identifying every frame and showed at least 97% accuracy.
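The search procedure, growing the checking interval while the status is stable and scanning backwards once it changes, might look like the following sketch. The doubling schedule and parameter names are assumptions, since the abstract does not specify how the interval varies:

```python
def find_status_changes(frames, recognize, base_step=1, max_step=32):
    """Scan recorded frames with a growing time interval; when the
    recognized status changes, step backwards to locate the exact
    frame where the change happened."""
    changes = []
    i, step = 0, base_step
    prev = recognize(frames[0])
    while i + step < len(frames):
        i += step
        cur = recognize(frames[i])
        if cur == prev:
            step = min(step * 2, max_step)   # stable: widen the interval
        else:
            j = i                            # change: reverse fine scan
            while j > 0 and recognize(frames[j - 1]) == cur:
                j -= 1
            changes.append((j, cur))         # first frame of the new status
            prev, step = cur, base_step
    return changes
```

Long stable stretches are crossed in a handful of recognitions, while the reverse scan restores frame-accurate change times, matching the reported speedup without a large accuracy loss.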

Study on Scalable Video Coding Signals Transmission Scheme using LED-ID System (LED-ID 시스템을 이용한 SVC 신호의 전송 기법에 관한 연구)

  • Lee, Kyu-Jin;Cha, Dong-Ho;Hwang, Sun-Ha;Lee, Kye-San
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.36 no.10B
    • /
    • pp.1258-1267
    • /
    • 2011
  • In this paper, we investigate how to transmit video signals using an indoor LED-ID communication system. LED-ID communication is an effective approach because the LEDs serve as lighting and as a communication channel at the same time. The proposed system transmits signals over visible light (RGB); the RGB mixture determines the color of the light, and each channel shows different performance. If the video signal is transmitted over a fixed RGB allocation, as in the original system, the differing importance of the individual SVC signal layers limits the quality of the video. To solve this problem, we analyze the performance of each channel of a white LED according to the RGB mixture ratio and, based on this analysis, study how to allocate the SVC signal layers for transmission so as to improve video quality.
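The core allocation idea, giving the most important SVC layer to the most reliable RGB channel, can be sketched as below. The bit-error-rate figures and the layer names are hypothetical; the paper derives channel quality from the RGB mixture ratio of the white LED:

```python
def allocate_svc_layers(layers, channel_ber):
    """Assign SVC layers (most important first, e.g. the base layer at
    index 0) to the RGB channels with the lowest measured bit error
    rate, so the base layer travels over the best channel."""
    ranked = sorted(channel_ber, key=channel_ber.get)  # best channel first
    return {ch: layer for ch, layer in zip(ranked, layers)}
```

With hypothetical measurements {'R': 1e-4, 'G': 1e-5, 'B': 1e-3}, the base layer would be carried on the green channel and the least important enhancement layer on the blue one.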

Multimodal Biometrics Recognition from Facial Video with Missing Modalities Using Deep Learning

  • Maity, Sayan;Abdel-Mottaleb, Mohamed;Asfour, Shihab S.
    • Journal of Information Processing Systems
    • /
    • v.16 no.1
    • /
    • pp.6-29
    • /
    • 2020
  • Biometric identification using multiple modalities has attracted the attention of many researchers, as it produces more robust and trustworthy results than single-modality biometrics. In this paper, we present a novel multimodal recognition system that trains a deep learning network to automatically learn features after extracting multiple biometric modalities from a single data source, i.e., facial video clips. Utilizing the different modalities present in the facial video clips, i.e., left ear, left profile face, frontal face, right profile face, and right ear, we train supervised denoising auto-encoders to automatically extract robust and non-redundant features. The automatically learned features are then used to train modality-specific sparse classifiers to perform the multimodal recognition. Moreover, the proposed technique has proven robust when some of the above modalities were missing during testing. The proposed system has three main components: detection, which consists of modality-specific detectors that automatically detect images of the different modalities present in facial video clips; feature selection, which uses a supervised denoising sparse auto-encoder network to capture discriminative representations that are robust to illumination and pose variations; and classification, which consists of a set of modality-specific sparse representation classifiers for unimodal recognition, followed by score-level fusion of the recognition results of the available modalities. Experiments conducted on the constrained facial video dataset (WVU) and the unconstrained facial video dataset (HONDA/UCSD) resulted in Rank-1 recognition rates of 99.17% and 97.14%, respectively. The multimodal recognition accuracy demonstrates the superiority and robustness of the proposed approach irrespective of the illumination, non-planar movement, and pose variations present in the video clips, even in the situation of missing modalities.
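The final score-level fusion over whichever modalities are present can be sketched as follows. Uniform weights and the dictionary layout are assumptions; the paper's sparse classifiers would supply the per-subject scores:

```python
def fuse_and_identify(modality_scores, weights=None):
    """Score-level fusion: sum weighted per-subject scores over the
    available modalities, skipping any modality that is missing, and
    return the best-scoring identity."""
    weights = weights or {}
    total = {}
    for modality, scores in modality_scores.items():
        if scores is None:                  # modality absent at test time
            continue
        w = weights.get(modality, 1.0)
        for subject, s in scores.items():
            total[subject] = total.get(subject, 0.0) + w * s
    return max(total, key=total.get) if total else None
```

Because absent modalities are simply skipped rather than imputed, the decision degrades gracefully when, say, one ear is never visible in the clip, mirroring the missing-modality robustness the abstract claims.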

Automatic identification of ARPA radar tracking vessels by CCTV camera system (CCTV 카메라 시스템에 의한 ARPA 레이더 추적선박의 자동식별)

  • Lee, Dae-Jae
    • Journal of the Korean Society of Fisheries and Ocean Technology
    • /
    • v.45 no.3
    • /
    • pp.177-187
    • /
    • 2009
  • This paper describes an automatic video surveillance system (AVSS) with long range and 360° coverage that is automatically rotated in an elevation-over-azimuth mode in response to the TTM (tracked target message) signal of vessels tracked by ARPA (automatic radar plotting aids) radar. This AVSS is a video security and tracking system, supported by ARPA radar, a CCTV (closed-circuit television) camera system, and other sensors, that automatically identifies, tracks, and detects potentially dangerous situations such as collision accidents at sea and berthing/deberthing accidents in harbor. It can be used to monitor illegal fishing vessels in inshore and offshore fishing grounds and to further improve the security and safety of domestic fishing vessels in the EEZ (exclusive economic zone) area. The movement of the target vessel chosen by the ARPA radar operator can be automatically tracked by a CCTV camera system interfaced to the ECDIS (electronic chart display and information system), with special functions such as graphic presentation of the CCTV image, camera position, camera azimuth, and angle of view on the ENC; automatic and manual control of the pan and tilt angles of the CCTV system; and the capability to continuously replay and record all information on a selected target. The test results showed that the AVSS developed experimentally in this study can be used as an extra navigation aid for the operator on the bridge in confusing traffic situations, to improve the detection efficiency of small targets in sea clutter, to greatly enhance an operator's ability to visually identify vessels tracked by ARPA radar, and to provide a recorded history for reference or evidentiary purposes in the EEZ area.
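Pointing the camera from a TTM report reduces to computing pan and tilt from the tracked target's bearing and range. The sketch below assumes a bearing relative to own ship's heading and a fixed camera mounting height, both simplifications of the real radar/ECDIS interface:

```python
import math

def camera_pan_tilt(ttm_bearing_deg, ttm_range_m, heading_deg,
                    camera_height_m=20.0):
    """Convert an ARPA TTM target (bearing/range) into pan and tilt
    commands for an elevation-over-azimuth CCTV pedestal."""
    pan = (heading_deg + ttm_bearing_deg) % 360.0          # true azimuth
    tilt = -math.degrees(math.atan2(camera_height_m, ttm_range_m))
    return pan, tilt
```

A target reported 45° on the starboard bow at 20 m, with own heading 350°, yields a pan of 35° true and a downward tilt of 45° for a camera mounted 20 m above the waterline.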

Fundamental Research for Video-Integrated Collision Prediction and Fall Detection System to Support Navigation Safety of Vessels

  • Kim, Bae-Sung;Woo, Yun-Tae;Yu, Yung-Ho;Hwang, Hun-Gyu
    • Journal of Ocean Engineering and Technology
    • /
    • v.35 no.1
    • /
    • pp.91-97
    • /
    • 2021
  • Marine accidents caused by ships have brought about economic and social losses as well as human casualties. Most of these accidents involve small and medium-sized ships, owing to their poor condition and insufficient equipment compared with larger vessels, and measures to improve these conditions are urgently needed. This paper discusses a video-integrated collision prediction and fall detection system to support the safe navigation of small and medium-sized ships. The system predicts collisions between ships and detects falls by crew members using CCTV, displays the analyzed integrated information using automatic identification system (AIS) messages, and provides alerts for the risks identified. The design consists of an object recognition algorithm, an interface module, an integrated display module, a collision prediction and fall detection module, and an alarm management module. As basic research, we implemented a deep learning algorithm to recognize the ship and crew from images, and an interface module to manage messages from the AIS. To verify the implemented algorithm, we conducted tests using 120 images. Object recognition performance is calculated as mAP by comparing the predefined objects with the objects recognized by the algorithm. As a result, the object recognition performance for the ship and the crew was approximately 50.44 mAP and 46.76 mAP, respectively. The interface module showed that messages from the installed AIS were accurately converted according to the international standard. Therefore, we implemented the object recognition algorithm and interface module of the designed collision prediction and fall detection system and validated their usability through testing.
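The mAP figure quoted above compares predefined ground-truth objects with recognized ones. A simplified per-class average precision under IoU matching could look like this; the 0.5 IoU threshold and the non-interpolated average are common conventions assumed here, not taken from the paper:

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter) if inter else 0.0

def average_precision(preds, gts, iou_thr=0.5):
    """AP for one class: preds are (score, box) pairs, gts are
    ground-truth boxes; each ground truth may be matched once."""
    matched, tp, n, ap = set(), 0, 0, 0.0
    for score, box in sorted(preds, key=lambda p: -p[0]):
        n += 1
        best = max(((iou(box, g), i) for i, g in enumerate(gts)
                    if i not in matched), default=(0.0, -1))
        if best[0] >= iou_thr:
            matched.add(best[1])
            tp += 1
            ap += tp / n          # precision at this recall point
    return ap / len(gts) if gts else 0.0
```

Averaging this AP over the classes (ship and crew in the paper's case) gives the reported mAP.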

A Study on Identification of the Source of Videos Recorded by Smartphones (스마트폰으로 촬영된 동영상의 출처 식별에 대한 연구)

  • Kim, Hyeon-seung;Choi, Jong-hyun;Lee, Sang-jin
    • Journal of the Korea Institute of Information Security & Cryptology
    • /
    • v.26 no.4
    • /
    • pp.885-894
    • /
    • 2016
  • As smartphones become more common, anybody can take pictures and record videos easily nowadays. Video files taken with smartphones can be used as important clues and evidence. When analyzing video files taken with smartphones, there are occasions where you need to prove that a video file was recorded by a specific smartphone. To do this, you can utilize the various fingerprinting techniques mentioned in existing research. However, you might face situations where you have to strengthen the result of fingerprinting or where fingerprinting techniques cannot be used. Therefore, a forensic investigation of the smartphone must be done before fingerprinting, and a database of the metadata of video files should be established. The artifacts left in a smartphone after video recording and the database mentioned above are discussed in this paper.
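One concrete metadata trait such a database could record is the top-level box (atom) layout of the MP4 container, since box ordering and vendor-specific boxes differ between smartphone models. The minimal parser below is an illustrative sketch, not a full ISO base media file format reader (64-bit box sizes, for instance, are omitted):

```python
import struct

def mp4_boxes(data):
    """List the top-level box types of an MP4/MOV byte stream; the
    resulting sequence can serve as one metadata feature for source
    identification."""
    boxes, pos = [], 0
    while pos + 8 <= len(data):
        size, btype = struct.unpack('>I4s', data[pos:pos + 8])
        if size < 8:                 # 64-bit/extended sizes not handled here
            break
        boxes.append(btype.decode('ascii', 'replace'))
        pos += size
    return boxes

# a minimal hand-crafted example: an 'ftyp' box followed by an empty 'mdat'
sample = struct.pack('>I4s8s', 16, b'ftyp', b'mp42mp42') + \
         struct.pack('>I4s', 8, b'mdat')
```

Running `mp4_boxes` on real recordings from different handsets and storing the resulting sequences is one way to populate the metadata database the abstract proposes.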