• Title/Summary/Keyword: frame detection

Search results: 920

Concrete structural health monitoring using piezoceramic-based wireless sensor networks

  • Li, Peng;Gu, Haichang;Song, Gangbing;Zheng, Rong;Mo, Y.L.
    • Smart Structures and Systems
    • /
    • v.6 no.5_6
    • /
    • pp.731-748
    • /
    • 2010
  • Impact detection and health monitoring are very important tasks for civil infrastructures such as bridges. Piezoceramic-based transducers are widely researched for these tasks because the piezoceramic material's inherent dual sensing and actuation ability enables the active sensing method for structural health monitoring with a network of piezoceramic transducers. Wireless sensor networks, which are easy to deploy, have great potential in health monitoring systems for large civil infrastructures to identify early-age damage. However, most commercial wireless sensor networks are general purpose and may not be optimized for a network of piezoceramic-based transducers. Wireless networks of piezoceramic transducers for active sensing have special requirements, such as a relatively high sampling rate (a few thousand Hz), incorporation of an amplifier to drive the piezoceramic element for actuation, and low energy consumption for actuation. In this paper, a wireless network is specially designed for piezoceramic transducers to implement impact detection and active sensing for structural health monitoring. A power-efficient embedded system capable of a high sampling rate is designed to form the wireless sensor network, with a 32-bit RISC wireless microcontroller as the main processor. The detailed design of the hardware and software of the wireless sensor network is presented. To verify its functionality, the network is deployed on a two-story concrete frame with embedded piezoceramic transducers, and the active sensing property of the piezoceramic material is used to detect damage in the structure. Experimental results show that the wireless sensor network can effectively implement active sensing and impact detection at a high sampling rate while maintaining low power consumption by performing offline data processing and minimizing wireless communication.
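
The active sensing workflow above ultimately comes down to comparing recorded sensor responses against a healthy baseline during offline data processing. The sketch below is not the authors' embedded implementation; it only illustrates, under assumed parameters (16 FFT bands, a synthetic 10 kHz signal), how a simple band-energy damage index could be computed from baseline and current piezoceramic sensor signals.

```python
import numpy as np

def energy_damage_index(baseline: np.ndarray, current: np.ndarray, n_bands: int = 16) -> float:
    """Band-energy damage index between a baseline (healthy) and a current
    sensor signal; larger values indicate a larger change along the wave path."""
    def band_energies(x):
        spectrum = np.abs(np.fft.rfft(x)) ** 2
        return np.array([b.sum() for b in np.array_split(spectrum, n_bands)])

    e_h = band_energies(baseline)
    e_d = band_energies(current)
    # Root-sum-square deviation of band energies, normalized by the baseline energy.
    return float(np.sqrt(np.sum((e_h - e_d) ** 2) / np.sum(e_h ** 2)))

# Toy usage: the "damaged" response is an attenuated copy of the baseline.
rng = np.random.default_rng(0)
t = np.arange(2048) / 10000.0                       # assumed 10 kHz sampling rate
baseline = np.sin(2 * np.pi * 1000 * t) + 0.01 * rng.standard_normal(t.size)
current = 0.6 * baseline + 0.01 * rng.standard_normal(t.size)
print(f"damage index: {energy_damage_index(baseline, current):.3f}")
```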

Damage detection of shear buildings using frequency-change-ratio and model updating algorithm

  • Liang, Yabin;Feng, Qian;Li, Heng;Jiang, Jian
    • Smart Structures and Systems
    • /
    • v.23 no.2
    • /
    • pp.107-122
    • /
    • 2019
  • As one of the most important parameters in structural health monitoring, structural frequency has many advantages: it is convenient to measure, has high precision, and is insensitive to noise. In addition, the frequency-change-ratio based method has been validated to be able to identify damage occurrence and location. However, building a sufficiently precise finite element model (FEM) of the test structure is still a huge challenge for this frequency-change-ratio based damage detection technique. To overcome this disadvantage and extend the application of frequencies in the structural health monitoring area, a novel method was developed in this paper by combining the cross-model cross-mode (CMCM) model updating algorithm with the frequency-change-ratio based method. First, the physical parameters of the test structure, including the element mass and stiffness, were assumed to take certain values, and an initial to-be-updated model with these assumed parameters was constructed according to the typical mass and stiffness distribution characteristics of shear buildings. After that, this model was updated with the CMCM algorithm using the frequencies measured on the actual structure before any damage was introduced. The updated model was then regarded as a representation of the FEM of the actual structure, because their modal information was almost the same. Finally, based on this updated model, the frequency-change-ratio based method was applied to realize damage detection and localization. To verify the effectiveness of the developed method, a four-level shear building was numerically simulated and two actual shear structures, a three-level shear model and an eight-story frame, were experimentally tested in the laboratory. All the test results demonstrate that the developed method can identify structural damage occurrence and location effectively, even when only very limited modal frequencies of the test structure are provided.
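
The frequency-change-ratio idea can be made concrete with a small numerical sketch: build a lumped-mass shear-building model, compute its natural frequencies, and localize a stiffness loss by matching normalized frequency-change-ratio patterns. This omits the CMCM model updating step entirely, and the masses, stiffnesses, and 10% damage level are assumed values, not figures from the paper.

```python
import numpy as np

def shear_building_frequencies(story_stiffness, floor_mass):
    """Natural frequencies (Hz) of a lumped-mass shear building, where
    story_stiffness[i] is the stiffness of the story below floor i."""
    n = len(floor_mass)
    K = np.zeros((n, n))
    for i in range(n):
        K[i, i] += story_stiffness[i]
        if i + 1 < n:                               # contribution of the story above floor i
            K[i, i] += story_stiffness[i + 1]
            K[i, i + 1] -= story_stiffness[i + 1]
            K[i + 1, i] -= story_stiffness[i + 1]
    M = np.diag(floor_mass)
    omega_sq = np.sort(np.linalg.eigvals(np.linalg.solve(M, K)).real)
    return np.sqrt(omega_sq) / (2 * np.pi)

def normalized_fcr(f_healthy, f_damaged):
    """Normalized frequency-change-ratio pattern; to first order the pattern
    depends on the damage location rather than its severity."""
    fcr = (f_healthy - f_damaged) / f_healthy
    return fcr / np.linalg.norm(fcr)

# Four-story example with assumed properties; 10% stiffness loss in story 3.
mass = np.full(4, 1.0e4)                            # kg (assumed)
k = np.full(4, 2.0e7)                               # N/m (assumed)
f0 = shear_building_frequencies(k, mass)

k_damaged = k.copy()
k_damaged[2] *= 0.9                                 # the "measured" damaged state
measured = normalized_fcr(f0, shear_building_frequencies(k_damaged, mass))

# Localization: match the measured pattern against each single-story damage pattern.
scores = []
for story in range(4):
    k_trial = k.copy()
    k_trial[story] *= 0.9
    pattern = normalized_fcr(f0, shear_building_frequencies(k_trial, mass))
    scores.append(float(measured @ pattern))
print("most likely damaged story:", int(np.argmax(scores)) + 1)
```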

An Efficient Video Sequence Matching Algorithm (효율적인 비디오 시퀀스 정합 알고리즘)

  • 김상현;박래홍
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.41 no.5
    • /
    • pp.45-52
    • /
    • 2004
  • With the development of digital media technologies, various algorithms have been proposed to match video sequences efficiently. A large number of video sequence matching methods have focused on frame-wise queries, whereas relatively few algorithms have been presented for video sequence matching or video shot matching. In this paper, we propose an efficient algorithm to index video sequences and to retrieve them for video sequence queries. To improve the accuracy and performance of video sequence matching, we employ the Cauchy function as a similarity measure between histograms of consecutive frames, which yields high performance compared with conventional measures. The key frames extracted from segmented video shots can be used not only for video shot clustering but also for video sequence matching or browsing, where a key frame is defined as a frame that is significantly different from the previous frames. Several key frame extraction algorithms have been proposed, in which methods similar to those used for shot boundary detection were employed with proper similarity measures. In this paper, we propose an efficient algorithm to extract key frames using the cumulative Cauchy function measure and compare its performance with that of conventional algorithms. Video sequence matching can then be performed by evaluating the similarity between sets of key frames. To improve the matching efficiency with the set of extracted key frames, we employ the Cauchy function and the modified Hausdorff distance. Experimental results with several color video sequences show that the proposed method yields high matching performance and accuracy with a low computational load compared with conventional algorithms.
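
The sketch below illustrates the three ingredients named in the abstract: a Cauchy-type similarity between frame histograms, key-frame extraction by dissimilarity to the last key frame, and the (Dubuisson-Jain) modified Hausdorff distance between key-frame sets. The exact form of the Cauchy measure and the scale and threshold values (a = 0.05, threshold = 0.9) are assumptions for illustration, not the paper's definitions.

```python
import numpy as np

def cauchy_similarity(h1, h2, a=0.05):
    """Cauchy-kernel similarity between two normalized histograms; the scale
    parameter `a` is an assumed value."""
    d = np.abs(np.asarray(h1, float) - np.asarray(h2, float))
    return float(np.mean(1.0 / (1.0 + (d / a) ** 2)))

def extract_key_frames(histograms, threshold=0.9):
    """Keep a frame as a key frame when it is sufficiently dissimilar
    (similarity below `threshold`) to the last accepted key frame."""
    keys = [0]
    for i in range(1, len(histograms)):
        if cauchy_similarity(histograms[keys[-1]], histograms[i]) < threshold:
            keys.append(i)
    return keys

def modified_hausdorff(A, B):
    """Dubuisson-Jain modified Hausdorff distance between two sets of
    key-frame feature vectors (one vector per row)."""
    D = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=2)
    return max(D.min(axis=1).mean(), D.min(axis=0).mean())

# Usage, given per-frame color histograms for two sequences hists_a, hists_b:
#   keys_a = extract_key_frames(hists_a); keys_b = extract_key_frames(hists_b)
#   dist = modified_hausdorff(np.array([hists_a[i] for i in keys_a]),
#                             np.array([hists_b[j] for j in keys_b]))
```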

Spatiotemporal Removal of Text in Image Sequences (비디오 영상에서 시공간적 문자영역 제거방법)

  • Lee, Chang-Woo;Kang, Hyun;Jung, Kee-Chul;Kim, Hang-Joon
    • Journal of the Institute of Electronics Engineers of Korea CI
    • /
    • v.41 no.2
    • /
    • pp.113-130
    • /
    • 2004
  • Most multimedia data contain text to emphasize the meaning of the data, to present additional explanations about the situation, or to translate between languages. However, the text makes it difficult to reuse the images and distorts not only the original images but also their meanings. Accordingly, this paper proposes a support vector machine (SVM) and spatiotemporal restoration-based approach for automatic text detection and removal in video sequences. Given two consecutive frames, text regions in the current frame are first detected by an SVM-based texture classifier. Second, two stages are performed to restore the regions occluded by the detected text: temporal restoration across consecutive frames and spatial restoration within the current frame. Utilizing text motion and background difference, an input video sequence is classified, and a different temporal restoration scheme is applied according to its class. Such a combination of temporal and spatial restoration shows great potential for automatic detection and removal of objects of interest in various kinds of video sequences, and is applicable to many applications such as translation of captions and replacement of indirect advertisements in videos.
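
As a rough illustration of the detect-then-restore pipeline, the sketch below trains an SVM texture classifier on toy, gradient-based window features, slides it over a frame to build a text mask, and fills masked pixels from the previous frame. The feature set, window size, and toy training data are assumptions; the paper's actual texture features and restoration rules are richer.

```python
import numpy as np
from sklearn.svm import SVC

def window_features(gray, y, x, size=16):
    """Simple texture features for one window: mean, standard deviation, and
    mean absolute horizontal/vertical gradients (text regions are gradient-rich)."""
    patch = gray[y:y + size, x:x + size].astype(float)
    gx = np.abs(np.diff(patch, axis=1)).mean()
    gy = np.abs(np.diff(patch, axis=0)).mean()
    return np.array([patch.mean(), patch.std(), gx, gy])

def detect_text_mask(gray, clf, size=16, stride=8):
    """Slide a window over the frame and mark the windows the SVM labels as text."""
    h, w = gray.shape
    mask = np.zeros((h, w), dtype=bool)
    for y in range(0, h - size + 1, stride):
        for x in range(0, w - size + 1, stride):
            if clf.predict([window_features(gray, y, x, size)])[0] == 1:
                mask[y:y + size, x:x + size] = True
    return mask

def temporal_restore(curr, prev, mask):
    """Temporal restoration: fill text pixels of the current frame with
    co-located pixels from the previous frame (valid for static backgrounds)."""
    out = curr.copy()
    out[mask] = prev[mask]
    return out

# Toy training set: gradient-rich "text" patches vs. flat "background" patches.
rng = np.random.default_rng(1)
text_patches = [rng.integers(0, 256, (16, 16)) for _ in range(50)]
bg_patches = [np.full((16, 16), float(rng.integers(0, 256))) for _ in range(50)]
X = np.array([window_features(p, 0, 0) for p in text_patches + bg_patches])
y = np.array([1] * 50 + [0] * 50)
clf = SVC(kernel="rbf").fit(X, y)
```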

Robust Scene Change Detection Algorithm for Flashlight (플래시라이트에 강건한 장면전환 검출 알고리즘)

  • Ko, Kyong-Cheol;Choi, Hyung-Il;Rhee, Yang-Weon
    • Journal of the Institute of Electronics Engineers of Korea CI
    • /
    • v.43 no.6 s.312
    • /
    • pp.83-91
    • /
    • 2006
  • Flashlights in video make scene change detection difficult because they produce high difference values between successive frames. This paper proposes a scene change detection technique that is robust to flashlights, based on a weighted chi-square test and an automated threshold-decision algorithm. The weighted chi-square test subdivides the difference values of the individual color channels by weighting the color intensities according to the NTSC standard, and it detects scene changes by combining the weighted color intensities with the chi-square test, which emphasizes the relative color difference values. The automated threshold-decision algorithm uses the frame-to-frame difference values obtained by the weighted chi-square test: first, the average of all difference values is calculated; then another average value is calculated from the difference values using the previous average; finally, the most appropriate mid-average value is found and taken as the threshold. Experimental results show that the proposed algorithms are effective and outperform previous approaches.
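
A minimal reading of the weighted chi-square idea might look like the sketch below: per-channel histogram chi-square differences weighted by the NTSC luminance coefficients, plus an iterative mid-average threshold on the resulting frame-to-frame values. The bin count, the assumed RGB channel order, and the exact thresholding loop are illustrative choices, not the paper's specification.

```python
import numpy as np

NTSC_WEIGHTS = (0.299, 0.587, 0.114)       # NTSC luminance weights for R, G, B

def channel_histogram(frame, channel, bins=64):
    h, _ = np.histogram(frame[..., channel], bins=bins, range=(0, 256))
    return h / h.sum()

def weighted_chi_square(f1, f2, bins=64):
    """NTSC-weighted chi-square difference between the per-channel histograms
    of two frames (frames assumed to be H x W x 3 arrays in RGB order)."""
    total = 0.0
    for channel, w in enumerate(NTSC_WEIGHTS):
        h1 = channel_histogram(f1, channel, bins)
        h2 = channel_histogram(f2, channel, bins)
        total += w * np.sum((h1 - h2) ** 2 / (h1 + h2 + 1e-12))
    return total

def auto_threshold(diffs, iterations=3):
    """Iteratively re-average the difference values above the running mean to
    settle on a mid-average threshold (a simplified reading of the paper's idea)."""
    t = float(np.mean(diffs))
    for _ in range(iterations):
        above = [d for d in diffs if d > t]
        if not above:
            break
        t = 0.5 * (t + float(np.mean(above)))
    return t

# Usage: diffs = [weighted_chi_square(frames[i], frames[i + 1]) for i in range(len(frames) - 1)]
#        cuts  = [i + 1 for i, d in enumerate(diffs) if d > auto_threshold(diffs)]
```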

VILODE : A Real-Time Visual Loop Closure Detector Using Key Frames and Bag of Words (VILODE : 키 프레임 영상과 시각 단어들을 이용한 실시간 시각 루프 결합 탐지기)

  • Kim, Hyesuk;Kim, Incheol
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.4 no.5
    • /
    • pp.225-230
    • /
    • 2015
  • In this paper, we propose an effective real-time visual loop closure detector, VILODE, which makes use of key frames and a bag of visual words (BoW) based on SURF feature points. To determine whether the camera has re-visited one of the previously visited places, a loop closure detector has to compare each incoming image with all previous images collected at every visited place. As the camera passes through new places, the number of images to be compared keeps growing, so it is difficult for a visual loop closure detector to meet both the real-time constraint and high detection accuracy. To address this problem, the proposed system adopts an effective key frame selection strategy that selects and compares only distinct, meaningful images from those continuously arriving during navigation, which greatly reduces the number of image comparisons needed for loop detection. Moreover, to improve detection accuracy and efficiency, the system represents each key frame image as a bag of visual words and maintains indexes for them using the DBoW database system. Experiments with TUM benchmark datasets demonstrate the high performance of the proposed visual loop closure detector.
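
The sketch below mimics the key-frame plus bag-of-visual-words pipeline, using a plain k-means vocabulary and cosine similarity in place of SURF descriptors and the DBoW index. The vocabulary size and the novelty/loop thresholds (0.7 and 0.85) are assumed values for illustration.

```python
import numpy as np
from sklearn.cluster import KMeans

def build_vocabulary(stacked_descriptors, n_words=500, seed=0):
    """Cluster local feature descriptors (stacked row-wise) into a visual vocabulary."""
    return KMeans(n_clusters=n_words, random_state=seed, n_init=10).fit(stacked_descriptors)

def bow_vector(descriptors, vocab):
    """L2-normalized bag-of-words histogram for one image's descriptors."""
    words = vocab.predict(descriptors)
    hist = np.bincount(words, minlength=vocab.n_clusters).astype(float)
    return hist / (np.linalg.norm(hist) + 1e-12)

def is_new_key_frame(bow, key_frame_bows, novelty=0.7):
    """Accept an image as a key frame only if it is sufficiently different
    from every existing key frame (cosine similarity below `novelty`)."""
    return all(float(bow @ kf) < novelty for kf in key_frame_bows)

def detect_loop(bow, key_frame_bows, loop_threshold=0.85):
    """Return the index of the most similar key frame if its similarity
    exceeds `loop_threshold` (a loop-closure candidate), else None."""
    if not key_frame_bows:
        return None
    sims = np.array([float(bow @ kf) for kf in key_frame_bows])
    best = int(np.argmax(sims))
    return best if sims[best] > loop_threshold else None

# Typical flow (descriptor extraction, e.g. with SURF, not shown):
#   vocab = build_vocabulary(np.vstack(training_descriptor_sets))
#   key_frame_bows = []
#   for descriptors in per_image_descriptors:
#       v = bow_vector(descriptors, vocab)
#       loop = detect_loop(v, key_frame_bows)
#       if is_new_key_frame(v, key_frame_bows):
#           key_frame_bows.append(v)
```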

A COMPARISON OF PERIAPICAL RADIOGRAPHS AND THEIR DIGITAL IMAGES FOR THE DETECTION OF SIMULATED INTERPROXIMAL CARIOUS LESIONS (모의 인접면 치아우식병소의 진단을 위한 구내 표준방사선사진과 그 디지털 영상의 비교)

  • Kim Hyun;Chung Hyun-Dae
    • Journal of Korean Academy of Oral and Maxillofacial Radiology
    • /
    • v.24 no.2
    • /
    • pp.279-290
    • /
    • 1994
  • The purpose of this study was to compare the diagnostic accuracy of periapical radiographs and their digitized images for the detection of simulated interproximal carious lesions. A total of 240 interproximal surfaces were used in this study. The case sample was composed of 80 anterior teeth, 80 bicuspids, and 80 molars, prepared so that the surfaces ranged from caries-free to those containing simulated carious lesions of varying depths (0.5 mm, 0.8 mm, and 1.2 mm). The periapical radiographs were taken with the paralleling technique, and the film used was Kodak Ektaspeed (E group). All radiographs were evaluated by five dentists to recognize the true status of the simulated carious lesions; they were asked to give a score of 0, 1, 2, or 3. Digitized images were obtained using a commercial video processor (FOTOVIX II-XS), and the computer system was a 486 DX PC with PC Vision and a frame grabber. The 17-inch display monitor had a resolution of 1280×1024 pixels (0.26 mm dot pitch), whereas one frame of the intraoral radiograph had a resolution of 700×480 pixels with 256 grey levels per pixel. All radiographs and digital images were viewed under uniform subdued lighting in the same reading room, and a second interpretation was performed under the same conditions after a week. The detection of lesions on the monitor was compared with the detection of the simulated interproximal carious lesions on the film images. The results were as follows. 1. When the scoring criterion was dichotomous (lesion present or not present): 1) the overall sensitivity, specificity, and diagnostic accuracy of periapical radiographs and their digital images showed no statistically significant difference; 2) the sensitivity and specificity according to the region of teeth and the grade of lesions showed no statistically significant difference between periapical radiographs and their digital images. 2. When estimating the grade of lesions (score 0, 1, 2, or 3): 1) the overall diagnostic accuracy was 53.3% on the intraoral films and 52.9% on the digital images, with no significant difference; 2) the diagnostic accuracy according to the region of teeth showed no statistically significant difference between periapical radiographs and their digital images. 3. Degree of agreement and reliability: 1) using the gamma value as a measure of agreement, periapical films and digital images were similar; 2) the reliability between the two interpretations of periapical films and of digital images showed no statistically significant difference. In all cases the P value was greater than 0.05, showing that both techniques can be used to detect incipient and moderate interproximal carious lesions with similar accuracy.
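
For readers unfamiliar with the reported measures, the short sketch below shows how sensitivity, specificity, and overall accuracy are obtained once observer scores (0-3) are dichotomized against the known lesion status. It is a generic illustration of the metrics, not the study's statistical procedure.

```python
import numpy as np

def diagnostic_metrics(true_status, observer_scores):
    """Dichotomize observer scores (0 = sound, 1-3 = lesion) against the known
    status and return sensitivity, specificity, and overall accuracy."""
    truth = np.asarray(true_status) > 0
    called = np.asarray(observer_scores) > 0
    tp = np.sum(truth & called)
    tn = np.sum(~truth & ~called)
    fp = np.sum(~truth & called)
    fn = np.sum(truth & ~called)
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    accuracy = (tp + tn) / truth.size
    return sensitivity, specificity, accuracy

# Usage: sens, spec, acc = diagnostic_metrics(lesion_depth_codes, scores_for_one_observer)
```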


Lane Detection in Complex Environment Using Grid-Based Morphology and Directional Edge-link Pairs (복잡한 환경에서 Grid기반 모폴리지와 방향성 에지 연결을 이용한 차선 검출 기법)

  • Lin, Qing;Han, Young-Joon;Hahn, Hern-Soo
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.20 no.6
    • /
    • pp.786-792
    • /
    • 2010
  • This paper presents a real-time lane detection method that can accurately find lane-mark boundaries in complex road environments. Unlike many existing methods that pay much attention to the post-processing stage, fitting the lane-mark position among a great number of outliers, the proposed method aims at removing those outliers as much as possible at the feature extraction stage, so that the search space at the post-processing stage can be greatly reduced. To achieve this goal, a grid-based morphology operation is first used to dynamically generate the regions of interest (ROI), within which a directional edge-linking algorithm with directional edge-gap closing links edge pixels into edge-links that lie in valid directions. These directional edge-links are then grouped into pairs by checking the valid lane-mark width at a certain height of the image. Finally, lane-mark colors are checked inside the edge-link pairs in the YUV color space, and lane-mark types are estimated using a Bayesian probability model. Experimental results show that the proposed method is effective in identifying lane-mark edges among heavy clutter edges in complex road environments, and the whole algorithm achieves an accuracy rate of around 92% at an average speed of 10 ms per frame on 320×240 images.
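
One possible reading of the grid-based ROI step is sketched below: a morphological top-hat keeps thin, bright structures (lane-mark candidates), and a grid cell joins the ROI only when its candidate-pixel density falls within a plausible band, discarding both empty road cells and heavily cluttered cells. The cell size, top-hat size, and density band are assumed values, and the directional edge-linking stage that follows in the paper is not shown.

```python
import numpy as np
from scipy import ndimage

def grid_roi(gray, cell=32, density_range=(0.02, 0.30)):
    """Grid-based ROI generation for lane detection on a grayscale frame."""
    # A white top-hat highlights structures brighter and thinner than the kernel,
    # which is roughly what painted lane marks look like on the road surface.
    tophat = ndimage.white_tophat(gray.astype(float), size=(5, 5))
    candidates = tophat > tophat.mean() + 2.0 * tophat.std()

    h, w = gray.shape
    roi = np.zeros_like(candidates)
    for y in range(0, h, cell):
        for x in range(0, w, cell):
            density = candidates[y:y + cell, x:x + cell].mean()
            if density_range[0] <= density <= density_range[1]:
                roi[y:y + cell, x:x + cell] = True
    return roi

# Usage: roi_mask = grid_roi(gray_frame); edge pixels are then linked only inside roi_mask.
```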

MPEG Video Segmentation using Two-stage Neural Networks and Hierarchical Frame Search (2단계 신경망과 계층적 프레임 탐색 방법을 이용한 MPEG 비디오 분할)

  • Kim, Joo-Min;Choi, Yeong-Woo;Chung, Ku-Sik
    • Journal of KIISE:Software and Applications
    • /
    • v.29 no.1_2
    • /
    • pp.114-125
    • /
    • 2002
  • In this paper, we propose a hierarchical segmentation method that first segments video data into shots by detecting cuts and dissolves, and then determines the type of camera operation or object movement in each shot. As in our previous work [1], each picture group is classified into one of three categories, Shot (scene change), Move (camera operation or object movement), and Static (almost no change between images), by analysing the DC (Direct Current) components of I (Intra) frames. For this classification we designed a two-stage hierarchical neural network whose inputs combine multiple features. The system then detects the exact shot position and the type of camera operation or object movement by searching the P (Predicted) and B (Bi-directional) frames of the current picture group selectively and hierarchically. The statistical distribution of macro block types in the P and B frames is used for accurate detection of the cut position, and another neural network with macro block types and motion vectors as inputs is used to detect dissolves, camera operations, and object movements. The proposed method can reduce the processing time by using only the DC coefficients of I frames without full decoding and by searching P and B frames selectively and hierarchically. It classified picture groups with an accuracy of 93.9-100.0% and cuts with an accuracy of 96.1-100.0% on three different types of video data, and it classified the types of camera movement or object movement with accuracies of 90.13% and 89.28% on two different types of video data.
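
The sketch below keeps only the coarse first stage of this idea: it compares DC-image histograms of consecutive I frames and labels the picture group Shot, Move, or Static, with fixed thresholds standing in for the paper's two-stage neural network. The thresholds and bin count are assumptions, and the selective P/B-frame search is not shown.

```python
import numpy as np

def dc_histogram_diff(dc_prev, dc_curr, bins=32):
    """L1 difference between normalized histograms of two I-frame DC images."""
    h1, _ = np.histogram(dc_prev, bins=bins, range=(0, 256))
    h2, _ = np.histogram(dc_curr, bins=bins, range=(0, 256))
    return float(np.abs(h1 / h1.sum() - h2 / h2.sum()).sum())

def classify_gop(dc_prev, dc_curr, shot_t=0.6, move_t=0.15):
    """Coarse three-way classification of a picture group from I-frame DC images only."""
    d = dc_histogram_diff(dc_prev, dc_curr)
    if d > shot_t:
        return "Shot"      # likely scene change; refine by searching P/B frames
    if d > move_t:
        return "Move"      # camera operation or object movement
    return "Static"

# Usage over a sequence of I-frame DC images:
#   labels = [classify_gop(dc[i], dc[i + 1]) for i in range(len(dc) - 1)]
```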

Depth Images-based Human Detection, Tracking and Activity Recognition Using Spatiotemporal Features and Modified HMM

  • Kamal, Shaharyar;Jalal, Ahmad;Kim, Daijin
    • Journal of Electrical Engineering and Technology
    • /
    • v.11 no.6
    • /
    • pp.1857-1862
    • /
    • 2016
  • Human activity recognition using depth information is an emerging and challenging technology in computer vision, owing to the considerable attention it has received from many practical applications such as smart home/office systems, personal health care, and 3D video games. This paper presents a novel framework for 3D human body detection, tracking, and recognition from depth video sequences using spatiotemporal features and a modified HMM. To detect the human silhouette, raw depth data are examined considering spatial continuity and constraints derived from human motion information, while frame differencing is used to track human movements. The feature extraction mechanism consists of spatial depth shape features and temporal joint features, which are used to improve classification performance. Both feature types are fused to recognize different activities using the modified hidden Markov model (M-HMM). The proposed approach is evaluated on two challenging depth video datasets. Moreover, our system can handle rotation of the subject's body parts as well as missing body parts, which is a major contribution to human activity recognition.
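
As a minimal illustration of the silhouette and tracking steps, the sketch below thresholds a plausible depth band to obtain a silhouette, applies frame differencing to depth maps to find moving pixels, and tracks their centroid. The depth band and motion threshold are assumed values, and the spatiotemporal features and the M-HMM classifier are not shown.

```python
import numpy as np

def depth_silhouette(depth, near=0.5, far=4.0):
    """Crude silhouette extraction: keep pixels inside an assumed depth band (metres)."""
    return (depth > near) & (depth < far)

def frame_difference_motion(depth_prev, depth_curr, delta=0.05):
    """Frame differencing on depth maps: pixels whose depth changed by more
    than `delta` metres between frames are treated as moving."""
    return np.abs(depth_curr - depth_prev) > delta

def track_centroid(motion_mask):
    """Track the subject as the centroid (x, y) of the moving pixels, or None."""
    ys, xs = np.nonzero(motion_mask)
    if xs.size == 0:
        return None
    return float(xs.mean()), float(ys.mean())

# Usage with consecutive depth frames d_prev, d_curr (float metres):
#   mask = depth_silhouette(d_curr) & frame_difference_motion(d_prev, d_curr)
#   centre = track_centroid(mask)
```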