• Title/Summary/Keyword: frame detection

Search Results: 920 (processing time: 0.024 seconds)

Development of Precise Point Positioning Solution for Detection of Earthquake and Crustal Movement (지진 및 지각변동 감지를 위한 정밀절대측위 솔루션 개발)

  • Park, Joon-Kyu;Kim, Min-Gyu
    • Journal of the Korea Academia-Industrial cooperation Society
    • /
    • v.14 no.9
    • /
    • pp.4587-4592
    • /
    • 2013
  • Beyond navigation, geodetic surveying, and mapping, GPS is recognized as an essential method for obtaining high-quality results in earth science through high-precision positioning, including establishment of the international reference frame, determination of the Earth's rotation parameters, detection of crustal deformation, and observation of diastrophism. This study therefore aimed to build an expert service that enables non-experts to perform high-precision GPS data processing. As a result, a Precise Point Positioning solution was developed that maximizes user convenience by requiring only the minimum input needed for GPS data processing, and its results, computed from GPS data provided by the National Geographic Information Institute, were compared with ITRF results.

Lane Detection based Open-Source Hardware according to Change Lane Conditions (오픈소스 하드웨어 기반 차선검출 기술에 대한 연구)

  • Kim, Jae Sang;Moon, Hae Min;Pan, Sung Bum
    • Smart Media Journal
    • /
    • v.6 no.3
    • /
    • pp.15-20
    • /
    • 2017
  • Recently, the automotive industry has been integrating IT technology into driver assistance systems that help drivers operate their cars more easily. This study proposes a lane detection method that is robust to changes in road conditions and applicable to lane departure warning systems and autonomous vehicles. The proposed method detects candidate areas using a Gaussian filter, the Otsu threshold, and edge detection, and then detects lanes using lane gradient and width information obtained through the Hough transform. It uses previously detected lane information to detect dashed as well as solid lines, and it calculates where the lanes will be located in the next frame to draw virtual lanes. The proposed algorithm was shown to detect lanes in both dashed- and solid-line situations and to achieve real-time processing when run on Raspberry Pi 2, an open-source hardware platform.
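The Otsu thresholding step mentioned above can be sketched in NumPy. This is a minimal illustration of the technique, not the authors' implementation; the Gaussian filtering, edge detection, and Hough transform stages are omitted:

```python
import numpy as np

def otsu_threshold(gray):
    """Return the intensity threshold that maximizes between-class
    variance over an 8-bit grayscale image (Otsu's method)."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    probs = hist / hist.sum()
    omega = np.cumsum(probs)                # class-0 probability up to t
    mu = np.cumsum(probs * np.arange(256))  # class-0 cumulative mean
    mu_t = mu[-1]                           # global mean
    denom = omega * (1.0 - omega)
    denom[denom == 0] = np.nan              # ignore degenerate splits
    sigma_b = (mu_t * omega - mu) ** 2 / denom
    return int(np.nanargmax(sigma_b))
```

On a bimodal road image, the returned threshold separates bright lane markings from the darker road surface, which is why it pairs well with edge detection for candidate-area extraction.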

Analysis of Understanding Using Deep Learning Facial Expression Recognition for Real Time Online Lectures (딥러닝 표정 인식을 활용한 실시간 온라인 강의 이해도 분석)

  • Lee, Jaayeon;Jeong, Sohyun;Shin, You Won;Lee, Eunhye;Ha, Yubin;Choi, Jang-Hwan
    • Journal of Korea Multimedia Society
    • /
    • v.23 no.12
    • /
    • pp.1464-1475
    • /
    • 2020
  • Due to the spread of COVID-19, online lectures have become more prevalent. However, many students and professors report a lack of communication. This study is therefore designed to improve interactive communication between professors and students in real-time online lectures. To do so, we explore deep learning approaches for automatic recognition of students' facial expressions and classification of their understanding into three classes (Understand / Neutral / Not Understand). We use the 'BlazeFace' model for face detection and a 'ResNet-GRU' model for facial expression recognition (FER), and we name the entire process the 'Degree of Understanding (DoU)' algorithm. The DoU algorithm can analyze many students collectively and present the results as visualized statistics. To our knowledge, this is the first study to offer statistics on lecture understanding using FER. The algorithm achieved a rapid speed of 0.098 sec/frame with a high accuracy of 94.3% in a CPU environment, demonstrating its potential for real-time online lectures. The DoU algorithm can be extended to fields where facial expressions play an important role in communication, such as interaction with hearing-impaired people.

A Method of Detection of Deepfake Using Bidirectional Convolutional LSTM (Bidirectional Convolutional LSTM을 이용한 Deepfake 탐지 방법)

  • Lee, Dae-hyeon;Moon, Jong-sub
    • Journal of the Korea Institute of Information Security & Cryptology
    • /
    • v.30 no.6
    • /
    • pp.1053-1065
    • /
    • 2020
  • With recent advances in hardware performance and artificial intelligence, sophisticated fake videos that are difficult to distinguish with the human eye are increasing. Face synthesis using artificial intelligence is called Deepfake, and anyone with basic programming skills and deep learning knowledge can produce sophisticated fake videos with it. The number of indiscriminate fake videos has increased significantly, which may lead to problems such as privacy violations, fake news, and fraud. It is therefore necessary to detect fake video clips that cannot be discriminated by the human eye. In this paper, we propose a Deepfake detection model that applies a Bidirectional Convolutional LSTM and an Attention Module. Unlike an LSTM, which considers only the forward sequence, the proposed model also processes the sequence in reverse order. The Attention Module is used with a convolutional neural network to extract the characteristics of each frame. Experiments show that the proposed model achieves 93.5% accuracy and an AUC up to 50% higher than the results of pre-existing studies.
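The "bidirectional" idea above, processing the frame sequence both forward and in reverse and combining the per-frame states, can be sketched with a plain tanh RNN standing in for the ConvLSTM cell. This is an illustrative sketch with random weights, not the paper's architecture; a real ConvLSTM additionally uses convolutional gates over spatial feature maps:

```python
import numpy as np

def simple_rnn(frames, W, U, h0):
    """Plain tanh RNN over a sequence of per-frame feature vectors.
    Stands in here for the paper's ConvLSTM cell."""
    h, outs = h0, []
    for x in frames:
        h = np.tanh(W @ x + U @ h)
        outs.append(h)
    return np.stack(outs)

def bidirectional(frames, W_f, U_f, W_b, U_b, h0):
    """Run the recurrence forward and over the reversed sequence,
    then concatenate the per-frame hidden states."""
    fwd = simple_rnn(frames, W_f, U_f, h0)
    bwd = simple_rnn(frames[::-1], W_b, U_b, h0)[::-1]
    return np.concatenate([fwd, bwd], axis=-1)
```

The concatenated state at frame t thus summarizes both the frames before t and the frames after it, which is what lets the detector exploit temporal inconsistencies in either direction.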

Lane Model Extraction Based on Combination of Color and Edge Information from Car Black-box Images (차량용 블랙박스 영상으로부터 색상과 에지정보의 조합에 기반한 차선모델 추출)

  • Liang, Han;Seo, Suyoung
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography
    • /
    • v.39 no.1
    • /
    • pp.1-11
    • /
    • 2021
  • This paper presents a procedure to extract lane line models using a set of proposed methods. First, an image warping method based on homography transforms the target image into one in which lane pixels can be found efficiently within a certain region. Second, the combination of edge detection and an HSL (hue, saturation, and lightness) transform is used to detect lane candidate pixels reliably. Third, erroneous candidate lane pixels are eliminated using a selection-area method. Fourth, lane pixels are fitted to quadratic polynomials. To test the validity of the proposed procedure, a set of black-box images captured under varying illumination and noise conditions was used. The experimental results show that the proposed procedure overcomes the problems of color-only and edge-only methods, extracting lane pixels and modeling the lane line geometry effectively in less than 0.6 seconds per frame on a low-cost computing environment.
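The fourth step, fitting lane pixels to a quadratic polynomial, can be sketched with `np.polyfit`. The coordinates and the x-as-a-function-of-y convention are assumptions for illustration (fitting x over y keeps near-vertical lanes well-conditioned), not the authors' exact formulation:

```python
import numpy as np

def fit_lane(ys, xs):
    """Fit x = a*y^2 + b*y + c to candidate lane pixel coordinates.
    Returns the coefficients [a, b, c]."""
    return np.polyfit(ys, xs, deg=2)

def lane_x(coeffs, y):
    """Evaluate the fitted lane model at image row y."""
    return np.polyval(coeffs, y)
```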

Deep Learning-based Gaze Direction Vector Estimation Network Integrated with Eye Landmark Localization (딥 러닝 기반의 눈 랜드마크 위치 검출이 통합된 시선 방향 벡터 추정 네트워크)

  • Joo, Heeyoung;Ko, Min-Soo;Song, Hyok
    • Journal of Broadcast Engineering
    • /
    • v.26 no.6
    • /
    • pp.748-757
    • /
    • 2021
  • In this paper, we propose a gaze estimation network in which eye landmark detection and gaze direction vector estimation are integrated into one deep learning network. The proposed network uses the Stacked Hourglass Network as a backbone and is composed of three parts: a landmark detector, a feature map extractor, and a gaze direction estimator. The landmark detector estimates the coordinates of 50 eye landmarks, the feature map extractor generates a feature map of the eye image for estimating the gaze direction, and the gaze direction estimator combines the two outputs to estimate the final gaze direction vector. The network was trained on virtual synthetic eye images and landmark coordinates generated with the UnityEyes dataset, and the MPIIGaze dataset of real human eye images was used for performance evaluation. In the experiments, the gaze estimation error was 3.9, and the network ran at 42 FPS (frames per second).
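Gaze estimation error on datasets like MPIIGaze is conventionally reported as the mean angle between predicted and ground-truth 3D gaze vectors. A minimal sketch of that metric (an assumption about the evaluation protocol, not taken from the paper):

```python
import numpy as np

def mean_angular_error_deg(pred, true):
    """Mean angular error in degrees between rows of two arrays of
    3D gaze direction vectors (each row one sample)."""
    pred = pred / np.linalg.norm(pred, axis=1, keepdims=True)
    true = true / np.linalg.norm(true, axis=1, keepdims=True)
    cos = np.clip(np.sum(pred * true, axis=1), -1.0, 1.0)
    return float(np.degrees(np.arccos(cos)).mean())
```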

Structural Shape Estimation Based on 3D LiDAR Scanning Method for On-site Safety Diagnostic of Plastic Greenhouse (비닐 온실의 현장 안전진단을 위한 3차원 LiDAR 스캔 기법 기반 구조 형상 추정)

  • Seo, Byung-hun;Lee, Sangik;Lee, Jonghyuk;Kim, Dongsu;Kim, Dongwoo;Jo, Yerim;Kim, Yuyong;Lee, Jeongmin;Choi, Won
    • Journal of The Korean Society of Agricultural Engineers
    • /
    • v.66 no.5
    • /
    • pp.1-13
    • /
    • 2024
  • In this study, we applied an on-site diagnostic method for estimating the structural safety of a plastic greenhouse. A three-dimensional light detection and ranging (3D LiDAR) sensor was used to scan the greenhouse and extract point cloud data (PCD). Differential thresholds of a color index were applied to partitions of the raw PCD to separate steel frames from plastic films. The K-means algorithm was then used to convert the steel-frame PCD into the nodes of unit members, which were subsequently transformed into structural shape data. To verify greenhouse shape reproducibility, the member lengths of the scan and blueprint models were compared with measurements along the X-, Y-, and Z-axes. The error of the scan model was 2%-3%, whereas that of the blueprint model was 5.4%. At a maximum snow depth of 0.5 m, the scan model revealed asymmetric horizontal deflection and extreme bending stress, indicating that even minor shape irregularities can result in critical failures in extreme weather. The safety factor for bending stress in the scan model was 18.7% lower than in the blueprint model, showing that precise shape estimation is crucial for safety diagnosis. Future studies should focus on developing an automated process based on supervised learning to enable the widespread adoption of greenhouse safety diagnostics.
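The K-means step above, collapsing the steel-frame point cloud into node coordinates for the unit members, can be sketched with Lloyd's algorithm in NumPy. This is a generic sketch, not the authors' pipeline; the deterministic initialization is a simplification (k-means++ is the usual choice in practice):

```python
import numpy as np

def kmeans_nodes(points, k, iters=50):
    """Cluster an (n, 3) steel-frame point cloud into k groups;
    the centroids serve as the node coordinates of unit members."""
    # simple deterministic init: k evenly spaced points
    idx = np.linspace(0, len(points) - 1, k).astype(int)
    centers = points[idx].astype(float)
    for _ in range(iters):
        # assign each point to its nearest centroid
        d = np.linalg.norm(points[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # move each centroid to the mean of its assigned points
        for j in range(k):
            if np.any(labels == j):
                centers[j] = points[labels == j].mean(axis=0)
    return centers
```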

Rear Vehicle Detection Method in Harsh Environment Using Improved Image Information (개선된 영상 정보를 이용한 가혹한 환경에서의 후방 차량 감지 방법)

  • Jeong, Jin-Seong;Kim, Hyun-Tae;Jang, Young-Min;Cho, Sang-Bok
    • Journal of the Institute of Electronics and Information Engineers
    • /
    • v.54 no.1
    • /
    • pp.96-110
    • /
    • 2017
  • Most vehicle detection studies using conventional or wide-angle lenses suffer from blind spots in rear detection, and their images are vulnerable to noise and varied external environments. In this paper, we propose a detection method for harsh external environments with noise, blind spots, and similar obstacles. First, a fish-eye lens is used to minimize blind spots compared with a wide-angle lens. Because nonlinear radial distortion increases with the lens angle, calibration was performed after initializing and optimizing the distortion constant to ensure accuracy. In addition, together with calibration, the original image was processed to remove fog and correct brightness, enabling detection even when visibility is obstructed by light and dark adaptation in foggy conditions or by sudden changes in illumination. Fog removal is generally computationally expensive, so the widely used Dark Channel Prior algorithm was adopted to reduce the calculation time. Gamma correction was used to adjust brightness, with a brightness and contrast evaluation conducted on the image to determine the gamma value needed for correction. To further reduce computation, the evaluation used only a part of the image rather than the whole; once the brightness and contrast values were calculated, they were used to decide the gamma value and correct the entire image. Brightness correction and fog removal were processed in parallel, and the results were merged into a single image to minimize the total calculation time. Finally, the HOG feature extraction method was used to detect vehicles in the corrected image. As a result, vehicle detection with the proposed image correction took 0.064 seconds per frame and showed a 7.5% improvement in detection rate compared with the existing vehicle detection method.
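The gamma correction step, estimating a gamma value from a small sample region and then correcting the whole image, can be sketched as follows. The mid-gray-targeting heuristic is an assumption for illustration; the paper derives its gamma from its own brightness and contrast evaluation:

```python
import numpy as np

def estimate_gamma(sample):
    """Pick a gamma from a sample region's mean brightness so the
    corrected mean lands near mid-gray (0.5 on a 0-1 scale).
    Evaluating a small region instead of the whole image saves time."""
    mean = sample.astype(float).mean() / 255.0
    mean = np.clip(mean, 1e-3, 1 - 1e-3)
    return np.log(0.5) / np.log(mean)

def gamma_correct(image, gamma):
    """Apply pointwise gamma correction to an 8-bit image."""
    norm = image.astype(float) / 255.0
    return (255.0 * norm ** gamma).astype(np.uint8)
```

A dark frame yields gamma < 1 (brightening), a bright one gamma > 1 (darkening), which is how the method adapts to sudden illumination changes.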

An OTHBVS Cell Line Expresses the Human HBV Middle S Protein

  • Park, Sung-Gyoo;Guhung Jung
    • Journal of Microbiology
    • /
    • v.37 no.2
    • /
    • pp.86-89
    • /
    • 1999
  • An OTHBVS cell line derived from HepG2 was established. This cell line stably expresses the human hepatitis B virus (HBV) middle S protein, which includes the preS2 region important for HBV particle entry into the hepatocyte. To establish this cell line, the middle S open reading frame (ORF), with its promoter located in the 5' region and enhancer in the 3' region, was cloned downstream of the metallothionein (MT) promoter of the OT1529 vector. In this vector, expression of the middle S protein is regulated by its own promoter and enhancer, while expression of the large S protein, which contains the preS1 region in addition to the middle S protein, is regulated by the MT promoter. When extracts of OTHBVS cells were examined with an S protein detection kit (RPHA, Korea Green Cross Co.), S protein was detected. Northern blot analysis of total OTHBVS mRNA with an S ORF probe revealed small/middle S transcripts (2.1 kb). When the MT promoter was induced with Zn, large S transcripts (2.4 kb) were detected. The GP36 and GP33 middle S proteins were presumably detected, but large S proteins were not detected by immunostaining with an anti-preS2 antibody.


On the Application and Optimization of M-ary Transmission Techniques to Optical CDMA LANs (Optical CDMA 근거리망을 위한 M-진 전송기술에 대한 연구)

  • 윤용철;최진우;김영록
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.29 no.8C
    • /
    • pp.1086-1103
    • /
    • 2004
  • Most research efforts on OCDMA technology assume single-bit-per-symbol transmission techniques such as on-off keying. However, achieving high spectral efficiency with such techniques is challenging due to the "unipolar" nature of optical signals. In this paper, an M-ary transmission technique using more than two equally weighted codes is proposed for OCDMA local area networks, and its BER performance and spectral efficiency are analyzed. Poisson frame arrivals and randomly generated codes are assumed for the BER analysis, and the probability of incorrect symbol detection is derived analytically. From the approximation, it is found that there exists an optimal code weight that minimizes the BER, and its physical interpretation is given in an intuitive, simple statement. Under the assumption of this optimized code weight and a sufficiently large code dimension, it is also shown that the spectral efficiency of OCDMA networks can be significantly improved by increasing the number (M) of symbols used. Since the cost of OCDMA transceivers is expected to increase with the code dimension, we finally provide a guideline for determining the optimal number of symbols for a given code dimension and traffic load.