• Title/Summary/Keyword: Texture Detection

Search Results: 238

3D Depth Measurement System-based Nonlinear Trail Recognition for Mobile Robots (3 차원 거리 측정 장치 기반 이동로봇용 비선형 도로 인식)

  • Kim, Jong-Man;Kim, Won-Sop;Shin, Dong-Yong
    • Proceedings of the Korean Institute of Electrical and Electronic Material Engineers Conference / 2007.06a / pp.517-518 / 2007
  • A method to recognize unpaved road regions using a 3D depth measurement system is proposed for mobile robots. For autonomous maneuvering of mobile robots, recognizing obstacles and the road region is an essential task. In this paper, a 3D depth measurement system composed of a rotating mirror, a line laser, and a mono-camera is employed to measure depth: the laser light is reflected by the mirror and projected onto the scene objects whose locations are to be determined. The obtained depth information is converted into an image. Depth images of the road region appear even and planar, while those of off-road regions are irregular or textured, so the problem reduces to texture identification. The road region is detected by applying a simple spatial differentiation technique to locate the plainly textured area. Identification results for diverse nonlinear trail situations are included in this paper.

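The abstract above treats road detection as finding the "plain" (low-texture) part of a depth image via spatial differentiation. A minimal sketch of that idea, with the caveat that the paper's exact differentiation operator, window size, and threshold are not given, so all three here are illustrative assumptions:

```python
import numpy as np

def road_mask(depth_image, window=5, threshold=1.0):
    """Mark low-texture (road-like) pixels of a depth image.

    Uses local mean gradient energy as a simple spatial-differentiation
    texture measure: smooth, planar regions score low, textured off-road
    regions score high.  Returns a mask for the interior (valid) region,
    of shape (H - window + 1, W - window + 1).
    """
    gy, gx = np.gradient(depth_image.astype(float))
    energy = gx ** 2 + gy ** 2
    # Local average of gradient energy over a sliding window.
    windows = np.lib.stride_tricks.sliding_window_view(energy, (window, window))
    local = windows.mean(axis=(2, 3))
    return local < threshold
```

On a depth image with a flat (road) half and a noisy (off-road) half, the mask is true only over the flat half; in practice the threshold would have to be tuned to the sensor's depth noise.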

A Vehicular License Plate Recognition Framework For Skewed Images

  • Arafat, M.Y.;Khairuddin, A.S.M.;Paramesran, R.
    • KSII Transactions on Internet and Information Systems (TIIS) / v.12 no.11 / pp.5522-5540 / 2018
  • Vehicular license plate (LP) recognition has recently risen as a significant field of research, as researchers work to cope with the challenges posed by LPs under varying illumination and viewing angles. This research focuses on restricted conditions: an image of only one vehicle, a stationary background, and no prior angular adjustment of the skewed images. A real-time vehicular LP recognition scheme is proposed for skewed images, covering detection, segmentation, and recognition of the LP. A polar coordinate transformation is implemented to adjust the skewed vehicular images. Besides that, a window scanning procedure is used for candidate localization based on the texture characteristics of the image. Then, connected component analysis (CCA) is applied to the binary image for character segmentation, where pixels are connected in an eight-point neighbourhood process. Finally, optical character recognition is applied to recognize the characters. To measure performance, 300 skewed images under different illumination conditions and various tilt angles were tested. The results show that the proposed method achieves accuracies of 96.3% in localizing, 95.4% in segmenting, and 94.2% in recognizing the LPs, with an average localization time of 0.52 s.
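The connected component analysis with an eight-point neighbourhood described above amounts to a flood fill over the binary plate image. A minimal stand-alone sketch (the paper's subsequent filters on component size and aspect ratio are not reproduced):

```python
import numpy as np
from collections import deque

def label_components(binary):
    """8-connected component labeling of a boolean image.

    Returns an integer label array (0 = background) and the
    number of components found.
    """
    labels = np.zeros(binary.shape, dtype=int)
    current = 0
    for start in zip(*np.nonzero(binary)):
        if labels[start]:
            continue  # already claimed by an earlier component
        current += 1
        labels[start] = current
        queue = deque([start])
        while queue:
            y, x = queue.popleft()
            # Visit all 8 neighbours (including diagonals).
            for dy in (-1, 0, 1):
                for dx in (-1, 0, 1):
                    ny, nx = y + dy, x + dx
                    if (0 <= ny < binary.shape[0]
                            and 0 <= nx < binary.shape[1]
                            and binary[ny, nx] and not labels[ny, nx]):
                        labels[ny, nx] = current
                        queue.append((ny, nx))
    return labels, current
```

For character segmentation, each resulting label would correspond to one candidate character blob to pass on to the recognition stage.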

Color Noise Detection and Image Restoration for Low Illumination Environment (저조도 환경 기반 색상 잡음 검출 및 영상 복원)

  • Oh, Gyoheak;Lee, Jaelin;Jeon, Byeungwoo
    • Journal of Broadcast Engineering / v.26 no.1 / pp.88-98 / 2021
  • Recently, crime prevention and culprit identification by CCTV, even in low-illumination environments, has become ever more important. In low-lighting situations, CCTV applications capture images under infrared lighting, since it is unobtrusive to the human eye. Although infrared lighting has the advantage of capturing images with abundant fine texture information, it cannot capture the color information that is essential for identifying certain objects or persons in CCTV images. In this paper, we propose a method to acquire color information through a DCGAN from an image captured by CCTV in a low-lighting environment with infrared lighting, and a method to remove color noise in the acquired color image.

A multisource image fusion method for multimodal pig-body feature detection

  • Zhong, Zhen;Wang, Minjuan;Gao, Wanlin
    • KSII Transactions on Internet and Information Systems (TIIS) / v.14 no.11 / pp.4395-4412 / 2020
  • Multisource image fusion has become an active topic in the last few years owing to its higher segmentation rate. To enhance the accuracy of multimodal pig-body feature segmentation, a multisource image fusion method was employed. However, conventional multisource image fusion methods cannot extract superior contrast and abundant detail in the fused image. To better segment the shape feature and detect the temperature feature, a new multisource image fusion method was presented, entitled NSST-GF-IPCNN. Firstly, the multisource images were decomposed into a range of multiscale and multidirectional subbands by the Nonsubsampled Shearlet Transform (NSST). Then, to better describe fine-scale texture and edge information, an even-symmetric Gabor filter and an Improved Pulse Coupled Neural Network (IPCNN) were used to fuse the low- and high-frequency subbands, respectively. Next, the fused coefficients were reconstructed into a fusion image using the inverse NSST. Then, the shape feature was extracted using an automatic threshold algorithm and refined using morphological operations. Finally, the highest temperature of the pig body was obtained from the segmentation results. Experiments revealed that the presented fusion algorithm achieved a 2.102-4.066% higher average accuracy rate than the traditional algorithms, with enhanced efficiency.
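The even-symmetric Gabor filter mentioned above is a cosine-phase (rather than sine-phase) Gabor kernel, which is symmetric about its center. A sketch of the kernel construction, with illustrative default parameters rather than the paper's:

```python
import numpy as np

def even_gabor_kernel(size=15, sigma=3.0, theta=0.0, wavelength=6.0):
    """Even-symmetric (cosine-phase) Gabor kernel.

    A Gaussian envelope modulated by a cosine along orientation
    theta.  The cosine phase makes the kernel symmetric under point
    reflection through its center, which suits texture/edge-energy
    measures such as the low-frequency fusion described above.
    """
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    # Rotate coordinates so the modulation runs along theta.
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(xr ** 2 + yr ** 2) / (2 * sigma ** 2))
    return envelope * np.cos(2 * np.pi * xr / wavelength)
```

Filtering a subband with a small bank of these kernels at several orientations is one common way to get the texture responses that then drive the fusion rule.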

MRU-Net: A remote sensing image segmentation network for enhanced edge contour Detection

  • Jing Han;Weiyu Wang;Yuqi Lin;Xueqiang LYU
    • KSII Transactions on Internet and Information Systems (TIIS) / v.17 no.12 / pp.3364-3382 / 2023
  • Remote sensing image segmentation plays an important role in realizing intelligent city construction. Current mainstream segmentation networks effectively improve the segmentation of remote sensing images by deeply mining their rich texture and semantic features, but problems remain, such as coarse segmentation of small target regions and poor edge contour segmentation. To overcome these challenges, we propose an improved semantic segmentation model, referred to as MRU-Net, which adopts the U-Net architecture as its backbone. Firstly, the convolutional layers in the U-Net are replaced by BasicBlock structures to extract features, and the activation function is replaced to reduce the model's computational load. Secondly, a hybrid multi-scale recognition module is added to the encoder to improve segmentation accuracy for small targets and edge regions. Finally, tests on the Massachusetts Buildings Dataset and the WHU Dataset show that, compared with the original network, the ACC, mIoU, and F1 values are all improved, and the proposed network shows good robustness and portability across datasets.
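The BasicBlock mentioned above is the ResNet-style residual unit: two 3x3 convolutions whose output is added to an identity shortcut before the final activation. A single-channel numpy sketch of the forward pass (MRU-Net's normalization layers and its cheaper replacement activation are not specified in the abstract, so plain ReLU stands in here):

```python
import numpy as np

def conv3x3(x, w):
    """'Same'-size 3x3 cross-correlation (as in CNN conv layers),
    single channel, zero padding."""
    padded = np.pad(x, 1)
    windows = np.lib.stride_tricks.sliding_window_view(padded, (3, 3))
    return np.einsum("ijkl,kl->ij", windows, w)

def basic_block(x, w1, w2):
    """ResNet BasicBlock forward: conv-relu-conv, then add the
    identity shortcut and apply the final activation."""
    relu = lambda a: np.maximum(a, 0.0)
    out = relu(conv3x3(x, w1))
    out = conv3x3(out, w2)
    return relu(out + x)  # identity shortcut
```

The shortcut is what lets the block default to (approximately) the identity map, which is the usual argument for why swapping plain conv layers for BasicBlocks eases optimization.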

Spatiotemporal Removal of Text in Image Sequences (비디오 영상에서 시공간적 문자영역 제거방법)

  • Lee, Chang-Woo;Kang, Hyun;Jung, Kee-Chul;Kim, Hang-Joon
    • Journal of the Institute of Electronics Engineers of Korea CI / v.41 no.2 / pp.113-130 / 2004
  • Most multimedia data contain text to emphasize the meaning of the data, to present additional explanations about the situation, or to translate between languages. However, such text makes it difficult to reuse the images and distorts not only the original images but also their meanings. Accordingly, this paper proposes a support vector machine (SVM) and spatiotemporal-restoration-based approach for automatic text detection and removal in video sequences. Given two consecutive frames, first, text regions in the current frame are detected by an SVM-based texture classifier. Second, two stages are performed to restore the regions occluded by the detected text: temporal restoration across consecutive frames and spatial restoration within the current frame. Utilizing text motion and background differences, an input video sequence is classified, and a different temporal restoration scheme is applied to each class. This combination of temporal and spatial restoration shows great potential for automatic detection and removal of objects of interest in various kinds of video sequences, and is applicable to many applications, such as translation of captions and replacement of indirect advertisements in videos.
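In its simplest case, the temporal restoration stage above reduces to masked copying: when the background is static and the text has moved or vanished in a neighboring frame, the occluded pixels can be taken directly from that frame. A sketch under exactly that assumption (the paper's frame classification and its spatial inpainting fallback are not reproduced):

```python
import numpy as np

def temporal_restore(frame, prev_frame, text_mask):
    """Replace text-region pixels with co-located pixels from a
    neighboring frame.

    Valid only where the background is static and the text does not
    occlude the same pixels in both frames; pixels occluded in both
    would still need spatial (within-frame) restoration.
    """
    restored = frame.copy()
    restored[text_mask] = prev_frame[text_mask]
    return restored
```

The remaining holes (text overlapping in both frames) are what the paper's spatial restoration stage then fills from surrounding pixels of the current frame.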

Robust Reference Point and Feature Extraction Method for Fingerprint Verification using Gradient Probabilistic Model (지문 인식을 위한 Gradient의 확률 모델을 이용하는 강인한 기준점 검출 및 특징 추출 방법)

  • 박준범;고한석
    • Journal of the Institute of Electronics Engineers of Korea CI / v.40 no.6 / pp.95-105 / 2003
  • A novel reference point detection method is proposed by exploiting the gradient probabilistic model, which captures the curvature information of a fingerprint. The reference point is detected by searching for and locating the points where the gradient is most evenly distributed in a probabilistic sense. The uniformly distributed gradient texture represents either the core point itself or similar points that can serve as a rigid reference from which to map the features for recognition. Key benefits are reduced preprocessing and consistency in locating the same points as reference points, even when processing arch-type fingerprints. Moreover, a new feature extraction method is proposed that improves on the existing filterbank-based feature extraction. Experimental results indicate the superiority of the proposed scheme in terms of feature extraction time and verification rate in various noisy environments. In particular, for arch-type fingerprints, the proposed gradient probabilistic model achieved FAR improvements of 49% under ambient noise, 39.2% under brightness noise, and 15.7% under salt-and-pepper noise. Moreover, the GPM reduces reference point detection time by 0.07 s compared to the leading Poincaré index method, and the new filterbank method reduces code extraction time by 0.06 s compared to the existing filterbank method.
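One simple way to quantify "most evenly distributed gradient" as described above is the normalized entropy of a block's gradient-orientation histogram: near a core point the ridge directions wrap around, so orientations spread across all bins. This is a hedged stand-in, not the paper's actual probabilistic model:

```python
import numpy as np

def orientation_uniformity(block, bins=8):
    """Score how evenly gradient orientations are distributed in a
    block: normalized entropy of the orientation histogram, in [0, 1],
    where 1 means perfectly even.  A reference point detector would
    take the block maximizing this score.
    """
    gy, gx = np.gradient(block.astype(float))
    angles = np.arctan2(gy, gx)  # orientation per pixel, in (-pi, pi]
    hist, _ = np.histogram(angles, bins=bins, range=(-np.pi, np.pi))
    p = hist / max(hist.sum(), 1)
    p = p[p > 0]
    return float(-(p * np.log(p)).sum() / np.log(bins))
```

A linear ramp (one gradient direction) scores near 0, while a radially symmetric bump (gradients in all directions) scores near 1.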

Stereo Vision-Based Obstacle Detection and Vehicle Verification Methods Using U-Disparity Map and Bird's-Eye View Mapping (U-시차맵과 조감도를 이용한 스테레오 비전 기반의 장애물체 검출 및 차량 검증 방법)

  • Lee, Chung-Hee;Lim, Young-Chul;Kwon, Soon;Lee, Jong-Hun
    • Journal of the Institute of Electronics Engineers of Korea SC / v.47 no.6 / pp.86-96 / 2010
  • In this paper, we propose stereo vision-based obstacle detection and vehicle verification methods using a U-disparity map and bird's-eye view mapping. First, we extract a road feature using the most frequent disparity values in each row and column, and use it to extract obstacle areas on the road. To extract obstacle areas precisely, we utilize the U-disparity map, thresholding it with a value derived from the disparity and the camera parameters. However, multiple obstacles may remain merged within the extracted areas, so we perform a further segmentation step: the extracted obstacle areas are converted into a bird's-eye view using camera modeling and parameters. Obstacle areas can be segmented robustly on the bird's-eye view because obstacles are laid out there according to their range. Finally, we verify whether the obstacles are vehicles using various vehicle features, namely road contact, consistent horizontal length, aspect ratio, and texture information. We conduct experiments to demonstrate the performance of the proposed algorithms in real traffic situations.
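The U-disparity map used above is a column-wise histogram of the disparity image: for each image column, it counts how many rows share each disparity value. Vertical obstacles (many rows at one depth in one column) appear as high-count cells, while the ground plane spreads its counts across disparities. A minimal sketch, assuming an integer disparity map:

```python
import numpy as np

def u_disparity(disparity, max_d):
    """Build a U-disparity map from an integer disparity image.

    Output has shape (max_d + 1, image_width): entry [d, col] counts
    the rows of column `col` whose disparity equals d.  Thresholding
    this map is the obstacle-extraction step described above.
    """
    h, w = disparity.shape
    umap = np.zeros((max_d + 1, w), dtype=int)
    for col in range(w):
        d = disparity[:, col]
        d = d[(d >= 0) & (d <= max_d)]  # drop invalid disparities
        umap[:, col] += np.bincount(d, minlength=max_d + 1)
    return umap
```

The corresponding V-disparity map (row-wise histograms) is built the same way with rows and columns swapped, and is the usual companion for extracting the road profile.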

A new approach for overlay text detection from complex video scene (새로운 비디오 자막 영역 검출 기법)

  • Kim, Won-Jun;Kim, Chang-Ick
    • Journal of Broadcast Engineering / v.13 no.4 / pp.544-553 / 2008
  • With the development of video editing technology, overlay text is increasingly inserted into video content to give viewers better visual understanding. Since the content of the scene or the editor's intention can be well represented by the inserted text, it is useful for video information retrieval and indexing. Most previous approaches are based on low-level features such as edge, color, and texture information, but existing methods have difficulty handling text with varying contrast or text inserted into a complex background. In this paper, we propose a novel framework to localize overlay text in a video scene. Based on our observation that transient colors exist between inserted text and its adjacent background, a transition map is generated. Candidate regions are then extracted using the transition map, and overlay text is finally determined based on the density of state in each candidate. The proposed method is robust to the color, size, position, style, and contrast of the overlay text, and is also language-independent. Text-region updating between frames is exploited to reduce processing time. Experiments on diverse videos confirm the efficiency of the proposed method.
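The transition map above marks pixels where the "transient colors" between overlay text and its background appear, i.e. sharp intensity changes between adjacent pixels. A sketch of that idea on a grayscale frame; the paper's exact change measure and threshold are not reproduced, so a plain absolute vertical difference stands in:

```python
import numpy as np

def transition_map(gray, threshold=20):
    """Mark pixels where intensity changes sharply between vertically
    adjacent pixels, approximating the transient colors at overlay
    text boundaries.  Candidate text regions are then grown from the
    marked pixels.
    """
    # Difference between each pixel and the pixel directly below it.
    diff = np.abs(np.diff(gray.astype(int), axis=0))
    tmap = np.zeros(gray.shape, dtype=bool)
    tmap[:-1][diff >= threshold] = True
    return tmap
```

On a frame with a bright overlay band, the map fires on the rows just above the band's top and bottom edges, which is where candidate-region extraction would start.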

Effect of Acidulant Treatment on the Quality and Storage Period of Topokkidduck (산미료 단독 처리가 떡볶이 떡의 저장기간 및 품질에 미치는 영향)

  • Ra, Ha-Na;Cho, Yong-Sik;Hwang, Young;Jang, Hyun-Wook;Kim, Kyung-Mi
    • Journal of the Korean Society of Food Culture / v.35 no.6 / pp.613-618 / 2020
  • This study evaluated the effects of acidulant treatment on the quality and storage period of Topokkidduck. Two samples of Topokkidduck were prepared, one soaked in 10% acidulant (10SAT) and the other without soaking (NSAT). During the storage period, the two samples were tested for the presence of microorganisms (aerobic bacteria, E. coli, and mold) and for physicochemical properties (color values and texture profile analysis (TPA)). The 10SAT could be stored for 49 days with no detection of E. coli and a mold level of only 1.0 log CFU/g, whereas NSAT could be stored for only 21 days. NSAT had an aerobic count of 2.27 log CFU/g as early as 7 days, and E. coli was detected at 21 days at a level of 4.15 log CFU/g; the presence of E. coli is not permitted by the Ministry of Food and Drug Safety (MFDS). The hardness of the 10SAT increased during storage, but to a much lesser extent than that of the NSAT. Thus, preparing Topokkidduck by soaking in the acidulant controlled microbial growth for up to 49 days, a much longer period than the control, and the acidulant-soaked Topokkidduck also had a softer texture than the control during storage.