• Title/Summary/Keyword: Image feature extraction

A Performance Improvement of Automatic Butterfly Identification Method Using Color Intensity Entropy (영상의 색체 강도 엔트로피를 이용한 나비 종 자동 인식 향상 방법)

  • Kang, Seung-Ho; Kim, Tae-Hee
    • The Journal of the Korea Contents Association / v.17 no.5 / pp.624-632 / 2017
  • Automatic butterfly identification from images is an interesting research field because it greatly helps researchers studying species diversity and evolutionary and developmental processes. The performance of a butterfly species identification system depends heavily on the quality of the selected features. In this paper, we propose color intensity (CI) entropy, computed from the distribution of color intensities in a butterfly image. We show that CI entropy can increase the recognition rate by 10% when used together with the previously suggested branch length similarity entropy. In addition, we compare its performance with other features such as Eigenface, the 2D Fourier transform, and the 2D wavelet transform across several well-known machine learning methods.
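
The abstract does not spell out how CI entropy is computed, so the sketch below is only a generic Shannon entropy over an image's color-intensity histogram, offered as a rough illustration of the kind of feature involved; the function name, intensity definition, and bin count are assumptions, not the authors' exact formulation.

```python
import numpy as np
from PIL import Image

def color_intensity_entropy(image_path, bins=256):
    """Shannon entropy of an image's color-intensity distribution.

    A generic sketch of the kind of feature the abstract describes; the
    paper's exact definition of CI entropy may differ."""
    img = np.asarray(Image.open(image_path).convert("RGB"), dtype=np.float64)
    intensity = img.mean(axis=2)                  # per-pixel color intensity
    hist, _ = np.histogram(intensity, bins=bins, range=(0, 255))
    p = hist / hist.sum()                         # empirical distribution
    p = p[p > 0]                                  # drop empty bins
    return float(-(p * np.log2(p)).sum())         # Shannon entropy in bits
```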

FRS-OCC: Face Recognition System for Surveillance Based on Occlusion Invariant Technique

  • Abbas, Qaisar
    • International Journal of Computer Science & Network Security / v.21 no.8 / pp.288-296 / 2021
  • Automated face recognition at runtime is gaining more and more importance in the fields of surveillance and urban security. This is a difficult task given the constantly changing image conditions, with varying features and attributes. For a system to be useful in industrial settings, its efficiency must not be compromised when running on roads, intersections, and busy streets; however, recognition under such uncontrolled circumstances is a major problem in real-life applications. This paper addresses the problem of face recognition when the full face is not visible (occlusion). This is a common occurrence, as a person can change his or her appearance by wearing a scarf or sunglasses, or merely by growing a mustache or beard. Such discrepancies in facial appearance are frequently encountered in uncontrolled circumstances and can undermine security systems based on face recognition. Although these variations are very common in real-life environments, they have received comparatively little attention in the literature and are only now becoming a major research focus. Existing state-of-the-art techniques suffer from several limitations, most significantly a low level of usability and poor response time in case of an incident. In this paper, an improved face recognition system, FRS-OCC, is developed to solve the occlusion problem. To build the FRS-OCC system, color and texture features are extracted, an incremental learning algorithm (Learn++) is applied to select the more informative features, and a trained stacked autoencoder (SAE) deep learning model is then used to recognize the face. Overall, the FRS-OCC system introduces algorithms that improve response time in order to guarantee a benchmark quality of service in any situation. To test and evaluate the performance of the proposed system, the AR face dataset is utilized. On average, FRS-OCC outperformed other state-of-the-art methods, achieving an SE of 98.82%, SP of 98.49%, AC of 98.76%, and AUC of 0.9995. The obtained results indicate that the FRS-OCC system can be used in any surveillance application.
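
Learn++ and the stacked autoencoder used in FRS-OCC are not off-the-shelf library components, so the sketch below swaps in a generic color-histogram-plus-LBP texture feature and a plain MLP classifier purely to illustrate the feature-extraction-then-classification shape of the pipeline; every name, parameter, and data shape here is an assumption.

```python
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.neural_network import MLPClassifier

def color_texture_features(rgb, bins=32):
    """Concatenate a coarse color histogram with a uniform-LBP texture histogram."""
    color_hist, _ = np.histogram(rgb, bins=bins, range=(0, 255), density=True)
    gray = rgb.mean(axis=2)
    lbp = local_binary_pattern(gray, P=8, R=1, method="uniform")  # 10 codes
    lbp_hist, _ = np.histogram(lbp, bins=10, range=(0, 10), density=True)
    return np.concatenate([color_hist, lbp_hist])

# Hypothetical usage with face crops X (N x H x W x 3) and identity labels y:
# feats = np.stack([color_texture_features(x) for x in X])
# clf = MLPClassifier(hidden_layer_sizes=(128, 64), max_iter=500).fit(feats, y)
```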

A Blocking Algorithm of a Target Object with Exposed Privacy Information (개인 정보가 노출된 목표 객체의 블로킹 알고리즘)

  • Jang, Seok-Woo
    • Journal of the Korea Academia-Industrial cooperation Society / v.20 no.4 / pp.43-49 / 2019
  • The wired and wireless Internet is a convenient window for acquiring various types of media data. At the same time, however, the public can easily obtain media data containing objects whose personal information is exposed, which has become a social problem. In this paper, we propose a method that robustly detects a target object with exposed personal information using a learning algorithm and effectively blocks the detected object area. In the proposed method, only the target object containing personal information is detected using a neural network-based learning algorithm. A grid-like mosaic is then created and overlaid on the detected target object area, effectively blocking the region that contains the personal information. Experimental results show that the proposed algorithm robustly detects areas where personal information is exposed and effectively blocks them through mosaic processing. The object blocking method presented in this paper is expected to be useful in many computer vision applications.
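
The detector itself is a neural network and beyond a short example, but the grid-mosaic blocking step described above can be shown directly; the sketch below pixelates a detected bounding box with OpenCV, and the function name, box format, and block size are illustrative assumptions.

```python
import cv2

def mosaic_region(image, box, block=16):
    """Block a detected region by pixelating it into a coarse grid mosaic.

    `box` is (x, y, w, h) for the detected target object; the detection step
    itself (a neural-network detector in the paper) is assumed to be done."""
    x, y, w, h = box
    roi = image[y:y + h, x:x + w]
    small = cv2.resize(roi, (max(1, w // block), max(1, h // block)),
                       interpolation=cv2.INTER_LINEAR)
    image[y:y + h, x:x + w] = cv2.resize(small, (w, h),
                                         interpolation=cv2.INTER_NEAREST)
    return image
```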

A Study of Unified Framework with Light Weight Artificial Intelligence Hardware for Broad range of Applications (다중 애플리케이션 처리를 위한 경량 인공지능 하드웨어 기반 통합 프레임워크 연구)

  • Jeon, Seok-Hun; Lee, Jae-Hack; Han, Ji-Su; Kim, Byung-Soo
    • The Journal of the Korea institute of electronic communication sciences / v.14 no.5 / pp.969-976 / 2019
  • Lightweight artificial intelligence hardware has made great strides in many application areas. In general, a lightweight artificial intelligence system consists of a lightweight AI engine and a preprocessor covering feature selection, generation, extraction, and normalization. To achieve optimal performance across a broad range of applications, such a system must choose good preprocessing functions and set their respective hyper-parameters. This paper proposes a unified framework for lightweight artificial intelligence systems and a method for finding the model with optimal performance on a given dataset. The proposed unified framework can easily generate a model that combines preprocessing functions with the lightweight AI engine. In a performance evaluation using a handwritten image dataset and a fall detection dataset measured with an inertial sensor, the proposed unified framework built optimal artificial intelligence models with over 90% test accuracy.
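
The framework pairs preprocessing functions with a lightweight engine and searches their hyper-parameters; as a purely software analogue of that model-selection step, the sketch below wires scikit-learn preprocessing stages and a small classifier into a pipeline and grid-searches their parameters. The chosen components and parameter grids are stand-ins, not the framework's actual modules.

```python
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import GridSearchCV

# Stand-ins: normalization and extraction as the preprocessor, k-NN as a
# lightweight "engine"; the real framework targets dedicated hardware.
pipe = Pipeline([
    ("normalize", StandardScaler()),
    ("extract", PCA()),
    ("engine", KNeighborsClassifier()),
])
search = GridSearchCV(pipe, {
    "extract__n_components": [16, 32, 64],
    "engine__n_neighbors": [1, 3, 5],
}, cv=5)
# search.fit(X_train, y_train); search.best_estimator_ is the selected model.
```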

2D-MELPP: A two dimensional matrix exponential based extension of locality preserving projections for dimensional reduction

  • Xiong, Zixun; Wan, Minghua; Xue, Rui; Yang, Guowei
    • KSII Transactions on Internet and Information Systems (TIIS) / v.16 no.9 / pp.2991-3007 / 2022
  • Two-dimensional locality preserving projections (2D-LPP) is an improved 2D image algorithm designed to solve the small sample size (SSS) problem that locality preserving projections (LPP) meets. It finds a low-dimensional manifold mapping that not only preserves local information but also detects the manifold embedded in the original data space, and it is simple and elegant. However, inspired by comparison experiments between two-dimensional linear discriminant analysis (2D-LDA) and linear discriminant analysis (LDA), which indicated that matrix-based methods do not always perform better even when training samples are limited, we surmise that 2D-LPP may meet the same limitation as 2D-LDA, and we propose a novel matrix exponential method to enhance the performance of 2D-LPP. 2D-MELPP is equivalent to employing a distance diffusion mapping that transforms the original images into a new space in which the margins between labels are broadened, which is beneficial for classification problems. Nonetheless, the computational time complexity of 2D-MELPP is extremely high. In this paper, we replace some of the matrix multiplications with multiple multiplications to save memory cost and provide an efficient way of solving 2D-MELPP. We test it on public databases (a random 3D data set, ORL, the AR face database, and the PolyU Palmprint database) and compare it with other 2D methods such as 2D-LDA and 2D-LPP and with 1D methods such as LPP and exponential locality preserving projections (ELPP), finding that it outperforms the others in recognition accuracy. We also compare different projection vector dimensions and record the running time on the ORL, AR face, and PolyU Palmprint databases. These experimental results show that the proposed algorithm performs better on three independent public databases.
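
The core trick is applying the matrix exponential to the LPP scatter matrices so that both sides of the generalized eigenproblem become full rank, sidestepping the SSS problem. The paper works directly on 2D image matrices; the sketch below only illustrates the exponential idea on the ordinary vectorized LPP problem, with the affinity matrix W assumed precomputed, so it is an illustration of the principle rather than the authors' 2D-MELPP.

```python
import numpy as np
from scipy.linalg import expm, eigh

def exponential_lpp(X, W, n_components=10):
    """Matrix-exponential LPP sketch (1D, vectorized case).

    X: d x n data matrix with samples as columns.
    W: n x n symmetric affinity (e.g., heat-kernel) matrix, assumed given."""
    D = np.diag(W.sum(axis=1))
    L = D - W                         # graph Laplacian
    A = X @ L @ X.T
    B = X @ D @ X.T
    # exp(.) of the symmetric scatter matrices is always full rank, which is
    # what lets the exponential variants avoid the small-sample-size problem.
    vals, vecs = eigh(expm(A), expm(B))
    return vecs[:, :n_components]     # directions with smallest eigenvalues
```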

Development of Fast Posture Classification System for Table Tennis Robot (탁구 로봇을 위한 빠른 자세 분류 시스템 개발)

  • Jin, Seongho; Kwon, Yongwoo; Kim, Yoonjeong; Park, Miyoung; An, Jaehoon; Kang, Hosun; Choi, Jiwook; Lee, Inho
    • The Journal of Korea Robotics Society / v.17 no.4 / pp.463-476 / 2022
  • In this paper, we propose a table tennis posture classification system using a cooperative robot, as a step toward a table tennis robot that can be used for training as in a real game. The most ideal table tennis robot would have a high joint driving speed and a high degree of freedom. Therefore, we use a cooperative robot with sufficient degrees of freedom to develop a robot that can train like a real opponent. However, cooperative robots have the disadvantage of slow joint driving speed, which we aim to overcome through fast recognition: the opponent's posture is classified quickly to compensate for the slow joints. To this end, dynamic postures were learned using image data as input, three classification models were built, and comparative experiments and evaluations were performed on the designated dynamic postures. The comparative experiments show that the classification model using an MLP (multi-layer perceptron) achieves the highest classification accuracy and the fastest classification speed, demonstrating the validity of the proposed algorithm.
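
As a minimal illustration of the final classification stage the abstract compares, the sketch below trains a small MLP on stand-in per-frame feature vectors; the data shapes, labels, and layer sizes are hypothetical, and the image-based feature extraction that precedes this stage is omitted.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

# Stand-in data: 600 frames of flattened pose features, 3 posture classes.
rng = np.random.default_rng(0)
X = rng.normal(size=(600, 34))            # e.g., 17 keypoints x (x, y)
y = rng.integers(0, 3, size=600)          # illustrative posture labels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(128, 64), max_iter=500, random_state=0)
clf.fit(X_tr, y_tr)
print("test accuracy:", clf.score(X_te, y_te))
```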

Neural network with occlusion-resistant and reduced parameters in stereo images (스테레오 영상에서 폐색에 강인하고 축소된 파라미터를 갖는 신경망)

  • Kwang-Yeob Lee; Young-Min Jeon; Jun-Mo Jeong
    • Journal of IKEEE / v.28 no.1 / pp.65-71 / 2024
  • This paper proposes a neural network that reduces the number of parameters while reducing matching errors in occluded regions, thereby increasing the accuracy of depth maps in stereo matching. Stereo matching-based object recognition is used in many fields to recognize situations more accurately from images. When a complex image contains many objects, occluded areas arise from overlap between objects and occlusion by the background, lowering the accuracy of the depth map. Existing approaches that address this by creating context information and combining it with the cost volume, or by applying RoI selection in the occluded area, increase the complexity of the neural network, making it difficult to train and expensive to implement. In this paper, we build a depthwise separable neural network that enhances regional feature extraction before cost volume generation, reducing the number of parameters and yielding a network that is robust to occlusion errors. Compared to PSMNet, the proposed network reduces the number of parameters by 30% while improving color error by 5.3% and test loss by 3.6%.
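
Much of the reported parameter reduction comes from depthwise separable convolutions; the sketch below shows the basic depthwise-plus-pointwise block in PyTorch and compares its parameter count with a standard 3x3 convolution. It is a generic illustration, not the authors' network.

```python
import torch.nn as nn

class DepthwiseSeparableConv(nn.Module):
    """Depthwise convolution followed by a 1x1 pointwise convolution."""
    def __init__(self, in_ch, out_ch, kernel_size=3):
        super().__init__()
        self.depthwise = nn.Conv2d(in_ch, in_ch, kernel_size,
                                   padding=kernel_size // 2, groups=in_ch)
        self.pointwise = nn.Conv2d(in_ch, out_ch, kernel_size=1)

    def forward(self, x):
        return self.pointwise(self.depthwise(x))

# Parameter comparison against a standard 3x3 convolution.
standard = nn.Conv2d(64, 128, 3, padding=1)
separable = DepthwiseSeparableConv(64, 128)
count = lambda m: sum(p.numel() for p in m.parameters())
print(count(standard), count(separable))   # the separable block is far smaller
```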

Research on damage detection and assessment of civil engineering structures based on DeepLabV3+ deep learning model

  • Chengyan Song
    • Structural Engineering and Mechanics / v.91 no.5 / pp.443-457 / 2024
  • Traditional concrete surface inspection methods based on human vision suffer from high cost and safety risks, while conventional computer vision methods rely on hand-crafted features, are sensitive to environmental changes, and are difficult to generalize. To solve these problems, this paper applies deep learning to achieve automatic feature extraction for structural damage, with excellent detection speed and strong generalization ability. The main contents of this study are as follows: (1) A method based on the DeepLabV3+ convolutional neural network is proposed for surface detection of post-earthquake structural damage, including concrete cracks, spalling, and exposed steel bars. Key semantic information is extracted by different backbone networks, and datasets containing various surface damage types are used for training, testing, and evaluation. Intersection-over-union scores of 54.4%, 44.2%, and 89.9% on the test set demonstrate the network's capability to accurately identify different types of structural surface damage in pixel-level segmentation, highlighting its effectiveness in varied testing scenarios. (2) A semantic segmentation model based on DeepLabV3+ is proposed for the detection and evaluation of post-earthquake structural components. Using a dataset that includes building structural components and their damage degrees for training, testing, and evaluation, semantic segmentation accuracies of 98.5% and 56.9% were recorded. To provide a comprehensive assessment that considers both false positives and false negatives, the Mean Intersection over Union (Mean IoU) was employed as the primary evaluation metric, ensuring that the network's performance in detecting and evaluating pixel-level damage in post-earthquake structural components is assessed uniformly across all experiments. By incorporating deep learning technology, this study offers an innovative solution for accurately identifying post-earthquake damage in civil engineering structures and contributes to empirical research in automated detection and evaluation within structural health monitoring.
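
Mean IoU is the headline metric in part (2); the sketch below is a plain per-class intersection-over-union average for integer class maps, as a generic illustration rather than the paper's evaluation code.

```python
import numpy as np

def mean_iou(pred, target, num_classes):
    """Mean Intersection over Union between two integer class maps."""
    ious = []
    for c in range(num_classes):
        inter = np.logical_and(pred == c, target == c).sum()
        union = np.logical_or(pred == c, target == c).sum()
        if union > 0:                  # skip classes absent from both maps
            ious.append(inter / union)
    return float(np.mean(ious))
```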

Rear Vehicle Detection Method in Harsh Environment Using Improved Image Information (개선된 영상 정보를 이용한 가혹한 환경에서의 후방 차량 감지 방법)

  • Jeong, Jin-Seong; Kim, Hyun-Tae; Jang, Young-Min; Cho, Sang-Bok
    • Journal of the Institute of Electronics and Information Engineers / v.54 no.1 / pp.96-110 / 2017
  • Most vehicle detection studies that use a conventional or wide-angle lens leave a blind spot when detecting vehicles to the rear, and the image is vulnerable to noise and a variety of external conditions. In this paper, we propose a method for rear vehicle detection in harsh external environments with noise, blind spots, and similar problems. First, a fish-eye lens is used to minimize blind spots compared to a wide-angle lens. Because nonlinear radial distortion increases as the lens angle grows, calibration was applied after initializing and optimizing the distortion constant in order to ensure accuracy. In addition, the original image was analyzed alongside calibration to remove fog and correct brightness, enabling detection even when visibility is obstructed by light and dark adaptation in foggy conditions or by sudden changes in illumination. Fog removal generally takes a considerable amount of time to compute, so the widely used Dark Channel Prior algorithm was adopted to reduce the calculation time. Gamma correction was used to adjust brightness; a brightness and contrast evaluation was conducted on the image to determine the gamma value needed for correction, using only a part of the image rather than the whole in order to reduce computation. Once the brightness and contrast values were calculated, they were used to decide the gamma value and correct the entire image. Brightness correction and fog removal were processed in parallel, and the results were registered as a single image to minimize the total processing time. The HOG feature extraction method was then used to detect vehicles in the corrected image. As a result, vehicle detection with the proposed image correction took 0.064 seconds per frame and showed a 7.5% improvement in detection rate compared to the existing vehicle detection method.
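
Two of the preprocessing steps named above, Dark Channel Prior defogging and gamma-based brightness correction, can be illustrated compactly; the sketch below shows only the dark-channel computation and a gamma lookup table, not the full defogging pipeline or the paper's gamma-estimation rule, and the patch size and gamma convention are assumptions.

```python
import cv2
import numpy as np

def dark_channel(bgr, patch=15):
    """Per-pixel channel minimum eroded over a local patch (Dark Channel Prior)."""
    min_rgb = bgr.min(axis=2)
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (patch, patch))
    return cv2.erode(min_rgb, kernel)

def gamma_correct(bgr, gamma):
    """Brightness correction via an 8-bit lookup table."""
    table = (((np.arange(256) / 255.0) ** (1.0 / gamma)) * 255).astype(np.uint8)
    return cv2.LUT(bgr, table)
```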

Hardware Design of SURF-based Feature extraction and description for Object Tracking (객체 추적을 위한 SURF 기반 특이점 추출 및 서술자 생성의 하드웨어 설계)

  • Do, Yong-Sig; Jeong, Yong-Jin
    • Journal of the Institute of Electronics and Information Engineers / v.50 no.5 / pp.83-93 / 2013
  • The SURF algorithm, which is employed in object tracking systems as part of many computer vision applications, is a well-known scale- and rotation-invariant feature detection algorithm. Because of its high computational complexity, a hardware accelerator is essential if SURF is to be used as an IP in an embedded environment. However, SURF requires a large local memory, which increases chip size and decreases the value of the IP in ASIC and SoC system design. In this paper, we propose a way to implement the SURF algorithm in hardware with greatly reduced local memory by partitioning the algorithm into several sub-IPs that use external memory and a DMA. To demonstrate the validity of the proposed method, we developed an example of a simplified object tracking algorithm. The hardware IP ran at about 31 frames/sec with a logic size of about 74 Kgates in 30nm technology and 81 Kbytes of local memory, on an embedded platform consisting of an ARM Cortex-M0 processor, AMBA bus (AHB-Lite and APB), DMA, and an SDRAM controller. Hence, it can be used as a hardware IP in an SoC chip. If an image processing algorithm similar to SURF is mapped using the method proposed in this paper, an efficient hardware design for the target application can be expected.
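
The paper's contribution is the hardware partitioning, but a software reference for the SURF detection-and-description pipeline the IP accelerates can be written with OpenCV; the sketch below assumes an opencv-contrib build with the non-free modules enabled and a hypothetical input file name.

```python
import cv2

# Software reference for the SURF stages the hardware IP accelerates.
# Requires opencv-contrib-python built with the non-free modules enabled.
img = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)          # hypothetical input
surf = cv2.xfeatures2d.SURF_create(hessianThreshold=400)
keypoints, descriptors = surf.detectAndCompute(img, None)    # detect + describe
print(len(keypoints), None if descriptors is None else descriptors.shape)
```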