• Title/Summary/Keyword: Image identification

Search Results: 984

Crop Leaf Disease Identification Using Deep Transfer Learning

  • Changjian Zhou;Yutong Zhang;Wenzhong Zhao
    • Journal of Information Processing Systems
    • /
    • v.20 no.2
    • /
    • pp.149-158
    • /
    • 2024
  • Traditional manual identification of crop leaf diseases is challenging; owing to limits on manpower and resources, it is difficult to survey crop diseases on a large scale. The emergence of artificial intelligence technologies, particularly the extensive application of deep learning, is expected to overcome these limitations and greatly improve the accuracy and efficiency of crop disease identification. Crop leaf disease identification models have been designed and trained using large-scale training data, enabling them to predict different categories of diseases from unlabeled crop leaves. However, these models, which possess strong feature representation capabilities, require substantial training data, and such datasets are often scarce in practical farming scenarios. To address this issue and improve the feature learning ability of the models, this study proposes a deep transfer learning adaptation strategy. The proposed method transfers weights and parameters from models pre-trained on similar large-scale datasets such as ImageNet; the ImageNet pre-trained weights are then fine-tuned on crop leaf disease features to improve prediction ability. For training, 16,060 crop leaf disease images spanning 12 categories were collected. The experimental results demonstrate that the proposed method achieves an impressive 98% accuracy with the transferred ResNet-50 model, confirming the effectiveness of the transfer learning approach.
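
A minimal sketch of the transfer-learning recipe this abstract describes: load ImageNet-pretrained ResNet-50 weights, replace the classification head for 12 disease categories, and fine-tune. It uses PyTorch/torchvision; the dataset path, image size, and hyperparameters are illustrative assumptions, not values from the paper.

```python
# Sketch: fine-tune an ImageNet-pretrained ResNet-50 on a 12-class crop leaf disease dataset.
# Dataset layout (one folder per class) and hyperparameters are illustrative assumptions.
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

device = "cuda" if torch.cuda.is_available() else "cpu"

transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),  # ImageNet statistics
])
train_set = datasets.ImageFolder("leaf_disease/train", transform=transform)  # hypothetical path
loader = torch.utils.data.DataLoader(train_set, batch_size=32, shuffle=True)

# Load ImageNet weights, then replace the classification head for 12 disease categories.
model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
model.fc = nn.Linear(model.fc.in_features, 12)
model = model.to(device)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)  # small learning rate for fine-tuning

model.train()
for epoch in range(10):
    for images, labels in loader:
        images, labels = images.to(device), labels.to(device)
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```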

Target identification for visual tracking

  • Lee, Joon-Woong;Yun, Joo-Seop;Kweon, In-So
    • Institute of Control, Robotics and Systems: Conference Proceedings (제어로봇시스템학회: 학술대회논문집)
    • /
    • 1996.10a
    • /
    • pp.145-148
    • /
    • 1996
  • In moving-object tracking based on visual sensory feedback, a prerequisite is to determine which feature or object is to be tracked; feature or object identification therefore precedes the tracking. In this paper, we focus on object identification rather than image feature identification. Target identification is realized by finding the line segments that correspond to the hypothesized model segments of the target. The key idea is the combination of the Mahalanobis distance with the geometrical relationship between model segments and extracted line segments. We demonstrate the robustness and feasibility of the proposed target identification algorithm by identifying and tracking a moving vehicle in a video traffic surveillance system over images of a road scene.
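
A minimal sketch of the segment-matching idea described above: score extracted image line segments against a hypothesized model segment using the Mahalanobis distance over simple geometric attributes. The attribute vector (midpoint, orientation, length) and the covariance values are illustrative assumptions, not the paper's exact formulation.

```python
# Sketch: match extracted line segments to a hypothesized model segment using the
# Mahalanobis distance over geometric attributes (midpoint, orientation, length).
import numpy as np

def segment_features(p1, p2):
    """Attribute vector for a line segment given its two endpoints."""
    p1, p2 = np.asarray(p1, float), np.asarray(p2, float)
    mid = (p1 + p2) / 2.0
    angle = np.arctan2(p2[1] - p1[1], p2[0] - p1[0])
    length = np.linalg.norm(p2 - p1)
    return np.array([mid[0], mid[1], angle, length])

def mahalanobis(x, mean, cov):
    d = x - mean
    return float(np.sqrt(d @ np.linalg.inv(cov) @ d))

# Hypothesized model segment and an assumed covariance over its attributes.
model_feat = segment_features((100, 50), (180, 60))
cov = np.diag([25.0, 25.0, 0.05, 100.0])  # illustrative per-attribute variances

# Candidate segments extracted from the image; keep those within a gating distance.
candidates = [((102, 48), (183, 62)), ((20, 200), (90, 210))]
for seg in candidates:
    d = mahalanobis(segment_features(*seg), model_feat, cov)
    print(seg, "distance =", round(d, 2), "match" if d < 3.0 else "reject")
```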


Joint Reasoning of Real-time Visual Risk Zone Identification and Numeric Checking for Construction Safety Management

  • Ali, Ahmed Khairadeen;Khan, Numan;Lee, Do Yeop;Park, Chansik
    • International conference on construction engineering and project management
    • /
    • 2020.12a
    • /
    • pp.313-322
    • /
    • 2020
  • The recognition of risk hazards is a vital step in effectively preventing accidents on a construction site. Advances in computer vision systems and the availability of large visual databases of construction sites make it possible to take quick action in the event of human error and disaster situations that may occur during management supervision. It is therefore necessary to analyze the risk factors that need to be managed at the construction site and to review appropriate and effective technical methods for each risk factor. This research focuses on analyzing Occupational Safety and Health Administration (OSHA) rules related to risk zone identification that can be adopted by image recognition technology, and on classifying their risk factors according to the effective technical method. To this end, a pattern-oriented classification of OSHA rules that can support large-scale safety hazard recognition was developed. The research uses joint reasoning of risk zone identification and numeric input by utilizing a stereo camera integrated with an image detection algorithm (YOLOv3) and the Pyramid Stereo Matching Network (PSMNet). The resulting system identifies risk zones and raises an alarm if a target object enters such a zone; it also determines numerical information about a target, recognizing its length, spacing, and angle. Applying joint image detection and reasoning algorithms can improve the speed and accuracy of hazard detection by merging more than one factor to prevent accidents on the job site.
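
A minimal sketch of the joint-reasoning step described above: given object detections (e.g., from YOLOv3) and a per-pixel depth map (e.g., from a stereo network such as PSMNet), flag any worker whose position falls inside a predefined risk zone. The detection format, zone polygon, and distance threshold are assumptions for illustration.

```python
# Sketch: raise an alarm when a detected worker enters a predefined risk zone.
# Detections are assumed to be (label, x1, y1, x2, y2) boxes from an upstream detector
# such as YOLOv3; depth_m is an assumed per-pixel depth map (metres) from stereo matching.
import numpy as np
from matplotlib.path import Path

risk_zone = Path([(300, 400), (600, 400), (600, 700), (300, 700)])  # image-plane polygon (assumed)

def check_risk(detections, depth_m, max_distance_m=10.0):
    alarms = []
    for label, x1, y1, x2, y2 in detections:
        if label != "worker":
            continue
        # Use the bottom-centre of the box as the worker's ground-contact point.
        foot = ((x1 + x2) / 2.0, y2)
        distance = float(np.median(depth_m[int(y1):int(y2), int(x1):int(x2)]))
        if risk_zone.contains_point(foot) and distance < max_distance_m:
            alarms.append((foot, distance))
    return alarms

depth = np.full((720, 1280), 8.0)                      # dummy depth map
dets = [("worker", 350, 500, 420, 690), ("truck", 50, 50, 200, 200)]
print(check_risk(dets, depth))  # -> one alarm for the worker inside the zone
```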


Oil Pipeline Weld Defect Identification System Based on Convolutional Neural Network

  • Shang, Jiaze;An, Weipeng;Liu, Yu;Han, Bang;Guo, Yaodan
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.14 no.3
    • /
    • pp.1086-1103
    • /
    • 2020
  • The automatic identification and classification of image-based weld defects is a difficult task owing to the complex texture of X-ray images of weld defects. Several deep learning methods for automatically identifying welds were proposed and tested. In this work, four different deep convolutional neural networks were evaluated and compared on a set of 1,631 images. Six different types of defects in the weld images are classified: concavity, undercut, bar defects, circular defects, unfused defects, and incomplete penetration. Another contribution of this paper is a CNN model, "RayNet", trained on the dataset from scratch. In the experimental part, the parameters of the convolution operation are compared and analyzed: the effect of input image size is examined, classification results are given for each defect, and partial feature maps produced during feature extraction are shown. The classification accuracy reaches 96.5%, which is 6.6% higher than that of other existing fine-tuned models and also improves on traditional image processing methods, demonstrating that a model trained from scratch can perform well on small-scale datasets. The proposed method can assist evaluators in classifying pipeline welding defects.
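
A minimal sketch of training a small convolutional classifier from scratch for six weld defect classes, in the spirit of the "RayNet" model described above; the layer sizes, input resolution, and dummy data are assumptions, not the paper's architecture.

```python
# Sketch: a small CNN trained from scratch to classify weld X-ray patches into 6 defect types.
# Architecture and input size are illustrative; they are not the paper's RayNet definition.
import torch
import torch.nn as nn

class SmallDefectNet(nn.Module):
    def __init__(self, num_classes=6):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # 64 -> 32
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 32 -> 16
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 16 -> 8
        )
        self.classifier = nn.Linear(64 * 8 * 8, num_classes)

    def forward(self, x):
        x = self.features(x)
        return self.classifier(x.flatten(1))

model = SmallDefectNet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

# Dummy batch of 64x64 grayscale patches standing in for the real X-ray data.
images = torch.randn(8, 1, 64, 64)
labels = torch.randint(0, 6, (8,))
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
```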

Development of Identification Method of Rice Varieties Using Image Processing Technique (화상처리법에 의한 쌀 품종별 판별기술 개발)

  • Kwon, Young-Kil;Cho, Rae-Kwang
    • Applied Biological Chemistry
    • /
    • v.41 no.2
    • /
    • pp.160-165
    • /
    • 1998
  • The current technique for discriminating rice varieties is not objective, as it depends on the naked eye of a well-trained inspector. The DNA fingerprint method, based on the genetic characteristics of rice, has been found inappropriate for on-site application because it requires much labor and a skilled expert. The purpose of this study was to develop an identification technique for polished rice varieties using CCD camera images. To minimize noise in the captured image, thresholding and median filtering were carried out, and edges were extracted from the image data. Image data pretreated by normalization and FFT (fast Fourier transform) were used for a library model and a feedforward backpropagation neural network model. The image processing technique using a CCD camera could discriminate rice varieties with high accuracy when the grain shapes were quite different, but the accuracy reached only 85% for rice of similar shape.
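
A minimal sketch of the preprocessing chain described above (thresholding, median filtering, edge extraction, and FFT-based features) using OpenCV and NumPy; the radial-signature feature, file path, and coefficient count are illustrative assumptions rather than the paper's exact procedure.

```python
# Sketch: preprocess a rice kernel image and build an FFT-based feature vector
# that could feed a library model or a feedforward neural network classifier.
import cv2
import numpy as np

def kernel_features(path, n_coeffs=32):
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)          # hypothetical image path
    _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    denoised = cv2.medianBlur(binary, 5)                   # median filtering to reduce noise
    edges = cv2.Canny(denoised, 50, 150)                   # edge extraction

    # Radial signature of the largest contour, normalized and transformed with the FFT.
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
    contour = max(contours, key=cv2.contourArea).reshape(-1, 2).astype(float)
    center = contour.mean(axis=0)
    radii = np.linalg.norm(contour - center, axis=1)
    radii = radii / radii.max()                            # scale normalization
    spectrum = np.abs(np.fft.fft(radii))[:n_coeffs]        # low-order FFT magnitudes
    return spectrum / spectrum[0]                          # normalize by the DC component

features = kernel_features("rice_kernel.png")              # hypothetical file
print(features.shape)  # (32,) shape feature vector per kernel
```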


Comparison Study of the Performance of CNN Models with Multi-view Image Set on the Classification of Ship Hull Blocks (다시점 영상 집합을 활용한 선체 블록 분류를 위한 CNN 모델 성능 비교 연구)

  • Chon, Haemyung;Noh, Jackyou
    • Journal of the Society of Naval Architects of Korea
    • /
    • v.57 no.3
    • /
    • pp.140-151
    • /
    • 2020
  • It is important to identify the location of ship hull blocks, together with their exact block identification numbers, when scheduling the shipbuilding process. Wrong information on the location or identification number of a hull block lowers productivity, because time is spent finding where the block actually is. To solve this problem, a system is needed that tracks the location of the blocks and identifies their identification numbers automatically. There has been much research on location tracking systems for hull blocks in the stockyard, but no research on identifying the hull blocks themselves. This study compares the performance of five Convolutional Neural Network (CNN) models with a multi-view image set for classifying hull blocks in the stockyard. The CNN models are open algorithms from the ImageNet Large Scale Visual Recognition Challenge (ILSVRC). Four scaled hull block models are used to acquire the images of ship hull blocks. The CNN models were trained, with and without transfer learning, on the original training data and on augmented versions of that data. Twenty tests and predictions covering the five CNN models and four training conditions are performed. To compare the classification performance of the CNN models, accuracy and average F1-score computed from the confusion matrix are adopted as performance measures. As a result of the comparison, the ResNet-152v2 model shows the highest accuracy and average F1-score with both the full-block prediction image set and the cropped-block prediction image set.
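
A minimal sketch of the performance measures named in the abstract above: accuracy and the average (macro) F1-score computed from a confusion matrix. The example matrix values are arbitrary and not results from the paper.

```python
# Sketch: accuracy and macro-averaged F1-score from a multi-class confusion matrix
# (rows = true classes, columns = predicted classes).
import numpy as np

def accuracy_and_macro_f1(cm):
    cm = np.asarray(cm, dtype=float)
    accuracy = np.trace(cm) / cm.sum()
    tp = np.diag(cm)
    precision = tp / np.maximum(cm.sum(axis=0), 1e-12)
    recall = tp / np.maximum(cm.sum(axis=1), 1e-12)
    f1 = 2 * precision * recall / np.maximum(precision + recall, 1e-12)
    return accuracy, f1.mean()

# Arbitrary 4-class example (e.g., four hull block types).
cm = [[48, 1, 1, 0],
      [2, 45, 2, 1],
      [0, 3, 46, 1],
      [1, 0, 2, 47]]
acc, macro_f1 = accuracy_and_macro_f1(cm)
print(f"accuracy={acc:.3f}, macro F1={macro_f1:.3f}")
```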

Development of Tele-image Processing Algorithm for Automatic Harvesting of House Melon (하우스멜론 수확자동화를 위한 원격영상 처리알고리즘 개발)

  • Kim, S.C.;Im, D.H.;Chung, S.C.;Hwang, H.
    • Journal of Biosystems Engineering
    • /
    • v.33 no.3
    • /
    • pp.196-203
    • /
    • 2008
  • A hybrid, robust image processing algorithm to extract visual features of melons during cultivation was developed based on a wireless tele-operative interface. Features of a melon such as its size, shape, and position are crucial to successful task automation and to the future development of a cultivation database. The algorithm was developed based on the concept of hybrid decision-making, which shares a task between the computer and the operator through an interactive man-computer interface. The hybrid decision-making system was composed of three modules: wireless image transmission, task specification and identification, and the man-computer interface. The proposed algorithm overcomes the computing burden and the instability of image processing results caused by variations in illumination and by the complexity of the environment, with its irregular stems, leaf shapes, and shades. By utilizing the operator's teaching via the LCD touch screen of the display monitor, the complexity and instability of the melon identification process are avoided. The Hough transform was modified and applied to the image within the locally specified window to extract the geometric shape and position of the melon. Processing took less than 200 milliseconds.
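
A minimal sketch of applying a circular Hough transform only inside an operator-specified window to estimate a melon's position and size, in the spirit of the locally windowed Hough transform described above; the window coordinates, file path, and Hough parameters are illustrative assumptions.

```python
# Sketch: run a circular Hough transform on a locally specified window (e.g., the
# region the operator touched on the LCD screen) to estimate the melon's centre and radius.
import cv2
import numpy as np

def find_melon_in_window(image_path, window):
    x, y, w, h = window                                    # operator-specified ROI (assumed)
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)    # hypothetical image path
    roi = cv2.medianBlur(gray[y:y + h, x:x + w], 5)
    circles = cv2.HoughCircles(roi, cv2.HOUGH_GRADIENT, dp=1.2, minDist=50,
                               param1=100, param2=30,
                               minRadius=h // 6, maxRadius=h // 2)
    if circles is None:
        return None
    cx, cy, r = np.round(circles[0, 0]).astype(int)
    return (x + cx, y + cy, r)                              # centre and radius in full-image coordinates

result = find_melon_in_window("melon_frame.png", (120, 80, 200, 200))
print(result)
```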

Identification Method of Geometric and Filtering Change Regions in Modified Digital Images (수정된 디지털 이미지에서 기하학적 변형 및 필터링 변형 영역을 식별하는 기법)

  • Hwang, Min-Gu;Cho, Byung-Joo;Har, Dong-Hwan
    • Journal of Korea Multimedia Society
    • /
    • v.15 no.11
    • /
    • pp.1292-1304
    • /
    • 2012
  • Recently, digital images are frequently forged by editors or advertisers, and even amateurs can modify images with easy-to-use editing programs. In this study, we propose identification and analysis methods for modified images to address these problems. In modified-image analysis, we distinguish two types of change: filtering changes and geometric changes. Both types of change rely on interpolation, so we propose an algorithm that can analyze the trace left on a modified area. With this algorithm, we implement an interpolation detection map using a minimum filter, the Laplacian operator, and a maximum filter. We apply the proposed algorithm to modified images and are able to analyze their modification traces using the detection map.
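
A minimal sketch of a detection map built by chaining a minimum filter, a Laplacian, and a maximum filter, as named in the abstract above; the filter sizes, the ordering of the steps, and the synthetic test image are assumptions for illustration, not the paper's exact pipeline.

```python
# Sketch: build a simple interpolation "detection map" by chaining a minimum filter,
# a Laplacian high-pass, and a maximum filter over a grayscale image.
import numpy as np
from scipy import ndimage

def interpolation_detection_map(gray, size=3):
    gray = gray.astype(float)
    suppressed = ndimage.minimum_filter(gray, size=size)      # suppress isolated bright noise
    highpass = np.abs(ndimage.laplace(suppressed))            # emphasize local second-order structure
    detection = ndimage.maximum_filter(highpass, size=size)   # dilate the response into a map
    return detection / (detection.max() + 1e-12)              # normalized map in [0, 1]

# Dummy image whose right half has been down/up-sampled (interpolated) to mimic a modified region.
rng = np.random.default_rng(0)
img = rng.integers(0, 256, (128, 128)).astype(float)
img[:, 64:] = ndimage.zoom(ndimage.zoom(img[:, 64:], 0.5), 2.0)[:128, :64]
dmap = interpolation_detection_map(img)
print("mean response, original vs resampled half:", dmap[:, :64].mean(), dmap[:, 64:].mean())
```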

Fast and Accurate Visual Place Recognition Using Street-View Images

  • Lee, Keundong;Lee, Seungjae;Jung, Won Jo;Kim, Kee Tae
    • ETRI Journal
    • /
    • v.39 no.1
    • /
    • pp.97-107
    • /
    • 2017
  • A fast and accurate building-level visual place recognition method built on an image-retrieval scheme using street-view images is proposed. Reference images generated from street-view images usually depict multiple buildings and confusing regions, such as roads, sky, and vehicles, which degrade retrieval accuracy and cause matching ambiguity. The proposed practical database refinement method uses informative reference image and keypoint selection. For database refinement, the method uses the spatial layout of the buildings in the reference image, specifically a building-identification mask image obtained from a prebuilt three-dimensional model of the site. A global-positioning-system-aware retrieval structure is also incorporated. To evaluate the method, we constructed a dataset over an area of 0.26 km², comprising 38,700 reference images and corresponding building-identification mask images. The proposed method removed 25% of the database images using informative reference image selection, and achieved 85.6% recall of the top five candidates in 1.25 s of full processing. The method thus achieves high accuracy at low computational complexity.
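
A minimal sketch of the two refinement ideas mentioned above: keeping only keypoints that fall on building pixels of the identification mask, and restricting candidate references to those near the query's GPS position. The mask encoding, GPS radius, and data layout are assumptions for illustration.

```python
# Sketch: database refinement for street-view place recognition.
# (1) keep only keypoints whose locations fall on building pixels of the mask image,
# (2) restrict candidate references to those within a GPS radius of the query.
import numpy as np

def filter_keypoints_by_mask(keypoints, mask, building_value=255):
    """keypoints: list of (x, y); mask: HxW array where building pixels == building_value (assumed)."""
    return [(x, y) for (x, y) in keypoints
            if 0 <= int(y) < mask.shape[0] and 0 <= int(x) < mask.shape[1]
            and mask[int(y), int(x)] == building_value]

def nearby_references(query_latlon, references, radius_m=200.0):
    """references: list of (ref_id, lat, lon); flat-earth approximation for short distances."""
    lat0, lon0 = query_latlon
    out = []
    for ref_id, lat, lon in references:
        dy = (lat - lat0) * 111_320.0                         # metres per degree of latitude
        dx = (lon - lon0) * 111_320.0 * np.cos(np.radians(lat0))
        if np.hypot(dx, dy) <= radius_m:
            out.append(ref_id)
    return out

mask = np.zeros((480, 640), dtype=np.uint8)
mask[100:400, 200:500] = 255                                  # building region (assumed encoding)
print(filter_keypoints_by_mask([(250, 150), (10, 10)], mask))  # keeps only the building keypoint
print(nearby_references((37.5665, 126.9780),
                        [("ref_a", 37.5667, 126.9782), ("ref_b", 37.60, 127.05)]))
```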

Biometric identification of Black Bengal goat: unique iris pattern matching system vs deep learning approach

  • Menalsh Laishram;Satyendra Nath Mandal;Avijit Haldar;Shubhajyoti Das;Santanu Bera;Rajarshi Samanta
    • Animal Bioscience
    • /
    • v.36 no.6
    • /
    • pp.980-989
    • /
    • 2023
  • Objective: Iris pattern recognition is well developed and practiced in humans; however, there is a scarcity of information on the application of iris recognition to animals under field conditions, where the major challenge is capturing a high-quality iris image from a constantly moving, non-cooperative animal, even when properly restrained. The aim of the study was to validate and identify Black Bengal goats biometrically in order to improve animal management in a traceability system. Methods: Forty-nine healthy, disease-free, 3 months ± 6 days old female Black Bengal goats were randomly selected at the farmer's field. Eye images were captured from the left eye of each goat at 3, 6, 9, and 12 months of age using a specialized camera made for human iris scanning. iGoat software was used for matching the same individual goats at 3, 6, 9, and 12 months of age. The ResNet152V2 deep learning algorithm was further applied to the same image sets to predict matching percentages using only the captured eye images, without extracting iris features. Results: The matching threshold computed within and between goats was 55%. The accuracies of template matching of goats at 3, 6, 9, and 12 months of age were 81.63%, 90.24%, 44.44%, and 16.66%, respectively. As the accuracies of matching the goats at 9 and 12 months of age were low and below the minimum threshold matching percentage, this iris pattern matching process was not acceptable. The validation accuracies of the ResNet152V2 deep learning model after training were 82.49%, 92.68%, 77.17%, and 87.76% for identification of goats at 3, 6, 9, and 12 months of age, respectively. Conclusion: This study strongly supports that a deep learning method using eye images could serve as a signature for biometric identification of individual goats.
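
A minimal sketch of the deep learning approach described above: an ImageNet-pretrained ResNet152V2 backbone with a new head that classifies eye images by individual goat. It uses tf.keras; the directory layout, input size, and training settings are assumptions, and the paper's exact training procedure is not reproduced.

```python
# Sketch: ResNet152V2 backbone with a new classification head, one identity class per goat.
# Directory layout, image size, and hyperparameters are illustrative assumptions.
import tensorflow as tf

NUM_GOATS = 49  # one identity class per animal in the study

base = tf.keras.applications.ResNet152V2(weights="imagenet", include_top=False,
                                          input_shape=(224, 224, 3))
base.trainable = False  # start by training only the new head

model = tf.keras.Sequential([
    tf.keras.Input(shape=(224, 224, 3)),
    tf.keras.layers.Rescaling(1.0 / 127.5, offset=-1.0),  # ResNetV2 expects inputs in [-1, 1]
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dropout(0.3),
    tf.keras.layers.Dense(NUM_GOATS, activation="softmax"),
])
model.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
              loss="sparse_categorical_crossentropy", metrics=["accuracy"])

# Hypothetical dataset: one folder of eye images per goat identity.
train_ds = tf.keras.utils.image_dataset_from_directory(
    "goat_eyes/train", image_size=(224, 224), batch_size=16)
val_ds = tf.keras.utils.image_dataset_from_directory(
    "goat_eyes/val", image_size=(224, 224), batch_size=16)
model.fit(train_ds, validation_data=val_ds, epochs=10)
```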