• Title/Abstract/Keyword: Information Recognition

9,120 search results

A Study on the Performance Analysis of Entity Name Recognition Techniques Using Korean Patent Literature

  • Gim, Jangwon
    • 한국정보기술학회 영문논문지 / Vol. 10, No. 2 / pp.139-151 / 2020
  • Entity name recognition is a subtask of information extraction that identifies entity names in documents and classifies their types. It is widely used in natural language processing applications such as information retrieval, machine translation, and question-answering systems. Various deep learning-based models exist to improve entity name recognition performance, but comparative studies of these models on Korean data remain scarce. In this paper, we compare and analyze the performance of CRF, LSTM-CRF, BiLSTM-CRF, and BERT, all of which are actively used to identify entity names, on Korean data. We also evaluate whether the embedding models widely used in recent natural language processing tasks affect the recognition model's performance. Experiments on patent data and a Korean corpus confirmed that BiLSTM-CRF with FastText embeddings achieved the highest performance. (An illustrative BiLSTM tagger sketch follows below.)
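  • Illustrative sketch: as a rough illustration of the BiLSTM-CRF-with-FastText setup described above (not the authors' code), the following PyTorch snippet shows only the BiLSTM emission layer of such a tagger fed with pre-computed FastText-style word vectors; the CRF decoding layer and the Korean patent corpus are omitted, and the dimensions and tag count are assumptions.

```python
import torch
import torch.nn as nn

class BiLSTMTagger(nn.Module):
    """BiLSTM emission model for sequence labeling (the CRF layer is omitted here)."""
    def __init__(self, embed_dim=300, hidden_dim=256, num_tags=9):
        super().__init__()
        self.lstm = nn.LSTM(embed_dim, hidden_dim // 2,
                            bidirectional=True, batch_first=True)
        self.emissions = nn.Linear(hidden_dim, num_tags)

    def forward(self, word_vectors):            # (batch, seq_len, embed_dim)
        hidden, _ = self.lstm(word_vectors)     # (batch, seq_len, hidden_dim)
        return self.emissions(hidden)           # per-token tag scores

# Hypothetical usage: FastText-style vectors for a 4-token sentence.
fasttext_vectors = torch.randn(1, 4, 300)       # stand-in for real FastText embeddings
tagger = BiLSTMTagger()
print(tagger(fasttext_vectors).shape)           # torch.Size([1, 4, 9])
```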

인공지능 객체인식에 관한 파라미터 측정 연구 (A Study On Parameter Measurement for Artificial Intelligence Object Recognition)

  • 최병관
    • 디지털산업정보학회논문지 / Vol. 15, No. 3 / pp.15-28 / 2019
  • Artificial intelligence is evolving rapidly across the ICT field, smart convergence media systems, and the content industry through the fourth industrial revolution and Big Data. In this paper, we propose a face recognition method based on artificial-intelligence object recognition and study it experimentally through object recognition techniques. In the conventional 3D imaging field, object recognition has been studied in various ways, including research on side effects such as visual fatigue and dizziness caused by 3D images. In this study, we address the problems arising from quantitative differences in object recognition for human-factor algorithms that measure visual fatigue through cognitive function, morphological analysis, and object recognition. In particular, a new method of computer interaction is presented and its results are demonstrated through experiments.

Near-infrared face recognition by fusion of E-GV-LBP and FKNN

  • Li, Weisheng; Wang, Lidou
    • KSII Transactions on Internet and Information Systems (TIIS) / Vol. 9, No. 1 / pp.208-223 / 2015
  • To address face recognition under complex variations and further improve efficiency, a new near-infrared face recognition algorithm that fuses E-GV-LBP features with an FKNN classifier is proposed. First, the near-infrared face image is transformed by a Gabor wavelet. Then, an LBP coding feature containing spatial, scale, and direction information is extracted. Finally, an improved FKNN algorithm based on the spatial domain is introduced. The proposed approach makes face recognition faster and more accurate. Experimental results show that the new algorithm improves recognition accuracy and computing time under near-infrared light and other complex variations, and the method can also be used for face recognition under visible light. (An illustrative Gabor-plus-LBP feature sketch follows below.)
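  • Illustrative sketch: the following snippet shows only the generic Gabor-then-LBP feature pipeline named above; plain uniform LBP and a single Gabor filter stand in for the paper's E-GV-LBP coding and filter bank, the test image is a stand-in for a near-infrared face, and the FKNN classifier is not reproduced.

```python
import numpy as np
from skimage import data
from skimage.filters import gabor
from skimage.feature import local_binary_pattern

image = data.camera().astype(float)             # stand-in for a near-infrared face image

# 1. Gabor wavelet response (one frequency/orientation; the paper uses a filter bank).
gabor_real, _ = gabor(image, frequency=0.2, theta=np.pi / 4)

# 2. LBP coding of the Gabor response (plain uniform LBP as a stand-in for E-GV-LBP).
lbp = local_binary_pattern(gabor_real, P=8, R=1, method="uniform")

# 3. Histogram of LBP codes as the face descriptor to feed a (F)KNN classifier.
hist, _ = np.histogram(lbp, bins=np.arange(0, 11), density=True)
print(hist.round(3))
```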

Intelligent Activity Recognition based on Improved Convolutional Neural Network

  • Park, Jin-Ho; Lee, Eung-Joo
    • 한국멀티미디어학회논문지 / Vol. 25, No. 6 / pp.807-818 / 2022
  • To further improve the accuracy and time efficiency of behavior recognition in intelligent monitoring scenarios, a human behavior recognition algorithm based on YOLO combined with a CNN and an LSTM is proposed. Exploiting the real-time nature of YOLO target detection, the specific behavior in the surveillance video is first detected in real time, and deep features are extracted after obtaining the target's size, location, and other information. Noise from irrelevant areas of the image is then removed. Finally, an LSTM models the resulting time series, and the final behavior decision is made for the action sequence in the surveillance video. Experiments on the MSR and KTH datasets show that the average recognition rate reaches 98.42% and 96.6%, respectively, with an average recognition time of 210 ms and 220 ms. The method performs well for intelligent behavior recognition. (A rough detect-then-LSTM pipeline sketch follows below.)
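  • Illustrative sketch: a rough outline of the detect-then-model-the-sequence pipeline described above, not the paper's network; ResNet-18 stands in for the per-frame CNN, the YOLO detector is assumed to have already produced the person crops, and the clip length and six action classes are made up.

```python
import torch
import torch.nn as nn
from torchvision.models import resnet18

# Per-frame feature extractor (ResNet-18 backbone as a stand-in for the paper's CNN).
backbone = resnet18(weights=None)
backbone.fc = nn.Identity()                     # 512-d feature per frame

# Temporal model over the per-frame features.
lstm = nn.LSTM(input_size=512, hidden_size=128, batch_first=True)
classifier = nn.Linear(128, 6)                  # 6 hypothetical action classes

def classify_clip(person_crops):
    """person_crops: (T, 3, 224, 224) tensor of person regions cropped by a detector."""
    with torch.no_grad():
        feats = backbone(person_crops)          # (T, 512)
    out, _ = lstm(feats.unsqueeze(0))           # (1, T, 128)
    return classifier(out[:, -1])               # class scores from the last time step

clip = torch.randn(16, 3, 224, 224)             # stand-in for 16 cropped frames
print(classify_clip(clip).shape)                # torch.Size([1, 6])
```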

A Hand Gesture Recognition Method using Inertial Sensor for Rapid Operation on Embedded Device

  • Lee, Sangyub; Lee, Jaekyu; Cho, Hyeonjoong
    • KSII Transactions on Internet and Information Systems (TIIS) / Vol. 14, No. 2 / pp.757-770 / 2020
  • We propose a hand gesture recognition method compatible with a head-up display (HUD) that has limited processing resources. For fast link adaptation with the HUD, gesture recognition must be processed rapidly and the minimum amount of driver hand-gesture data must be sent from the wearable device. We therefore recognize each hand gesture with an inertial measurement unit (IMU) sensor using revised correlation matching: recognition is performed by calculating the correlation between every axis of the acquired data set, and classifying against pre-defined gesture values and actions enables rapid recognition. We evaluate the algorithm, which can be implemented on wearable bands with minimal processing load, and test it with pre-defined gestures of specific motions on a wearable platform device. The experimental results validate the feasibility and effectiveness of the proposed hand gesture recognition system; despite being based on a very simple concept, the algorithm shows good recognition accuracy. (A minimal correlation-matching sketch follows below.)
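  • Illustrative sketch: a minimal version of per-axis correlation matching against pre-defined gesture templates, assuming 3-axis accelerometer windows; the gesture names, window length, and 0.8 threshold are invented for the example and are not the paper's values.

```python
import numpy as np

def gesture_correlation(window, template):
    """Mean per-axis Pearson correlation between an IMU window and a gesture template.

    window, template: arrays of shape (samples, axes), e.g. 3-axis accelerometer data.
    """
    scores = [np.corrcoef(window[:, a], template[:, a])[0, 1]
              for a in range(window.shape[1])]
    return float(np.mean(scores))

def recognize(window, templates, threshold=0.8):
    """Return the best-matching pre-defined gesture, or None if no score passes the threshold."""
    scores = {name: gesture_correlation(window, tpl) for name, tpl in templates.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] >= threshold else None

# Hypothetical usage with synthetic 3-axis data (50 samples per gesture).
rng = np.random.default_rng(0)
templates = {"swipe_left": rng.standard_normal((50, 3)),
             "swipe_right": rng.standard_normal((50, 3))}
window = templates["swipe_left"] + 0.1 * rng.standard_normal((50, 3))
print(recognize(window, templates))             # expected: swipe_left
```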

패턴인식 필터링을 적용한 물체인식 성능 향상 기법 (A Method for Improving Object Recognition Using Pattern Recognition Filtering)

  • 박진렬; 이승기
    • 전자공학회논문지 / Vol. 53, No. 6 / pp.122-129 / 2016
  • In the field of computer vision, many algorithms have been studied for object recognition. Among them, the feature-based SURF (Speeded Up Robust Features) algorithm is superior to other algorithms in terms of speed and accuracy. However, SURF has the drawback that object recognition can fail when corresponding points are mismatched during correspondence detection. In this paper, to improve the object recognition rate, we implement an object recognition system based on the SURF and RANSAC (Random Sample Consensus) algorithms and propose pattern-recognition filtering. Experimental results showing the improvement in the object recognition rate are also presented. (An illustrative keypoint-matching-with-RANSAC sketch follows below.)
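  • Illustrative sketch: the generic keypoint-matching-plus-RANSAC step underlying the system above; ORB is used as a freely available stand-in because SURF requires an opencv-contrib "nonfree" build, the image file names are hypothetical, and the paper's pattern-recognition filtering step is not reproduced.

```python
import cv2
import numpy as np

# Hypothetical input images; ORB stands in for SURF here.
object_img = cv2.imread("object.png", cv2.IMREAD_GRAYSCALE)
scene_img = cv2.imread("scene.png", cv2.IMREAD_GRAYSCALE)

detector = cv2.ORB_create(nfeatures=1000)
kp1, des1 = detector.detectAndCompute(object_img, None)
kp2, des2 = detector.detectAndCompute(scene_img, None)

# Brute-force matching of binary descriptors.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)

# RANSAC homography estimation rejects mismatched correspondences.
src = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
dst = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
H, inlier_mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)

inliers = int(inlier_mask.sum()) if inlier_mask is not None else 0
print(f"{inliers} inlier matches remain after RANSAC filtering")
```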

A Comprehensive Approach for Tamil Handwritten Character Recognition with Feature Selection and Ensemble Learning

  • Manoj K; Iyapparaja M
    • KSII Transactions on Internet and Information Systems (TIIS) / Vol. 18, No. 6 / pp.1540-1561 / 2024
  • This research proposes a novel approach for Tamil Handwritten Character Recognition (THCR) that combines feature selection and ensemble learning techniques. The Tamil script is complex and highly variable, requiring a robust and accurate recognition system. Feature selection is used to reduce dimensionality while preserving discriminative features, improving classification performance and reducing computational complexity. Several feature selection methods are compared, and individual classifiers (support vector machines, neural networks, and decision trees) are evaluated through extensive experiments. Ensemble learning techniques such as bagging and boosting are employed to leverage the strengths of multiple classifiers and enhance recognition accuracy. The proposed approach is evaluated on the HP Labs dataset, achieving 95.56% accuracy with an ensemble learning framework based on support vector machines. The dataset consists of 82,928 samples in 247 distinct classes, contributed by 500 participants from Tamil Nadu, and includes 40,000 characters with 500 user variations. The results surpass or rival existing methods, demonstrating the effectiveness of the approach and offering insights for developing recognition systems for other complex scripts. Future work could explore the integration of deep learning techniques and the extension of the approach to other Indic scripts and languages. (A feature-selection-plus-bagged-SVM sketch follows below.)
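  • Illustrative sketch: the feature-selection-plus-bagged-SVM idea on synthetic data (scikit-learn >= 1.2); SelectKBest and the chosen kernel, k, and ensemble size are assumptions, and the HP Labs Tamil dataset is not included.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

# Synthetic stand-in for handwritten-character feature vectors (not the HP Labs data).
X, y = make_classification(n_samples=2000, n_features=200, n_informative=60,
                           n_classes=10, n_clusters_per_class=1, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Feature selection followed by a bagged SVM ensemble.
model = make_pipeline(
    SelectKBest(f_classif, k=80),               # keep the most discriminative features
    BaggingClassifier(estimator=SVC(kernel="rbf", C=10),
                      n_estimators=10, random_state=0),
)
model.fit(X_train, y_train)
print(f"held-out accuracy: {model.score(X_test, y_test):.3f}")
```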

스마트 학습지: 미세 격자 패턴 인식 기반의 지능형 학습 도우미 시스템의 설계와 구현 (Design and Implementation of Smart Self-Learning Aid: Micro Dot Pattern Recognition based Information Embedding Solution)

  • 심재연; 김성환
    • 한국정보처리학회:학술대회논문집 / 한국정보처리학회 2011년도 춘계학술발표대회 / pp.346-349 / 2011
  • In this paper, we design a perceptually invisible dot pattern layout and its recognition scheme, and apply the scheme to a smart self-learning aid for interactive learning. To maximize information capacity and increase robustness to noise, we design an ECC (error correcting code) based dot pattern with a directional vector indicator. The micro dot pattern (20 information bits + 15 ECC bits + 9 layout information bits) is embedded using K ink (from CMYK) and extracted with an IR (infrared) LED and an IR-filter-based camera embedded in the smart pen. K ink is used because it is carbon based, and carbon is easily recognized under IR even without visible light. After acquiring IR camera images of the dot patterns, we perform layout adjustment using the 9 layout information bits and extract the 20 information bits from the 35 data bits (20 information bits plus 15 ECC bits). To embed and extract information bits, we use a topology-based dot pattern recognition scheme that is robust to the geometric distortion common in camera-based recognition. Topology-based pattern recognition traces the next information bit symbols using topological distance measured from the pivot information bit. We implemented and experimented with sample patterns and achieved almost 99% recognition for the embedded patterns. (A simplified dot-ordering sketch follows below.)
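  • Illustrative sketch: only the geometric part of the reading step described above, i.e. locating dot centroids in a thresholded IR image and ordering them by distance from a pivot dot; the ECC decoding, the 9-bit layout adjustment, and the actual pattern geometry are not reproduced, and the synthetic test image is an assumption.

```python
import cv2
import numpy as np

def read_dot_centroids(ir_image, pivot_index=0):
    """Locate dot centroids in an IR image and order them by distance from a pivot dot.

    Only the geometric ordering step; ECC decoding and layout correction are omitted.
    """
    _, binary = cv2.threshold(ir_image, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    n, _, _, centroids = cv2.connectedComponentsWithStats(binary)
    dots = centroids[1:]                        # skip the background component
    pivot = dots[pivot_index]
    order = np.argsort(np.linalg.norm(dots - pivot, axis=1))
    return dots[order]

# Hypothetical usage: a synthetic image with three dark dots on white paper.
img = np.full((100, 100), 255, np.uint8)
for x, y in [(20, 20), (40, 25), (70, 60)]:
    cv2.circle(img, (x, y), 2, 0, -1)
print(read_dot_centroids(img).round(1))
```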

Extended Center-Symmetric Pattern과 2D-PCA를 이용한 얼굴인식 (Face Recognition using Extended Center-Symmetric Pattern and 2D-PCA)

  • 이현구; 김동주
    • 디지털산업정보학회논문지 / Vol. 9, No. 2 / pp.111-119 / 2013
  • Face recognition has recently become one of the most popular research areas in computer vision, machine learning, and pattern recognition because it spans numerous applications, such as access control, surveillance, security, credit-card verification, and criminal identification. In this paper, we propose a simple descriptor called the ECSP (Extended Center-Symmetric Pattern) for illumination-robust face recognition. The ECSP operator encodes the texture information of a local face region by emphasizing the diagonal components of the earlier CS-LBP (Center-Symmetric Local Binary Pattern). The diagonal components are emphasized because facial textures along the diagonal direction contain much more information than those in other directions. The facial texture information produced by the ECSP operator is then used as the input image of an image covariance-based feature extraction algorithm such as 2D-PCA (Two-Dimensional Principal Component Analysis). Performance evaluation was carried out using various binary pattern operators and recognition algorithms on the Yale B database. The experimental results demonstrate that the proposed approach achieves better recognition accuracy than other approaches and is effective against illumination variation. (A minimal CS-LBP and 2D-PCA sketch follows below.)
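  • Illustrative sketch: minimal numpy versions of the two building blocks named above: a plain CS-LBP code for one pixel's 8-neighbourhood (the ECSP's additional diagonal emphasis is not reproduced) and a 2D-PCA projection built from the image covariance matrix; the image size, number of kept eigenvectors, and random input are assumptions.

```python
import numpy as np

def cs_lbp_code(patch, threshold=0.0):
    """CS-LBP code of the 3x3 neighbourhood in `patch` (4 bits, values 0-15)."""
    n = [patch[0, 0], patch[0, 1], patch[0, 2], patch[1, 2],
         patch[2, 2], patch[2, 1], patch[2, 0], patch[1, 0]]  # 8 neighbours, clockwise
    bits = [(n[i] - n[i + 4]) > threshold for i in range(4)]  # centre-symmetric pairs
    return sum(int(b) << i for i, b in enumerate(bits))

def two_d_pca(images, num_components=5):
    """2D-PCA: top eigenvectors of the image covariance matrix, used as a projection."""
    mean = images.mean(axis=0)
    G = sum((A - mean).T @ (A - mean) for A in images) / len(images)
    eigvals, eigvecs = np.linalg.eigh(G)
    return eigvecs[:, np.argsort(eigvals)[::-1][:num_components]]

rng = np.random.default_rng(0)
faces = rng.standard_normal((20, 32, 32))       # stand-in for ECSP-coded face images
X = two_d_pca(faces)                            # (32, 5) projection matrix
features = faces[0] @ X                         # (32, 5) feature matrix for one face
print(cs_lbp_code(faces[0][:3, :3]), features.shape)
```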

Parallel Multi-task Cascade Convolution Neural Network Optimization Algorithm for Real-time Dynamic Face Recognition

  • Jiang, Bin; Ren, Qiang; Dai, Fei; Zhou, Tian; Gui, Guan
    • KSII Transactions on Internet and Information Systems (TIIS) / Vol. 14, No. 10 / pp.4117-4135 / 2020
  • Because of viewing-angle, illumination, and scene diversity, real-time dynamic face detection and recognition is difficult in unrestricted environments. In this study, we exploit the intrinsic correlation between detection and calibration, using a multi-task cascaded convolutional neural network (MTCNN) to improve the efficiency of face recognition. The output of each core network is mapped in parallel to a compact Euclidean space in which distance represents the similarity of facial features, so the target face can be identified as quickly as possible without waiting for all network iterations to complete. Even after the angle of the target face and the illumination change, the correlation between recognition results can still be obtained. In the actual application scenario, a multi-camera real-time monitoring system performs face matching and recognition on successive frames acquired from different angles. The effectiveness of the method was verified by several real-time monitoring experiments, with good results. (A minimal embedding-distance matching sketch follows below.)
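  • Illustrative sketch: only the final matching stage implied above, i.e. identifying a probe face by Euclidean distance between embeddings; the embeddings here are random stand-ins for the network outputs, and the 128-d size, gallery names, and 0.9 threshold are invented for the example.

```python
import numpy as np

def identify(probe_embedding, gallery, threshold=0.9):
    """Match a face embedding to the nearest gallery identity in Euclidean space.

    gallery: dict of identity name -> embedding vector; returns None if nothing is close enough.
    """
    names = list(gallery)
    dists = np.linalg.norm(np.stack([gallery[n] for n in names]) - probe_embedding, axis=1)
    best = int(np.argmin(dists))
    return (names[best], float(dists[best])) if dists[best] < threshold else None

# Hypothetical usage with random 128-d embeddings standing in for network outputs.
rng = np.random.default_rng(1)
gallery = {"alice": rng.standard_normal(128), "bob": rng.standard_normal(128)}
probe = gallery["alice"] + 0.05 * rng.standard_normal(128)
print(identify(probe, gallery))                 # ('alice', <small distance>)
```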