• Title/Summary/Keyword: vision recognition


Chinese-clinical-record Named Entity Recognition using IDCNN-BiLSTM-Highway Network

  • Tinglong Tang;Yunqiao Guo;Qixin Li;Mate Zhou;Wei Huang;Yirong Wu
    • KSII Transactions on Internet and Information Systems (TIIS) / v.17 no.7 / pp.1759-1772 / 2023
  • Chinese named entity recognition (NER) is a challenging task that seeks to find, recognize, and classify various types of information elements in unstructured text. Because Chinese text has no natural word boundaries like the spaces in English text, Chinese named entity identification is much more difficult. At present, most deep-learning-based NER models are built on a bidirectional long short-term memory network (BiLSTM), yet their performance still leaves room for improvement. To further improve performance on Chinese NER tasks, we propose a new NER model, IDCNN-BiLSTM-Highway, which combines a BiLSTM, an iterated dilated convolutional neural network (IDCNN), and a highway network. In our model, the IDCNN achieves multiscale context aggregation over a long sequence of words. The highway network effectively connects different layers of the network, allowing information to pass through the layers smoothly without attenuation. Finally, the globally optimal tag sequence is obtained by introducing a conditional random field (CRF). The experimental results show that, compared with other popular deep-learning-based NER models, our model performs better on two Chinese NER data sets, Resume and Yidu-S4k, with F1-scores of 94.98 and 77.59, respectively.
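The highway connection the abstract describes, a learned gate that lets information pass through layers without attenuation, can be sketched in a few lines. This is a generic highway layer in NumPy; the weights, shapes, and gate bias below are illustrative, not the paper's actual configuration:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def highway_layer(x, W_h, b_h, W_t, b_t):
    """One highway layer: a learned gate t blends a transformed
    representation h with the untouched input x, so information can
    pass through deep stacks without attenuation."""
    h = np.maximum(0.0, x @ W_h + b_h)   # candidate transform (ReLU)
    t = sigmoid(x @ W_t + b_t)           # transform gate in (0, 1)
    return t * h + (1.0 - t) * x         # carry gate is (1 - t)

# toy example: with a strongly negative gate bias the layer behaves
# almost like the identity, i.e. the input is "carried" through
rng = np.random.default_rng(0)
x = rng.standard_normal((2, 4))
W = rng.standard_normal((4, 4)) * 0.1
out = highway_layer(x, W, np.zeros(4), W, np.full(4, -12.0))
print(np.allclose(out, x, atol=1e-3))  # True: gate ≈ 0, output ≈ input
```

Initializing the gate bias to a negative value is the standard trick that makes a deep highway stack start out close to the identity, which is what lets gradients flow early in training.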

Correlation Extraction from KOSHA to enable the Development of Computer Vision based Risks Recognition System

  • Khan, Numan;Kim, Youjin;Lee, Doyeop;Tran, Si Van-Tien;Park, Chansik
    • International conference on construction engineering and project management / 2020.12a / pp.87-95 / 2020
  • Occupational safety in general, and construction safety in particular, is an intricate phenomenon. Industry professionals have devoted vital attention to enforcing Occupational Safety and Health (OSH) over the last three decades to enhance safety management in construction. Despite the efforts of safety professionals and government agencies, current safety management still relies on manual inspections, which are infrequent, time-consuming, and prone to error. Extensive research has been carried out to deal with the high fatality rates confronting the construction industry. Sensor systems, visualization-based technologies, and tracking techniques have been deployed by researchers over the last decade. Recently, computer vision has attracted significant attention in the construction industry worldwide. However, the literature reveals the narrow scope of computer vision technology for safety management; hence, broader-scope research on safety monitoring is needed to attain fully automatic job-site monitoring. In this regard, the development of a broader-scope computer-vision-based risk recognition system for detecting correlations between construction entities is inevitable. For this purpose, a detailed analysis was conducted and rules depicting the correlations (positive and negative) between construction entities were extracted. The deep-learning-supported Mask R-CNN algorithm is applied to train the model. As a proof of concept, a prototype was developed based on real scenarios. The proposed approach is expected to enhance the effectiveness of safety inspection and reduce the burden on safety managers. It is anticipated that this approach may enable a reduction in injuries and fatalities by implementing the exact relevant safety rules, and it will contribute to enhancing overall safety management and monitoring performance.
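The rule-checking step, pairing classes detected by Mask R-CNN against extracted positive/negative correlation rules, reduces to a lookup over entity pairs. The rule set below is a made-up illustration, not the actual rules extracted from KOSHA:

```python
# hypothetical positive/negative correlation rules between
# detected construction-entity classes (illustrative only)
rules = {
    ("worker", "excavator"): "negative",  # worker must keep clear of the machine
    ("worker", "hardhat"):   "positive",  # worker should be paired with a hardhat
}

def check_pair(a, b, rules):
    """Look up the safety correlation for an unordered pair of
    detected classes; returns 'none' when no rule applies."""
    return rules.get((a, b)) or rules.get((b, a)) or "none"

print(check_pair("excavator", "worker", rules))  # "negative"
print(check_pair("worker", "ladder", rules))     # "none"
```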


Deep Convolution Neural Networks in Computer Vision: a Review

  • Yoo, Hyeon-Joong
    • IEIE Transactions on Smart Processing and Computing / v.4 no.1 / pp.35-43 / 2015
  • Over the past couple of years, tremendous progress has been made in applying deep learning (DL) techniques to computer vision. In particular, deep convolutional neural networks (DCNNs) have achieved state-of-the-art performance on standard recognition datasets and tasks such as the ImageNet Large-Scale Visual Recognition Challenge (ILSVRC). Among them, the GoogLeNet network, a radically redesigned DCNN based on the Hebbian principle and scale invariance, set a new state of the art for classification and detection in ILSVRC 2014. Since there are various deep learning techniques, this review focuses on those directly related to DCNNs, especially the ones needed to understand the architecture and techniques employed in the GoogLeNet network.
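The core structural idea of GoogLeNet, the Inception module, runs 1x1, 3x3, and 5x5 convolutions plus pooling in parallel and concatenates their outputs along the channel axis. A shape-level sketch (the stand-in zero tensors replace real convolutions; the branch widths below are those reported for GoogLeNet's inception(3a) block):

```python
import numpy as np

def inception_concat(x, branch_channels):
    """Shape-level sketch of an Inception module: each parallel branch
    (1x1, 3x3, 5x5 conv, pooling projection) keeps the spatial size
    via padding but produces its own channel count; the outputs are
    concatenated along the channel axis."""
    n, h, w, _ = x.shape
    branches = [np.zeros((n, h, w, c)) for c in branch_channels]  # stand-ins for conv outputs
    return np.concatenate(branches, axis=-1)

x = np.zeros((1, 28, 28, 192))                # input feature map
out = inception_concat(x, [64, 128, 32, 32])  # the four branch widths of inception(3a)
print(out.shape)  # (1, 28, 28, 256)
```

The concatenation is what gives the network multiscale filters at every depth without choosing a single kernel size per layer.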

Development of a Simple Computer Vision System (컴퓨터 시각 장치의 개발)

  • 박동철;석민수
    • Journal of the Korean Institute of Telematics and Electronics / v.20 no.1 / pp.1-6 / 1983
  • To give a sensor-based robot system the capability to recognize task objects by computer vision, an image digitizer and some basic software techniques were developed and are reported here. The image digitizer was built with a CROMEMCO SYSTEM III microcomputer and a C.C.T.V. camera to convert an analog scene into a digitized image that could be processed by a digital computer. The basic software techniques for the computer vision system were aimed at the recognition of 3-dimensional objects. Experiments with these techniques were carried out using the image of a cube, which can be considered a typical simple 3-dimensional object.


A Study on Detection of Object Position and Displacement for Obstacle Recognition of UCT (무인 컨테이너 운반차량의 장애물 인식을 위한 물체의 위치 및 변위 검출에 관한 연구)

  • 이진우;이영진;조현철;손주한;이권순
    • Proceedings of the Korean Institute of Navigation and Port Research Conference / 1999.10a / pp.321-332 / 1999
  • It is important to detect object movement for the obstacle recognition and path searching of UCTs (unmanned container transporters) equipped with a vision sensor. This paper shows a method for extracting objects and tracing the trajectory of a moving object using a CCD camera, and it describes a method for recognizing object shapes with a neural network. Pixel points can be transformed into object positions in real space using the proposed viewport. The proposed technique is applied in a single-vision system based on a floor map.
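The pixel-to-real-space step can be sketched as a linear viewport transform. The paper's actual viewport calibration is not given in the abstract; this assumes a simple axis-aligned mapping from the image rectangle onto a rectangle of the floor map:

```python
def pixel_to_world(px, py, img_size, world_rect):
    """Map a pixel (px, py) to floor-map coordinates by a linear
    viewport transform (assumed axis-aligned; the pixel y axis may
    need flipping depending on the camera mounting)."""
    img_w, img_h = img_size
    x0, y0, x1, y1 = world_rect          # world rectangle seen by the camera
    wx = x0 + (px / img_w) * (x1 - x0)
    wy = y0 + (py / img_h) * (y1 - y0)
    return wx, wy

# a 640x480 image viewing a 10 m x 7.5 m patch of the floor map:
# the image center lands at the center of the patch
print(pixel_to_world(320, 240, (640, 480), (0.0, 0.0, 10.0, 7.5)))  # (5.0, 3.75)
```

A real overhead camera would additionally need a homography to correct perspective; the linear form above only holds for a camera looking straight down.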


A Study on the Environment Recognition System of Biped Robot for Stable Walking (안정적 보행을 위한 이족 로봇의 환경 인식 시스템 연구)

  • Song, Hee-Jun;Lee, Seon-Gu;Kang, Tae-Gu;Kim, Dong-Won;Park, Gwi-Tae
    • Proceedings of the KIEE Conference / 2006.07d / pp.1977-1978 / 2006
  • This paper discusses a vision-based sensor fusion system for biped robot walking. Most research on biped walking robots has focused on the walking algorithm itself. However, developing vision systems for biped walking robots is an important and urgent issue, since biped walking robots are ultimately developed not only for research but to be utilized in real life. In this research, systems for environment recognition and tele-operation have been developed for task assignment and execution by a biped robot, as well as for a human-robot interaction (HRI) system. To carry out certain tasks, an object tracking system using a modified optical flow algorithm and an obstacle recognition system using enhanced template matching and a hierarchical support vector machine algorithm, fed by a wireless vision camera, are implemented within a sensor fusion system together with the other sensors installed in the biped walking robot. Systems for robot manipulation and communication with the user have also been developed.
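The template-matching component can be illustrated with the textbook baseline, normalized cross-correlation over a sliding window (the paper's "enhanced" variant is not specified in the abstract, so this is only the standard starting point):

```python
import numpy as np

def match_template(image, template):
    """Plain normalized cross-correlation template matching.
    Returns the top-left corner (row, col) of the best match."""
    ih, iw = image.shape
    th, tw = template.shape
    t = template - template.mean()       # zero-mean template
    best, best_pos = -np.inf, (0, 0)
    for r in range(ih - th + 1):
        for c in range(iw - tw + 1):
            w = image[r:r + th, c:c + tw]
            wz = w - w.mean()            # zero-mean window
            denom = np.sqrt((wz ** 2).sum() * (t ** 2).sum())
            score = (wz * t).sum() / denom if denom > 0 else 0.0
            if score > best:
                best, best_pos = score, (r, c)
    return best_pos

# plant a distinctive 4x4 patch in a blank 20x20 image and find it
img = np.zeros((20, 20))
tpl = np.arange(16, dtype=float).reshape(4, 4)
img[5:9, 7:11] = tpl
print(match_template(img, tpl))  # (5, 7)
```

In practice this brute-force loop is replaced by an FFT-based correlation or a library call; zero-mean normalization is what makes the score robust to uniform brightness changes.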


Tool Condition Monitoring Technique Using Computer Vision and Pattern Recognition (컴퓨터 비젼 및 패턴인식기법을 이용한 공구상태 판정시스템 개발)

  • 권오달;양민양
    • Transactions of the Korean Society of Mechanical Engineers / v.17 no.1 / pp.27-37 / 1993
  • In unmanned machining, one of the most essential issues is the tool management system, which includes the control, identification, presetting, and monitoring of cutting tools. In particular, the monitoring of tool wear and fracture is the heart of the system. In this study, a computer-vision-based tool monitoring system is developed, and an algorithm that can determine the tool condition using this system is presented. To enhance practical adaptability, the vision system, through which two modes of images are taken, is located over the rake face of a tool insert, and the images are analyzed quantitatively and qualitatively with image processing techniques. In practice, the morphologies of tool fracture and wear occur in such varied forms that they are difficult to predict. To address this problem, pattern recognition is introduced to classify the tool condition into modes such as fracture, crater, chipping, and flank wear. Experimental results obtained on a CNC turning machine have proved the effectiveness of the proposed system.
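The classification step, assigning a feature vector extracted from the tool image to one of the wear/fracture modes, can be sketched with a minimal nearest-centroid classifier. The class list matches the abstract, but the 2-D features and centroid values below are purely illustrative, not the paper's actual descriptors:

```python
import numpy as np

def nearest_centroid(features, centroids):
    """Minimal pattern-recognition step: assign a feature vector to
    the class whose centroid is closest in Euclidean distance."""
    dists = {label: np.linalg.norm(features - c) for label, c in centroids.items()}
    return min(dists, key=dists.get)

# hypothetical 2-D features: (wear-area ratio, edge roughness)
centroids = {
    "flank wear": np.array([0.30, 0.20]),
    "crater":     np.array([0.50, 0.60]),
    "chipping":   np.array([0.20, 0.80]),
    "fracture":   np.array([0.90, 0.90]),
}
print(nearest_centroid(np.array([0.85, 0.95]), centroids))  # fracture
```

A 1993-era system would more likely have used hand-tuned decision rules or a statistical classifier over such features, but the structure, features in, discrete tool mode out, is the same.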

Vision-based hand Gesture Detection and Tracking System (비전 기반의 손동작 검출 및 추적 시스템)

  • Park Ho-Sik;Bae Cheol-soo
    • The Journal of Korean Institute of Communications and Information Sciences / v.30 no.12C / pp.1175-1180 / 2005
  • We present a vision-based hand gesture detection and tracking system. Most conventional hand gesture recognition systems use simple methods for hand detection, such as background subtraction under assumed static observation conditions, and those methods are not robust against camera motion, illumination changes, and so on. Therefore, we propose a statistical method to recognize and detect hand regions in images using geometrical structures. Our hand tracking system also employs multiple cameras to reduce occlusion problems, and non-synchronous multiple observations enhance system scalability. In our experiments, the proposed method achieved a recognition rate of 99.28%, a 3.91% improvement over the conventional appearance-based method.

The Effect of Visual Feedback on One-hand Gesture Performance in Vision-based Gesture Recognition System

  • Kim, Jun-Ho;Lim, Ji-Hyoun;Moon, Sung-Hyun
    • Journal of the Ergonomics Society of Korea / v.31 no.4 / pp.551-556 / 2012
  • Objective: This study presents the effect of visual feedback on one-hand gesture performance in a vision-based gesture recognition system when people use gestures to control a screen device remotely. Background: Gesture interaction is receiving growing attention because it uses advanced sensor technology and allows users natural interaction through their own body motion. In generating motion, visual feedback has been considered a critical factor affecting speed and accuracy. Method: Three types of visual feedback (arrow, star, and animation) were selected and 20 gestures were listed. Twelve participants performed each of the 20 gestures while given the 3 types of visual feedback in turn. Results: People made longer hand traces and took longer to make a gesture when given the arrow-shaped feedback than the star-shaped feedback. The animation-type feedback was the most preferred. Conclusion: The type of visual feedback had a statistically significant effect on the length of the hand trace, the elapsed time, and the speed of motion in performing a gesture. Application: This study can be applied to any device that needs visual feedback for device control. With large feedback, people produce shorter motion traces in less time and at higher speed than with small feedback when performing gestures to control a device. Large visual feedback is therefore recommended for situations requiring fast actions, while smaller visual feedback is recommended for situations requiring elaborate actions.