• Title/Summary/Keyword: term extraction

Search Result 340, Processing Time 0.023 seconds

National survey of inferior alveolar nerve and lingual nerve damage after lower third molar extraction (하악 제3대구치 발치 후 발생한 하치조신경 및 설신경 손상에 관한 연구)

  • Han, Sung-Hee
    • The Journal of the Korean dental association
    • /
    • v.47 no.4
    • /
    • pp.211-224
    • /
    • 2009
  • This retrospective study analyzed inferior alveolar nerve and lingual nerve damage after the removal of mandibular third molars. The subjects were 2472 dentists who answered a questionnaire about numbness after the extraction of lower third molars. The data, collected by e-mail and web site, included the incidence of removal of lower third molars, the incidence and experience of numbness of the inferior alveolar nerve and lingual nerve, the rate and duration of recovery, the influence of long-term sensory loss on daily life, and the period and amount of indemnity in cases of medical dispute. The results are summarized as follows. 1. The experience rate and the incidence rate of inferior alveolar nerve numbness by oral surgeons in the past year were 19.9% and 0.14%; those of the lingual nerve were 7.7% and 0.05%. 2. The experience rate and the incidence rate of inferior alveolar nerve numbness by dentists other than oral surgeons in the past year were 9.7% and 0.19%; those of the lingual nerve were 5.5% and 0.11%. 3. The recovery rates of the inferior alveolar nerve after 1 year and 2 years were 85.6% and 91.3%; those of the lingual nerve were 84.8% and 89.3%. In conclusion, most numbness recovers within 2 years, but the possibility of long-term, persistent numbness should not be neglected. Practitioners must therefore inform patients of the possibility of nerve injury and include it in consent forms.


Comparative Analysis of Work-Life Balance Issues between Korea and the United States (워라밸 이슈 비교 분석: 한국과 미국)

  • Lee, So-Hyun;Kim, Minsu;Kim, Hee-Woong
    • The Journal of Information Systems
    • /
    • v.28 no.2
    • /
    • pp.153-179
    • /
    • 2019
  • Purpose This study collects issues about work-life balance in Korea and the United States and suggests specific plans for work-life balance through comparison and analysis. The objective is to contribute to improving people's quality of life by clarifying the concept of work-life balance, which has recently become an issue, and offering detailed plans to be considered at the individual, corporate, and governmental levels for a society with work-life balance. Design/methodology/approach This study collects work-life balance related issues from recruiting sites in Korea and the United States, then compares and analyzes the collected data using three text mining techniques: LDA topic modeling, term frequency analysis, and keyword extraction. Findings The text mining results show that in Korea it is especially important to build a corporate culture that supports work-life balance within a free organizational atmosphere. Whether work-life balance can be achieved, and the recognition of and satisfaction with it, also differ by type of company and kind of work. In the United States, it is important to work more efficiently by raising the level of teamwork among team members and strengthening the role of the leaders who lead teams in the organization. It is also significant for companies to provide employees with education and training opportunities that improve their individual capabilities or skills. Based on the text mining results in both countries, the study suggests the roles of individuals, companies, and government, along with specific plans.
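The abstract above mentions term frequency analysis as one of its three text mining techniques. A minimal sketch of comparing term frequencies across two corpora, using hypothetical stand-in snippets rather than the study's actual scraped job-site data:

```python
from collections import Counter
import re

def term_freq(docs):
    """Count term frequencies across a list of documents."""
    counts = Counter()
    for doc in docs:
        counts.update(re.findall(r"[a-z']+", doc.lower()))
    return counts

# Hypothetical snippets standing in for scraped recruiting-site posts.
korea_docs = ["flexible hours and corporate culture matter",
              "corporate culture supports work life balance"]
us_docs = ["teamwork and leadership drive efficient work",
           "training improves individual skill and teamwork"]

kr, us = term_freq(korea_docs), term_freq(us_docs)
print(kr.most_common(3))
print(us.most_common(3))
```

Comparing the top-ranked terms of each corpus is the simplest version of the cross-country contrast the study performs; LDA topic modeling and keyword extraction build on the same counts.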

A Novel RGB Image Steganography Using Simulated Annealing and LCG via LSB

  • Bawaneh, Mohammed J.;Al-Shalabi, Emad Fawzi;Al-Hazaimeh, Obaida M.
    • International Journal of Computer Science & Network Security
    • /
    • v.21 no.1
    • /
    • pp.143-151
    • /
    • 2021
  • The enormous prevalence of transferring official confidential digital documents via the Internet shows the urgent need to deliver confidential messages to the recipient without letting any unauthorized person know the contents of the secret messages or detect their existence. Several steganography techniques, such as least significant bit (LSB) substitution, Secure Cover Selection (SCS), Discrete Cosine Transform (DCT), and Palette Based (PB) embedding, have been applied to prevent intruders from analyzing and recovering the transferred secret message. The steganography method used should withstand the challenges of steganalysis techniques in terms of analysis and detection. This paper presents a novel and robust framework for color image steganography that combines a Linear Congruential Generator (LCG), simulated annealing (SA), Caesar cryptography, and the LSB substitution method in one system, in order to resist steganalysis and deliver data securely to its destination. SA, with the support of the LCG, finds the optimal minimum-sniffing path inside a cover color (RGB) image; the confidential message is then encrypted and embedded along that path in the RGB host image using the Caesar and LSB procedures. The embedding and extraction processes require common knowledge between sender and receiver, represented by the SA initialization parameters, the LCG seed, the agreed Caesar key, and the secret message length. A steganalysis intruder cannot understand or detect the secret message inside the host image without correct knowledge of the manipulation process. The constructed system satisfies the main requirements of image steganography in terms of robustness against confidential message extraction, high-quality visual appearance, low mean square error (MSE), and high peak signal-to-noise ratio (PSNR).
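A minimal sketch of the LCG-path + Caesar + LSB combination described above, on a single channel of byte-valued pixels. The simulated-annealing path optimization is omitted, and the LCG multiplier/increment (5, 3) are illustrative full-period parameters, not the paper's:

```python
def lcg_path(seed, a, c, m, n):
    """Full-period LCG (m a power of two, c odd, a % 4 == 1):
    yields n distinct pixel indices forming the embedding path."""
    x, path = seed % m, []
    for _ in range(n):
        x = (a * x + c) % m
        path.append(x)
    return path

def caesar(text, key):
    """Caesar shift over byte values (negative key decrypts)."""
    return "".join(chr((ord(ch) + key) % 256) for ch in text)

def embed(pixels, message, seed, key):
    """Encrypt the message, then write its bits into the LSBs
    of the pixels along the LCG path."""
    bits = [(ord(ch) >> b) & 1 for ch in caesar(message, key) for b in range(8)]
    path = lcg_path(seed, 5, 3, len(pixels), len(bits))
    out = list(pixels)
    for idx, bit in zip(path, bits):
        out[idx] = (out[idx] & ~1) | bit   # overwrite the LSB only
    return out

def extract(pixels, length, seed, key):
    """Re-derive the path from the shared seed, collect LSBs, decrypt."""
    path = lcg_path(seed, 5, 3, len(pixels), length * 8)
    bits = [pixels[i] & 1 for i in path]
    chars = [sum(bits[i * 8 + b] << b for b in range(8)) for i in range(length)]
    return caesar("".join(map(chr, chars)), -key)

cover = list(range(256))                   # stand-in for one RGB channel
stego = embed(cover, "secret", seed=42, key=7)
print(extract(stego, 6, seed=42, key=7))   # -> secret
```

The receiver recovers the message only with the shared seed, key, and message length, mirroring the common-knowledge requirement in the abstract; each embedded bit changes a pixel value by at most 1, which is why LSB methods keep MSE low and PSNR high.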

A Term Weight Mensuration based on Popularity for Search Query Expansion (검색 질의 확장을 위한 인기도 기반 단어 가중치 측정)

  • Lee, Jung-Hun;Cheon, Suh-Hyun
    • Journal of KIISE:Software and Applications
    • /
    • v.37 no.8
    • /
    • pp.620-628
    • /
    • 2010
  • With the Internet pervasive in everyday life, people can now retrieve a great deal of information through the web. However, the exponential growth of information on the web has limited the search performance of online search engines, which return piles of unwanted results, so web users now need more time and effort than before to find the information they need. This paper suggests a query expansion method to bring wanted information to web users quickly. In experiments without a change of search subject, the popularity-based term weight mensuration performed better than TF-IDF and a simple popularity term weight mensuration. When the subject changed during a search, the performance of the popularity-based term weight mensuration degraded less than that of the others.
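The abstract uses TF-IDF as its baseline weighting scheme; the popularity-based scheme itself is not specified here, so the following is a sketch of the TF-IDF baseline only, on toy tokenized documents:

```python
import math
from collections import Counter

def tf_idf(docs):
    """Per-document TF-IDF weights: tf(t, d) * log(N / df(t)),
    where df(t) counts documents containing term t."""
    n = len(docs)
    df = Counter(t for d in docs for t in set(d))
    return [{t: tf * math.log(n / df[t]) for t, tf in Counter(d).items()}
            for d in docs]

docs = [["web", "search", "query"],
        ["web", "index"],
        ["query", "expansion", "query"]]
weights = tf_idf(docs)
print(weights[2]["query"])   # frequent in doc 2, in 2 of 3 docs
```

Terms that appear in every document get weight log(N/N) = 0, which is the mechanism that demotes uninformative common words when ranking expansion candidates.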

Speaker verification system combining attention-long short term memory based speaker embedding and I-vector in far-field and noisy environments (Attention-long short term memory 기반의 화자 임베딩과 I-vector를 결합한 원거리 및 잡음 환경에서의 화자 검증 알고리즘)

  • Bae, Ara;Kim, Wooil
    • The Journal of the Acoustical Society of Korea
    • /
    • v.39 no.2
    • /
    • pp.137-142
    • /
    • 2020
  • Many studies based on the I-vector have been conducted in a variety of environments, from text-dependent short utterances to text-independent long utterances. In this paper, we propose a speaker verification system for far-field and noisy environments that combines an I-vector with Probabilistic Linear Discriminant Analysis (PLDA) and a speaker embedding from a Long Short-Term Memory (LSTM) network with an attention mechanism. The LSTM model's Equal Error Rate (EER) is 15.52 % and the attention-LSTM model's is 8.46 %, an improvement of 7.06 percentage points. We show that the proposed method addresses the heuristic embedding definition of the existing extraction process. The EER of the I-vector/PLDA system alone is 6.18 %, the best single-system performance; combined with the attention-LSTM based embedding it is 2.57 %, which is 3.61 percentage points below the baseline, a relative improvement of 58.41 %.
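The Equal Error Rate used throughout the abstract is the operating point where the false acceptance rate equals the false rejection rate. A minimal sketch of estimating it from verification scores, with hypothetical toy scores in place of real trial data:

```python
def eer(genuine, impostor):
    """Equal Error Rate: sweep thresholds and return the point where
    FAR (impostors accepted) is closest to FRR (genuine trials rejected)."""
    best_gap, best_eer = 1.0, None
    for t in sorted(set(genuine + impostor)):
        far = sum(s >= t for s in impostor) / len(impostor)
        frr = sum(s < t for s in genuine) / len(genuine)
        if abs(far - frr) < best_gap:
            best_gap, best_eer = abs(far - frr), (far + frr) / 2
    return best_eer

# Toy similarity scores (hypothetical); higher = more likely same speaker.
genuine  = [0.9, 0.8, 0.75, 0.6, 0.3]
impostor = [0.7, 0.4, 0.35, 0.2, 0.1]
print(eer(genuine, impostor))   # -> 0.2
```

A lower EER means the genuine and impostor score distributions overlap less, which is what the combined I-vector/PLDA + attention-LSTM system improves.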

Development of u-Health standard terminology and guidelines for terminology standardization (유헬스 표준용어 및 용어 표준화 가이드라인 개발)

  • Lee, Soo-Kyoung
    • Journal of the Korea Academia-Industrial cooperation Society
    • /
    • v.16 no.6
    • /
    • pp.4056-4066
    • /
    • 2015
  • To promote a shared understanding of terminology related to u-Health and to activate the u-Health industry, standard u-Health terminology for communication is required. The purpose of this study is to develop such standard terminology and to provide guidelines for terminology standardization. We developed 187 u-Health standard terms through a process of data acquisition, term extraction, term refinement, term selection, and term management, based on reports, glossaries, and Telecommunications Technology Association (TTA) standards on u-Health. As a result, standard u-Health terminology and standardization guidelines optimized for the domestic environment are suggested. They include the definition, classification, and components of the terminology, as well as the methods and principles of the standardization process. The u-Health standard terminology and standardization guidelines presented in this study should reduce the cost of adopting and managing terminology while making information transfer easier, thereby promoting efficient development of the u-Health industry in general.

LSTM(Long Short-Term Memory)-Based Abnormal Behavior Recognition Using AlphaPose (AlphaPose를 활용한 LSTM(Long Short-Term Memory) 기반 이상행동인식)

  • Bae, Hyun-Jae;Jang, Gyu-Jin;Kim, Young-Hun;Kim, Jin-Pyung
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.10 no.5
    • /
    • pp.187-194
    • /
    • 2021
  • Human behavior recognition is the recognition of what a person is doing from joint movements, using computer vision techniques from image processing. Combined with CCTV, deep-learning-based behavior recognition can serve as a safety-accident response service applicable at safety management sites. Existing studies have paid relatively little attention to behavior recognition through deep-learning-based extraction of human joint keypoints, and it has been difficult to monitor workers continuously and systematically at safety management sites. To address these problems, this paper proposes a method to recognize risk behavior using only joint keypoints and joint motion information. AlphaPose, a pose estimation method, was used to extract the joint keypoints of the body. The extracted keypoints were fed sequentially into a Long Short-Term Memory (LSTM) model to be learned as continuous data. Evaluation of the behavioral recognition accuracy confirmed that the accuracy for the "Lying Down" behavior was high.
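Feeding per-frame keypoints "sequentially" into an LSTM, as the abstract describes, amounts to windowing the frame stream into fixed-length sequences. A minimal sketch under assumed shapes (the frame count, window length, and 2-joint layout are hypothetical, not the paper's):

```python
def make_sequences(frames, seq_len):
    """Slide a fixed-length window over per-frame keypoint vectors,
    producing the continuous sequences an LSTM consumes."""
    return [frames[i:i + seq_len] for i in range(len(frames) - seq_len + 1)]

# Hypothetical: 6 frames, each with 2 joints flattened as (x, y, x, y).
frames = [[0.1 * f, 0.2 * f, 0.3 * f, 0.4 * f] for f in range(6)]
seqs = make_sequences(frames, seq_len=4)
print(len(seqs), len(seqs[0]))   # 3 windows of 4 frames each
```

Each window would then be one training sample of shape (timesteps, features) for the LSTM, labeled with the behavior occurring in that span.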

Document classification using a deep neural network in text mining (텍스트 마이닝에서 심층 신경망을 이용한 문서 분류)

  • Lee, Bo-Hui;Lee, Su-Jin;Choi, Yong-Seok
    • The Korean Journal of Applied Statistics
    • /
    • v.33 no.5
    • /
    • pp.615-625
    • /
    • 2020
  • In text mining, the document-term frequency matrix is built from terms extracted from documents whose group labels are known. In this study, we generated a document-term frequency matrix for document classification by research field. We applied the traditional term weighting function term frequency-inverse document frequency (TF-IDF) to the generated matrix, as well as term frequency-inverse gravity moment (TF-IGM). We also generated a weighted document-keyword matrix by extracting keywords to improve document classification accuracy. Based on the extracted keyword matrix, we classified documents using a deep neural network. To find the optimal model, the classification accuracy was verified while varying the number of hidden layers and hidden nodes. The model with eight hidden layers showed the highest accuracy, and all TF-IGM classification accuracies (across parameter changes) were higher than those of TF-IDF. The deep neural network was also confirmed to be more accurate than the support vector machine. We therefore propose applying TF-IGM and a deep neural network to document classification.
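A sketch of the TF-IGM weighting the abstract favors, following the commonly cited formulation (weight = tf * (1 + lambda * igm), igm = f1 / sum of rank-weighted class frequencies sorted descending); lambda = 7 is the usual default, as the paper's exact parameters are not given here:

```python
def tf_igm(tf, class_freqs, lam=7.0):
    """TF-IGM: tf * (1 + lam * igm), where igm = f1 / sum(r * f_r)
    over the term's class frequencies sorted in descending order."""
    f = sorted(class_freqs, reverse=True)
    igm = f[0] / sum(r * fr for r, fr in enumerate(f, start=1))
    return tf * (1 + lam * igm)

# Same term frequency, different class distributions:
# concentrated in one class vs. spread evenly over three classes.
print(tf_igm(2, [9, 1, 0]))   # high weight: class-discriminative term
print(tf_igm(2, [3, 3, 3]))   # low weight: uninformative term
```

Unlike IDF, which only counts how many documents contain a term, IGM rewards terms concentrated in a single class, which is the property the abstract credits for TF-IGM's higher classification accuracy.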

Efficient Parallel TLD on CPU-GPU Platform for Real-Time Tracking

  • Chen, Zhaoyun;Huang, Dafei;Luo, Lei;Wen, Mei;Zhang, Chunyuan
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.14 no.1
    • /
    • pp.201-220
    • /
    • 2020
  • Trackers, especially long-term (LT) trackers, now have more complex structures and more intensive computation due to the endless pursuit of high accuracy and robustness. However, the computational efficiency of LT trackers cannot meet the real-time requirements of many real application scenarios. As heterogeneous CPU-GPU platforms become more popular than ever, exploiting the computing capacity of such platforms to make LT trackers real-time is a challenge. This paper focuses on TLD, the first LT tracking framework, and proposes an efficient parallel implementation based on OpenCL. We first analyze the TLD tracker and then optimize its compute-intensive kernels: Fern Feature Extraction, Fern Classification, NCC Calculation, Overlaps Calculation, and Positive and Negative Sample Extraction. Experimental results demonstrate that our parallel TLD tracker outperforms the original TLD, achieving a 3.92x speedup across CPU and GPU. Moreover, the parallel TLD tracker runs at 52.9 frames per second and meets the real-time requirement.

Natural language processing techniques for bioinformatics

  • Tsujii, Jun-ichi
    • Proceedings of the Korean Society for Bioinformatics Conference
    • /
    • 2003.10a
    • /
    • pp.3-3
    • /
    • 2003
  • With the biomedical literature expanding so rapidly, there is an urgent need to discover and organize knowledge extracted from texts. Although factual databases contain crucial information, the overwhelming amount of new knowledge remains in textual form (e.g. MEDLINE). In addition, new terms are constantly coined for the relationships linking new genes, drugs, proteins, etc. As the biomedical literature expands, more systems are applying a variety of methods to automate the process of knowledge acquisition and management. In my talk, I focus on our group's project at the University of Tokyo, GENIA, whose objective is to construct an information extraction system for protein-protein interactions from MEDLINE abstracts. The talk includes: (1) techniques we use for named entity recognition: (1-a) SOHMM (Self-organized HMM), (1-b) Maximum Entropy Model, (1-c) Lexicon-based Recognizer; (2) treatment of term variants and acronym finders; (3) event extraction using a full parser; (4) linguistic resources for text mining (the GENIA corpus): (4-a) semantic tags, (4-b) structural annotations, (4-c) co-reference tags, (4-d) the GENIA ontology. I will also talk about possible extensions of our work that link the findings of molecular biology with clinical findings, and claim that text-based or concept-based biology would be a viable alternative to systems biology, which tends to emphasize the role of simulation models in bioinformatics.
