• Title/Summary/Keyword: Sequential processing


Approximate Dynamic Programming Based Interceptor Fire Control and Effectiveness Analysis for M-To-M Engagement (근사적 동적계획을 활용한 요격통제 및 동시교전 효과분석)

  • Lee, Changseok;Kim, Ju-Hyun;Choi, Bong Wan;Kim, Kyeongtaek
    • Journal of the Korean Society for Aeronautical & Space Sciences / v.50 no.4 / pp.287-295 / 2022
  • As the threat of low-altitude long-range artillery has grown, the development of an anti-artillery interception system to protect assets against such attacks is being launched. We view the defense against long-range artillery attacks as a typical dynamic weapon target assignment (DWTA) problem. DWTA is a sequential decision process in which decisions made under uncertain future attacks affect the subsequent decision processes and their results. These are typical characteristics of a Markov decision process (MDP), so we formulate the problem as an MDP model to examine the assignment policy for the defender. The proximity of the capital of South Korea to the North Korean border limits the computation time for a solution to a few seconds, within which it is impossible to compute the exact optimal solution. We therefore apply an approximate dynamic programming (ADP) approach and check whether it can solve the MDP model within the processing-time limit. We employ the Shoot-Shoot-Look policy as a baseline strategy and compare it with the ADP approach for three scenarios. Simulation results show that the ADP approach provides better solutions than the baseline strategy.
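The one-step lookahead idea behind ADP for this kind of assignment problem can be sketched in a few lines of Python. This is an illustrative toy, not the authors' implementation: the function names, the scalar `future_value` approximation of the value function, and the independent kill-probability model are all assumptions.

```python
import itertools

def expected_leakage(threat_values, p_kill, shots):
    """Expected surviving threat value when threat i receives shots[i]
    interceptors, each killing independently with probability p_kill."""
    return sum(v * (1 - p_kill) ** n for v, n in zip(threat_values, shots))

def adp_assign(threat_values, p_kill, inventory, future_value):
    """One-step lookahead: pick shot counts minimizing immediate expected
    leakage minus an approximate value of interceptors saved for later waves."""
    best, best_cost = None, float("inf")
    for shots in itertools.product(range(inventory + 1), repeat=len(threat_values)):
        used = sum(shots)
        if used > inventory:
            continue
        cost = (expected_leakage(threat_values, p_kill, shots)
                - future_value * (inventory - used))
        if cost < best_cost:
            best, best_cost = shots, cost
    return best, best_cost
```

With two threats of value 10 and 5, a 0.7 kill probability, three interceptors, and a per-saved-interceptor value of 0.5, the lookahead spends two shots on the high-value threat and one on the other.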

Korean Sentence Generation Using Phoneme-Level LSTM Language Model (한국어 음소 단위 LSTM 언어모델을 이용한 문장 생성)

  • Ahn, SungMahn;Chung, Yeojin;Lee, Jaejoon;Yang, Jiheon
    • Journal of Intelligence and Information Systems / v.23 no.2 / pp.71-88 / 2017
  • Language models were originally developed for speech recognition and language processing. Given a set of example sentences, a language model predicts the next word or character from sequential input data. N-gram models have been widely used, but they cannot model the correlation between input units efficiently since they are probabilistic models based on the frequency of each unit in the training set. Recently, as deep learning algorithms have developed, recurrent neural network (RNN) and long short-term memory (LSTM) models have been widely used for neural language modeling (Ahn, 2016; Kim et al., 2016; Lee et al., 2016). These models can reflect dependency between objects that are entered sequentially into the model (Gers and Schmidhuber, 2001; Mikolov et al., 2010; Sundermeyer et al., 2012). To train a neural language model, texts need to be decomposed into words or morphemes. However, since a training set of sentences generally includes a huge number of words or morphemes, the dictionary becomes very large, which increases model complexity. In addition, word-level or morpheme-level models can generate only vocabulary contained in the training set. Furthermore, for highly morphological languages such as Turkish, Hungarian, Russian, Finnish, or Korean, morpheme analyzers are more likely to introduce errors in the decomposition process (Lankinen et al., 2016). Therefore, this paper proposes a phoneme-level language model for Korean based on LSTM models. A phoneme, such as a vowel or a consonant, is the smallest unit that composes Korean text. We construct the language model using three or four LSTM layers. Each model was trained using the stochastic gradient algorithm as well as more advanced optimization algorithms such as Adagrad, RMSprop, Adadelta, Adam, Adamax, and Nadam. A simulation study was conducted with Old Testament texts using the deep learning package Keras based on Theano.
After pre-processing, the dataset contained 74 unique characters, including vowels, consonants, and punctuation marks. We then constructed input vectors of 20 consecutive characters, with the following 21st character as the output. In total, 1,023,411 input-output pairs were included in the dataset, which we divided into training, validation, and test sets in a 70:15:15 proportion. All simulations were conducted on a system equipped with an Intel Xeon CPU (16 cores) and an NVIDIA GeForce GTX 1080 GPU. We compared the loss evaluated on the validation set, the perplexity evaluated on the test set, and the training time of each model. As a result, all optimization algorithms except the stochastic gradient algorithm showed similar validation loss and perplexity, clearly superior to those of the stochastic gradient algorithm, which also took the longest to train for both the 3- and 4-LSTM models. On average, the 4-LSTM-layer model took 69% longer to train than the 3-LSTM-layer model, while its validation loss and perplexity were not improved significantly and even worsened under specific conditions. On the other hand, when comparing the automatically generated sentences, the 4-LSTM-layer model tended to generate sentences closer to natural language than the 3-LSTM model. Although there were slight differences in the completeness of the generated sentences between the models, sentence generation performance was quite satisfactory under all simulation conditions: the models generated only legitimate Korean letters, and the use of postpositions and the conjugation of verbs were almost perfect grammatically. The results of this study are expected to be widely used for Korean language processing and speech recognition, which are the basis of artificial intelligence systems.
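The windowing scheme described above (20-character inputs predicting the 21st character) is easy to reproduce. The sketch below, with hypothetical function names, builds the index-encoded dataset in plain Python, and adds the standard perplexity definition (exponential of the mean per-character negative log-likelihood) used for evaluation:

```python
import math

def make_windows(text, window=20):
    """Slide a fixed-size window over the text: each input is `window`
    consecutive characters, and the target is the character that follows."""
    vocab = sorted(set(text))                 # the paper's corpus had 74 symbols
    idx = {ch: i for i, ch in enumerate(vocab)}
    X, y = [], []
    for i in range(len(text) - window):
        X.append([idx[c] for c in text[i:i + window]])
        y.append(idx[text[i + window]])
    return X, y, vocab

def perplexity(neg_log_likelihoods):
    """Perplexity = exp(mean negative log-likelihood per character)."""
    return math.exp(sum(neg_log_likelihoods) / len(neg_log_likelihoods))
```

A model that assigns uniform probability over 74 characters has perplexity exactly 74, which is the natural baseline for this dataset.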

Sentiment Analysis of Movie Review Using Integrated CNN-LSTM Mode (CNN-LSTM 조합모델을 이용한 영화리뷰 감성분석)

  • Park, Ho-yeon;Kim, Kyoung-jae
    • Journal of Intelligence and Information Systems / v.25 no.4 / pp.141-154 / 2019
  • Internet technology and social media are growing rapidly, and data mining technology has evolved to enable unstructured document representation in a variety of applications. Sentiment analysis is an important technology that can distinguish poor from high-quality content through the text data of products, and it has proliferated within text mining. Sentiment analysis mainly analyzes people's opinions in text data by assigning predefined categories such as positive and negative. It has been studied in various directions, with respect to accuracy, from simple rule-based approaches to dictionary-based approaches using predefined labels; indeed, sentiment analysis is one of the most active research areas in natural language processing and is widely studied in text mining. In marketing, real-world customer opinions are gathered from websites rather than surveys: depending on whether a website's posts are positive or negative, the customer response is reflected in sales, so businesses try to identify this information. However, many reviews on a website are not always good and can be difficult to identify. Earlier studies in this research area used review data from the Amazon.com shopping mall, while recent studies use data on stock market trends, blogs, news articles, weather forecasts, IMDB, Facebook, etc. However, a lack of accuracy is recognized because sentiment calculations change according to the subject, paragraph, sentiment lexicon direction, and sentence strength. This study aims to classify the polarity of sentiment into positive and negative categories and to increase the prediction accuracy of polarity analysis using the pretrained IMDB review data set.
First, for the text classification algorithms related to sentiment analysis, we adopt popular machine learning algorithms such as NB (naive Bayes), SVM (support vector machines), XGBoost, RF (random forests), and gradient boosting as comparative models. Second, deep learning has demonstrated the ability to extract complex, discriminative features from data; representative algorithms are CNNs (convolutional neural networks), RNNs (recurrent neural networks), and LSTM (long short-term memory). A CNN can be used similarly to bag-of-words when processing a sentence in vector format, but it does not consider the sequential attributes of the data. An RNN handles ordered data well because it takes the temporal information of the data into account, but it suffers from the long-term dependency problem; LSTM is used to solve this problem. For comparison, CNN and LSTM were chosen as simple deep learning models, and in addition to the classical machine learning algorithms, CNN, LSTM, and the integrated model were analyzed. Although the algorithms have many parameters, we examined the relationship between parameter values and precision to find the optimal combination, and we tried to figure out how well the models work for sentiment analysis and how they work. This study proposes an integrated CNN-LSTM algorithm to extract the positive and negative features in text analysis. The reasons for combining these two algorithms are as follows. A CNN can automatically extract features for classification by applying convolution layers and massively parallel processing, whereas LSTM is not capable of highly parallel processing. Like faucets, the LSTM has input, output, and forget gates that can be opened and controlled at a desired time; these gates have the advantage of placing memory blocks on hidden nodes. The memory block of the LSTM may not store all the data, but it can solve the long-term dependency problem that the CNN cannot address.
Furthermore, when LSTM is used after the CNN's pooling layer, the model has an end-to-end structure, so that spatial and temporal features can be learned simultaneously. The combined CNN-LSTM model achieved 90.33% accuracy; it is slower than CNN but faster than LSTM, and it was more accurate than the other models. In addition, each word embedding layer can be improved by training the kernel step by step. CNN-LSTM can compensate for the weaknesses of each model, with the added advantage of improving layer-wise learning through the end-to-end structure. For these reasons, this study seeks to enhance the classification accuracy of movie reviews using the integrated CNN-LSTM model.
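As a rough illustration of the CNN front end whose pooled features would feed the LSTM, the pure-Python sketch below implements a valid 1-D convolution followed by non-overlapping max-pooling. This is only a shape-level toy under assumed function names; a real CNN-LSTM would use a deep learning framework with learned kernels.

```python
def conv1d(seq, kernel):
    """Valid 1-D convolution (cross-correlation, as in most DL libraries):
    output length is len(seq) - len(kernel) + 1."""
    k = len(kernel)
    return [sum(seq[i + j] * kernel[j] for j in range(k))
            for i in range(len(seq) - k + 1)]

def max_pool(seq, size=2):
    """Non-overlapping max-pooling; the pooled sequence is what the LSTM
    layer would consume in a CNN-LSTM stack."""
    return [max(seq[i:i + size]) for i in range(0, len(seq) - size + 1, size)]
```

Stacking the two halves the sequence length after each pooling step while keeping the temporal order intact, which is why the LSTM can still model sequence structure on the pooled features.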

Reproducibility of Adenosine Tc-99m sestaMIBI SPECT for the Diagnosis of Coronary Artery Disease (관동맥질환의 진단을 위한 아데노신 Tc-99m sestaMIBI SPECT의 재현성)

  • Lee, Duk-Young;Bae, Jin-Ho;Lee, Sang-Woo;Chun, Kyung-Ah;Yoo, Jeong-Soo;Ahn, Byeong-Cheol;Ha, Jeoung-Hee;Chae, Shung-Chull;Lee, Kyu-Bo;Lee, Jae-Tae
    • The Korean Journal of Nuclear Medicine / v.39 no.6 / pp.473-480 / 2005
  • Purpose: Adenosine myocardial perfusion SPECT has proven useful in the detection of coronary artery disease, in following up the success of various therapeutic regimens, and in assessing the prognosis of coronary artery disease. The purpose of this study was to define the reproducibility of myocardial perfusion SPECT with adenosine stress testing between two consecutive Tc-99m sestaMIBI (MIBI) SPECT studies in the same subjects. Methods: Thirty patients suspected of coronary artery disease, in stable condition, underwent sequential Tc-99m MIBI SPECT studies using intravenous adenosine. The gamma camera, acquisition, and processing protocols used for the two tests were identical, and no invasive procedures were performed between the two tests. The mean interval between tests was 4.1 days (range: 2-11 days). The left ventricular wall was divided into segments, and the degree of myocardial tracer uptake was graded on a four-point scoring system by visual analysis. Images were interpreted by two independent nuclear medicine physicians, and consensus was reached for the final decision when segmental scores disagreed. Results: Hemodynamic responses to adenosine did not differ between the two consecutive studies. There were no side effects serious enough to stop the infusion of adenosine, and the side-effect profiles did not differ. When myocardial uptake was divided into normal and abnormal, 481 of 540 segments were concordant (agreement rate 89%, kappa index 0.74). With the four-grade scoring system, exact agreement was 81.3% (439 of 540 segments, tau b = 0.73). One- and two-grade differences were observed in 97 segments (18%) and 4 segments (0.7%) respectively, but a three-grade difference was not observed in any segment. Extent and severity scores did not differ between the two studies.
The extent and severity scores of the perfusion defect showed excellent positive correlation between the two tests (r = 0.982 and 0.965 for percentage extent and severity scores, respectively; p<0.001). Conclusion: Hemodynamic responses and side-effect profiles did not differ between two consecutive adenosine stress tests in the same subjects. Adenosine Tc-99m sestaMIBI SPECT is highly reproducible and could be used to assess temporal changes in myocardial perfusion in individual patients.
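The kappa index reported above measures chance-corrected agreement between the two readings. A minimal sketch of Cohen's kappa computed from a cross-tabulation of segment scores (the function name is hypothetical, not the software used in the study):

```python
def cohens_kappa(table):
    """Cohen's kappa from a square contingency table:
    table[i][j] = number of segments scored i on test 1 and j on test 2."""
    n = sum(sum(row) for row in table)
    observed = sum(table[i][i] for i in range(len(table))) / n    # agreement rate
    expected = sum((sum(table[i]) / n) * (sum(row[i] for row in table) / n)
                   for i in range(len(table)))                    # chance agreement
    return (observed - expected) / (1 - expected)
```

For example, a table with 25 concordant counts on each diagonal cell and 5 in each off-diagonal cell gives 83% raw agreement but kappa of only 2/3, illustrating how kappa discounts agreement expected by chance.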

DIAGNOSTIC VALIDITY OF THE K-ABC AND THE K-LDES FOR CHILDREN WITH LEARNING DISORDER AND LEARNING PROBLEM (학습장애를 가진 아동에 대한 K-ABC와 K-LDES의 진단적 타당도)

  • Shin, Min-Sup;Cho, Soo-Churl;Kim, Boong-Nyun;Jeon, Sun-Young
    • Journal of the Korean Academy of Child and Adolescent Psychiatry / v.14 no.2 / pp.209-217 / 2003
  • Objective: This study examined the diagnostic validity of the K-ABC and the K-LDES for identifying the cognitive deficits and learning difficulties of children with learning disorder and for diagnosing learning disorder. Method: The clinical group consisted of 15 children with learning disorder or attention deficit hyperactivity disorder accompanied by a learning problem (LP) and 14 children with attention deficit hyperactivity disorder alone. They were diagnosed with either learning disorder or attention deficit hyperactivity disorder based on DSM-IV criteria by child psychiatrists and clinical psychologists at Seoul National University Children's Hospital. The normal group was composed of 15 elementary school children. All participants were between the ages of 7 and 12. The K-ABC was administered to the clinical and normal groups, and the K-LDES was administered to the mothers of all groups. Result: There were no significant differences among the three groups on the sequential, simultaneous, and mental processing subscales of the K-ABC. However, the LP group showed slightly lower scores on the Achievement scale and significantly lower scores on Reading/Decoding than the other groups. On the K-LDES, the LP group showed significantly lower scores on Listening, Thinking, Reading, Writing, Spelling, Mathematical calculation, and Learning quotient (LQ) than the other groups. There were also significant correlations between K-ABC and K-LDES subscales. Conclusion: The results of the present study show that the K-ABC and the K-LDES are valid and effective instruments for evaluating and diagnosing learning disorder.


Effect of Visual Perception by Vision Therapy for Improvement of Visual Function (시각기능 개선을 위한 시기능훈련이 시지각에 미치는 영향)

  • Lee, Seung Wook;Lee, Hyun Mee
    • Journal of Korean Ophthalmic Optics Society / v.20 no.4 / pp.491-499 / 2015
  • Purpose: This study examined how decline of visual function affects visual perception, by assessing visual perception after improving visual function through vision training and observing changes in visual perceptual cognitive ability. Methods: This study analyzed the visual perceptual evaluation (TVPS_R) of 23 children below age 13 (8.75±1.66 years) with visual abnormalities, and improved their visual function through vision training (vision therapy). Results: Convergence increased from an average of 3.39±2.52Δ (prism) to 13.87±6.04Δ at the far disparate point, and from an average of 5.48±3.42Δ to 18.43±7.58Δ at the near disparate point. The near point of diplopia improved from 25.87±7.33 cm to 7.48±2.87 cm, and for accommodative insufficiency, the near blur point improved from 19.57±7.16 cm to 7.09±1.88 cm. In the visual perceptual evaluation performed before and after improving visual function, six items, all except visual memory, showed statistically significant improvement. In order of improvement, the score gap was largest for visual closure at 17.74±16.94 (p=0.000), followed by visual sequential memory at 15.65±17.11 (p=0.000), visual figure-ground at 13.65±16.63 (p=0.001), visual form-constancy at 12.74±18.41 (p=0.003), visual discrimination at 6.48±10.07 (p=0.005), and visual spatial-relationship at 4.17±9.33 (p=0.043). The visual perception quotient, which sums these scores, showed a gap of 15.22±8.66 (p=0.000), an even more significant result. Conclusions: Vision training enables efficient visual processing and improves visual perceptual ability. Improvement of visual function through vision training not only corrects abnormal visual function but also affects children's visual perception, such as learning, perception, and recognition.

R-lambda Model based Rate Control for GOP Parallel Coding in A Real-Time HEVC Software Encoder (HEVC 실시간 소프트웨어 인코더에서 GOP 병렬 부호화를 지원하는 R-lambda 모델 기반의 율 제어 방법)

  • Kim, Dae-Eun;Chang, Yongjun;Kim, Munchurl;Lim, Woong;Kim, Hui Yong;Seok, Jin Wook
    • Journal of Broadcast Engineering / v.22 no.2 / pp.193-206 / 2017
  • In this paper, we propose a rate control method based on the R-λ model that supports a parallel encoding structure at the GOP level or IDR-period level for 4K UHD input video in real time. For this, a slice-level bit allocation method is proposed for parallel encoding instead of sequential encoding. When a rate control algorithm is applied with GOP-level or IDR-period-level parallelism, information on how many bits have been consumed cannot be shared among frames belonging to the same frame level, except at the lowest frame level of the hierarchical B structure. Therefore, it is impossible to manage the bit budget with the existing bit allocation method. To solve this problem, we improve on the conventional bit allocation procedures that assign target bits sequentially in encoding order: the proposed strategy first assigns target bits to GOPs, then distributes each GOP's assigned bits from the lowest depth level to the highest depth level of the HEVC hierarchical B structure. In addition, we propose a preprocessing method that improves subjective image quality by allocating bits according to the coding complexity of the frames. Experimental results show that the proposed bit allocation method works well for frame-level parallel HEVC software encoders, and confirm that the performance of our rate controller can be improved by a more elaborate bit allocation strategy using the preprocessing results.
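The two-stage allocation described above (bits to GOPs first, then across hierarchy depth levels within each GOP) can be sketched as follows. The proportional split and the fixed level weights are illustrative assumptions, not the paper's exact R-λ formulas:

```python
def allocate_bits(total_bits, gop_sizes, level_weights):
    """Stage 1: split the bit budget across GOPs in proportion to frame count.
    Stage 2: split each GOP's share across hierarchy depth levels by fixed
    weights (higher weight for lower depth levels, which other frames reference)."""
    total_frames = sum(gop_sizes)
    weight_sum = sum(level_weights)
    plan = []
    for size in gop_sizes:
        gop_bits = total_bits * size / total_frames
        plan.append([gop_bits * w / weight_sum for w in level_weights])
    return plan
```

Because each GOP's share depends only on the global budget and its own frame count, the per-GOP allocations can be computed independently, which is exactly what GOP-level parallel encoding needs.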

Comparison of internal and marginal fit of crown according to milling order in a single machinable wax disc (단일 절삭가공용 왁스 디스크 내에서 순차적 절삭가공 순서에 따른 크라운의 내면 및 변연 적합도 비교)

  • Song, Jun-Beom;Lee, Jonghyuk;Ha, Seung-Ryong;Choi, Yu-Sung
    • The Journal of Korean Academy of Prosthodontics / v.59 no.4 / pp.395-404 / 2021
  • Purpose. The purpose of the present study was to evaluate the effect of the changing structural stability of a wax disc on the fit of prostheses when milling proceeds sequentially. Materials and methods. A prepared maxillary left first molar was used to fabricate a Ni-Cr alloy reference model. This was scanned to design a crown, and the wax pattern was then milled, invested, and cast to fabricate the prosthesis. The wax patterns, located in a row centrally within a single wax disc, were divided into five groups, from group WM1 (milled first) to group WM5 (milled last), with 10 specimens per group. The silicone replica technique was used to measure the marginal gap, axial internal gap, line-angle internal gap, and occlusal internal gap. Data were evaluated with one-way ANOVA at a significance level of α = .05, followed by the Tukey HSD test for post hoc analysis. Results. The marginal gap was 40.41 ± 2.15 ㎛ in group WM1, 40.44 ± 2.23 ㎛ in group WM2, 39.96 ± 2.25 ㎛ in group WM3, 39.96 ± 2.48 ㎛ in group WM4, and 40.57 ± 2.53 ㎛ in group WM5, with no significant difference between groups. No significant differences between groups were found in the axial internal gap, line-angle internal gap, or occlusal internal gap either. Conclusion. The internal and marginal fit of a single crown did not appear to be affected by the sequential order of milling within a single machinable wax disc.

Nonlinear Vector Alignment Methodology for Mapping Domain-Specific Terminology into General Space (전문어의 범용 공간 매핑을 위한 비선형 벡터 정렬 방법론)

  • Kim, Junwoo;Yoon, Byungho;Kim, Namgyu
    • Journal of Intelligence and Information Systems / v.28 no.2 / pp.127-146 / 2022
  • Recently, as word embedding has shown excellent performance in various deep learning-based natural language processing tasks, research on the advancement and application of word, sentence, and document embedding is being actively conducted. Among these directions, cross-language transfer, which enables semantic exchange between different languages, is growing along with the development of embedding models. Academic interest in vector alignment is growing with the expectation that it can be applied to various embedding-based analyses. In particular, vector alignment is expected to be applied to mapping between specialized domains and generalized domains: it should become possible to map the vocabulary of specialized fields such as R&D, medicine, and law into the space of a pre-trained language model learned from a huge volume of general-purpose documents, or to provide a clue for mapping vocabulary between mutually different specialized fields. However, since the linear vector alignment that has mainly been studied in academia assumes statistical linearity, it tends to oversimplify the vector space. This essentially assumes that the two vector spaces are geometrically similar, a limitation that causes inevitable distortion in the alignment process. To overcome this limitation, we propose a deep learning-based vector alignment methodology that effectively learns the nonlinearity of the data. The proposed methodology consists of sequentially training a skip-connected autoencoder and a regression model to align the specialized word embeddings expressed in each space to the general embedding space. Finally, through inference with the two trained models, the specialized vocabulary can be aligned in the general space.
To verify the performance of the proposed methodology, an experiment was performed on a total of 77,578 documents in the field of 'health care' among national R&D tasks performed from 2011 to 2020. As a result, it was confirmed that the proposed methodology showed superior performance in terms of cosine similarity compared to the existing linear vector alignment.
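The evaluation criterion mentioned above, cosine similarity between aligned and target vectors, can be sketched in plain Python (the helper names are illustrative, not the authors' code):

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def mean_alignment_score(aligned, targets):
    """Average cosine similarity over corresponding aligned/target pairs;
    higher means the alignment maps specialized vectors closer to their
    general-space counterparts."""
    scores = [cosine(a, t) for a, t in zip(aligned, targets)]
    return sum(scores) / len(scores)
```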

CLINICAL AND NEUROPSYCHOLOGICAL CHARACTERISTICS OF DSM-IV SUBTYPES OF ATTENTION DEFICIT HYPERACTIVITY DISORDER (주의력결핍 과잉행동장애의 아형별 신경심리학적 특성 비교)

  • Cheung, Seung-Deuk;Lee, Jong-Bum;Kim, Jin-Sung;Seo, Wan-Seok;Bai, Dai-Seg;Chun, Eun-Jin;Suh, Hae-Sook
    • Journal of the Korean Academy of Child and Adolescent Psychiatry / v.13 no.1 / pp.139-152 / 2002
  • Objectives: This study compared the clinical and neuropsychological characteristics of the DSM-IV subtypes of attention deficit hyperactivity disorder (ADHD) in patients without comorbid psychiatric disorders. Methods: Children aged 5-15 with ADHD were recruited at the psychiatric outpatient clinic of Yeungnam University Hospital; patients with comorbidity or neurological abnormalities were excluded. In total, 404 children with ADHD were selected for this study. There were 234 subjects with ADHD-C (57.9%), 156 with ADHD-I (38.6%), and 14 with ADHD-HI (3.5%), all fulfilling the DSM-IV diagnostic criteria. The mean age of the subjects was 9.63±2.49 years. Psychopathology, IQ, behavioral problems, and neuropsychological executive function were evaluated before pharmacological treatment. The measures were the Korean Personality Inventory for Children (K-PIC) for psychopathology, four behavioral checklists (ADDES-HV, ACTeRS, CAP, SNAP) for behavioral symptoms of ADHD, the K-ABC and KEDI-WISC for IQ, and Conners' CPT, the WCST, and the SST for neuropsychological executive functions. Results: 1) The prevalence of subtypes was, in decreasing order, ADHD-C, ADHD-I, and ADHD-HI, with no sex difference in prevalence among the three subtypes. The mean age of the ADHD-I group was higher than that of the other subtypes. 2) There were significant differences in psychopathology among subtypes: the ADHD-C and ADHD-HI groups scored higher than the ADHD-I group on delinquency, hyperactivity, and psychosis; the ADHD-C group scored higher than the ADHD-I group on family relations and autism, while its ego-resilience scores were lower. However, there was no difference among them in anxiety, depression, or somatization scores. 3) On the behavioral symptom checklists, the ADHD-C group scored higher on inattention, hyperactivity, and impulsivity than the ADHD-I group; meanwhile, the results of the ACTeRS, rated by teachers, differed.
4) In IQ measured with the K-ABC, there were significant differences among subtypes on the sequential processing scale and arithmetic, but no significant difference between ADHD-C and ADHD-I after excluding ADHD-HI due to its small numbers. 5) There were numerical differences among subtypes on the three neuropsychological executive function tests, but these did not reach statistical significance. Conclusion: Our results revealed significant differences in clinical features among the three subtypes, but no significant difference in executive functions.
