• Title/Summary/Keyword: Learning data set

Search Results: 1,114

A study on end-to-end speaker diarization system using single-label classification (단일 레이블 분류를 이용한 종단 간 화자 분할 시스템 성능 향상에 관한 연구)

  • Jaehee Jung; Wooil Kim
    • The Journal of the Acoustical Society of Korea / v.42 no.6 / pp.536-543 / 2023
  • Speaker diarization, which labels "who spoke when?" in speech with multiple speakers, has been studied with deep neural network-based end-to-end methods for labeling overlapped speech and optimizing diarization models. Most deep neural network-based end-to-end speaker diarization systems treat the task as a multi-label classification problem that predicts the labels of all speakers speaking in each frame. However, the performance of multi-label-based models varies greatly depending on the threshold setting. In this paper, a speaker diarization system using single-label classification is studied so that diarization can be performed without a threshold. The proposed model estimates labels from the model output by converting the speaker labels into a single label. To account for speaker label permutations during training, the proposed model uses a combination of Permutation Invariant Training (PIT) loss and cross-entropy loss. In addition, adding residual connections to the model is studied for effective training of speaker diarization models with deep structures. The experiments used simulated noisy two-speaker data generated from the LibriSpeech database. Compared with the baseline model in terms of Diarization Error Rate (DER), the proposed method performs labeling without a threshold and improves performance by about 20.7 %.
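
The following is a minimal sketch (not the authors' implementation) of the loss described above: per-frame two-speaker multi-labels are converted into a single power-set label, and cross-entropy is combined with Permutation Invariant Training by taking the best speaker permutation. Tensor shapes and the 4-class layout are assumptions.

```python
# Minimal sketch: PIT + cross-entropy over power-set ("single") labels
# for two-speaker end-to-end diarization.
import itertools
import torch
import torch.nn.functional as F

def to_powerset_labels(multi_label):
    """Map per-frame multi-labels [spk1, spk2] in {0,1} to a single class id:
    0 = silence, 1 = spk1 only, 2 = spk2 only, 3 = overlap."""
    return (multi_label[..., 0] + 2 * multi_label[..., 1]).long()

def pit_cross_entropy(logits, multi_label):
    """logits: (batch, frames, 4) single-label scores; multi_label: (batch, frames, 2)."""
    losses = []
    for perm in itertools.permutations(range(2)):        # every speaker ordering
        permuted = multi_label[..., list(perm)]           # relabel the speakers
        target = to_powerset_labels(permuted)              # -> one class per frame
        ce = F.cross_entropy(logits.transpose(1, 2), target, reduction="none")
        losses.append(ce.mean(dim=1))                      # per-utterance loss
    # PIT: keep the best permutation for every utterance in the batch
    return torch.stack(losses, dim=0).min(dim=0).values.mean()

# usage with dummy tensors
logits = torch.randn(8, 500, 4)                   # model output, 4 power-set classes
labels = torch.randint(0, 2, (8, 500, 2)).float()
loss = pit_cross_entropy(logits, labels)
```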

Influence of Video Clip-based Pedagogical Reasoning Activity on Elementary Preservice Teachers' Science Lesson Planning (비디오 클립을 활용한 교육적 추론 활동이 초등 예비교사의 과학 수업 계획에 미치는 영향)

  • Song, Nayoon; Yoon, Hye-Gyoung
    • Journal of Korean Elementary Science Education / v.43 no.1 / pp.170-184 / 2024
  • This study focused on the practical research needed to improve elementary school science lesson plans. Specifically, a video clip-based pedagogical reasoning activity that included elementary student misconceptions was presented, and the influences of this activity on preservice teachers' science lesson planning were assessed. First, the eight preservice teacher participants were asked to write a lesson plan for a dissolution and solution unit, after which a first semi-structured interview was conducted. Then, the participants took part in a video clip-based pedagogical reasoning activity. Based on the activity results, the participants revised their previously planned lessons, and second semi-structured interviews were conducted. The data from the preservice teachers' lesson plans and interview transcripts were analyzed using a constant comparative method to investigate the lesson plan changes. It was found that after the video clip-based pedagogical reasoning activity, the preservice teachers tightened the activities or changed the materials to understand the students' thinking processes. In addition, they supplemented their goals and assessment criteria to accommodate diverse student thinking. Some also specified motivational strategies that considered student interests, motivation, and possible misconceptions. However, some preservice teachers still set goals that did not sufficiently account for student misconceptions, and some planned the student assessments based only on the learning goals rather than the students' thinking. Few preservice teachers were able to develop motivational strategies that considered interest, motivation, and misconceptions. The preservice teachers stated that they had difficulty predicting the misconceptions and connecting them to the lesson content. Discussions were then held to help the preservice teachers consider possible student misconceptions when planning their lessons.

Preservice Elementary Teachers' Attitudes toward Science and Process Skills (초등 예비교사들의 과학에 대한 태도와 탐구 능력)

  • Lim, Choeng-Hwan; Lee, Sung-Ho
    • Journal of The Korean Association For Science Education / v.28 no.2 / pp.180-185 / 2008
  • The purpose of this study is to examine the characteristics of, and relationship between, preservice elementary teachers' attitudes toward science and their science process skills. Two instruments were used to collect the data: SAS (Science Attitude Scales) for measuring attitude toward science and TIPS II (Test of Integrated Process Skills II) for measuring science process skills. Three main results emerged. First, preservice elementary teachers' attitudes toward science and science process skills showed no significant differences by gender. This result differs from preceding research that examined elementary, middle, and high school students. Second, the characteristics of preservice elementary teachers' attitudes toward science and science process skills according to their high school course track also differed from those of preceding studies of students. The preservice elementary teachers who had taken the literary track in high school were more confident in science learning and performance than those with a science-track background. In addition, although the preservice teachers with a science-track background showed better abilities in two sub-scales of science process skills, they did not show better science process skills than those from the literary track in the total score of the science process skill test. Third, there was a significant relationship between attitude toward science and science process skills among preservice elementary teachers, but only one sub-scale was related to science process skills. According to these results, the preceding findings based on school students cannot be applied directly to preservice elementary teachers, and preservice elementary teachers should be guided by methods that consider their particular characteristics.

A Preliminary Study on Competency Extraction for Fashion Design and Merchandising Majors (패션디자인 및 머천다이징 전공의 역량 추출에 대한 기초 연구)

  • Lee, Hana; Lee, Yhe-Young
    • Journal of Korean Home Economics Education Association / v.36 no.2 / pp.101-117 / 2024
  • The aim of this study is to identify the competencies required for fashion-related majors that meet contemporary demands, align with the objectives of university education, and reflect the qualities desired in graduates. To achieve this goal, we conducted a content analysis of relevant data and in-depth interviews with experts. First, the content analysis involved coding key information from the introductions, educational goals, desired qualities of graduates, and curricula published on the websites of both South Korean and international fashion-related universities. Additionally, we analyzed the National Competency Standards (NCS) and the meta-goals of higher education programs set by the International Textile and Apparel Association (ITAA), extracting six core competencies. Second, in-depth interviews were conducted with six experts, each with 23 to 31 years of experience in the Korean and international apparel industries and academia. The interviews were recorded and transcribed, and keywords were extracted. To ensure the validity of the coding results, cross-checks were performed among the researchers. The analysis identified the following competencies: empathic communication, social responsibility, professional thinking, creative and integrative thinking, global perspective, and challenging leadership. Based on these findings, establishing competencies that meet contemporary demands and developing corresponding curricula are essential steps towards creating a feedback system. Future research should focus on developing and implementing curricula that foster a virtuous cycle, ultimately enhancing students' competency levels.

Target-Aspect-Sentiment Joint Detection with CNN Auxiliary Loss for Aspect-Based Sentiment Analysis (CNN 보조 손실을 이용한 차원 기반 감성 분석)

  • Jeon, Min Jin; Hwang, Ji Won; Kim, Jong Woo
    • Journal of Intelligence and Information Systems / v.27 no.4 / pp.1-22 / 2021
  • Aspect-Based Sentiment Analysis (ABSA), which analyzes sentiment based on the aspects that appear in a text, is drawing attention because it can be used in various business industries. ABSA analyzes sentiment by aspect for the multiple aspects a text contains, and it is studied in various forms depending on the purpose, such as analyzing all targets or only aspects and sentiments. Here, an aspect refers to a property of a target, and a target refers to the text span that causes the sentiment. For example, for restaurant reviews, the aspects could be food taste, food price, quality of service, mood of the restaurant, etc. Also, if there is a review that says, "The pasta was delicious, but the salad was not," the words "pasta" and "salad," which are directly mentioned in the sentence, become the targets. So far, most ABSA studies have analyzed sentiment based only on aspects or targets. However, even with the same aspects or targets, sentiment analysis may be inaccurate, for instance when aspects or sentiments are divided or when sentiment exists without a target. For example, in a sentence like "Pizza and the salad were good, but the steak was disappointing," the aspect is limited to "food," yet conflicting sentiments coexist. In sentences such as "Shrimp was delicious, but the price was extravagant," the target is "shrimp," but opposite sentiments coexist depending on the aspect. Finally, in sentences like "The food arrived too late and is cold now," there is no target (NULL), but the sentence conveys a negative sentiment toward the aspect "service." Failing to consider both aspects and targets in these cases - when sentiment or aspect is divided or when sentiment exists without a target - creates a dual dependency problem. To address this problem, this research analyzes sentiment by considering both aspects and targets (Target-Aspect-Sentiment Detection, hereafter TASD). This study identified limitations of existing TASD research: local contexts are not fully captured, and the F1-score drops dramatically depending on the number of epochs and the batch size. The existing model excels at capturing the overall context and the relations between words, but it struggles with phrases in the local context and is relatively slow to train. Therefore, this study tries to improve the model's performance. To achieve this objective, we added an auxiliary loss for aspect-sentiment classification by constructing CNN (Convolutional Neural Network) layers parallel to the existing model. Whereas the existing model analyzes aspect-sentiment through BERT encoding, a pooler, and linear layers, this research adds CNN layers with adaptive average pooling to the existing model, and training proceeds by adding an additional aspect-sentiment loss to the existing loss. In other words, during training, the auxiliary loss computed through the CNN layers allows the local context to be captured more accurately. After training, the model performs aspect-sentiment analysis through the existing method. To evaluate the performance of this model, two datasets, SemEval-2015 Task 12 and SemEval-2016 Task 5, were used, and the F1-score increased compared to the existing models. When the batch size was 8 and the number of epochs was 5, the gap was largest: the F1-scores of the existing model and of this study were 29 and 45, respectively. Even when the batch size and number of epochs were adjusted, the F1-scores remained higher than those of the existing models. In other words, even with small batch sizes and few epochs, the model can be trained effectively compared to the existing models, so it can be useful in situations where resources are limited. Through this study, aspect-based sentiments can be analyzed more accurately. Through various uses in business, such as development or establishing marketing strategies, both consumers and sellers will be able to make efficient decisions. In addition, it is believed that the model can be fully trained and utilized by small businesses that do not have much data, given that it uses a pre-trained model and recorded a relatively high F1-score even with limited resources.
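
Below is a hedged sketch of the auxiliary-loss idea the abstract describes: a CNN layer with adaptive average pooling runs in parallel to the existing pooler/linear head, and its aspect-sentiment cross-entropy is added to the main loss. The layer sizes, auxiliary weight, and head interface are assumptions, not the paper's exact architecture.

```python
# Minimal sketch: CNN auxiliary head in parallel to a BERT-style pooler head.
import torch
import torch.nn as nn
import torch.nn.functional as F

class AspectSentimentHeads(nn.Module):
    def __init__(self, hidden=768, n_classes=4, kernel=3):
        super().__init__()
        self.main_head = nn.Linear(hidden, n_classes)             # existing pooler -> linear path
        self.conv = nn.Conv1d(hidden, hidden, kernel, padding=kernel // 2)
        self.aux_head = nn.Linear(hidden, n_classes)              # CNN -> adaptive avg pool path

    def forward(self, token_states, pooled):
        # token_states: (batch, seq_len, hidden) from the encoder
        # pooled:       (batch, hidden), e.g. the [CLS] pooler output
        main_logits = self.main_head(pooled)
        local = F.relu(self.conv(token_states.transpose(1, 2)))   # capture local n-gram context
        local = F.adaptive_avg_pool1d(local, 1).squeeze(-1)       # (batch, hidden)
        aux_logits = self.aux_head(local)
        return main_logits, aux_logits

def total_loss(main_logits, aux_logits, target, aux_weight=0.5):
    # main loss as in the base model, plus the CNN auxiliary loss (aux_weight is an assumption)
    return F.cross_entropy(main_logits, target) + aux_weight * F.cross_entropy(aux_logits, target)
```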

Abnormal Water Temperature Prediction Model Near the Korean Peninsula Using LSTM (LSTM을 이용한 한반도 근해 이상수온 예측모델)

  • Choi, Hey Min; Kim, Min-Kyu; Yang, Hyun
    • Korean Journal of Remote Sensing / v.38 no.3 / pp.265-282 / 2022
  • Sea surface temperature (SST) is a factor that greatly influences ocean circulation and ecosystems in the Earth system. As global warming changes the SST near the Korean Peninsula, abnormal water temperature phenomena (high water temperature, low water temperature) occur, causing continuous damage to the marine ecosystem and the fishery industry. Therefore, this study proposes a methodology to predict the SST near the Korean Peninsula and to prevent damage by predicting abnormal water temperature phenomena. The study area was set near the Korean Peninsula, and ERA5 data from the European Centre for Medium-Range Weather Forecasts (ECMWF) were used to obtain SST data for the same period. As the research method, the Long Short-Term Memory (LSTM) algorithm, a deep learning model specialized for time-series prediction, was used in consideration of the time-series characteristics of SST data. The prediction model predicts the SST near the Korean Peninsula 1 to 7 days ahead and predicts high or low water temperature phenomena. To evaluate the accuracy of SST prediction, the coefficient of determination (R2), Root Mean Squared Error (RMSE), and Mean Absolute Percentage Error (MAPE) were used. The summer (JAS) 1-day prediction results of the model were R2=0.996, RMSE=0.119℃, and MAPE=0.352%, and the winter (JFM) 1-day prediction results were R2=0.999, RMSE=0.063℃, and MAPE=0.646%. Using the predicted SST, the accuracy of abnormal sea surface temperature prediction was evaluated with the F1 score (F1 score=0.98 for high water temperature prediction in summer (2021/08/05), F1 score=1.0 for low water temperature prediction in winter (2021/02/19)). As the prediction period increased, the model tended to underestimate the SST, which also reduced the accuracy of the abnormal water temperature prediction. Therefore, it is judged necessary to analyze the cause of the model's underestimation and to study how to improve the prediction accuracy in future work.
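
A minimal sketch of this kind of LSTM forecaster follows; the window length, layer sizes, and abnormal-temperature thresholds are illustrative assumptions rather than the study's configuration.

```python
# Minimal sketch: LSTM that forecasts SST 1-7 days ahead from a past window,
# then flags abnormal temperatures against given thresholds.
import numpy as np
import torch
import torch.nn as nn

class SSTForecaster(nn.Module):
    def __init__(self, n_features=1, hidden=64, horizon=7):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, num_layers=2, batch_first=True)
        self.head = nn.Linear(hidden, horizon)       # 1- to 7-day-ahead SST

    def forward(self, x):                             # x: (batch, past_days, n_features)
        out, _ = self.lstm(x)
        return self.head(out[:, -1])                  # use the last time step's state

def flag_abnormal(pred_sst, high_thresh, low_thresh):
    """Label each predicted day as high temperature (+1), low (-1), or normal (0)."""
    return np.where(pred_sst >= high_thresh, 1, np.where(pred_sst <= low_thresh, -1, 0))

# usage with dummy data: 3 grid points, 30 past days of SST
model = SSTForecaster()
past = torch.randn(3, 30, 1)
pred = model(past).detach().numpy()
print(flag_abnormal(pred, high_thresh=28.0, low_thresh=4.0))
```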

Research Framework for International Franchising (국제프랜차이징 연구요소 및 연구방향)

  • Kim, Ju-Young; Lim, Young-Kyun; Shim, Jae-Duck
    • Journal of Global Scholars of Marketing Science / v.18 no.4 / pp.61-118 / 2008
  • The purpose of this research is to construct a research framework for international franchising based on the existing literature and to identify the research components in that framework. A franchise can be defined as a management style that allows the franchisee to use various management assets of the franchisor in order to make or sell a product or service. It can be divided into product distribution franchising, which is designed to sell products, and business format franchising, which is designed for running the operation as a business, whatever its form. International franchising can be defined as a way for a franchisor to internationalize to a foreign country by providing its business format or package to a franchisee in the host country. International franchising has grown fast over the last four decades, but academic research on it is quite limited. Especially in Korea, research on international franchising has been carried out either as single case studies or as empirical survey studies based on domestic franchise theory. Therefore, this paper reviews the existing literature on international franchising, provides a research framework, and aims to stimulate new research in this field. The components of international franchising research include motives and environmental factors for the decision to expand through international franchising, entrance modes and development plans, contracts and management strategy, and various performance measures from different perspectives. First, one motive of international franchising is fee collection from franchisees; it also provides an easier way to expand into a foreign country. Other motives include increasing total sales volume, occupying a better strategic position, acquiring quality resources, and improving efficiency. Environmental factors that facilitate international franchising encompass economic conditions, trends, and legal or political factors in the host and/or home countries. In addition, the control power and risk management capability of the franchisor play a critical role in a successful franchising contract. The final decision to enter a foreign country via franchising is determined by numerous factors such as the history, size, growth, competitiveness, management system, bonding capability, and industry characteristics of the franchisor. After deciding to enter a foreign country, the franchisor needs to choose the entrance mode of international franchising. Within the contractual mode, there are master franchising, area development franchising, licensing, direct franchising, and joint ventures. Theories about entrance mode selection involve concepts of efficiency, the knowledge-based approach, the competence-based approach, agency theory, and governance cost. The next step after the entrance decision is operation strategy. Operation strategy starts with selecting a target country and a target city for franchising. In order to find and screen targets, the franchisor needs to collect information about candidates. Critical information includes brand patents, commercial laws, regulations, market conditions, country risk, and industry analysis. After selecting a target city in the target country, the franchisor needs to select a franchisee, in other words, a partner. The first important criteria for selecting partners are financial credibility and capability and possession of real estate. Cultural similarity and knowledge about the franchisor and/or home country are also recognized as critical criteria.
The most important element in operating strategy is the legal document between franchisor and franchisee under home and host country law. The terms and conditions in legal documents give objective information about the characteristics of the franchising agreement for academic research. Legal documents contain definitions of terminology, territory and exclusivity, term of agreement, initial fee, continuing fees, clearing currency, and rights regarding sub-franchising. Legal documents may also include terms about softer elements, such as the training program and operation manual, and harder elements, such as the competent court and terms of expiration. The next element in operating strategy concerns the product and service. Especially for business format franchising, the product/service deliverable, benefit communicators, system identifiers (architectural features), and format facilitators are listed as product/service strategic elements. Another important decision on product/service is standardization versus customization. The rationale behind standardization is cost reduction, efficiency, consistency, image congruence, brand awareness, and price competitiveness. Standardization also enables large-scale R&D and innovative change in management style. Another element in operating strategy is control management. The simplest way to control a franchise contract is to rely on legal terms, i.e., the contractual control system. There are other control systems as well: the administrative control system and the ethical control system. The contractual control system is a coercive source of power, but the franchisor usually does not want to use legal power since it does not help to build a positive relationship; instead, self-regulation is widely used. The administrative control system uses control mechanisms from the ordinary working relationship. Its main components are supporting activities for the franchisee and communication methods. For example, the franchisor provides advertising, training, manuals, and delivery, and the franchisee follows the franchisor's direction. Another component is building the franchisor's brand power. The last research element is the performance of international franchising. Performance elements can be divided into the franchisor's performance and the franchisee's performance. The conceptual performance measures of the franchisor are simple but not easy to obtain objectively: profit, sales, cost, experience, and brand power. The performance measures of the franchisee are mostly about benefits to the host country. They include small business development, promotion of employment, introduction of new business models, and upgrading of technology status. There are also indirect benefits, such as increased tax revenue, refinement of corporate citizenship, regional economic clustering, and improvement of the international balance of payments. In addition, the host country experiences socio-cultural change beyond the economic effects, including demographic change, social trends, changes in customer values, social communication, and social globalization; this is sometimes called the westernization or McDonaldization of society. The paper also reviews theories that have frequently been applied to international franchising research, such as agency theory, the resource-based view, transaction cost theory, organizational learning theory, and international expansion theories. Resource-based theory is used for strategic decisions based on resources, such as decisions about entrance and cooperation depending on the resources of the franchisee and franchisor.
Transaction cost theory can be applied to the determination of mutual trust or satisfaction among franchising players. Agency theory tries to explain strategic decisions for reducing problems caused by utilizing agents, for example research on control systems in franchising agreements. Organizational learning theory is relatively new in franchising research; it assumes the organization tries to maximize its performance and learning. In addition, internalization theory advocates the strategic decision of direct investment to remove the inefficiency of market transactions and is applied in research on contract terms, while oligopolistic competition theory is used to explain the various entry modes for international expansion. Competency theory supports strategic decisions that utilize key competitive advantages. Furthermore, research methodologies, both qualitative and quantitative, are suggested for more rigorous international franchising research. Quantitative research needs more real data beyond survey data, which usually reflects respondents' judgment; to verify theory more rigorously, research based on real data is essential, although such data are quite hard to obtain. Qualitative research other than single case studies is also highly recommended. Since international franchising has a limited number of applications, scientific research based on grounded theory and ethnographic study can be used. A scientific case study is differentiated from a single case study by its data collection and analysis methods; the key concepts are triangulation in measurement, logical coding, and comparison. Finally, the paper provides an overall research direction for international franchising after summarizing research trends in Korea. International franchising research in Korea is of two types: studies of Korean franchisors going overseas and studies of Korean franchisees of foreign franchisors. Among research on Korean franchisors, two common patterns are observed: they usually deal with the success story of one franchisor, and they focus on the same industry and country. Therefore, international franchising research needs to extend its focus to broader subjects with scientific research methodology as well as the development of new theory.


Simultaneous Optimization of a KNN Ensemble Model for Bankruptcy Prediction (부도예측을 위한 KNN 앙상블 모형의 동시 최적화)

  • Min, Sung-Hwan
    • Journal of Intelligence and Information Systems / v.22 no.1 / pp.139-157 / 2016
  • Bankruptcy involves considerable costs, so it can have significant effects on a country's economy. Thus, bankruptcy prediction is an important issue. Over the past several decades, many researchers have addressed topics associated with bankruptcy prediction. Early research on bankruptcy prediction employed conventional statistical methods such as univariate analysis, discriminant analysis, multiple regression, and logistic regression. Later on, many studies began utilizing artificial intelligence techniques such as inductive learning, neural networks, and case-based reasoning. Currently, ensemble models are being utilized to enhance the accuracy of bankruptcy prediction. Ensemble classification involves combining multiple classifiers to obtain more accurate predictions than those obtained using individual models. Ensemble learning techniques are known to be very useful for improving the generalization ability of the classifier. Base classifiers in the ensemble must be as accurate and diverse as possible in order to enhance the generalization ability of an ensemble model. Commonly used methods for constructing ensemble classifiers include bagging, boosting, and random subspace. The random subspace method selects a random feature subset for each classifier from the original feature space to diversify the base classifiers of an ensemble. Each ensemble member is trained on a randomly chosen feature subspace from the original feature set, and predictions from the ensemble members are combined by an aggregation method. The k-nearest neighbors (KNN) classifier is robust with respect to variations in the dataset but is very sensitive to changes in the feature space. For this reason, KNN is a good classifier for the random subspace method. The KNN random subspace ensemble model has been shown to be very effective for improving on an individual KNN model. The k parameter of the KNN base classifiers and the feature subsets selected for the base classifiers play an important role in determining the performance of the KNN ensemble model. However, few studies have focused on optimizing the k parameter and feature subsets of base classifiers in the ensemble. This study proposed a new ensemble method that improves the performance of the KNN ensemble model by optimizing both the k parameters and the feature subsets of base classifiers. A genetic algorithm was used to optimize the KNN ensemble model and improve the prediction accuracy of the ensemble model. The proposed model was applied to a bankruptcy prediction problem using a real dataset from Korean companies. The research data included 1800 externally non-audited firms that filed for bankruptcy (900 cases) or non-bankruptcy (900 cases). Initially, the dataset consisted of 134 financial ratios. Prior to the experiments, 75 financial ratios were selected based on an independent-sample t-test of each financial ratio as an input variable and bankruptcy or non-bankruptcy as an output variable. Of these, 24 financial ratios were selected using a logistic regression backward feature selection method. The complete dataset was separated into two parts: training and validation. The training dataset was further divided into two portions: one for training the model and the other to avoid overfitting. The prediction accuracy against the latter portion was used to determine the fitness value in order to avoid overfitting. The validation dataset was used to evaluate the effectiveness of the final model.
A 10-fold cross-validation was implemented to compare the performances of the proposed model and other models. To evaluate the effectiveness of the proposed model, the classification accuracy of the proposed model was compared with that of other models. The Q-statistic values and average classification accuracies of base classifiers were investigated. The experimental results showed that the proposed model outperformed other models, such as the single model and random subspace ensemble model.
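
The sketch below illustrates the optimization idea in the abstract: a simple genetic algorithm that jointly encodes each base KNN classifier's k value and feature subset and scores fitness on an inner hold-out split. The GA operators, population size, and parameter ranges are assumptions, not the paper's exact setup.

```python
# Minimal sketch: GA search over (k, feature subset) for each KNN base
# classifier in a random-subspace-style ensemble with majority voting.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
N_BASE, K_CHOICES = 10, [1, 3, 5, 7, 9]          # ensemble size / candidate k values (assumptions)

def random_individual(n_features):
    return [(rng.choice(K_CHOICES), rng.random(n_features) < 0.5) for _ in range(N_BASE)]

def ensemble_accuracy(ind, X_tr, y_tr, X_ho, y_ho):
    # assumes integer class labels 0..C-1 (e.g. 0 = non-bankrupt, 1 = bankrupt)
    votes = np.zeros((len(X_ho), len(np.unique(y_tr))))
    for k, mask in ind:
        if mask.sum() == 0:                       # guard against an empty feature subset
            continue
        clf = KNeighborsClassifier(n_neighbors=int(k)).fit(X_tr[:, mask], y_tr)
        for i, p in enumerate(clf.predict(X_ho[:, mask])):
            votes[i, p] += 1                       # majority voting aggregation
    return (votes.argmax(1) == y_ho).mean()

def evolve(X, y, generations=20, pop_size=30):
    # inner hold-out split: fitness measured on data not used to fit the base classifiers
    X_tr, X_ho, y_tr, y_ho = train_test_split(X, y, test_size=0.3, random_state=0)
    pop = [random_individual(X.shape[1]) for _ in range(pop_size)]
    for _ in range(generations):
        scores = [ensemble_accuracy(ind, X_tr, y_tr, X_ho, y_ho) for ind in pop]
        order = np.argsort(scores)[::-1]
        parents = [pop[i] for i in order[: pop_size // 2]]      # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.choice(len(parents), 2, replace=False)
            cut = rng.integers(1, N_BASE)
            child = parents[a][:cut] + parents[b][cut:]         # one-point crossover
            if rng.random() < 0.2:                              # mutation: re-draw one member
                j = rng.integers(N_BASE)
                child[j] = (rng.choice(K_CHOICES), rng.random(X.shape[1]) < 0.5)
            children.append(child)
        pop = parents + children
    return max(pop, key=lambda ind: ensemble_accuracy(ind, X_tr, y_tr, X_ho, y_ho))
```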

A hybrid algorithm for the synthesis of computer-generated holograms

  • Nguyen The Anh; An Jun Won; Choe Jae Gwang; Kim Nam
    • Proceedings of the Optical Society of Korea Conference / 2003.07a / pp.60-61 / 2003
  • A new approach to reducing the computation time of the genetic algorithm (GA) for making binary phase holograms is described. Synthesized holograms with a diffraction efficiency of 75.8% and a uniformity of 5.8% are proven in computer simulation and demonstrated experimentally. Recently, computer-generated holograms (CGHs) with high diffraction efficiency and design flexibility have been widely developed for many applications such as optical information processing, optical computing, and optical interconnection. Among the proposed optimization methods, the GA has become popular due to its capability of reaching nearly global optima. However, there is a drawback to consider when using the genetic algorithm: the large amount of computation time needed to construct the desired holograms. One of the major reasons the GA's operation can be time-intensive is the expense of computing the cost function, which must Fourier transform the parameters encoded on the hologram into the fitness value. To remedy this drawback, the Artificial Neural Network (ANN) has been put forward, allowing CGHs to be created easily and quickly [1], but the quality of the reconstructed images is not high enough for applications requiring high precision. Therefore, we attempt to find a new approach that combines the good properties and performance of both the GA and the ANN to make CGHs of high diffraction efficiency in a short time. The optimization of a CGH using the genetic algorithm is an iterative process including selection, crossover, and mutation operators [2]. It is worth noting that the evaluation of the cost function, with the aim of selecting better holograms, plays an important role in the implementation of the GA. However, this evaluation process wastes much time Fourier transforming the encoded parameters on the hologram into the value to be solved; depending on the speed of the computer, this process can last up to ten minutes. It is more effective if, instead of merely generating random holograms in the initial step, a set of approximately desired holograms is employed. By doing so, the initial population contains fewer trial holograms, which is equivalent to reducing the GA's computation time. Accordingly, a hybrid algorithm that utilizes a trained neural network to initiate the GA's procedure is proposed; consequently, the initial population contains fewer random holograms and is supplemented with approximately desired holograms. Figure 1 is the flowchart of the hybrid algorithm in comparison with the classical GA. The procedure of synthesizing a hologram on a computer is divided into two steps. First, the simulation of holograms based on the ANN method [1] is carried out to acquire approximately desired holograms. With a teaching data set of 9 characters obtained from the classical GA, 3 layers, 100 hidden nodes, a learning rate of 0.3, and a momentum of 0.5, the trained artificial neural network enables us to attain approximately desired holograms, which are in fairly good agreement with what the theory suggests. In the second step, the effect of several parameters on the operation of the hybrid algorithm is investigated. In principle, the operation of the hybrid algorithm and the GA are the same except for the modification of the initial step. Hence, the verified results in Ref. [2] for parameters such as the probability of crossover and mutation, the tournament size, and the crossover block size remain unchanged, aside from the reduced population size.
A reconstructed image with 76.4% diffraction efficiency and 5.4% uniformity is achieved when the population size is 30, the number of iterations is 2000, the probability of crossover is 0.75, and the probability of mutation is 0.001. A comparison between the hybrid algorithm and the GA in terms of diffraction efficiency and computation time is also evaluated, as shown in Fig. 2. With a 66.7% reduction in computation time and a 2% increase in diffraction efficiency compared to the GA method, the hybrid algorithm demonstrates its efficient performance. In the optical experiment, the phase holograms were displayed on a programmable phase modulator (model XGA). Figure 3 shows pictures of diffracted patterns of the letter "0" from the holograms generated using the hybrid algorithm. A diffraction efficiency of 75.8% and a uniformity of 5.8% are measured. The simulation and experiment results are in fairly good agreement with each other. In this paper, the Genetic Algorithm and Neural Network have been successfully combined in designing CGHs. This method gives a significant reduction in computation time compared to the GA method while still achieving holograms of high diffraction efficiency and uniformity. This work was supported by No. M01-2001-000-00324-0 (2002) from the Korea Science & Engineering Foundation.
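
The snippet below sketches only the seeding idea of the hybrid algorithm (initializing part of the GA population with ANN-predicted holograms) plus an illustrative FFT-based fitness; the `ann_predict` interface and the fitness definition are assumptions, not the authors' cost function.

```python
# Minimal sketch: ANN-seeded initial population for a binary-phase-hologram GA.
import numpy as np

rng = np.random.default_rng(1)

def seeded_population(ann_predict, target_images, pop_size=30, shape=(64, 64)):
    """ann_predict: hypothetical callable mapping a target image to an
    approximate binary phase hologram (the trained network from step 1)."""
    seeds = [ann_predict(img) for img in target_images]             # ANN-initialized members
    randoms = [rng.integers(0, 2, shape) for _ in range(pop_size - len(seeds))]
    return seeds + randoms

def fitness(hologram):
    """Illustrative diffraction-efficiency-style score from the Fourier
    transform of the binary phase pattern (not the paper's exact cost)."""
    field = np.exp(1j * np.pi * hologram)           # 0/1 -> 0/pi phase
    recon = np.abs(np.fft.fft2(field)) ** 2
    return recon.max() / recon.sum()

# The GA then proceeds as usual: evaluate fitness, select, crossover, and
# mutate until convergence, starting from the seeded population.
```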


A Study on the Establishment of Comparison System between the Statement of Military Reports and Related Laws (군(軍) 보고서 등장 문장과 관련 법령 간 비교 시스템 구축 방안 연구)

  • Jung, Jiin; Kim, Mintae; Kim, Wooju
    • Journal of Intelligence and Information Systems / v.26 no.3 / pp.109-125 / 2020
  • The Ministry of National Defense is pushing the Defense Acquisition Program to build strong defense capabilities, and it spends more than 10 trillion won annually on defense improvement. As the Defense Acquisition Program is directly related to the security of the nation as well as the lives and property of the people, it must be carried out very transparently and efficiently by experts. However, the excessive diversification of laws and regulations related to the Defense Acquisition Program has made it challenging for many working-level officials to carry out the program smoothly. Many officials reportedly discover related regulations they were unaware of only after pushing ahead with their work. In addition, statutory statements related to the Defense Acquisition Program tend to cause serious issues even if only a single expression within a sentence is wrong. Despite this, efforts to establish a sentence comparison system to correct this issue in real time have been minimal. Therefore, this paper proposes an implementation plan for a "Comparison System between the Statement of Military Reports and Related Laws" that uses a Siamese Network-based artificial neural network, a model from the field of natural language processing (NLP), to measure the similarity between sentences likely to appear in Defense Acquisition Program-related documents and those from related statutory provisions, to determine and classify the risk of illegality, and to make users aware of the consequences. Various artificial neural network models (Bi-LSTM, Self-Attention, D_Bi-LSTM) were studied using 3,442 pairs of "Original Sentence" (described in actual statutes) and "Edited Sentence" (edited sentences derived from an "Original Sentence"). Among the many Defense Acquisition Program-related statutes, the DEFENSE ACQUISITION PROGRAM ACT, the ENFORCEMENT RULE OF THE DEFENSE ACQUISITION PROGRAM ACT, and the ENFORCEMENT DECREE OF THE DEFENSE ACQUISITION PROGRAM ACT were selected. The "Original Sentence" set consists of the 83 clauses that actually appear in these Acts and that are most accessible to working-level officials in their work. The "Edited Sentence" set comprises 30 to 50 similar sentences per clause ("Original Sentence") that are likely to appear, in modified form, in military reports. During the creation of the edited sentences, the original sentences were modified using 12 defined rules, and the edited sentences were produced in proportion to the number of such rules, as was the case for the original sentences. After conducting 1:1 sentence similarity performance evaluation experiments, it was possible to classify each "Edited Sentence" as legal or illegal with considerable accuracy. In addition, the "Edited Sentence" dataset used to train the neural network models contains a variety of actual statutory statements ("Original Sentence"), which are characterized by the 12 rules. On the other hand, the models are not able to effectively classify other sentences that appear in actual military reports when only the "Original Sentence" and "Edited Sentence" dataset has been fed to them; the dataset is not ample enough for the models to recognize new incoming sentences. Hence, the performance of the models was reassessed by writing an additional 120 new sentences that more closely resemble those in actual military reports while still being associated with the original sentences.
Thereafter, we were able to confirm that the models' performance surpassed a certain level even when they were trained merely with the "Original Sentence" and "Edited Sentence" data. If sufficient model training is achieved by improving and expanding the full training dataset with sentences that actually appear in reports, the models will be able to better classify other sentences from military reports as legal or illegal. Based on the experimental results, this study confirms the possibility and value of building a "Real-Time Automated Comparison System Between Military Documents and Related Laws". The research conducted in this experiment can verify which specific clause, among several related law clauses, is most similar to a sentence that appears in Defense Acquisition Program-related military reports. This helps determine whether the contents of the military report sentences are at risk of illegality when compared with those of the law clauses.
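
A minimal sketch of a Siamese Bi-LSTM similarity scorer of the kind described follows; the vocabulary size, mean pooling, cosine similarity scoring, and decision threshold are assumptions rather than the paper's exact configuration.

```python
# Minimal sketch: Siamese Bi-LSTM that encodes a report sentence and a
# statutory sentence with shared weights and scores their similarity,
# which is then thresholded into a legal / illegal decision.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SiameseBiLSTM(nn.Module):
    def __init__(self, vocab_size=30000, emb=128, hidden=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb, padding_idx=0)
        self.encoder = nn.LSTM(emb, hidden, batch_first=True, bidirectional=True)

    def encode(self, token_ids):                      # shared weights for both inputs
        out, _ = self.encoder(self.embed(token_ids))
        return out.mean(dim=1)                         # mean-pool over tokens

    def forward(self, report_ids, statute_ids):
        a, b = self.encode(report_ids), self.encode(statute_ids)
        return F.cosine_similarity(a, b)               # similarity in [-1, 1]

# usage: similarity above a tuned threshold -> sentence judged consistent (legal)
model = SiameseBiLSTM()
sim = model(torch.randint(1, 30000, (4, 40)), torch.randint(1, 30000, (4, 60)))
is_legal = sim > 0.8                                   # threshold is an assumption
```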