• Title/Summary/Keyword: 학습기반 (learning-based)

Search Results: 10,129

Automatic Fracture Detection in CT Scan Images of Rocks Using Modified Faster R-CNN Deep-Learning Algorithm with Rotated Bounding Box (회전 경계박스 기능의 변형 FASTER R-CNN 딥러닝 알고리즘을 이용한 암석 CT 영상 내 자동 균열 탐지)

  • Pham, Chuyen;Zhuang, Li;Yeom, Sun;Shin, Hyu-Soung
    • Tunnel and Underground Space
    • /
    • v.31 no.5
    • /
    • pp.374-384
    • /
    • 2021
  • In this study, we propose a new approach for automatic fracture detection in CT scan images of rock specimens. The approach is built on top of the two-stage object detection deep learning algorithm Faster R-CNN, with the major modification of using rotated bounding boxes. The rotated bounding box plays a key role in future work to overcome several inherent difficulties of fracture segmentation related to the heterogeneity of the uninteresting background (i.e., minerals) and the variation in the size and shape of fractures. Compared to the commonly used axis-aligned bounding box, the rotated bounding box adapts better to the elongated shape of a fracture, thereby minimizing the proportion of background within the box. An additional benefit of the rotated bounding box is that it provides approximate information on the orientation and length of a fracture without a further segmentation and measurement step. To validate the applicability of the proposed approach, we train and test it on a number of CT image sets of fractured granite specimens with highly heterogeneous backgrounds, as well as other rocks such as sandstone and shale. The results demonstrate that our approach yields encouraging fracture detection performance, with a mean average precision (mAP) of up to 0.89, and outperforms the conventional approach in terms of the background-to-object ratio within the bounding box.
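The paper itself reports no code, but a minimal sketch of the rotated bounding box representation described above (center, size, angle) and the orientation and length information it carries directly might look like the following; the class, field names, and use of OpenCV are illustrative assumptions rather than the authors' implementation.

```python
import numpy as np
import cv2  # assumed dependency, used only for geometry helpers

# A rotated bounding box as described in the abstract: center, size, angle.
# Field names are illustrative, not taken from the paper.
class RotatedBox:
    def __init__(self, cx, cy, w, h, angle_deg):
        self.cx, self.cy = cx, cy      # box center (pixels)
        self.w, self.h = w, h          # box width and height (pixels)
        self.angle = angle_deg         # rotation of the box (degrees)

    def fracture_orientation(self):
        """Orientation of the fracture, read directly from the box angle."""
        return self.angle

    def fracture_length(self):
        """Approximate fracture length: the longer side of the box."""
        return max(self.w, self.h)

    def corners(self):
        """Four corner points of the rotated box."""
        return cv2.boxPoints(((self.cx, self.cy), (self.w, self.h), self.angle))

    def background_ratio_vs_axis_aligned(self):
        """Area of the axis-aligned box enclosing the same fracture divided by
        the rotated box area; large values mean the rotated box is much tighter."""
        x, y, bw, bh = cv2.boundingRect(self.corners().astype(np.float32))
        return (bw * bh) / (self.w * self.h)

box = RotatedBox(cx=120, cy=80, w=200, h=12, angle_deg=35.0)
print(box.fracture_orientation(), box.fracture_length())
print(box.background_ratio_vs_axis_aligned())
```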

A fundamental study on the automation of tunnel blasting design using a machine learning model (머신러닝을 이용한 터널발파설계 자동화를 위한 기초연구)

  • Kim, Yangkyun;Lee, Je-Kyum;Lee, Sean Seungwon
    • Journal of Korean Tunnelling and Underground Space Association
    • /
    • v.24 no.5
    • /
    • pp.431-449
    • /
    • 2022
  • As many tunnels have been constructed, a large body of experience and techniques has accumulated for both tunnel design and tunnel construction. Hence, for many routine tunnel design tasks, it is often sufficient to modify or supplement previous similar design cases, unless the tunnel has a unique structure or unusual geological conditions. In particular, for tunnel blast design, it is reasonable to refer to previous similar design cases, because the blast design produced at the design stage is a preliminary design; it is general practice to perform an additional blast design through test blasts prior to the start of tunnel excavation. Meanwhile, entering the Industry 4.0 era, artificial intelligence (AI), whose use is surging across the whole industrial sector, is being broadly applied to tunnelling and blasting. For drill-and-blast tunnels, AI has mainly been applied to the estimation of blast vibration and rock mass classification; however, there are few cases where it has been applied to blast pattern design. Thus, this study attempts to automate tunnel blast design by means of machine learning, a branch of artificial intelligence. Blast design data were collected from 25 tunnel design reports for training and 2 additional reports for testing. From these, 4 design parameters (rock mass class, road type, and the cross-sectional areas of the upper and bench sections) were used as input data, and 16 design elements (blast cut type, specific charge, number of drill holes, and the spacing and burden for each blast hole group, etc.) were used as output. Based on these design data, three machine learning models (XGBoost, ANN, and SVM) were tested; XGBoost was chosen as the best model, and its results show a generally similar trend to an actual design when assumed design parameters were input. The results of this study are not yet sufficient to perform an entire blast design; however, additional studies are planned to make practical use possible after collecting more blast design data and refining the detailed machine learning processes.
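The abstract does not specify the model configuration; a minimal sketch of mapping the 4 design parameters to the 16 design elements with XGBoost (one regressor per output via scikit-learn's MultiOutputRegressor) might look like this. Column names, placeholder values, and hyperparameters are illustrative assumptions, not the paper's setup.

```python
import numpy as np
import pandas as pd
from xgboost import XGBRegressor
from sklearn.multioutput import MultiOutputRegressor

# Hypothetical training table: 4 design parameters -> 16 design elements.
# Column names are illustrative, not the paper's actual variable names.
X = pd.DataFrame({
    "rock_mass_class": [1, 2, 3, 4, 5] * 5,
    "road_type": [0, 1, 0, 1, 0] * 5,
    "upper_section_area": np.random.uniform(40, 80, 25),
    "bench_section_area": np.random.uniform(20, 50, 25),
})
y = pd.DataFrame(np.random.uniform(0.5, 2.0, size=(25, 16)),
                 columns=[f"design_element_{i}" for i in range(16)])

# One gradient-boosted tree model per output (cut type, specific charge,
# number of drill holes, spacing/burden per hole group, ...).
model = MultiOutputRegressor(
    XGBRegressor(n_estimators=200, max_depth=4, learning_rate=0.1)
)
model.fit(X, y)

# Predict the 16 blast design elements for an assumed new tunnel section.
new_section = pd.DataFrame([{"rock_mass_class": 3, "road_type": 1,
                             "upper_section_area": 65.0,
                             "bench_section_area": 35.0}])
print(model.predict(new_section).round(2))
```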

Future Prospects of Forest Type Change Determined from National Forest Inventory Time-series Data (시계열 국가산림자원조사 자료를 이용한 전국 산림의 임상 변화 특성 분석과 미래 전망)

  • Eun-Sook, Kim;Byung-Heon, Jung;Jae-Soo, Bae;Jong-Hwan, Lim
    • Journal of Korean Society of Forest Science
    • /
    • v.111 no.4
    • /
    • pp.461-472
    • /
    • 2022
  • Natural and anthropogenic factors cause forest types to change continuously. Since the share of forest area by forest type is important information for characterizing national forest resources, an accurate understanding of prospective forest type change is required. The aim of this study was to use National Forest Inventory (NFI) time-series data to understand the characteristics of forest type change and to estimate future prospects of nationwide forest type change. We used forest type change information from the fifth and seventh NFI datasets, together with climate, topography, forest stand, and disturbance variables related to forest type change, to analyze the trends and characteristics of forest type change. The results showed that forests in Korea are changing in the direction of decreasing coniferous forests and increasing mixed and broadleaf forests. Sites changing from coniferous to mixed forests, or from mixed to broadleaf forests, were mainly located in topographically and climatically wet environments. Forest type changes occurred more frequently in sites with high disturbance potential (high temperature, young or sparse forest stands, and non-forest areas). We then built a forest type change model (SVM) and applied a climate change scenario (RCP 8.5) to predict future changes. Over the 40-year period from 2015 to 2055, the SVM predicted that coniferous forests will decrease from 38.1% to 28.5%, broadleaf forests will increase from 34.2% to 38.8%, and mixed forests will increase from 27.7% to 32.7%. These results can be used as basic data for establishing future forest management strategies.
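The abstract does not give implementation details; one plausible minimal sketch of an SVM-based forest type change model of this kind, using scikit-learn, is shown below. All feature names, labels, and values are placeholders, not the study's data.

```python
import numpy as np
import pandas as pd
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Hypothetical NFI plot table: current conditions -> forest type at the next
# survey cycle (0 = coniferous, 1 = mixed, 2 = broadleaf). Feature names are
# illustrative stand, topography, and climate variables.
rng = np.random.default_rng(0)
n = 500
plots = pd.DataFrame({
    "mean_temperature": rng.normal(11, 2, n),
    "topographic_wetness": rng.uniform(0, 20, n),
    "stand_age": rng.integers(5, 80, n),
    "stand_density": rng.uniform(0.1, 1.0, n),
    "current_type": rng.integers(0, 3, n),
})
next_type = rng.integers(0, 3, n)  # placeholder labels, for illustration only

model = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0, gamma="scale"))
model.fit(plots, next_type)

# Projecting future forest type shares under a warming scenario would then
# amount to replacing the climate columns with RCP 8.5 projections and
# iterating the prediction over successive survey cycles.
future = plots.assign(mean_temperature=plots["mean_temperature"] + 2.0)
shares = pd.Series(model.predict(future)).value_counts(normalize=True)
print(shares)
```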

Exploring the contextual factors of episodic memory: dissociating distinct social, behavioral, and intentional episodic encoding from spatio-temporal contexts based on medial temporal lobe-cortical networks (일화기억을 구성하는 맥락 요소에 대한 탐구: 시공간적 맥락과 구분되는 사회적, 행동적, 의도적 맥락의 내측두엽-대뇌피질 네트워크 특징을 중심으로)

  • Park, Jonghyun;Nah, Yoonjin;Yu, Sumin;Lee, Seung-Koo;Han, Sanghoon
    • Korean Journal of Cognitive Science
    • /
    • v.33 no.2
    • /
    • pp.109-133
    • /
    • 2022
  • Episodic memory consists of a core event and its associated contexts. Although the role of the hippocampus and its neighboring regions in contextual representation during encoding has become increasingly evident, it remains unclear how these regions handle context-specific information other than spatio-temporal contexts. Using high-resolution functional MRI, we explored the involvement of the medial temporal lobe (MTL) and cortical regions during the encoding of various types of contextual information (i.e., the journalistic 5W1H principle): "Who did it?," "Why did it happen?," "What happened?," "When did it happen?," "Where did it happen?," and "How did it happen?" Participants answered six different contextual questions while viewing simple experimental events consisting of two faces and one object on the screen. The MTL was divided into sub-regions by hierarchical clustering of resting-state data. General linear model analyses revealed stronger activation of MTL sub-regions, the prefrontal cortex (PFC), and the inferior parietal lobule (IPL) during social (Who), behavioral (How), and intentional (Why) contextual processing than during spatio-temporal (Where/When) contextual processing. To further investigate the functional networks involved in the dissociation of contextual encoding, a multivariate pattern analysis was conducted with features selected from the task-based connectivity links between the hippocampal subfields and PFC/IPL. Social, behavioral, and intentional contextual processing were each successfully classified from spatio-temporal contextual processing. Thus, specific contexts in episodic memory, namely social, behavioral, and intentional contexts, involve functional connectivity patterns distinct from those supporting spatio-temporal contextual memory.
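The abstract does not include analysis code; a minimal sketch of the multivariate pattern analysis step (classifying context conditions from task-based connectivity features with a linear SVM) might look like the following. The feature dimensions, labels, and cross-validation scheme are placeholders, not the study's data.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Hypothetical feature matrix: one row per trial/run, one column per
# task-based connectivity link between hippocampal subfields and PFC/IPL
# regions. Values and dimensions are placeholders, not the study's data.
rng = np.random.default_rng(1)
n_samples, n_links = 120, 60
connectivity_features = rng.normal(size=(n_samples, n_links))

# Binary labels: 1 = social (Who) trials, 0 = spatio-temporal (Where/When)
# trials; the same scheme would be repeated for the How and Why contrasts.
labels = rng.integers(0, 2, n_samples)

clf = make_pipeline(StandardScaler(), SVC(kernel="linear", C=1.0))
scores = cross_val_score(clf, connectivity_features, labels, cv=5)
print("cross-validated classification accuracy:", scores.mean().round(3))
```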

Development of deep learning network based low-quality image enhancement techniques for improving foreign object detection performance (이물 객체 탐지 성능 개선을 위한 딥러닝 네트워크 기반 저품질 영상 개선 기법 개발)

  • Ki-Yeol Eom;Byeong-Seok Min
    • Journal of Internet Computing and Services
    • /
    • v.25 no.1
    • /
    • pp.99-107
    • /
    • 2024
  • Along with economic growth and industrial development, demand is increasing for various electronic components and devices such as semiconductors, SMT components, and electric battery products. However, these products may contain foreign substances introduced during the manufacturing process, such as iron, aluminum, or plastic, which can lead to serious problems or product malfunctions, and even to fires in electric vehicles. To address these problems, it is necessary to determine whether foreign materials are present inside the product, and many tests have been performed using non-destructive testing methods such as ultrasound or X-ray inspection. Nevertheless, there are technical challenges and limitations in acquiring X-ray images and determining the presence of foreign materials. In particular, small or low-density foreign materials may not be visible even with X-ray equipment, and noise can also make foreign objects difficult to detect. Moreover, to meet manufacturing speed requirements, the X-ray acquisition time must be reduced, which can result in a very low signal-to-noise ratio (SNR) and lower the foreign material detection accuracy. Therefore, in this paper we propose a five-step approach to overcome the limitations of low-resolution images that make foreign substances challenging to detect. First, the global contrast of the X-ray images is increased through histogram stretching. Second, a local contrast enhancement technique is applied to strengthen the high-frequency signal and local contrast. Third, unsharp masking is applied to improve edge clarity, making objects more visible. Fourth, the Residual Dense Block (RDB) super-resolution method is used for noise reduction and image enhancement. Last, the YOLOv5 algorithm is trained and used to detect foreign objects. Experimental results show that the proposed method improves performance metrics such as precision by more than 10% compared to low-density images.
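The abstract describes a five-step enhancement pipeline; a minimal sketch of the first three image processing steps (global histogram stretching, local contrast enhancement, unsharp masking) using OpenCV might look as follows. The specific functions and parameter values are illustrative assumptions, and the RDB super-resolution and YOLOv5 stages are only indicated in comments.

```python
import cv2
import numpy as np

def enhance_xray(gray: np.ndarray) -> np.ndarray:
    # 1) Global contrast: histogram stretching to the full 0-255 range.
    stretched = cv2.normalize(gray, None, 0, 255, cv2.NORM_MINMAX)

    # 2) Local contrast / high-frequency emphasis, here via CLAHE as one
    #    common local contrast enhancement technique (an assumption; the
    #    paper's exact technique is not specified in the abstract).
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    local = clahe.apply(stretched)

    # 3) Edge clarity: unsharp masking (weighted original minus blur).
    blurred = cv2.GaussianBlur(local, (5, 5), sigmaX=1.0)
    sharpened = cv2.addWeighted(local, 1.5, blurred, -0.5, 0)

    # Steps 4 (RDB super-resolution) and 5 (YOLOv5 detection) would follow,
    # using pretrained networks; they are omitted from this sketch.
    return sharpened

image = cv2.imread("xray_sample.png", cv2.IMREAD_GRAYSCALE)  # hypothetical file
if image is not None:
    enhanced = enhance_xray(image)
    cv2.imwrite("xray_enhanced.png", enhanced)
```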

Study on data preprocessing methods for considering snow accumulation and snow melt in dam inflow prediction using machine learning & deep learning models (머신러닝&딥러닝 모델을 활용한 댐 일유입량 예측시 융적설을 고려하기 위한 데이터 전처리에 대한 방법 연구)

  • Jo, Youngsik;Jung, Kwansue
    • Journal of Korea Water Resources Association
    • /
    • v.57 no.1
    • /
    • pp.35-44
    • /
    • 2024
  • Research on dam inflow prediction has actively explored data-driven machine learning and deep learning (ML&DL) tools across diverse domains. For precise dam inflow prediction, it is crucial not only to improve the inherent model performance but also to account for model characteristics and to preprocess the data appropriately. In particular, in dam basins influenced by snow accumulation, such as the Soyang Dam basin, existing rainfall records, in which snowfall is melted by heated gauges and recorded as rainfall, distort the relationship between snow accumulation and rainfall. This study focuses on the preprocessing of rainfall data that is essential for applying ML&DL models to dam inflow prediction in basins affected by snow accumulation. Such preprocessing is vital for representing physical phenomena such as reduced outflow during winter, when precipitation accumulates as snow, and increased outflow during spring despite little or no rain. Three machine learning models (SVM, RF, LGBM) and two deep learning models (LSTM, TCN) were built by combining rainfall and inflow series. With optimal hyperparameter tuning, an appropriate model was selected, resulting in a high level of predictive performance with NSE values ranging from 0.842 to 0.894. Moreover, to generate rainfall correction data that accounts for snow accumulation, a simulated snow accumulation algorithm was developed. Applying this correction to the machine learning and deep learning models yielded NSE values ranging from 0.841 to 0.896, a similarly high level of predictive performance compared to the models without the snow accumulation correction. Notably, during the snow accumulation period, adjusting the rainfall used in the training phase led to a more accurate simulation of the observed inflow. This underscores the importance of careful data preprocessing that takes physical factors such as snowfall and snowmelt into account when constructing data-driven models.
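The abstract does not detail the simulated snow accumulation algorithm; a common way to approximate the idea (holding back precipitation as snow when it is cold and releasing it later as melt) is a temperature-index scheme such as the sketch below. The threshold temperature, degree-day factor, and example series are illustrative assumptions, not values from the paper.

```python
import numpy as np
import pandas as pd

def correct_rainfall_for_snow(precip_mm, temp_c,
                              snow_threshold_c=0.0, degree_day_factor=3.0):
    """Temperature-index sketch: store precipitation as snow when the air
    temperature is below a threshold, and release it later as melt added
    back to the effective rainfall series. Parameter values are
    illustrative assumptions, not those used in the paper."""
    snowpack = 0.0
    effective_rain = []
    for p, t in zip(precip_mm, temp_c):
        if t < snow_threshold_c:
            snowpack += p          # precipitation accumulates as snow
            rain = 0.0
        else:
            melt = min(snowpack, degree_day_factor * (t - snow_threshold_c))
            snowpack -= melt
            rain = p + melt        # rain plus snowmelt drives inflow
        effective_rain.append(rain)
    return pd.Series(effective_rain, index=getattr(precip_mm, "index", None))

# Hypothetical daily series for a snow-affected basin.
dates = pd.date_range("2020-12-01", periods=120, freq="D")
precip = pd.Series(np.random.gamma(0.5, 5.0, len(dates)), index=dates)
temp = pd.Series(5 * np.sin(np.linspace(-np.pi, np.pi, len(dates))), index=dates)
corrected = correct_rainfall_for_snow(precip, temp)
```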

Exploring Pre-Service Earth Science Teachers' Understandings of Computational Thinking (지구과학 예비교사들의 컴퓨팅 사고에 대한 인식 탐색)

  • Young Shin Park;Ki Rak Park
    • Journal of the Korean earth science society
    • /
    • v.45 no.3
    • /
    • pp.260-276
    • /
    • 2024
  • The purpose of this study is to explore whether pre-service teachers majoring in earth science improve their perception of computational thinking through STEAM classes focused on engineering-based wave power plants. The STEAM class involved designing the most efficient wave power plant model. A survey on computational thinking practices, developed from previous research, was administered to 15 earth science pre-service teachers to gauge their understanding of computational thinking. Each group developed an efficient wave power plant model based on the scientific principle of driving a turbine with waves. The activities included problem recognition (problem solving), coding (coding and programming), creating a wave power plant model using a 3D printer (design and create a model), and evaluating the output to correct errors (debugging). The pre-service teachers showed a high level of recognition of computational thinking practices, particularly in "logical thinking," with the top five of the 14 practices averaging five points each. However, participants lacked a clear understanding of certain computational thinking practices such as abstraction, problem decomposition, and using big data, and their comprehension of these decreased after the STEAM lesson. Although there was a significant reduction in the misconception that computational thinking is "playing online games" (from 4.06 to 0.86), some participants still equated it with "thinking like a computer" and "using a computer to do calculations." The study found slight improvements in "problem solving" (3.73 to 4.33), "pattern recognition" (3.53 to 3.66), and "best tool selection" (4.26 to 4.66). To enhance computational thinking skills, a practice-oriented curriculum should be offered. Additional STEAM classes on diverse topics could lead to a significant improvement in computational thinking practices. Therefore, establishing an educational curriculum for multi-situational learning is essential.

Stock Price Prediction by Utilizing Category Neutral Terms: Text Mining Approach (카테고리 중립 단어 활용을 통한 주가 예측 방안: 텍스트 마이닝 활용)

  • Lee, Minsik;Lee, Hong Joo
    • Journal of Intelligence and Information Systems
    • /
    • v.23 no.2
    • /
    • pp.123-138
    • /
    • 2017
  • Since the stock market is driven by the expectations of traders, studies have attempted to predict stock price movements through the analysis of various sources of text data. To predict stock price movements, research has been conducted not only on the relationship between text data and fluctuations in stock prices, but also on trading stocks based on news articles and social media responses. Studies that predict stock price movements have also applied classification algorithms built on a term-document matrix, as in other text mining approaches. Because documents contain many words, it is better to select the words that contribute most when building a term-document matrix. Based on word frequency, words that show too little frequency or importance are removed, and words can also be selected by measuring the degree to which they contribute to correctly classifying a document. The basic idea of constructing a term-document matrix is to collect all the documents to be analyzed and to select and use the words that influence the classification. In this study, we analyze the documents for each individual stock and select words that are irrelevant to all categories as neutral words. We then extract the words around each selected neutral word and use them to generate the term-document matrix. The approach starts from the idea that stock movements are less related to the presence of the neutral words themselves, and that the words surrounding a neutral word are more likely to affect stock price movements; the generated term-document matrix is then fed to an algorithm that classifies stock price fluctuations. We first removed stop words and selected neutral words for each stock, and then excluded from the selected words those that also appeared in news articles about other stocks. Through an online news portal, we collected four months of news articles on the top 10 stocks by market capitalization. We used three months of news data for training and applied the remaining one month of articles to the model to predict the stock price movements of the next day. We used SVM, boosting, and random forest models to predict stock price movements. The stock market was open for a total of 80 days during the four months (2016/02/01-2016/05/31); the initial 60 days were used as the training set and the remaining 20 days as the test set. The proposed word-based algorithm showed better classification performance than the word selection method based on sparsity. This study predicted stock price volatility by collecting and analyzing news articles on the top 10 stocks by market capitalization. We used a term-document-matrix-based classification model to estimate stock price fluctuations and compared the performance of the existing sparsity-based word extraction method with the suggested method of removing words from the term-document matrix. The suggested method differs from the existing word extraction method in that it uses not only the news articles for the corresponding stock but also news for other stocks to determine which words to extract: it removes not only the words that appear in both rising and falling cases but also the words that commonly appear in the news for other stocks. When prediction accuracy was compared, the suggested method showed higher accuracy. A limitation of this study is that stock price prediction was framed as classifying rises and falls, and the experiment was conducted only on the top ten stocks, which do not represent the entire stock market. In addition, it is difficult to demonstrate investment performance, because stock price fluctuations and rates of return may differ. Therefore, further research using more stocks and predicting returns through trading simulation is needed.
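A minimal sketch of the neutral-word context extraction and term-document matrix construction described above might look like the following; the neutral word list, window size, tokenization, and example articles are illustrative assumptions, not the paper's actual data or vocabulary.

```python
from sklearn.feature_extraction.text import CountVectorizer

# Hypothetical neutral words (words judged irrelevant to rise/fall for all
# stocks) and a context window size; both are illustrative assumptions.
NEUTRAL_WORDS = {"announced", "today", "company"}
WINDOW = 3

def neutral_word_contexts(article: str) -> str:
    """Keep only the words within WINDOW positions of any neutral word,
    so the term-document matrix is built from neutral-word contexts."""
    tokens = article.lower().split()
    keep = set()
    for i, tok in enumerate(tokens):
        if tok in NEUTRAL_WORDS:
            keep.update(range(max(0, i - WINDOW), min(len(tokens), i + WINDOW + 1)))
    return " ".join(tokens[i] for i in sorted(keep))

articles = [
    "The company announced today a record quarterly profit beating forecasts",
    "Regulators fined the company today over the recalled battery products",
]
contexts = [neutral_word_contexts(a) for a in articles]

# Term-document matrix over the extracted contexts; a classifier such as
# SVM, boosting, or random forest would then be trained on these features
# with next-day rise/fall labels.
vectorizer = CountVectorizer()
X = vectorizer.fit_transform(contexts)
print(vectorizer.get_feature_names_out())
```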

Korean Sentence Generation Using Phoneme-Level LSTM Language Model (한국어 음소 단위 LSTM 언어모델을 이용한 문장 생성)

  • Ahn, SungMahn;Chung, Yeojin;Lee, Jaejoon;Yang, Jiheon
    • Journal of Intelligence and Information Systems
    • /
    • v.23 no.2
    • /
    • pp.71-88
    • /
    • 2017
  • Language models were originally developed for speech recognition and language processing. Using a set of example sentences, a language model predicts the next word or character based on sequential input data. N-gram models have been widely used, but they cannot model correlations between input units efficiently, since they are probabilistic models based on the frequency of each unit in the training set. Recently, as deep learning algorithms have developed, recurrent neural network (RNN) and long short-term memory (LSTM) models have been widely used for neural language modeling (Ahn, 2016; Kim et al., 2016; Lee et al., 2016). These models can reflect dependencies between objects that are entered sequentially into the model (Gers and Schmidhuber, 2001; Mikolov et al., 2010; Sundermeyer et al., 2012). To train a neural language model, texts need to be decomposed into words or morphemes. However, since a training set of sentences generally includes a huge number of words or morphemes, the vocabulary becomes very large, which increases model complexity. In addition, word-level or morpheme-level models can only generate vocabulary items contained in the training set. Furthermore, with highly morphological languages such as Turkish, Hungarian, Russian, Finnish, or Korean, morpheme analyzers are more likely to introduce errors in the decomposition process (Lankinen et al., 2016). Therefore, this paper proposes a phoneme-level language model for Korean based on LSTM models. A phoneme, such as a vowel or a consonant, is the smallest unit that composes Korean text. We constructed language models using three or four LSTM layers. Each model was trained using the stochastic gradient descent algorithm as well as more advanced optimization algorithms such as Adagrad, RMSprop, Adadelta, Adam, Adamax, and Nadam. A simulation study was conducted with Old Testament texts using the deep learning package Keras with a Theano backend. After preprocessing the texts, the dataset included 74 unique characters, including vowels, consonants, and punctuation marks. We then constructed input vectors of 20 consecutive characters with the following 21st character as the output. In total, 1,023,411 input-output pairs were included in the dataset, which we divided into training, validation, and test sets in a 70:15:15 ratio. All simulations were conducted on a system equipped with an Intel Xeon CPU (16 cores) and an NVIDIA GeForce GTX 1080 GPU. We compared the loss function evaluated on the validation set, the perplexity evaluated on the test set, and the training time for each model. As a result, all the optimization algorithms except stochastic gradient descent showed similar validation loss and perplexity, clearly superior to those of stochastic gradient descent. Stochastic gradient descent also took the longest training time for both the 3- and 4-layer LSTM models. On average, the 4-layer LSTM model took 69% longer to train than the 3-layer model, while its validation loss and perplexity were not improved significantly, or even became worse under specific conditions. On the other hand, when comparing the automatically generated sentences, the 4-layer LSTM model tended to generate sentences closer to natural language than the 3-layer model. Although there were slight differences between the models in the completeness of the generated sentences, sentence generation performance was quite satisfactory under all simulation conditions: the models generated only legitimate Korean letters, and the use of postpositions and the conjugation of verbs were grammatically almost perfect. The results of this study are expected to be widely used for Korean language processing and speech recognition, which underlie artificial intelligence systems.
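A minimal sketch of the character-level setup described above (20-character windows predicting the 21st character with stacked LSTM layers) is shown below; it uses the current tf.keras API rather than the Keras/Theano stack reported in the paper, and the toy corpus, layer sizes, and training settings are illustrative assumptions.

```python
import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dense
from tensorflow.keras.utils import to_categorical

# Placeholder phoneme-level corpus (Korean jamo characters), repeated so the
# sketch runs quickly; the paper used Old Testament texts with 74 characters.
corpus = "ㅇㅣㄹㅡㅁㅇㅡㄴㅎㅏㄴㄱㅡㄹㅇㅣㄷㅏ" * 50
chars = sorted(set(corpus))
char_to_idx = {c: i for i, c in enumerate(chars)}
seq_len, vocab = 20, len(chars)

# Build (20-character input, 21st-character output) pairs, one-hot encoded.
X, y = [], []
for i in range(len(corpus) - seq_len):
    X.append([char_to_idx[c] for c in corpus[i:i + seq_len]])
    y.append(char_to_idx[corpus[i + seq_len]])
X = to_categorical(np.array(X), num_classes=vocab)   # (samples, 20, vocab)
y = to_categorical(np.array(y), num_classes=vocab)

# Three stacked LSTM layers, as in one of the paper's model variants.
model = Sequential([
    LSTM(128, return_sequences=True, input_shape=(seq_len, vocab)),
    LSTM(128, return_sequences=True),
    LSTM(128),
    Dense(vocab, activation="softmax"),
])
model.compile(optimizer="adam", loss="categorical_crossentropy")
model.fit(X, y, epochs=2, batch_size=64, verbose=0)
```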

A Comparative Study on Awareness of Middle School Students, School Parents, and Human Resources Directors in Industrial Institutions about Admission into Specialized High Schools and Career after Graduating from Specialized High Schools (특성화고 진학 및 졸업 후 진로에 대한 중학생, 학부모, 산업체 인사 담당자의 인식 비교 연구)

  • Lee, Byung-Wook;Ahn, Jae-Yeong;Lee, Chan-Joo;Lee, Sang-Hyun
    • 대한공업교육학회지
    • /
    • v.38 no.2
    • /
    • pp.48-67
    • /
    • 2013
  • This study sought to suggest implications for the operational direction of specialized high schools (SHS) by surveying the awareness of middle school students (MSS), school parents (SP), and human resources directors in industrial institutions (HRDII), who will be the main users of SHS education, regarding entering SHS and careers after graduating from SHS. Middle school seniors, SP, and HRDII in Asan, Chungnam were the subjects of this survey. The results are summarized as follows. First, MSS and SP generally preferred general high schools to vocational schools such as SHS and meister high schools; MSS considered school records, and SP considered aptitude and talent, as the main factors in choosing a high school. Second, MSS, SP, and HRDII recognized the purposes of SHS as developing talent and aptitude and getting a job. Positive images of SHS included applying one's talent and aptitude to life early, finding good jobs easily, becoming independent quickly after graduation, and learning excellent skills; negative images included social prejudice and discrimination, the perception that students with poor school records enter them, disadvantages in promotion and wages, and being unfavorable for entering universities. They also recognized SHS education as effective for improving basic and practical abilities and key competencies, developing creative human resources, and cultivating sound character and courteous manners. Third, many MSS and SP expressed an intention to enter an SHS if one were established in Asan. The reasons for wishing to enter were applying their aptitude and talent to life early, learning excellent skills, and hoping for early employment; the reasons against were that SHS did not suit their aptitude and talent, low awareness of SHS, being unfavorable for entering universities, and social prejudice and discrimination. After graduating from SHS, they hoped in similar proportions to get a job or to enter university. The main reasons for wanting a job were to succeed by entering society early and the difficulty of finding a job even after graduating from university; the main reasons for wanting to enter university were in-depth education in their major and social discrimination based on level of education. The ability to perform job duties formed the greatest part of the employment standard recognized by MSS, SP, and HRDII. MSS and SP mostly hoped for industrial, home economics, and commercial majors in SHS, and considered aptitude and talent, promising prospects, and favorability for employment when choosing a major. The reasons HRDII hire SHS graduates were to develop students into the talent their institutions need, the students' ability, and the need for manpower at the high school graduate level; some also answered that they could hire SHS graduates if they had the ability to perform job duties. Based on these results, the following operational directions for SHS are proposed: SHS should diversify majors and curricula to meet the various requirements of students and parents, establish an admission system based on career guidance, and improve students' ability to perform job duties by establishing work-based learning. The government should organize work-to-school policies that enable practical career development for SHS graduates, promote policies that reinforce SHS education rather than quantitative evaluation such as employment rates, and provide cooperative support across government departments so that skilled personnel from SHS receive proper evaluation and treatment.