• Title/Summary/Keyword: Neural Network Learning


Research on APC Verification for Disaster Victims and Vulnerable Facilities (재난약자 및 취약시설에 대한 APC실증에 관한 연구)

  • Seungyong Kim;Incheol Hwang;Dongsik Kim;Jungjae Shin;Seunggap Yong
    • Journal of the Society of Disaster Information
    • /
    • v.20 no.1
    • /
    • pp.199-205
    • /
    • 2024
  • Purpose: This study aims to improve the recognition rate of Auto People Counting (APC) in accurately identifying remaining evacuees in disaster-vulnerable facilities, such as nursing homes, and providing that information to firefighting and other response agencies in the event of a disaster. Methods: A baseline model was established using CNN (Convolutional Neural Network) models to improve the algorithm for recognizing images of incoming and outgoing individuals captured by cameras installed in actual disaster-vulnerable facilities operating APC systems. Various algorithms were analyzed, the top seven candidates were selected, and transfer learning models were used to identify the algorithm with the best performance. Results: Experimental results confirmed the precision and recall of the DenseNet201 and ResNet152V2 models, which exhibited the best performance in terms of time and accuracy. Both models achieved 100% accuracy for all labels, with the DenseNet201 model showing superior overall performance. Conclusion: The optimal algorithm applicable to APC was selected from among various artificial intelligence algorithms. Further research on algorithm analysis and learning is required to accurately identify incoming and outgoing individuals in disaster-vulnerable facilities across various disaster situations, such as emergencies.
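
For readers unfamiliar with the transfer-learning setup the abstract describes, the sketch below freezes an ImageNet-pretrained DenseNet201 backbone and attaches a small two-class head (incoming vs. outgoing); the input size, head layout, and hyperparameters are illustrative assumptions, not the paper's exact configuration.

```python
# A minimal transfer-learning sketch, assuming a frozen ImageNet-pretrained
# DenseNet201 backbone and a two-label head (incoming / outgoing). The input
# size, head, and training pipeline are illustrative, not the paper's setup.
import tensorflow as tf
from tensorflow.keras import layers, models

base = tf.keras.applications.DenseNet201(
    weights="imagenet", include_top=False, input_shape=(224, 224, 3))
base.trainable = False  # freeze pretrained features; train only the head

model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dense(128, activation="relu"),
    layers.Dense(2, activation="softmax"),  # incoming vs. outgoing
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# train_ds / val_ds would be tf.data datasets built from APC camera frames:
# model.fit(train_ds, validation_data=val_ds, epochs=10)
```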

Automated Data Extraction from Unstructured Geotechnical Report based on AI and Text-mining Techniques (AI 및 텍스트 마이닝 기법을 활용한 지반조사보고서 데이터 추출 자동화)

  • Park, Jimin;Seo, Wanhyuk;Seo, Dong-Hee;Yun, Tae-Sup
    • Journal of the Korean Geotechnical Society
    • /
    • v.40 no.4
    • /
    • pp.69-79
    • /
    • 2024
  • Field geotechnical data are obtained from various field and laboratory tests and are documented in geotechnical investigation reports. For efficient design and construction, digitizing these geotechnical parameters is essential. However, current practices involve manual data entry, which is time-consuming, labor-intensive, and prone to errors. Thus, this study proposes an automatic data extraction method from geotechnical investigation reports using image-based deep learning models and text-mining techniques. A deep-learning-based page classification model and a text-searching algorithm were employed to classify geotechnical investigation report pages with 100% accuracy. Computer vision algorithms were utilized to identify valid data regions within report pages, and text analysis was used to match and extract the corresponding geotechnical data. The proposed model was validated using a dataset of 205 geotechnical investigation reports, achieving an average data extraction accuracy of 93.0%. Finally, a user-interface-based program was developed to enhance the practical application of the extraction model. It allowed users to upload PDF files of geotechnical investigation reports, automatically analyze these reports, and extract and edit data. This approach is expected to improve the efficiency and accuracy of digitizing geotechnical investigation reports and building geotechnical databases.
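
As a loose illustration of the text-searching stage of this pipeline, the toy sketch below pairs a parameter keyword with the first number that follows it on a page of extracted text; the keyword list, regular expressions, and sample string are hypothetical, not taken from the paper.

```python
# Hypothetical sketch of the text-matching step: given OCR/extracted text
# from one report page, find target geotechnical parameters by keyword.
import re

# Illustrative parameter patterns; a real system would cover many more.
PARAMS = {"N-value": r"N[- ]?value\s*[:=]?\s*(\d+)",
          "unit weight": r"unit weight\s*[:=]?\s*([\d.]+)"}

def extract_parameters(page_text: str) -> dict:
    """Return {parameter: value} pairs found on one report page."""
    found = {}
    for name, pattern in PARAMS.items():
        m = re.search(pattern, page_text, flags=re.IGNORECASE)
        if m:
            found[name] = float(m.group(1))
    return found

print(extract_parameters(
    "Boring log BH-1 ... N-value: 23 ... unit weight = 18.5 kN/m3"))
```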

Current Status of Satellite Remote Sensing-Based Methane Emission Monitoring Technologies (인공위성 원격탐사 기반 메탄 배출 모니터링 기술 현황)

  • Minju Kim;Jeongwoo Park;Chang-Uk Hyun
    • Economic and Environmental Geology
    • /
    • v.57 no.5
    • /
    • pp.513-527
    • /
    • 2024
  • Methane is the second most significant greenhouse gas contributing to global warming after carbon dioxide, exerting a substantial impact on climate change. This paper provides a comprehensive review of satellite remote sensing-based methane detection technologies used to efficiently detect and quantify methane emissions. Methane emission sources are broadly categorized into natural sources (such as permafrost and wetlands) and anthropogenic sources (such as agriculture, coal mines, oil and gas fields, and landfills). This study focuses on anthropogenic sources and examines the principles of methane detection using information from various spectral bands, including the shortwave infrared (SWIR) band, and the utilization of key satellite data supporting these technologies. Recently, deep learning techniques have been applied in methane detection research using satellite data, contributing to more accurate analyses of methane emissions. Furthermore, this paper assesses the practicality of satellite-based methane monitoring by synthesizing case studies of methane emission detection at global, regional, and major incident scales, including examples of applying deep learning techniques. At the global scale, research utilizing satellite sensors like the Sentinel-5P TROPOspheric Monitoring Instrument (TROPOMI) was reviewed. At the regional scale, studies were highlighted where TROPOMI data was combined with relatively high-resolution satellite data, such as the Sentinel-2 MultiSpectral Instrument (MSI) and GHGSat Wide-Angle Fabry-Perot (WAF-P) Imaging Spectrometer, to detect methane emissions and sources. Through this comprehensive review, the current state and applicability of satellite-based methane detection technologies are evaluated.

A Study on the stock price prediction and influence factors through NARX neural network optimization (NARX 신경망 최적화를 통한 주가 예측 및 영향 요인에 관한 연구)

  • Cheon, Min Jong;Lee, Ook
    • Journal of the Korea Academia-Industrial cooperation Society
    • /
    • v.21 no.8
    • /
    • pp.572-578
    • /
    • 2020
  • The stock market is affected by unexpected factors, such as politics, society, and natural disasters, as well as by corporate performance and economic conditions. With the recent rise of artificial intelligence, many researchers have applied it to stock prediction. Our study proposes an experiment that uses not only stock-related data but also various other economic data. We acquired a year's worth of data on stock prices, the proportion of foreign investors, interest rates, and exchange rates, and combined them in various ways to diversify the input data fed into a nonlinear autoregressive network with exogenous inputs (NARX) model, whose predictions we then analyze and compare against the original data. The model is most accurate, with a root mean square error (RMSE) of 0.08, when configured with 10 neurons and two delays on a combination of stock prices and exchange rates from the U.S., China, Europe, and Japan. This study is meaningful in showing that the exchange rate has the greatest influence on stock prices, lowering the error from an RMSE of 0.589 obtained when only closing prices are used.
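
A minimal NARX-style sketch, assuming synthetic series and an sklearn MLP in place of the paper's NARX network: the closing price is predicted from two delays of itself and two delays of an exogenous exchange-rate series, mirroring the 10-neuron, two-delay configuration reported above.

```python
# A NARX-style sketch on synthetic data: predict the closing price from two
# delays of itself and two delays of an exogenous exchange-rate series.
# An MLP with 10 hidden neurons stands in for the paper's NARX network.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
close = np.cumsum(rng.normal(size=300))  # synthetic closing prices
fx = np.cumsum(rng.normal(size=300))     # synthetic exchange rate

# Two delays: inputs at t-2 and t-1 predict the close at t.
X = np.column_stack([close[:-2], close[1:-1], fx[:-2], fx[1:-1]])
y = close[2:]

model = MLPRegressor(hidden_layer_sizes=(10,), max_iter=2000, random_state=0)
model.fit(X[:250], y[:250])
rmse = np.sqrt(np.mean((model.predict(X[250:]) - y[250:]) ** 2))
print(f"test RMSE: {rmse:.3f}")
```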

A Study on Optimal Output Neuron Allocation of LVQ Neural Network using Variance Estimation (분산추정에 의한 LVQ 신경회로망의 최적 출력뉴런 분할에 관한 연구)

  • 정준원;조성원
    • Proceedings of the Korean Institute of Intelligent Systems Conference
    • /
    • 1996.10a
    • /
    • pp.239-242
    • /
    • 1996
  • This paper discusses a method for improving the performance of the LVQ (Learning Vector Quantization) algorithm, which is widely used in pattern recognition because it trains faster than BP (Back Propagation) and performs relatively well compared to other competitive-learning neural network algorithms. In general, LVQ involves negative learning, so it can diverge if the initial weights are not set properly, and as a competitive-learning network its performance is known to depend strongly on the number of output neurons [1]. For supervised LVQ with n classes and an equal number of training patterns per class, training is usually performed by assigning (number of output neurons / n) neurons to each class as its target (desired) cluster. This paper instead proposes estimating the variance of each class from the training data, allocating target output neurons to each class in proportion to the estimated variance, and likewise initializing the weights using randomly positioned input vectors from each class in proportion to the estimated variance. The proposed method does not search for the optimal number of output neurons for the data to be classified; rather, given a predetermined number of output neurons, it decides how many to allocate to each class based on the estimated variance of the data: classes with larger estimated variance receive relatively more output neurons, classes with smaller variance receive fewer, and the initial weights are determined in the same way. Because output neurons are thus allocated within the fixed budget according to each class's classification difficulty, the number of untrained neurons decreases and improved performance can be expected; experiments confirmed that the proposed method performs better.
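
A small sketch of the proposed allocation rule, under illustrative assumptions: a fixed budget of LVQ output neurons is divided among classes in proportion to each class's estimated variance, so more spread-out (harder) classes receive more neurons.

```python
# Sketch of variance-proportional output-neuron allocation for LVQ.
# Names, the rounding rule, and the toy data are illustrative assumptions;
# rounding may leave the total off by one, which a real implementation
# would redistribute.
import numpy as np

def allocate_neurons(X, y, total_neurons):
    """Split a fixed neuron budget across classes, proportional to variance."""
    classes = np.unique(y)
    var = np.array([X[y == c].var(axis=0).sum() for c in classes])
    counts = np.maximum(1, np.round(var / var.sum() * total_neurons).astype(int))
    return dict(zip(classes, counts))

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 0.5, (100, 2)),   # tight class 0
               rng.normal(3, 2.0, (100, 2))])  # spread-out class 1
y = np.array([0] * 100 + [1] * 100)
print(allocate_neurons(X, y, total_neurons=10))  # class 1 gets more neurons
```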


Fake News Detection on YouTube Using Related Video Information (관련 동영상 정보를 활용한 YouTube 가짜뉴스 탐지 기법)

  • Junho Kim;Yongjun Shin;Hyunchul Ahn
    • Journal of Intelligence and Information Systems
    • /
    • v.29 no.3
    • /
    • pp.19-36
    • /
    • 2023
  • As advances in information and communication technology have made it easier for anyone to produce and disseminate information, a new problem has emerged: fake news, false information intentionally shared to mislead people. Initially spread mainly as text, fake news has gradually evolved and is now distributed in multimedia formats. Since its founding in 2005, YouTube has become the world's leading video platform, used by most people worldwide, but it has also become a primary source of fake news, causing social problems. Various researchers have worked on detecting fake news on YouTube. Detection approaches divide into content-based and background-information-based methods, yet content-based approaches dominate both conventional fake news research and YouTube-specific detection research. This study instead proposes a fake news detection method based on background information: detecting fake news by utilizing information from a video's related videos on YouTube. Specifically, the original video and its related videos are vectorized using Doc2vec, an embedding technique, and the resulting vectors are fed to a CNN, a deep learning network, to detect fake news. The empirical analysis shows that the proposed method has better prediction performance than the existing content-based approach to detecting fake news on YouTube. The method contributes to a safer and more reliable society by preventing the spread of highly contagious fake news on YouTube.
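
A hedged sketch of the embedding step the abstract outlines: Doc2vec vectors for a video's own metadata and for its related videos are pooled and concatenated into one feature vector. The toy corpus is invented, and the classifier head that would consume the feature (the paper uses a CNN) is omitted.

```python
# Sketch of the Doc2vec feature construction, on an invented toy corpus.
from gensim.models.doc2vec import Doc2Vec, TaggedDocument
import numpy as np

docs = [TaggedDocument(words=t.split(), tags=[i]) for i, t in enumerate([
    "breaking news miracle cure revealed",       # original video (toy)
    "doctors debunk viral miracle cure claim",   # related video 1
    "health authority statement on treatment",   # related video 2
])]
d2v = Doc2Vec(documents=docs, vector_size=16, min_count=1, epochs=50)

original = d2v.dv[0]
related = np.mean([d2v.dv[1], d2v.dv[2]], axis=0)  # pool related videos
feature = np.concatenate([original, related])       # input to the classifier
print(feature.shape)  # (32,)
```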

Comparison of Models for Stock Price Prediction Based on Keyword Search Volume According to the Social Acceptance of Artificial Intelligence (인공지능의 사회적 수용도에 따른 키워드 검색량 기반 주가예측모형 비교연구)

  • Cho, Yujung;Sohn, Kwonsang;Kwon, Ohbyung
    • Journal of Intelligence and Information Systems
    • /
    • v.27 no.1
    • /
    • pp.103-128
    • /
    • 2021
  • Recently, investors' interest and the dissemination of stock-related information have been recognized as significant factors explaining stock returns and volume. Moreover, for companies that develop, distribute, or utilize innovative new technologies such as artificial intelligence, future stock returns and volatility are difficult to predict accurately due to macro-environment and market uncertainty. Since market uncertainty is recognized as an obstacle to the activation and spread of artificial intelligence technology, research is needed to mitigate it. Hence, the purpose of this study is to propose a machine learning model that predicts the volatility of a company's stock price using the internet search volume of artificial intelligence-related technology keywords as a measure of investor interest. To this end, we use VAR (Vector Auto Regression) and the deep neural network LSTM (Long Short-Term Memory) to predict the stock market, and compare stock price prediction performance based on keyword search volume across the technology's social acceptance stages. We also analyze the sub-technologies of artificial intelligence to examine how the search volume of detailed technology keywords changes with the acceptance stage and how interest in a specific technology affects the stock market forecast. The keywords artificial intelligence, deep learning, and machine learning were selected, and we measured how often each keyword appeared weekly in online documents over five years, from January 1, 2015 to December 31, 2019. Stock price and trading volume data of KOSDAQ-listed companies were also collected and used for analysis. We found that keyword search volume for artificial intelligence technology increased as its social acceptance increased; in particular, starting from the AlphaGo shock, search volume increased for artificial intelligence itself as well as for detailed technologies such as machine learning and deep learning. Stock price prediction based on keyword search volume showed high accuracy, and the acceptance stage yielding the best prediction performance differed by keyword. Across the social acceptance stages of the artificial intelligence technologies classified in this study, prediction accuracy was highest in the awareness stage, and the accuracy differed according to the keywords used in each stage's prediction model. Therefore, when constructing a stock price prediction model using technology keywords, the social acceptance of the technology and its sub-technology classification must be considered. The results of this study provide the following implications. First, to predict the return on investment in companies based on innovative technology, it is most important to capture the recognition stage, in which public interest rapidly increases within the social acceptance of the technology. Second, because keyword search volume and prediction-model accuracy vary with the social acceptance of a technology, this should be considered when developing decision support systems for investment, such as the big-data-based robo-advisors recently introduced by the financial sector.
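
As a rough sketch of the LSTM branch, the code below trains on windows of weekly keyword search volume to predict the next week's return; the synthetic series, window length, and layer sizes are assumptions for illustration, not the study's configuration.

```python
# Illustrative LSTM sketch: windows of weekly keyword search volume as
# input, next-week stock return as the target. All data here is synthetic.
import numpy as np
import tensorflow as tf

rng = np.random.default_rng(0)
search_volume = rng.random(260)       # ~5 years of weekly search counts
returns = rng.normal(0, 0.02, 260)    # weekly stock returns

window = 8
X = np.array([search_volume[i:i + window]
              for i in range(len(returns) - window)])
y = returns[window:]

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(window, 1)),
    tf.keras.layers.LSTM(32),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(X[..., None], y, epochs=5, verbose=0)  # add a channel dimension
```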

Multi-classification of Osteoporosis Grading Stages Using Abdominal Computed Tomography with Clinical Variables : Application of Deep Learning with a Convolutional Neural Network (멀티 모달리티 데이터 활용을 통한 골다공증 단계 다중 분류 시스템 개발: 합성곱 신경망 기반의 딥러닝 적용)

  • Tae Jun Ha;Hee Sang Kim;Seong Uk Kang;DooHee Lee;Woo Jin Kim;Ki Won Moon;Hyun-Soo Choi;Jeong Hyun Kim;Yoon Kim;So Hyeon Bak;Sang Won Park
    • Journal of the Korean Society of Radiology
    • /
    • v.18 no.3
    • /
    • pp.187-201
    • /
    • 2024
  • Osteoporosis is a major global health issue that often remains undetected until a fracture occurs. To facilitate early detection, deep learning (DL) models were developed to classify osteoporosis using abdominal computed tomography (CT) scans. The study used retrospectively collected data from 3,012 contrast-enhanced abdominal CT scans. Three DL models were constructed, using image data, demographic/clinical information, and multi-modality data, respectively. Patients were categorized into normal, osteopenia, and osteoporosis groups based on their T-scores obtained from dual-energy X-ray absorptiometry. The models showed high accuracy and effectiveness, with the combined-data model performing best, achieving an area under the receiver operating characteristic curve of 0.94 and an accuracy of 0.80. The image-based model also performed well, while the demographic-data model showed lower accuracy and effectiveness. In addition, the DL model was interpreted with gradient-weighted class activation mapping (Grad-CAM) to highlight clinically relevant features in the images, revealing the femoral neck as a common site for fractures. The study shows that DL can accurately identify osteoporosis stages from clinical data, indicating the potential of abdominal CT scans for early osteoporosis detection and for reducing fracture risk through prompt treatment.
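
One plausible way to fuse the two modalities the abstract describes is sketched below: a small CNN branch for a CT slice and a dense branch for demographic/clinical variables, concatenated before a three-class softmax (normal / osteopenia / osteoporosis). Input shapes, layer sizes, and the clinical feature count are illustrative assumptions, not the paper's architecture.

```python
# A multi-modality fusion sketch: CNN branch (CT slice) + dense branch
# (clinical variables), concatenated before a 3-class softmax.
import tensorflow as tf
from tensorflow.keras import layers, Model

img_in = layers.Input(shape=(256, 256, 1), name="ct_slice")
x = layers.Conv2D(32, 3, activation="relu")(img_in)
x = layers.MaxPooling2D()(x)
x = layers.Conv2D(64, 3, activation="relu")(x)
x = layers.GlobalAveragePooling2D()(x)

clin_in = layers.Input(shape=(8,), name="clinical")  # e.g. age, sex, BMI...
c = layers.Dense(16, activation="relu")(clin_in)

fused = layers.Concatenate()([x, c])
out = layers.Dense(3, activation="softmax")(fused)  # normal/osteopenia/osteoporosis
model = Model([img_in, clin_in], out)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```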

Optimal supervised LSA method using selective feature dimension reduction (선택적 자질 차원 축소를 이용한 최적의 지도적 LSA 방법)

  • Kim, Jung-Ho;Kim, Myung-Kyu;Cha, Myung-Hoon;In, Joo-Ho;Chae, Soo-Hoan
    • Science of Emotion and Sensibility
    • /
    • v.13 no.1
    • /
    • pp.47-60
    • /
    • 2010
  • Most classification research has used kNN (k-Nearest Neighbor) and SVM (Support Vector Machine), which are learning-based models, or Bayesian classifiers and NNA (Neural Network Algorithm), which are statistics-based methods. However, these face space and time limitations when classifying the enormous number of web pages on today's internet. Moreover, most classification studies use uni-gram feature representation, which captures the real meaning of words poorly. Korean web page classification faces additional problems because Korean words often carry multiple meanings (polysemy). For these reasons, LSA (Latent Semantic Analysis) is proposed for classifying well in this environment (large data sets and polysemous words). LSA uses SVD (Singular Value Decomposition), which decomposes the original term-document matrix into three matrices and reduces their dimensions. This creates a new low-dimensional semantic space for representing vectors, which makes classification efficient and reveals the latent meaning of words and documents (or web pages). Although LSA is good at classification, it has a drawback: as SVD reduces the matrix dimensions and creates the new semantic space, it selects the dimensions that best represent the vectors, not the dimensions that best discriminate between them. This is why LSA does not improve classification performance as much as expected. In this paper, we propose a new LSA that selects optimal dimensions to both discriminate and represent vectors well, minimizing this drawback and improving performance. The proposed method shows better and more stable performance than other LSA variants in low-dimensional spaces. In addition, we derive further improvement in classification by creating and selecting features, reducing stopwords, and statistically weighting specific values.
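
A minimal sketch of the paper's core idea, on an invented toy corpus: after SVD, keep not the top-k singular dimensions but the k dimensions that best discriminate the classes (scored here with a one-way ANOVA F-test as a stand-in for the paper's selection criterion).

```python
# Supervised LSA sketch: reduce with SVD, then select the dimensions that
# best discriminate classes rather than the top singular dimensions.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.feature_selection import f_classif

corpus = ["stock price rises", "stock market falls sharply",
          "plant leaf growth model", "lettuce leaf width prediction",
          "market volatility and stocks", "measuring lettuce leaf length"]
labels = np.array([0, 0, 1, 1, 0, 1])  # two toy topic classes

X = TfidfVectorizer().fit_transform(corpus)         # term-document matrix
Z = TruncatedSVD(n_components=4, random_state=0).fit_transform(X)

F, _ = f_classif(Z, labels)     # how well each SVD dimension separates classes
keep = np.argsort(F)[::-1][:2]  # keep the 2 most discriminative dimensions
print("selected dims:", keep, "scores:", np.round(F, 2))
```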


Comparison of Convolutional Neural Network (CNN) Models for Lettuce Leaf Width and Length Prediction (상추잎 너비와 길이 예측을 위한 합성곱 신경망 모델 비교)

  • Ji Su Song;Dong Suk Kim;Hyo Sung Kim;Eun Ji Jung;Hyun Jung Hwang;Jaesung Park
    • Journal of Bio-Environment Control
    • /
    • v.32 no.4
    • /
    • pp.434-441
    • /
    • 2023
  • Determining the size or area of a plant's leaves is an important factor in predicting plant growth and improving the productivity of indoor farms. In this study, we developed a convolutional neural network (CNN)-based model to accurately predict the length and width of lettuce leaves from photographs. A callback function was applied to overcome data limitations and overfitting, and K-fold cross-validation was used to improve the model's generalization ability. In addition, the ImageDataGenerator function was used to increase the diversity of the training data through augmentation. To compare performance, we evaluated pre-trained models such as VGG16, ResNet152, and NASNetMobile. NASNetMobile showed the highest performance: for width prediction, R² was 0.9436 with an RMSE of 0.5659; for length prediction, R² was 0.9537 with an RMSE of 0.8713. The optimized model adopted the NASNetMobile architecture, the RMSprop optimizer, the MSE loss function, and the ELU activation function. Training averaged 73 minutes per epoch, and the model took an average of 0.29 seconds to process a single lettuce leaf photo. The CNN-based model developed here to predict leaf length and width in indoor farms is expected to enable rapid and accurate assessment of plant growth status simply from images, and to contribute to farm productivity and resource efficiency by supporting timely agricultural measures such as real-time nutrient solution adjustment.
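
A hedged sketch of the regression setup: a NASNetMobile backbone with a two-output head (leaf width, leaf length), ELU activation, MSE loss, and RMSprop, matching the ingredients the abstract lists; the input size and head layout are illustrative assumptions.

```python
# Regression sketch: NASNetMobile backbone, two continuous outputs
# (leaf width, leaf length), ELU head, MSE loss, RMSprop optimizer.
import tensorflow as tf
from tensorflow.keras import layers, models

base = tf.keras.applications.NASNetMobile(
    weights="imagenet", include_top=False, input_shape=(224, 224, 3))

model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dense(64, activation="elu"),
    layers.Dense(2),  # regression outputs: [width, length]
])
model.compile(optimizer=tf.keras.optimizers.RMSprop(), loss="mse",
              metrics=[tf.keras.metrics.RootMeanSquaredError()])
# leaf_images / leaf_sizes would come from the augmented photo dataset:
# model.fit(leaf_images, leaf_sizes, epochs=..., callbacks=[...])
```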