• Title/Summary/Keyword: Neural Networks

Introducing SEABOT: Methodological Quests in Southeast Asian Studies

  • Keck, Stephen
    • SUVANNABHUMI
    • /
    • v.10 no.2
    • /
    • pp.181-213
    • /
    • 2018
  • How should one study Southeast Asia (SEA)? The need to explore and identify methodologies for studying SEA is inherent in its multifaceted subject matter. At a minimum, the region's rich cultural diversity inhibits both the articulation of decisive defining characteristics and the training of scholars who can write with confidence beyond their specialisms. Consequently, the challenges of understanding the region remain, and a consensus regarding the most effective approaches to studying its history, identity and future seems quite unlikely. Furthermore, "Area Studies" more generally has proved to be a less attractive frame of reference for burgeoning scholarly trends. This paper proposes a new tool to help address these challenges. Even though the science of artificial intelligence (AI) is in its infancy, it has already yielded new approaches to many commercial, scientific and humanistic questions. At this point, AI has been used to produce news, improve smartphones, deliver more entertainment choices, analyze earthquakes and write fiction. The time has come to explore the possibility that AI can be put at the service of the study of SEA. The paper lays out what would be required to develop SEABOT, an instrument that might exist as a robot on the web and be called upon to make the study of SEA both broader and more comprehensive. The discussion explores the financial resources, ownership and timeline needed to take SEABOT from an idea to a reality. SEABOT would draw upon artificial neural networks (ANNs) to mine the region's "Big Data" while synthesizing the information to form new and useful perspectives on SEA. Overcoming significant language issues, applying multidisciplinary methods and drawing upon new yields of information should produce new questions and ways to conceptualize SEA. SEABOT could lead to findings which might not otherwise be achieved. SEABOT's work might well produce outcomes which could open up solutions to immediate regional problems, provide ASEAN planners with new resources and make it possible to eventually define and capitalize on SEA's "soft power". That is, new findings should provide the basis for ASEAN diplomats and policy-makers to develop new modalities of cultural diplomacy and improved governance. Last, SEABOT might also open up avenues to tell the SEA story in new and distinctive ways. SEABOT is treated here as a heuristic device to explore the results which such an instrument might yield. More important, the discussion also raises the possibility that an AI-driven perspective on SEA may prove to be more problematic than beneficial.

Automation of Regression Analysis for Predicting Flatfish Production (광어 생산량 예측을 위한 회귀분석 자동화 시스템 구축)

  • Ahn, Jinhyun;Kang, Jungwoon;Kim, Mincheol;Park, So-Young
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2021.10a
    • /
    • pp.128-130
    • /
    • 2021
  • This study aims to implement a regression-analysis system for predicting the appropriate production of flatfish. Due to Korea's signing of FTAs with countries around the world and the accelerating opening of its markets, Korean flatfish farming businesses are experiencing many difficulties caused by the specificity and uncertainty of their environment. In addition, a solution is needed for problems such as sluggish consumption and falling prices brought on by the recent surge in imported seafood such as salmon and yellowtail and by changes in people's dietary habits. In this study, the Python module xlwings was used to obtain flatfish production figures and to predict the amount of flatfish to be produced in the future (a minimal sketch of such a pipeline follows this entry). Based on the analysis results of this prediction of flatfish production, the flatfish aquaculture industry will be able to come up with a plan to achieve an appropriate production volume and control supply and demand, which will reduce unnecessary economic loss and promote new, data-driven value creation. In addition, through the data-driven approach attempted in this study, various analysis techniques such as artificial neural networks and multiple regression analysis can be applied in future research in various fields, providing a foundation of basic data for effectively analyzing and utilizing big data in various industries.
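
The pipeline named in the abstract (reading production figures from Excel with xlwings and fitting a regression for forecasting) might look roughly like the sketch below. The workbook name, sheet name, and column layout are assumptions, and scikit-learn's LinearRegression stands in for whatever regression routine the authors used; this is not their code.

```python
# Minimal sketch of the xlwings-plus-regression pipeline described above.
# Workbook name, sheet name, and column layout are assumptions.
import numpy as np
import xlwings as xw
from sklearn.linear_model import LinearRegression

book = xw.Book("flatfish_production.xlsx")   # hypothetical workbook
sheet = book.sheets["production"]            # hypothetical sheet
# Read a two-column table (year, annual production) starting at A2.
table = sheet.range("A2").options(np.array, expand="table").value

years = table[:, 0].reshape(-1, 1)
production = table[:, 1]

model = LinearRegression().fit(years, production)
next_year = np.array([[years.max() + 1]])
print("forecast for next year:", model.predict(next_year)[0])
```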

A Comparative Study on Data Augmentation Using Generative Models for Robust Solar Irradiance Prediction

  • Jinyeong Oh;Jimin Lee;Daesungjin Kim;Bo-Young Kim;Jihoon Moon
    • Journal of the Korea Society of Computer and Information
    • /
    • v.28 no.11
    • /
    • pp.29-42
    • /
    • 2023
  • In this paper, we propose a method to enhance the prediction accuracy of solar irradiance for three major South Korean cities: Seoul, Busan, and Incheon. Our method entails the development of five generative models (vanilla GAN, CTGAN, Copula GAN, WGAN-GP, and TVAE) to generate independent variables that mimic the patterns of the existing training data. To mitigate bias in model training, we derive values for the dependent variables using random forests and deep neural networks, enriching the training datasets. These datasets are integrated with the existing data to form comprehensive solar irradiance prediction models. The experiments revealed that the augmented datasets led to significantly improved model performance compared to models trained solely on the original data. Specifically, CTGAN showed outstanding results due to its sophisticated mechanism for handling the intricacies of multivariate data relationships, ensuring that the generated data are diverse and closely aligned with the real-world variability of solar irradiance (a sketch of this augmentation step follows this entry). The proposed method is expected to address the issue of data scarcity by augmenting the training data with high-quality synthetic data, thereby contributing to the operation of solar power systems for sustainable development.
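
The augmentation step can be illustrated with the open-source ctgan package. This is a minimal sketch under assumed file and column names, not the authors' pipeline; following the abstract, a random forest derives the dependent variable for the synthetic rows.

```python
# Sketch of CTGAN-based augmentation for tabular weather features.
# File name, column names, and epoch count are assumptions.
import pandas as pd
from ctgan import CTGAN
from sklearn.ensemble import RandomForestRegressor

real = pd.read_csv("seoul_weather.csv")                # assumed input file
features = ["temperature", "humidity", "cloud_cover"]  # assumed predictors

generator = CTGAN(epochs=300)
generator.fit(real[features])        # learn the joint feature distribution
synthetic = generator.sample(5000)   # draw synthetic predictor rows

rf = RandomForestRegressor(random_state=0)
rf.fit(real[features], real["irradiance"])             # assumed target column
synthetic["irradiance"] = rf.predict(synthetic)        # derive dependent variable

augmented = pd.concat([real, synthetic], ignore_index=True)  # enlarged training set
```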

Semantic Segmentation of Hazardous Facilities in Rural Area Using U-Net from KOMPSAT Ortho Mosaic Imagery (KOMPSAT 정사모자이크 영상으로부터 U-Net 모델을 활용한 농촌위해시설 분류)

  • Sung-Hyun Gong;Hyung-Sup Jung;Moung-Jin Lee;Kwang-Jae Lee;Kwan-Young Oh;Jae-Young Chang
    • Korean Journal of Remote Sensing
    • /
    • v.39 no.6_3
    • /
    • pp.1693-1705
    • /
    • 2023
  • Rural areas, which account for about 90% of the country's land area, are increasing in importance and value as spaces that perform various public functions. However, facilities that adversely affect residents' lives, such as livestock facilities, factories, and solar panels, are being built indiscriminately near residential areas, damaging the rural environment and landscape and lowering residents' quality of life. In order to prevent disorderly development in rural areas and manage rural space in a planned manner, detection and monitoring of hazardous facilities in rural areas are necessary. Data can be acquired through satellite imagery, which can be obtained periodically and provides information on the entire region. Effective detection is possible by utilizing image-based deep learning techniques built on convolutional neural networks. Therefore, the U-Net model, which shows high performance in semantic segmentation, was used to classify potentially hazardous facilities in rural areas. In this study, KOMPSAT ortho-mosaic optical imagery provided by the Korea Aerospace Research Institute in 2020 with a spatial resolution of 0.7 m was used, and AI training data for livestock facilities, factories, and solar panels were produced by hand for training and inference. After training with U-Net, a pixel accuracy of 0.9739 and a mean Intersection over Union (mIoU) of 0.7025 were achieved (a sketch of how these metrics are computed follows this entry). The results of this study can be used for monitoring hazardous facilities in rural areas and are expected to serve as a basis for rural planning.
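
The reported pixel accuracy and mIoU are standard segmentation metrics computed from a confusion matrix. The sketch below is illustrative only, not the authors' code; the class IDs and array shapes are assumptions.

```python
# Illustrative computation of pixel accuracy and mean IoU from predicted and
# reference label maps. Assumed classes: 0 = background, 1 = livestock
# facility, 2 = factory, 3 = solar panel.
import numpy as np

def pixel_accuracy_and_miou(pred, truth, num_classes):
    """pred, truth: integer label arrays of identical shape."""
    idx = truth.ravel() * num_classes + pred.ravel()
    conf = np.bincount(idx, minlength=num_classes**2).reshape(num_classes, num_classes)
    pixel_acc = np.trace(conf) / conf.sum()
    tp = np.diag(conf)
    iou = tp / (conf.sum(axis=0) + conf.sum(axis=1) - tp)  # TP / (TP + FP + FN)
    return pixel_acc, iou.mean()

pred = np.random.randint(0, 4, (512, 512))   # placeholder predictions
truth = np.random.randint(0, 4, (512, 512))  # placeholder ground truth
print(pixel_accuracy_and_miou(pred, truth, num_classes=4))
```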

Deep Learning-Based Algorithm for the Detection and Characterization of MRI Safety of Cardiac Implantable Electronic Devices on Chest Radiographs

  • Ue-Hwan Kim;Moon Young Kim;Eun-Ah Park;Whal Lee;Woo-Hyun Lim;Hack-Lyoung Kim;Sohee Oh;Kwang Nam Jin
    • Korean Journal of Radiology
    • /
    • v.22 no.11
    • /
    • pp.1918-1928
    • /
    • 2021
  • Objective: With the recent development of various MRI-conditional cardiac implantable electronic devices (CIEDs), the accurate identification and characterization of CIEDs have become critical when performing MRI in patients with CIEDs. We aimed to develop and evaluate a deep learning-based algorithm (DLA) that performs the detection and characterization of parameters, including MRI safety, of CIEDs on chest radiographs (CRs) in a single step, and to compare its performance with other related algorithms that were recently developed. Materials and Methods: We developed a DLA (X-ray CIED identification [XCID]) using 9912 CRs of 958 patients with 968 CIEDs comprising 26 model groups from 4 manufacturers, obtained between 2014 and 2019 from one hospital. The performance of XCID was tested with an external dataset consisting of 2122 CRs obtained from a different hospital and compared with the performance of two other recently reported algorithms, PacemakerID (PID) and Pacemaker identification with neural networks (PPMnn). Results: The overall accuracies of XCID for manufacturer classification, model group identification, and MRI safety characterization on the internal test dataset were 99.7% (992/995), 97.2% (967/995), and 98.9% (984/995), respectively. These were 95.8% (2033/2122), 85.4% (1813/2122), and 92.2% (1956/2122), respectively, on the external test dataset. In the comparative study, the accuracy for manufacturer classification was 95.0% (152/160) for XCID and 91.3% (146/160) for PPMnn, both significantly higher than that for PID (80.0%, 128/160; p < 0.001 for both). XCID demonstrated a higher accuracy (88.1%; 141/160) than PPMnn (80.0%; 128/160) in identifying model groups (p < 0.001); a sketch of a standard paired-comparison test follows this entry. Conclusion: The remarkable and consistent performance of XCID suggests its applicability for detection, manufacturer and model identification, and MRI safety characterization of CIEDs on CRs. Further studies are warranted to guarantee the safe use of XCID in clinical practice.
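
The abstract reports p-values for paired accuracy comparisons on the same 160 cases but does not state which test was used. McNemar's test is a standard choice for comparing two classifiers on paired outcomes, so the sketch below is an assumption about method, with placeholder outcome vectors.

```python
# Hedged sketch: McNemar's test for two classifiers evaluated on the same
# cases. The per-case correctness vectors below are placeholders, not the
# study's data, and the study does not confirm it used this test.
import numpy as np
from statsmodels.stats.contingency_tables import mcnemar

rng = np.random.default_rng(0)
correct_a = rng.random(160) < 0.95   # placeholder: per-case correctness of model A
correct_b = rng.random(160) < 0.80   # placeholder: per-case correctness of model B

# 2x2 table of agreement/disagreement between the two classifiers.
table = [[np.sum(correct_a & correct_b),  np.sum(correct_a & ~correct_b)],
         [np.sum(~correct_a & correct_b), np.sum(~correct_a & ~correct_b)]]
print("p-value:", mcnemar(table, exact=True).pvalue)
```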

Brain Activities by the Generating-Process-Types of Scientific Emotion in the Pre-Service Teachers' Hypothesis Generation About Biological Phenomena: An fMRI Study (예비교사들의 생물학 가설 생성에서 나타나는 과학적 감성의 생성 과정 유형별 두뇌 활성화에 대한 fMRI 연구)

  • Shin, Dong-Hoon;Kwon, Yong-Ju
    • Journal of The Korean Association For Science Education
    • /
    • v.26 no.4
    • /
    • pp.568-580
    • /
    • 2006
  • The purpose of this study was to investigate brain activities associated with the four types of the Generating Process of Scientific Emotion (GPSE) during hypothesis generation about biological phenomena, using fMRI. The four types of GPSE were the Basic Generating Process (BGP), the Retrospective Generating Process (RGP), the Cognitive Generating Process (CGP), and the Attributive Generating Process (AGP). For this study, we designed an experiment capable of validating the four types of generating process (i.e., BGP, RGP, CGP, and AGP) and then measured BOLD signals of 10 pre-service teachers' brain activities with a 3.0T fMRI system. The subjects were 10 healthy females majoring in biology education. As a result, there were clear differences among the four types of GPSE. Brain areas activated by BGP were the right occipital lobe (BA 17), the left thalamus, and the left parahippocampal gyrus, whereas for RGP, the left superior parietal lobe (BA 8, 9), the left pulvinar, and the left globus pallidus were activated. Brain areas activated by CGP were the right posterior cingulate and the left medial frontal gyrus (BA 6). In the case of AGP, the most distinctively activated brain areas were the right medial frontal gyrus (BA 8) and the left inferior parietal lobule (BA 40). These results suggest that each of the four types of GPSE has its own specific neural network in the brain. Furthermore, they may provide a basis for brain-based learning in science education.

A New Bias Scheduling Method for Improving Both Classification Performance and Precision on the Classification and Regression Problems (분류 및 회귀문제에서의 분류 성능과 정확도를 동시에 향상시키기 위한 새로운 바이어스 스케줄링 방법)

  • Kim Eun-Mi;Park Seong-Mi;Kim Kwang-Hee;Lee Bae-Ho
    • Journal of KIISE:Software and Applications
    • /
    • v.32 no.11
    • /
    • pp.1021-1028
    • /
    • 2005
  • A general solution for classification and regression problems can be found by matching and modifying matrices with information from the real world, and these matrices are then learned in neural networks. This paper treats the primary space as the real world and the dual space as the space to which the primary space is mapped via a kernel. In practice, there are two kinds of problems: complete systems, for which an answer can be obtained using the inverse matrix, and ill-posed or singular systems, for which an answer cannot be obtained directly from the inverse of the given matrix. Moreover, problems are often of the latter kind; therefore, it is necessary to find a regularization parameter that turns ill-posed or singular problems into complete systems. This paper compares the performance, on both classification and regression problems, of GCV and the L-curve, which are well-known methods for obtaining a regularization parameter, against kernel methods. Both GCV and the L-curve perform excellently in obtaining regularization parameters, and their performances are similar, although they show slightly different results under different problem conditions (a sketch of the GCV criterion follows this entry). However, these methods are two-step solutions, because the regularization parameter must first be computed before the problem can be passed to another solving method. Compared with GCV and the L-curve, kernel methods are a one-step solution that simultaneously learns the regularization parameter within the learning process of the pattern weights. This paper also suggests dynamic momentum, which is learned under a limited proportional condition between the learning epoch and the performance on the given problem, to increase the performance and precision of the regularization. Finally, this paper shows that the suggested solution obtains better or equivalent results compared with GCV and the L-curve in experiments using the Iris data, a standard benchmark for classification; Gaussian data, which are typical of singular systems; and the Shaw data, a one-dimensional image-restoration problem.
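
For readers unfamiliar with GCV, the criterion for Tikhonov (ridge) regularization can be evaluated from the SVD of the system matrix. The sketch below is a minimal illustration, not the paper's implementation; a randomly generated system stands in for the Iris, Gaussian, or Shaw data.

```python
# Minimal sketch of the GCV criterion for selecting the Tikhonov (ridge)
# regularization parameter via the SVD; illustrative, not the paper's code.
import numpy as np

def gcv_score(A, y, lam):
    """GCV(lam) = m * ||y - A x_lam||^2 / trace(I - influence matrix)^2."""
    U, s, _ = np.linalg.svd(A, full_matrices=False)
    filt = s**2 / (s**2 + lam**2)          # Tikhonov filter factors
    beta = U.T @ y
    # Residual: damped in-range components plus the out-of-range part of y.
    residual = np.sum(((1 - filt) * beta) ** 2) + np.sum((y - U @ beta) ** 2)
    eff_dof = len(y) - filt.sum()          # trace(I - A @ A_lam^+)
    return len(y) * residual / eff_dof**2

# Randomly generated system standing in for an ill-posed problem.
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 30))
y = A @ rng.standard_normal(30) + 0.01 * rng.standard_normal(50)

grid = np.logspace(-6, 2, 100)
lam_best = min(grid, key=lambda lam: gcv_score(A, y, lam))
print("GCV-selected lambda:", lam_best)
```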

Requirement Analysis for Agricultural Meteorology Information Service Systems based on the Fourth Industrial Revolution Technologies (4차 산업혁명 기술에 기반한 농업 기상 정보 시스템의 요구도 분석)

  • Kim, Kwang Soo;Yoo, Byoung Hyun;Hyun, Shinwoo;Kang, DaeGyoon
    • Korean Journal of Agricultural and Forest Meteorology
    • /
    • v.21 no.3
    • /
    • pp.175-186
    • /
    • 2019
  • Efforts have been made to introduce climate smart agriculture (CSA) for adaptation to future climate conditions, which would require the collection and management of site-specific meteorological data. The objectives of this study were to identify requirements for the construction of an agricultural meteorology information service system (AMISS) using technologies that lead the fourth industrial revolution, e.g., the internet of things (IoT), artificial intelligence, and cloud computing. IoT sensors with low cost and low operating current would be useful for organizing a wireless sensor network (WSN) for the collection and analysis of weather measurement data, which would aid the assessment of productivity for an agricultural ecosystem. It would be recommended to extend the spatial extent of the WSN to a rural community, which would benefit a greater number of farms. It is preferable to create big data for agricultural meteorology in order to produce and evaluate site-specific data in rural areas. The digital climate map can be improved using artificial intelligence such as deep neural networks. Furthermore, cloud computing and fog computing would help reduce costs and enhance the user experience of the AMISS. In addition, it would be advantageous to combine environmental data with farm management data, e.g., price data for the produce of interest. It would also be necessary to develop a mobile application whose user interface meets the needs of stakeholders. These fourth industrial revolution technologies would facilitate the development of the AMISS and the wide application of CSA.

Detection of Wildfire Burned Areas in California Using Deep Learning and Landsat 8 Images (딥러닝과 Landsat 8 영상을 이용한 캘리포니아 산불 피해지 탐지)

  • Youngmin Seo;Youjeong Youn;Seoyeon Kim;Jonggu Kang;Yemin Jeong;Soyeon Choi;Yungyo Im;Yangwon Lee
    • Korean Journal of Remote Sensing
    • /
    • v.39 no.6_1
    • /
    • pp.1413-1425
    • /
    • 2023
  • The increasing frequency of wildfires due to climate change is causing extreme losses of life and property. Wildfires destroy vegetation and drive ecosystem changes whose extent depends on their intensity and occurrence; ecosystem changes, in turn, affect wildfire occurrence, causing secondary damage. Thus, accurate estimation of the areas affected by wildfires is fundamental. Satellite remote sensing is used for forest fire detection because it can rapidly acquire topographic and meteorological information about the affected area after forest fires. In addition, deep learning algorithms such as convolutional neural networks (CNNs) and transformer models show high performance for more accurate monitoring of fire-burnt regions. To date, the application of deep learning models has been limited, and there is a scarcity of reports providing quantitative performance evaluations for practical field utilization. Hence, this study emphasizes a comparative analysis, exploring performance enhancements achieved through both model selection and data design. This study examined deep learning models for detecting wildfire-damaged areas using Landsat 8 satellite images in California, and conducted a comprehensive comparison and analysis of the detection performance of multiple models, such as U-Net and High-Resolution Network-Object Contextual Representation (HRNet-OCR). Wildfire-related spectral indices such as the normalized difference vegetation index (NDVI) and the normalized burn ratio (NBR) were used as input channels for the deep learning models to reflect the degree of vegetation cover and surface moisture content (the standard index formulas are sketched after this entry). As a result, the mean intersection over union (mIoU) was 0.831 for U-Net and 0.848 for HRNet-OCR, showing high segmentation performance. The inclusion of spectral indices alongside the base wavelength bands increased the metric values for all combinations, affirming that augmenting the input data with spectral indices contributes to more refined pixel-level classification. This study can be applied to other satellite images to build recovery strategies for fire-burnt areas.
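
The two spectral indices used as extra input channels have standard definitions: NDVI = (NIR - Red) / (NIR + Red) and NBR = (NIR - SWIR2) / (NIR + SWIR2); for Landsat 8 OLI these correspond to bands 5, 4, and 7. The sketch below stacks them as extra channels; the reflectance arrays are placeholders, not the study's data.

```python
# Standard index formulas used as extra input channels in the study.
# For Landsat 8 OLI: Red = band 4, NIR = band 5, SWIR2 = band 7.
import numpy as np

def ndvi(nir, red):
    """Normalized difference vegetation index: (NIR - Red) / (NIR + Red)."""
    return (nir - red) / (nir + red + 1e-10)   # epsilon avoids division by zero

def nbr(nir, swir2):
    """Normalized burn ratio: (NIR - SWIR2) / (NIR + SWIR2)."""
    return (nir - swir2) / (nir + swir2 + 1e-10)

# Placeholder reflectance arrays; real inputs would be Landsat 8 band rasters.
rng = np.random.default_rng(0)
b4, b5, b7 = (rng.random((256, 256)) for _ in range(3))
channels = np.stack([b4, b5, b7, ndvi(b5, b4), nbr(b5, b7)], axis=-1)
```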

The Pattern Analysis of Financial Distress for Non-audited Firms using Data Mining (데이터마이닝 기법을 활용한 비외감기업의 부실화 유형 분석)

  • Lee, Su Hyun;Park, Jung Min;Lee, Hyoung Yong
    • Journal of Intelligence and Information Systems
    • /
    • v.21 no.4
    • /
    • pp.111-131
    • /
    • 2015
  • Only a handful of studies have been conducted on pattern analysis of corporate distress, as compared with research on bankruptcy prediction. The few that exist mainly focus on audited firms, because financial data collection is easier for these firms. But in reality, corporate financial distress is a far more common and critical phenomenon for non-audited firms, which mainly comprise small and medium-sized firms. The purpose of this paper is to classify non-audited firms under distress according to their financial ratios using a data mining method, the Self-Organizing Map (SOM). A SOM is a type of artificial neural network that is trained using unsupervised learning to produce a lower-dimensional discretized representation of the input space of the training samples, called a map. The SOM differs from other artificial neural networks in that it applies competitive learning, as opposed to error-correction learning such as backpropagation with gradient descent, and in that it uses a neighborhood function to preserve the topological properties of the input space; it is one of the most popular and successful clustering algorithms (a minimal sketch of SOM clustering follows this entry). In this study, we classify types of financially distressed firms, specifically non-audited firms. In the empirical test, we collected 10 financial ratios of 100 non-audited firms under distress in 2004 for the previous two years (2002 and 2003). Using these financial ratios and the SOM algorithm, five distinct patterns were distinguished. In pattern 1, financial distress was very serious in almost all financial ratios; 12% of the firms fell into this pattern. In pattern 2, financial distress was weak in almost all financial ratios; 14% of the firms fell into this pattern. In pattern 3, the growth ratio was the worst among all patterns; it is speculated that the firms in this pattern may be under distress due to severe competition in their industries. Approximately 30% of the firms fell into this group. In pattern 4, the growth ratio was higher than in any other pattern, but the cash ratio and profitability ratio were not at the level of the growth ratio; it is concluded that the firms in this pattern were under distress in pursuit of expanding their business. About 25% of the firms were in this pattern. Last, pattern 5 encompassed very solvent firms; perhaps firms in this pattern were distressed due to a bad short-term strategic decision or due to problems with the entrepreneurs of the firms. Approximately 18% of the firms fell under this pattern. This study makes both academic and empirical contributions. From the academic perspective, non-audited companies, which tend to go bankrupt easily and whose financial data are unstructured or easily manipulated, are classified by a data mining technique (the Self-Organizing Map), rather than large audited firms with well-prepared and reliable financial data. From the empirical perspective, even though only the financial data of non-audited firms are analyzed, the analysis is useful for detecting the first symptoms of financial distress, which enables forecasting bankruptcy and managing early-warning and alert signals. A limitation of this research is that only 100 companies were analyzed, owing to the difficulty of collecting financial data for non-audited firms, which made it hard to proceed to analyses by category or size. Also, non-financial qualitative data are crucial for the analysis of bankruptcy; thus, non-financial qualitative factors should be taken into account in future studies. This study sheds some light on distress prediction for non-audited small and medium-sized firms in the future.
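
The clustering step can be illustrated with the open-source MiniSom library. This is a sketch under stated assumptions: the paper does not name its SOM implementation, the 5x5 grid size is arbitrary, and the ratio matrix below is a placeholder for the 100-firm, 10-ratio dataset.

```python
# Sketch of SOM clustering of financial ratios with MiniSom; illustrative
# only, not the paper's implementation. Grid size and data are assumptions.
import numpy as np
from minisom import MiniSom

rng = np.random.default_rng(0)
ratios = rng.random((100, 10))                                # 100 firms x 10 ratios
ratios = (ratios - ratios.mean(axis=0)) / ratios.std(axis=0)  # standardize each ratio

som = MiniSom(5, 5, input_len=10, sigma=1.0, learning_rate=0.5, random_seed=0)
som.random_weights_init(ratios)
som.train_random(ratios, num_iteration=1000)

# Each firm maps to its best-matching unit; nearby units indicate similar
# distress patterns, which can then be grouped into a few reported types.
bmus = [som.winner(x) for x in ratios]
```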