• Title/Summary/Keyword: making techniques

A Study on Smart Teaching Plan Production System Combined Education Profiling (교육 프로파일링을 융합한 스마트 교안제작 시스템에 관한 연구)

  • Kim, Ki-Bong;Cho, Han-Jin
    • Journal of Digital Convergence
    • /
    • v.13 no.3
    • /
    • pp.185-191
    • /
    • 2015
  • We combine educational profiling techniques with smart teaching plan production. To that end, we surveyed the technical elements required for profiling, along with the relevant technology and product trends. We then propose a smart teaching plan production system built around a teaching plan production and editing technique, a management technique for smart teaching plans, and the related technologies that tie the system together. Using the proposed techniques to build an effective smart teaching plan production system, a content management system, and a virtual classroom lets instructors easily manage their classes, produce class teaching plans, and handle file management for their teaching group, while also making lessons easier for students to understand.

A Review on Remote Sensing and GIS Applications to Monitor Natural Disasters in Indonesia

  • Hakim, Wahyu Luqmanul;Lee, Chang-Wook
    • Korean Journal of Remote Sensing
    • /
    • v.36 no.6_1
    • /
    • pp.1303-1322
    • /
    • 2020
  • Indonesia is highly prone to natural disasters because of its geological setting at the junction of three major tectonic plates, which produces frequent seismic activity, earthquakes, volcanic eruptions, and tsunamis. These events can in turn trigger secondary disasters such as landslides, floods, land subsidence, and coastal inundation, so monitoring them is essential for predicting and limiting damage to the environment. We reviewed the application of remote sensing and Geographic Information Systems (GIS) for detecting natural disasters in Indonesia, based on 43 articles, focusing on InSAR techniques, image classification, and susceptibility mapping. InSAR has been used to monitor disasters that deform the Earth's surface in Indonesia, such as earthquakes, volcanic activity, and land subsidence; few studies, however, have applied InSAR to landslides, so monitoring the unstable slopes that lead to landslides remains an important gap. Image classification techniques have been used to assess pre- and post-disaster conditions for earthquakes, tsunamis, forest fires, and volcanic eruptions, although studies classifying flood damage in Indonesia are scarce. Flood mapping does appear in susceptibility studies, and landslide susceptibility mapping in Indonesia has been studied extensively; land subsidence susceptibility mapping, in contrast, deserves more attention given the many reported cases of subsidence in several Indonesian cities.
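
As a rough illustration of the susceptibility-mapping side of this literature, the sketch below fits a logistic regression to synthetic conditioning factors (slope, rainfall, distance to faults) against a synthetic landslide inventory. The variables, data, and model choice are assumptions made for demonstration, not taken from any of the reviewed studies.

```python
# Hypothetical susceptibility-mapping sketch: logistic regression on synthetic
# conditioning factors against a synthetic landslide inventory.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 1000
X = np.column_stack([
    rng.uniform(0, 45, n),      # slope (degrees)
    rng.uniform(0, 400, n),     # rainfall anomaly (mm)
    rng.uniform(0, 5000, n),    # distance to nearest fault (m)
])
# Synthetic inventory: steeper, wetter, fault-proximal cells fail more often.
p = 1 / (1 + np.exp(-(0.08 * X[:, 0] + 0.005 * X[:, 1] - 0.0005 * X[:, 2] - 3)))
y = rng.binomial(1, p)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
susceptibility = model.predict_proba(X_te)[:, 1]   # per-cell susceptibility score
print("AUC:", round(roc_auc_score(y_te, susceptibility), 3))
```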

An insight into the prediction of mechanical properties of concrete using machine learning techniques

  • Neeraj Kumar Shukla;Aman Garg;Javed Bhutto;Mona Aggarwal;M.Ramkumar Raja;Hany S. Hussein;T.M. Yunus Khan;Pooja Sabherwal
    • Computers and Concrete
    • /
    • v.32 no.3
    • /
    • pp.263-286
    • /
    • 2023
  • Experimenting with concrete to determine its compressive and tensile strengths is a laborious and time-consuming operation that requires close attention to detail. Over the past several decades, researchers around the world have applied machine learning algorithms to predict the mechanical properties of various kinds of concrete, and the available literature highlights the applicability and accuracy of the different techniques. This article summarizes previous research on estimating concrete strength with machine learning and classifies the existing body of literature according to the technique each study employed. Because determining the compressive strength of concrete experimentally is laborious and time-consuming, this review aims to guide researchers working on machine learning based strength prediction by presenting recommendations along with the benefits and drawbacks of each model.
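
To make the workflow surveyed here concrete, the sketch below trains a random forest regressor on a synthetic mix-design table (cement and water contents, aggregates, curing age) to predict compressive strength. The feature names, data, and model choice are illustrative assumptions, not a specific model from the reviewed literature.

```python
# Illustrative workflow only: a random forest regressor fitted to a synthetic
# mix-design table to predict compressive strength (MPa).
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score, mean_absolute_error

rng = np.random.default_rng(42)
n = 800
cement = rng.uniform(200, 500, n)                  # kg/m^3
wc_ratio = rng.uniform(0.35, 0.65, n)              # water/cement ratio
age = rng.choice([3, 7, 28, 56, 90], n)            # curing age (days)
df = pd.DataFrame({
    "cement": cement,
    "water": wc_ratio * cement,
    "fine_agg": rng.uniform(600, 900, n),
    "coarse_agg": rng.uniform(850, 1150, n),
    "age_days": age,
})
# Synthetic strength loosely driven by cement content, w/c ratio, and age.
df["strength_mpa"] = 10 + 0.06 * cement - 40 * wc_ratio + 8 * np.log(age) + rng.normal(0, 3, n)

X = df[["cement", "water", "fine_agg", "coarse_agg", "age_days"]]
y = df["strength_mpa"]
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=42)

model = RandomForestRegressor(n_estimators=300, random_state=42).fit(X_tr, y_tr)
pred = model.predict(X_te)
print("R2:", round(r2_score(y_te, pred), 3),
      "MAE (MPa):", round(mean_absolute_error(y_te, pred), 2))
```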

Comparison between Social Network Based Rank Discrimination Techniques of Data Envelopment Analysis: Beyond the Limitations (사회 연결망 분석 기반 자료포락분석 순위 결정 기법간 비교와 한계 극복 방안에 대한 연구)

  • Hee Jay Kang
    • Journal of Information Technology Services
    • /
    • v.22 no.1
    • /
    • pp.57-74
    • /
    • 2023
  • A known limitation of DEA (data envelopment analysis) is that, because efficiency is measured relative to the production structure of the comparison set, the ranks of some efficient DMUs (decision making units) cannot be discriminated. To address this, DEA-SNA models that combine DEA with social network analysis (SNA) techniques have recently been studied intensively. Several models have been proposed using techniques such as eigenvector centrality, PageRank centrality, and the hypertext induced topic selection (HITS) algorithm, yet some DMUs still cannot be ranked, and the process of extracting latent information within the DMU group to build an effective network can violate the basic assumptions of DEA. This study compares and analyzes the characteristics of the DEA-SNA models proposed so far to identify the causes of these limitations and, on that basis, suggests directions and possibilities for developing more advanced models. The results are expected to further expand the field of DEA-related research.
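
The sketch below illustrates the DEA-SNA idea generically: given a hypothetical, precomputed cross-efficiency matrix, it builds a weighted directed network among DMUs and ranks them with PageRank centrality. The matrix values and the choice of PageRank are assumptions for illustration, not the specific models compared in the paper.

```python
# Generic DEA-SNA illustration: rank DMUs by PageRank over a network built from
# a hypothetical, precomputed cross-efficiency matrix.
import numpy as np
import networkx as nx

# cross_eff[i, j] = efficiency of DMU j when evaluated with DMU i's optimal weights
cross_eff = np.array([
    [1.00, 0.82, 0.91, 0.77],
    [0.88, 1.00, 0.79, 0.84],
    [0.93, 0.76, 1.00, 0.81],
    [0.72, 0.85, 0.80, 1.00],
])
dmus = ["DMU1", "DMU2", "DMU3", "DMU4"]

G = nx.DiGraph()
for i, evaluator in enumerate(dmus):
    for j, target in enumerate(dmus):
        if i != j:
            # edge from the evaluating DMU to the appraised DMU, weighted by efficiency
            G.add_edge(evaluator, target, weight=cross_eff[i, j])

ranks = nx.pagerank(G, weight="weight")
for dmu, score in sorted(ranks.items(), key=lambda kv: -kv[1]):
    print(dmu, round(score, 4))
```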

Menu Analysis Using Menu Engineering and Cost/Margin Analysis - French Restaurant of the Tourism Hotel in Seoul - (메뉴엔지니어링기법과 CMA 기법을 이용한 메뉴 분석에 관한 연구 - 서울지역 특1급 호텔의 프렌치레스토랑을 중심으로 -)

  • Lee, Eun-Jung;Lee, Young-Sook
    • Journal of the Korean Society of Food Culture
    • /
    • v.21 no.3
    • /
    • pp.270-279
    • /
    • 2006
  • This study was designed to (a) analyze the menus of a French restaurant in a tourism hotel using the menu analysis techniques of Kasavana & Smith and of Pavesic, and (b) compare the characteristics of the two techniques. The calculations were performed with an MS Excel 2000 spreadsheet. Kasavana & Smith's method uses menu mix % and unit contribution margin as its variables, while Pavesic's uses weighted contribution margin (WCM) and potential food cost % (PFC%). In both cases a four-cell matrix was created and each menu item was placed according to whether it scored high or low on the two variables. Items that scored favorably on both variables were rated in the top category (e.g., star, prime), and those that scored below average on both were rated in the lowest category (e.g., dog, problem). Kasavana & Smith's method reflects the customer's viewpoint, whereas Pavesic's method reflects the manager's viewpoint, so menu decisions are best supported when the analysis technique chosen suits its purpose.
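
A minimal worked example of the Kasavana & Smith matrix described above, using invented menu items and sales figures: each item's menu mix % is compared against the 70% rule threshold and its unit contribution margin against the sales-weighted average, yielding the star/plowhorse/puzzle/dog classification.

```python
# Worked Kasavana & Smith example with invented items: (name, units sold, price, food cost).
items = [
    ("Beef tenderloin",   120, 52.0, 21.0),
    ("Seafood gratin",     60, 38.0, 17.0),
    ("Duck confit",        30, 45.0, 16.0),
    ("Vegetable terrine",  20, 24.0, 12.0),
]

total_sold = sum(n for _, n, _, _ in items)
total_cm = sum(n * (p - c) for _, n, p, c in items)
avg_cm = total_cm / total_sold                   # sales-weighted average contribution margin
mm_threshold = (1 / len(items)) * 0.70 * 100     # 70% rule for expected menu mix %

for name, n, price, cost in items:
    mm_pct = n / total_sold * 100                # menu mix %
    cm = price - cost                            # unit contribution margin
    popular = mm_pct >= mm_threshold
    profitable = cm >= avg_cm
    label = {(True, True): "Star", (True, False): "Plowhorse",
             (False, True): "Puzzle", (False, False): "Dog"}[(popular, profitable)]
    print(f"{name:18s} MM% = {mm_pct:5.1f}  CM = {cm:5.1f}  -> {label}")
```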

Comparative Analysis of Baseflow Separation using Conventional and Deep Learning Techniques

  • Yusuff, Kareem Kola;Shiksa, Bastola;Park, Kidoo;Jung, Younghun
    • Proceedings of the Korea Water Resources Association Conference
    • /
    • 2022.05a
    • /
    • pp.149-149
    • /
    • 2022
  • Accurate quantitative evaluation of the baseflow contribution to streamflow is essential for addressing seasonal drought vulnerability, flood occurrence, and groundwater management concerns, and thus for efficient and sustainable water resources management in watersheds. Several baseflow separation approaches have been developed, including recursive filters, graphical methods, and tracer or chemical mass balance, but their outputs vary widely, making it difficult to identify the best separation technique. Reflecting the current shift toward artificial intelligence (AI) in water resources, this study compares deep learning models with conventional hydrograph separation techniques to quantify the baseflow contribution to streamflow in the Piney River watershed, Tennessee, over 2001-2021. Streamflow values obtained from USGS station 03602500 were processed with the Web-based Hydrograph Analysis Tool (WHAT) to generate Baseflow Index (BFI) values. Annual and seasonal baseflow outputs from the traditional separation techniques were compared with results from Long Short-Term Memory (LSTM) and simple Gated Recurrent Unit (GRU) models. The GRU model produced the best BFI values across the four seasons, with average NSE = 0.98, KGE = 0.97, and r = 0.89, and was used to predict future baseflow volumes. AI offers an easier and more accurate approach to groundwater management and surface runoff modeling, supporting effective water policy frameworks for disaster management.
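
For reference, the sketch below applies a one-parameter recursive digital filter of the Lyne-Hollick type, the family of conventional techniques referred to above, to a synthetic streamflow series and computes a Baseflow Index. It is not the WHAT model or the authors' exact configuration; the filter parameter and data are illustrative.

```python
# Lyne-Hollick-type recursive filter on a synthetic daily streamflow series;
# parameter value and data are illustrative only.
import numpy as np

def baseflow_lyne_hollick(q, alpha=0.925):
    """Single forward pass of the one-parameter filter; returns the baseflow series."""
    quick = np.zeros_like(q, dtype=float)
    for t in range(1, len(q)):
        quick[t] = alpha * quick[t - 1] + 0.5 * (1 + alpha) * (q[t] - q[t - 1])
        quick[t] = min(max(quick[t], 0.0), q[t])   # keep quickflow within [0, Q]
    return q - quick

# Synthetic daily streamflow (m^3/s): a slow recession plus a few storm spikes.
q = 10 + 5 * np.exp(-np.arange(365) / 60.0)
q[[50, 120, 200]] += [40.0, 25.0, 60.0]

baseflow = baseflow_lyne_hollick(q)
bfi = baseflow.sum() / q.sum()                     # Baseflow Index
print("BFI:", round(bfi, 3))
```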

Digital Forensic Investigation on Social Media Platforms: A Survey on Emerging Machine Learning Approaches

  • Abdullahi Aminu Kazaure;Aman Jantan;Mohd Najwadi Yusoff
    • Journal of Information Science Theory and Practice
    • /
    • v.12 no.1
    • /
    • pp.39-59
    • /
    • 2024
  • An online social network is a continuously expanding platform that enables groups of people to share their views and communicate with one another over the Internet, significantly improving social relations among members of the public. Despite these advantages and opportunities, criminals continue to broaden their use of techniques designed to undermine and exploit victims for criminal purposes. The field of digital forensics has made significant progress in reducing the impact of this risk, but most digital forensic investigation techniques are still carried out manually and are often unsuitable for online social networks because of the complexity of these environments, their growing data volumes, and the technical issues they present. Forensic investigations on social media platforms have become crucial in both civil and criminal cases, including sexual harassment, intellectual property theft, cyberstalking, online terrorism, and cyberbullying. This study explores the use of machine learning techniques for addressing criminal incidents on social media platforms, particularly during forensic investigations, and outlines some of the difficulties forensic investigators encounter when investigating crimes on social networking sites.
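
As one illustration of the kind of machine learning approach surveyed here, the sketch below trains a TF-IDF plus logistic regression classifier to flag potentially abusive posts. The tiny labeled sample is invented for demonstration only and is not from the study.

```python
# Toy example: TF-IDF + logistic regression to flag abusive posts; the labeled
# sample below is invented for demonstration only.
from sklearn.pipeline import make_pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

posts = [
    "have a great day everyone",
    "thanks for sharing this article",
    "you are worthless and should disappear",
    "I will find you and hurt you",
    "congrats on the new job",
    "nobody likes you, just quit already",
]
labels = [0, 0, 1, 1, 0, 1]   # 1 = abusive/threatening, 0 = benign

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression(max_iter=1000))
clf.fit(posts, labels)
print(clf.predict(["meet me after school, you will regret it"]))
```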

A Comprehensive Review of Recent Advances in the Enrichment and Mass Spectrometric Analysis of Glycoproteins and Glycopeptides in Complex Biological Matrices

  • Mohamed A. Gab-Allah;Jeongkwon Kim
    • Mass Spectrometry Letters
    • /
    • v.15 no.1
    • /
    • pp.1-25
    • /
    • 2024
  • Protein glycosylation, a highly significant and ubiquitous post-translational modification (PTM) in eukaryotic cells, has attracted considerable research interest due to its pivotal role in a wide array of essential biological processes. Comprehensive analysis of glycoproteins is imperative for understanding their biological functions and for identifying glycosylated biomarkers. However, the complexity and heterogeneity of glycan structures, together with the low abundance and poor ionization efficiency of glycopeptides, make the analysis and identification of glycans and glycopeptides far more challenging than that of other biopolymers. Nevertheless, significant advances in enrichment techniques, chromatographic separation, and mass spectrometric methodologies offer promising avenues for mitigating these challenges. Numerous substrates and multifunctional materials are being designed for glycopeptide enrichment and are proving valuable in glycomics and glycoproteomics. Mass spectrometry (MS) is pivotal for probing protein glycosylation, offering sensitivity and structural insight into glycopeptides and glycans, and MS-based glycopeptide characterization is further enhanced by separation techniques such as liquid chromatography, capillary electrophoresis, and ion mobility. In this review, we highlight recent advances in enrichment methods and MS-based separation techniques for analyzing different types of protein glycosylation, discuss approaches for glycan release that facilitate investigation of the glycosylation sites of identified glycoproteins, and cover bioinformatics tools that aid in accurately characterizing glycans and glycopeptides.

Development of a Default Prediction Model for Vulnerable Populations Using Imbalanced Data Analysis (불균형 데이터 처리 기반의 취약계층 채무불이행 예측모델 개발)

  • Lee, Jong Hwa
    • The Journal of Information Systems
    • /
    • v.33 no.3
    • /
    • pp.175-185
    • /
    • 2024
  • Purpose: This study aims to analyze the relationship between consumption patterns and default risk among financially vulnerable households in a rapidly changing economic environment. Financially vulnerable households are more susceptible to economic shocks, and their consumption patterns can significantly contribute to an increased risk of default, so this study seeks to provide a systematic approach to predicting and managing these risks in advance. Design/methodology/approach: The study utilizes data from the Korea Welfare Panel Study (KOWEPS) to analyze the consumption patterns and default status of financially vulnerable households. To address data imbalance, sampling techniques such as SMOTE, SMOTE-ENN, and SMOTE-Tomek Links were applied. Various machine learning algorithms, including Logistic Regression, Decision Tree, Random Forest, and Support Vector Machine (SVM), were employed to develop the prediction model, and model performance was evaluated using the confusion matrix and F1-score. Findings: When the original imbalanced data were used, prediction performance for the minority class (default) was poor, but it improved significantly after applying imbalance handling techniques such as SMOTE. In particular, the Random Forest model combined with the SMOTE-Tomek Links technique showed the highest predictive performance, making it the most suitable model for default prediction. These results suggest that effectively addressing data imbalance is crucial for developing accurate default prediction models, and that appropriate use of sampling techniques can greatly enhance predictive performance.
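
A minimal sketch of the imbalance-handling workflow described above, assuming a synthetic imbalanced dataset in place of the KOWEPS data: the training split is rebalanced with imbalanced-learn's SMOTETomek, a random forest is fitted, and performance on the untouched test set is reported with a confusion matrix and the F1-score for the minority (default) class.

```python
# Illustrative imbalance-handling workflow: SMOTE-Tomek resampling of the training
# split, a random forest classifier, and minority-class F1 on the untouched test set.
# The synthetic data stand in for the (unavailable) KOWEPS features.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import f1_score, confusion_matrix
from imblearn.combine import SMOTETomek

X, y = make_classification(n_samples=5000, n_features=12, weights=[0.95, 0.05],
                           random_state=42)       # ~5% defaults (minority class)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=42)

# Resample only the training data so the test set keeps its natural imbalance.
X_res, y_res = SMOTETomek(random_state=42).fit_resample(X_tr, y_tr)
clf = RandomForestClassifier(n_estimators=300, random_state=42).fit(X_res, y_res)

pred = clf.predict(X_te)
print(confusion_matrix(y_te, pred))
print("F1 (default class):", round(f1_score(y_te, pred), 3))
```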