• Title/Summary/Keyword: Operations Research Models


Comparative Analysis of Mathematics Textbooks in Elementary Schools between Korea and Canada - Focusing on the Numbers and Operations in 5th and 6th Grade - (한국과 캐나다 초등학교 수학 교과서 비교 분석 - 초등학교 5, 6학년 수와 연산 영역을 중심으로 -)

  • Kim, Aekyong;Ryu, Heuisu
    • Journal of Science Education
    • /
    • v.44 no.3
    • /
    • pp.331-344
    • /
    • 2020
  • This study aims to find meaningful implications for the development of Korean elementary school mathematics curricula and textbooks by comparing and analyzing the numbers and operations strands of Korean and Canadian fifth- and sixth-grade mathematics textbooks. To this end, the textbook organization of Korean and Canadian elementary schools was compared and analyzed, along with the number of mathematics textbooks per grade, the timing at which topics are introduced, and the numbers and operations content covered in the fifth and sixth grades. The study reached the following conclusions. First, textbooks should introduce previously learned mathematical concepts and computations naturally, in problem contexts closely tied to real life, so that problems can be solved in an integrated way regardless of the type of operation or mathematical strand. Second, textbooks should pose questions using materials such as real photographs and content from science, technology, engineering, and the arts, so that learners feel the necessity and usefulness of mathematics. Third, textbooks need to support sufficient learning of mathematical principles through the use of varied hands-on teaching aids and mathematical models, and to emphasize problem-solving strategies using technological tools. Fourth, in-depth discussion is needed on when to teach fractions and decimals and on how to organize and develop that learning content.

Predicting blast-induced ground vibrations at limestone quarry from artificial neural network optimized by randomized and grid search cross-validation, and comparative analyses with blast vibration predictor models

  • Salman Ihsan;Shahab Saqib;Hafiz Muhammad Awais Rashid;Fawad S. Niazi;Mohsin Usman Qureshi
    • Geomechanics and Engineering
    • /
    • v.35 no.2
    • /
    • pp.121-133
    • /
    • 2023
  • The demand for cement and crushed limestone has increased manyfold due to the tremendous increase in construction activities in Pakistan during the past few decades. The number of cement production industries has increased correspondingly, and so have rock-blasting operations at limestone quarry sites. However, the safety procedures warranted at these sites for blast-induced ground vibrations (BIGV) have not been adequately developed and/or implemented. Proper prediction and monitoring of BIGV are necessary to ensure the safety of structures in the vicinity of these quarry sites. In this paper, an attempt has been made to predict BIGV using an artificial neural network (ANN) at three selected limestone quarries of Pakistan. The ANN has been developed in Python using Keras with a sequential model and dense layers. The hyperparameters and the number of neurons in each activation layer have been optimized using randomized and grid search methods. The input parameters for the model include distance, maximum charge per delay (MCPD), depth of hole, burden, spacing, and number of blast holes, whereas peak particle velocity (PPV) is taken as the only output parameter. A total of 110 blast vibration datasets were recorded from three different limestone quarries. The dataset has been divided into 85% for neural network training and 15% for testing of the network. A five-layer ANN is trained with the Rectified Linear Unit (ReLU) activation function, the Adam optimization algorithm with a learning rate of 0.001, and a batch size of 32, with the topology 6-32-32-256-1. The blast datasets were utilized to compare the performance of the ANN, multivariate regression analysis (MVRA), and empirical predictors. The performance was evaluated using the coefficient of determination (R2), mean absolute error (MAE), mean squared error (MSE), mean absolute percentage error (MAPE), and root mean squared error (RMSE) for predicted and measured PPV.
To determine the relative influence of each parameter on the PPV, sensitivity analyses were performed for all input parameters. The analyses reveal that the ANN performs better than MVRA and the other empirical predictors, and that 83% of the PPV is governed by distance and MCPD, while hole depth, number of blast holes, burden, and spacing contribute the remaining 17%. This research provides valuable insights into improving safety measures and ensuring the structural integrity of buildings near limestone quarry sites.
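Based only on the hyperparameters reported in the abstract, the training setup can be sketched as follows. The synthetic input data are a placeholder, and scikit-learn's MLPRegressor is substituted for the authors' Keras implementation to keep the example self-contained; the actual quarry datasets are not reproduced here.

```python
# Sketch of the described workflow: 6 inputs, hidden layers 32-32-256, 1 output,
# ReLU activation, Adam optimizer (lr 0.001), batch size 32, 85/15 split.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(0)
X = rng.random((110, 6))   # distance, MCPD, hole depth, burden, spacing, no. of holes
y = rng.random(110)        # peak particle velocity (PPV) - placeholder values

# 85% training / 15% testing split, as in the paper
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.15, random_state=0)

ann = MLPRegressor(hidden_layer_sizes=(32, 32, 256), activation="relu",
                   solver="adam", learning_rate_init=0.001, batch_size=32,
                   max_iter=200, random_state=0)
ann.fit(X_tr, y_tr)
pred = ann.predict(X_te)
print(round(mean_absolute_error(y_te, pred), 3))  # one of the reported error metrics
```

On the real datasets, the abstract's randomized and grid search over these hyperparameters would replace the fixed values above.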

Retail Product Development and Brand Management Collaboration between Industry and University Student Teams (산업여대학학생단대지간적령수산품개발화품패관리협작(产业与大学学生团队之间的零售产品开发和品牌管理协作))

  • Carroll, Katherine Emma
    • Journal of Global Scholars of Marketing Science
    • /
    • v.20 no.3
    • /
    • pp.239-248
    • /
    • 2010
  • This paper describes a collaborative project between academia and industry which focused on improving the marketing and product development strategies for two private label apparel brands of a large regional department store chain in the southeastern United States. The goal of the project was to revitalize product lines of the two brands by incorporating student ideas for new solutions, thereby giving the students practical experience with a real-life industry situation. There were a number of key players involved in the project. A privately-owned department store chain based in the southeastern United States which was seeking an academic partner had recognized a need to update two existing private label brands. They targeted middle-aged consumers looking for casual, moderately priced merchandise. The company was seeking to change direction with both packaging and presentation, and possibly product design. The branding and product development divisions of the company contacted professors in an academic department of a large southeastern state university. Two of the professors agreed that the task would be a good fit for their classes - one was a junior-level Intermediate Brand Management class; the other was a senior-level Fashion Product Development class. The professors felt that by working collaboratively on the project, students would be exposed to a real world scenario, within the security of an academic learning environment. Collaboration within an interdisciplinary team has the advantage of providing experiences and resources beyond the capabilities of a single student and adds "brainpower" to problem-solving processes (Lowman 2000). This goal of improving the capabilities of students directed the instructors in each class to form interdisciplinary teams between the Branding and Product Development classes. 
In addition, many universities are employing industry partnerships in research and teaching, where collaboration within temporal (semester) and physical (classroom/lab) constraints helps to increase students' knowledge and experience of a real-world situation. At the University of Tennessee, the Center of Industrial Services and UT-Knoxville's College of Engineering worked with a company to develop design improvements in its U.S. operations. In this study, because the project concerned private label retail brands, Wickett, Gaskill and Damhorst's (1999) revised Retail Apparel Product Development Model was used by the product development and brand management teams. This framework was chosen because it addresses apparel product development from the concept to the retail stage. Two classes were involved in this project: a junior-level Brand Management class and a senior-level Fashion Product Development class. Seven teams were formed, each including four students from Brand Management and two students from Product Development. The classes were taught the same semester, but not at the same time. At the beginning of the semester, each class was introduced to the industry partner and given the problem. Half the teams were assigned to the men's brand and half to the women's brand. The teams were responsible for devising approaches to the problem, formulating a timeline for their work, staying in touch with industry representatives, and making sure that each member of the team contributed in a positive way. The objective for the teams was to plan, develop, and present a product line using merchandising processes (following the Wickett, Gaskill and Damhorst model) and to develop new branding strategies for the proposed lines.
The teams performed trend, color, fabrication, and target market research; developed sketches for a line; edited the sketches and presented their line plans; wrote specifications; fitted prototypes on fit models; and developed final production samples for presentation to industry. The branding students developed a SWOT analysis, a Brand Measurement report, a mind-map for the brands, and a fully integrated Marketing Report, which was presented alongside the ideas for the new lines. In the future, if the opportunity arises to work in this collaborative way with an existing company that wishes to look at both branding and product development strategies, classes will be scheduled at the same time so that students have more time to meet and discuss timelines and assigned tasks. As it was, student groups had to meet outside of each class time, and this proved to be a challenging though not uncommon part of teamwork (Pfaff and Huddleston, 2003). Although the logistics of this exercise were time-consuming to set up and administer, professors felt that the benefits to students were multiple. The most important benefit, according to student feedback from both classes, was the opportunity to work with industry professionals, follow their process, and see the results of their work evaluated by the people who made the decisions at the company level. Faculty members were grateful to have a "real-world" case to work with in the classroom to provide focus. Creative ideas and strategies were traded as plans were made, extending and strengthening the departmental links between the branding and product development areas. By working not only with students coming from a different knowledge base, but also having to keep in contact with the industry partner and follow the framework and timeline of industry practice, student teams were challenged to produce excellent and innovative work under new circumstances.
Working on the product development and branding for "real-life" brands that are struggling gave students an opportunity to see how closely their coursework ties in with the real-world and how creativity, collaboration and flexibility are necessary components of both the design and business aspects of company operations. Industry personnel were impressed by (a) the level and depth of knowledge and execution in the student projects, and (b) the creativity of new ideas for the brands.

Detection of Phantom Transaction using Data Mining: The Case of Agricultural Product Wholesale Market (데이터마이닝을 이용한 허위거래 예측 모형: 농산물 도매시장 사례)

  • Lee, Seon Ah;Chang, Namsik
    • Journal of Intelligence and Information Systems
    • /
    • v.21 no.1
    • /
    • pp.161-177
    • /
    • 2015
  • With the rapid evolution of technology, the size, number, and type of databases have increased concomitantly, so data mining approaches face many challenging applications. One such application is the discovery of fraud patterns from agricultural product wholesale transaction instances. The agricultural product wholesale market in Korea is huge, and vast numbers of transactions are made every day. The demand for agricultural products continues to grow, and the use of electronic auction systems raises the efficiency of operations of the wholesale market. Certainly, the number of unusual transactions is also assumed to increase in proportion to the trading amount, and an unusual transaction is often the first sign of fraud. However, it is very difficult to identify and detect these transactions and the corresponding fraud in the agricultural product wholesale market because the types of fraud are more sophisticated than ever before. Fraud can be detected by verifying the overall transaction records manually, but this requires a significant amount of human resources and ultimately is not a practical approach. Fraud can also be revealed by a victim's report or complaint, but there are usually no victims in agricultural product wholesale fraud because it is committed by collusion between an auction company and an intermediary wholesaler. Nevertheless, it is necessary to monitor transaction records continuously and to make an effort to prevent any fraud, because fraud not only disturbs the fair trade order of the market but also rapidly reduces the credibility of the market. Applying data mining to such an environment is very useful since it can properly discover unknown fraud patterns or features from a large volume of transaction data.
The objective of this research is to empirically investigate the factors necessary to detect fraudulent transactions in an agricultural product wholesale market by developing a data-mining-based fraud detection model. One of the major frauds is the phantom transaction, a colluding transaction between the seller (auction company or forwarder) and the buyer (intermediary wholesaler) to commit the fraud. They pretend to fulfill the transaction by recording false data in the online transaction processing system without actually selling products, and the seller receives money from the buyer. This leads to the overstatement of sales performance and illegal money transfers, which reduces the credibility of the market. This paper reviews the environment of the wholesale market, such as the types of transactions, the roles of participants in the market, and the various types and characteristics of fraud, and introduces the whole process of developing the phantom transaction detection model. The process consists of the following four modules: (1) data cleaning and standardization, (2) statistical data analysis such as distribution and correlation analysis, (3) construction of a classification model using a decision-tree induction approach, and (4) verification of the model in terms of hit ratio. We collected real data from 6 associations of agricultural producers in metropolitan markets. The final model, built with a decision-tree induction approach, revealed that the monthly average trading price of an item offered by forwarders is a key variable in detecting phantom transactions. The verification procedure also confirmed the suitability of the results. However, even though the performance of the results of this research is satisfactory, sensitive issues remain for improving classification accuracy and the conciseness of rules. One such issue is the robustness of the data mining model. Data mining is very much data-oriented, so data mining models tend to be very sensitive to changes in data or situations.
Thus, it is evident that this non-robustness of the data mining model requires continuous remodeling as data or situations change. We hope that this paper suggests valuable guidelines to organizations and companies that consider introducing or constructing a fraud detection model in the future.
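The decision-tree module (step 3 of the process above) can be illustrated with a toy sketch. The field names and records below are invented for illustration; only the choice of classifier and the key variable identified by the paper (monthly average trading price) come from the abstract.

```python
# Minimal sketch of a decision-tree induction step for flagging phantom
# transactions. The data are synthetic; real wholesale records are not public.
import pandas as pd
from sklearn.tree import DecisionTreeClassifier, export_text

# Toy transaction records: the paper's final model found the monthly average
# trading price of the forwarder's item to be the key predictive variable.
data = pd.DataFrame({
    "monthly_avg_price": [120, 95, 310, 15, 22, 400, 18, 130],
    "quantity":          [10, 12, 3, 50, 45, 2, 60, 9],
    "is_phantom":        [0, 0, 1, 0, 0, 1, 0, 0],
})
X, y = data[["monthly_avg_price", "quantity"]], data["is_phantom"]

tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)
print(export_text(tree, feature_names=list(X.columns)))  # human-readable rules
print(tree.score(X, y))  # hit ratio; the paper verifies on held-out data
```

In the paper's setting, module (1) cleaning and module (2) distribution/correlation analysis would precede this step, and the hit-ratio check in module (4) would use verification data rather than the training sample.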

A Comparative Study between Space Law and the Law of the Sea (우주법과 해양법의 비교 연구)

  • Kim, Han-Taek
    • The Korean Journal of Air & Space Law and Policy
    • /
    • v.24 no.2
    • /
    • pp.187-210
    • /
    • 2009
  • Space law (or outer space law) and the law of the sea are branches of international law dealing with activities in geographical areas which do not, or only in part, come under national sovereignty. Legal rules pertaining to outer space and the sea began to develop once activities emerged in those areas: amongst others, activities dealing with transportation, research, exploration, defense, and exploitation. Naturally the law of the sea developed first, followed, early in the twentieth century, by air law, and later in the century by space law. Obviously the law of the sea, of the air, and of outer space influence each other; ideas have been borrowed from one field and applied to another. This article examines some analogies and differences between outer space law and the law of the sea, especially from the perspective of legal status, the exploration and exploitation of natural resources, and the environment. As far as the comparison of legal status between outer space and the high seas is concerned, the two areas are res extra commercium. The latter is res extra commercium under both customary international law and treaty; the former, however, differs under customary law and treaty respectively. Under international customary law, whilst outer space constitutes res extra commercium, celestial bodies are res nullius. However, as among contracting States of the 1967 Outer Space Treaty, both outer space and celestial bodies are declared res extra commercium. As for the comparison of the exploration and exploitation of natural resources between the Moon and other celestial bodies under the 1979 Moon Agreement and the deep seabed under the 1982 United Nations Convention on the Law of the Sea, both areas are the common heritage of mankind.
The latter gives us a very systematic model, the International Sea-bed Authority; the international regime for the former, however, is to be established only when the exploitation of the natural resources of celestial bodies other than the Earth is about to become feasible. Thus the Moon Agreement could not impose a moratorium, but would merely permit orderly attempts to establish that such exploitation was in fact feasible and practicable, by allowing experimental beginnings and thereafter pilot operations. As Professor Carl Christol said, until the parties to the Moon Agreement are able to put into operation the legal regime for the equitable sharing of benefits, they will remain free to disregard the Common Heritage of Mankind principle. Parties to one or both of the agreements would retain jurisdiction over national space activities. In so far as the comparison of the protection of the environment between outer space and the sea is concerned, the legal instruments for the latter are more systematically developed than those for the former. In the case of outer space, there is growing concern these days about the environmental threats arising from space activities, yet no separate legal instrument exists to deal with those problems.


Using the METHONTOLOGY Approach to a Graduation Screen Ontology Development: An Experiential Investigation of the METHONTOLOGY Framework

  • Park, Jin-Soo;Sung, Ki-Moon;Moon, Se-Won
    • Asia pacific journal of information systems
    • /
    • v.20 no.2
    • /
    • pp.125-155
    • /
    • 2010
  • Ontologies have been adopted in various business and scientific communities as a key component of the Semantic Web. Despite the increasing importance of ontologies, ontology developers still perceive construction tasks as a challenge. A clearly defined and well-structured methodology can reduce the time required to develop an ontology and increase the probability of success of a project. However, no reliable knowledge-engineering methodology for ontology development currently exists; every methodology has been tailored toward the development of a particular ontology. In this study, we developed a Graduation Screen Ontology (GSO). The graduation screen domain was chosen for several reasons. First, the graduation screen process is a complicated task requiring a complex reasoning process. Second, GSO may be reused for other universities because the graduation screen process is similar for most universities. Finally, GSO can be built within a given period because the size of the selected domain is reasonable. No standard ontology development methodology exists; thus, one of the existing ontology development methodologies had to be chosen. The most important considerations for selecting the ontology development methodology for GSO included whether it can be applied to a new domain; whether it covers a broader set of development tasks; and whether it gives sufficient explanation of each development task. We evaluated various ontology development methodologies based on the evaluation framework proposed by Gómez-Pérez et al. We concluded that METHONTOLOGY was the most applicable to the building of GSO for this study. METHONTOLOGY was derived from the experience of developing the Chemical Ontology at the Polytechnic University of Madrid by Fernández-López et al. and is regarded as the most mature ontology development methodology.
METHONTOLOGY describes a very detailed approach for building an ontology under a centralized development environment at the conceptual level. This methodology consists of three broad processes, with each process containing specific sub-processes: management (scheduling, control, and quality assurance); development (specification, conceptualization, formalization, implementation, and maintenance); and support (knowledge acquisition, evaluation, documentation, configuration management, and integration). An ontology development language and ontology development tool for GSO construction also had to be selected. We adopted OWL-DL as the ontology development language. OWL was selected because of its computational qualities of consistency checking and classification, which are crucial in developing coherent and useful ontological models for very complex domains. In addition, Protégé-OWL was chosen as the ontology development tool because it is supported by METHONTOLOGY and is widely used because of its platform-independent characteristics. Based on the GSO development experience of the researchers, some issues relating to METHONTOLOGY, OWL-DL, and Protégé-OWL were identified. We focused on presenting drawbacks of METHONTOLOGY and discussing how each weakness could be addressed. First, METHONTOLOGY insists that domain experts who do not have ontology construction experience can easily build ontologies. However, it is still difficult for these domain experts to develop a sophisticated ontology, especially if they have insufficient background knowledge related to the ontology. Second, METHONTOLOGY does not include a development stage called the "feasibility study." This pre-development stage helps developers ensure not only that a planned ontology is necessary and sufficiently valuable to begin an ontology building project, but also to determine whether the project will be successful.
Third, METHONTOLOGY excludes an explanation on the use and integration of existing ontologies. If an additional stage for considering reuse is introduced, developers might share benefits of reuse. Fourth, METHONTOLOGY fails to address the importance of collaboration. This methodology needs to explain the allocation of specific tasks to different developer groups, and how to combine these tasks once specific given jobs are completed. Fifth, METHONTOLOGY fails to suggest the methods and techniques applied in the conceptualization stage sufficiently. Introducing methods of concept extraction from multiple informal sources or methods of identifying relations may enhance the quality of ontologies. Sixth, METHONTOLOGY does not provide an evaluation process to confirm whether WebODE perfectly transforms a conceptual ontology into a formal ontology. It also does not guarantee whether the outcomes of the conceptualization stage are completely reflected in the implementation stage. Seventh, METHONTOLOGY needs to add criteria for user evaluation of the actual use of the constructed ontology under user environments. Eighth, although METHONTOLOGY allows continual knowledge acquisition while working on the ontology development process, consistent updates can be difficult for developers. Ninth, METHONTOLOGY demands that developers complete various documents during the conceptualization stage; thus, it can be considered a heavy methodology. Adopting an agile methodology will result in reinforcing active communication among developers and reducing the burden of documentation completion. Finally, this study concludes with contributions and practical implications. No previous research has addressed issues related to METHONTOLOGY from empirical experiences; this study is an initial attempt. In addition, several lessons learned from the development experience are discussed. 
This study also affords some insights for ontology methodology researchers who want to design a more advanced ontology development methodology.

LIM Implementation Method for Planning Biotope Area Ratio in Apartment Complex - Focused on Terrain and Pavement Modeling - (공동주택단지의 생태면적률 계획을 위한 LIM 활용방법 - 지형 및 포장재 모델링을 중심으로 -)

  • Kim, Bok-Young;Son, Yong-Hoon;Lee, Soon-Ji
    • Journal of the Korean Institute of Landscape Architecture
    • /
    • v.46 no.3
    • /
    • pp.14-26
    • /
    • 2018
  • The Biotope Area Ratio (BAR) is a quantitative pre-planning index for sustainable development and an integrated indicator for the balanced development of buildings and outdoor spaces. However, problems have been pointed out in its operational management: errors in area calculation, insufficient accounting for underground soil condition and depth, reduction in biotope area after construction, and functional failure as a pre-planning index. To address these problems, this study proposes implementing LIM. Since the weights of the BAR are mainly decided by the underground soil condition and depth together with land cover types, the study focused on the terrain and pavements. The model should conform to BIM guidelines and standards provided by government agencies and professional organizations. Thus, the scope and Level Of Detail (LOD) of the model were defined, and a method to build the model with BIM software was developed. An apartment complex on sloping ground was selected as a case study, a 3D terrain was modeled, paving libraries were created with property information on the BAR, and a LIM model was completed for the site. The BAR was then calculated, and construction documents were created with the BAR table and pavement details. As a result, it was found that the BAR criteria were applied and calculated accurately, and that the efficiency of design tasks was improved by LIM. It also enabled evidence-based design on the terrain and underground structures. To adopt LIM, it is necessary to create and distribute LIM library manuals or templates, and to build library content that complies with KBIMS standards. Government policy must also require practitioners to submit BIM models in the certification system. Since the criteria on planting types in the BAR are expected to be expanded, further research is needed to build and utilize the information model for planting materials.
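The BAR computation that the LIM model automates is a weighted-area sum over land cover types. The sketch below uses hypothetical surface types and weights; the actual certification weights depend on land cover type and underground soil condition/depth, as the abstract notes.

```python
# Hypothetical BAR sketch: BAR = sum(area_i * weight_i) / total site area.
# Surface names and weights are illustrative assumptions, not official values.
surfaces = [
    # (surface type, area in m2, biotope weight 0.0-1.0)
    ("natural ground planting", 1200.0, 1.0),
    ("planting on structure",    300.0, 0.7),
    ("permeable pavement",       500.0, 0.5),
    ("impermeable pavement",     800.0, 0.0),
]
site_area = sum(area for _, area, _ in surfaces)
bar = sum(area * w for _, area, w in surfaces) / site_area
print(round(bar, 3))
```

In a LIM workflow, the areas and weights would be read from the property information attached to the terrain and paving library objects rather than entered by hand, which is what eliminates the manual calculation errors the abstract describes.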

Corporate Bond Rating Using Various Multiclass Support Vector Machines (다양한 다분류 SVM을 적용한 기업채권평가)

  • Ahn, Hyun-Chul;Kim, Kyoung-Jae
    • Asia pacific journal of information systems
    • /
    • v.19 no.2
    • /
    • pp.157-178
    • /
    • 2009
  • Corporate credit rating is a very important factor in the market for corporate debt. Information concerning corporate operations is often disseminated to market participants through the changes in credit ratings that are published by professional rating agencies, such as Standard and Poor's (S&P) and Moody's Investor Service. Since these agencies generally require a large fee for the service, and the periodically provided ratings sometimes do not reflect the default risk of the company at the time, it may be advantageous for bond-market participants to be able to classify credit ratings before the agencies actually publish them. As a result, it is very important for companies (especially, financial companies) to develop a proper model of credit rating. From a technical perspective, the credit rating constitutes a typical, multiclass, classification problem because rating agencies generally have ten or more categories of ratings. For example, S&P's ratings range from AAA for the highest-quality bonds to D for the lowest-quality bonds. The professional rating agencies emphasize the importance of analysts' subjective judgments in the determination of credit ratings. However, in practice, a mathematical model that uses the financial variables of companies plays an important role in determining credit ratings, since it is convenient to apply and cost efficient. These financial variables include the ratios that represent a company's leverage status, liquidity status, and profitability status. Several statistical and artificial intelligence (AI) techniques have been applied as tools for predicting credit ratings. Among them, artificial neural networks are most prevalent in the area of finance because of their broad applicability to many business problems and their preeminent ability to adapt. 
However, artificial neural networks also have many defects, including the difficulty in determining the values of the control parameters and the number of processing elements in the layer, as well as the risk of over-fitting. Of late, because of their robustness and high accuracy, support vector machines (SVMs) have become popular as a solution for problems that require generating accurate predictions. An SVM's solution may be globally optimal because SVMs seek to minimize structural risk. On the other hand, artificial neural network models may tend to find locally optimal solutions because they seek to minimize empirical risk. In addition, no parameters need to be tuned in SVMs, barring the upper bound for non-separable cases in linear SVMs. Since SVMs were originally devised for binary classification, however, they are not intrinsically geared for multiclass classifications such as credit ratings. Thus, researchers have tried to extend the original SVM to multiclass classification. Hitherto, a variety of techniques to extend standard SVMs to multiclass SVMs (MSVMs) have been proposed in the literature. Only a few types of MSVM, however, have been tested in prior studies that apply MSVMs to credit ratings. In this study, we examined six different techniques of MSVMs: (1) One-Against-One, (2) One-Against-All, (3) DAGSVM, (4) ECOC, (5) Method of Weston and Watkins, and (6) Method of Crammer and Singer. In addition, we examined the prediction accuracy of some modified versions of conventional MSVM techniques. To find the most appropriate technique of MSVMs for corporate bond rating, we applied all the techniques of MSVMs to a real-world case of credit rating in Korea. The best application is in corporate bond rating, which is the most frequently studied area of credit rating for specific debt issues or other financial obligations. For our study the research data were collected from National Information and Credit Evaluation, Inc., a major bond-rating company in Korea.
The data set is comprised of the bond-ratings for the year 2002 and various financial variables for 1,295 companies from the manufacturing industry in Korea. We compared the results of these techniques with one another, and with those of traditional methods for credit ratings, such as multiple discriminant analysis (MDA), multinomial logistic regression (MLOGIT), and artificial neural networks (ANNs). As a result, we found that DAGSVM with an ordered list was the best approach for the prediction of bond rating. In addition, we found that the modified version of ECOC approach can yield higher prediction accuracy for the cases showing clear patterns.
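Two of the decomposition schemes examined in the paper, One-Against-One and One-Against-All, can be sketched with scikit-learn. The rating labels and features below are synthetic placeholders, not the NICE bond-rating dataset used in the study.

```python
# Sketch of two MSVM decompositions: One-Against-One trains k(k-1)/2 binary
# SVMs (one per class pair); One-Against-All trains k (each class vs. the rest).
import numpy as np
from sklearn.svm import SVC
from sklearn.multiclass import OneVsOneClassifier, OneVsRestClassifier

rng = np.random.default_rng(0)
X = rng.random((200, 5))          # financial ratios (placeholder values)
y = rng.integers(0, 4, size=200)  # 4 rating classes, e.g. AAA..B (placeholder)

ovo = OneVsOneClassifier(SVC(kernel="rbf")).fit(X, y)   # One-Against-One
ova = OneVsRestClassifier(SVC(kernel="rbf")).fit(X, y)  # One-Against-All
print(len(ovo.estimators_))  # k(k-1)/2 binary SVMs for k = 4 classes
print(len(ova.estimators_))  # k binary SVMs
```

DAGSVM uses the same pairwise classifiers as One-Against-One but evaluates them along a directed acyclic graph at prediction time; the abstract's finding that DAGSVM with an ordered list performed best concerns that evaluation order.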

Landscape Object Classification and Attribute Information System for Standardizing Landscape BIM Library (조경 BIM 라이브러리 표준화를 위한 조경객체 및 속성정보 분류체계)

  • Kim, Bok-Young
    • Journal of the Korean Institute of Landscape Architecture
    • /
    • v.51 no.2
    • /
    • pp.103-119
    • /
    • 2023
  • Since the Korean government decided to apply BIM (Building Information Modeling) policy to the entire construction industry, BIM has seen a positive trend in adoption and utilization. BIM can reduce workloads by building model objects into libraries that conform to standards, enabling consistent quality, data integrity, and compatibility. Many BIM library standardization studies have been conducted in the domestic architecture and civil engineering sectors and in the overseas landscape architecture sector, and guidelines have been established based on them. Currently, basic research and attempts to introduce BIM are being made in the Korean landscape architecture field, but diffusion has been delayed due to difficulties in application. This can be addressed by enhancing the efficiency of BIM work using standardized libraries. Therefore, this study aims to provide a starting point for discussion and presents a classification system for objects and attribute information that can be referenced when creating landscape libraries in practice. The standardization of the landscape BIM library was explored from two directions: object classification and attribute information items. First, the Korean construction information classification system, the product inventory classification system, landscape design and construction standards, and the BIM object classification of the NLA (Norwegian Association of Landscape Architects) were referenced to classify landscape objects. As a result, the objects were divided into 12 subcategories, including 'trees', 'shrubs', 'ground cover and others', 'outdoor installation', 'outdoor lighting facility', 'stairs and ramp', 'outdoor wall', 'outdoor structure', 'pavement', 'curb', 'irrigation', and 'drainage', under five major categories: 'landscape plant', 'landscape facility', 'landscape structure', 'landscape pavement', and 'irrigation and drainage'. Next, the attribute information for the objects was extracted and structured.
To do this, the common attribute information items of the KBIMS (Korean BIM Standard) were included, and the object attribute information items that vary by object type were included by referring to the PDT (Product Data Template) of the LI (UK Landscape Institute). As a result, the common attributes comprised 'identification', 'distribution', 'classification', and 'manufacture and supply' information, while the object attributes comprised 'naming', 'specifications', 'installation or construction', 'performance', 'sustainability', and 'operations and maintenance' information. The significance of this study lies in establishing a foundation for the introduction of landscape BIM through the standardization of library objects, which will enhance the efficiency of modeling tasks and improve the data consistency of BIM models across the various disciplines of the construction industry.
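The two-level scheme above (common KBIMS-style attributes plus per-type PDT-style attributes under a category/subcategory classification) can be expressed as a simple data structure. The field and category names below are illustrative placeholders following the abstract's description, not items copied from the actual KBIMS or LI PDT documents.

```python
from dataclasses import dataclass, field

@dataclass
class CommonAttributes:
    """Attributes shared by every library object (KBIMS-style)."""
    identification: str          # e.g. a library object ID
    classification: str          # classification-system code
    manufacture_and_supply: str  # manufacturer / supplier info

@dataclass
class LibraryObject:
    """One landscape BIM library entry in the proposed scheme."""
    major_category: str          # one of the 5 major categories
    subcategory: str             # one of the 12 subcategories
    common: CommonAttributes
    # Object attributes vary by type (LI PDT-style), so a flexible mapping:
    object_attributes: dict = field(default_factory=dict)

# Hypothetical example entry: an outdoor bench.
bench = LibraryObject(
    major_category="landscape facility",
    subcategory="outdoor installation",
    common=CommonAttributes("BEN-001", "landscape facility/outdoor installation",
                            "Example Supplier Co."),
    object_attributes={"specifications": "1800 x 450 x 450 mm",
                       "sustainability": "FSC-certified timber"},
)
print(bench.subcategory)
```

Separating the fixed common block from a per-type attribute mapping mirrors how a standardized library can keep identification and classification uniform while letting planting, paving, and drainage objects carry different specification items.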

Retrieval of Hourly Aerosol Optical Depth Using Top-of-Atmosphere Reflectance from GOCI-II and Machine Learning over South Korea (GOCI-II 대기상한 반사도와 기계학습을 이용한 남한 지역 시간별 에어로졸 광학 두께 산출)

  • Seyoung Yang;Hyunyoung Choi;Jungho Im
    • Korean Journal of Remote Sensing
    • /
    • v.39 no.5_3
    • /
    • pp.933-948
    • /
    • 2023
  • Atmospheric aerosols not only have adverse effects on human health but also exert direct and indirect impacts on the climate system. Consequently, it is imperative to comprehend the characteristics and spatiotemporal distribution of aerosols. Numerous research endeavors have been undertaken to monitor aerosols, predominantly through the retrieval of aerosol optical depth (AOD) via satellite-based observations. Nonetheless, this approach primarily relies on a look-up table-based inversion algorithm, characterized by computationally intensive operations and associated uncertainties. In this study, a novel high-resolution AOD direct retrieval algorithm, leveraging machine learning, was developed using top-of-atmosphere reflectance data derived from the Geostationary Ocean Color Imager-II (GOCI-II), in conjunction with their differences from the past 30-day minimum reflectance, and meteorological variables from numerical models. The Light Gradient Boosting Machine (LGBM) technique was harnessed, and the resultant estimates underwent rigorous validation encompassing random, temporal, and spatial N-fold cross-validation (CV) using ground-based observation data from Aerosol Robotic Network (AERONET) AOD. The three CV results consistently demonstrated robust performance, yielding R2=0.70-0.80, RMSE=0.08-0.09, and within the expected error (EE) of 75.2-85.1%. The Shapley Additive exPlanations (SHAP) analysis confirmed the substantial influence of reflectance-related variables on AOD estimation. A comprehensive examination of the spatiotemporal distribution of AOD in Seoul and Ulsan revealed that the developed LGBM model yielded results that are in close concordance with AERONET AOD over time, thereby confirming its suitability for AOD retrieval at high spatiotemporal resolution (i.e., hourly, 250 m).
Furthermore, a comparison of data coverage showed that the LGBM model increased data retrieval frequency by approximately 8.8% relative to the GOCI-II L2 AOD products, mitigating the excessive masking over bright surfaces that is often encountered in physics-based AOD retrieval processes.