• Title/Summary/Keyword: Build Tools


Development of Metacognitive-Based Online Learning Tools Website for Effective Learning (효과적인 학습을 위한 메타인지 기반의 온라인 학습 도구 웹사이트 구축)

  • Lee, Hyun-June;Bean, Gi-Bum;Kim, Eun-Seo;Moon, Il-Young
• Journal of Practical Engineering Education / v.14 no.2 / pp.351-359 / 2022
  • This paper presents an online learning tool web application that helps learners study efficiently. It discusses how learners can improve their learning efficiency in three aspects: retrieval practice, systematization, and metacognition. Through this web service, learners can study with flash-card-based retrieval practice. A method of managing flash cards in a form similar to a directory-file system, using the composite pattern, is described. Learners can systematically organize their knowledge by converting flash cards into a mind map. The colors of the mind map vary with the learner's learning progress, so learners can easily recognize what they know and what they do not. Finally, building a deep learning model is proposed to improve the accuracy of the algorithm that determines and predicts learning progress.
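
The directory-file management of flash cards described above is a textbook use of the composite pattern: folders and cards share one interface, so operations recurse uniformly. A minimal sketch (class and method names are assumptions, not the paper's actual API):

```python
class CardNode:
    """Common interface for both folders and flash cards."""
    def count_cards(self):
        raise NotImplementedError

class FlashCard(CardNode):          # the "file" leaf
    def __init__(self, front, back):
        self.front, self.back = front, back
    def count_cards(self):
        return 1

class CardFolder(CardNode):         # the "directory" composite
    def __init__(self, name):
        self.name, self.children = name, []
    def add(self, node):
        self.children.append(node)
        return self
    def count_cards(self):
        # Folders delegate uniformly, whether a child is a card or a folder.
        return sum(child.count_cards() for child in self.children)

root = CardFolder("Biology")
root.add(FlashCard("Mitochondria?", "Powerhouse of the cell"))
chapter = CardFolder("Genetics")
chapter.add(FlashCard("DNA bases?", "A, T, G, C"))
root.add(chapter)
print(root.count_cards())  # 2
```

The same recursive delegation would carry progress statistics up the tree for the mind-map coloring.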

Research on the development of automated tools to de-identify personal information of data for AI learning - Based on video data - (인공지능 학습용 데이터의 개인정보 비식별화 자동화 도구 개발 연구 - 영상데이터기반 -)

  • Hyunju Lee;Seungyeob Lee;Byunghoon Jeon
• Journal of Platform Technology / v.11 no.3 / pp.56-67 / 2023
  • Recently, de-identification of personal information, a long-standing demand of the data-driven industry, was legally revised and specified in August 2020. This became the foundation for activating data, called the crude oil[2] of the fourth industrial era, in the industrial field. However, some are concerned about infringement of the basic rights of data subjects[3]. Accordingly, a development study was conducted on the Batch De-Identification Tool, an automated tool for de-identifying personal information. In this study, we first developed an image labeling tool to label human faces (eyes, nose, mouth) and car license plates at various resolutions in order to build training data. Second, an object recognition model was trained and run in an object recognition module to perform de-identification of personal information. The automated de-identification tool developed in this research shows the possibility of proactively eliminating privacy violations in online services. These results suggest that data-driven industries can maximize the value of data while balancing privacy and utilization.
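
The de-identification step itself, once the recognition module has produced bounding boxes for faces or license plates, amounts to destroying detail inside those regions. A minimal sketch, assuming boxes come in as (x, y, w, h) pixel tuples (the paper's actual pipeline and parameters are not specified here):

```python
import numpy as np

def pixelate_regions(image, boxes, block=8):
    """Pixelate each detected box by downsampling and re-expanding it."""
    out = image.copy()
    for x, y, w, h in boxes:
        region = out[y:y+h, x:x+w]
        # Keep one pixel per block, then repeat it, erasing identifying detail.
        small = region[::block, ::block]
        out[y:y+h, x:x+w] = np.repeat(np.repeat(small, block, axis=0),
                                      block, axis=1)[:h, :w]
    return out

frame = np.arange(64 * 64, dtype=np.uint8).reshape(64, 64)  # stand-in frame
anon = pixelate_regions(frame, [(8, 8, 16, 16)])            # one "face" box
```

For video data the same function would simply be applied per frame with that frame's detections.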


Identification of Cardiovascular Disease Based on Echocardiography and Electrocardiogram Data Using the Decision Tree Classification Approach

  • Tb Ai Munandar;Sumiati;Vidila Rosalina
• International Journal of Computer Science & Network Security / v.23 no.9 / pp.150-156 / 2023
  • For a doctor, diagnosing a patient's heart disease is not easy; it takes ability and extensive experience to accurately diagnose the type of heart disease from the factors present in the patient. Several studies have developed tools to identify types of heart disease, but most focus only on patient questionnaire answers and lab results, while the rest use only echocardiography data or only electrocardiogram results. This research tested how accurate the classification of heart disease is when two kinds of medical data, echocardiography and electrocardiogram, are used together. Three treatments were applied to the two data sources and analyzed using the decision tree approach: the first built a classification model from both echocardiography and electrocardiogram data, the second used only echocardiography data, and the third used only electrocardiogram data. The first treatment achieved higher accuracy than the second and third: 78.95%, 73.69%, and 50%, respectively. This shows that, to diagnose the type of a patient's heart disease, it is advisable to examine both kinds of medical records (echocardiography and electrocardiogram) to obtain accurate, accountable diagnosis results.
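
The three-treatment comparison can be re-created in miniature with any decision-tree implementation. The sketch below uses scikit-learn's DecisionTreeClassifier on synthetic random features standing in for the two data sources, so the accuracies will not match the paper's 78.95% / 73.69% / 50%; it only illustrates the experimental design of fitting the same model on combined versus single-source features:

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
echo = rng.uniform(-1, 1, size=(600, 1))    # stand-in echocardiography feature
ecg = rng.uniform(-1, 1, size=(600, 1))     # stand-in electrocardiogram feature
y = ((echo + ecg).ravel() > 0).astype(int)  # label depends on both sources

X_all = np.hstack([echo, ecg])
train, test = slice(0, 400), slice(400, 600)

def treatment(X):
    # One treatment = fit a tree on the given feature set, score held-out data.
    clf = DecisionTreeClassifier(max_depth=4, random_state=0)
    clf.fit(X[train], y[train])
    return clf.score(X[test], y[test])

acc_both, acc_echo, acc_ecg = treatment(X_all), treatment(echo), treatment(ecg)
print(acc_both, acc_echo, acc_ecg)
```

Because the synthetic label depends on both features, the combined treatment outperforms either single source, mirroring the paper's finding.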

The Design and Implementation of an Educational Computer Model for Semiconductor Manufacturing Courses (반도체 공정 교육을 위한 교육용 컴퓨터 모델 설계 및 구현)

  • Han, Young-Shin;Jeon, Dong-Hoon
• Journal of the Korea Society for Simulation / v.18 no.4 / pp.219-225 / 2009
  • The primary purpose of this study is to build computer models representing the overall flow of the complex and varied semiconductor wafer manufacturing process, and to implement an educational model that operates with a presentation tool showing device design. It is important that Korean semiconductor industries secure high competitiveness through efficient manufacturing management and continuous technology development. Models representing the FAB processes and the functions of each process were developed for the Seoul National University Semiconductor Research Center. The models are also expected to be effective as visual educational tools in Korean semiconductor industries, and to be useful for semiconductor process courses in academia. Their scalability and flexibility allow semiconductor manufacturers to customize the models and conduct simulation-based education, thereby saving budget.

Scientometrics-based R&D Topography Analysis to Identify Research Trends Related to Image Segmentation (이미지 분할(image segmentation) 관련 연구 동향 파악을 위한 과학계량학 기반 연구개발지형도 분석)

  • Young-Chan Kim;Byoung-Sam Jin;Young-Chul Bae
• Journal of the Korean Society of Industry Convergence / v.27 no.3 / pp.563-572 / 2024
  • Image processing and computer vision technologies are becoming increasingly important in a variety of application fields that require techniques and tools for sophisticated image analysis. In particular, image segmentation plays an important role in image analysis. In this study, to identify recent research trends in image segmentation techniques, we used the Web of Science (WoS) database to analyze the R&D topography based on the network structure of the author-keyword co-occurrence matrix. The analysis of research articles on image segmentation from 2015 to 2023 shows that R&D in this field is largely focused on four areas: (1) research on collecting and preprocessing image data to build higher-performance image segmentation models, (2) research on image segmentation using statistics-based models or machine learning algorithms, (3) research on image segmentation for medical image analysis, and (4) deep-learning-based image segmentation R&D. The scientometrics-based analysis performed in this study not only maps the trajectory of R&D related to image segmentation, but can also serve as a marker for future exploration in this dynamic field.
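
The author-keyword co-occurrence matrix at the heart of such a topography analysis is simple to build: each article contributes one count for every unordered pair of keywords it lists. A small sketch with made-up keywords (the study's actual WoS records are not reproduced here):

```python
from itertools import combinations
from collections import Counter

articles = [
    ["image segmentation", "deep learning", "medical imaging"],
    ["image segmentation", "deep learning"],
    ["image segmentation", "clustering"],
]

cooccur = Counter()
for keywords in articles:
    # Count each unordered keyword pair once per article.
    for a, b in combinations(sorted(set(keywords)), 2):
        cooccur[(a, b)] += 1

print(cooccur[("deep learning", "image segmentation")])  # 2
```

The resulting pair counts form the weighted edges of the keyword network whose structure the R&D topography visualizes.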

Tool for Supporting Design Pattern-Oriented Software Development (디자인 패턴지향 소프트웨어 개발 지원 도구)

  • Kim, Woon-Yong;Choi, Young-Keun
• Journal of KIISE:Software and Applications / v.29 no.8 / pp.555-564 / 2002
  • Design patterns are used to make well-defined design information reusable. By using design patterns, we can achieve reuse in the object-oriented paradigm, decrease development time, and improve software quality. Although design patterns are widely used in practice, most design pattern information is applied manually and inconsistently, so its utilization can be very low. Because the design patterns a designer applies do not appear explicitly in the software, it is sometimes difficult to track them. In this paper, we propose a tool supporting design-pattern-oriented software development. The tool supports design pattern management, software design, and automatic source code generation. Design pattern management provides functions for storing, managing, and analyzing existing design patterns and for registering new ones. Software design provides functions for designing software with UML and automatically generating design pattern elements. Using this design information, the system can automatically generate source code. By including in the design information the tracking of design pattern elements, which existing CASE tools do not cover, we can build a stable and efficient system for analyzing software, managing design patterns, and automatically generating source code.

Development of Green Template for Building Life Cycle Assessment Using BIM (건축물 LCA를 위한 BIM 친환경 템플릿 개발에 관한 연구)

  • Lee, Sung Woo;Tae, Sung Ho;Kim, Tae Hyoung;Roh, Seung Jun
• Spatial Information Research / v.23 no.1 / pp.1-8 / 2015
  • The purpose of this study is to develop a BIM template for major building materials that efficiently and quantitatively evaluates greenhouse gas emissions at the design stage. With the template, users can consider various environmental impacts without connecting separate environmental impact simulation tools, and users with no prior knowledge can perform life cycle assessment using the green template. For this study, the database reflected in the template was constructed considering environmental performance, and six environmental impact categories and PPS standard construction codes were analyzed for the major building materials identified in the literature. Based on this analysis, a Material Family was developed for each major building material. When users model with the established Families, evaluation results can be checked in the Revit BIM modeling program using Revit's schedule function, enabling decision-making on environmental performance through modeling. In addition, we propose creating a guideline for the steps required to build additional Families.
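
The template's quantitative idea in miniature: multiply the material quantities taken off the model (as Revit's schedule function would list them) by unit emission factors from the template database. All numbers below are illustrative placeholders, not the study's actual database values:

```python
emission_factors = {                 # kg CO2-eq per unit (assumed values)
    "ready-mixed concrete": 410.0,   # per m3
    "rebar steel": 3500.0,           # per ton
}
scheduled_quantities = {             # quantities taken off the BIM model
    "ready-mixed concrete": 120.0,
    "rebar steel": 8.5,
}
total_co2 = sum(scheduled_quantities[m] * emission_factors[m]
                for m in scheduled_quantities)
print(total_co2)  # 78950.0
```

Embedding the factors in each Material Family is what lets the schedule report emissions alongside quantities without an external LCA tool.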

Short-term Construction Investment Forecasting Model in Korea (건설투자(建設投資)의 단기예측모형(短期豫測模型) 비교(比較))

  • Kim, Kwan-young;Lee, Chang-soo
• KDI Journal of Economic Policy / v.14 no.1 / pp.121-145 / 1992
  • This paper examines the characteristics of time series data related to construction investment (stationarity and time series components such as secular trend, cyclical fluctuation, seasonal variation, and random change) and surveys the predictability, fitness, and explanatory power of independent variables in various models, in order to build a short-term construction investment forecasting model suitable for current economic circumstances. Unit root tests, autocorrelation coefficients, and spectral density function analysis show that the related time series do not have unit roots, fluctuate cyclically, and are largely explained by lagged variables. Moreover, for short-term construction investment forecasting it is very important to grasp the time-lag relation between the construction investment series and leading indicators such as building construction permits and the value of construction orders received. In chapter 3, we examine seven forecasting models: univariate time series models (ARIMA and a multiplicative linear trend model), multivariate time series models using leading indicators (a first-order autoregressive model, a vector autoregressive (VAR) model, and an error correction model), and multivariate time series models using National Accounts data (a simple reduced-form model disconnected from the simultaneous macroeconomic model, and a VAR model). These models are evaluated with four statistical tools: average absolute error, root mean square error, adjusted coefficient of determination, and the Durbin-Watson statistic. The analysis establishes two facts. First, multivariate models are more suitable than univariate models, in that the forecasting errors of multivariate models tend to be smaller. Second, the VAR model is superior to the other multivariate models: its average absolute prediction error and root mean square error are quite low, and its adjusted coefficient of determination is higher. This conclusion is reasonable when we consider that current construction investment has sustained overheated growth beyond its secular trend.
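
Two of the four evaluation tools, average absolute error and root mean square error, are direct formulas; a minimal sketch on a made-up forecast series (not the paper's data):

```python
import math

def mae(actual, forecast):
    # Average absolute error: mean of |actual - forecast|.
    return sum(abs(a - f) for a, f in zip(actual, forecast)) / len(actual)

def rmse(actual, forecast):
    # Root mean square error: penalizes large misses more than MAE.
    return math.sqrt(sum((a - f) ** 2
                         for a, f in zip(actual, forecast)) / len(actual))

actual   = [100.0, 105.0, 112.0, 120.0]
forecast = [ 98.0, 106.0, 110.0, 123.0]
print(mae(actual, forecast))   # 2.0
print(rmse(actual, forecast))  # ~2.121
```

Low values of both on out-of-sample data are what favored the VAR model in the comparison above.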


A Study on the Effects of Lifelong Educator's Emotional Intelligence on Job Performance (평생교육사의 감성지능이 직무성과에 미치는 영향)

  • Bang, Hee-Bong
• Journal of the Korea Academia-Industrial cooperation Society / v.17 no.6 / pp.53-60 / 2016
  • This study examined data from 300 lifelong educators in Daejeon and Chungnam Province to establish the mediated effects of emotional intelligence on job performance and the influential relation between them. A research model and hypotheses were constructed from preliminary research, and the main data were statistically analyzed with SPSS 20.0. The analysis covered frequency, primary factors, and reliability, using Varimax-rotation factor analysis and Cronbach's alpha coefficient, along with the Pearson correlation coefficient; the hypotheses were tested through regression analysis. The results showed that, first, emotional intelligence and all of its sub-factors have a positive influence on job performance. Second, job satisfaction, a factor in the relation between emotional intelligence and job performance, appeared to mediate the effects of self-awareness, self-management, social recognition, and relationship management. The results suggest that systematic training and development programs for improving emotional intelligence will be needed to build the competence of lifelong educators.
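
Cronbach's alpha, the reliability coefficient used above, follows directly from its formula: alpha = k/(k-1) * (1 - sum of item variances / variance of total scores). A sketch on made-up questionnaire scores (not the study's data):

```python
import statistics as st

def cronbach_alpha(items):
    # items: one inner list of respondent scores per questionnaire item.
    k = len(items)
    item_variances = sum(st.variance(scores) for scores in items)
    totals = [sum(col) for col in zip(*items)]  # per-respondent total score
    return k / (k - 1) * (1 - item_variances / st.variance(totals))

# Three items answered by four respondents (illustrative 5-point scores).
items = [[2, 4, 3, 5], [3, 4, 2, 5], [2, 5, 3, 4]]
alpha = cronbach_alpha(items)
print(round(alpha, 3))  # 0.892
```

Values of alpha near 1 indicate that the items measure one underlying construct consistently.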

Current status of Atomic and Molecular Data for Low-Temperature Plasmas

  • Yoon, Jung-Sik;Song, Mi-Young;Kwon, Deuk-Chul
• Proceedings of the Korean Vacuum Society Conference / 2015.08a / pp.64-64 / 2015
Control of plasma processing methodologies can only occur by obtaining a thorough understanding of the physical and chemical properties of plasmas. However, all plasma processes are currently used in the industry with an incomplete understanding of the coupled chemical and physical properties of the plasma involved. Thus, they are often 'non-predictive' and hence it is not possible to alter the manufacturing process without the risk of considerable product loss. Only a more comprehensive understanding of such processes will allow models of such plasmas to be constructed that in turn can be used to design the next generation of plasma reactors. Developing such models and gaining a detailed understanding of the physical and chemical mechanisms within plasma systems is intricately linked to our knowledge of the key interactions within the plasma and thus the status of the database for characterizing electron, ion and photon interactions with those atomic and molecular species within the plasma and knowledge of both the cross-sections and reaction rates for such collisions, both in the gaseous phase and on the surfaces of the plasma reactor. The compilation of databases required for understanding most plasmas remains inadequate. The spectroscopic database required for monitoring both technological and fusion plasmas and thence deriving fundamental quantities such as chemical composition, neutral, electron and ion temperatures is incomplete, with several gaps in our knowledge of many molecular spectra, particularly for radicals and excited (vibrational and electronic) species. However, the compilation of fundamental atomic and molecular data required for such plasma databases is rarely a coherent, planned research program; instead it is a parasitic process.
The plasma community is a rapacious user of atomic and molecular data but is increasingly faced with a deficit of data necessary to both interpret observations and build models that can be used to develop the next-generation plasma tools that will continue the scientific and technological progress of the late 20th and early 21st century. It is therefore necessary to both compile and curate the A&M data we do have and thence identify missing data needed by the plasma community (and other user communities). Such data may then be acquired using a mixture of benchmarking experiments and theoretical formalisms. However, equally important is the need for the scientific/technological community to recognize the need to support the value of such databases and the underlying fundamental A&M that populates them. This must be conveyed to funders who are currently attracted to more apparent high-profile projects.
