• Title/Summary/Keyword: Model Based Reasoning


Semantic Process Retrieval with Similarity Algorithms (유사도 알고리즘을 활용한 시맨틱 프로세스 검색방안)

  • Lee, Hong-Joo;Klein, Mark
    • Asia Pacific Journal of Information Systems / v.18 no.1 / pp.79-96 / 2008
  • One of the roles of Semantic Web services is to execute dynamic intra-organizational services, including the integration and interoperation of business processes. Since different organizations design their processes differently, retrieving similar semantic business processes is necessary to support inter-organizational collaboration. Most approaches for finding services that have certain features and support certain business processes have relied on some type of logical reasoning and exact matching. This paper presents our approach of using imprecise matching to expand the results of an exact matching engine when querying an OWL (Web Ontology Language) version of the MIT Process Handbook. The MIT Process Handbook is an electronic repository of best-practice business processes, intended to help people (1) redesign organizational processes, (2) invent new processes, and (3) share ideas about organizational practices. To use the MIT Process Handbook for process retrieval experiments, we had to export it into an OWL-based format: we model the Process Handbook meta-model in OWL and export the processes in the Handbook as instances of that meta-model. Next, we needed a sizable number of queries and their corresponding correct answers in the Process Handbook. Many previous studies devised artificial datasets composed of randomly generated numbers without real meaning and used subjective ratings for correct answers and for similarity values between processes. To generate a semantics-preserving test data set, we create, using mutation operators, 20 variants of each target process that are syntactically different but semantically equivalent; these variants serve as the correct answers for the target process. We devise diverse similarity algorithms based on the values of process attributes and on the structure of business processes. We use simple text-retrieval similarity algorithms such as TF-IDF and Levenshtein edit distance, and utilize a tree edit distance measure because semantic processes appear to have a graph structure. We also design similarity algorithms that consider structural aspects of a process, such as part processes, goals, and exceptions. Since relationships between a semantic process and its subcomponents can be identified, this information can be utilized in calculating similarities between processes; Dice's coefficient and the Jaccard similarity measure are used to calculate the degree of overlap between processes in diverse ways. We perform retrieval experiments to compare the performance of the devised similarity algorithms, measuring retrieval performance in terms of precision, recall, and the F-measure, the harmonic mean of precision and recall. The tree edit distance shows the poorest performance on all measures. TF-IDF and the method combining the TF-IDF measure with Levenshtein edit distance perform better than the other devised methods; these two measures focus on the similarity between process names and descriptions. In addition, we calculate a rank correlation coefficient, Kendall's tau-b, between the number of process mutations and the ranking of similarity values within each mutation set. In this experiment, similarity measures based on process structure, such as Dice's, Jaccard, and their derivatives, show higher coefficients than measures based on the values of process attributes. However, the Lev-TFIDF-JaccardAll measure, which considers process structure and attribute values together, performs reasonably well in both experiments. For retrieving semantic processes, it therefore seems better to consider diverse aspects of process similarity, such as process structure and the values of process attributes. We generate semantic process data and a corresponding retrieval test set from the MIT Process Handbook repository, suggest imprecise query algorithms that expand the results of an exact matching engine such as SPARQL, and compare the retrieval performance of the similarity algorithms (a minimal sketch of two of the overlap measures follows below). As limitations and future work, experiments with datasets from other domains are needed, and since the diverse measures yield many similarity values, better ways of identifying relevant processes may be found by applying these values simultaneously.
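
The paper's implementation is not included in this listing; as a rough illustration of the set-overlap measures the abstract names (Jaccard and Dice's coefficient) and the F-measure used for evaluation, here is a minimal Python sketch. The process attribute sets are hypothetical, not taken from the Process Handbook.

```python
# Minimal sketch (not the authors' code) of the overlap measures named
# in the abstract: Jaccard similarity, Dice's coefficient, and the
# F-measure used to evaluate retrieval performance.

def jaccard(a: set, b: set) -> float:
    """|A ∩ B| / |A ∪ B|; defined as 1.0 when both sets are empty."""
    return 1.0 if not (a or b) else len(a & b) / len(a | b)

def dice(a: set, b: set) -> float:
    """2·|A ∩ B| / (|A| + |B|); defined as 1.0 when both sets are empty."""
    return 1.0 if not (a or b) else 2 * len(a & b) / (len(a) + len(b))

def f_measure(precision: float, recall: float) -> float:
    """Harmonic mean of precision and recall."""
    return 0.0 if precision + recall == 0 else 2 * precision * recall / (precision + recall)

# Hypothetical subprocess-name sets for two business processes:
p1 = {"receive order", "check stock", "ship goods"}
p2 = {"receive order", "check stock", "send invoice"}
print(jaccard(p1, p2))      # 0.5
print(dice(p1, p2))         # 0.666...
print(f_measure(0.8, 0.5))  # 0.615...
```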

An Analysis of Cognitive Demands of Tasks in Elementary Mathematical Instruction: Focusing on 'Ratio and Proportion' (수학 교수·학습 과정에서 과제의 인지적 수준 분석 - 초등학교 '비와 비율' 단원을 중심으로 -)

  • Kim, Hee-Seong;Pang, Suk-Jeong
    • Journal of Educational Research in Mathematics / v.15 no.3 / pp.251-272 / 2005
  • Given that the cognitive demands of mathematical tasks can change during instruction, this study provides a detailed description of how tasks are set up and implemented in the classroom and of the classroom-based factors involved. As an exploratory, qualitative case study, four sixth-grade classrooms in which high-level tasks on ratio and proportion were used were videotaped and analyzed with regard to the patterns that emerged during task setup and implementation. Across 16 tasks, four kinds of patterns emerged: (a) maintenance of high-level cognitive demands (7 tasks), (b) decline into procedures without connection to meaning (1 task), (c) decline into unsystematic exploration (2 tasks), and (d) decline into insufficient exploration (6 tasks), in which only part of the meaning of a given task is addressed. The fourth pattern is particularly significant, mainly because previous studies have not identified it. Factors contributing to this pattern include private learning without reasonable explanation, a well-performed model presented at the beginning of a lesson, and mathematical concepts that are not clearly presented in the textbook. On the one hand, factors associated with the maintenance of high-level cognitive demands include improvising a task based on students' prior knowledge, scaffolding students' thinking, encouraging students to justify and explain their reasoning, using group activity appropriately, and rethinking the solution process. On the other hand, factors associated with the decline of high-level cognitive demands include too much or too little time, inappropriateness of a task for the given students, little interest in high-level thinking processes, and emphasis on the correct answer in place of its meaning. These factors may urge teachers to be sensitive to what should be focused on during their teaching practices in order to maintain high-level cognitive demands. To emphasize, cognitive demands are fixed neither by the task nor by the teacher, so we need to study them in the process of teaching and learning.

A Case Study on Instruction for Mathematically Gifted Children through The Application of Open-ended Problem Solving Tasks (개방형 과제를 활용한 수학 영재아 수업 사례 분석)

  • Park Hwa-Young;Kim Soo-Hwan
    • Communications of Mathematical Education / v.20 no.1 s.25 / pp.117-145 / 2006
  • Mathematically gifted children have creative curiosity about novel tasks deriving from their natural mathematical talents, aptitudes, intellectual abilities, and creativity. To nurture the creative thinking of such children more effectively, they need to be allowed to approach problem solving in various ways and to make strategic attempts. From this perspective, it is desirable to select open-ended, atypical problems as tasks for educational programs for gifted children. In this paper, various types of open-ended problems were framed, and learning activities based on them were adapted into a class for gifted children. The characteristics of the children's mathematical thinking ability and examples of their problem-solving strategies were then analyzed, in order to derive suggestions about elementary school classes for gifted children utilizing open-ended tasks. For this, a set of open-ended tasks consisting of 24 inquiries was structured, the teaching procedure was organized into three steps by appropriately adapting Renzulli's Enrichment Triad Model, and 24 class periods were conducted according to the teaching plan. One class period for each subcategory of mathematical thinking ability (intuitive insight, systematizing information, spatial formation/visualization, mathematical abstraction, mathematical reasoning, and reflective thinking) was chosen and analyzed with regard to the teaching and learning process and its products. From this analysis, anticipated problem-solving examples and creative problem-solving examples were suggested, and suggestions about teaching gifted children with open-ended tasks were deduced, based on the characteristics of the tasks, the role of the teacher, and reflection on the accessibility and fairness of the classes. The case study showed that a mathematics class for gifted children making use of open-ended tasks satisfied the students' curiosity and was effective in providing varied mathematical thinking experiences and forming habits of such thinking by establishing atypical mathematical problem-solving strategies. This study is meaningful in that it documents mathematically gifted children's problem-solving procedures on open-ended problems and attempts a concrete, practical case study of classes for gifted children, whereas most studies of gifted education in Korea focus on basic theories or are quantitative studies.

Development of Neuropsychological Model for Spatial Ability and Application to Light & Shadow Problem Solving Process (공간능력에 대한 신경과학적 모델 개발 및 빛과 그림자 문제 해결 과정에의 적용)

  • Shin, Jung-Yun;Yang, Il-Ho;Park, Sang-woo
    • Journal of The Korean Association For Science Education / v.41 no.5 / pp.371-390 / 2021
  • The purpose of this study is to develop a neuropsychological model for the factors of spatial ability and, based on this model, to divide the brain areas active in the light & shadow problem-solving process into those serving domain-general and domain-specific abilities. Twenty-four male college students participated in the study; their eye movements and electroencephalograms (EEG) were measured synchronously while they performed a spatial ability test and light & shadow tasks. The neuropsychological model for the spatial ability factors and the light & shadow problem-solving process was developed by integrating measurements of the participants' eye movements and brain activity areas with interview findings on their thoughts and strategies. The results are as follows. First, the spatial visualization and mental rotation factors mainly required activation of the parietal lobe, while the spatial orientation factor required activation of the frontal lobe. Second, in the light & shadow problem-solving process, participants used both spatial ability, as domain-general thinking, and the application of scientific principles, as domain-specific thinking. The brain activity patterns observed when participants inferred the shadow cast by a parallel light source, and when they inferred the shadow after the direction of the light changed, were similar to the neuropsychological model for the spatial visualization factor. The brain activity pattern observed when inferring an object from its shadow under light from multiple directions was similar to the model for the spatial orientation factor, and the pattern observed when inferring a shadow cast by a point light source was similar to the model for the spatial visualization factor. In addition, when solving the light & shadow tasks, the middle temporal gyrus, precentral gyrus, inferior frontal gyrus, and middle frontal gyrus were additionally activated; these areas are responsible for deductive reasoning, working memory, and planning for action.

A Study on the Development of a Competency-Based Intervention Course Curriculum of the Korean Academy of Sensory Integration (대한감각통합치료학회 역량기반 중재과정 교육커리큘럼 개발연구)

  • Namkung, Young;Kim, Kyeong-Mi;Kim, Misun;Lee, Jiyoung
    • The Journal of Korean Academy of Sensory Integration / v.17 no.3 / pp.26-45 / 2019
  • Objective: The purpose of this study is to develop educational goals, training content, and training methods for the intervention course of the Korean Academy of Sensory Integration (KASI), and to conduct a competency-based intervention course based on the competency model for sensory integration intervention. Methods: The study was conducted with occupational therapists who participated in the 2019 intervention course of KASI. In the first phase, educational needs were analyzed to set goals for the intervention course. In the second phase, a meeting of researchers drafted the intervention course education program and education methods, and the intervention course was conducted. In the third phase, changes in educational satisfaction and in performance level for each competency index, before and after the intervention course, were investigated. Results: Reflecting the results of the educational needs analysis, the educational goals were set as "learning and applying the clinical reasoning process of sensory integration intervention" and "intervening by applying the principles of sensory integration intervention." The competency-based intervention course was 42 hours long. The average educational satisfaction of course participants was 4.48±0.73, and that of supervisors was 3.92±0.71. In both groups, the most satisfying parts of the curriculum were the data-driven decision-making process and the intervention goal-setting lecture; satisfaction with one other component was lowest. Before and after the intervention course, there were significant changes in performance on the two behavioral indicators of analytic skill in the expertise competency cluster of the competency model. Conclusion: This study is meaningful in that it carried out a survey of educational needs, the development and implementation of an educational curriculum, and an education satisfaction survey through the systematic steps required for education development.

Features of sample concepts in the probability and statistics chapters of Korean mathematics textbooks of grades 1-12 (초.중.고등학교 확률과 통계 단원에 나타난 표본개념에 대한 분석)

  • Lee, Young-Ha;Shin, Sou-Yeong
    • Journal of Educational Research in Mathematics / v.21 no.4 / pp.327-344 / 2011
  • This study is a first step toward improving high school students' capability for statistical inference, such as obtaining and interpreting the confidence interval on the population mean that is currently taught in high school (its standard textbook form is shown after this entry). We suggest five underlying concepts, 'discerning contingency from inevitability', 'discerning induction from deduction', 'the likelihood principle', 'variability of a statistic', and 'the statistical model', which are necessary to appreciate statistical inference as a reliable arguing tool despite its occasional erroneous conclusions. We assume these five concepts should develop gradually over the school years, and we analyzed Korean mathematics textbooks of grades 1-12 accordingly. The following was found. Concerning the choice of an appropriate solution methodology for a given problem, no elementary textbook, and only a few high school textbooks, describe the difference between contingent and inevitable circumstances. Formal definitions of population and sample are not introduced until the high school grades, so the development of critical thinking about the reliability of inductive reasoning could not be observed. On the contrary, strong emphasis is placed on calculations over the sample data, without any inference about the population based on the sample. Rather than the representativeness of a random sample, more emphasis is placed on how to obtain a random sample. As a result, the fact that the random variability of a statistic calculated from a sample is inherited from the randomness of the sample could neither be noticed nor explained. No comparative description of statistical inference against mathematical (deductive) reasoning was found. Few explanations of the likelihood principle and its probabilistic applications appropriate to students' cognitive development were found. It was also hard to find explanations of the random variability of statistics or of the existence of sampling distributions. Such explanation is worthwhile because, although obtaining the sampling distribution of a particular statistic such as the sample mean is a very difficult job, merely noticing its existence may drastically change one's understanding of statistical inference.

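For reference (standard textbook material, not specific to the paper above), the confidence interval on the population mean discussed in the abstract takes the following form when the population standard deviation σ is known:

```latex
% 100(1-\alpha)\% confidence interval for the population mean \mu,
% from sample mean \bar{x}, known \sigma, and sample size n:
\bar{x} - z_{\alpha/2}\,\frac{\sigma}{\sqrt{n}} \;\le\; \mu \;\le\; \bar{x} + z_{\alpha/2}\,\frac{\sigma}{\sqrt{n}}
% e.g. z_{\alpha/2} \approx 1.96 for a 95\% interval
```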

Developing and Applying the Questionnaire to Measure Science Core Competencies Based on the 2015 Revised National Science Curriculum (2015 개정 과학과 교육과정에 기초한 과학과 핵심역량 조사 문항의 개발 및 적용)

  • Ha, Minsu;Park, HyunJu;Kim, Yong-Jin;Kang, Nam-Hwa;Oh, Phil Seok;Kim, Mi-Jum;Min, Jae-Sik;Lee, Yoonhyeong;Han, Hyo-Jeong;Kim, Moogyeong;Ko, Sung-Woo;Son, Mi-Hyun
    • Journal of The Korean Association For Science Education / v.38 no.4 / pp.495-504 / 2018
  • This study was conducted to develop items measuring scientific core competencies based on the statements of those competencies in the 2015 revised national science curriculum, and to establish the validity and reliability of the newly developed items. Based on the curriculum's descriptions of scientific reasoning, scientific inquiry ability, scientific problem-solving ability, scientific communication ability, and participation/lifelong learning in science, 25 items were developed by five science education experts. To explore the validity and reliability of the developed items, data were collected from 11,348 students in elementary, middle, and high schools nationwide. The content validity, substantive validity, internal structure validity, and generalization validity proposed by Messick (1995) were examined through various statistical tests. The MNSQ analysis showed no misfit among the 25 items. Confirmatory factor analysis using structural equation modeling revealed that the five-factor model was a suitable model. Differential item functioning (DIF) analyses by gender and school level revealed nonconforming DIF values in only two of 175 cases. A multivariate analysis of variance by gender and school level showed significant differences in test scores across school levels and genders, and the interaction effect was also significant. The science core competency assessment items based on the 2015 revised national science curriculum are thus valid from a psychometric point of view and can be used in the science education field.
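
As background (standard Rasch-family psychometrics, not taken from the paper): the mean-square (MNSQ) fit statistic referenced above is conventionally computed from standardized residuals of observed against model-expected responses. One common (outfit) form:

```latex
% Outfit mean-square for item i over N respondents (standard definition):
% x_{ni} = observed response, E_{ni} = model expectation, W_{ni} = model variance.
\mathrm{MNSQ}^{\mathrm{outfit}}_{i} \;=\; \frac{1}{N}\sum_{n=1}^{N}\frac{\left(x_{ni}-E_{ni}\right)^{2}}{W_{ni}}
```

Values near 1 indicate acceptable fit; flagging thresholds vary by application.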

A Study on Web-based Technology Valuation System (웹기반 지능형 기술가치평가 시스템에 관한 연구)

  • Sung, Tae-Eung;Jun, Seung-Pyo;Kim, Sang-Gook;Park, Hyun-Woo
    • Journal of Intelligence and Information Systems / v.23 no.1 / pp.23-46 / 2017
  • Although the valuation of specific companies or projects has been practiced since the early 2000s, centered on developed countries in North America and Europe, systems and methodologies for estimating the economic value of individual technologies or patents have been taken up only gradually. There do exist several online systems that qualitatively evaluate a technology's grade or patent rating, such as the KIBO's 'KTRS' and the Korea Invention Promotion Association's 'SMART 3.1'. However, a web-based technology valuation system, the 'STAR-Value system', which calculates quantitative values of a subject technology for purposes such as business feasibility analysis, investment attraction, and tax or litigation matters, has been officially launched and is now spreading. In this study, we introduce the methodologies and evaluation models, the reference information supporting these theories, and how the associated databases are utilized, focusing on the various modules and frameworks embedded in the STAR-Value system. In particular, there are six valuation methods, including the discounted cash flow (DCF) method, a representative income-approach method that values expected future economic income at present, and the relief-from-royalty method, which calculates the present value of royalties, treating the contribution of the subject technology to the business value created as the royalty rate. We examine how these models and their supporting information (technology life, corporate financial information, discount rate, industrial technology factors, etc.) can be used and linked in an intelligent manner. Based on classifications of the technology to be evaluated, such as the International Patent Classification (IPC) or the Korea Standard Industry Classification (KSIC), the STAR-Value system automatically returns metadata such as technology cycle time (TCT), sales growth rate and profitability data of similar companies or industry sectors, weighted average cost of capital (WACC), and indices of industrial technology factors, and applies adjustment factors to them, so that the calculated technology value has high reliability and objectivity. Furthermore, if information on the potential market size of the target technology and the market share of the commercializing entity draws on data-driven sources, or if estimated value ranges of similar technologies by industry sector are provided from evaluation cases already completed and accumulated in the database, the STAR-Value system is anticipated to present highly accurate value ranges in real time by intelligently linking its various support modules. Beyond the explanation of the various valuation models and their primary variables presented in this paper, the STAR-Value system is intended to be used more systematically and in a data-driven way through an optimal model selection guideline module, an intelligent technology value range reasoning module, a market share prediction module based on similar company selection, and the like (a minimal sketch of the income-approach calculations follows below). In addition, this research on the development and intelligence of the web-based STAR-Value system is significant in that it disseminates a web-based system that can validate and apply in practice the theoretical foundations of the technology valuation field, and it is expected to be utilized in various areas of technology commercialization.
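
The STAR-Value system's internals are not published in this abstract; as a hedged sketch of the two income-approach methods it names, the following Python computes a DCF present value and a relief-from-royalty value. The cash flows, revenue, royalty rate, and discount rate are hypothetical assumptions, not system data.

```python
# Hedged sketch of the income-approach methods named in the abstract:
# discounted cash flow (DCF) and relief-from-royalty. All figures are
# hypothetical; this is not the STAR-Value implementation.

def present_value(cash_flows, rate):
    """Sum of CF_t / (1 + r)^t for t = 1..n."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows, start=1))

# DCF: discount projected future income at, e.g., the WACC.
projected_cash_flows = [100.0, 120.0, 140.0, 150.0, 150.0]  # hypothetical
wacc = 0.12                                                 # hypothetical
print(round(present_value(projected_cash_flows, wacc), 2))

# Relief-from-royalty: discount the royalties the technology owner is
# "relieved" from paying, i.e. royalty_rate x projected revenue.
projected_revenue = [1000.0, 1100.0, 1200.0, 1250.0, 1300.0]  # hypothetical
royalty_rate = 0.03                                           # hypothetical
royalties = [royalty_rate * s for s in projected_revenue]
print(round(present_value(royalties, wacc), 2))
```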

Construction and Application of Intelligent Decision Support System through Defense Ontology - Application example of Air Force Logistics Situation Management System (국방 온톨로지를 통한 지능형 의사결정지원시스템 구축 및 활용 - 공군 군수상황관리체계 적용 사례)

  • Jo, Wongi;Kim, Hak-Jin
    • Journal of Intelligence and Information Systems
    • /
    • v.25 no.2
    • /
    • pp.77-97
    • /
    • 2019
  • The large amount of data that emerges from the hyper-connected environment of the Fourth Industrial Revolution is a major factor distinguishing it from existing production environments. This environment has the two-sided feature of producing data while using it, and the data so produced creates further value. Because of this massive scale, future information systems need to process more data, in terms of quantity, than existing systems; in terms of quality, the ability to process that data accurately, not merely in bulk, is required. In a small-scale information system a person can accurately understand the system and obtain the necessary information, but in complex systems that are difficult to understand accurately, acquiring the desired information becomes increasingly difficult. In other words, more accurate processing of large amounts of data has become a basic requirement for future information systems. This problem of efficient information-system performance can be addressed by building a semantic web, which enables varied information processing by expressing the collected data as an ontology that can be understood not only by people but also by computers. As in most other organizations, IT has been introduced in the military, and most work is now done through information systems. As these systems come to contain increasingly large amounts of data, efforts are needed to make them easier to use through better data utilization. An ontology-based system forms a large semantic network of data through connections with other systems, has a wide range of usable databases, and has the advantage of more precise and faster search through relationships between predefined concepts. In this paper, we propose a defense ontology as a method for effective data management and decision support. To judge its applicability and effectiveness in an actual system, we reconstructed the existing Air Force logistics situation management system as an ontology-based system. The existing system was built to strengthen commanders' and practitioners' management and control of the logistics situation by providing real-time information on maintenance and distribution, as the complicated logistics information system with its large amount of data had become difficult to use. It takes pre-specified information from the existing logistics system and displays it as web pages; however, little can be confirmed beyond the few pre-specified items, extending it with additional functions is time-consuming, and it is organized by category without a search function. Therefore, like the existing system, it can be utilized easily only by those who already know the system well. The ontology-based logistics situation management system is designed to provide intuitive visualization of the complex information in the existing logistics information system through the ontology. In constructing it, useful functions such as performance-based logistics support contract management and a component dictionary were further identified and included in the ontology. To confirm that the constructed ontology can be used for decision support, meaningful analysis functions were implemented, such as calculating aircraft utilization rates and querying performance-based logistics contracts. In particular, in contrast to past ontology studies that built static ontology databases, this study constructs in the ontology time-series data whose values change over time, such as the state of each aircraft by date, and it is confirmed that utilization rates can be calculated on various criteria through the constructed ontology. In addition, data related to performance-based logistics contracts, introduced as a new maintenance method for aircraft and other munitions, can be queried for various contents, and the performance indices used in such contracts are easy to calculate through reasoning and built-in functions. We also propose a new performance index that complements the limitations of the currently applied indicators and calculate it through the ontology, further confirming the usability of the constructed ontology. Finally, the failure rate and reliability of each component, including MTBF data for the selected fault-tolerant items, can be calculated from actual part-consumption records, and from these the mission reliability and system reliability are computed (a minimal sketch of this calculation follows below). To confirm the usability of the constructed ontology-based logistics situation management system, the proposed system was evaluated with the Technology Acceptance Model (TAM), a representative model for measuring technology acceptance, and was found to be more useful and convenient than the existing system.
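
The abstract mentions computing failure rates, MTBF-based reliability, and mission/system reliability from the ontology. As a minimal sketch of such a calculation under the usual exponential-lifetime assumption (the MTBF values and mission length below are hypothetical, not drawn from the system):

```python
import math

# Hedged sketch of MTBF-based reliability, assuming exponentially
# distributed lifetimes: R(t) = exp(-t / MTBF). For components in
# series, system reliability is the product of component reliabilities.
# All numbers are hypothetical, not from the paper's ontology.

def reliability(mtbf_hours: float, mission_hours: float) -> float:
    return math.exp(-mission_hours / mtbf_hours)

component_mtbfs = [5000.0, 8000.0, 12000.0]  # hypothetical MTBFs (hours)
mission = 10.0                               # hypothetical mission length (hours)

system_reliability = math.prod(reliability(m, mission) for m in component_mtbfs)
print(round(system_reliability, 4))  # ~0.9959
```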

A Comparison between Korean and American Sixth Grade Students in Mathematical Creativity Ability and Mathematical Thinking Ability (한국과 미국의 초등학교 6학년군 학생들의 수학 창의성과 수학적 사고력의 비교)

  • Lee, Kang-Sup;Hwang, Dong-Jou
    • Communications of Mathematical Education / v.25 no.1 / pp.245-259 / 2011
  • In this study, an instrument testing mathematical creative problem-solving ability was used to examine differences between Korean and American sixth-grade students in mathematical creativity and mathematical thinking ability. The instrument consists of 9 items. The participants were 212 Korean and 148 American students. Analyses in SPSS were carried out to verify validity and reliability: reliability (Cronbach's α) was 0.9047 for mathematical creativity and 0.9299 for mathematical thinking ability, which satisfied the internal validity evaluation of the test items (a minimal sketch of this coefficient's computation follows below). Internal validity was analyzed with BIGSTEPS, based on Rasch's one-parameter item response model. The results of this study can serve as a foundation for understanding differences between Korean and American students in mathematical creativity and mathematical thinking ability. In particular, we obtained some information on the mathematical creativity of American fifth- to seventh-grade students.
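
The abstract reports Cronbach's α values of 0.9047 and 0.9299. As a minimal sketch of how that reliability coefficient is computed (the score matrix below is randomly generated for illustration, not the study's data):

```python
import numpy as np

# Hedged sketch of Cronbach's alpha, the reliability coefficient the
# abstract reports. Formula:
#   alpha = k/(k-1) * (1 - sum(item variances) / variance(total score))
# The demo score matrix is random, so the resulting alpha will be low;
# it only illustrates the computation.

def cronbach_alpha(scores: np.ndarray) -> float:
    """scores: respondents x items matrix of item scores."""
    k = scores.shape[1]
    item_variances = scores.var(axis=0, ddof=1).sum()
    total_variance = scores.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances / total_variance)

rng = np.random.default_rng(0)
demo_scores = rng.integers(0, 5, size=(360, 9)).astype(float)  # 9 items, as in the instrument
print(round(cronbach_alpha(demo_scores), 3))
```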