Title/Summary/Keyword: Informal Language

A Comparative Pedagogical Approach to Lifelong Education: Possibilities and Limitations (평생교육의 비교교육학적 접근: 가능성과 한계)

  • Choi, DonMin
    • Korean Journal of Comparative Education / v.28 no.3 / pp.291-307 / 2018
  • As the value of lifelong learning grows, states are making efforts to build systems of lifelong learning. In line with this trend, this paper compares participation rates in lifelong learning, learning outcomes, learning support infrastructure, support for learning expenses, and recognition of lifelong learning. For the comparative pedagogical approach, Bray and Thomas's cube was utilized, with its dimensions of geographic/locational levels, nonlocational demographic groups, and aspects of education and society. The participation rate in lifelong learning in Korea was 34.4% in 2017, lower than the OECD average of 46%. The competency scores of Korean adults were also below the OECD national averages in the PIAAC survey, which measures adult competencies in language ability, numeracy, and computer-based problem solving. To recognize prior learning, EU countries have developed the EQF to evaluate all non-formal and informal learning outcomes, while Korea recognizes qualifications as credits under the Academic Credit Bank System. International comparisons of lifelong learning can serve as an important tool for diagnosing the actual conditions of lifelong learning in a country and for establishing future lifelong learning policies. It should therefore be kept in mind that the comparative pedagogical approach to lifelong learning differs according to historical context, socioeconomic characteristics, and population dynamics, including the formation process and characteristics of the modern state.

Using the METHONTOLOGY Approach to a Graduation Screen Ontology Development: An Experiential Investigation of the METHONTOLOGY Framework

  • Park, Jin-Soo;Sung, Ki-Moon;Moon, Se-Won
    • Asia Pacific Journal of Information Systems / v.20 no.2 / pp.125-155 / 2010
  • Ontologies have been adopted in various business and scientific communities as a key component of the Semantic Web. Despite the increasing importance of ontologies, ontology developers still perceive construction tasks as a challenge. A clearly defined and well-structured methodology can reduce the time required to develop an ontology and increase the probability of a project's success. However, no reliable knowledge-engineering methodology for ontology development currently exists; every methodology has been tailored toward the development of a particular ontology. In this study, we developed a Graduation Screen Ontology (GSO). The graduation screen domain was chosen for several reasons. First, the graduation screen process is a complicated task requiring a complex reasoning process. Second, GSO may be reused by other universities because the graduation screen process is similar at most universities. Finally, GSO can be built within a given period because the size of the selected domain is reasonable. Since no standard ontology development methodology exists, one of the existing methodologies had to be chosen. The most important considerations in selecting the methodology for GSO were whether it can be applied to a new domain, whether it covers a broad set of development tasks, and whether it explains each development task sufficiently. We evaluated various ontology development methodologies based on the evaluation framework proposed by Gómez-Pérez et al. and concluded that METHONTOLOGY was the most applicable to building GSO for this study. METHONTOLOGY was derived from the experience of developing the Chemical Ontology at the Polytechnic University of Madrid by Fernández-López et al. and is regarded as the most mature ontology development methodology. It describes a very detailed approach for building an ontology at the conceptual level under a centralized development environment. The methodology consists of three broad processes, each containing specific sub-processes: management (scheduling, control, and quality assurance); development (specification, conceptualization, formalization, implementation, and maintenance); and support (knowledge acquisition, evaluation, documentation, configuration management, and integration). An ontology development language and tool for GSO construction also had to be selected. We adopted OWL-DL as the ontology development language because of its computational support for consistency checking and classification, which is crucial in developing coherent and useful ontological models for very complex domains. In addition, Protégé-OWL was chosen as the ontology development tool because it is supported by METHONTOLOGY and is widely used thanks to its platform-independent characteristics. Based on the researchers' GSO development experience, some issues relating to METHONTOLOGY, OWL-DL, and Protégé-OWL were identified. We focus on presenting the drawbacks of METHONTOLOGY and discussing how each weakness could be addressed. First, METHONTOLOGY insists that domain experts without ontology construction experience can easily build ontologies; however, it is still difficult for such experts to develop a sophisticated ontology, especially if they lack background knowledge related to the ontology. Second, METHONTOLOGY does not include a "feasibility study" stage; this pre-development stage helps developers ensure not only that a planned ontology is necessary and valuable enough to justify an ontology-building project, but also that the project is likely to succeed. Third, METHONTOLOGY excludes any explanation of the use and integration of existing ontologies; if an additional stage for considering reuse were introduced, developers could share the benefits of reuse. Fourth, METHONTOLOGY fails to address the importance of collaboration; the methodology needs to explain how to allocate specific tasks to different developer groups and how to combine the results once the assigned jobs are completed. Fifth, METHONTOLOGY does not sufficiently specify the methods and techniques to be applied in the conceptualization stage; introducing methods for extracting concepts from multiple informal sources or for identifying relations could enhance the quality of ontologies. Sixth, METHONTOLOGY does not provide an evaluation process to confirm whether WebODE correctly transforms a conceptual ontology into a formal ontology, nor does it guarantee that the outcomes of the conceptualization stage are fully reflected in the implementation stage. Seventh, METHONTOLOGY needs criteria for user evaluation of the constructed ontology in actual user environments. Eighth, although METHONTOLOGY allows continual knowledge acquisition during the development process, consistent updates can be difficult for developers. Ninth, METHONTOLOGY demands that developers complete various documents during the conceptualization stage and can thus be considered a heavy methodology; adopting an agile approach would reinforce active communication among developers and reduce the documentation burden. Finally, this study concludes with contributions and practical implications. No previous research has addressed issues with METHONTOLOGY from empirical experience; this study is an initial attempt, and several lessons learned from the development experience are discussed. The study also offers insights for researchers who want to design a more advanced ontology development methodology.
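To make the OWL constructs discussed in the abstract above concrete, here is a minimal sketch in Python using rdflib of the kind of classes and properties a graduation screening ontology might declare. The namespace and all names (Student, Course, hasCompleted, credits) are hypothetical illustrations, not the actual GSO vocabulary; the paper itself built GSO interactively in Protégé-OWL rather than in code.

```python
# Hypothetical sketch of a graduation-screening ontology fragment in OWL.
# Names and namespace are illustrative, not the paper's actual GSO terms.
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import OWL, RDF, RDFS, XSD

GSO = Namespace("http://example.org/gso#")  # placeholder namespace

g = Graph()
g.bind("gso", GSO)
g.bind("owl", OWL)

# Classes: students and courses
g.add((GSO.Student, RDF.type, OWL.Class))
g.add((GSO.Course, RDF.type, OWL.Class))

# Object property: a student has completed a course
g.add((GSO.hasCompleted, RDF.type, OWL.ObjectProperty))
g.add((GSO.hasCompleted, RDFS.domain, GSO.Student))
g.add((GSO.hasCompleted, RDFS.range, GSO.Course))

# Datatype property: credits earned for a course
g.add((GSO.credits, RDF.type, OWL.DatatypeProperty))
g.add((GSO.credits, RDFS.domain, GSO.Course))
g.add((GSO.credits, RDFS.range, XSD.integer))

# An individual, as would be asserted during knowledge acquisition
g.add((GSO.AlgorithmsCourse, RDF.type, GSO.Course))
g.add((GSO.AlgorithmsCourse, GSO.credits, Literal(3, datatype=XSD.integer)))

print(g.serialize(format="turtle"))
```

A reasoner over such OWL-DL axioms can then perform the consistency checking and classification the abstract cites as the reason for choosing OWL.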

A Study on the Impact of Employee's Person-Environment Fit and Information Systems Acceptance Factors on Performance: The Mediating Role of Social Capital (조직구성원의 개인-환경적합성과 정보시스템 수용요인이 성과에 미치는 영향에 관한 연구: 사회자본의 매개역할)

  • Heo, Myung-Sook;Cheon, Myun-Joong
    • Asia Pacific Journal of Information Systems / v.19 no.2 / pp.1-42 / 2009
  • In a knowledge-based society, a firm's intellectual capital represents its wealth of ideas and ability to innovate, indispensable elements for future growth, and is therefore recognized as the organization's most valuable asset. As an intangible asset, intellectual capital is the basis on which firms can foster sustainable competitive advantage. One essential component of intellectual capital is social capital, which refers to individual members' ability to build the firm's social networks. As such, social capital is a powerful concept for understanding the emergence, growth, and functioning of network linkages: the more social capital a firm is equipped with, the more successfully it can establish new social networks, and by providing a shared context for social interactions, social capital facilitates the creation of new linkages in the organizational setting. The concept of "person-environment fit" has long been prevalent in the management literature. The fit is grounded in the interaction theory of behavior, a perspective with a fairly long theoretical tradition beginning with the proposition that behavior is a function of the person and the environment. This view asserts that neither personal characteristics nor the situation alone adequately explains the variance in behavioral and attitudinal variables; rather, the interaction of personal and situational variables accounts for the greatest variance. Accordingly, person-environment fit is defined as the degree of congruence or match between personal and situational variables in producing significant selected outcomes. In addition, information systems acceptance factors enable organizations to build large electronic communities with huge knowledge resources. For example, an intranet helps build knowledge-based communities, which in turn increase employee communication and collaboration. This is vital because only through active communication and collaborative effort can employees build a common basis for the shared understandings that evolve into stronger, trust-embedded relationships. To this end, the electronic communication network makes social networks more amenable to the rapid mobilization and assimilation of knowledge assets in organizations. The purpose of this study is to investigate: (1) the impact of person-environment fit (person-job fit, person-person fit, person-group fit, person-organization fit) on social capital (network ties, trust, norm, shared language); (2) the impact of information systems acceptance factors (availability, perceived usefulness, perceived ease of use) on social capital; (3) the impact of social capital on personal performance (work performance, work satisfaction); and (4) the mediating role of social capital between person-environment fit and personal performance. In general, social capital is defined as the aggregate of actual or potential resources that lead to the possession of a durable network. The concept was originally developed by sociologists for analysis in social contexts; recently, it has become an increasingly popular term in the management literature for describing organizational phenomena outside the realm of transaction costs. Since both environmental factors and information systems acceptance factors affect the network of an employee's relationships, this study proposes that these two factors significantly influence employees' social capital. Person-environment fit refers to the alignment between the characteristics of people and their environments, resulting in positive outcomes for both individuals and organizations, while information systems acceptance factors have rather direct influences on employees' social networks. Based on this theoretical framework, namely person-environment fit and social capital theory, we develop our research model and hypotheses. The results of the data analysis, based on 458 employee cases, are as follows. First, both person-environment fit (person-job fit, person-person fit, person-group fit, person-organization fit) and information systems acceptance factors (availability, perceived usefulness, perceived ease of use) significantly influence social capital (network ties, norm, shared language); moreover, person-environment fit is a stronger influence on social capital than information systems acceptance factors. Second, social capital is a significant factor in both work satisfaction and work performance. Finally, social capital partly mediates between person-environment fit and personal performance. Our findings suggest that it is vital for firms to understand the importance of the environmental factors affecting employees' social capital and, accordingly, to recognize the importance of information systems acceptance factors in building employees' formal and informal relationships. Firms also need to recognize the mediating role of social capital in boosting personal performance. Limitations that arose in the course of the research and suggestions for future research directions are also discussed.
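The abstract's final hypothesis, that social capital partially mediates between person-environment fit and personal performance, is the kind of claim commonly checked with a regression-based mediation test. Below is a minimal sketch in Python with statsmodels following the Baron and Kenny steps; the data file and column names (pe_fit, social_capital, performance) are hypothetical, and the paper's own estimation procedure is not stated in the abstract.

```python
# Hypothetical Baron & Kenny style mediation check, not the paper's analysis.
# Does social capital mediate the effect of person-environment fit on performance?
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("survey.csv")  # assumed file with one row per employee

# Step 1: the predictor must relate to the outcome (total effect).
total = smf.ols("performance ~ pe_fit", data=df).fit()

# Step 2: the predictor must relate to the mediator.
a_path = smf.ols("social_capital ~ pe_fit", data=df).fit()

# Step 3: with the mediator included, the direct effect should shrink
# (partial mediation) or vanish (full mediation).
direct = smf.ols("performance ~ pe_fit + social_capital", data=df).fit()

print("total effect:   ", total.params["pe_fit"])
print("direct effect:  ", direct.params["pe_fit"])
print("mediator effect:", direct.params["social_capital"])
```

A finding of partial mediation, as reported above, would correspond to a direct effect that remains significant but is smaller than the total effect.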

Selective Word Embedding for Sentence Classification by Considering Information Gain and Word Similarity (문장 분류를 위한 정보 이득 및 유사도에 따른 단어 제거와 선택적 단어 임베딩 방안)

  • Lee, Min Seok;Yang, Seok Woo;Lee, Hong Joo
    • Journal of Intelligence and Information Systems / v.25 no.4 / pp.105-122 / 2019
  • Dimensionality reduction is one of the methods for handling big data in text mining. In reducing dimensionality, the density of the data should be considered, since it has a significant influence on sentence classification performance: high-dimensional data requires heavy computation and can lead to high computational cost and overfitting in the model. A dimension reduction process is thus necessary to improve model performance. Diverse methods have been proposed, ranging from merely reducing noise in the data, such as misspellings or informal text, to incorporating semantic and syntactic information. Moreover, the representation and selection of text features affect classifier performance in sentence classification, one of the fields of Natural Language Processing. The common goal of dimension reduction is to find a latent space that is representative of the raw data in the observation space. Existing approaches use various algorithms for dimensionality reduction, such as feature extraction and feature selection. In addition to these algorithms, word embeddings, low-dimensional vector-space representations of words that capture semantic and syntactic information, are also utilized. To improve performance, recent studies have suggested methods in which the word dictionary is modified according to the positive and negative scores of pre-defined words. The basic idea of this study is that similar words have similar vector representations: once a feature selection algorithm identifies unimportant words, we assume that words similar to them also have little impact on sentence classification. This study proposes two approaches to more accurate classification, conducting selective word elimination under specific rules and constructing word embeddings based on Word2Vec. To select words of low importance from the text, we use the information gain measure to assess importance and cosine similarity to search for similar words. First, we eliminate words with comparatively low information gain values from the raw text and form word embeddings. Second, we additionally select words that are similar to those with low information gain values and build word embeddings. In the end, the filtered text and word embeddings are applied to deep learning models: a Convolutional Neural Network and an attention-based bidirectional LSTM. This study uses customer reviews of Kindle products on Amazon.com, IMDB, and Yelp as datasets and classifies each dataset with the deep learning models. Reviews that received more than five helpful votes, with a helpful-vote ratio over 70%, were classified as helpful reviews. Since Yelp shows only the number of helpful votes, we used random sampling to extract 100,000 reviews with more than five helpful votes from among 750,000 reviews. Minimal preprocessing, such as removing numbers and special characters, was applied to each dataset. To evaluate the proposed methods, we compared their performance against Word2Vec and GloVe embeddings that used all the words, and showed that one of the proposed methods outperforms the all-word embeddings: removing unimportant words improves performance, although removing too many words lowers it. For future research, diverse preprocessing approaches and in-depth analysis of word co-occurrence should be considered for measuring similarity between words. Also, we applied the proposed method only with Word2Vec; other embedding methods, such as GloVe, fastText, and ELMo, can be combined with the proposed elimination methods, making it possible to explore the combinations of word embedding methods and elimination methods.
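A rough sketch of the selective elimination idea in the abstract above, assuming scikit-learn and gensim: information gain is approximated here with mutual information over a bag-of-words matrix, and words whose Word2Vec vectors are highly similar to the low-scoring words are dropped as well. The toy corpus, thresholds, and parameters are illustrative assumptions, not the paper's settings.

```python
# Hypothetical sketch: drop low-information-gain words, then also drop words
# similar to them in Word2Vec space, before feeding text to a classifier.
from gensim.models import Word2Vec
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_selection import mutual_info_classif

texts = ["the plot was wonderful", "boring and slow", "great acting overall"]
labels = [1, 0, 1]  # toy helpful/unhelpful labels

# Per-word importance: mutual information on a bag-of-words matrix
# (an approximation of the paper's information gain measure).
vec = CountVectorizer()
X = vec.fit_transform(texts)
ig = mutual_info_classif(X, labels, discrete_features=True)
vocab = vec.get_feature_names_out()

low_ig = {w for w, score in zip(vocab, ig) if score < 0.05}  # assumed cutoff

# Expand the elimination set with cosine-similar words in Word2Vec space.
w2v = Word2Vec([t.split() for t in texts], vector_size=50, min_count=1, seed=0)
expanded = set(low_ig)
for w in low_ig:
    if w in w2v.wv:
        expanded.update(s for s, sim in w2v.wv.most_similar(w, topn=3) if sim > 0.8)

filtered = [" ".join(t for t in s.split() if t not in expanded) for s in texts]
print(filtered)  # the filtered text would then feed the CNN / BiLSTM classifiers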

Knowledge Extraction Methodology and Framework from Wikipedia Articles for Construction of Knowledge-Base (지식베이스 구축을 위한 한국어 위키피디아의 학습 기반 지식추출 방법론 및 플랫폼 연구)

  • Kim, JaeHun;Lee, Myungjin
    • Journal of Intelligence and Information Systems / v.25 no.1 / pp.43-61 / 2019
  • The development of artificial intelligence technologies has accelerated with the Fourth Industrial Revolution, and AI research has been actively conducted in a variety of fields such as autonomous vehicles, natural language processing, and robotics. Since the 1950s, this research has focused on solving cognitive problems related to human intelligence, such as learning and problem solving, and the field has achieved more technological advances than ever thanks to recent interest in the technology and research on various algorithms. The knowledge-based system is a sub-domain of artificial intelligence that aims to enable AI agents to make decisions using machine-readable, processable knowledge constructed from the complex and informal human knowledge and rules of various fields. A knowledge base is used to optimize information collection, organization, and retrieval, and is increasingly used together with statistical artificial intelligence such as machine learning. More recently, knowledge bases have aimed to express, publish, and share knowledge on the web by describing and connecting web resources such as pages and data, and they support intelligent processing in various areas of artificial intelligence, such as the question-answering systems of smart speakers. However, building a useful knowledge base is time-consuming and still requires a lot of expert effort. In recent years, much research and technology in knowledge-based artificial intelligence has used DBpedia, one of the largest knowledge bases, which aims to extract structured content from the varied information in Wikipedia. DBpedia contains various information extracted from Wikipedia, such as titles, categories, and links, but the most useful knowledge comes from Wikipedia infoboxes, user-created summaries of an article's unifying aspects. This knowledge is created by the mapping rules between infobox structures and the DBpedia ontology schema defined in the DBpedia Extraction Framework. In this way, DBpedia can expect high reliability in terms of knowledge accuracy, because the knowledge is generated from semi-structured infobox data created by users. However, since only about 50% of all pages in Korean Wikipedia contain an infobox, DBpedia has limitations in terms of knowledge scalability. This paper proposes a method to extract knowledge from text documents according to the ontology schema using machine learning. To demonstrate the appropriateness of this method, we describe a knowledge extraction model that follows the DBpedia ontology schema by learning from Wikipedia infoboxes. Our knowledge extraction model consists of three steps: classifying documents into ontology classes, classifying the appropriate sentences from which to extract triples, and selecting values and transforming them into the RDF triple structure. The structure of Wikipedia infoboxes is defined by infobox templates that provide standardized information across related articles, and the DBpedia ontology schema can be mapped to these templates. Based on these mapping relations, we classify the input document into infobox categories, which correspond to ontology classes. After determining the classification of the input document, we classify the appropriate sentences according to the attributes belonging to that class. Finally, we extract knowledge from the sentences classified as appropriate and convert it into triples. To train the models, we generated a training dataset from a Wikipedia dump by adding BIO tags to sentences, and trained about 200 classes and about 2,500 relations for knowledge extraction. Furthermore, we conducted comparative experiments with CRF and Bi-LSTM-CRF models for the knowledge extraction process. Through the proposed process, structured knowledge can be utilized by extracting knowledge according to the ontology schema from text documents. In addition, this methodology can significantly reduce the effort required of experts to construct instances according to the ontology schema.
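As an illustration of the BIO-tagged training data and the CRF baseline mentioned in the abstract above, here is a minimal sketch using sklearn-crfsuite in Python. The English example sentence, the feature set, and the hypothetical birthPlace tag are illustrative only; the paper works on Korean Wikipedia text with DBpedia ontology relations and a richer feature set.

```python
# Hypothetical sketch of BIO-tagged training data and a CRF baseline for
# attribute-value extraction; sentence, features, and tags are illustrative.
import sklearn_crfsuite

# One tokenized sentence with BIO tags marking the value of an assumed
# "birthPlace" attribute, as might be derived from an infobox-mapped article.
sentence = ["Kim", "was", "born", "in", "Seoul", "."]
tags = ["O", "O", "O", "O", "B-birthPlace", "O"]

def token_features(tokens, i):
    # Simple per-token features; real systems add context windows, POS, etc.
    return {
        "word": tokens[i].lower(),
        "is_title": tokens[i].istitle(),
        "prev": tokens[i - 1].lower() if i > 0 else "<s>",
        "next": tokens[i + 1].lower() if i < len(tokens) - 1 else "</s>",
    }

X_train = [[token_features(sentence, i) for i in range(len(sentence))]]
y_train = [tags]

crf = sklearn_crfsuite.CRF(algorithm="lbfgs", max_iterations=50)
crf.fit(X_train, y_train)
print(crf.predict(X_train))  # per-token tags, e.g. [['O', ..., 'B-birthPlace', 'O']]
```

The predicted value spans would then be converted into RDF triples (subject, ontology property, extracted value) in the final step of the pipeline described above.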