• Title/Summary/Keyword: Technology-specific Training


A Study on the Method of Christian Youth Education for the Improvement of Relationship (관계성 향상을 위한 기독 청년교육 방안 연구)

  • Park, Eunhye
    • Journal of Christian Education in Korea / v.71 / pp.121-154 / 2022
  • This study summarizes relationships in youth from the perspectives of developmental psychology, university education, faith, and spirituality, and proposes a Christian youth education organized by the elements of education, since forming and improving relationships is a major developmental task of youth. A relationship forms when people are connected to another person and a community, take interest in each other, feel a sense of bond and belonging, and maintain a stable and satisfying connection. This is not a skill or technique but a matter of life attitude and values, and it requires continuous learning and training. The various developmental tasks of youth have much in common with relationships. Relationships positively affect the lives of young people, including satisfaction with and adaptation to college life in early adulthood, personality, and career decisions. Relationships are also central to faith, because human existence and faith are defined and formed through relationships, and the relationship with the community and with others plays an important role in spiritual development, the search for the meaning of life, and inner growth. Regarding learners and the educational environment, the study suggests understanding learners' desire for relationships, the generation they live in, and the educational environment in which relationships among young people arise. Regarding teachers, it suggests that teachers should take on changing roles such as facilitator, guide, manager, and mentor. For educational purpose and content, it suggests that relationship should be the ultimate purpose, and it presents the corresponding educational content as three types of relationships together with the main content to be covered in each. Regarding educational method, it proposes selecting learner-centered group learning methods that induce communication and active participation of learners so as to produce interaction, considering the other elements of education according to the relational content in the cognitive, emotional, and behavioral dimensions. Regarding educational results and evaluation, it proposes confirming that what was planned during the design stage is effectively carried out in actual education, evaluating with diverse methods and from diverse aspects, and summarizing the evaluation results for concrete application.

A Study on the Individual Radiation Exposure of Medical Facility Nuclear Workers by Job (의료기관 핵의학 종사자의 직무 별 개인피폭선량에 관한 연구)

  • Kang, Chun-Goo;Oh, Ki-Baek;Park, Hoon-Hee;Oh, Shin-Hyun;Park, Min-Soo;Kim, Jung-Yul;Lee, Jin-Kyu;Na, Soo-Kyung;Kim, Jae-Sam;Lee, Chang-Ho
    • The Korean Journal of Nuclear Medicine Technology / v.14 no.2 / pp.9-16 / 2010
  • Purpose: With the increasing medical use of radiation and radioactive isotopes, the risk of radiation exposure needs to be managed more effectively. This study identifies and analyzes the individual radiation exposure of radiation workers in medical facilities by specific job, in order to raise awareness of radiation hazards and to support safety and exposure management for these workers. Materials and Methods: The subjects were 40 nuclear medicine workers at medical institutions who were classified as radiation workers and who wore personal radiation dosimeters regularly and continuously from 1 January 2007 to 31 December 2009. Average annual deep-dose equivalents over the three years were identified and analyzed by imaging room, age, dose range, occupation, and detailed task; frequency analysis and ANOVA were performed. Results: By imaging room, the PET and PET/CT rooms showed the highest annual doses over the three years, 11.06~12.62 mSv, and the gamma camera injection room was also high at 11.72 mSv. By occupation, clinical technologists had the highest average annual dose at 8.92 mSv, followed in order by radiological technologists at 7.50 mSv, nurses at 2.61 mSv, researchers at 0.69 mSv, receptionists at 0.48 mSv, and physicians at 0.35 mSv. By detailed task, PET and PET/CT work showed the highest average annual dose at 12.09 mSv, followed by gamma camera injection work at 11.72 mSv, gamma camera imaging work at 4.92 mSv, treatment and safety management at 2.98 mSv, nursing work at 2.96 mSv, administration at 1.72 mSv, image analysis at 0.92 mSv, reading at 0.54 mSv, reception at 0.51 mSv, and research at 0.29 mSv. In the dose distribution, 15 subjects (37.5%) received less than 1 mSv, 5 (12.5%) received 1.01~5.0 mSv, 14 (35.0%) received 5.01~10.0 mSv, and 6 (15.0%) received 10.01~20.0 mSv. By age, workers aged 25~34 had the highest average annual dose, 8.69 mSv; by length of service, workers with 5~9 years of tenure had the highest average, 9.5 mSv. Conclusion: These results suggest that although radiation safety management is currently carried out effectively for most medical radiation workers in nuclear medicine, doses differ considerably according to job characteristics. Continued efforts to minimize radiation exposure, systematic training for workers, and a reasonable radiation exposure management system are therefore needed.
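As a rough illustration of the statistical analysis named above (frequency analysis and one-way ANOVA of annual doses grouped by occupation), the following sketch uses invented dose values, not the study's data:

```python
# Hypothetical example of the dose-by-occupation comparison described above;
# the dose values are made up for illustration, not the study's data.
import numpy as np
from scipy import stats

# Average annual deep-dose equivalents (mSv) per worker, grouped by occupation
doses_by_job = {
    "clinical technologist": [8.1, 9.4, 10.2, 8.0],
    "radiological technologist": [6.9, 7.8, 7.3, 8.0],
    "nurse": [2.2, 2.9, 2.7],
    "physician": [0.3, 0.4, 0.35],
}

# One-way ANOVA: does the mean annual dose differ across occupations?
f_stat, p_value = stats.f_oneway(*doses_by_job.values())
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")

# Frequency analysis: distribution of workers across dose ranges
all_doses = np.concatenate(list(doses_by_job.values()))
bins = [0, 1.0, 5.0, 10.0, 20.0]
counts, _ = np.histogram(all_doses, bins=bins)
for (lo, hi), n in zip(zip(bins, bins[1:]), counts):
    print(f"{lo:>5.2f}-{hi:>5.2f} mSv: {n} workers")
```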


Corporate Default Prediction Model Using Deep Learning Time Series Algorithm, RNN and LSTM (딥러닝 시계열 알고리즘 적용한 기업부도예측모형 유용성 검증)

  • Cha, Sungjae;Kang, Jungseok
    • Journal of Intelligence and Information Systems / v.24 no.4 / pp.1-32 / 2018
  • Corporate defaults affect not only the stakeholders of the bankrupt company, including managers, employees, creditors, and investors, but also ripple through the local and national economy. Before the Asian financial crisis, the Korean government analyzed only SMEs and tried to improve the forecasting power of a single default prediction model rather than developing a variety of corporate default models. As a result, even large corporations, the so-called chaebol enterprises, went bankrupt. Even afterwards, analysis of past corporate defaults remained focused on specific variables, and when the government carried out restructuring immediately after the global financial crisis, it concentrated only on certain key variables such as the debt ratio. A multifaceted study of corporate default prediction models is essential to protect diverse interests and to avoid a sudden total collapse like the 'Lehman Brothers case' of the global financial crisis. The key variables driving corporate defaults vary over time: Deakin's (1972) study, revisiting the analyses of Beaver (1967, 1968) and Altman (1968), confirms that the major factors affecting corporate failure have changed, and Grice (2001) likewise found, through Zmijewski's (1984) and Ohlson's (1980) models, that the importance of the predictive variables shifts. Past studies, however, rely on static models and mostly do not consider changes that occur over time. To construct consistent prediction models, it is therefore necessary to compensate for this time-dependent bias with a time series analysis algorithm that reflects dynamic change. Centered on the global financial crisis, which had a significant impact on Korea, this study uses 10 years of annual corporate data from 2000 to 2009. The data are divided into training, validation, and test sets of 7, 2, and 1 years, respectively. To construct a bankruptcy model that stays consistent as time changes, we first train a deep learning time series model on the pre-crisis data (2000~2006). Parameter tuning of the existing models and the deep learning time series algorithm is conducted on validation data that includes the financial crisis period (2007~2008). As a result, we construct a model that shows a pattern similar to the training results and excellent prediction power. Each bankruptcy prediction model is then retrained on the combined training and validation data (2000~2008), applying the optimal parameters found in validation. Finally, the models trained over the nine years are evaluated and compared on the test data (2009), demonstrating the usefulness of the corporate default prediction model based on the deep learning time series algorithm. In addition, by adding Lasso regression to the existing variable-selection methods (multiple discriminant analysis and the logit model), it is shown that the deep learning time series model is useful for robust corporate default prediction across the three resulting bundles of variables. The definition of bankruptcy used is the same as that of Lee (2015). Independent variables include financial information such as the financial ratios used in previous studies. Multivariate discriminant analysis, the logit model, and the Lasso regression model are used to select the optimal variable groups. The performance of the multivariate discriminant analysis model proposed by Altman (1968), the logit model proposed by Ohlson (1980), non-time-series machine learning algorithms, and the deep learning time series algorithms is compared. Corporate data pose the limitations of nonlinear variables, multicollinearity among variables, and lack of data: the logit model addresses the nonlinearity, the Lasso regression model resolves the multicollinearity problem, and the deep learning time series algorithm, using a variable data generation method, compensates for the lack of data. Big data technology, a leading technology of the future, is moving from simple human analysis to automated AI analysis and, eventually, toward intertwined AI applications. Although research on corporate default prediction models using time series algorithms is still in its early stages, the deep learning algorithm builds corporate default prediction models much faster than regression analysis and delivers stronger prediction power. Amid the Fourth Industrial Revolution, the Korean government and governments overseas are working hard to integrate such systems into the everyday life of their nations and societies, yet deep learning time series research for the financial industry remains insufficient. As an initial study of deep learning time series analysis of corporate defaults, it is hoped that this work will serve as comparative material for non-specialists beginning studies that combine financial data with deep learning time series algorithms.
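A minimal sketch of the modeling scheme described above: a year-based train/validation/test split and an LSTM classifier over each firm's sequence of annual financial ratios. All shapes, feature counts, labels, and hyperparameters here are illustrative assumptions, not the authors' configuration:

```python
# Illustrative sketch of an LSTM default-prediction model with a year-based
# train/validation/test split, as described above. Shapes, features, and
# hyperparameters are assumptions for illustration, not the paper's settings.
import torch
import torch.nn as nn

class DefaultLSTM(nn.Module):
    """Binary default classifier over a firm's sequence of annual financial ratios."""
    def __init__(self, n_features: int, hidden: int = 32):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):                 # x: (batch, years, n_features)
        _, (h_n, _) = self.lstm(x)        # h_n: (1, batch, hidden)
        return self.head(h_n[-1])         # one default logit per firm

# Hypothetical data: 500 firms, 7 annual observations (2000-2006), 10 ratios
torch.manual_seed(0)
X_train = torch.randn(500, 7, 10)              # training years: 2000-2006
y_train = (torch.rand(500, 1) < 0.1).float()   # ~10% default labels (made up)

model = DefaultLSTM(n_features=10)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

for epoch in range(20):
    opt.zero_grad()
    loss = loss_fn(model(X_train), y_train)
    loss.backward()
    opt.step()

# Validation data (2007-2008) would tune hyperparameters; after retraining on
# 2000-2008 with the chosen settings, the model is evaluated on 2009 data.
```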

A New Approach to Automatic Keyword Generation Using Inverse Vector Space Model (키워드 자동 생성에 대한 새로운 접근법: 역 벡터공간모델을 이용한 키워드 할당 방법)

  • Cho, Won-Chin;Rho, Sang-Kyu;Yun, Ji-Young Agnes;Park, Jin-Soo
    • Asia Pacific Journal of Information Systems / v.21 no.1 / pp.103-122 / 2011
  • Recently, numerous documents have been made available electronically. Internet search engines and digital libraries commonly return query results containing hundreds or even thousands of documents. In this situation, it is virtually impossible for users to examine complete documents to determine whether they might be useful. For this reason, some online documents are accompanied by a list of keywords specified by the authors in an effort to guide users by facilitating the filtering process. A set of keywords is thus often considered a condensed version of the whole document and plays an important role in document retrieval, Web page retrieval, document clustering, summarization, text mining, and so on. Since many academic journals ask authors to provide five or six keywords on the first page of an article, keywords are most familiar in the context of journal articles. However, many other types of documents could also benefit from keywords, including Web pages, email messages, news reports, magazine articles, and business papers. Although the potential benefit is large, implementation is the obstacle: manually assigning keywords to all documents is a daunting, even impractical task, as it is extremely tedious and time-consuming and requires a certain level of domain knowledge. It is therefore highly desirable to automate the keyword generation process. There are two main approaches to this aim: the keyword assignment approach and the keyword extraction approach. Both use machine learning methods and require, for training purposes, a set of documents with keywords already attached. In the former, a vocabulary is given in advance and the aim is to match its entries to the texts; that is, the keyword assignment approach selects the words from a controlled vocabulary that best describe a document. Although this approach is domain dependent and not easy to transfer and expand, it can generate implicit keywords that do not appear in a document. In the latter approach, the aim is to extract keywords according to their relevance in the text without a prior vocabulary. Here automatic keyword generation is treated as a classification task, and keywords are commonly extracted with supervised learning techniques: keyword extraction algorithms classify candidate keywords in a document as positive or negative examples. Several systems, such as Extractor and Kea, were developed using the keyword extraction approach. The most indicative words in a document are selected as its keywords, so keyword extraction is limited to terms that appear in the document and cannot generate implicit keywords. According to Turney's experimental results, about 64% to 90% of author-assigned keywords can be found in the full text of an article; conversely, 10% to 36% of author-assigned keywords do not appear in the article and thus cannot be generated by keyword extraction algorithms. Our own preliminary experiment likewise shows that 37% of author-assigned keywords are not included in the full text. This is why we adopt the keyword assignment approach. In this paper, we propose a new approach to automatic keyword assignment, namely IVSM (Inverse Vector Space Model). The model is based on the vector space model,
which is a conventional information retrieval model that represents documents and queries as vectors in a multidimensional space. IVSM generates an appropriate keyword set for a specific document by measuring the distance between the document and the keyword sets. The keyword assignment process of IVSM is as follows: (1) calculate the vector length of each keyword set based on each keyword's weight; (2) preprocess and parse a target document that has no keywords; (3) calculate the vector length of the target document based on term frequency; (4) measure the cosine similarity between each keyword set and the target document; and (5) generate the keywords with high similarity scores. Two keyword generation systems were implemented using IVSM: an IVSM system for a Web-based community service and a stand-alone IVSM system. The former is embedded in a community service for sharing knowledge and opinions on current topics such as fashion, movies, social problems, and health information. The stand-alone IVSM system is dedicated to generating keywords for academic papers and has been tested on a number of papers, including those published by the Korean Association of Shipping and Logistics, the Korea Research Academy of Distribution Information, the Korea Logistics Society, the Korea Logistics Research Association, and the Korea Port Economic Association. We measured the performance of IVSM by the number of matches between the IVSM-generated keywords and the author-assigned keywords. In our experiments, the precision of IVSM applied to the Web-based community service and to academic journals was 0.75 and 0.71, respectively. Both systems perform much better than baseline systems that generate keywords from simple probability, and IVSM performs comparably to Extractor, a representative keyword extraction system developed by Turney. As electronic documents proliferate, we expect the IVSM proposed in this paper to be applicable to many electronic documents in Web-based communities and digital libraries.
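A rough, self-contained sketch of the five-step assignment procedure just described. The keyword-set weights, the sample document, and the simple tokenizer below are invented for illustration; the paper's actual preprocessing and weighting scheme is not specified in this abstract:

```python
# Minimal sketch of the IVSM assignment steps described above. The keyword-set
# weights and the target document are invented for illustration; the paper's
# actual preprocessing and weighting details are not given in this abstract.
import math
import re
from collections import Counter

# (1) Keyword sets with per-term weights (hypothetical values)
keyword_sets = {
    "logistics": {"logistics": 0.9, "shipping": 0.7, "port": 0.5},
    "retrieval": {"retrieval": 0.9, "query": 0.6, "index": 0.5},
}

def cosine(u: dict, v: dict) -> float:
    """(4) Cosine similarity between two sparse term-weight vectors;
    the vector lengths of steps (1) and (3) are the norms computed here."""
    dot = sum(w * v.get(t, 0.0) for t, w in u.items())
    norm = math.sqrt(sum(w * w for w in u.values())) * \
           math.sqrt(sum(w * w for w in v.values()))
    return dot / norm if norm else 0.0

def assign_keywords(document: str, top_k: int = 1) -> list:
    # (2) Preprocess and parse the target document
    tokens = re.findall(r"[a-z]+", document.lower())
    # (3) Term-frequency vector of the target document
    doc_vec = dict(Counter(tokens))
    # (4)-(5) Score each keyword set and return the best-matching ones
    scored = sorted(keyword_sets.items(),
                    key=lambda kv: cosine(kv[1], doc_vec), reverse=True)
    return [name for name, _ in scored[:top_k]]

print(assign_keywords("Port logistics and shipping networks in Korea"))
# -> ['logistics']
```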