• Title/Summary/Keyword: 자동화기술 (automation technology)


Detection of Phantom Transaction using Data Mining: The Case of Agricultural Product Wholesale Market (데이터마이닝을 이용한 허위거래 예측 모형: 농산물 도매시장 사례)

  • Lee, Seon Ah;Chang, Namsik
    • Journal of Intelligence and Information Systems
    • /
    • v.21 no.1
    • /
    • pp.161-177
    • /
    • 2015
  • With the rapid evolution of technology, the size, number, and types of databases have increased concomitantly, so data mining approaches face many challenging applications. One such application is the discovery of fraud patterns in agricultural product wholesale transactions. The agricultural product wholesale market in Korea is huge, and vast numbers of transactions are made every day. The demand for agricultural products continues to grow, and the use of electronic auction systems raises the operational efficiency of the wholesale market. The number of unusual transactions can also be assumed to grow in proportion to trading volume, and an unusual transaction is often the first sign of fraud. However, it is very difficult to identify and detect these transactions, and the corresponding fraud, because the types of fraud in the agricultural product wholesale market are more sophisticated than ever before. Fraud can be detected by manually verifying the overall transaction records, but this requires a significant amount of human resources and is ultimately not a practical approach. Fraud can also be revealed by a victim's report or complaint, but there are usually no victims in agricultural product wholesale frauds because they are committed through collusion between an auction company and an intermediary wholesaler. Nevertheless, transaction records must be monitored continuously and efforts made to prevent fraud, because fraud not only disturbs the fair trade order of the market but also rapidly erodes the market's credibility. Applying data mining in such an environment is very useful, since it can properly discover unknown fraud patterns or features from a large volume of transaction data.
The objective of this research is to empirically investigate the factors necessary to detect fraudulent transactions in an agricultural product wholesale market by developing a data mining based fraud detection model. One of the major frauds is the phantom transaction, a colluding transaction between the seller (auction company or forwarder) and the buyer (intermediary wholesaler). They pretend to fulfill the transaction by recording false data in the online transaction processing system without actually selling products, and the seller receives money from the buyer. This leads to overstated sales performance and illegal money transfers, which reduce the credibility of the market. This paper reviews the environment of the wholesale market, including the types of transactions, the roles of market participants, and the various types and characteristics of fraud, and introduces the whole process of developing the phantom transaction detection model. The process consists of the following four modules: (1) data cleaning and standardization, (2) statistical data analysis such as distribution and correlation analysis, (3) construction of a classification model using a decision-tree induction approach, and (4) verification of the model in terms of hit ratio. We collected real data from six associations of agricultural producers in metropolitan markets. The final model, built with a decision-tree induction approach, revealed that the monthly average trading price of an item offered by forwarders is a key variable in detecting phantom transactions. The verification procedure also confirmed the suitability of the results. However, even though the performance of the model is satisfactory, sensitive issues remain for improving classification accuracy and the conciseness of the rules. One such issue is the robustness of the data mining model: data mining is very much data-oriented, so data mining models tend to be very sensitive to changes in data or circumstances.
This non-robustness means that the model requires continuous remodeling as data or circumstances change. We hope that this paper provides a valuable guideline to organizations and companies that consider introducing or constructing a fraud detection model in the future.
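
The decision-tree module described above can be illustrated with a minimal sketch. The data, feature names, and class balance below are synthetic stand-ins (not the market data used in the paper); the sketch only shows how a tree classifier could flag phantom transactions from a price-based feature and report a hit ratio.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
n = 200
# Hypothetical features: the forwarder's monthly average trading price
# (the paper's key variable) and a trade quantity. Phantom transactions
# are simulated with inflated average prices.
price = np.concatenate([rng.normal(100, 10, n), rng.normal(170, 10, n)])
qty = rng.normal(50, 5, 2 * n)
X = np.column_stack([price, qty])
y = np.array([0] * n + [1] * n)  # 0 = normal, 1 = phantom

clf = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
hit_ratio = clf.score(X, y)  # the paper verifies its model by hit ratio
print(round(hit_ratio, 3))
```

On a real transaction log the tree's learned split thresholds, not hand-set rules, determine which price levels look suspicious.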

A study on the optimization of tunnel support patterns using ANN and SVR algorithms (ANN 및 SVR 알고리즘을 활용한 최적 터널지보패턴 선정에 관한 연구)

  • Lee, Je-Kyum;Kim, YangKyun;Lee, Sean Seungwon
    • Journal of Korean Tunnelling and Underground Space Association
    • /
    • v.24 no.6
    • /
    • pp.617-628
    • /
    • 2022
  • A ground support pattern should be designed by properly integrating various support materials in accordance with the rock mass grade when constructing a tunnel, and this process requires technical decisions by professionals with extensive construction experience. However, designing supports at an early stage of tunnel design, such as a feasibility study or basic design, can be very challenging due to the short timeline, insufficient budget, and lack of field data. Meanwhile, with the rapid increase in tunnel construction in South Korea, the support pattern can be designed more quickly and reliably by utilizing machine learning techniques and the accumulated design data. Therefore, in this study, the design data and ground exploration data of 48 road tunnels in South Korea were inspected, and data on 19 items were collected to automatically determine the rock mass class and the support pattern: eight input items (rock type, resistivity, depth, tunnel length, safety index by tunnel length, safety index by rock index, tunnel type, tunnel area) and 11 output items (rock mass grade, two items for shotcrete, three items for rock bolts, three items for steel supports, two items for concrete lining). Three machine learning models (S1, A1, A2) were developed using two machine learning algorithms (SVR, ANN) and the organized data. As a result, the A2 model, which applied different loss functions according to the output data format, showed the best performance. This study confirms the potential of support pattern design using machine learning, and the design model is expected to improve through continued use in actual design, compensating for its shortcomings and improving its usability.
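
As a rough illustration of the SVR side of the approach, the following sketch fits a support vector regressor to synthetic tunnel-design records. The input items (depth, tunnel area, a rock-grade proxy) and the target (a rock bolt dimension) are hypothetical stand-ins for the paper's 8 input and 11 output items; only the fit-and-score workflow is shown.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(1)
n = 300
X = np.column_stack([
    rng.uniform(20, 200, n),   # depth (m), assumed range
    rng.uniform(50, 120, n),   # tunnel area (m^2), assumed range
    rng.uniform(1, 5, n),      # rock mass grade proxy
])
# Assumed smooth relation between inputs and a rock bolt dimension (m)
y = 0.01 * X[:, 0] + 0.02 * X[:, 1] - 0.5 * X[:, 2] + rng.normal(0, 0.1, n)

# Scaling matters for SVR, so it is wrapped in a pipeline
model = make_pipeline(StandardScaler(), SVR(C=10.0, epsilon=0.05))
model.fit(X[:200], y[:200])
r2 = model.score(X[200:], y[200:])  # held-out R^2
print(round(r2, 3))
```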

Regeneration of a defective Railroad Surface for defect detection with Deep Convolution Neural Networks (Deep Convolution Neural Networks 이용하여 결함 검출을 위한 결함이 있는 철도선로표면 디지털영상 재 생성)

  • Kim, Hyeonho;Han, Seokmin
    • Journal of Internet Computing and Services
    • /
    • v.21 no.6
    • /
    • pp.23-31
    • /
    • 2020
  • This study was carried out to generate various images of railroad surfaces with random defects as training data to improve defect detection. Defects on the surface of railroads are caused by various factors, such as friction between track binding devices and adjacent tracks, and can cause accidents such as broken rails, so railroad maintenance for defects is necessary. Therefore, various studies on defect detection and inspection using image processing or machine learning on railway surface images have been conducted to automate railroad inspection and to reduce railroad maintenance costs. In general, the performance of image processing analysis methods and machine learning techniques is affected by the quantity and quality of data. For this reason, some studies require specific devices or vehicles that acquire images of the track surface at regular intervals to build a database of various railway surface images. By contrast, in this study, in order to reduce the operating cost of image acquisition, we constructed a 'Defective Railroad Surface Regeneration Model' by applying methods presented in related studies of the Generative Adversarial Network (GAN), aiming to detect defects on the railroad surface even without a dedicated database. The constructed model is designed to learn to generate railroad surfaces by combining different railroad surface textures with the original surface, considering the ground truth of the railroad defects. The generated images of the railroad surface were used as training data in a defect detection network based on the Fully Convolutional Network (FCN). To validate its performance, we clustered and divided the railroad data into three subsets: one subset of original railroad texture images and two subsets of other railroad surface texture images. In the first experiment, we used only the original texture images as the training set for the defect detection model.
In the second experiment, we trained on generated images produced by combining the original images with a few railroad textures from the other images. Each defect detection model was evaluated in terms of intersection over union (IoU) and F1-score against the ground truths. As a result, the scores increased by about 10-15% when the generated images were used, compared to the case in which only the original images were used. This shows that it is possible to detect defects using the existing data and a few different texture images, even for railroad surface images for which no dedicated training database has been constructed.
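
The two evaluation measures used above can be computed directly from binary defect masks; a minimal sketch with toy 4x4 masks (not the paper's railroad data):

```python
import numpy as np

def iou_and_f1(pred, gt):
    """Intersection over union and F1 (Dice) between binary masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    iou = inter / union if union else 1.0
    denom = pred.sum() + gt.sum()
    f1 = 2 * inter / denom if denom else 1.0  # 2*TP / (2*TP + FP + FN)
    return float(iou), float(f1)

# Toy ground-truth and predicted defect masks
gt = np.array([[1, 1, 0, 0],
               [1, 1, 0, 0],
               [0, 0, 0, 0],
               [0, 0, 0, 0]])
pred = np.array([[1, 1, 0, 0],
                 [1, 0, 0, 0],
                 [0, 0, 0, 0],
                 [0, 0, 0, 0]])
iou, f1 = iou_and_f1(pred, gt)
print(round(iou, 3), round(f1, 3))  # → 0.75 0.857
```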

Development of a water quality prediction model for mineral springs in the metropolitan area using machine learning (머신러닝을 활용한 수도권 약수터 수질 예측 모델 개발)

  • Yeong-Woo Lim;Ji-Yeon Eom;Kee-Young Kwahk
    • Journal of Intelligence and Information Systems
    • /
    • v.29 no.1
    • /
    • pp.307-325
    • /
    • 2023
  • Due to the prolonged COVID-19 pandemic, people tired of staying indoors have been visiting nearby mountains and national parks in sharply growing numbers to relieve depression and lethargy. One place where thousands of these visitors stop walking to catch their breath and rest is the mineral spring. Beyond mountains and national parks, there are about 600 mineral springs in the metropolitan area that can be found in neighboring parks or along trails. However, because water quality tests are irregular and manual, people drink the mineral water without knowing the test results in real time. Therefore, in this study, we develop a model that can predict the quality of spring water in real time by exploring the factors affecting water quality and collecting data scattered across various sources. After limiting the regions to Seoul and Gyeonggi-do due to the limitations of data collection, we obtained water quality test data from 2015 to 2020 for about 300 mineral springs in 18 cities where data management is well performed. A total of 10 factors were finally selected after two rounds of review among the various factors considered to affect the suitability of mineral spring water quality. Using AutoML, an automated machine learning technology that has recently been attracting attention, we derived the top five models based on prediction performance among about 20 machine learning methods. Among them, the CatBoost model showed the highest performance, with a prediction classification accuracy of 75.26%. In addition, examining the absolute influence of the variables on the prediction through the SHAP method showed that the most important factor was whether the spring was judged nonconforming in the previous water quality test.
It was confirmed that the temperature on the day of the inspection and the altitude of the mineral spring had an influence on whether the water quality was unsuitable.
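
At its core, the AutoML step ranks candidate learners by cross-validated performance. The following is a minimal stand-in using scikit-learn on synthetic data; the candidate set, features, and labels are illustrative only (the study compared about 20 methods, including CatBoost, on the real 10-factor dataset).

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier

# Synthetic stand-in: 10 selected factors, binary label
# (conforming / nonconforming water quality)
X, y = make_classification(n_samples=400, n_features=10, n_informative=5,
                           random_state=42)

# The minimal version of what AutoML does: score candidates and rank them
candidates = {
    "logreg": LogisticRegression(max_iter=1000),
    "random_forest": RandomForestClassifier(random_state=42),
    "grad_boost": GradientBoostingClassifier(random_state=42),
}
scores = {name: cross_val_score(m, X, y, cv=5).mean()
          for name, m in candidates.items()}
best = max(scores, key=scores.get)
print(best, round(scores[best], 3))
```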

Study on water quality prediction in water treatment plants using AI techniques (AI 기법을 활용한 정수장 수질예측에 관한 연구)

  • Lee, Seungmin;Kang, Yujin;Song, Jinwoo;Kim, Juhwan;Kim, Hung Soo;Kim, Soojun
    • Journal of Korea Water Resources Association
    • /
    • v.57 no.3
    • /
    • pp.151-164
    • /
    • 2024
  • In water treatment plants supplying potable water, managing the chlorine concentration in treatment processes involving pre-chlorination or intermediate chlorination requires process control. To address this, research has been conducted on water quality prediction techniques utilizing AI technology. This study developed an AI-based predictive model for automating the process control of chlorine disinfection, targeting the prediction of residual chlorine concentration downstream of the sedimentation basins in water treatment processes. The AI-based model, which learns from past water quality observation data to predict future water quality, offers a simpler and more efficient approach than complex physicochemical and biological water quality models. The model was tested by predicting the residual chlorine concentration downstream of the sedimentation basins at Plant, using multiple regression models and AI-based models such as Random Forest and LSTM, and the results were compared. For optimal prediction of residual chlorine concentration, the input-output structure of the AI model used the residual chlorine concentration upstream of the sedimentation basin, turbidity, pH, water temperature, electrical conductivity, inflow of raw water, alkalinity, NH3, etc. as independent variables, and the desired residual chlorine concentration of the effluent from the sedimentation basin as the dependent variable. The independent variables were selected from data observable at the water treatment plant that influence the residual chlorine concentration downstream of the sedimentation basin. The analysis showed that, for Plant, the Random Forest based model had the lowest error compared to multiple regression models, neural network models, model trees, and other Random Forest models.
The optimal predicted residual chlorine concentration downstream of the sedimentation basin presented in this study is expected to enable real-time control of chlorine dosing in previous treatment stages, thereby enhancing water treatment efficiency and reducing chemical costs.
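
A minimal sketch of the Random Forest based prediction: a few of the independent variables named above (upstream residual chlorine, turbidity, temperature, pH) are simulated with hypothetical ranges, and an assumed chlorine-decay relation stands in for the plant's real behavior.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(7)
n = 500
upstream_cl = rng.uniform(0.5, 1.5, n)   # mg/L, assumed range
turbidity = rng.uniform(0.1, 5.0, n)     # NTU, assumed range
temp = rng.uniform(5, 25, n)             # deg C
ph = rng.uniform(6.5, 8.5, n)
X = np.column_stack([upstream_cl, turbidity, temp, ph])
# Assumed decay relation: more chlorine is consumed as turbidity
# and temperature rise (illustrative, not a calibrated model)
y = upstream_cl - 0.02 * turbidity - 0.005 * temp + rng.normal(0, 0.02, n)

model = RandomForestRegressor(n_estimators=200, random_state=7)
model.fit(X[:400], y[:400])
r2 = model.score(X[400:], y[400:])  # held-out R^2
print(round(r2, 3))
```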

Multi-Variate Tabular Data Processing and Visualization Scheme for Machine Learning based Analysis: A Case Study using Titanic Dataset (기계 학습 기반 분석을 위한 다변량 정형 데이터 처리 및 시각화 방법: Titanic 데이터셋 적용 사례 연구)

  • Juhyoung Sung;Kiwon Kwon;Kyoungwon Park;Byoungchul Song
    • Journal of Internet Computing and Services
    • /
    • v.25 no.4
    • /
    • pp.121-130
    • /
    • 2024
  • As information and communication technology (ICT) improves exponentially, the types and amount of available data also increase. Although data analysis, including statistics, is essential for utilizing this large amount of data, there are inevitable limits to processing varied and complex data in conventional ways. Meanwhile, there are many attempts to apply machine learning (ML) in various fields, spurred by enhanced computational performance and the growing demand for autonomous systems. In particular, processing the data for model input and designing the model to solve the objective function are critical to achieving good model performance. Data processing methods for each data type and property have been presented in many studies, and the performance of ML varies greatly depending on the method. Nevertheless, deciding on a data processing method is difficult because the types and characteristics of data have become more diverse. Specifically, multi-variate data processing is essential for solving non-linear problems with ML. In this paper, we present a multi-variate tabular data processing scheme for ML-aided data analysis using the Titanic dataset from Kaggle, which includes various kinds of data. We present methods such as input variable filtering based on statistical analysis and normalization according to the data properties. In addition, we analyze the data structure using visualization. Lastly, we design an ML model, train it by applying the proposed multi-variate data processing, and analyze the trained model's performance in predicting passengers' survival. We expect that the proposed multi-variate data processing and visualization can be extended to various environments for ML based analysis.
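
The kind of per-property processing described above can be sketched on a tiny hand-made sample shaped like the Titanic columns (the values are illustrative, not the Kaggle data): impute a missing continuous value, encode the categorical variable, and min-max normalize the continuous ones.

```python
import pandas as pd

df = pd.DataFrame({
    "Pclass": [1, 3, 2, 3, 1],
    "Sex": ["female", "male", "female", "male", "male"],
    "Age": [38.0, 22.0, None, 35.0, 54.0],
    "Fare": [71.28, 7.25, 13.0, 8.05, 51.86],
    "Survived": [1, 0, 1, 0, 0],
})

# Impute missing Age with the median, a common tabular-processing step
df["Age"] = df["Age"].fillna(df["Age"].median())
# Encode the categorical variable numerically
df["Sex"] = (df["Sex"] == "female").astype(int)
# Min-max normalize continuous columns to [0, 1]
for col in ["Age", "Fare"]:
    lo, hi = df[col].min(), df[col].max()
    df[col] = (df[col] - lo) / (hi - lo)

print(df["Sex"].tolist())
```

After such processing, every column is numeric and on a comparable scale, which is the form most ML models expect.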

Deep Learning-based Fracture Mode Determination in Composite Laminates (복합 적층판의 딥러닝 기반 파괴 모드 결정)

  • Muhammad Muzammil Azad;Atta Ur Rehman Shah;M.N. Prabhakar;Heung Soo Kim
    • Journal of the Computational Structural Engineering Institute of Korea
    • /
    • v.37 no.4
    • /
    • pp.225-232
    • /
    • 2024
  • This study focuses on the determination of fracture modes in composite laminates using deep learning. With the increasing use of laminated composites in numerous engineering applications, assurance of their integrity and performance is of paramount importance. However, owing to the complex nature of these materials, the identification of fracture modes is often a tedious and time-consuming task that requires critical domain knowledge. Therefore, to alleviate these issues, this study utilizes modern artificial intelligence technology to automate the fractographic analysis of laminated composites. To accomplish this goal, scanning electron microscopy (SEM) images of fractured tensile test specimens were obtained from laminated composites to showcase various fracture modes. These SEM images were then categorized into several fracture modes, including fiber breakage, fiber pull-out, mixed-mode fracture, matrix brittle fracture, and matrix ductile fracture. Next, the collective data for all classes were divided into train, test, and validation datasets. Two state-of-the-art, deep learning based pre-trained models, namely DenseNet and GoogleNet, were trained to learn the discriminative features of each fracture mode. The DenseNet model shows training and testing accuracies of 94.01% and 75.49%, respectively, whereas those of the GoogleNet model are 84.55% and 54.48%, respectively. The trained deep learning models were then validated on unseen validation datasets. This validation demonstrates that the DenseNet model, owing to its deeper architecture, can extract high-quality features, resulting in 84.44% validation accuracy, 36.84 percentage points higher than that of the GoogleNet model. Hence, these results affirm that the DenseNet model is effective in performing fractographic analyses of laminated composites by predicting fracture modes with high precision.
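
The train/test/validation protocol described above can be sketched as follows. Since the paper's CNN feature extraction is out of scope here, synthetic feature vectors and a generic classifier stand in for the DenseNet/GoogleNet features; only the split-and-validate workflow over the five fracture-mode classes is illustrated.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier

MODES = ["fiber_breakage", "fiber_pullout", "mixed_mode",
         "matrix_brittle", "matrix_ductile"]

# Synthetic stand-ins for features a CNN would extract from SEM images
X, y = make_classification(n_samples=500, n_features=20, n_informative=10,
                           n_classes=len(MODES), random_state=0)

# Split into train, test, and validation sets, as in the paper's protocol
X_train, X_rest, y_train, y_rest = train_test_split(
    X, y, test_size=0.4, random_state=0, stratify=y)
X_test, X_val, y_test, y_val = train_test_split(
    X_rest, y_rest, test_size=0.5, random_state=0, stratify=y_rest)

clf = RandomForestClassifier(random_state=0).fit(X_train, y_train)
test_acc = clf.score(X_test, y_test)
val_acc = clf.score(X_val, y_val)   # final check on fully unseen data
print(round(test_acc, 3), round(val_acc, 3))
```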

A Study on Market Size Estimation Method by Product Group Using Word2Vec Algorithm (Word2Vec을 활용한 제품군별 시장규모 추정 방법에 관한 연구)

  • Jung, Ye Lim;Kim, Ji Hui;Yoo, Hyoung Sun
    • Journal of Intelligence and Information Systems
    • /
    • v.26 no.1
    • /
    • pp.1-21
    • /
    • 2020
  • With the rapid development of artificial intelligence technology, various techniques have been developed to extract meaningful information from unstructured text data, which constitutes a large portion of big data. Over the past decades, text mining technologies have been utilized in various industries for practical applications. In the field of business intelligence, text mining has been employed to discover new market and technology opportunities and to support rational decision making by business participants. Market information such as market size, market growth rate, and market share is essential for setting a company's business strategy. There has been continuous demand across various fields for market information at the specific product level. However, such information has generally been provided at the industry level or in broad categories based on classification standards, making it difficult to obtain specific, appropriate information. In this regard, we propose a new methodology that can estimate the market sizes of product groups at more detailed levels than previously offered. We applied the Word2Vec algorithm, a neural network based semantic word embedding model, to enable automatic market size estimation from individual companies' product information in a bottom-up manner. The overall process is as follows: first, the data related to product information are collected, refined, and restructured into a form suitable for the Word2Vec model. Next, the preprocessed data are embedded into a vector space by Word2Vec, and product groups are derived by extracting similar product names based on cosine similarity. Finally, the sales data on the extracted products are summed to estimate the market size of each product group. As experimental data, text data of product names from Statistics Korea's microdata (345,103 cases) were mapped into a multidimensional vector space by Word2Vec training.
We optimized the training parameters and then applied a vector dimension of 300 and a window size of 15 for further experiments. We employed the index words of the Korean Standard Industry Classification (KSIC) as a product name dataset to cluster product groups more efficiently. Product names similar to KSIC indexes were extracted based on cosine similarity. The market size of the extracted products, treated as one product category, was calculated from individual companies' sales data. The market sizes of 11,654 specific product lines were automatically estimated by the proposed model. For performance verification, the results were compared with the actual market sizes of some items; the Pearson correlation coefficient was 0.513. Our approach has several advantages over previous studies. First, text mining and machine learning techniques were applied to market size estimation for the first time, overcoming the limitations of traditional methods that rely on sampling or multiple assumptions. In addition, the level of market category can be easily and efficiently adjusted according to the purpose of information use by changing the cosine similarity threshold. Furthermore, the approach has high potential for practical application, since it can resolve unmet needs for detailed market size information in the public and private sectors. Specifically, it can be utilized in technology evaluation and technology commercialization support programs conducted by governmental institutions, as well as in business strategy consulting and market analysis reports published by private firms. A limitation of our study is that the presented model needs to be improved in terms of accuracy and reliability. The semantics-based word embedding module can be advanced by imposing a proper order on the preprocessed dataset or by combining another measure, such as Jaccard similarity, with Word2Vec.
Also, the product group clustering could be replaced with other types of unsupervised machine learning algorithms. Our group is currently working on subsequent studies, and we expect that they can further improve the performance of the basic model conceptually proposed in this study.
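
The grouping-and-summation step can be sketched with toy vectors standing in for trained Word2Vec embeddings (3-dimensional here versus 300 in the paper; the product names, vectors, sales figures, and threshold are all hypothetical):

```python
import numpy as np

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Toy embeddings standing in for Word2Vec vectors of product names
vecs = {
    "apple_juice":  np.array([0.9, 0.1, 0.0]),
    "orange_juice": np.array([0.8, 0.2, 0.1]),
    "car_tire":     np.array([0.0, 0.1, 0.9]),
}
sales = {"apple_juice": 120, "orange_juice": 80, "car_tire": 500}

# Group products whose similarity to an index word exceeds a threshold,
# then sum their sales to estimate the group's market size
index_vec = vecs["apple_juice"]
threshold = 0.9
group = [p for p, v in vecs.items() if cosine(index_vec, v) >= threshold]
market_size = sum(sales[p] for p in group)
print(sorted(group), market_size)
```

Raising or lowering `threshold` widens or narrows the product group, which is how the paper adjusts the level of the market category.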

Export Control System based on Case Based Reasoning: Design and Evaluation (사례 기반 지능형 수출통제 시스템 : 설계와 평가)

  • Hong, Woneui;Kim, Uihyun;Cho, Sinhee;Kim, Sansung;Yi, Mun Yong;Shin, Donghoon
    • Journal of Intelligence and Information Systems
    • /
    • v.20 no.3
    • /
    • pp.109-131
    • /
    • 2014
  • As the demand for nuclear power plant equipment continues to grow worldwide, the importance of handling nuclear strategic materials is also increasing. While the number of cases submitted for the export of nuclear-power commodities and technology is increasing dramatically, preadjudication (prescreening, for short) of strategic materials has so far been done by experts with long experience and extensive field knowledge. However, there is a severe shortage of experts in this domain, not to mention that it takes a long time to develop an expert. Because human experts must manually evaluate all documents submitted for export permission, the current practice of nuclear material export control is neither time-efficient nor cost-effective. To alleviate the reliance on costly human experts, our research proposes a new system designed to help field experts make their decisions more effectively and efficiently. The proposed system is built upon case-based reasoning, which in essence extracts key features from existing cases, compares them with the features of a new case, and derives a solution for the new case by referencing similar cases and their solutions. Our research proposes a framework for a case-based reasoning system, designs a case-based reasoning system for the control of nuclear material exports, and evaluates the performance of alternative keyword extraction methods (fully automatic, fully manual, and semi-automatic). A keyword extraction method is an essential component of the case-based reasoning system, as it is used to extract the key features of the cases. The fully automatic method used TF-IDF, a widely used de facto standard method for representative keyword extraction in text mining.
TF (Term Frequency) is the frequency count of a term within a document, showing how important the term is within that document, while IDF (Inverse Document Frequency) is based on the infrequency of the term across the document set, showing how uniquely the term represents the document. The results show that the semi-automatic approach, based on the collaboration of machine and human, is the most effective solution regardless of whether the human is a field expert or a student majoring in nuclear engineering. Moreover, we propose a new approach to computing nuclear document similarity along with a new framework for document analysis. The proposed algorithm considers both document-to-document similarity (α) and document-to-nuclear-system similarity (β) in order to derive the final score (γ) for deciding whether the presented case concerns strategic material. The final score (γ) represents the document similarity between past cases and the new case. The score is induced not only by conventional TF-IDF but also by a nuclear system similarity score, which takes the context of the nuclear system domain into account. Finally, the system retrieves the top three documents in the case base that are considered most similar to the new case and provides them together with a degree of credibility. With the final score and the credibility score, it becomes easier for a user to see which documents in the case base are worth looking up, so that the user can make a proper decision at relatively lower cost. The system was evaluated by developing a prototype and testing it with field data, and the system workflows and outcomes were verified by field experts.
This research is expected to contribute to the growth of the knowledge service industry by proposing a new system that can effectively reduce the burden of relying on costly human experts for the export control of nuclear materials and that can be considered a meaningful example of a knowledge service application.
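
A minimal sketch of the TF-IDF scoring and the combined score γ. The documents, the weight between α and β, and the constant β value are all hypothetical; in the actual system, β would come from the separate nuclear-system similarity module.

```python
import math
from collections import Counter

# Hypothetical past cases (real case documents are confidential)
docs = [
    "pressure vessel steel export permit",
    "centrifuge rotor balancing equipment",
    "garden tools steel shovel",
]

def tfidf(corpus):
    """Minimal TF-IDF: tf = raw count, idf = log(N / df)."""
    n = len(corpus)
    tokenized = [d.split() for d in corpus]
    df = Counter(t for doc in tokenized for t in set(doc))
    return [{t: c * math.log(n / df[t]) for t, c in Counter(doc).items()}
            for doc in tokenized]

def cosine(a, b):
    dot = sum(a.get(k, 0.0) * b.get(k, 0.0) for k in set(a) | set(b))
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

vecs = tfidf(docs)
new_case = {t: 1.0 for t in "pressure vessel export application".split()}

# gamma = w*alpha + (1-w)*beta; beta is a placeholder constant here,
# standing in for the nuclear-system similarity score
w, beta = 0.7, 0.5
gammas = [w * cosine(new_case, v) + (1 - w) * beta for v in vecs]
best = max(range(len(gammas)), key=gammas.__getitem__)
print(best, round(gammas[best], 3))
```

The past case with the highest γ would be retrieved first, along with its credibility score, for the expert to review.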

Roles and Preparation for the Future Nurse-Educators (미래 간호교육자의 역할과 이를 위한 준비)

  • Kim Susie
    • The Korean Nurse
    • /
    • v.20 no.4 s.112
    • /
    • pp.39-49
    • /
    • 1981
  • Nursing is expanding rapidly, both qualitatively and quantitatively, within its existing domain. In many countries the health care delivery system is inadequately distributed, leaving many people without adequate care, so extending high-quality care to everyone is urgent; even those who do receive care demand more humane, higher-quality nursing. Nursing is also expanding rapidly within its own field. For example, in advanced countries such as the United States, the nurse practitioner has emerged as a new branch of the nursing profession and stands out as an independent practitioner in the health care system. Developing countries suffering a severe shortage of physicians expect nurses to perform not only traditional nursing functions but also broader roles in the health care system, and are assigning many nurses to local health centers. The Korean government's recent legal measure allowing graduate nurses to provide health care in doctorless areas is a concrete example. These rapid changes inside and outside the existing nursing domain have brought about what Alvin Toffler called "future shock." Such dynamic change poses several questions for the nursing profession. First, what will characterize the nursing domain in future society? Second, what roles must nurse-educators perform to produce the nurses this new domain requires? Third, what practical and realistic strategies will prepare the nurse-educators who train tomorrow's nurses? 1. What will characterize the nursing domain in future society? Future nurses will work in an environment quite different from today's, for the following reasons: 1) New technologies, including computerized and automated machines and instruments, will be used widely in delivering health care. 2) Most primary health care will be provided by nurses. 3) Tomorrow's health care will be consumer-driven. 4) Many new specialties will arise within nursing. 5) The future health care system will respond more sensitively to social change and its demands. 6) The emphasis of the health care system will shift from medical treatment to health care. 7) Nurses' roles will move far beyond the functions of medical diagnosis and treatment planning toward more distinctive forms of practice inside and outside hospitals. Alongside these changes, future nurses must receive broader and deeper education and training than nurses have until now. To take a holistic approach in a more advanced technological environment, they will need training not only in the physical sciences and medicine but also in the behavioral and management sciences. In their professional conduct they must be trained to be more proactive, expressive, autonomous, and grounded in applied science, so that nurses become effective decision makers, problem solvers, and skilled practitioners, as well as researchers who keenly observe consumers' health needs and develop effective responses to them. 2. What roles must future nurse-educators perform? Nursing education is the cornerstone of professional practice, which implies that nurse-educators must devote themselves to supplying nurses capable of meeting the public's health needs in future society. What, then, must nurse-educators do to achieve this? This work should proceed on two fronts: the nursing education institution and the individual educator. Within the institution, nurse-educators should 1) provide programs that educate the nurses future society will require; 2) continuously develop, revise, and supplement effective curricula; 3) provide thorough, appropriate training according to a well-designed curriculum;
4) serve as models themselves, with the confidence and creativity to incorporate predicted future developments into today's curriculum; and 5) involve students in important decisions affecting research and student learning. As individuals, educators must keep striving to be competent and credible, to possess authority, autonomy, and originality across nursing theory, practice, and research, and to genuinely understand people. 3. What practical and realistic strategies will prepare competent nurse-educators to train tomorrow's nurses? In discussing strategies capable of meeting tomorrow's challenges, the situation in Korea serves as the reference. Professional nurse-educators can be prepared in three ways: first, by raising the licensure level of nurse training to that of professional practice; second, by developing and expanding baccalaureate and master's programs in nursing to further raise the level of training; and third, by improving the quality of existing nursing education programs. Because the first two fall under direct government jurisdiction, only the development and improvement of existing curricula is discussed here. Curriculum development that prepares educators to meet future challenges can be pursued on two fronts. First, curriculum quality can be raised through international exchange of ideas and experience. When nurse-educators from different countries meet regularly to share ideas and experience and to conduct research, a more systematic and effective chain of development forms; convening such meetings through an international organization such as the ICN would be a valuable opportunity. Exchanging nurse-educator training curricula between countries can also effectively spread those ideas within a country: an institution with sufficient nursing education experts can develop a new curriculum and diffuse it through annual conferences with less well-equipped institutions, an approach that is economical and also effective for developing curricula suited to each country's culture. The second strategy is to remedy the obstacles current nurse-educators face in integrating and advancing nursing theory, practice, and research, such as not having completed coursework appropriate to holistic nursing and lacking clinical practice experience. As provisional solutions to these practical problems, 1) some universities can develop continuing education programs during vacations so that current nurse-educators can complete the necessary and appropriate courses, with clinical practice training offered at the same time; and 2) admission requirements for graduate nursing education programs can include two to three years of practical experience. In conclusion, I believe a true partnership between faculty and students becomes possible through the practical modeling of qualified, competent faculty.
