• Title/Summary/Keyword: Work-learning parallel

The Use of Job Analysis and the Effect of Work-Learning Parallel System Participation on Firm Performance: Focusing on the Moderating Effect of Education and Training Obligations (직무분석 활용, 일학습병행제가 기업성과에 미치는 영향 : 교육훈련 의무의 조절효과를 중심으로)

  • Sung, Su-Hyun
    • Journal of the Korea Academia-Industrial cooperation Society
    • /
    • v.20 no.3
    • /
    • pp.157-167
    • /
    • 2019
  • This study empirically analyzed the effects of the use of job analysis and of participation in the work-learning parallel system on corporate performance, using Human Capital Corporate Panel (HCCP) data. A hierarchical regression of log sales on job analysis use confirmed that job analysis use can have a positive effect on log sales (R² = .294, β = .165), so Hypothesis 1 was supported. Participation in the work-learning parallel system had a negative effect on log sales (R² = .283, β = -.129), so Hypothesis 2 was rejected. This was attributed to the small number of observations (66), of which 45 were judged to be firms that had newly entered the system. In addition, a hierarchical regression using the interaction of job analysis use and the education and training obligation was conducted to test the moderating effect of that obligation. It confirmed that, under the obligation, the use of job analysis could have a negative effect on log sales, so Hypothesis 3 was rejected. This supports previous research finding that the productivity effect is not significant because firms do not want to invest in education and training as labor productivity increases. It was also confirmed that the education and training obligation could strengthen the positive (+) effect of work-learning parallel participation on sales, so Hypothesis 4 was supported. To realize the effective aspects of job analysis, voluntary adoption by enterprises should be the premise. In addition, if firms employ talented people with diverse backgrounds (academic background, gender, religion, nationality, etc.) and invest in human resource development through job-analysis-based education and training, recruiting learning workers through the work-learning parallel system can contribute to the creation of performance.
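
A minimal sketch of the kind of hierarchical regression with a moderation (interaction) term described above, using statsmodels; the file name and the variable names (log_sales, job_analysis, wlp_participation, edu_obligation) are hypothetical stand-ins, not the study's actual HCCP variables:

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical firm-level data; column names are illustrative only,
# not the actual HCCP variables used in the study.
df = pd.read_csv("hccp_firms.csv")  # assumed file

# Step 1: main effects of job analysis use and work-learning parallel
# participation on log-transformed sales (Hypotheses 1 and 2).
m1 = smf.ols("log_sales ~ job_analysis + wlp_participation", data=df).fit()

# Step 2: add the education/training obligation and its interactions
# with each predictor to test moderation (Hypotheses 3 and 4).
m2 = smf.ols(
    "log_sales ~ job_analysis * edu_obligation"
    " + wlp_participation * edu_obligation",
    data=df,
).fit()

print(m1.rsquared, m2.rsquared)  # compare explained variance across steps
print(m2.params)                 # signs of the interaction terms indicate moderation
```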

NCS-based Education & Training and Qualification Proposal for Work-Learning Parallel Companies Introducing Smart Manufacturing Technology (스마트 제조기술을 도입하는 일학습병행 학습기업을 위한 NCS 기반 교육훈련 및 자격 제안)

  • Choi, Hwan Young
    • Journal of Practical Engineering Education
    • /
    • v.12 no.1
    • /
    • pp.117-125
    • /
    • 2020
  • Under the government's smart factory promotion project for small and medium-sized enterprises, more than 10,000 intelligent factories have been built or are scheduled in the country, and the government-led goal is to nurture 100,000 skilled workers by 2022. Numerous education and training courses on smart factories, understood as the convergence of ICT with automated manufacturing equipment under an efficient resource management system, have been introduced from the suppliers' point of view by training institutions belonging to local governments, some universities, and public organizations. Their lack of linkage with the NCS, the national standard for training, leaves room for rethinking their content and direction. This paper reports the results of a survey of the family companies of K-University in the metropolitan and Chungnam areas and analyzes their job demands by identifying whether or not they intend to introduce smart factories. It then defines the practitioners who will serve as the window for introducing smart factory technology within a company, sets training goals in consideration of their career paths, and specifies the required competency units, optional competency units, and training hours suitable for introducing and operating smart factories at the appropriate training level. On this basis, the author presents an NCS-based education and training and qualification design proposal.

The Effects of Small-Scale Chemistry Laboratory Programs in High School Chemistry II Class (고등학교 화학II 수업에 적용한 Small-Scale Chemistry 실험의 효과)

  • Hong, Ji-Hye;Park, Jong-Yoon
    • Journal of The Korean Association For Science Education
    • /
    • v.27 no.4
    • /
    • pp.318-327
    • /
    • 2007
  • The purpose of this study is to examine the effects of small-scale chemistry (SSC) laboratory activities implemented in high school Chemistry II classes on students' inquiry process skills and science-related attitudes. For this study, 112 students in the 12th grade were chosen and divided into an experimental group and a control group. Seven SSC lab programs that can replace the traditional experiments in Chemistry II textbooks were selected and administered to the experimental group, while the traditional textbook experiments were administered to the control group. The results showed a significant difference in the enhancement of inquiry process skills between the two groups, while no significant difference was found in science-related attitudes. Further analysis showed that the difference in inquiry process skills came from the basic inquiry process skills. The experimental group students thought that the SSC experiments had many advantages over the traditional experiments, e.g., individual work, learning lab and theory in parallel, short experiment time, safety, and environmental aspects. These results suggest that SSC lab programs are valuable in high school chemistry classes and that various SSC lab programs should be developed and distributed to replace the traditional experiments in the current textbooks.
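
A minimal sketch of the kind of two-group comparison reported above, using SciPy; the scores below are purely illustrative placeholders, not the study's measured inquiry process skill data:

```python
import numpy as np
from scipy import stats

# Illustrative post-test inquiry process skill scores; the actual study
# compared an SSC experimental group with a control group of 12th graders.
ssc_group = np.array([78, 82, 75, 88, 91, 70, 84, 79])      # placeholder values
control_group = np.array([72, 74, 69, 80, 77, 65, 73, 71])  # placeholder values

# Independent-samples t-test (Welch's version, no equal-variance assumption).
t_stat, p_value = stats.ttest_ind(ssc_group, control_group, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")  # p < .05 suggests a group difference
```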

Transfer Learning using Multiple ConvNet Layers Activation Features with Principal Component Analysis for Image Classification (전이학습 기반 다중 컨볼류션 신경망 레이어의 활성화 특징과 주성분 분석을 이용한 이미지 분류 방법)

  • Byambajav, Batkhuu;Alikhanov, Jumabek;Fang, Yang;Ko, Seunghyun;Jo, Geun Sik
    • Journal of Intelligence and Information Systems
    • /
    • v.24 no.1
    • /
    • pp.205-225
    • /
    • 2018
  • Convolutional Neural Network (ConvNet) is one class of powerful Deep Neural Networks that can analyze and learn hierarchies of visual features. The first such neural network (the Neocognitron) was introduced in the 1980s. At that time, neural networks were not broadly used in either industry or academia because of the shortage of large-scale datasets and low computational power. A few decades later, however, in 2012, Krizhevsky made a breakthrough in the ILSVRC-12 visual recognition competition using a Convolutional Neural Network, and that breakthrough revived interest in neural networks. The success of the Convolutional Neural Network rests on two main factors. The first is the emergence of advanced hardware (GPUs) for sufficient parallel computation. The second is the availability of large-scale datasets such as ImageNet (ILSVRC) for training. Unfortunately, many new domains are bottlenecked by these factors. For most domains, it is difficult and takes a great deal of effort to gather a large-scale dataset to train a ConvNet. Moreover, even with a large-scale dataset, training a ConvNet from scratch requires expensive resources and is time-consuming. These two obstacles can be addressed with transfer learning, a method for transferring knowledge from a source domain to a new domain. There are two major transfer learning approaches: using the ConvNet as a fixed feature extractor, and fine-tuning the ConvNet on the new dataset. In the first case, a pre-trained ConvNet (e.g., trained on ImageNet) computes the feed-forward activations of an image, and activation features are extracted from specific layers. In the second case, the ConvNet classifier is replaced and retrained on the new dataset, and the weights of the pre-trained network are then fine-tuned with backpropagation. In this paper, we focus only on using multiple ConvNet layers as a fixed feature extractor. However, directly applying the high-dimensional features extracted from multiple ConvNet layers is still a challenging problem. We observe that features extracted from different ConvNet layers capture different characteristics of the image, which means a better representation can be obtained by finding an optimal combination of multiple ConvNet layers. Based on that observation, we propose to employ multiple ConvNet layer representations for transfer learning instead of a single ConvNet layer representation. Our primary pipeline has three steps. First, each image from the target task is fed forward through a pre-trained AlexNet, and the activation features of the three fully connected layers are extracted. Second, the activation features of the three layers are concatenated to obtain a multiple-ConvNet-layer representation that carries more information about the image; concatenating the three fully connected layer features yields a 9,192-dimensional (4,096 + 4,096 + 1,000) image representation. However, features extracted from multiple ConvNet layers are redundant and noisy since they come from the same ConvNet. Thus, as a third step, Principal Component Analysis (PCA) is used to select salient features before the training phase. With salient features, the classifier can classify images more accurately, and the performance of transfer learning is improved.
    To evaluate the proposed method, experiments were conducted on three standard datasets (Caltech-256, VOC07, and SUN397) to compare multiple-ConvNet-layer representations against single-ConvNet-layer representations, using PCA for feature selection and dimensionality reduction. The experiments demonstrate the importance of feature selection for the multiple-ConvNet-layer representation. Moreover, the proposed approach achieved 75.6% accuracy compared to the 73.9% achieved by the FC7 layer on Caltech-256, 73.1% compared to the 69.2% achieved by the FC8 layer on VOC07, and 52.2% compared to the 48.7% achieved by the FC7 layer on SUN397. The proposed approach also outperformed existing work, with accuracy improvements of 2.8%, 2.1%, and 3.1% on Caltech-256, VOC07, and SUN397, respectively.
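
A minimal sketch of the pipeline the abstract describes: extracting and concatenating AlexNet fully connected layer activations, reducing them with PCA, and training a linear classifier. It uses torchvision's pre-trained AlexNet and scikit-learn; the dataset loading, PCA component count, and choice of a linear SVM are assumptions for illustration, not the authors' exact setup:

```python
import numpy as np
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image
from sklearn.decomposition import PCA
from sklearn.svm import LinearSVC

# Pre-trained AlexNet used as a fixed feature extractor (no fine-tuning).
alexnet = models.alexnet(weights=models.AlexNet_Weights.IMAGENET1K_V1).eval()

# Capture the outputs of the three fully connected layers (FC6, FC7, FC8).
activations = {}
def hook(name):
    def _hook(module, inputs, output):
        activations[name] = output.detach()
    return _hook

# In torchvision's AlexNet, classifier[1], classifier[4], classifier[6] are the
# Linear layers producing 4096-, 4096-, and 1000-dimensional outputs.
alexnet.classifier[1].register_forward_hook(hook("fc6"))
alexnet.classifier[4].register_forward_hook(hook("fc7"))
alexnet.classifier[6].register_forward_hook(hook("fc8"))

preprocess = T.Compose([
    T.Resize(256), T.CenterCrop(224), T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

def extract_features(image_paths):
    """Return an (N, 9192) array of concatenated FC6+FC7+FC8 activations."""
    feats = []
    with torch.no_grad():
        for path in image_paths:
            x = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
            alexnet(x)  # forward pass fills `activations` via the hooks
            feats.append(torch.cat(
                [activations["fc6"], activations["fc7"], activations["fc8"]],
                dim=1).squeeze(0).numpy())
    return np.stack(feats)

def run_transfer_learning(train_paths, train_labels, test_paths, test_labels):
    """Fit PCA + a linear SVM on the concatenated features (illustrative).

    Image paths and labels are assumed to come from the target dataset
    (e.g., Caltech-256); they are not defined by the paper's abstract.
    """
    pca = PCA(n_components=512)  # component count is illustrative
    train_reduced = pca.fit_transform(extract_features(train_paths))
    clf = LinearSVC().fit(train_reduced, train_labels)
    test_reduced = pca.transform(extract_features(test_paths))
    return clf.score(test_reduced, test_labels)
```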