• Title/Summary/Keyword: Information Distillation


Influence of HMDS additive on the properties of YAG:Ce nanophosphor

  • Choi, Kyu-Man;Kim, Yeo-Hwan;Lim, Hae-Jin;Yoon, Sang-Ok
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology / v.4 no.1 / pp.61-67 / 2011
  • The influence of hexamethyldisilazane (HMDS), added during a post-processing step based on n-butanol azeotropic distillation, on the luminescence properties of YAG:Ce nanophosphor was studied. Azeotropic distillation with an organic solvent (n-butanol) prevents the powders from agglomerating, because the larger solvent molecules lower the surface tension and replace the residual water in the precipitate more completely. HMDS, whose molecules are larger than those of n-butanol, was added during the azeotropic distillation. The phosphor synthesized by n-butanol azeotropic distillation exhibited less agglomeration and better photoluminescence properties than that obtained from the HMDS-added heterogeneous azeotropic distillation.

Design of Gas Concentration Process with Thermally Coupled Distillation Column Using HYSYS Simulation (HYSYS를 이용한 열복합 증류식 가스 농축공정의 설계)

  • 이주영;김영한;황규석
    • Journal of Institute of Control, Robotics and Systems / v.8 no.10 / pp.842-846 / 2002
  • The design of a gas concentration process using a fully thermally coupled distillation column is carried out with the commercial design software HYSYS. The design procedure is explained in detail, and the performance of the process is compared with that of a conventional system. A structural design is applied for design convenience. The design outcome indicates that the procedure is simple and efficient. The structural information obtained from equilibrium distillation allows an easy formulation of the distillation system, which is the initial input required to set up the system. The performance of the new process shows an energy saving of 17.6% compared with the conventional process, while the total number of trays remains the same.

Knowledge Distillation Based Continual Learning for PCB Part Detection (PCB 부품 검출을 위한 Knowledge Distillation 기반 Continual Learning)

  • Gang, Su Myung;Chung, Daewon;Lee, Joon Jae
    • Journal of Korea Multimedia Society / v.24 no.7 / pp.868-879 / 2021
  • PCB (Printed Circuit Board) inspection using a deep learning model requires a large amount of data and storage. As the amount of stored data grows, problems such as long training time and insufficient storage space arise. In this study, the existing object detection model is converted into a continual learning model so that PCB components, whose number constantly increases, can be recognized and classified. By restructuring the object detection model into a knowledge distillation model, we propose a method that distills knowledge about previously classified parts while simultaneously learning information about new components. In the classification scenario, the transfer learning model achieves 75.9%, while the continual learning model proposed in this study achieves 90.7%.
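
The distillation-of-old-classes idea in this abstract can be made concrete with a minimal sketch: a frozen copy of the previous model acts as a teacher for the classes it already knows, while the student learns the new classes from labels. The snippet below is an illustrative PyTorch-style formulation, not the authors' code; the class counts, temperature T, and weight alpha are assumptions.

```python
# Illustrative PyTorch-style sketch (not the authors' code) of distilling old
# classes from a frozen previous model while learning new classes from labels.
# The class counts, temperature T, and weight alpha are assumptions.
import torch
import torch.nn.functional as F

def continual_kd_loss(student_logits, teacher_logits, labels,
                      num_old_classes, T=2.0, alpha=0.5):
    # Supervised cross-entropy over all (old + new) classes.
    ce = F.cross_entropy(student_logits, labels)
    # Distill only the logits of previously learned classes from the frozen teacher.
    s_old = F.log_softmax(student_logits[:, :num_old_classes] / T, dim=1)
    t_old = F.softmax(teacher_logits[:, :num_old_classes] / T, dim=1)
    kd = F.kl_div(s_old, t_old, reduction="batchmean") * (T * T)
    return alpha * ce + (1.0 - alpha) * kd

# Example with random tensors standing in for PCB-part features:
# 10 previously learned part classes plus 2 newly added ones.
student_logits = torch.randn(8, 12)
teacher_logits = torch.randn(8, 10)
labels = torch.randint(0, 12, (8,))
print(continual_kd_loss(student_logits, teacher_logits, labels, num_old_classes=10))
```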

Research on Applying Knowledge Distillation for Crowd Counting Model Lightweighting (Crowd Counting 경량화를 위한 Knowledge Distillation 적용 연구)

  • Yeon-Joo Hong;Hye-Ryung Jeon;Yu-Yeon Kim;Hyun-Woo Kang;Min-Gyun Park;Kyung-June Lee
    • Proceedings of the Korea Information Processing Society Conference / 2023.11a / pp.918-919 / 2023
  • As deep learning technology advances, model complexity also increases. In this study, the Knowledge Distillation technique was applied to a Crowd Counting model for model lightweighting. Using M-SFANet as the Teacher model and the MCNN model, which has far fewer parameters, as the Student model, applying Knowledge Distillation improved performance over the original MCNN model. The resulting gains in accuracy and memory efficiency allow the model to run on devices with limited computing resources, which should enable wide practical use.
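
A minimal sketch of how response-level distillation is commonly applied to crowd counting is given below: the student's predicted density map is trained against both the ground truth and the (detached) output of the teacher. It assumes both networks emit density maps of the same spatial size; the loss weight lambda_kd and all tensor shapes are illustrative, not taken from the paper.

```python
# Illustrative sketch (not the paper's code) of response-level distillation for
# crowd counting: the student density map is trained against the ground truth
# and the detached teacher output. lambda_kd and all tensor shapes are assumed.
import torch
import torch.nn.functional as F

def crowd_kd_loss(student_density, teacher_density, gt_density, lambda_kd=0.5):
    sup = F.mse_loss(student_density, gt_density)               # supervised term
    kd = F.mse_loss(student_density, teacher_density.detach())  # distillation term
    return sup + lambda_kd * kd

# Example: a batch of 4 single-channel 96x96 density maps.
student_density = torch.rand(4, 1, 96, 96)   # e.g. from an MCNN-like student
teacher_density = torch.rand(4, 1, 96, 96)   # e.g. from an M-SFANet-like teacher
gt_density = torch.rand(4, 1, 96, 96)
print(crowd_kd_loss(student_density, teacher_density, gt_density))
```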

Improving Few-Shot Learning through Self-Distillation (Self-Distillation을 활용한 Few-Shot 학습 개선)

  • Kim, Tae-Hun;Choo, Jae-Gul
    • Proceedings of the Korea Information Processing Society Conference / 2018.10a / pp.617-620 / 2018
  • To overcome the limitation that deep learning requires large amounts of training data, few-shot learning models that achieve good performance with only a small amount of data have been steadily improving. However, the biggest weakness of few-shot models, overfitting caused by the small amount of data, remains a difficult problem. In this paper, we use the distillation technique, originally used for model compression, to improve the training of few-shot models. To this end, distillation is applied to three representative few-shot models: Siamese Networks, Prototypical Networks, and Matching Networks. Our experiments show that learning not only the hard true/false outputs but also the confidence attached to them helps alleviate the training problems of few-shot models.
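
The "confidence as well as true/false" point in this abstract is the standard soft-target loss. The following sketch combines hard-label cross-entropy with a temperature-softened KL term against a teacher (or an earlier snapshot of the same model, i.e. self-distillation); the temperature, weight, and 5-way episode size are illustrative assumptions, not values from the paper.

```python
# Illustrative sketch of the soft-target loss: the few-shot classifier learns the
# hard label and, through a temperature-softened KL term, the teacher's confidence
# distribution. T, alpha, and the 5-way/10-query episode are assumptions.
import torch
import torch.nn.functional as F

def soft_hard_loss(student_logits, teacher_logits, hard_labels, T=4.0, alpha=0.7):
    hard = F.cross_entropy(student_logits, hard_labels)
    soft = F.kl_div(F.log_softmax(student_logits / T, dim=1),
                    F.softmax(teacher_logits.detach() / T, dim=1),
                    reduction="batchmean") * (T * T)
    return alpha * hard + (1.0 - alpha) * soft

# Example: a 5-way episode with 10 query samples.
student_logits = torch.randn(10, 5)
teacher_logits = torch.randn(10, 5)
hard_labels = torch.randint(0, 5, (10,))
print(soft_hard_loss(student_logits, teacher_logits, hard_labels))
```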

Development of Small Distillation Column for Performance Evaluation of Distillation Column (증류탑 성능평가에 적합한 소형 증류탑 개발)

  • Kim, Byoung Chul;Cho, Tae Je;Kim, Young Han
    • Korean Chemical Engineering Research / v.48 no.5 / pp.668-671 / 2010
  • A lab-scale distillation experiment is conducted with the small packing used in lab-scale multi-tray distillation equipment, for the performance evaluation of distillation systems. Cylindrical 6.7 mm stainless-steel packings provide sufficient surface area and give good liquid holdup and residence time. Comparison between the theoretical trays from HYSYS and the experimental distillation results gives an HETP of 7 cm for a 27 cm packing height and an HETP of 8 cm for a 45 cm packing height, similar to the 8 cm HETP of commercial structured packing. The 7 cm HETP is attainable only with complete insulation, which demonstrates the importance of insulation. The results of this study indicate that a practical field distillation column can be tested in the lab.
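
The reported values follow from the definition of HETP as packing height divided by the number of theoretical stages; the stage counts below are back-calculated from the abstract's numbers and are not stated in the paper.

```latex
% HETP relates packing height Z to the number of theoretical stages N.
% The stage counts below are inferred from the reported heights and HETP values.
\[
  \mathrm{HETP} = \frac{Z}{N}, \qquad
  N \approx \frac{27\ \mathrm{cm}}{7\ \mathrm{cm}} \approx 3.9, \qquad
  N \approx \frac{45\ \mathrm{cm}}{8\ \mathrm{cm}} \approx 5.6
\]
```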

Performance Analysis of Hint-KD Training Approach for the Teacher-Student Framework Using Deep Residual Networks (딥 residual network를 이용한 선생-학생 프레임워크에서 힌트-KD 학습 성능 분석)

  • Bae, Ji-Hoon;Yim, Junho;Yu, Jaehak;Kim, Kwihoon;Kim, Junmo
    • Journal of the Institute of Electronics and Information Engineers / v.54 no.5 / pp.35-41 / 2017
  • In this paper, we analyze the performance of the recently introduced hint-based knowledge distillation (Hint-KD) training approach, built on the teacher-student framework for knowledge distillation and knowledge transfer. As the deep neural network (DNN) considered in this paper, the deep residual network (ResNet), currently regarded as a state-of-the-art DNN, is used in the teacher-student framework. When implementing Hint-KD training, we investigate the impact of the weight of the KD information, governed by the soften factor, on classification accuracy, using the widely used open deep learning framework Caffe. The results show that the recognition accuracy of the student model improves when the weight of the KD information is kept at a fixed value rather than gradually decreased during training.
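
For readers unfamiliar with the objective, the sketch below shows one common formulation of Hint-KD: hard-label cross-entropy, a temperature-softened ("soften factor") KD term whose weight lambda_kd can be held fixed or decayed, and a hint term that regresses an intermediate student feature onto the teacher's through a small adapter. The paper's experiments used Caffe; this PyTorch-style code and all channel sizes, weights, and the temperature are assumptions for illustration only.

```python
# Illustrative PyTorch-style sketch of a common Hint-KD objective (the paper
# itself used Caffe). Channel sizes, weights, and T are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

adapter = nn.Conv2d(32, 64, kernel_size=1)   # maps the student hint to the teacher's width

def hint_kd_loss(student_logits, teacher_logits, labels,
                 student_hint, teacher_hint, T=4.0, lambda_kd=0.5, lambda_hint=1.0):
    ce = F.cross_entropy(student_logits, labels)
    kd = F.kl_div(F.log_softmax(student_logits / T, dim=1),
                  F.softmax(teacher_logits.detach() / T, dim=1),
                  reduction="batchmean") * (T * T)
    hint = F.mse_loss(adapter(student_hint), teacher_hint.detach())
    return ce + lambda_kd * kd + lambda_hint * hint

# Example shapes (assumed): 100-class logits, 32- and 64-channel 16x16 hints.
s_logits, t_logits = torch.randn(4, 100), torch.randn(4, 100)
labels = torch.randint(0, 100, (4,))
s_hint, t_hint = torch.rand(4, 32, 16, 16), torch.rand(4, 64, 16, 16)
print(hint_kd_loss(s_logits, t_logits, labels, s_hint, t_hint))
```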

Knowledge Distillation Based on Internal/External Correlation Learning

  • Hun-Beom Bak;Seung-Hwan Bae
    • Journal of the Korea Society of Computer and Information / v.28 no.4 / pp.31-39 / 2023
  • In this paper, we propose Internal/External Knowledge Distillation (IEKD), which exploits both external correlations between feature maps of heterogeneous models and internal correlations between feature maps of the same model when transferring knowledge from a teacher model to a student model. To achieve this, we transform feature maps into a sequence format and extract new feature maps suitable for knowledge distillation by modeling internal and external correlations with a transformer. Both kinds of correlations are learned by distilling the extracted feature maps, and the accuracy of the student model is further improved by using the extracted feature maps for feature matching. To demonstrate the effectiveness of the proposed knowledge distillation method, we achieved 76.23% Top-1 image classification accuracy on the CIFAR-100 dataset with the "ResNet-32×4/VGG-8" teacher-student combination, outperforming state-of-the-art KD methods.
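
A heavily simplified, hypothetical sketch of the sequence-plus-transformer idea is shown below: teacher and student feature maps are projected, flattened into token sequences, mixed jointly by a transformer encoder (capturing correlations within and across the two models), and then matched with an MSE feature-matching loss. All projection sizes, the single-layer encoder, and the equal-spatial-size assumption are mine, not the authors' IEKD architecture.

```python
# Hypothetical, heavily simplified sketch of sequence-based feature-matching
# distillation; sizes and wiring are assumed, not the authors' IEKD code.
import torch
import torch.nn as nn
import torch.nn.functional as F

embed_dim = 64
proj_s = nn.Conv2d(32, embed_dim, kernel_size=1)    # student channels -> embed_dim (assumed)
proj_t = nn.Conv2d(128, embed_dim, kernel_size=1)   # teacher channels -> embed_dim (assumed)
encoder = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=embed_dim, nhead=4, batch_first=True),
    num_layers=1)

def iekd_feature_matching(student_feat, teacher_feat):
    # Assume, for simplicity, that both feature maps share the same H x W.
    s_tokens = proj_s(student_feat).flatten(2).transpose(1, 2)   # (B, H*W, D)
    t_tokens = proj_t(teacher_feat).flatten(2).transpose(1, 2)   # (B, H*W, D)
    n = s_tokens.size(1)
    mixed = encoder(torch.cat([s_tokens, t_tokens], dim=1))      # joint internal/external mixing
    s_mixed, t_mixed = mixed[:, :n], mixed[:, n:]
    return F.mse_loss(s_mixed, t_mixed.detach())                 # feature-matching distillation

# Example: 32-channel student and 128-channel teacher maps of size 8x8.
print(iekd_feature_matching(torch.rand(2, 32, 8, 8), torch.rand(2, 128, 8, 8)))
```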

A Wireless Internet Proxy Server Cluster with Enhanced Distillation and Caching Functions (압축과 캐싱 기능을 향상한 무선 인터넷 프록시 서버 클러스터)

  • Kwak, Hu-Keun;Hwang, Jae-Hoon;Chung, Kyu-Sik
    • Proceedings of the IEEK Conference / 2004.06a / pp.103-106 / 2004
  • A wireless Internet proxy server cluster must provide distillation and caching functions so that users on the wireless Internet can use existing wired Internet services. The distillation function distills HTML documents and the images they contain according to preferences defined by the user. When a user makes repeated requests, the caching function reduces response time by reusing the original and distilled images or HTML documents. In this paper, we propose enhanced distillation and caching functions. We performed experiments using 16 PCs, and the results show the effectiveness of the proposed system compared to the existing one.
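
The interplay of the two functions can be illustrated with a toy sketch: distilled versions of a page are cached per (URL, user preference), so a repeated wireless request reuses the stored distilled copy instead of fetching and distilling again. The distill() body, the preference labels, and the in-memory dictionary cache are placeholders, not the paper's cluster implementation.

```python
# Toy sketch of distillation + caching in a proxy; all details are placeholders.
cache = {}

def distill(content: bytes, preference: str) -> bytes:
    # Placeholder transformation, e.g. recompress or strip images per preference.
    return content[:1024] if preference == "low_bandwidth" else content

def handle_request(url: str, preference: str, fetch) -> bytes:
    key = (url, preference)
    if key not in cache:                    # miss: fetch the original and distill it once
        cache[key] = distill(fetch(url), preference)
    return cache[key]                       # hit: reuse the stored distilled copy

# Example usage with a stub fetcher.
print(len(handle_request("http://example.com", "low_bandwidth", lambda u: b"x" * 4096)))
```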


Lightweight Single Image Super-Resolution Convolution Neural Network in Portable Device

  • Wang, Jin;Wu, Yiming;He, Shiming;Sharma, Pradip Kumar;Yu, Xiaofeng;Alfarraj, Osama;Tolba, Amr
    • KSII Transactions on Internet and Information Systems (TIIS) / v.15 no.11 / pp.4065-4083 / 2021
  • Super-resolution can improve the clarity of low-resolution (LR) images, which can increase the accuracy of high-level computer vision tasks. Portable devices have limited computing power and storage, so large-scale neural-network super-resolution methods are not suitable for them. To save computational cost and reduce the number of parameters, lightweight image-processing methods can improve the processing speed of portable devices. We therefore propose the Enhanced Information Multiple Distillation Network (EIMDN) to achieve lower delay and cost. The EIMDN takes a feedback mechanism as its framework and obtains low-level features through high-level features. Further, we replace the feature-extraction convolution in the Information Multiple Distillation Block (IMDB) with the Ghost module, and propose the Enhanced Information Multiple Distillation Block (EIMDB) to reduce the amount of computation and the number of parameters. Finally, coordinate attention (CA) is applied at the end of the IMDB and EIMDB to enhance the extraction of important spatial and channel information. Experimental results show that the proposed method converges faster with fewer parameters and less computation than other lightweight super-resolution methods, while achieving higher peak signal-to-noise ratio (PSNR) and structural similarity (SSIM) and noticeably better reconstruction of image textures and object contours.
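
As a rough illustration of the Ghost-module substitution the abstract describes, the sketch below generates half of the output channels with an ordinary convolution and the other half with a cheap depthwise convolution applied to those intrinsic maps. The channel ratio and the surrounding EIMDB wiring are assumptions; this is not the authors' network definition.

```python
# Illustrative Ghost-style convolution: a small primary convolution produces the
# "intrinsic" maps and a cheap depthwise convolution generates the remaining
# "ghost" maps. Ratio and surrounding EIMDB wiring are assumptions.
import torch
import torch.nn as nn

class GhostConv(nn.Module):
    def __init__(self, in_ch, out_ch, ratio=2):
        super().__init__()
        primary_ch = out_ch // ratio
        cheap_ch = out_ch - primary_ch
        self.primary = nn.Sequential(
            nn.Conv2d(in_ch, primary_ch, 3, padding=1), nn.ReLU(inplace=True))
        self.cheap = nn.Sequential(   # depthwise: far fewer parameters and FLOPs
            nn.Conv2d(primary_ch, cheap_ch, 3, padding=1, groups=primary_ch),
            nn.ReLU(inplace=True))

    def forward(self, x):
        y = self.primary(x)
        return torch.cat([y, self.cheap(y)], dim=1)

# Example: a 64-channel 48x48 feature map in and out.
print(GhostConv(64, 64)(torch.rand(1, 64, 48, 48)).shape)   # torch.Size([1, 64, 48, 48])
```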