Development of Application to Deal with Large Data Using Hadoop for 3D Printer

  • Received : 2019.07.17
  • Accepted : 2019.08.18
  • Published : 2020.01.31

Abstract

3D printing is one of the emerging technologies and is receiving a lot of attention. To 3D-print an object, a 3D model is first generated and then converted to G-code, the instruction set that drives the 3D printer. The model's surface is represented by facets, small triangles each covering a small piece of the surface. Depending on the height or precision of the 3D model, the number of facets can become very large, so the conversion from 3D model to G-code takes correspondingly longer. Apache Hadoop is a software framework that supports distributed processing of large data sets, and its range of applications keeps widening. In this paper, Hadoop is used to perform the conversion in a time-efficient way. A two-phase distributed algorithm is developed first: all facets are sorted by their lowest Z-value, divided into N parts, and converted on several nodes independently. The algorithm is implemented in four steps: preprocessing, followed by Hadoop's Map, Shuffle, and Reduce. Finally, for the performance evaluation, a Hadoop cluster is set up and converts a test 3D model while its height or precision is varied.
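The abstract only outlines the two-phase scheme, so a minimal Java sketch of how that idea could be expressed as a Hadoop MapReduce job may help. This is not the authors' code: the class names (FacetToGcode, FacetPartitionMapper, SliceReducer), the facets.partitions and model.height settings, and the assumed one-facet-per-line input format (three vertex Z-values first) are all hypothetical. The mapper keys each facet by the Z-slab its lowest vertex falls into, Hadoop's shuffle groups facets per slab, and each reducer would then convert its slab to G-code independently.

// Hypothetical sketch of the two-phase idea on Hadoop MapReduce (not the paper's code).
// Phase 1 (Map + Shuffle): bucket each facet by the Z-slab of its lowest vertex.
// Phase 2 (Reduce): each node converts one slab of facets to G-code independently.
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class FacetToGcode {

  // Assumed input: one facet per line, its three vertex Z-values first.
  public static class FacetPartitionMapper
      extends Mapper<Object, Text, IntWritable, Text> {
    private final IntWritable slab = new IntWritable();

    @Override
    protected void map(Object key, Text value, Context ctx)
        throws IOException, InterruptedException {
      String[] f = value.toString().trim().split("\\s+");
      if (f.length < 3) return;                         // skip malformed lines
      double minZ = Math.min(Double.parseDouble(f[0]),
                    Math.min(Double.parseDouble(f[1]),
                             Double.parseDouble(f[2])));
      int n = ctx.getConfiguration().getInt("facets.partitions", 4);
      double h = ctx.getConfiguration().getFloat("model.height", 100f);
      // Slab index 0..n-1, clamped in case of rounding at the model's top.
      slab.set(Math.max(0, Math.min(n - 1, (int) (minZ / h * n))));
      ctx.write(slab, value);            // shuffle groups all facets of a slab
    }
  }

  public static class SliceReducer
      extends Reducer<IntWritable, Text, IntWritable, Text> {
    @Override
    protected void reduce(IntWritable slab, Iterable<Text> facets, Context ctx)
        throws IOException, InterruptedException {
      // Placeholder: a real slicer would intersect facets with layer planes here.
      int count = 0;
      for (Text ignored : facets) count++;
      ctx.write(slab, new Text("; G-code for slab " + slab.get()
                               + " from " + count + " facets"));
    }
  }

  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    conf.setInt("facets.partitions", 4);   // N in the abstract
    conf.setFloat("model.height", 100f);   // assumed model height
    Job job = Job.getInstance(conf, "facet-to-gcode");
    job.setJarByClass(FacetToGcode.class);
    job.setMapperClass(FacetPartitionMapper.class);
    job.setReducerClass(SliceReducer.class);
    job.setOutputKeyClass(IntWritable.class);
    job.setOutputValueClass(Text.class);
    FileInputFormat.addInputPath(job, new Path(args[0]));
    FileOutputFormat.setOutputPath(job, new Path(args[1]));
    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}

A run over a directory of facet files would look like "hadoop jar facet-to-gcode.jar FacetToGcode in/ out/" (paths assumed), with the number of slabs N controlled through facets.partitions.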

