• Title/Summary/Keyword: big machine tools


A Study on the Introduction of Intelligent Document Processing and Change of Record Management (지능형 문서처리 도입과 기록관리 변화에 관한 연구)

  • Ryu, Hanjo;Lee, Kyungnam;Hwang, Jinhyun;Yim, Jinhee
    • The Korean Journal of Archival Studies
    • /
    • no.68
    • /
    • pp.41-72
    • /
    • 2021
  • In order to analyze big data, documents should be converted to an open standard format to increase machine readability; natural language processing tools are also needed. This study examined the background of intelligent document processing and the status of research in the public sector, and predicted the changes in work that intelligent document processing would bring. It focused on the changes that intelligent document processing would bring to archival work, and also considered changes in the role of archivists and their required competencies. Changes could be anticipated across a wide range of records management and archives management work; in particular, a significant impact was expected on the automation of repetitive archival tasks and on the description and utilization of records. The study proposed preparing new archival work procedures, methods, and the necessary competencies in response to these changes.
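
A minimal sketch of the two technical points the abstract raises: reading a record stored in an open standard format (OOXML via python-docx) into a machine-readable structure and applying a very simple stand-in for an NLP step. The file name and the frequency-based keyword heuristic are illustrative assumptions, not the study's actual pipeline.

```python
# Hypothetical sketch: flatten a .docx record into a JSON-serializable structure
# and extract candidate descriptive keywords. The path and the keyword heuristic
# are assumptions for illustration only.
import json
import re
from collections import Counter

from docx import Document  # python-docx reads the OOXML open standard format


def docx_to_record(path: str) -> dict:
    """Convert a .docx file into a simple machine-readable record."""
    doc = Document(path)
    paragraphs = [p.text for p in doc.paragraphs if p.text.strip()]
    text = "\n".join(paragraphs)
    # Naive frequency-based keyword extraction stands in for a real NLP pipeline.
    tokens = re.findall(r"[A-Za-z가-힣]{2,}", text.lower())
    keywords = [word for word, _ in Counter(tokens).most_common(10)]
    return {"source": path, "text": text, "keywords": keywords}


if __name__ == "__main__":
    record = docx_to_record("sample_record.docx")  # placeholder file name
    print(json.dumps(record, ensure_ascii=False, indent=2))
```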

A Study on the Development of a Program to Body Circulation Measurement Using the Machine Learning and Depth Camera

  • Choi, Dong-Gyu;Jang, Jong-Wook
    • International Journal of Internet, Broadcasting and Communication
    • /
    • v.12 no.1
    • /
    • pp.122-129
    • /
    • 2020
  • Body circumference is not only an indicator used when buying clothes but also an important factor that can improve the effectiveness of treatment once the shape of the body has been determined in a hospital. Several measurement tools and methods exist for this purpose, but manual measurement for accurate identification takes a great deal of time by the standards of modern advanced societies, and current automatic body-scanning equipment is generally hard to use because of its large volume or high price. In this paper, OpenPose, a deep learning-based skeleton tracking model, is used to solve the problems of previous methods and to ease application. Joints and an approximation of the body shape are found by applying depth camera data to reference data for the measurement parts provided by hospitals, and a program is developed that measures body circumference in a lighter and easier way using the elliptical circumference formula.
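
A small sketch of the final estimation step mentioned above: approximating a body circumference from the width and depth of a cross-section using an elliptical circumference formula (here Ramanujan's approximation). The semi-axis values are hypothetical; the paper's OpenPose and depth-camera pipeline that would produce them is not reproduced.

```python
# Minimal sketch: estimate a body circumference from an assumed elliptical
# cross-section. Input values are illustrative assumptions.
import math


def ellipse_circumference(a: float, b: float) -> float:
    """Ramanujan's approximation for the circumference of an ellipse
    with semi-axes a and b (same length unit in, same unit out)."""
    return math.pi * (3 * (a + b) - math.sqrt((3 * a + b) * (a + 3 * b)))


# Example: a waist cross-section roughly 32 cm wide and 22 cm deep
# gives semi-axes of 16 cm and 11 cm.
waist = ellipse_circumference(a=16.0, b=11.0)
print(f"Estimated waist circumference: {waist:.1f} cm")
```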

Static and Dynamic Characteristics of the Spindle Bearing System with a Gear Located on the Bearing Span (베어링 스팬상에 기어구동축을 갖는 스핀들 베어링 시스템의 정적 및 동적 해석방법에 관한 연구)

  • Choe, Jin-Gyeong;Big, Gyu-Yeol;Lee, Dae-Gil
    • Transactions of the Korean Society of Mechanical Engineers A
    • /
    • v.20 no.5
    • /
    • pp.1477-1485
    • /
    • 1996
  • Since the spindle bearing system is the main source of the total cutting point compliance of machine tool structures, in this work the static and dynamic characteristics of a spindle bearing system driven by a gear located on the bearing span were investigated using analytical and finite element methods to improve the performance of the spindle bearing system. Based on the theoretical results, a specially designed prototype spindle bearing system was manufactured, and its static and dynamic characteristics were measured. Comparison of the experimental results with the theoretical results showed that the finite element method predicted the static and dynamic characteristics of the spindle bearing system well.
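
A hedged sketch of the quantities the abstract contrasts: a single-degree-of-freedom model illustrates static compliance (1/k) versus dynamic compliance around the resonance near sqrt(k/m). The stiffness, mass, and damping values are assumptions for illustration, not data from the paper, which used full analytical and finite element models.

```python
# Illustrative 1-DOF compliance model of a spindle at the cutting point.
# All parameter values are assumed.
import numpy as np

k = 2.0e8   # equivalent stiffness at the cutting point [N/m] (assumed)
m = 40.0    # equivalent mass [kg] (assumed)
c = 4.0e3   # equivalent viscous damping [N*s/m] (assumed)

static_compliance = 1.0 / k                 # deflection per unit static force [m/N]
f_n = np.sqrt(k / m) / (2 * np.pi)          # undamped natural frequency [Hz]

freqs = np.linspace(1.0, 2000.0, 2000)      # excitation frequencies [Hz]
omega = 2 * np.pi * freqs
dynamic_compliance = np.abs(1.0 / (k - m * omega**2 + 1j * c * omega))

print(f"Static compliance: {static_compliance:.2e} m/N")
print(f"Natural frequency: {f_n:.0f} Hz, "
      f"peak dynamic compliance: {dynamic_compliance.max():.2e} m/N")
```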

A Study on Five-Axis Roughing of Impeller with Ruled Surface (룰드 곡면으로 된 임펠러의 5축 황삭 가공에 관한 연구)

  • Jang, Dong-Kyu;Lim, Ki-Nam;Yang, Gyun-Eui
    • Journal of the Korean Society for Precision Engineering
    • /
    • v.24 no.7 s.196
    • /
    • pp.60-68
    • /
    • 2007
  • This paper presents an efficient 5-axis roughing method for a centrifugal impeller. Efficient roughing minimizes cutting time by minimizing tool tilting and rotating motions. To minimize cutting time, the machining area is divided into sub-cutting regions using control points on the hub and shroud curves of the blade that are used to design and analyze the centrifugal impeller. For each sub-cutting region, the cutting tool diameter is chosen as large as possible, and tool paths are then generated with the tilting and rotating axes of the 5-axis machine limited and fixed, which gives higher machining speed and better machining stability than conventional methods. Experimental results show that the proposed method is more efficient than the conventional approach of milling with only one cutting tool without dividing the area, and than previous methods that mill with simultaneous 5-axis motion over divided areas.
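
An illustrative sketch of the tool-size selection idea described above: for each sub-cutting region, pick the largest available cutter whose diameter still fits the narrowest passage between blades, with a clearance margin. The region widths, tool list, and clearance factor are assumptions, not the paper's data.

```python
# Hypothetical largest-tool-that-fits selection per sub-cutting region.
from dataclasses import dataclass


@dataclass
class SubRegion:
    name: str
    min_passage_width: float  # narrowest gap between adjacent blades [mm]


AVAILABLE_TOOL_DIAMETERS = [20.0, 16.0, 12.0, 10.0, 8.0, 6.0]  # [mm] (assumed)
CLEARANCE = 0.8  # use at most 80% of the passage width (assumed safety margin)


def select_tool(region: SubRegion) -> float:
    """Return the largest tool diameter that fits the sub-region."""
    for d in sorted(AVAILABLE_TOOL_DIAMETERS, reverse=True):
        if d <= CLEARANCE * region.min_passage_width:
            return d
    return min(AVAILABLE_TOOL_DIAMETERS)


regions = [SubRegion("hub side", 18.0), SubRegion("mid span", 14.0),
           SubRegion("shroud side", 9.5)]
for r in regions:
    print(f"{r.name}: {select_tool(r):.0f} mm cutter")
```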

Supramax Bulk Carrier Market Forecasting with Technical Indicators and Neural Networks

  • Lim, Sang-Seop;Yun, Hee-Sung
    • Journal of Navigation and Port Research
    • /
    • v.42 no.5
    • /
    • pp.341-346
    • /
    • 2018
  • Supramax bulk carriers cover a wide range of ocean transportation requirements, from major to minor bulk cargoes. Market forecasting for this segment has posed a challenge to researchers due to the complexity involved on the demand side of the forecasting model. This paper addresses the issue by using technical indicators as input features instead of complicated supply-demand variables. Artificial neural networks (ANN), one of the most popular machine-learning tools, were used to replace classical time-series models. Results revealed that the ANN outperformed the benchmark binomial logistic regression model and predicted the direction of the spot market with more than 70% accuracy. These results can enable chartering desks to make better short-term chartering decisions.
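
A hedged sketch of the modelling setup described above: technical indicators computed from a freight-rate series feed a small neural network that predicts the direction of the next move, with logistic regression as the benchmark. The synthetic rate series, indicator choices, and network size are assumptions; the paper's actual Supramax data and features are not reproduced.

```python
# Directional classification with technical indicators on a synthetic rate series.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
rates = pd.Series(10_000 + np.cumsum(rng.normal(0, 150, 1000)))  # synthetic spot rates

df = pd.DataFrame({
    "momentum_5": rates.diff(5),                         # 5-period momentum
    "ma_gap": rates - rates.rolling(20).mean(),          # distance from 20-period MA
    "volatility": rates.pct_change().rolling(10).std(),  # rolling volatility
})
df["direction_up"] = (rates.shift(-1) > rates).astype(int)  # next-step direction
df = df.dropna().iloc[:-1]  # drop warm-up rows and the last row (unknown target)

split = int(len(df) * 0.8)  # time-ordered split: no look-ahead
X_train, X_test = df.iloc[:split, :3], df.iloc[split:, :3]
y_train, y_test = df.iloc[:split, 3], df.iloc[split:, 3]

ann = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
logit = LogisticRegression(max_iter=1000)
for name, model in [("ANN", ann), ("Logistic regression", logit)]:
    model.fit(X_train, y_train)
    acc = accuracy_score(y_test, model.predict(X_test))
    print(f"{name}: directional accuracy {acc:.2f}")
```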

Genomic data Analysis System using GenoSync based on SQL in Distributed Environment

  • Seine Jang;Seok-Jae Moon
    • International journal of advanced smart convergence
    • /
    • v.13 no.3
    • /
    • pp.150-155
    • /
    • 2024
  • Genomic data plays a transformative role in medicine, biology, and forensic science, offering insights that drive advancements in clinical diagnosis, personalized medicine, and crime scene investigation. Despite its potential, the integration and analysis of diverse genomic datasets remain challenging due to compatibility issues and the specialized nature of existing tools. This paper presents the GenomeSync system, designed to overcome these limitations by utilizing the Hadoop framework for large-scale data handling and integration. GenomeSync enhances data accessibility and analysis through SQL-based search capabilities and machine learning techniques, facilitating the identification of genetic traits and the resolution of forensic cases. By pre-processing DNA profiles from crime scenes, the system calculates similarity scores to identify and aggregate related genomic data, enabling accurate prediction models and personalized treatment recommendations. GenomeSync offers greater flexibility and scalability, supporting complex analytical needs across industries. Its robust cloud-based infrastructure ensures data integrity and high performance, positioning GenomeSync as a crucial tool for reliable, data-driven decision-making in the genomic era.
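
A hedged sketch of the SQL-based similarity search idea described above: DNA profiles are stored in long form (sample, locus, allele), and a query profile from a crime scene is scored by the number of matching locus/allele pairs. The schema, loci, and scoring rule are illustrative assumptions, not the system's actual implementation.

```python
# Spark SQL similarity scoring over a toy table of STR profiles (invented data).
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("profile-similarity-sketch").getOrCreate()

profiles = spark.createDataFrame(
    [("S1", "D8S1179", 13), ("S1", "TH01", 9), ("S1", "FGA", 22),
     ("S2", "D8S1179", 12), ("S2", "TH01", 9), ("S2", "FGA", 24)],
    ["sample_id", "locus", "allele"])
query = spark.createDataFrame(
    [("D8S1179", 13), ("TH01", 9), ("FGA", 24)], ["locus", "allele"])

profiles.createOrReplaceTempView("profiles")
query.createOrReplaceTempView("query_profile")

# Rank stored samples by the number of locus/allele matches with the query.
spark.sql("""
    SELECT p.sample_id, COUNT(*) AS matching_alleles
    FROM profiles p
    JOIN query_profile q
      ON p.locus = q.locus AND p.allele = q.allele
    GROUP BY p.sample_id
    ORDER BY matching_alleles DESC
""").show()
```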

S-PARAFAC: Distributed Tensor Decomposition using Apache Spark (S-PARAFAC: 아파치 스파크를 이용한 분산 텐서 분해)

  • Yang, Hye-Kyung;Yong, Hwan-Seung
    • Journal of KIISE
    • /
    • v.45 no.3
    • /
    • pp.280-287
    • /
    • 2018
  • Recently, the use of recommendation systems and of tensor analysis of high-dimensional data has been increasing, as they allow latent elements and patterns to be extracted from the tensor. However, because of the large size and complexity of the tensor, it must be decomposed before the tensor data can be analyzed. Several tools such as rTensor, pyTensor, and MATLAB are used for tensor decomposition, but since they run on a single machine they cannot handle large data, and while distributed tensor decomposition tools based on Hadoop can handle tensors at scale, their computing speed is too slow. In this paper, we propose S-PARAFAC, a tensor decomposition tool based on Apache Spark in a distributed in-memory environment. We converted the PARAFAC algorithm into an Apache Spark version that enables rapid processing of tensor data, and compared the performance of a Hadoop-based tensor tool with S-PARAFAC. The results showed that S-PARAFAC is approximately 4 to 25 times faster than the Hadoop-based tool.
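
A single-machine NumPy sketch of the PARAFAC/CP decomposition via alternating least squares, the algorithm that S-PARAFAC parallelizes; this is not the Spark implementation, and the test tensor is synthetic.

```python
# CP/PARAFAC decomposition of a 3-way tensor by alternating least squares (ALS).
import numpy as np


def khatri_rao(U, V):
    """Column-wise Kronecker product; rows indexed by (row of U, row of V)."""
    return np.stack([np.kron(U[:, r], V[:, r]) for r in range(U.shape[1])], axis=1)


def unfold(X, mode):
    """Mode-n matricization (C-order) of a 3-way tensor."""
    return np.reshape(np.moveaxis(X, mode, 0), (X.shape[mode], -1))


def cp_als(X, rank, n_iter=200, seed=0):
    """Return factor matrices A, B, C of a rank-`rank` CP model of X."""
    rng = np.random.default_rng(seed)
    A, B, C = (rng.standard_normal((dim, rank)) for dim in X.shape)
    for _ in range(n_iter):
        A = unfold(X, 0) @ khatri_rao(B, C) @ np.linalg.pinv((B.T @ B) * (C.T @ C))
        B = unfold(X, 1) @ khatri_rao(A, C) @ np.linalg.pinv((A.T @ A) * (C.T @ C))
        C = unfold(X, 2) @ khatri_rao(A, B) @ np.linalg.pinv((A.T @ A) * (B.T @ B))
    return A, B, C


# Build a small exact rank-2 tensor and check the reconstruction error.
rng = np.random.default_rng(1)
A0, B0, C0 = rng.random((6, 2)), rng.random((5, 2)), rng.random((4, 2))
X = np.einsum("ir,jr,kr->ijk", A0, B0, C0)
A, B, C = cp_als(X, rank=2)
X_hat = np.einsum("ir,jr,kr->ijk", A, B, C)
print("Relative reconstruction error:", np.linalg.norm(X - X_hat) / np.linalg.norm(X))
```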

The World as Seen from Venice (1205-1533) as a Case Study of Scalable Web-Based Automatic Narratives for Interactive Global Histories

  • NANETTI, Andrea;CHEONG, Siew Ann
    • Asian review of World Histories
    • /
    • v.4 no.1
    • /
    • pp.3-34
    • /
    • 2016
  • This introduction is both a statement of a research problem and an account of the first research results for its solution. As more historical databases come online and overlap in coverage, we need to discuss the two main issues that have prevented 'big' results from emerging so far. Firstly, historical data are seen by computer science people as unstructured, that is, historical records cannot be easily decomposed into unambiguous fields, as in population (birth and death records) and taxation data. Secondly, machine-learning tools developed for structured data cannot be applied as they are to historical research. We propose a complex network, narrative-driven approach to mining historical databases. In such a time-integrated network obtained by overlaying records from historical databases, the nodes are actors, while the links are actions. In the case study that we present (the world as seen from Venice, 1205-1533), the actors are governments, while the actions are limited to war, trade, and treaty to keep the case study tractable. We then identify key periods, key events, and hence key actors and key locations through a time-resolved examination of the actions. This tool allows historians to deal with historical data issues (e.g., source provenance identification, event validation, trade-conflict-diplomacy relationships, etc.). On a higher level, this automatic extraction of key narratives from a historical database allows historians to formulate hypotheses on the courses of history, and also allows them to test these hypotheses on other actions or on additional data sets. Our vision is that this narrative-driven analysis of historical data can lead to the development of multiple-scale agent-based models, which can be simulated on a computer to generate ensembles of counterfactual histories that would deepen our understanding of how our actual history developed the way it did. The generation of such narratives, automatically and in a scalable way, will revolutionize the practice of history as a discipline, because historical knowledge, that is, the treasure of human experiences (i.e., the heritage of the world), will become what might be inherited by machine learning algorithms and used in smart cities to highlight and explain present ties and illustrate potential future scenarios and visionarios.
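
A hedged sketch of the data structure described above: a time-integrated network whose nodes are actors (governments) and whose links are actions (war, trade, treaty), from which key actors in a given period can be read off. The records below are invented examples, not data from the Venice corpus.

```python
# Toy time-integrated actor/action network and a per-period activity ranking.
from collections import Counter

import networkx as nx

records = [  # (actor_a, actor_b, action, year) - illustrative only
    ("Venice", "Byzantium", "treaty", 1205),
    ("Venice", "Genoa", "war", 1257),
    ("Venice", "Egypt", "trade", 1302),
    ("Venice", "Genoa", "war", 1350),
    ("Venice", "Milan", "treaty", 1454),
]

G = nx.MultiGraph()
for a, b, action, year in records:
    G.add_edge(a, b, action=action, year=year)

# Key actors in a chosen period, ranked by how many recorded actions involve them.
period = (1250, 1400)
activity = Counter()
for a, b, data in G.edges(data=True):
    if period[0] <= data["year"] <= period[1]:
        activity[a] += 1
        activity[b] += 1
print(f"Most active actors in {period}: {activity.most_common(3)}")
```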

Proposition of balanced comparative confidence considering all available diagnostic tools (모든 가능한 진단도구를 활용한 균형비교신뢰도의 제안)

  • Park, Hee Chang
    • Journal of the Korean Data and Information Science Society
    • /
    • v.26 no.3
    • /
    • pp.611-618
    • /
    • 2015
  • According to Wikipedia, big data is a broad term for data sets so large or complex that traditional data processing applications are inadequate. Data mining is the computational process of discovering patterns in huge data sets using methods at the intersection of association rules, decision trees, clustering, artificial intelligence, and machine learning. Association rule mining is a well-researched method for discovering interesting relationships between itemsets in huge databases and has been applied in various fields. There are positive, negative, and inverse association rules according to the direction of association, and when setting evaluation criteria for association rules it may be desirable to consider all three types at the same time. To this end, we proposed a balanced comparative confidence that considers sensitivity, specificity, false positives, and false negatives, checked the association threshold conditions of Piatetsky-Shapiro, and compared it with comparative confidence and inversely comparative confidence through a few experiments.
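
A hedged sketch of the ingredients named above: treating "antecedent present" as a diagnostic test for "consequent present", the 2x2 contingency counts of an association rule yield sensitivity, specificity, and the false-positive and false-negative rates. The paper's balanced comparative confidence combines such quantities; its exact formula is not reproduced here, and the counts below are invented.

```python
# Diagnostic-style measures for an association rule A -> B from a 2x2 table.
def diagnostic_measures(n11, n10, n01, n00):
    """n11: A and B, n10: A only, n01: B only, n00: neither."""
    sensitivity = n11 / (n11 + n01)          # P(A present | B present)
    specificity = n00 / (n00 + n10)          # P(A absent | B absent)
    false_positive_rate = 1 - specificity
    false_negative_rate = 1 - sensitivity
    confidence = n11 / (n11 + n10)           # classical rule confidence P(B | A)
    return dict(sensitivity=sensitivity, specificity=specificity,
                fpr=false_positive_rate, fnr=false_negative_rate,
                confidence=confidence)


# Example contingency table for a rule A -> B over 1,000 transactions (invented).
print(diagnostic_measures(n11=320, n10=80, n01=120, n00=480))
```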

IoT based real time agriculture farming

  • Mateen, Ahmed;Zhu, Qingsheng;Afsar, Salman
    • International journal of advanced smart convergence
    • /
    • v.8 no.4
    • /
    • pp.16-25
    • /
    • 2019
  • The Internet of Things (IoT) is remodeling agribusiness, empowering farmers through a wide range of strategies, such as precision and practical farming, to deal with challenges in the field. This paper aims to make use of evolving technology, namely IoT and smart agriculture using automation. Its objective is to present tools and best practices for understanding the role of information and communication technologies in the agriculture sector, and to motivate and enable even illiterate farmers to understand the insights given by big data analytics using machine learning. The system described here can monitor humidity and soil moisture levels and can even detect motion; according to the data received from all the sensors, the water pump, cutter, and sprayer are automatically activated or deactivated. We investigate a remote monitoring system using Wi-Fi: the nodes send data wirelessly to a central server, which collects and stores the data, allows it to be analyzed and displayed as needed, and can also forward it to the client's mobile device.
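
A hedged sketch of the control logic described above: readings for soil moisture, humidity, and motion drive on/off decisions for the pump, sprayer, and cutter, and each reading is forwarded to a central server. The threshold values, server URL, and payload format are assumptions for illustration.

```python
# Threshold-based actuator decisions plus an upload of the reading to a server.
import json
import urllib.request

MOISTURE_THRESHOLD = 35.0   # % below which irrigation starts (assumed)
HUMIDITY_THRESHOLD = 80.0   # % above which spraying is paused (assumed)
SERVER_URL = "http://example.com/api/readings"  # placeholder endpoint


def decide_actuators(reading: dict) -> dict:
    """Map one sensor reading to on/off states for the actuators."""
    return {
        "water_pump": reading["soil_moisture"] < MOISTURE_THRESHOLD,
        "sprayer": reading["humidity"] < HUMIDITY_THRESHOLD,
        "cutter": reading["motion_detected"],
    }


def send_to_server(reading: dict, actions: dict) -> None:
    """Post the reading and the resulting actions as JSON to the central server."""
    payload = json.dumps({"reading": reading, "actions": actions}).encode()
    req = urllib.request.Request(SERVER_URL, data=payload,
                                 headers={"Content-Type": "application/json"})
    urllib.request.urlopen(req, timeout=5)


reading = {"soil_moisture": 28.4, "humidity": 62.0, "motion_detected": False}
actions = decide_actuators(reading)
print(actions)  # e.g. {'water_pump': True, 'sprayer': True, 'cutter': False}
# send_to_server(reading, actions)  # enable when a real endpoint is available
```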