• Title/Summary/Keyword: Precision Engineering


Design Factor Analysis of End-Effector for Oriental Melon Harvesting Robot in Greenhouse Cultivation (시설재배 참외 수확 로봇용 엔드이펙터의 설계 요인 분석)

  • Ha, Yu Shin;Kim, Tae Wook
    • Journal of Bio-Environment Control
    • /
    • v.22 no.3
    • /
    • pp.284-290
    • /
    • 2013
  • This study analyzed the geometric, compressive, cutting and friction properties of oriental melons as a preliminary step toward developing an end-effector for a robot capable of harvesting oriental melons in protected cultivation; the goal was to design a gripper capable of soft handling and a cutter for severing the melon vine. The average length, midpoint diameter, weight, volume and roundness of the oriental melons were 108 mm, 70 mm, 188 g, 333 mL and 3.8 mm, respectively. Nonlinear regression analysis was performed on the equation $W = L^a \times D_2^b$, relating the weight (W) of the melons to their length (L) and midpoint diameter (D2); the regression yielded the constants a = 2.0279 and b = -0.9998. The average diameter of the melon vine was 3.8 mm, and most vines were distributed within a radius of 5 mm from the center. The average yield value, compressive strength and hardness of the melons were 36.5 N/cm², 185.7 N/cm² and 636.7 N/cm², respectively. The average cutting force and shear strength of the melon vines were 2.87 × 10⁻² N and 5.60 N/cm², respectively. The friction coefficient of the melons was highest on rubber (0.609), followed by aluminium (0.393), stainless steel (0.177) and Teflon (0.079). Based on the analyzed data, the results are considered applicable to determining the size of the gripper and cutter, the turning radius, the dynamics of the drive motor, and the selection and quality of materials, in light of the position error and safety factor associated with the end-effector's movement.
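A minimal sketch (Python/SciPy) of the nonlinear regression described above, fitting the allometric model W = L^a × D2^b; the melon measurements below are made-up placeholders, not the paper's data, so the fitted constants will differ from the reported a = 2.0279 and b = -0.9998.

```python
import numpy as np
from scipy.optimize import curve_fit

def melon_weight(X, a, b):
    """Allometric weight model from the abstract: W = L^a * D2^b."""
    L, D2 = X                      # length (mm) and midpoint diameter (mm)
    return L**a * D2**b

# hypothetical measurements: length (mm), midpoint diameter (mm), weight (g)
L  = np.array([100.0, 105.0, 110.0, 115.0, 120.0])
D2 = np.array([ 66.0,  68.0,  70.0,  72.0,  74.0])
W  = np.array([160.0, 175.0, 190.0, 205.0, 220.0])

(a, b), _ = curve_fit(melon_weight, (L, D2), W, p0=(2.0, -1.0))
print(f"fitted constants: a = {a:.4f}, b = {b:.4f}")
```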

Suggestion for Comprehensive Quality Assurance of Medical Linear Accelerator in Korea (국내 선형가속기의 포괄적인 품질관리체계에 대한 제언)

  • Choi, Sang Hyoun;Park, Dong-wook;Kim, Kum Bae;Kim, Dong Wook;Lee, Jaiki;Shin, Dong Oh
    • Progress in Medical Physics
    • /
    • v.26 no.4
    • /
    • pp.294-303
    • /
    • 2015
  • In 1994 the American Association of Physicists in Medicine (AAPM) published the Task Group 40 (TG-40) report, which contains recommendations for comprehensive quality assurance (QA) of medical linear accelerators, and in 2010 it published the TG-142 report, which extends the QA recommendations to procedures such as intensity-modulated radiotherapy (IMRT), stereotactic radiosurgery (SRS) and stereotactic body radiation therapy (SBRT). Recently, the Nuclear Safety and Security Commission (NSSC) issued NSSC Notification No. 2015-005, "Technological standards for radiation safety in the medical field". This notification requires each medical institution to establish QA guidelines covering organization and responsibilities, devices, methods, frequencies, tolerances and action levels, and to implement the QA program. Accordingly, every facility that uses treatment machines for patient care should establish QA items, frequencies and tolerances appropriate to the techniques in use (non-IMRT, IMRT and SRS/SBRT) and perform the quality assurance. In Korea, however, there is a lack of guidelines and reports from the Korean Society of Medical Physicists (KSMP) that medical institutions can reference when establishing a systematic QA program. This report therefore suggests a comprehensive quality assurance system, adapted to domestic conditions, based on the NSSC notification and the AAPM TG-142 report. We expect that the QA system suggested for medical linear accelerators will also help in establishing QA systems for other high-precision radiation treatment machines.

Trend in Research and Application of Hard Carbon-based Thin Films (탄소계 경질 박막의 연구 및 산업 적용 동향)

  • Lee, Gyeong-Hwang;Park, Jong-Won;Yang, Ji-Hun;Jeong, Jae-In
    • Proceedings of the Korean Institute of Surface Engineering Conference
    • /
    • 2009.05a
    • /
    • pp.111-112
    • /
    • 2009
  • Diamond-like carbon (DLC) is a convenient term for the various forms of amorphous carbon (a-C), tetrahedral amorphous carbon (ta-C) and their hydrogenated counterparts (a-C:H and ta-C:H). The a-C films with disordered graphitic ordering, such as soot, chars, glassy carbon and evaporated a-C, lie in the lower left-hand corner of the triangle. If the fraction of sp3 bonding reaches a high degree, such an a-C is denoted tetrahedral amorphous carbon (ta-C) to distinguish it from sp2 a-C [2]. Two hydrocarbon polymers, polyethylene (CH2)n and polyacetylene (CH)n, define the limits of the triangle in the right-hand corner, beyond which interconnecting C-C networks do not form and only straight-chain molecules occur. The DLC films, i.e. a-C, ta-C, a-C:H and ta-C:H, have some extreme properties similar to diamond, such as hardness, elastic modulus and chemical inertness, which give them great advantages in many applications. One of the most important applications of carbon-based films is the coating for magnetic hard-disk recording. The second successful application is wear-protective and antireflective films for IR windows. The third is wear protection of bearings and sliding friction parts, and the fourth is precision gauges for the automotive industry. Recently, an exciting ongoing study [1] has attempted to deposit a carbon-based protective film on engine parts (e.g. engine cylinders and pistons), taking into account not only low friction and wear but also self-lubricating properties; a reduction of oil consumption is expected. As an additional application field, carbon-based films are currently being studied extensively as candidates for biocompatible coatings on biomedical implants. The films consist of carbon, hydrogen and nitrogen, which are biologically harmless and are also main elements of the human body. Some in vitro and limited in vivo studies on the biological effects of carbon-based films have been reported [2-5]. Carbon-based films thus have great potential in many fields, but a few technological issues still need to be studied to improve their applicability. Aisenberg and Chabot [3] first prepared an amorphous carbon film on substrates kept at room temperature using a beam of carbon ions produced with an argon plasma, and Spencer et al. [4] subsequently developed this field. Many deposition techniques have been developed to increase the fraction of sp3 bonding in DLC films. The a-C films have been prepared by a variety of deposition methods, such as ion plating, DC or RF sputtering, RF or DC plasma-enhanced chemical vapor deposition (PECVD), electron cyclotron resonance chemical vapor deposition (ECR-CVD), ion implantation, ablation, pulsed laser deposition and cathodic arc deposition, from a variety of carbon targets or gaseous source materials [5]. Sputtering is the most common deposition method for a-C films. Films deposited by plasma methods such as PECVD [6] fall in the interior of the triangle. The application fields of DLC films were surveyed from the literature: many papers target tribology because of the films' low friction and wear resistance. Figure 1 shows the percentage of DLC research interest by application field; tribology is the largest portion at 57%, followed by the biomedical field at 14%. Nowadays the biomedical field is attracting attention in many countries, and the number of related research papers has increased significantly. DLC films were already applied in many industries in 2005, as shown in Figure 2; the mold and machinery industries account for the largest share, over 50%, and the automobile industry is applying the coatings to more and more parts, so that in the near future it is expected to become a big market for DLC coating. (Figure 1: Research interests of carbon-based films. Figure 2: Demand ratio of DLC coating by industry in 2005.) In this presentation, I will introduce trends in carbon-based coating research and applications.


Strategies about Optimal Measurement Matrix of Environment Factors Inside Plastic Greenhouse (플라스틱온실 내부 환경 인자 다중센서 설치 위치 최적화 전략)

  • Lee, JungKyu;Kang, DongHyun;Oh, SangHoon;Lee, DongHoon
    • Journal of Bio-Environment Control
    • /
    • v.29 no.2
    • /
    • pp.161-170
    • /
    • 2020
  • There are systematic spatial variations in environmental properties inside plastic greenhouses, which account for 99.2% of domestic agricultural facilities, because they respond sensitively to external conditions. In order to construct three-dimensional distributions of temperature, relative humidity, CO2 and illuminance, a 3 × 3 × 5 measurement matrix (in the width, height and length directions, respectively) dividing the indoor space of the greenhouse was designed and tested at an experimental site. Linear regression analysis was conducted to evaluate the optimal estimation method with respect to horizontal and vertical variations. Although a single measurement point can be feasible for assessing indoor temperature and relative humidity, a multiple measurement matrix is required to improve spatial precision during certain periods such as around sunrise and sunset. For CO2, the multiple measurement matrix did not improve spatial predictability over the whole experimental period. For illuminance, prediction performance deteriorated after sunrise owing to systematic interference such as the indoor structure. Thus, a multiple-sensing methodology was proposed along the length direction at a height above the growing bed, which can compensate for estimation error in the spatial domain. An appropriate measurement matrix can be constructed by considering the transition of stability in indoor environmental properties caused by external variations. As a result, the optimal measurement matrix should be carefully designed considering the flexibility of construction with respect to the type of property, the indoor structure, the target crop and the growth period. For instance, partial cooling and heating systems that save supplemental energy consumption could be successfully implemented by deploying such a multiple measurement matrix.
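A minimal sketch (Python/scikit-learn) of the kind of regression comparison described above: how well a single sensor, versus a subset of the 3 × 3 × 5 grid, predicts the greenhouse-wide spatial mean via linear regression. The readings are randomly generated placeholders, not the paper's measurements, and the sensor indices are illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n_times = 288                                      # e.g. 5-minute samples over one day
n_points = 3 * 3 * 5                               # width x height x length grid
daily_cycle = 20 + 5 * np.sin(np.linspace(0, 2 * np.pi, n_times))
readings = daily_cycle[:, None] + rng.normal(0, 0.8, (n_times, n_points))
target = readings.mean(axis=1)                     # greenhouse-wide spatial mean

def r2_of_subset(cols):
    """R^2 of a linear-regression estimate of the spatial mean from selected sensors."""
    X = readings[:, cols]
    return LinearRegression().fit(X, target).score(X, target)

print("single central sensor R^2:", round(r2_of_subset([22]), 3))
print("five sensors along the length R^2:", round(r2_of_subset([2, 11, 22, 33, 42]), 3))
```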

Scaling up of single fracture using a spectral analysis and computation of its permeability coefficient (스펙트럼 분석을 응용한 단일 균열 규모확장과 투수계수 산정)

  • 채병곤
    • The Journal of Engineering Geology
    • /
    • v.14 no.1
    • /
    • pp.29-46
    • /
    • 2004
  • It is important to identify the geometry of fractures that act as conduits for fluid flow in order to characterize groundwater flow in fractured rock, because fracture geometry controls the hydraulic conductivity and the stream lines in a rock mass. However, it is difficult to acquire complete geometric data on fractures at the field scale because outcrops are discontinuously distributed and subsurface data cannot be collected continuously. A method is therefore needed to describe the overall geometry of a target fracture. This study suggests a new approach, based on the Fourier transform, to characterize the whole geometry of a target fracture. After sampling specimens along a target fracture from borehole cores, effective frequencies among the roughness components were selected by applying the Fourier transform to each specimen, and the selected spectra were then averaged frequency by frequency. Because the averaged spectrum includes the frequency profiles of every specimen, it represents the roughness components of the target fracture. The inverse Fourier transform was then applied, after low-pass filtering, to reconstruct an averaged whole-roughness profile. The reconstructed profile shows the representative roughness of the target subsurface fracture while retaining the geometrical characteristics of each specimen; in other words, it is the overall roughness feature obtained by scaling up the fracture. In order to examine the permeability along the target fracture, fracture models were constructed from the reconstructed roughness profile, and the permeability coefficient was computed by homogenization analysis, which can calculate accurate permeability coefficients with full consideration of fracture geometry. The results range between 10⁻⁴ and 10⁻³ cm/sec, which are reasonable values of the permeability coefficient for a large fracture. This approach can be applied effectively to the analysis of permeability along a large fracture as well as to the identification of the whole feature of a fracture at the field scale.
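A minimal sketch (Python/NumPy) of the spectral workflow described above: Fourier transform of each specimen's roughness profile, averaging the spectra frequency by frequency, low-pass filtering, and an inverse transform to reconstruct a representative roughness profile. The profiles, specimen count and cutoff frequency are all illustrative assumptions, not the paper's data.

```python
import numpy as np

n = 256                                            # points per roughness profile
x = np.linspace(0, 1, n, endpoint=False)
rng = np.random.default_rng(1)

# synthetic roughness profiles standing in for specimens sampled from borehole cores
specimens = [np.sin(2 * np.pi * 3 * x) + 0.3 * np.sin(2 * np.pi * 11 * x)
             + 0.1 * rng.normal(size=n) for _ in range(5)]

spectra = np.array([np.fft.rfft(p) for p in specimens])
avg_spectrum = spectra.mean(axis=0)                # average each frequency component

cutoff = 20                                        # keep only low-frequency (effective) components
avg_spectrum[cutoff:] = 0                          # low-pass filtering

representative = np.fft.irfft(avg_spectrum, n)     # scaled-up representative roughness profile
print(representative[:5])
```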

A re-appraisal of scoring items in state assessment of NATM tunnel considering influencing factors causing longitudinal cracks (종방향균열 영향인자 분석을 통한 NATM터널 정밀안전진단 상태평가 항목의 재검토)

  • Choo, Jin-Ho;Yoo, Chang-Kyoon;Oh, Young-Chul;Lee, In-Mo
    • Journal of Korean Tunnelling and Underground Space Association
    • /
    • v.21 no.4
    • /
    • pp.479-499
    • /
    • 2019
  • State assessment of an operational tunnel is usually done by performing visual inspection and durability tests following the detailed guideline for safety inspection (SI) and/or precision inspection for safety and diagnosis (PISD). In this study, 12 NATM tunnels that have been operational for more than 10 years were inspected to identify the causes of longitudinal cracks, with the aim of modifying the scoring items in the state assessment of NATM tunnels related to longitudinal cracks and to the thickness of the concrete lining. All investigated tunnels were classified into four groups depending on the shape and usage of each tunnel. The causes of longitudinal crack occurrence were analyzed by investigating the correlations between the longitudinal cracks and the following four factors: the patterns of ground excavation; the construction state of the primary support system; the material properties of the concrete lining; and the lining thickness obtained by Ground Penetrating Radar (GPR) tests. It was found that the factors causing longitudinal cracks in the lining were closely related to the construction condition of the primary support system, i.e. shotcrete, rockbolts and steel ribs, whereas crack occurrence was not much affected by the excavation patterns. As for the properties of the concrete lining materials, occurrence of longitudinal cracks was mostly affected by three items: the w/c ratio, the cement content and the strength of the lining. When the thickness of the concrete lining estimated by GPR tests is taken into account in the state assessment, it was concluded that the index score should increase by an average of 0.03 (ranging from 0.01 up to 0.071); a more realistic method of state assessment should therefore be proposed in which the increased index score caused by insufficient lining thickness is taken into account.

Anomaly Detection for User Action with Generative Adversarial Networks (적대적 생성 모델을 활용한 사용자 행위 이상 탐지 방법)

  • Choi, Nam woong;Kim, Wooju
    • Journal of Intelligence and Information Systems
    • /
    • v.25 no.3
    • /
    • pp.43-62
    • /
    • 2019
  • At one time, the anomaly detection field was dominated by methods that determined whether an abnormality existed based on statistics derived from the data. This methodology worked because data dimensionality was low in the past, so classical statistical methods were effective. However, as data characteristics have become more complex in the era of big data, it has become harder to accurately analyze and predict the data generated throughout industry in the conventional way. Supervised learning algorithms based on SVMs and decision trees were therefore used. However, a supervised model can predict test data accurately only when the class distribution is balanced, whereas most data generated in industry has imbalanced classes, so the predictions of a supervised model are not always valid. To overcome these drawbacks, many studies now use unsupervised models that are not influenced by the class distribution, such as autoencoders or generative adversarial networks. In this paper, we propose a method to detect anomalies using generative adversarial networks. AnoGAN, introduced by Schlegl et al. (2017), is a model based on a convolutional neural network that performs anomaly detection on medical images. By contrast, there are far fewer studies on anomaly detection for sequence data with generative adversarial networks than for image data. Li et al. (2018) proposed a model using LSTM, a type of recurrent neural network, to classify abnormalities in numerical sequence data, but it was not applied to categorical sequence data and did not use the feature matching method of Salimans et al. (2016). This suggests that much remains to be explored in anomaly classification of sequence data with generative adversarial networks. To learn the sequence data, the generative adversarial network is built from LSTMs: the generator uses a 2-stacked LSTM with 32-dimensional and 64-dimensional hidden layers, and the discriminator uses an LSTM with a 64-dimensional hidden layer. In existing work on anomaly detection for sequence data, the entropy of the probability assigned to the actual data is used to derive the anomaly score; in this paper, as mentioned above, anomaly scores are derived with the feature matching technique. In addition, the latent-variable optimization process was designed with an LSTM to improve model performance. The modified generative adversarial model was more accurate than the autoencoder in every experiment in terms of precision and was approximately 7% higher in accuracy. In terms of robustness, the generative adversarial network also outperformed the autoencoder: because it learns the data distribution from the real categorical sequence data, it is not swayed by a single normal pattern, whereas the autoencoder is. In the robustness test the accuracy of the autoencoder was 92% and that of the adversarial network was 96%; in terms of sensitivity, the autoencoder reached 40% and the adversarial network 51%. Experiments were also conducted to show how much performance changes with different optimization structures for the latent variables; sensitivity improved by about 1%. These results offer a new perspective on optimizing latent variables, an aspect that had previously received relatively little attention.
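A minimal sketch (Python/PyTorch) of the architecture and feature-matching anomaly score described above. The LSTM sizes follow the abstract (generator: 2-stacked LSTM with 32- and 64-dimensional hidden layers; discriminator: 64-dimensional LSTM); the latent optimization below uses plain gradient descent for illustration, whereas the paper designs it with an LSTM, and all other dimensions are assumptions.

```python
import torch
import torch.nn as nn

class Generator(nn.Module):
    def __init__(self, latent_dim=16, out_dim=8):
        super().__init__()
        self.lstm1 = nn.LSTM(latent_dim, 32, batch_first=True)
        self.lstm2 = nn.LSTM(32, 64, batch_first=True)
        self.out = nn.Linear(64, out_dim)

    def forward(self, z):                       # z: (batch, seq_len, latent_dim)
        h, _ = self.lstm1(z)
        h, _ = self.lstm2(h)
        return torch.softmax(self.out(h), -1)   # categorical sequence as per-step probabilities

class Discriminator(nn.Module):
    def __init__(self, in_dim=8):
        super().__init__()
        self.lstm = nn.LSTM(in_dim, 64, batch_first=True)
        self.head = nn.Linear(64, 1)

    def features(self, x):                      # intermediate features used for matching
        h, _ = self.lstm(x)
        return h[:, -1]                         # last hidden state: (batch, 64)

    def forward(self, x):
        return torch.sigmoid(self.head(self.features(x)))

def anomaly_score(x, G, D, latent_dim=16, steps=100, lr=0.01):
    """Optimize a latent sequence z so G(z) matches x in feature space, then score the residual."""
    z = torch.randn(x.size(0), x.size(1), latent_dim, requires_grad=True)
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = torch.norm(D.features(G(z)) - D.features(x), dim=1).mean()
        loss.backward()
        opt.step()
    with torch.no_grad():
        return torch.norm(D.features(G(z)) - D.features(x), dim=1)   # per-sample anomaly score
```

Used on a (trained) pair of networks, `anomaly_score(x, Generator(), Discriminator())` takes a one-hot encoded batch x of shape (batch, seq_len, 8) and returns one score per sequence; higher scores indicate sequences the generator cannot reproduce, i.e. likely anomalies.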

Development and Performance Evaluation of Multi-sensor Module for Use in Disaster Sites of Mobile Robot (조사로봇의 재난현장 활용을 위한 다중센서모듈 개발 및 성능평가에 관한 연구)

  • Jung, Yonghan;Hong, Junwooh;Han, Soohee;Shin, Dongyoon;Lim, Eontaek;Kim, Seongsam
    • Korean Journal of Remote Sensing
    • /
    • v.38 no.6_3
    • /
    • pp.1827-1836
    • /
    • 2022
  • Disasters occur unexpectedly and are difficult to predict; in addition, their scale and damage are increasing compared to the past, and one disaster can sometimes develop into another. Among the four stages of disaster management, search and rescue are carried out in the response stage when an emergency occurs, so personnel such as firefighters deployed to the scene face considerable risk. In this respect, robots are a technology with high potential to reduce damage to human life and property during the initial response at a disaster site. In addition, Light Detection And Ranging (LiDAR) can acquire 3D information over a relatively wide range using a laser, and its high accuracy and precision make it a very useful sensor given the characteristics of a disaster site. Therefore, in this study, development and experiments were conducted so that a robot could perform real-time monitoring at a disaster site. A multi-sensor module was developed by combining a LiDAR, an Inertial Measurement Unit (IMU) sensor and a computing board. This module was mounted on the robot, and a customized Simultaneous Localization and Mapping (SLAM) algorithm was developed. A method for stably mounting the multi-sensor module on the robot to maintain optimal accuracy at disaster sites was also studied. To check the performance of the module, SLAM was tested inside a disaster building, and comparisons with various SLAM algorithms and distance measurements were performed. As a result, PackSLAM, developed in this study, showed lower error than the other algorithms, demonstrating its potential for application at disaster sites. In the future, various experiments will be conducted in a rough-terrain environment with many obstacles to further enhance usability at disaster sites.

A study on improving self-inference performance through iterative retraining of false positives of deep-learning object detection in tunnels (터널 내 딥러닝 객체인식 오탐지 데이터의 반복 재학습을 통한 자가 추론 성능 향상 방법에 관한 연구)

  • Kyu Beom Lee;Hyu-Soung Shin
    • Journal of Korean Tunnelling and Underground Space Association
    • /
    • v.26 no.2
    • /
    • pp.129-152
    • /
    • 2024
  • When deep learning object detection is applied to CCTV in tunnels, a large number of false positive detections occur because of the poor environmental conditions in tunnels, such as low illumination and severe perspective effects. This problem directly affects the reliability of tunnel CCTV-based accident detection systems, which depend on object detection performance. Hence, it is necessary to reduce the number of false positive detections while also increasing the number of true positive detections. Based on a deep learning object detection model, this paper proposes a false positive training method that not only reduces false positives but also improves true positive detection performance through retraining on the false positive data. The method follows these steps: initial training on a training dataset; inference on a validation dataset; correction of the false positive data and composition of a dataset; and addition to the training dataset followed by retraining. Experiments were conducted to verify the performance of this method. First, the optimal hyperparameters of the deep learning object detection model were determined through previous experiments. Then the training image format was determined, and experiments were conducted sequentially to check the long-term performance improvement obtained by repeatedly retraining on false detection datasets. In the first experiment it was found that keeping the background in the inferred image was more advantageous for object detection performance than removing everything except the object. In the second experiment it was found that retraining on false positives accumulated across retraining levels was more advantageous for continuous improvement of detection performance than retraining independently on each level's false positives. After retraining on the false positive data with the method determined in the two experiments, the car object class showed excellent inference performance, with an AP value of 0.95 or higher after the first retraining, and by the fifth retraining the inference performance had improved by about 1.06 times compared to the initial inference. The person object class continued to improve as retraining progressed, and by the 18th retraining its inference performance had improved by more than 2.3 times compared to the initial inference.
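A minimal sketch (Python) of the accumulating retraining loop described above. The train/inference/labeling helpers are hypothetical placeholders standing in for the actual detector training and tunnel-CCTV inference; only the accumulation of corrected false positives across retraining levels mirrors the method in the abstract.

```python
def train(dataset):
    """Placeholder: train an object detector and return a 'model' handle."""
    return {"trained_on": len(dataset)}

def infer_false_positives(model, validation_set):
    """Placeholder: run inference on validation images and collect false positive detections."""
    return [f"fp_{model['trained_on']}_{i}" for i in range(2)]

def label_as_background(false_positives):
    """Placeholder: correct false positive detections, e.g. relabel them as background."""
    return [(fp, "background") for fp in false_positives]

train_set = [(f"img_{i}", "car") for i in range(10)]     # initial labelled training images
validation_set = [f"val_{i}" for i in range(5)]

model = train(train_set)
for level in range(1, 6):                                # repeated retraining levels
    fps = infer_false_positives(model, validation_set)
    # key point from the abstract: accumulate corrected false positives across levels,
    # rather than retraining independently on each level's false positives
    train_set += label_as_background(fps)
    model = train(train_set)
    print(f"retraining level {level}: training set now has {len(train_set)} samples")
```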

Development of Gated Myocardial SPECT Analysis Software and Evaluation of Left Ventricular Contraction Function (게이트 심근 SPECT 분석 소프트웨어의 개발과 좌심실 수축 기능 평가)

  • Lee, Byeong-Il;Lee, Dong-Soo;Lee, Jae-Sung;Chung, June-Key;Lee, Myung-Chul;Choi, Heung-Kook
    • The Korean Journal of Nuclear Medicine
    • /
    • v.37 no.2
    • /
    • pp.73-82
    • /
    • 2003
  • Objectives: A new software package (Cardiac SPECT Analyzer: CSA) was developed for quantification of volumes and ejection fraction on gated myocardial SPECT. Volumes and ejection fraction from CSA were validated by comparison with those quantified by the Quantitative Gated SPECT (QGS) software. Materials and Methods: Gated myocardial SPECT was performed in 40 patients with ejection fractions from 15% to 85%. In 26 patients, gated myocardial SPECT was acquired again with the patients in situ. A cylinder model was used to eliminate noise semi-automatically, and profile data were extracted using Gaussian fitting after smoothing. The boundary points of the endo- and epicardium were found using an iterative learning algorithm. End-diastolic (EDV) and end-systolic (ESV) volumes and the ejection fraction (EF) were calculated. These values were compared with those calculated by QGS; the same gated SPECT data were repeatedly quantified by CSA, and the variation of the values across sequential measurements of the same patients on the repeated acquisitions was assessed. Results: For the 40 patients, EF, EDV and ESV by CSA were correlated with those by QGS with correlation coefficients of 0.97, 0.92 and 0.96, respectively. Two standard deviations (2SD) of EF on the Bland-Altman plot were 10.1%. Repeated measurements of EF, EDV and ESV by CSA were correlated with each other with coefficients of 0.96, 0.99 and 0.99, respectively. On the repeated acquisition, reproducibility was also excellent, with correlation coefficients of 0.89, 0.97 and 0.98, coefficients of variation of 8.2%, 5.4 mL and 8.5 mL, and 2SD of 10.6%, 21.2 mL and 16.4 mL on the Bland-Altman plot for EF, EDV and ESV, respectively. Conclusion: We developed the CSA software for quantification of volumes and ejection fraction on gated myocardial SPECT. The volumes and ejection fraction quantified using this software were found to be valid in terms of correctness and precision.
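A minimal sketch (Python/NumPy) of the quantities reported above: the ejection fraction computed from end-diastolic and end-systolic volumes, and the bias and 2SD limits on a Bland-Altman comparison between two methods (e.g. CSA vs. QGS). The volumes are made-up illustrative numbers, not patient data.

```python
import numpy as np

edv = np.array([120.0, 95.0, 150.0, 80.0])       # end-diastolic volumes (mL), method A
esv = np.array([ 55.0, 40.0,  90.0, 30.0])       # end-systolic volumes (mL), method A
ef_a = 100 * (edv - esv) / edv                   # ejection fraction (%)

ef_b = ef_a + np.array([2.0, -1.5, 3.0, -2.5])   # same patients, hypothetical second method

diff = ef_a - ef_b
print("EF (%):", np.round(ef_a, 1))
print("Bland-Altman bias:", round(diff.mean(), 2), "| 2SD:", round(2 * diff.std(ddof=1), 2))
```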