• Title/Summary/Keyword: Precision Point

Search Results: 1,340

A Study on Field Compost Detection by Using Unmanned Aerial Vehicle Image and Semantic Segmentation Technique based Deep Learning (무인항공기 영상과 딥러닝 기반의 의미론적 분할 기법을 활용한 야적퇴비 탐지 연구)

  • Kim, Na-Kyeong;Park, Mi-So;Jeong, Min-Ji;Hwang, Do-Hyun;Yoon, Hong-Joo
    • Korean Journal of Remote Sensing, v.37 no.3, pp.367-378, 2021
  • Field compost is a representative non-point source of pollution from livestock farming. If field compost flows into the water system during rainfall, nutrients such as phosphorus and nitrogen contained in it can adversely affect the water quality of the river. In this paper, we propose a method for detecting field compost using unmanned aerial vehicle images and deep learning-based semantic segmentation. Based on 39 orthoimages acquired in the study area, about 30,000 training samples were obtained through data augmentation. The accuracy was then evaluated by applying a semantic segmentation algorithm developed based on U-Net together with OpenCV filtering techniques. As a result of the accuracy evaluation, the pixel accuracy was 99.97%, the precision was 83.80%, the recall was 60.95%, and the F1-Score was 70.57%. The low recall compared to precision is due to the underestimation of compost pixels when compost occupies only a small proportion of pixels at the edges of the image. In the future, accuracy could be improved by combining additional data sets containing spectral bands beyond RGB.
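For reference, the F1-Score reported above is the harmonic mean of precision and recall; a quick sketch in plain Python (values taken from the abstract) reproduces the 70.57% figure:

```python
def f1_score(precision: float, recall: float) -> float:
    """Harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

# Values reported in the abstract, as fractions.
precision, recall = 0.8380, 0.6095
f1 = f1_score(precision, recall)
print(f"F1 = {f1:.2%}")  # → F1 = 70.57%
```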

Test Set Construction for Quality Evaluation of NAK Portal's Search Service and the Status Analysis (국가기록포털 검색서비스 품질 점검을 위한 평가셋 구축 및 현황 분석)

  • Jeong Ho, Na;Hyeon-Gi, So;Gyung Rok, Yeom;Jung-Ok, Lee;Hyo-Jung, Oh
    • Journal of Korean Society of Archives and Records Management, v.22 no.4, pp.25-43, 2022
  • The ultimate purpose of records management is preservation and utilization. However, the National Archives of Korea (NAK)'s portal suffers from an aging search system and dualized search tools. As a result, users' search satisfaction is low, and demand for improvement is increasing. This study aimed to evaluate the NAK's search quality as a preliminary study for advancing the NAK search system. To this end, we analyzed the current status of CAMS and the NAK's portal. We then established test sets and evaluated the quality of the NAK's portal from the user's point of view. Evaluation results were analyzed using Precision, Recall, F-score, and MRR. The analysis showed that overall search performance was low; in particular, the "advanced subject search" showed low Precision, Recall, and MRR, so improvement is urgently needed. The test sets established for this study are expected to serve as a basis for objectively measuring the improvement in search performance after the NAK search system is advanced.
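Of the four measures mentioned, MRR is perhaps the least familiar: it averages the reciprocal rank of the first relevant result over all test queries. A minimal sketch in plain Python (the example ranks are hypothetical, not from the study):

```python
def mean_reciprocal_rank(first_relevant_ranks):
    """MRR: average of 1/rank of the first relevant result per query.
    A rank of None means no relevant result was retrieved (score 0)."""
    scores = [0.0 if r is None else 1.0 / r for r in first_relevant_ranks]
    return sum(scores) / len(scores)

# Hypothetical ranks of the first relevant hit for four test queries.
print(mean_reciprocal_rank([1, 3, None, 2]))  # (1 + 1/3 + 0 + 1/2) / 4 ≈ 0.458
```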

Instrumentation Management of Differential Settlement of the Deep Soft Ground with Dredged Clay Reclaimed in the Upper (대심도 준설 매립지반에서의 층별침하 계측관리에 관한 사례 연구)

  • Tae-Hyung Kim;Seung-Chan Kang;Ji-Gun Chang;Soung-Hun Heo
    • Journal of the Korean Geosynthetics Society, v.22 no.1, pp.87-96, 2023
  • There is a large difference between the surface settlement and the differential settlement measured at the Busan New Port, where a dredged and reclaimed clay layer overlies an originally thick in-situ clay deposit. To find the cause of this and a solution, we reviewed the differential settlement gauges used for soft ground improvement: their characteristics, installation methods, measurement frequency, measurement data management, and data analysis for each type. In deep soft-ground improvement work where large deformation occurs, the screw-type differential settlement gauge undergoes less bending deformation than other types of instruments, so there is less risk of loss, and data reliability is relatively high because the instruments are installed by drilling into each stratum. Since a greater amount of high-precision settlement data yields higher precision in settlement analysis, it is necessary to manage measurements with stricter criteria than the frequency suggested in the standard specification. For data management of the differential settlement gauge, it is desirable to plot the settlement and embankment height of the relevant section over time, including surface settlement, differential settlement, and the settlement of the pore water pressure gauge at each point. In the case of multi-layered ground with different compression characteristics, it is more appropriate to perform settlement analysis by calculating the consolidation characteristics of each stratum using the differential settlement data.

Assessment of Possibility of Adopting the Error Tolerance of Geometric Correction on Producing 1/5,000 Digital Topographic Map for Unaccessible Area Using the PLEIADES Images and TerraSAR Control Point (PLEIADES 영상과 TerraSAR 기준점을 활용한 비접근지역의 1/5,000 수치지형도 제작을 위한 기하보정의 허용오차 만족 가능성 평가)

  • Jin Kyu, Shin;Young Jin, Lee;Gyung Jong, Kim;Jun Hyuk, Lee
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography, v.33 no.2, pp.83-94, 2015
  • Recently, the need for spatial data on inaccessible areas has arisen in setting up various plans and policies to prepare for unification and for cooperative projects between South and North Korea. Therefore, this paper evaluates whether geometric correction using PLEIADES images and TerraSAR GCPs (Ground Control Points) can satisfy the error tolerance for 1/5,000 digital topographic mapping. The geometric correction was performed by varying the number and placement of GCPs obtained by GPS (Global Positioning System) surveying, and an optimal placement of 5 GCPs was selected considering geometric stability and the steady rate. The positional accuracy was then evaluated using TerraSAR GCPs selected by the optimal placement. The RMSE at the control points was X=±0.64m, Y=±0.46m, Z=±0.28m. The geometric correction of the PLEIADES images yielded an RMSE at the control points of X=±0.34m, Y=±0.27m, Z=±0.11m, and an RMSE at the check points of X=±0.50m, Y=±0.30m, Z=±0.66m. Through this study, we believe that if spatial data are generated by integrating PLEIADES images with optimally placed TerraSAR GCPs, it will be possible to obtain high-precision spatial data that satisfies the error bounds of the 1/5,000 digital topographic map regulations.
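For reference, the RMSE values quoted above are root-mean-square coordinate residuals at the control or check points. A minimal sketch in plain Python (the residuals below are hypothetical, not the paper's data):

```python
import math

def rmse(residuals):
    """Root-mean-square error of a list of coordinate residuals (metres)."""
    return math.sqrt(sum(e * e for e in residuals) / len(residuals))

# Hypothetical X-axis residuals (m) at four check points.
residuals_x = [0.45, -0.52, 0.61, -0.41]
print(f"RMSE_X = ±{rmse(residuals_x):.2f} m")  # → RMSE_X = ±0.50 m
```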

A Study on the Geometric Correction Accuracy Evaluation of Satellite Images Using Daum Map API (Daum Map API를 이용한 위성영상의 기하보정 정확도 평가)

  • Lee, Seong-Geun;Lee, Ho-Jin;Kim, Tae-Geun;Cho, Gi-Sung
    • Journal of Cadastre & Land InformatiX, v.46 no.2, pp.183-196, 2016
  • Ground control points are needed for precision geometric correction of satellite images, and the coordinates of a high-quality ground control point can be obtained from GPS measurement. However, considering that GPS measurement requires an excessive amount of time and effort, an alternative is needed to replace it. Therefore, we examined the possibility of replacing GPS measurement with coordinates available from online maps for acquiring ground control point coordinates. To this end, we examined the error between ground control point coordinates obtained through GPS measurement and those obtained through the Daum Map API, and then compared the accuracies of three types of coordinate transformation equations used for geometric correction of satellite images. In addition, using the coordinate transformation equation with the highest accuracy, we conducted geometric correction with the ground control point coordinates obtained through GPS measurement and with those acquired through the Daum Map API, and compared their accuracy to evaluate their effectiveness. According to the results, the 3rd-order polynomial transformation equation showed the highest accuracy among the three types of coordinate transformation equations. In the case of mid-resolution satellite images such as those taken by Landsat-8, it appears possible to use geometrically corrected images produced from ground control point coordinates acquired through the Daum Map API.
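The 3rd-order polynomial transformation referred to above maps image coordinates to map coordinates with 10 coefficient terms per output axis, estimated by least squares from the GCPs. A minimal sketch with NumPy (function names are ours, not from the paper):

```python
import numpy as np

def poly3_design(x, y):
    """Design matrix of the 10 third-order polynomial terms in x and y."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    return np.column_stack([np.ones_like(x), x, y, x*x, x*y, y*y,
                            x**3, x*x*y, x*y*y, y**3])

def fit_poly3(src_x, src_y, dst):
    """Least-squares coefficients mapping image (x, y) to one map coordinate.
    Requires at least 10 ground control points."""
    A = poly3_design(src_x, src_y)
    coeffs, *_ = np.linalg.lstsq(A, np.asarray(dst, float), rcond=None)
    return coeffs
```

One such fit is run for each output coordinate (easting and northing); with more than 10 GCPs, the residuals of the fit at the control and check points give the RMSE used for accuracy assessment.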

Adaptable Center Detection of a Laser Line with a Normalization Approach using Hessian-matrix Eigenvalues

  • Xu, Guan;Sun, Lina;Li, Xiaotao;Su, Jian;Hao, Zhaobing;Lu, Xue
    • Journal of the Optical Society of Korea, v.18 no.4, pp.317-329, 2014
  • In vision measurement systems based on structured light, the key to detection precision is accurately determining the central position of the projected laser line in the image. The purpose of this research is to extract laser line centers based on a decision function generated to distinguish the real centers from candidate points with a high recognition rate. First, preprocessing of the image using a difference-image method is conducted to achieve image segmentation of the laser line. Second, feature points at the integer-pixel level are selected as the initial laser line centers using the eigenvalues of the Hessian matrix. Third, since the light intensity of a laser line obeys a Gaussian distribution in the transverse section and a constant distribution in the longitudinal section, a normalized model of the Hessian-matrix eigenvalues for the candidate centers of the laser line is presented to reasonably balance the two eigenvalues, which indicate the variation tendencies of the second-order partial derivatives of the Gaussian function and the constant function, respectively. The proposed model integrates a Gaussian recognition function and a sinusoidal recognition function. The Gaussian recognition function estimates the characteristic that one eigenvalue approaches zero, and enhances the sensitivity of the decision function to that characteristic, which corresponds to the longitudinal direction of the laser line. The sinusoidal recognition function evaluates the feature that the other eigenvalue is negative with a large absolute value, making the decision function more sensitive to that feature, which is related to the transverse direction of the laser line. The proposed decision function thus synthetically assigns higher values to the real centers, considering the properties in both the longitudinal and transverse directions of the laser line.
Moreover, this method provides a decision value from 0 to 1 for arbitrary candidate centers, which yields a normalized measure for different laser lines in different images. The normalized results of pixels close to 1 are determined to be the real centers by progressive scanning of the image columns. Finally, the zero point of a second-order Taylor expansion in the eigenvector's direction is employed to refine further the extraction results of the central points at the subpixel level. The experimental results show that the method based on this normalization model accurately extracts the coordinates of laser line centers and obtains a higher recognition rate in two group experiments.
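The Hessian-eigenvalue selection in the second step can be sketched as follows. This is a simplified illustration using plain finite differences; a real implementation, including the paper's, would typically convolve with Gaussian derivatives at a chosen scale:

```python
import numpy as np

def hessian_eigenvalues(img):
    """Per-pixel eigenvalues of the 2x2 image Hessian, via finite differences.
    On a bright laser line, one eigenvalue is strongly negative across the
    line while the other stays near zero along it."""
    gy, gx = np.gradient(img.astype(float))   # first derivatives (axis 0, axis 1)
    gyy, gyx = np.gradient(gy)                # second derivatives of gy
    gxy, gxx = np.gradient(gx)                # second derivatives of gx
    # Closed-form eigenvalues of the symmetric matrix [[gxx, gxy], [gxy, gyy]].
    tr = gxx + gyy
    det = gxx * gyy - gxy * gxy
    disc = np.sqrt(np.maximum(tr * tr / 4.0 - det, 0.0))
    return tr / 2.0 - disc, tr / 2.0 + disc   # (smaller, larger)
```

Candidate centers are pixels where the smaller eigenvalue is strongly negative while the larger one is near zero; the paper's normalized decision function then scores such candidates between 0 and 1.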

Investigation of the Noise Reduction in the Hollow Cylinder Structure (중공 원통형 구조물의 전달소음 감소 방안 연구)

  • Lee, Sang-Won;Lee, Jong-Kil;Jo, Chi-Yong
    • 대한공업교육학회지, v.36 no.1, pp.115-130, 2011
  • When a hollow cylindrical structure moves underwater at high speed, structural noise can propagate from the end of the structure to the front side. This noise can reduce the sensitivity of the conformal array installed on the surface of the cylinder. To reduce this noise propagation, it is suggested to install two noise-reduction rings around the cylinder, which is 500mm in diameter and 840mm in length. The two noise-reduction rings are placed 120mm and 240mm from the end of the structure. The two rings reduced the maximum stress by 10.1%. When outside noise from 4kHz to 6kHz was applied to the structure, a 20dB noise reduction was calculated using a 6th-order polynomial equation. When outside noise at 200Hz, 500Hz, and 900Hz was applied to the structure, the point of maximum sound pressure level moved to the end of the structure. Most conformal sensors are fabricated at the front side of the structure. Based on the simulation results, the proposed two rings can effectively reduce noise propagation from the tail of the structure.
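The 20dB figure above was derived by fitting a 6th-order polynomial to computed noise-reduction data over the 4-6kHz band. As an illustration only (the sample values below are hypothetical, not the paper's data), such a fit can be sketched with NumPy:

```python
import numpy as np

# Hypothetical attenuation samples (dB) over 4-6 kHz; the paper's actual
# measurements are not reproduced here.
freq_khz = np.linspace(4.0, 6.0, 9)
atten_db = np.array([12.0, 14.5, 17.0, 19.0, 20.5, 20.0, 18.5, 16.0, 13.0])

# Centering the frequency axis keeps the 6th-order fit well conditioned.
coeffs = np.polyfit(freq_khz - 5.0, atten_db, 6)
fit = np.poly1d(coeffs)
print(f"fitted attenuation at 5 kHz: {fit(0.0):.1f} dB")
```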

Atmospheric Turbulence Simulator for Adaptive Optics Evaluation on an Optical Test Bench

  • Lee, Jun Ho;Shin, Sunmy;Park, Gyu Nam;Rhee, Hyug-Gyo;Yang, Ho-Soon
    • Current Optics and Photonics, v.1 no.2, pp.107-112, 2017
  • An adaptive optics system can be simulated or analyzed to predict its closed-loop performance. However, this type of prediction, based on various assumptions, can occasionally produce outcomes far from actual experience. Thus, every adaptive optics system should be tested in a closed loop on an optical test bench before its application to a telescope. In the closed-loop test bench, we need an atmospheric simulator that simulates atmospheric disturbances, mostly in phase, in terms of spatial and temporal behavior. We report the development of an atmospheric turbulence simulator consisting of two point sources, a commercially available deformable mirror with a 12×12 actuator array, and two random phase plates. The simulator generates an atmospherically distorted single or binary star with varying stellar magnitudes and angular separations. We simulate a binary star by optically combining two point sources mounted on independent precision stages. The light intensity of each source (an LED with a pinhole) is adjustable to the corresponding stellar magnitude, while its angular separation is precisely adjusted by moving the corresponding stage. First, the atmospheric phase disturbance at a single instant, i.e., a phase screen, is generated via a computer simulation based on the thin-layer Kolmogorov atmospheric model, and its temporal evolution is predicted based on the frozen-flow hypothesis. The deformable mirror is then continuously best-fitted to the time-sequenced phase screens using the least-squares method. Similarly, we implement another simulation by rotating two random phase plates which were manufactured to have atmospheric-disturbance-like residual aberrations. This latter method is limited in its ability to simulate atmospheric disturbances, but it is easy and inexpensive to implement. With these two methods, individually or in unison, we can simulate typical atmospheric disturbances observed at the Bohyun Observatory in South Korea, which correspond to Fried parameters of 7 to 15 cm at the telescope pupil plane at a wavelength of 500 nm.
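Phase screens of the kind mentioned above are commonly generated by filtering white noise with the Kolmogorov spectrum in the Fourier domain. A rough sketch of that standard approach (not the authors' code; the normalization constants here are indicative only):

```python
import numpy as np

def kolmogorov_phase_screen(n, pixel_scale, r0, seed=0):
    """FFT-based random phase screen with Kolmogorov statistics.
    n: grid size (pixels); pixel_scale: metres per pixel; r0: Fried parameter (m).
    Uses the phase PSD ~ 0.023 * r0**(-5/3) * k**(-11/3)."""
    rng = np.random.default_rng(seed)
    f = np.fft.fftfreq(n, d=pixel_scale)      # spatial frequencies (1/m)
    kx, ky = np.meshgrid(f, f)
    k = np.hypot(kx, ky)
    k[0, 0] = 1.0                             # placeholder to avoid divide-by-zero
    psd = 0.023 * r0 ** (-5.0 / 3.0) * k ** (-11.0 / 3.0)
    psd[0, 0] = 0.0                           # remove the piston term
    noise = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    df = 1.0 / (n * pixel_scale)              # frequency-grid spacing
    screen = np.fft.ifft2(noise * np.sqrt(psd)) * (n * n) * df
    return screen.real                        # phase screen in radians
```

Under the frozen-flow hypothesis, the temporal evolution is then obtained simply by translating such a screen across the pupil at the wind speed.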

Efficacy Evaluation of Anti-wrinkle Products in Japan

  • Masaki Hitoshi
    • Journal of the Society of Cosmetic Scientists of Korea, v.29 no.2 s.43, pp.67-77, 2003
  • Two categories of cosmetic products, cosmetics and quasi-drugs, have been established by the Ministry of Health, Labor and Welfare (MHLW) in Japan. Japanese pharmaceutical law defines products categorized as cosmetics as not exhibiting any effects on human skin; in fact, cosmetic products are not permitted to claim any efficacy. On the other hand, products in the quasi-drug category can claim several efficacies, such as anti-inflammatory effects, whitening/lightening effects, hair growth effects, and so on. Unfortunately, the Japanese MHLW has not yet approved anti-aging/anti-wrinkle effects as a claim. However, the population is aging, and demand for anti-aging/anti-wrinkle products is increasing year by year. Japanese cosmetic companies have proposed to the MHLW that anti-aging/anti-wrinkle agents be approved as a claim concept for quasi-drugs. However, unified evaluation methods for anti-aging/anti-wrinkle effects have not been established; currently, each company evaluates the efficacy of products and materials using its own methods. Thus, to request approval from the MHLW, a unified evaluation method is needed. Consequently, the Japan Cosmetic Industry Association (JCIA) established a task force to develop guidelines for evaluating anti-wrinkle effects in 1998. In conclusion, the JCIA would like to adopt visual and image-analysis scales to evaluate anti-wrinkle effects objectively. Generally, wrinkles are roughly classified into three groups: fine wrinkles, linear deep wrinkles, and crow's feet. However, academic societies of dermatology or cosmetics have not yet established a definition of wrinkles in Japan. Thus, before setting up an evaluation method, the definition of wrinkles for evaluation must be decided. 
Wrinkles are defined by the JCIA task force as follows: furrows that people can recognize visually and that appear on the forehead, at the corners of the eyes, and on the back of the neck with aging. In addition, furrows are emphasized by exposure to solar light and by dry conditions. Visual evaluation is the most sensitive method and can be applied to most types of wrinkles. However, visual evaluation is hard to express digitally as results. Besides, in the case of image analysis, data obtained from distinct examinations cannot be compared, because image-analysis data are relative values. Thus, to enhance the reliability of the evaluations, the adoption of an objective scale was required. The principle of the evaluation method is to analyze images taken from silicone replicas of wrinkle areas using several parameters, such as the proportion of wrinkle area (%), the mean depth of the wrinkles (mm), the mean depth of the deepest wrinkle (mm), and the deepest point on the deepest wrinkle. Light is shone on the skin replica from a direction orthogonal to the main orientation of the wrinkle, and the resulting shadow images are quantified by image analysis. To increase the precision of the data and to allow comparisons of independent examinations, a scale with furrows of several depths (200, 400, 600, 800, and 1,000 μm) is adopted in the evaluation system. I will explain the guidelines established by the JCIA in the presentation.

A Pharmacogenomic-based Antidepressant Treatment for Patients with Major Depressive Disorder: Results from an 8-week, Randomized, Single-blinded Clinical Trial

  • Han, Changsu;Wang, Sheng-Min;Bahk, Won-Myong;Lee, Soo-Jung;Patkar, Ashwin A.;Masand, Prakash S.;Mandelli, Laura;Pae, Chi-Un;Serretti, Alessandro
    • Clinical Psychopharmacology and Neuroscience, v.16 no.4, pp.469-480, 2018
  • Objective: Pharmacogenomic-based antidepressant treatment (PGATx) may result in more precise pharmacotherapy of major depressive disorder (MDD) with better drug therapy guidance. Methods: An 8-week, randomized, single-blind clinical trial was conducted to evaluate the effectiveness and tolerability of PGATx in 100 patients with MDD. All recruited patients were randomly allocated either to the PGATx (n=52) or treatment-as-usual (TAU, n=48) group. The primary endpoint was the change in total score of the Hamilton Depression Rating Scale-17 (HAMD-17) from baseline to end of treatment. Response rate (at least a 50% reduction in HAMD-17 score from baseline), remission rate (HAMD-17 score ≤7 at the end of treatment), and the change in total score of the Frequency, Intensity, and Burden of Side Effects Ratings (FIBSER) from baseline to end of treatment were also investigated. Results: The mean change in HAMD-17 score differed significantly between the two groups, favoring PGATx by -4.1 points (p=0.010) at the end of treatment. The mean change in FIBSER score from baseline also differed significantly between the two groups, favoring PGATx by -2.5 points (p=0.028). The response rate (71.7% vs. 43.6%, p=0.014) was also significantly higher in PGATx than in TAU at the end of treatment, while the remission rate was numerically higher in PGATx than in TAU without a statistically significant difference (45.5% vs. 25.6%, p=0.071). Early drop-out associated with adverse events was also numerically higher in TAU (n=9, 50.0%) than in PGATx (n=4, 30.8%). Conclusion: The present study demonstrates that PGATx may be a better treatment option for MDD in terms of effectiveness and tolerability; however, study shortcomings may limit generalization. Adequately powered, well-designed subsequent studies are needed to prove its practicability and clinical utility in routine practice.