• Title/Summary/Keyword: A* algorithm

Validating a New Approach to Quantify Posterior Corneal Curvature in Vivo (각막 후면 지형 측정을 위한 새로운 방법의 신뢰도 분석 및 평가)

  • Yoon, Jeong Ho;Avudainayagam, Kodikullam;Avudainayagam, Chitralekha;Swarbrick, Helen A.
    • Journal of Korean Ophthalmic Optics Society / v.17 no.2 / pp.223-232 / 2012
  • Purpose: To validate a new research method for determining posterior corneal curvature and asphericity (Q) in vivo, based on measurements of anterior corneal topography and corneal thickness. Methods: Anterior corneal topographic data, derived from the Medmont E300 corneal topographer, and total corneal thickness data measured along the horizontal corneal meridian using the Holden-Payor optical pachometer, were used to calculate the anterior and posterior corneal apical radii of curvature and Q. To calculate accurate total corneal thickness, the local radius of anterior corneal curvature and an exact solution for the relationship between real and apparent thickness were taken into consideration; this differs from previous approaches. Elliptical curves for the anterior and posterior cornea were calculated by applying a best-fit algorithm to the anterior corneal topographic data and to the derived coordinates of the posterior cornea, respectively. To validate the calculations of posterior corneal topography, ten polymethyl methacrylate (PMMA) lenses and the right eyes of five adult subjects were examined. Results: The mean absolute accuracy (${\pm}$standard deviation (SD)) of the calculated posterior apical radius and Q of the ten PMMA lenses was $0.053{\pm}0.044mm$ (95% confidence interval (CI) -0.033 to 0.139) and $0.10{\pm}0.10$ (95% CI -0.10 to 0.31), respectively. The mean absolute repeatability coefficient (${\pm}SD$) of the calculated posterior apical radius and Q of the five human eyes was $0.07{\pm}0.06mm$ (95% CI -0.05 to 0.19) and $0.09{\pm}0.07$ (95% CI -0.05 to 0.23), respectively. Conclusions: The results show that acceptable accuracy in the calculation of posterior apical radius and Q was achieved. This new method shows promise for application to the living human cornea.
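
As an illustration of the best-fit step described above, the sketch below fits the standard conic model $y^2 = 2r_{0}x - (1+Q)x^2$ (sagittal depth x, radial distance y) to profile points by linear least squares. The model form, variable names, and synthetic data are assumptions for illustration, not the authors' exact procedure.

```python
import numpy as np

def fit_conic_profile(x, y):
    """Least-squares fit of the conic y^2 = 2*r0*x - (1+Q)*x^2 to a
    corneal meridional profile; returns apical radius r0 and asphericity Q.
    A generic sketch, not the paper's validated algorithm."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    A = np.column_stack([x, x**2])              # linear model y^2 = a*x + b*x^2
    (a, b), *_ = np.linalg.lstsq(A, y**2, rcond=None)
    return a / 2.0, -(b + 1.0)                  # r0 = a/2, Q = -(b+1)

# Synthetic anterior profile with r0 = 7.8 mm and Q = -0.2 (hypothetical values)
r0, Q = 7.8, -0.2
y = np.linspace(-5, 5, 101)
x = (r0 - np.sqrt(r0**2 - (1 + Q) * y**2)) / (1 + Q)
print(fit_conic_profile(x, y))                  # ~ (7.8, -0.2)
```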

Development of Measuring Technique for Milk Composition by Using Visible-Near Infrared Spectroscopy (가시광선-근적외선 분광법을 이용한 유성분 측정 기술 개발)

  • Choi, Chang-Hyun;Yun, Hyun-Woong;Kim, Yong-Joo
    • Food Science and Preservation / v.19 no.1 / pp.95-103 / 2012
  • The objective of this study was to develop models for predicting the milk properties (fat, protein, SNF, lactose, MUN) of unhomogenized milk using visible and near-infrared (NIR) spectroscopy. A total of 180 milk samples were collected from dairy farms. To determine the optimal measurement temperature, the milk samples were kept at three levels ($5^{\circ}C$, $20^{\circ}C$, and $40^{\circ}C$). A spectrophotometer was used to measure the reflectance spectra of the milk samples. Multiple linear regression (MLR) models with a stepwise method were developed to select the optimal wavelengths. Preprocessing methods were used to minimize spectroscopic noise, and partial least squares (PLS) models were developed to predict the milk properties of the unhomogenized milk. The PLS results showed a good correlation between the predicted and measured milk properties of the samples at $40^{\circ}C$ and at 400~2,500 nm. The optimal wavelength range for fat and protein was 1,600~1,800 nm, and normalization improved the prediction performance. SNF and lactose were optimized at 1,600~1,900 nm, and MUN at 600~800 nm. The best preprocessing methods for SNF, lactose, and MUN turned out to be smoothing, MSC, and the second derivative, respectively. The correlation coefficients between the predicted and measured fat, protein, SNF, lactose, and MUN were 0.98, 0.90, 0.82, 0.75, and 0.61, respectively. These results indicate that the models can be used to assess milk quality.
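
For illustration, a minimal PLS calibration sketch in the spirit of the study; the `spectra` and `fat` arrays, the train/test split, and the number of latent variables are hypothetical placeholders, not the actual data or settings.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split

# Hypothetical data: reflectance spectra (samples x wavelengths) and
# laboratory reference values for fat content.
rng = np.random.default_rng(0)
spectra = rng.random((180, 2101))     # e.g. 400-2,500 nm at 1 nm steps
fat = rng.random(180) * 5.0           # placeholder reference values (%)

X_tr, X_te, y_tr, y_te = train_test_split(spectra, fat, test_size=0.25,
                                          random_state=0)

# Fit a PLS model; the number of latent variables would normally be chosen
# by cross-validation, and the spectra would first be preprocessed
# (normalization, MSC, derivatives), as in the study.
pls = PLSRegression(n_components=10).fit(X_tr, y_tr)
y_hat = pls.predict(X_te).ravel()
print("correlation:", np.corrcoef(y_te, y_hat)[0, 1])
```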

Reliability Based Stability Analysis and Design Criteria for Reinforced Concrete Retaining Wall (신뢰성(信賴性) 이론(理論)에 의한 R.C.옹벽(擁壁)의 안정해석(安定解析) 및 설계규준(設計規準))

  • Cho, Tae Song;Cho, Hyo Nam;Chun, Chai Myung
    • KSCE Journal of Civil and Environmental Engineering Research / v.3 no.3 / pp.71-86 / 1983
  • Current R.C. retaining wall design is based on WSD, but a reliability-based design method is more rational than WSD. For this reason, this study proposes reliability-based design criteria for the cantilever retaining wall, the most common type of retaining wall, and also proposes theoretical bases for the nominal safety factors of stability analysis by introducing reliability theory. The limit state equations for the stability analysis and for the design of each part of the cantilever retaining wall are derived, and the uncertainty-measuring algorithms for each equation are derived by MFOSM, using Coulomb's coefficient of active earth pressure and Hansen's bearing capacity formula. The levels of uncertainty corresponding to these algorithms are proposed as values appropriate to domestic practice. The target reliability indices (overturning: ${\beta}_0$=4.0, sliding: ${\beta}_0$=3.5, bearing capacity: ${\beta}_0$=3.0, design for flexure: ${\beta}_0$=3.0, design for shear: ${\beta}_0$=3.2) are selected as optimal values based on calibration against the current R.C. retaining wall design safety provisions. Load and resistance factors are evaluated using the proposed uncertainties and the selected target reliability indices. Furthermore, a set of nominal safety factors, allowable stresses, and allowable shear stresses is proposed for the current WSD design provisions. It may be asserted that the proposed LRFD reliability-based design criteria for the R.C. retaining wall should be incorporated into the current R.C. design codes as a design provision corresponding to the USD provisions of the current code.
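
The reliability index behind MFOSM can be sketched as below; the limit state function, input statistics, and numerical differentiation are textbook placeholders, not the uncertainty-measuring algorithms derived in the paper.

```python
import numpy as np

def mfosm_beta(g, mu, sigma, eps=1e-6):
    """Mean-value first-order second-moment reliability index:
    beta = g(mu) / sqrt(sum_i (dg/dx_i(mu) * sigma_i)^2),
    with g(x) <= 0 denoting failure and uncorrelated basic variables."""
    mu, sigma = np.asarray(mu, float), np.asarray(sigma, float)
    grad = np.empty_like(mu)
    for i in range(mu.size):
        h = np.zeros_like(mu)
        h[i] = eps * max(abs(mu[i]), 1.0)
        grad[i] = (g(mu + h) - g(mu - h)) / (2.0 * h[i])   # central difference
    return g(mu) / np.sqrt(np.sum((grad * sigma) ** 2))

# Illustrative sliding check: g = W*tan(phi) - driving force (made-up numbers)
g_slide = lambda x: x[0] * np.tan(np.radians(x[1])) - x[2]
print(mfosm_beta(g_slide, mu=[500.0, 30.0, 200.0], sigma=[50.0, 3.0, 40.0]))
```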

Advanced Improvement for Frequent Pattern Mining using Bit-Clustering (비트 클러스터링을 이용한 빈발 패턴 탐사의 성능 개선 방안)

  • Kim, Eui-Chan;Kim, Kye-Hyun;Lee, Chul-Yong;Park, Eun-Ji
    • Journal of Korea Spatial Information System Society / v.9 no.1 / pp.105-115 / 2007
  • Data mining extracts interesting knowledge from large databases. Among the numerous data mining techniques, research has concentrated primarily on clustering and association rules. Clustering, an active research topic, mainly deals with analyzing spatial and attribute data, while association-rule mining deals with identifying frequent patterns. An improved Apriori algorithm using an existing bit-clustering algorithm had been proposed previously. In an effort to identify an alternative that improves on Apriori, we investigated FP-Growth and discussed the possibility of adopting bit-clustering to address its shortcomings. FP-Growth using bit-clustering demonstrated better performance than the existing method. We used chess data, a standard pattern-mining benchmark, in our experiments, building FP-trees with different minimum support values. For high minimum support values, results similar to those of the existing technique were obtained; in the other cases, however, the proposed technique performed better than the existing one. As a result, the proposed technique is considered to lead to higher performance. In addition, a method to apply bit-clustering to GML data was proposed.
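
To illustrate the bit-vector idea behind bit-clustering: each item gets a transaction bitmap, so the support of a candidate itemset is the popcount of the AND of its bitmaps, which is what makes the counting step cheap. The data and function names are illustrative, not the authors' exact procedure.

```python
from collections import defaultdict

def item_bitmaps(transactions):
    """Build one bitmap per item (as a Python int); bit r is set when the
    item occurs in transaction r."""
    bitmaps = defaultdict(int)
    for row, items in enumerate(transactions):
        for item in items:
            bitmaps[item] |= 1 << row
    return bitmaps

def support(bitmaps, itemset, n_transactions):
    """Support of an itemset = popcount of the AND of its item bitmaps."""
    acc = (1 << n_transactions) - 1
    for item in itemset:
        acc &= bitmaps.get(item, 0)
    return bin(acc).count("1")

transactions = [{"a", "b", "c"}, {"a", "c"}, {"b", "c"}, {"a", "b", "c"}]
bm = item_bitmaps(transactions)
print(support(bm, {"a", "c"}, len(transactions)))   # 3
```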

The Effects of a MR Torso Coil on CT Attenuation Correction for PET (PET/CT 검사에 있어서 MR Torso Coil의 CT 감쇄보정에 대한 영향 평가)

  • Lee, Seung Jae;Bahn, Young Kag;Oh, Shin Hyun;Gang, Cheon-Gu;Lim, Han Sang;Kim, Jae Sam;Lee, Chang Ho;Seo, Soo-Hyun;Park, Yong Sung
    • The Korean Journal of Nuclear Medicine Technology / v.16 no.2 / pp.81-86 / 2012
  • Purpose: Combined MR/PET scanners that use MRI for PET attenuation correction (AC) face the challenge that surface coils are absent from the MR images, so attenuation in the coils cannot be accounted for directly. To make up for this weak point of MR attenuation correction, a Three Modality System (PET/CT + MR) is used at Severance Hospital. The goal of this work was to investigate the effects of an MR Torso Coil on CT attenuation correction for PET. Materials and Methods: PET artifacts were evaluated when the MR Torso Coil was present in the CT-AC data, with varying kV and mA, using a uniform water phantom and the 1994 NEMA cylindrical phantom. The following two scenarios were evaluated and compared: (1) the uniform cylinder phantom and the MR Torso Coil scanned and reconstructed using CT-AC; (2) the 1994 NEMA cylindrical phantom and the MR Torso Coil scanned and reconstructed using CT-AC. Results: Streak artifacts were present in CT images containing the MR Torso Coil due to its metal components. These artifacts persisted after the CT images were converted for PET-AC. CT scans tended to overestimate the linear attenuation coefficient of the metal components as kV and mA increased when using conventional methods for converting from CT numbers. Conclusion: The presence of MR coils during PET/CT scanning can cause subtle artifacts and potentially important quantification errors. Alternative CT techniques that mitigate artifacts should be used to improve AC accuracy. When possible, removing segments of an MR coil prior to the PET/CT exam is recommended. Further, MR coils could be redesigned to reduce artifacts by rearranging the placement of the most attenuating materials.
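
The "conventional conversion from CT numbers" referred to above is typically a bilinear mapping from HU to 511 keV linear attenuation coefficients, as sketched below. The slopes and the 0 HU breakpoint are illustrative; real scanners calibrate them per kVp, which is why highly attenuating coil components scanned at different kV/mA map to biased mu values.

```python
import numpy as np

MU_WATER_511 = 0.096   # cm^-1, linear attenuation of water at 511 keV

def ct_to_mu_511(hu, soft_slope=MU_WATER_511 / 1000.0, bone_slope=5.0e-5):
    """Bilinear HU -> mu(511 keV) conversion for PET attenuation correction.
    Below 0 HU the voxel is treated as an air/water mixture, above 0 HU as a
    water/bone mixture with a shallower slope. Coefficients are illustrative,
    not a specific scanner calibration."""
    hu = np.asarray(hu, float)
    mu = np.where(hu <= 0.0,
                  MU_WATER_511 + soft_slope * hu,
                  MU_WATER_511 + bone_slope * hu)
    return np.clip(mu, 0.0, None)

print(ct_to_mu_511([-1000, 0, 1000, 3000]))  # air, water, bone-like, metal streak
```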

The Understanding and Application of Noise Reduction Software in Static Images (정적 영상에서 Noise Reduction Software의 이해와 적용)

  • Lee, Hyung-Jin;Song, Ho-Jun;Seung, Jong-Min;Choi, Jin-Wook;Kim, Jin-Eui;Kim, Hyun-Joo
    • The Korean Journal of Nuclear Medicine Technology / v.14 no.1 / pp.54-60 / 2010
  • Purpose: Nuclear medicine manufacturers provide various software packages that shorten imaging time using their own image processing techniques, such as UltraSPECT, ASTONISH, Flash3D, Evolution, and nSPEED. Seoul National University Hospital has introduced software from Siemens and Philips, but the algorithmic differences between the two packages were still hard to understand. The purpose of this study was therefore to characterize the difference between the two packages on planar images and to investigate the possibility of applying them to images produced with high-energy isotopes. Materials and Methods: First, a phantom study was performed to understand the difference between the packages in static studies. Images with various count levels were acquired and analyzed quantitatively after applying PIXON (Siemens) and ASTONISH (Philips), respectively. We then applied the packages to applicable static studies, looked for merits and demerits, and also applied them to images produced with high-energy isotopes. Finally, a blind test, excluding the phantom images, was conducted by nuclear medicine physicians. Results: In the FWHM test using a capillary source, there was nearly no difference between the pre- and post-processing images with PIXON, whereas ASTONISH improved the FWHM. However, both the standard deviation (SD) and the variance decreased with PIXON, while they increased markedly with ASTONISH. In the background variability comparison using the IEC phantom, variability decreased overall with PIXON while it increased somewhat with ASTONISH. The contrast ratio in each sphere increased for both methods. Regarding image scale, the window width increased 4~5 times after processing with PIXON, while ASTONISH showed nearly no difference. From the phantom analysis, ASTONISH appears applicable to studies that need quantitative analysis or high contrast, and PIXON to studies with insufficient counts or long acquisition times. Conclusion: The quantitative values used for routine analysis generally improved after applying the two packages. However, it is hard to maintain consistency across all nuclear medicine studies, because the resulting images differ owing to the characteristics of the algorithms rather than to differences between gamma cameras. It is also hard to expect high image quality with time-shortening protocols such as whole-body scans. Nevertheless, the software can be applied to static studies with the algorithm characteristics in mind, and a change of image quality can be expected when applying it to high-energy isotope images.
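
The background variability and contrast figures mentioned above are commonly computed from ROI statistics in the usual NEMA IEC style, as in the sketch below; the ROI values are hypothetical, not the study's measurements.

```python
import numpy as np

def background_variability(bg_roi_means):
    """Percent background variability: SD of the background ROI means
    divided by their average."""
    bg = np.asarray(bg_roi_means, float)
    return 100.0 * bg.std(ddof=1) / bg.mean()

def contrast_ratio(sphere_mean, bg_mean, activity_ratio):
    """Percent contrast of a hot sphere with a known true
    sphere-to-background activity ratio."""
    return 100.0 * (sphere_mean / bg_mean - 1.0) / (activity_ratio - 1.0)

# Hypothetical ROI statistics for a processed IEC phantom image
print(background_variability([102.0, 98.5, 101.2, 99.7, 100.4]))
print(contrast_ratio(sphere_mean=380.0, bg_mean=100.0, activity_ratio=4.0))
```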

A Study on The RFID/WSN Integrated system for Ubiquitous Computing Environment (유비쿼터스 컴퓨팅 환경을 위한 RFID/WSN 통합 관리 시스템에 관한 연구)

  • Park, Yong-Min;Lee, Jun-Hyuk
    • Journal of the Institute of Electronics Engineers of Korea TC / v.49 no.1 / pp.31-46 / 2012
  • The most critical technology for implementing ubiquitous health care is Ubiquitous Sensor Network (USN) technology, which makes use of various sensor technologies, processor integration technology, and wireless network technologies-Radio Frequency Identification (RFID) and Wireless Sensor Network (WSN)-to easily gather and monitor physical environment information from a remote site. With this capability, USN technology can extend the information technology of the existing virtual space to real environments. However, although RFID and WSN have technical similarities and mutual effects, they have largely been studied separately, and there have not been sufficient studies on their technical integration. EPCglobal, recognizing this issue, therefore proposed the EPC Sensor Network to efficiently integrate and interoperate RFID and WSN technologies on the basis of the international-standard EPCglobal network. The EPC Sensor Network uses the Complex Event Processing method in the middleware to integrate data arriving through RFID and WSN in a single environment and to interoperate the events based on the EPCglobal network. However, because the EPC Sensor Network continues to operate even when the minimum conditions for finding complex events in the middleware are not met, its operating cost rises. Moreover, since the technology is based on the EPCglobal network, it can neither operate on sensor data alone nor connect and interoperate with the individual information systems in which the most important information of the ubiquitous computing environment is stored. Therefore, to address the problems of the existing system, we propose the design and implementation of a USN integrated management system. First, we propose an integration system that manages RFID and WSN data based on the Session Initiation Protocol (SIP). Second, we define the minimum conditions for complex events so that unnecessary complex events can be screened out in the middleware, and propose an algorithm that extracts complex events only when the minimum conditions are met. To evaluate the performance of the proposed methods, we implemented the SIP-based integrated management system.
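
A minimal sketch of the minimum-condition idea: windowed RFID/WSN events are promoted to a complex event only when the window meets the thresholds, so windows that cannot qualify are skipped before any further correlation. The event fields, thresholds, and window length are hypothetical placeholders for the conditions defined in the paper.

```python
from dataclasses import dataclass
from typing import Iterable, List

@dataclass
class SensorEvent:
    source: str        # "RFID" or "WSN"
    node: str          # tag ID or sensor node ID
    value: float
    timestamp: float   # seconds

def detect_complex_events(events: Iterable[SensorEvent],
                          min_events: int = 3,
                          min_sources: int = 2,
                          window_sec: float = 10.0) -> List[List[SensorEvent]]:
    """Group events into consecutive time windows and emit a complex event
    only when a window satisfies the minimum conditions: enough events and
    enough distinct source types."""
    def qualifies(win):
        return (len(win) >= min_events and
                len({e.source for e in win}) >= min_sources)

    out, window = [], []
    for ev in sorted(events, key=lambda e: e.timestamp):
        if window and ev.timestamp - window[0].timestamp > window_sec:
            if qualifies(window):
                out.append(window)
            window = []
        window.append(ev)
    if window and qualifies(window):
        out.append(window)
    return out
```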

Quantitative Conductivity Estimation Error due to Statistical Noise in Complex $B_1{^+}$ Map (정량적 도전율측정의 오차와 $B_1{^+}$ map의 노이즈에 관한 분석)

  • Shin, Jaewook;Lee, Joonsung;Kim, Min-Oh;Choi, Narae;Seo, Jin Keun;Kim, Dong-Hyun
    • Investigative Magnetic Resonance Imaging / v.18 no.4 / pp.303-313 / 2014
  • Purpose: In-vivo conductivity reconstruction using the transmit field ($B_1{^+}$) information of MRI has been proposed. We assessed the accuracy of conductivity reconstruction in the presence of statistical noise in the complex $B_1{^+}$ map and provide a parametric model of the conductivity-to-noise ratio. Materials and Methods: The $B_1{^+}$ distribution was simulated for a cylindrical phantom model. By adding complex Gaussian noise to the simulated $B_1{^+}$ map, the quantitative conductivity estimation error was evaluated. The evaluation was repeated over several parameters such as Larmor frequency, object radius, and SNR of the $B_1{^+}$ map, and a parametric model for the conductivity-to-noise ratio was developed from these parameters. Results: According to the simulation results, the conductivity estimate is more sensitive to statistical noise in the $B_1{^+}$ phase than to noise in the $B_1{^+}$ magnitude. The conductivity estimate of the object of interest does not depend on the external object surrounding it. The conductivity-to-noise ratio is proportional to the signal-to-noise ratio of the $B_1{^+}$ map, the Larmor frequency, the conductivity value itself, and the number of averaged pixels. To estimate the conductivity of a targeted tissue accurately, the SNR of the $B_1{^+}$ map and an adequate filtering size have to be taken into account in the reconstruction process. In addition, the simulation results were verified on a conventional 3T MRI scanner. Conclusion: Through these relationships, the quantitative conductivity estimation error due to statistical noise in the $B_1{^+}$ map is modeled. Using this model, further issues regarding filtering and reconstruction algorithms can be investigated for MREPT.
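
For orientation, a Helmholtz-based MREPT estimate from a complex $B_1{^+}$ map can be sketched as sigma = Im(lap(B1+)/B1+)/(mu0*omega), assuming locally homogeneous electrical properties; the 2D finite-difference Laplacian and the synthetic plane-wave check below are simplifications, and it is exactly this Laplacian step that makes the estimate sensitive to the noise analyzed in the paper.

```python
import numpy as np

MU0 = 4e-7 * np.pi          # vacuum permeability [H/m]
EPS0 = 8.854e-12            # vacuum permittivity [F/m]

def ept_conductivity(b1, voxel_m, larmor_hz):
    """sigma = Im( lap(B1+) / B1+ ) / (mu0 * omega) on a 2D complex map,
    using a periodic finite-difference Laplacian (edge pixels are unreliable)."""
    omega = 2.0 * np.pi * larmor_hz
    lap = (np.roll(b1, 1, 0) + np.roll(b1, -1, 0) +
           np.roll(b1, 1, 1) + np.roll(b1, -1, 1) - 4.0 * b1) / voxel_m**2
    return np.imag(lap / b1) / (MU0 * omega)

# Sanity check on a homogeneous plane-wave field (sigma = 0.5 S/m, eps_r = 80,
# 128 MHz): the recovered conductivity should be close to 0.5.
f0, sigma, eps_r = 128e6, 0.5, 80.0
omega = 2 * np.pi * f0
k = np.sqrt(omega**2 * MU0 * EPS0 * eps_r - 1j * omega * MU0 * sigma)
x = np.arange(64) * 2e-3                      # 2 mm voxels
b1 = np.exp(1j * k * x)[None, :].repeat(64, 0)
print(np.median(ept_conductivity(b1, 2e-3, f0)))
```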

Semantic Process Retrieval with Similarity Algorithms (유사도 알고리즘을 활용한 시맨틱 프로세스 검색방안)

  • Lee, Hong-Joo;Klein, Mark
    • Asia pacific journal of information systems / v.18 no.1 / pp.79-96 / 2008
  • One of the roles of Semantic Web services is to execute dynamic intra-organizational services, including the integration and interoperation of business processes. Since different organizations design their processes differently, retrieval of similar semantic business processes is necessary to support inter-organizational collaboration. Most approaches for finding services that have certain features and support certain business processes have relied on some type of logical reasoning and exact matching. This paper presents our approach of using imprecise matching to expand the results of an exact matching engine when querying the OWL (Web Ontology Language) MIT Process Handbook. The MIT Process Handbook is an electronic repository of best-practice business processes, intended to help people (1) redesign organizational processes, (2) invent new processes, and (3) share ideas about organizational practices. To use the MIT Process Handbook for process retrieval experiments, we exported it into an OWL-based format: we model the Process Handbook meta-model in OWL and export the processes in the Handbook as instances of the meta-model. Next, we need a sizable number of queries and their corresponding correct answers in the Process Handbook. Many previous studies devised artificial datasets composed of randomly generated numbers without real meaning and used subjective ratings for correct answers and for similarity values between processes. To generate a semantics-preserving test data set, we create 20 variants of each target process that are syntactically different but semantically equivalent, using mutation operators; these variants represent the correct answers for the target process. We devise diverse similarity algorithms based on the values of process attributes and on the structure of business processes. We use simple text-retrieval similarity measures such as TF-IDF and Levenshtein edit distance, and we also utilize a tree edit distance measure because semantic processes appear to have a graph structure. In addition, we design similarity algorithms that consider the similarity of process structure, such as part processes, goals, and exceptions. Since we can identify relationships between a semantic process and its subcomponents, this information can be utilized to calculate similarities between processes; Dice's coefficient and the Jaccard similarity measure are used to quantify the overlap between processes in diverse ways. We perform retrieval experiments to compare the performance of the devised similarity algorithms, measuring retrieval performance in terms of precision, recall, and the F measure, the harmonic mean of precision and recall. The tree edit distance shows the poorest performance on all measures. TF-IDF and the method combining TF-IDF with Levenshtein edit distance perform better than the other devised methods; these two measures focus on the similarity of process names and descriptions. In addition, we calculate the rank correlation coefficient, Kendall's tau-b, between the number of process mutations and the ranking of similarity values within each mutation set. In this experiment, similarity measures based on process structure, such as Dice's, Jaccard, and their derivatives, show greater coefficients than measures based on the values of process attributes.
However, the Lev-TFIDF-JaccardAll measure, which considers process structure and attribute values together, shows reasonably good performance in both experiments. For retrieving semantic processes, it therefore seems better to consider diverse aspects of process similarity, such as process structure and the values of process attributes. We generate semantic process data and a retrieval-experiment dataset from the MIT Process Handbook repository, suggest imprecise query algorithms that expand the results returned by an exact matching engine such as SPARQL, and compare the retrieval performance of the similarity algorithms. As limitations and future work, we need to perform experiments with other datasets from other domains, and since there are many similarity values from diverse measures, we may find better ways to identify relevant processes by applying these values simultaneously.
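
The set-overlap and edit-distance measures named above can be sketched as follows; the combined score at the end is a simplified stand-in for a measure such as Lev-TFIDF-JaccardAll, with hypothetical weights and process fields.

```python
def jaccard(a, b):
    """Jaccard similarity between two sets of subcomponents
    (e.g. part processes, goals, exceptions)."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 1.0

def dice(a, b):
    """Dice's coefficient between two sets."""
    a, b = set(a), set(b)
    return 2 * len(a & b) / (len(a) + len(b)) if a or b else 1.0

def levenshtein(s, t):
    """Edit distance between two strings (e.g. process names)."""
    prev = list(range(len(t) + 1))
    for i, cs in enumerate(s, 1):
        cur = [i]
        for j, ct in enumerate(t, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + (cs != ct)))
        prev = cur
    return prev[-1]

def name_similarity(s, t):
    """Levenshtein similarity normalised to [0, 1]."""
    return 1.0 if not (s or t) else 1.0 - levenshtein(s, t) / max(len(s), len(t))

# Hypothetical processes: average a name-based and a structure-based score.
p1 = {"name": "Sell product", "parts": {"identify customer", "deliver"}}
p2 = {"name": "Sell products", "parts": {"identify customer", "ship goods"}}
print(0.5 * name_similarity(p1["name"], p2["name"]) +
      0.5 * jaccard(p1["parts"], p2["parts"]))
```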

A Study on Spatial Pattern of Impact Area of Intersection Using Digital Tachograph Data and Traffic Assignment Model (차량 운행기록정보와 통행배정 모형을 이용한 교차로 영향권의 공간적 패턴에 관한 연구)

  • PARK, Seungjun;HONG, Kiman;KIM, Taegyun;SEO, Hyeon;CHO, Joong Rae;HONG, Young Suk
    • Journal of Korean Society of Transportation / v.36 no.2 / pp.155-168 / 2018
  • In this study, we investigated the directional patterns of vehicles entering an intersection from its upstream links, as a step toward predicting direction-specific intersection traffic volumes over short horizons (such as 5 or 10 minutes) on interrupted flow, and examined the possibility of traffic-volume prediction using a traffic assignment model. The analysis examines the similarity of patterns by performing cluster analysis on the ratios of traffic volume by intersection direction, aggregated into 2-hour intervals, computed from one week of taxi DTG (Digital Tachograph) data. In addition, to link with the results of the traffic assignment model, the study compares the impact areas within 5 and 10 minutes of the intersection center with the analysis results from the taxi DTG data. To do this, we developed an algorithm that delineates the impact area of an intersection using the taxi DTG data and the traffic assignment model. As a result of the analysis, the taxi intersection-entry patterns were grouped into 12 clusters, with a Cubic Clustering Criterion, indicating the confidence level of the clustering, of 6.92. In the correlation analysis with the impact area of the traffic assignment model, the correlation coefficient for the 5-minute impact area was 0.86, a significant result. The coefficient dropped to 0.69 for the 10-minute impact area, which was attributed to insufficient accuracy of the O/D (Origin/Destination) volumes and the network data. In the future, if the accuracy of the traffic network and of time-of-day O/D volumes improves, the traffic volumes calculated from the traffic assignment model are expected to be usable for controlling traffic signals at intersections.
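
A minimal sketch of clustering directional entry ratios, assuming hypothetical per-link, per-2-hour shares and k-means with k = 12; the paper's own clustering procedure and its Cubic Clustering Criterion of 6.92 are not reproduced here.

```python
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical feature matrix: one row per (upstream link, 2-hour period);
# columns are the shares of entering traffic that turns left, goes straight,
# or turns right. Each row sums to 1.
rng = np.random.default_rng(1)
raw = rng.random((500, 3))
ratios = raw / raw.sum(axis=1, keepdims=True)

km = KMeans(n_clusters=12, n_init=10, random_state=0).fit(ratios)
print(np.bincount(km.labels_))        # cluster sizes
print(km.cluster_centers_.round(2))   # typical directional patterns
```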