• Title/Summary/Keyword: Three-dimensional evaluation


A numerical analysis study on the effects of rock mass anisotropy on tunnel excavation (암반의 이방성이 터널 굴착에 미치는 영향에 대한 수치해석적 연구)

  • Ji-Seok Yun;Sang-Hyeok Shin;Han-Eol Kim;Han-Kyu Yoo
    • Journal of Korean Tunnelling and Underground Space Association
    • /
    • v.26 no.4
    • /
    • pp.327-344
    • /
    • 2024
  • In general tunnel design and analysis, rock masses are often assumed to be isotropic. Under isotropic conditions, material properties are uniform in all directions, which tends to overestimate tunnel stability. However, actual rock masses exhibit anisotropic characteristics due to discontinuities such as joints, bedding planes, and faults, which cause material properties to vary with direction. This anisotropy significantly affects the stress distribution during tunnel excavation, leading to non-uniform deformation and an increased risk of damage; thorough pre-analysis is therefore essential. This study analyzes the displacement and stress changes occurring during tunnel excavation as a function of rock anisotropy. A three-dimensional numerical analysis was performed with the anisotropy index and dip angle as variables. The results showed that as the anisotropy index increased, the displacement in the tunnel increased and stress concentration became more pronounced. The maximum displacement and shear stress were observed where the dipping planes intersected the tunnel.
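
An illustrative aside: the sketch below constructs the 6x6 compliance matrix of a transversely isotropic rock mass in Python/NumPy, treating the anisotropy index as the modulus ratio E/E'. The parameter values, the shear modulus approximation, and this reading of the anisotropy index are assumptions for illustration, not the paper's settings.

```python
import numpy as np

def transversely_isotropic_compliance(E, aniso_index, nu=0.25, nu_p=0.2):
    """6x6 compliance matrix (Voigt notation) for a transversely isotropic
    rock mass: E, nu in the bedding plane; E' = E / aniso_index, nu', G'
    across it. Treating the anisotropy index as E/E' is an assumption."""
    Ep = E / aniso_index                 # modulus normal to the bedding plane
    Gp = Ep / (2.0 * (1.0 + nu_p))       # simple G' assumption; codes often
                                         # use the Saint-Venant relation instead
    S = np.zeros((6, 6))
    S[0, 0] = S[1, 1] = 1.0 / E
    S[2, 2] = 1.0 / Ep
    S[0, 1] = S[1, 0] = -nu / E
    S[0, 2] = S[2, 0] = S[1, 2] = S[2, 1] = -nu_p / Ep
    S[3, 3] = S[4, 4] = 1.0 / Gp         # out-of-plane shear
    S[5, 5] = 2.0 * (1.0 + nu) / E       # in-plane shear
    return S

# The stiffness matrix a numerical model needs is the inverse of S.
C = np.linalg.inv(transversely_isotropic_compliance(E=10e9, aniso_index=3.0))
```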

Evaluation of Electron Boost Fields based on Surgical Clips and Operative Scars in Definitive Breast Irradiation (유방보존술 후 방사선치료에서 수술 흉터와 삽입된 클립을 이용한 전자설 추가 방사선 조사야 평가)

  • Lee, Re-Na;Chung, Eun-Ah;Lee, Ji-Hye;Suh, Hyun-Suk
    • Radiation Oncology Journal
    • /
    • v.23 no.4
    • /
    • pp.236-242
    • /
    • 2005
  • Purpose: To evaluate the role of surgical clips and scars in determining the electron boost field for early-stage breast cancer patients undergoing conserving surgery and postoperative radiotherapy, and to provide an optimal method for drawing the boost field. Materials and Methods: Twenty patients who had 4-7 surgical clips in the excision cavity were selected for this study. Depth information was obtained to determine electron energy by measuring the distance from the skin to the chest wall (SCD) and to the clip implanted in the most posterior area of the tumor bed. Three different electron fields were outlined on a simulation film. The radiological tumor bed was determined by connecting all the clips implanted during surgery. A clinical field (CF) was drawn by adding a 3 cm margin around the surgical scar. A surgical field (SF) was drawn by adding a 2 cm margin around the surgical clips, and an ideal field (IF) was outlined by adding a 2 cm margin around both scar and clips. These fields were digitized into our planning system to measure the area of each field, and the areas of the three electron boost fields were compared. Finally, the surgical clips were contoured on axial CT images and a dose-volume histogram was plotted to investigate the 3-dimensional coverage of the clips. Results: The average depth difference between the SCD and the maximal clip location was 0.7 ± 0.55 cm. A difference of 5 mm or more was seen in 12 patients. The average shifts between the borders of scar and clips were 1.7, 1.2, 1.2, and 0.9 cm in the superior, inferior, medial, and lateral directions, respectively. The area of the CF was larger than SF and IF in 6/20 patients. In 15/20 patients, the area difference between SF and IF was less than 5%. One to three clips were seen outside the CF in 15/20 patients. In addition, dosimetrically inadequate coverage of the clips (less than 80% of the prescribed dose) was observed in 17/20 patients when the CF was used as the boost field. Conclusion: The electron field determined from the clinical scar significantly underestimates the tumor bed in the superior-inferior direction, thereby underdosing the tissue at risk. The electron field obtained from the surgical clips alone does not cover the entire scar properly. As a consequence, our technique, which combines the surgical clips and clinical scars in determining the electron boost field, proved effective in minimizing geographical misses as well as normal tissue complications.
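
To make the field-drawing procedure concrete, here is a minimal geometric sketch of the CF/SF/IF constructions as margin expansions of landmark points, using the shapely library. The clip and scar coordinates are hypothetical, and the convex-hull-plus-buffer outline is a simplification of manual field drawing, not the authors' exact procedure.

```python
from shapely.geometry import MultiPoint, Point

def boost_field(points_cm, margin_cm):
    """Outline a boost field as the convex hull of landmark points
    expanded by a uniform margin (beam's-eye-view coordinates in cm)."""
    return MultiPoint(points_cm).convex_hull.buffer(margin_cm)

clips = [(1.0, 2.0), (3.5, 2.2), (2.8, 4.1), (1.5, 3.9)]  # hypothetical clip positions
scar = [(0.5, 3.0), (2.0, 5.0), (4.0, 4.5)]               # hypothetical scar trace

cf = boost_field(scar, 3.0)             # clinical field: scar + 3 cm margin
sf = boost_field(clips, 2.0)            # surgical field: clips + 2 cm margin
ideal = boost_field(scar + clips, 2.0)  # ideal field: scar and clips + 2 cm margin

print(f"areas: CF {cf.area:.1f}, SF {sf.area:.1f}, IF {ideal.area:.1f} cm^2")
print("clips outside CF:", sum(not cf.contains(Point(p)) for p in clips))
```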

Performance Evaluation of Radiochromic Films and Dosimetry Check™ for Patient-specific QA in Helical Tomotherapy (나선형 토모테라피 방사선치료의 환자별 품질관리를 위한 라디오크로믹 필름 및 Dosimetry Check™의 성능평가)

  • Park, Su Yeon;Chae, Moon Ki;Lim, Jun Teak;Kwon, Dong Yeol;Kim, Hak Joon;Chung, Eun Ah;Kim, Jong Sik
    • The Journal of Korean Society for Radiation Therapy
    • /
    • v.32
    • /
    • pp.93-109
    • /
    • 2020
  • Purpose: The radiochromic film (Gafchromic EBT3, Ashland Advanced Materials, USA) and the 3-dimensional analysis system Dosimetry Check™ (DC, MathResolutions, USA) were evaluated for patient-specific quality assurance (QA) of helical tomotherapy. Materials and Methods: Depending on tumor position, three types of targets were applied to a humanoid phantom (Anderson Rando Phantom, USA): an abdominal tumor (130.6 cm³), a retroperitoneal tumor (849.0 cm³), and a whole-abdominal metastasis tumor (3131.0 cm³). We established a total of 12 comparative treatment plans from four geometric beam-delivery conditions: field widths (FW) of 2.5 cm and 5.0 cm combined with pitches of 0.287 and 0.430. Ionization chamber (1D) and EBT3 (2D) measurements in a cheese phantom were compared with DC measurements, in which the 3D dose is reconstructed on the CT images from the beam fluence log information. For the clinical feasibility evaluation of the DC, dose reconstruction was performed using the same cheese phantom as for the EBT3 method. The recalculated dose distributions quantified the dose errors during the actual irradiation in comparison with the treatment plan on the same CT images. The thread effect, which can appear in helical tomotherapy, was analyzed by ripple amplitude (%). We also performed gamma index analysis (DD: 3%, DTA: 3 mm, passing threshold: 95%) to check the dose distribution patterns. Results: Ripple amplitude measurements showed the highest average, 23.1%, in the retroperitoneal tumor. In the radiochromic film analysis, the absolute dose difference was on average 0.9±0.4%, and the gamma passing rate averaged 96.4±2.2% (passing criterion: >95%), although this could be limited for large targets such as the whole-abdominal metastasis tumor. In the DC analysis with the humanoid phantom at an FW of 5.0 cm, the average over the three targets was 91.8±6.4% in the 2D and 3D comparisons. The three planes (axial, coronal, and sagittal) and the dose profiles could be analyzed against the planned dose distributions even for the entire retroperitoneal tumor and the whole-abdominal metastasis target. The dose errors based on the dose-volume histogram in the DC evaluations increased with FW and pitch. Conclusion: The DC method can perform dose error analysis on the 3D patient image data from the measured beam fluence log information alone, without additional dosimetry tools, for patient-specific quality assurance. There may also be no limit on applicable tumor location and size; therefore, the DC could be useful in patient-specific QA during helical tomotherapy of large and irregular tumors.
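
For readers unfamiliar with the gamma test used above, the following is a brute-force NumPy sketch of a global 2D gamma analysis with 3%/3 mm criteria. Clinical QA software uses optimized search and interpolation; the dose planes here are synthetic, not the study's data.

```python
import numpy as np

def gamma_pass_rate(ref, eva, spacing_mm, dd=0.03, dta_mm=3.0):
    """Brute-force global 2D gamma analysis (DD as a fraction of the maximum
    reference dose, DTA in mm). `ref` and `eva` are dose planes on the same
    grid with pixel pitch `spacing_mm`."""
    ys, xs = np.meshgrid(np.arange(ref.shape[0]) * spacing_mm,
                         np.arange(ref.shape[1]) * spacing_mm, indexing="ij")
    dd_abs = dd * ref.max()
    gamma = np.empty(ref.shape)
    for i in range(ref.shape[0]):
        for j in range(ref.shape[1]):
            dist2 = (ys - ys[i, j]) ** 2 + (xs - xs[i, j]) ** 2
            dose2 = (eva - ref[i, j]) ** 2
            gamma[i, j] = np.sqrt(dist2 / dta_mm**2 + dose2 / dd_abs**2).min()
    return (gamma <= 1.0).mean()

# Tiny synthetic example: a 2% uniform dose error passes 3%/3 mm everywhere.
ref = np.outer(np.hanning(32), np.hanning(32))
print(f"passing rate: {gamma_pass_rate(ref, 1.02 * ref, spacing_mm=2.0):.0%}")
```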

A study of facial soft tissue of Korean adults with normal occlusion using a three-dimensional laser scanner (3차원 레이저 스캐너를 이용한 한국 성인 정상교합자의 안면 연조직에 대한 연구)

  • Baik, Hyoung-Seon;Jeon, Jai-Min;Lee, Hwa-Jin
    • The Korean Journal of Orthodontics
    • /
    • v.36 no.1 s.114
    • /
    • pp.14-29
    • /
    • 2006
  • Developments in computer technology have made possible the 3-dimensional (3-D) evaluation of hard and soft tissues in orthodontic diagnosis, treatment planning, and post-treatment assessment. In this study, Korean adults with normal occlusion (30 males, 30 females) were scanned with a 3-D laser scanner, and 3-D facial images were constructed with the Rapidform 2004 program (Inus Technology Inc., Seoul, Korea). Reference planes were established on the facial soft tissue 3-D images, and a 3-D coordinate system (X axis: left/right, Y axis: superior/inferior, Z axis: anterior/posterior) was defined with the soft tissue nasion as the origin. Twenty-nine measurement points were established on the 3-D image, and 43 linear measurements, 8 angular measurements, and 29 linear distance ratios were obtained. The results are as follows: there were significant differences between males and females in the nasofrontal angle (male: 142°, female: 147°) and the transverse nasal prominence (male: 112°, female: 116°) (p<0.05). The transverse upper lip prominence was 107° in males and 106° in females, and the transverse mandibular prominence was 76° in both males and females. Li-Me' was 0.4 times the length of Go-Me' (mandibular body length), and the mouth height was likewise 0.4 times the mouth width. The linear distance ratios from the coronal reference plane for FT, Zy, Pn, ULPm, Li, and Me' were -1/-1/1/0.5/0.5/-0.6, respectively. A 3-D facial model of Korean adults with normal occlusion could be constructed using the coordinate values and linear measurement values. These data may be used as a reference in 3-D diagnosis and treatment planning for patients with malocclusion or dentofacial deformity, and applied to 3-D analysis of facial soft tissue changes before and after orthodontic treatment and orthognathic surgery.
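
As a small illustration of how angular measurements can be taken in the nasion-origin coordinate system described above, the sketch below computes a three-landmark angle with NumPy. The landmark coordinates are hypothetical (chosen only so the result lands near the male norm reported above), not the study's normative values.

```python
import numpy as np

def angle_deg(a, b, c):
    """Angle at vertex b (degrees) formed by 3-D landmark points a-b-c."""
    v1, v2 = np.asarray(a) - b, np.asarray(c) - b
    cosang = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))

# Hypothetical landmark coordinates (mm) in the nasion-origin system:
# X left/right, Y superior/inferior, Z anterior/posterior.
glabella, nasion, pronasale = [0, 10, 3], [0, 0, 0], [0, -45, 18]
print(f"nasofrontal angle = {angle_deg(glabella, nasion, pronasale):.1f} deg")
```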

The Prediction of DEA based Efficiency Rating for Venture Business Using Multi-class SVM (다분류 SVM을 이용한 DEA기반 벤처기업 효율성등급 예측모형)

  • Park, Ji-Young;Hong, Tae-Ho
    • Asia Pacific Journal of Information Systems
    • /
    • v.19 no.2
    • /
    • pp.139-155
    • /
    • 2009
  • For the last few decades, many studies have tried to explore and unveil venture companies' success factors and unique features in order to identify the sources of such companies' competitive advantages over their rivals. Venture companies have shown a tendency to give high returns to investors, generally by making the best use of information technology, and for this reason many venture companies are keen on attracting avid investors' attention. Investors generally make their investment decisions by carefully examining the evaluation criteria of the alternatives. To them, credit rating information provided by international rating agencies such as Standard and Poor's, Moody's, and Fitch is a crucial source on such pivotal concerns as a company's stability, growth, and risk status. But this type of information is generated only for companies issuing corporate bonds, not for venture companies. Therefore, this study proposes a method for evaluating venture businesses, presenting recent empirical results using financial data of Korean venture companies listed on KOSDAQ in the Korea Exchange. In addition, this paper uses a multi-class SVM to predict the DEA-based efficiency ratings for venture businesses derived from our proposed method. Our approach sheds light on ways to locate efficient companies generating high levels of profit. Above all, in determining effective ways to evaluate a venture firm's efficiency, it is important to understand the major contributing factors of such efficiency. Therefore, this paper is constructed on the basis of the following two ideas for classifying which companies are more efficient venture companies: i) making a DEA-based multi-class rating for the sample companies, and ii) developing a multi-class SVM-based efficiency prediction model for classifying all companies. First, Data Envelopment Analysis (DEA) is a non-parametric multiple input-output efficiency technique that measures the relative efficiency of decision making units (DMUs) using a linear programming based model. It is non-parametric because it requires no assumption on the shape or parameters of the underlying production function. DEA has already been widely applied for evaluating the relative efficiency of DMUs. Recently, a number of DEA-based studies have evaluated the efficiency of various types of companies, such as internet companies and venture companies, and DEA has also been applied to corporate credit ratings. In this study we utilized DEA to sort venture companies into efficiency-based ratings. The Support Vector Machine (SVM), on the other hand, is a popular technique for solving data classification problems. In this paper, we employed SVM to classify the efficiency ratings of IT venture companies according to the results of DEA. The SVM method was first developed by Vapnik (1995). As one of many machine learning techniques, SVM is based on statistical learning theory and has thus far shown good performance, especially in generalization capacity for classification tasks, resulting in numerous applications in many areas of business. SVM is basically an algorithm that finds the maximum-margin hyperplane, i.e., the hyperplane giving the maximum separation between classes; the support vectors are the data points closest to this hyperplane. If the classes are not linearly separable, a kernel function can be used: in the case of nonlinear class boundaries, the original input space is mapped into a high-dimensional dot-product feature space. Many studies have applied SVM to bankruptcy prediction, financial time-series forecasting, and credit rating estimation. In this study we employed SVM to develop a data mining-based efficiency prediction model, using the Gaussian radial basis function as the kernel. For multi-class SVM, we adopted the one-against-one binary classification approach and two all-together methods, proposed by Weston and Watkins (1999) and Crammer and Singer (2000), respectively. In this research, we used corporate information of 154 companies listed on the KOSDAQ market in the Korea Exchange, with financial information for 2005 obtained from KIS (Korea Information Service, Inc.). Using these data, we constructed multi-class ratings with DEA efficiency and built a data mining-based multi-class prediction model. Among the three multi-classification approaches, the Weston and Watkins method achieved the best hit ratio on the test data set. In multi-class problems such as efficiency ratings of venture businesses, it is very useful for investors to know the class within a one-class error when it is difficult to determine the accurate class in the actual market, so we also present accuracy within one-class errors; the Weston and Watkins method showed 85.7% accuracy on our test samples. We conclude that the DEA-based multi-class approach for venture businesses generates more information than a binary classification, notwithstanding the efficiency level involved. We believe this model can help investors in decision making, as it provides a reliable tool for evaluating venture companies in the financial domain. For future research, we perceive the need to enhance such areas as the variable selection process, kernel parameter selection, generalization, and the sample size for multi-class models.
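
A compact sketch of the two-step pipeline described above, under assumed data: input-oriented CCR DEA solved as a linear program with SciPy, followed by a one-against-one RBF-kernel SVM from scikit-learn. The financial figures and rating cut-offs are hypothetical, and the Weston-Watkins and Crammer-Singer all-together formulations would need a dedicated implementation.

```python
import numpy as np
from scipy.optimize import linprog
from sklearn.svm import SVC

def dea_ccr_efficiency(X, Y):
    """Input-oriented CCR DEA (envelopment form): for each DMU o, minimize
    theta s.t. sum_j lam_j * x_j <= theta * x_o, sum_j lam_j * y_j >= y_o,
    lam >= 0. X: (n, m) inputs, Y: (n, s) outputs."""
    n = X.shape[0]
    scores = np.empty(n)
    for o in range(n):
        c = np.r_[1.0, np.zeros(n)]                     # variables: [theta, lam_1..lam_n]
        A_in = np.c_[-X[o][:, None], X.T]               # X.T @ lam - theta * x_o <= 0
        A_out = np.c_[np.zeros((Y.shape[1], 1)), -Y.T]  # -Y.T @ lam <= -y_o
        res = linprog(c, A_ub=np.vstack([A_in, A_out]),
                      b_ub=np.r_[np.zeros(X.shape[1]), -Y[o]],
                      bounds=[(None, None)] + [(0, None)] * n)
        scores[o] = res.fun                             # theta in (0, 1]
    return scores

# Hypothetical financial data: 154 firms, 3 inputs, 2 outputs.
rng = np.random.default_rng(0)
X, Y = rng.uniform(1, 10, (154, 3)), rng.uniform(1, 10, (154, 2))
ratings = np.digitize(dea_ccr_efficiency(X, Y), [0.5, 0.7, 0.9])  # 4 classes

# One-against-one multi-class SVM with an RBF kernel (sklearn's SVC
# uses one-vs-one decomposition internally).
clf = SVC(kernel="rbf", decision_function_shape="ovo").fit(np.c_[X, Y], ratings)
```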

A Spatio-Temporal Clustering Technique for the Moving Object Path Search (이동 객체 경로 탐색을 위한 시공간 클러스터링 기법)

  • Lee, Ki-Young;Kang, Hong-Koo;Yun, Jae-Kwan;Han, Ki-Joon
    • Journal of Korea Spatial Information System Society
    • /
    • v.7 no.3 s.15
    • /
    • pp.67-81
    • /
    • 2005
  • Recently, with the development of Geographic Information Systems, interest in and research on new application services such as Location-Based Services and Telematics, which provide emergency services, neighbor information search, and route search, have been increasing. A user's search in the spatio-temporal databases used in Location-Based Services and Telematics usually fixes the current time on the time axis and queries the spatial and aspatial attributes; thus, if the query range on the time axis is extensive, it is difficult to handle the search operation efficiently. To solve this problem, the snapshot, a method to summarize the location data of moving objects, was introduced. However, if the range of data to store is wide, more storage space is required, and snapshots are created even for space that is rarely searched; the snapshot method therefore tends to waste storage space and memory. In this paper, we suggest the Hash-based Spatio-Temporal Clustering Algorithm (H-STCA), which extends the two-dimensional spatial hash algorithm previously used for spatial clustering to a three-dimensional spatial hash algorithm, overcoming the disadvantages of the snapshot method. This paper also suggests a knowledge extraction algorithm, based on H-STCA, that extracts knowledge for the path search of moving objects from past location data. Moreover, in the performance evaluation with huge amounts of moving object data, the snapshot clustering method using H-STCA demonstrated higher performance than the spatio-temporal index methods and the original snapshot method in search time, storage structure construction time, and optimal path search time. In particular, the performance advantage of the H-STCA-based snapshot clustering method over the existing spatio-temporal index methods and the original snapshot method grew as the number of moving objects increased.
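
The following is a minimal Python sketch of a three-dimensional (x, y, t) grid hash in the spirit of H-STCA. The cell sizes and bucket layout are illustrative assumptions; the paper's snapshot clustering and knowledge extraction steps are not reproduced here.

```python
from collections import defaultdict

class SpatioTemporalHash:
    """Minimal 3-D grid hash: observations (x, y, t) map to a bucket
    keyed by quantized spatial and temporal coordinates."""
    def __init__(self, cell_xy, cell_t):
        self.cell_xy, self.cell_t = cell_xy, cell_t
        self.buckets = defaultdict(list)

    def _key(self, x, y, t):
        return (int(x // self.cell_xy), int(y // self.cell_xy),
                int(t // self.cell_t))

    def insert(self, obj_id, x, y, t):
        self.buckets[self._key(x, y, t)].append((obj_id, x, y, t))

    def query_cell(self, x, y, t):
        """All observations sharing the cell of (x, y, t)."""
        return self.buckets[self._key(x, y, t)]

h = SpatioTemporalHash(cell_xy=100.0, cell_t=60.0)  # 100 m x 100 m x 60 s cells
h.insert("car1", 120.0, 340.0, 30.0)
h.insert("car2", 150.0, 360.0, 45.0)
print(h.query_cell(130.0, 350.0, 50.0))             # both fall in cell (1, 3, 0)
```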

The NCAM Land-Atmosphere Modeling Package (LAMP) Version 1: Implementation and Evaluation (국가농림기상센터 지면대기모델링패키지(NCAM-LAMP) 버전 1: 구축 및 평가)

  • Lee, Seung-Jae;Song, Jiae;Kim, Yu-Jung
    • Korean Journal of Agricultural and Forest Meteorology
    • /
    • v.18 no.4
    • /
    • pp.307-319
    • /
    • 2016
  • A Land-Atmosphere Modeling Package (LAMP) for supporting agricultural and forest management was developed at the National Center for AgroMeteorology (NCAM). The package comprises two components: one is the Weather Research and Forecasting modeling system (WRF) coupled with the Noah-Multiparameterization options (Noah-MP) Land Surface Model (LSM), and the other is an offline one-dimensional LSM. The objective of this paper is to briefly describe the two components of the NCAM-LAMP and to evaluate their initial performance. The coupled WRF/Noah-MP system is configured with a parent domain over East Asia and three nested domains with a finest horizontal grid size of 810 m. The innermost domain covers the two Gwangneung deciduous and coniferous KoFlux sites (GDK and GCK). The model is integrated for about 8 days with initial and boundary conditions taken from the National Centers for Environmental Prediction (NCEP) Final Analysis (FNL) data. The verification variables for the WRF/Noah-MP coupled system are 2-m air temperature, 10-m wind, 2-m humidity, and surface precipitation. Skill scores are calculated for each domain and for two dynamic vegetation options from the differences between the observed data from the Korea Meteorological Administration (KMA) and the data simulated by the WRF/Noah-MP coupled system. The accuracy of the precipitation simulation is examined using a contingency table, from which the Probability of Detection (POD) and the Equitable Threat Score (ETS) are computed. The standalone LSM simulation is conducted for one year with the original settings and is compared with the KoFlux site observations for net radiation, sensible heat flux, latent heat flux, and soil moisture. According to the results, the innermost domain (810 m resolution) showed the minimum root mean square error among all domains for 2-m air temperature, 10-m wind, and 2-m humidity. Turning on dynamic vegetation tended to reduce 10-m wind simulation errors in all domains. The first nested domain (7,290 m resolution) showed the highest precipitation scores, but gained little from the dynamic vegetation option. On the other hand, the offline one-dimensional Noah-MP LSM simulation captured the observed pattern and magnitude of the radiative fluxes and soil moisture, leaving room for further improvement through supplementing the model input of leaf area index and finding a proper combination of model physics.
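
For reference, the two precipitation skill measures named above can be computed from a 2x2 contingency table as follows; the counts in the example are hypothetical, not the study's verification data.

```python
def precipitation_skill(hits, misses, false_alarms, correct_negatives):
    """POD and ETS from a 2x2 rain/no-rain contingency table
    (forecast yes/no vs. observed yes/no)."""
    total = hits + misses + false_alarms + correct_negatives
    pod = hits / (hits + misses)
    hits_random = (hits + misses) * (hits + false_alarms) / total
    ets = (hits - hits_random) / (hits + misses + false_alarms - hits_random)
    return pod, ets

# Hypothetical daily counts over a verification period.
pod, ets = precipitation_skill(hits=42, misses=8, false_alarms=15,
                               correct_negatives=300)
print(f"POD = {pod:.2f}, ETS = {ets:.2f}")
```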

Evaluation of Contralateral Breast Surface Dose in FIF (Field In Field) Tangential Irradiation Technique for Patients Undergone Breast Conservative Surgery (보존적 유방절제 환자의 방사선치료 시 종속조사면 병합방법에 따른 반대편 유방의 표면선량평가)

  • Park, Byung-Moon;Bang, Dong-Wan;Bae, Yong-Ki;Lee, Jeong-Woo;Kim, You-Hyun
    • Journal of Radiological Science and Technology
    • /
    • v.31 no.4
    • /
    • pp.401-406
    • /
    • 2008
  • The aim of this study is to evaluate the contralateral breast (CLB) surface dose of the Field-in-Field (FIF) technique for breast conserving surgery patients. To evaluate the surface dose of the FIF technique, we compared it with the open-field (Open), metal wedge (MW), and enhanced dynamic wedge (EDW) techniques under the same geometrical conditions and prescribed dose. A three-dimensional treatment planning system was used for dose optimization. For verification of the dose calculations, measurements using MOSFET detectors with an Anderson Rando phantom were performed. For all four techniques, the measurement points were at depths of 0 cm (epidermis) and under a 0.5 cm bolus (dermis), spaced 2, 4, 6, 8, and 10 cm from the edge of the medial tangential beam. The dose calculations were done on a 0.25 cm grid with the modified Batho method for inhomogeneity correction. In the planning results, the surface doses differed by 19.6-36.9% and 33.2-138.2% for MW, by 1.0-7.9% and 1.6-37.4% for EDW, and for FIF, at the depths of the epidermis and dermis, respectively, as compared to Open. In the measurements, the surface doses differed by 11.1-71% and 22.9-161% for MW, by 4.1-15.5% and 8.2-37.9% for EDW, and by 4.9% for FIF, at the depths of the epidermis and dermis, respectively, as compared to Open. The surface doses were underestimated in the planning calculations as compared to the MOSFET measurements. The FIF surface dose was the lowest among the techniques, even when compared with the Open method. We conclude that the FIF technique can produce an optimal dose distribution in the breast target while effectively reducing the probability of secondary carcinogenesis due to undesirable scattered radiation to the contralateral breast.

Hierarchical Overlapping Clustering to Detect Complex Concepts (중복을 허용한 계층적 클러스터링에 의한 복합 개념 탐지 방법)

  • Hong, Su-Jeong;Choi, Joong-Min
    • Journal of Intelligence and Information Systems
    • /
    • v.17 no.1
    • /
    • pp.111-125
    • /
    • 2011
  • Clustering is a process of grouping similar or relevant documents into a cluster and assigning a meaningful concept to the cluster. By this process, clustering facilitates fast and correct search for relevant documents by narrowing the search range down to the collection of documents belonging to related clusters. Effective clustering requires techniques for identifying similar documents and grouping them into a cluster, and for discovering the concept that is most relevant to the cluster. One problem that often appears in this context is the detection of a complex concept that overlaps with several simple concepts at the same hierarchical level. Previous clustering methods were unable to identify and represent a complex concept that belongs to several different clusters at the same level in the concept hierarchy, and also could not validate the semantic hierarchical relationship between a complex concept and each of its simple concepts. To solve these problems, this paper proposes a new clustering method that identifies and represents complex concepts efficiently. We developed the Hierarchical Overlapping Clustering (HOC) algorithm, which modifies the traditional agglomerative hierarchical clustering algorithm to allow overlapping clusters at the same level in the concept hierarchy. The HOC algorithm represents the clustering result not by a tree but by a lattice in order to detect complex concepts. We developed a system that employs the HOC algorithm to carry out complex concept detection. This system operates in three phases: 1) preprocessing of documents, 2) clustering using the HOC algorithm, and 3) validation of the semantic hierarchical relationships among the concepts in the lattice obtained from clustering. The preprocessing phase represents the documents as x-y coordinate values in a 2-dimensional space based on the weights of terms appearing in the documents. First, it refines the documents by applying stopword removal and stemming to extract index terms. Then each index term is assigned a TF-IDF weight, and the x-y coordinate value for each document is determined by combining the TF-IDF values of its terms. The clustering phase uses the HOC algorithm, in which the similarity between documents is calculated as the Euclidean distance. Initially, a cluster is generated for each document by grouping the documents closest to it. Then the distance between any two clusters is measured, and the closest clusters are grouped into a new cluster. This process is repeated until the root cluster is generated. In the validation phase, feature selection is applied to validate the appropriateness of the cluster concepts built by the HOC algorithm, checking whether they have meaningful hierarchical relationships. Feature selection extracts key features from a document by identifying and weighting its important and representative terms. To correctly select key features, a method is needed to determine how much each term contributes to the class of the document. Among several methods achieving this goal, this paper adopts the χ² (chi-square) statistic, which measures the dependency of a term t on a class c and represents the relationship between t and c as a numerical value. To demonstrate the effectiveness of the HOC algorithm, a series of performance evaluations was carried out using the well-known Reuters-21578 news collection. The results showed that the HOC algorithm contributes greatly to detecting and producing complex concepts by generating the concept hierarchy in a lattice structure.
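
As an aside on the validation phase, the χ² statistic for term-class dependency can be computed from a 2x2 document-count table as in this small sketch; the counts are hypothetical.

```python
def chi_square(term_in_class, term_not_class, noterm_in_class, noterm_not_class):
    """Chi-square dependency of term t on class c from a 2x2 table:
    A = docs in c containing t, B = docs not in c containing t,
    C = docs in c without t,  D = docs not in c without t."""
    A, B, C, D = term_in_class, term_not_class, noterm_in_class, noterm_not_class
    N = A + B + C + D
    return N * (A * D - B * C) ** 2 / ((A + B) * (C + D) * (A + C) * (B + D))

# Hypothetical counts: 'merger' appears in 80 of 100 'acquisitions' documents
# and in 40 of 900 other documents.
print(f"chi2 = {chi_square(80, 40, 20, 860):.1f}")
```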

Analysis of Quantization Noise in Magnetic Resonance Imaging Systems (자기공명영상 시스템의 양자화잡음 분석)

  • Ahn C.B.
    • Investigative Magnetic Resonance Imaging
    • /
    • v.8 no.1
    • /
    • pp.42-49
    • /
    • 2004
  • Purpose: The quantization noise in magnetic resonance imaging (MRI) systems is analyzed. The signal-to-quantization-noise ratio (SQNR) in the reconstructed image is derived from the level of quantization of the signal in the spatial frequency domain. Based on the derived formula, the SQNRs for various main magnetic fields and receiver systems are evaluated; the evaluation shows that quantization noise can be a major noise source determining the overall system signal-to-noise ratio (SNR) in high-field MRI systems. A few methods to reduce the quantization noise are suggested. Materials and Methods: In Fourier imaging methods, the spin density distribution is encoded by phase- and frequency-encoding gradients in such a way that it becomes a distribution in the spatial frequency domain. Thus the quantization noise in the spatial frequency domain can be expressed in terms of the SQNR in the reconstructed image. The validity of the derived formula is confirmed by experiments and computer simulation. Results: Using the derived formula, the SQNRs for various main magnetic fields and receiver systems are evaluated. Since the quantization noise is proportional to the signal amplitude yet cannot be reduced by simple signal averaging, it can be a serious problem in high-field imaging. In many receiver systems employing analog-to-digital converters (ADCs) at 16 bits/sample, the quantization noise can be a major noise source limiting the overall system SNR, especially in high-field imaging. Conclusion: MRI field strengths keep increasing for functional imaging and spectroscopy. In a high-field MRI system, the signal amplitude becomes larger, with stronger susceptibility effects and wider spectral separation. Since the quantization noise is proportional to the signal amplitude, if the ADCs in the receiver system do not have enough conversion bits, the increased signal amplitude may not be fully utilized for SNR enhancement because the quantization noise increases as well. Evaluation of the SQNR for various systems using the formula shows that the quantization noise can be a major noise source limiting overall system SNR, especially in three-dimensional imaging at high field. Oversampling and off-center sampling are alternative solutions that reduce the quantization noise without replacing the receiver system.
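
To illustrate the effect numerically, the sketch below quantizes the k-space of a simple digital phantom and measures the image-domain SQNR. This is a simulation under an assumed uniform quantizer scaled to the k-space peak, not the paper's closed-form derivation, but it reproduces the expected gain of roughly 6 dB per additional ADC bit.

```python
import numpy as np

def sqnr_db(bits, n=256):
    """Quantize the k-space of a square phantom to `bits` per sample
    (real and imaginary parts separately) and return image-domain SQNR."""
    img = np.zeros((n, n))
    img[n // 4:3 * n // 4, n // 4:3 * n // 4] = 1.0   # square phantom
    k = np.fft.fft2(img)
    step = 2 * np.abs(k).max() / 2**bits              # uniform quantizer step
    kq = (np.round(k.real / step) + 1j * np.round(k.imag / step)) * step
    noise = np.fft.ifft2(kq - k)                      # quantization error image
    return 10 * np.log10(np.mean(img**2) / np.mean(np.abs(noise)**2))

for b in (12, 14, 16):
    print(f"{b} bits: SQNR = {sqnr_db(b):.1f} dB")    # each extra bit adds ~6 dB
```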
