• Title/Summary/Keyword: three-dimensional information

Search results: 2,268

Fracture Pattern and Physical Property of the Granodiorite for Stone Resources in the Nangsan Area (낭산일대에 분포하는 화강섬록암 암석자원의 열극체계 및 물리적 특성)

  • Yun, Hyun-Soo;Hong, Sei-Sun;Park, Deok-Won
    • The Journal of the Petrological Society of Korea
    • /
    • v.16 no.3
    • /
    • pp.144-161
    • /
    • 2007
  • The studied Nangsan area is widely covered by the Jurassic biotite granodiorite, which is mainly light grey in color and medium-grained in texture. Results of the regional fracture pattern analysis for the granodiorite body are as follows. Strike directions of the fractures show three dominant sets in order of frequency: (1) $N80^{\circ}{\sim}90^{\circ}E$ (1st order) > (2) $N70^{\circ}{\sim}80^{\circ}E$ (2nd order) > (3) $NS{\sim}N10^{\circ}E$ (3rd order). Fracture spacings are predominantly less than 200 cm. The granodiorite of the area therefore has more potential for non-dimension stone than for dimension stone. The orientations of vertical quarrying planes can also be divided into two groups in terms of frequency: (1) $N14^{\circ}W{\sim}N16^{\circ}E$ (1st order) and (2) $N78^{\circ}E{\sim}N88^{\circ}E$ (2nd order). The orientations of these two groups differ somewhat from those of the regional fracture patterns, which can be attributed mainly to the preferred orientations of microcracks developed in the quarries. Among the physical properties, the specific gravity, absorption ratio, porosity, compressive strength, tensile strength and abrasive hardness are 2.65, 0.28%, 0.73%, $1,628 kg/cm^2$, $100 kg/cm^2$ and 31, respectively. In contrast to the porosity, the granites of the Nangsan and Sogrisan areas show almost identical values of abrasive hardness, which can be explained by the differences in Qz+Af modes, an index of abrasive resistance. A comprehensive understanding of the orientations of vertical quarrying planes and of the various physical properties is expected to serve as important information for stone resources.

A Microgravity for Mapping and Monitoring the Subsurface Cavities (지하 공동의 탐지와 모니터링을 위한 고정밀 중력탐사)

  • Park, Yeong-Sue;Rim, Hyoung-Rae;Lim, Mu-Taek;Koo, Sung-Bon
    • Geophysics and Geophysical Exploration
    • /
    • v.10 no.4
    • /
    • pp.383-392
    • /
    • 2007
  • Karstic features and mining-related cavities not only lead to severe restrictions on land utilization, but also raise serious concerns about geohazards and groundwater contamination. A microgravity survey was applied to detect, map and monitor karstic cavities at the test site in Muan prepared by KIGAM. The gravity data were collected with an AutoGrav CG-3 gravimeter at about 800 stations at 5 m intervals along paddy paths. The density distribution beneath the profiles was obtained by two-dimensional inversion based on the minimum support stabilizing functional, which generated better focused images of density discontinuities. We also imaged the three-dimensional density distribution by growing-body inversion, using solutions from Euler deconvolution as a priori information. The density image showed that the cavities had been dissolved, enlarged and connected into a cavity network system, which was supported by drill-hole logs. A time-lapse microgravity survey was carried out on the road in the test site to monitor changes in the subsurface density distribution before and after grouting. The data were adjusted to reduce the effects of the different conditions of each survey, and were inverted to density distributions; these show the change of density structure over the elapsed time, reflecting the effect of grouting. This case history at the Muan test site showed that microgravity surveying with ${\mu}Gal$-level accuracy and precision is an effective and practical tool for detecting, mapping and monitoring subsurface cavities.
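
The abstract does not include the inversion code; as a rough, hedged illustration of the μGal-scale signals such a survey targets, the sketch below computes the forward gravity anomaly of a buried spherical cavity using the standard point-mass approximation. The cavity radius, depth, density contrast, and station spacing are assumed values chosen only for illustration.

```python
import numpy as np

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def sphere_gz_microgal(x, depth, radius, drho):
    """Vertical gravity anomaly (microGal) of a buried sphere at horizontal
    offsets x (m), using the point-mass approximation."""
    mass = (4.0 / 3.0) * np.pi * radius**3 * drho      # anomalous mass, kg
    gz = G * mass * depth / (x**2 + depth**2) ** 1.5   # m/s^2
    return gz * 1e8                                    # 1 microGal = 1e-8 m/s^2

# Hypothetical air-filled cavity: 2 m radius, 10 m deep, density contrast
# -2300 kg/m^3 (air minus host rock); stations every 5 m as in the survey.
x = np.arange(-50.0, 51.0, 5.0)
print(sphere_gz_microgal(x, depth=10.0, radius=2.0, drho=-2300.0))
# Peak amplitude is only a few microGal, which is why microGal precision is needed.
```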

Transfer Learning using Multiple ConvNet Layers Activation Features with Principal Component Analysis for Image Classification (전이학습 기반 다중 컨볼류션 신경망 레이어의 활성화 특징과 주성분 분석을 이용한 이미지 분류 방법)

  • Byambajav, Batkhuu;Alikhanov, Jumabek;Fang, Yang;Ko, Seunghyun;Jo, Geun Sik
    • Journal of Intelligence and Information Systems
    • /
    • v.24 no.1
    • /
    • pp.205-225
    • /
    • 2018
  • A Convolutional Neural Network (ConvNet) is a class of powerful deep neural networks that can analyze and learn hierarchies of visual features. The first such network (the Neocognitron) was introduced in the 1980s. At that time, neural networks were not broadly used in industry or academia because of the shortage of large-scale datasets and the low computational power available. A few decades later, in 2012, Krizhevsky achieved a breakthrough in the ILSVRC-2012 visual recognition competition using a Convolutional Neural Network, which revived interest in neural networks. The success of Convolutional Neural Networks rests on two main factors: the emergence of advanced hardware (GPUs) for sufficient parallel computation, and the availability of large-scale datasets such as the ImageNet (ILSVRC) dataset for training. Unfortunately, many new domains are bottlenecked by these factors. For most domains, it is difficult and laborious to gather a large-scale dataset to train a ConvNet. Moreover, even when a large-scale dataset is available, training a ConvNet from scratch requires expensive resources and is time-consuming. These two obstacles can be overcome by transfer learning, a method for transferring knowledge from a source domain to a new domain. There are two major transfer learning approaches: using the ConvNet as a fixed feature extractor, and fine-tuning the ConvNet on a new dataset. In the first case, a pre-trained ConvNet (e.g., trained on ImageNet) is used to compute feed-forward activations of an image, and activation features are extracted from specific layers. In the second case, the ConvNet classifier is replaced and retrained on the new dataset, and the weights of the pre-trained network are then fine-tuned with backpropagation. In this paper, we focus on using multiple ConvNet layers as a fixed feature extractor only. However, applying the high-dimensional features extracted directly from multiple ConvNet layers is still a challenging problem. We observe that features extracted from multiple ConvNet layers capture different characteristics of the image, which means a better representation could be obtained by finding the optimal combination of multiple ConvNet layers. Based on that observation, we propose to employ multiple ConvNet layer representations for transfer learning instead of a single ConvNet layer representation. Overall, our primary pipeline has three steps. First, each image from the target task is fed forward through a pre-trained AlexNet, and the activation features of its three fully connected layers are extracted. Second, the activation features of the three layers are concatenated into a multiple-ConvNet-layer representation to capture more information about the image; the concatenated representation has 9,192 (4,096 + 4,096 + 1,000) dimensions. However, features extracted from multiple ConvNet layers are redundant and noisy, since they come from the same ConvNet. Thus, in the third step, we use Principal Component Analysis (PCA) to select salient features before the training phase. With salient features, the classifier can classify images more accurately, and the performance of transfer learning improves.
To evaluate the proposed method, experiments were conducted on three standard datasets (Caltech-256, VOC07, and SUN397) to compare the multiple-ConvNet-layer representation against a single-ConvNet-layer representation, using PCA for feature selection and dimension reduction. Our experiments demonstrated the importance of feature selection for the multiple-ConvNet-layer representation. Moreover, our proposed approach achieved 75.6% accuracy compared to 73.9% for the FC7 layer on the Caltech-256 dataset, 73.1% compared to 69.2% for the FC8 layer on the VOC07 dataset, and 52.2% compared to 48.7% for the FC7 layer on the SUN397 dataset. We also showed that our proposed approach achieved superior performance, with accuracy improvements of 2.8%, 2.1% and 3.1% on the Caltech-256, VOC07, and SUN397 datasets, respectively, compared to existing work.
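
A minimal sketch of the fixed-feature-extractor pipeline described above, not the authors' implementation: it uses torchvision's pre-trained AlexNet to collect the fc6, fc7, and fc8 activations, concatenates them into a 9,192-dimensional vector, and indicates where PCA would be applied. The dummy image, the choice of 512 principal components, and the use of pre-ReLU outputs are assumptions.

```python
import numpy as np
import torch
from PIL import Image
from sklearn.decomposition import PCA
from torchvision import models, transforms

# Pre-trained AlexNet used as a fixed feature extractor (no fine-tuning).
alexnet = models.alexnet(weights=models.AlexNet_Weights.IMAGENET1K_V1).eval()

preprocess = transforms.Compose([
    transforms.Resize(256), transforms.CenterCrop(224), transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

@torch.no_grad()
def multi_layer_features(img):
    """Concatenate fc6 (4096), fc7 (4096) and fc8 (1000) activations -> 9192-d.
    Pre-ReLU outputs of the Linear layers are taken here (an assumption)."""
    x = alexnet.features(preprocess(img).unsqueeze(0))
    x = torch.flatten(alexnet.avgpool(x), 1)
    feats = []
    for i, layer in enumerate(alexnet.classifier):
        x = layer(x)
        if i in (1, 4, 6):            # the three Linear layers: fc6, fc7, fc8
            feats.append(x.squeeze(0))
    return torch.cat(feats).numpy()   # shape (9192,)

# Smoke test on a random image-sized array (stand-in for a real dataset image).
dummy = Image.fromarray((np.random.rand(256, 256, 3) * 255).astype("uint8"))
print(multi_layer_features(dummy).shape)   # (9192,)

# With a full dataset, stack the feature vectors and keep the leading principal
# components before training a linear classifier (512 components is an assumed choice):
# X_reduced = PCA(n_components=512).fit_transform(np.stack(all_features))
```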

Estimation of $T_2{^*}$ Relaxation Times for the Glandular Tissue and Fat of Breast at 3T MRI System (3테슬러 자기공명영상기기에서 유방의 유선조직과 지방조직의 $T_2{^*}$이완시간 측정)

  • Ryu, Jung Kyu;Oh, Jang-Hoon;Kim, Hyug-Gi;Rhee, Sun Jung;Seo, Mirinae;Jahng, Geon-Ho
    • Investigative Magnetic Resonance Imaging
    • /
    • v.18 no.1
    • /
    • pp.1-6
    • /
    • 2014
  • Purpose: $T_2{^*}$ relaxation time, which includes susceptibility information, represents a unique feature of tissue. The objective of this study was to investigate the $T_2{^*}$ relaxation times of the normal glandular tissue and fat of the breast using a 3T MRI system. Materials and Methods: Seven-echo MR images were acquired from 52 female subjects (age $49{\pm}12$ years; range, 25 to 75) using a three-dimensional (3D) gradient-echo sequence. Echo times ranged from 2.28 ms to 25.72 ms in 3.91 ms steps. Voxel-based $T_2{^*}$ relaxation time and $R_2{^*}$ relaxation rate maps were calculated using linear curve fitting for each subject. 3D regions-of-interest (ROIs) of the normal glandular tissue and fat were drawn on the longest echo-time image to obtain $T_2{^*}$ and $R_2{^*}$ values, and the mean values of these parameters were calculated over all subjects. Results: The 3D ROI sizes were $4818{\pm}4679$ voxels and $1455{\pm}785$ voxels for the normal glandular tissue and fat, respectively. The mean $T_2{^*}$ values were $22.40{\pm}5.61ms$ and $36.36{\pm}8.77ms$, and the mean $R_2{^*}$ values were $0.0524{\pm}0.0134/ms$ and $0.0297{\pm}0.0069/ms$, for the normal glandular tissue and fat, respectively. Conclusion: $T_2{^*}$ and $R_2{^*}$ values were measured from human breast tissues. $T_2{^*}$ of the normal glandular tissue was shorter than that of fat. Measurement of $T_2{^*}$ relaxation time could be important for understanding susceptibility effects in breast cancer and normal tissue.
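
As a hedged illustration of the voxel-wise linear curve fitting mentioned above (an assumed implementation, not the authors' code): taking the logarithm of the multi-echo signal turns $S(TE) = S_0 e^{-TE \cdot R_2^*}$ into a straight line whose slope gives $R_2^*$, with $T_2^* = 1/R_2^*$.

```python
import numpy as np

# Seven echo times (ms), approximately 2.28 ms to 25.72 ms in ~3.91 ms steps.
TE = np.linspace(2.28, 25.72, 7)

def fit_t2star(signal, te=TE):
    """Voxel-wise mono-exponential fit: ln S = ln S0 - TE * R2*.
    Returns (T2* in ms, R2* in 1/ms)."""
    slope, _intercept = np.polyfit(te, np.log(signal), 1)
    r2star = -slope
    return 1.0 / r2star, r2star

# Self-consistency check on a synthetic decay with T2* = 22.4 ms
# (the mean glandular value reported above).
s = 100.0 * np.exp(-TE / 22.4)
print(fit_t2star(s))   # approximately (22.4, 0.0446)
```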

Exploring Mask Appeal: Vertical vs. Horizontal Fold Flat Masks Using Eye-Tracking (마스크 매력 탐구: 아이트래킹을 활용한 수직 접이형 대 수평 접이형 마스크 비교 분석)

  • Junsik Lee;Nan-Hee Jeong;Ji-Chan Yun;Do-Hyung Park;Se-Bum Park
    • Journal of Intelligence and Information Systems
    • /
    • v.29 no.4
    • /
    • pp.271-286
    • /
    • 2023
  • The global COVID-19 pandemic has transformed face masks from situational accessories to indispensable items in daily life, prompting a shift in public perception and behavior. While the relaxation of mandatory mask-wearing regulations is underway, a significant number of individuals continue to embrace face masks, turning them into a form of personal expression and identity. This phenomenon has given rise to the Fashion Mask industry, characterized by unique designs and colors, which is experiencing rapid growth in the market. However, existing research on masks is predominantly focused on their efficacy in preventing infection or on attitudes during the pandemic, leaving a gap in understanding consumer preferences for mask design. We address this gap by investigating consumer perceptions and preferences for two prevalent mask designs: horizontal fold flat masks and vertical fold flat masks. Through a comprehensive approach involving surveys and eye-tracking experiments, we aim to unravel the subtle differences in how consumers perceive these designs. Our research questions focus on determining which design is more appealing and exploring the reasons behind any observed differences. The study's findings reveal a clear preference for vertical fold flat masks, which are not only preferred but also perceived as unique, sophisticated, three-dimensional, and lively. The eye-tracking analysis provides insights into the visual attention patterns associated with mask designs, highlighting the pivotal role of the fold line in influencing these patterns. This research contributes to the evolving understanding of masks as a fashion statement and provides valuable insights for manufacturers and marketers in the Fashion Mask industry. The results have implications beyond the pandemic, emphasizing the importance of design elements in sustaining consumer interest in face masks.

Finite Element Method Modeling for Individual Malocclusions: Development and Application of the Basic Algorithm (유한요소법을 이용한 환자별 교정시스템 구축의 기초 알고리즘 개발과 적용)

  • Shin, Jung-Woog;Nahm, Dong-Seok;Kim, Tae-Woo;Lee, Sung Jae
    • The korean journal of orthodontics
    • /
    • v.27 no.5 s.64
    • /
    • pp.815-824
    • /
    • 1997
  • The purpose of this study is to develop the basic algorithm for finite element method modeling of individual malocclusions. Usually, a great deal of time is spent on preprocessing. To reduce the time required, we developed a standardized procedure for measuring the position of each tooth and a program to automate preprocessing. The following procedures were carried out to complete this study. 1. Twenty-eight tooth morphologies were constructed three-dimensionally for the finite element analysis and saved as separate files. 2. Standard brackets were attached so that the FA points coincide with the centers of the brackets. 3. A study model of the patient was made. 4. Using the study model, the crown inclination, angulation, and the vertical distance from the tip of each tooth were measured with specially designed tools. 5. The arch form was determined from a picture of the model with an image-processing technique. 6. The measured data were entered as a rotation matrix. 7. The program produces an output file containing the necessary information about the three-dimensional positions of the teeth, which is applicable to several commonly used finite element programs. The basic algorithm was implemented in Turbo C, and the resulting output file was applied to ANSYS. This standardized model-measuring procedure and program reduce the time required, especially for preprocessing, and can easily be applied to other malocclusions.
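
The abstract does not reproduce the Turbo C program; the sketch below is only an assumed illustration of how measured angulation, inclination, and rotation angles could be composed into a rotation matrix for placing a tooth model. The axis conventions and all numeric values are hypothetical.

```python
import numpy as np

def rotation_matrix(angulation_deg, inclination_deg, rotation_deg):
    """Compose a 3x3 rotation matrix from three measured angles.
    Axis convention (assumed): angulation about x, inclination about y,
    rotation about the tooth's long axis z."""
    ax, ay, az = np.radians([angulation_deg, inclination_deg, rotation_deg])
    rx = np.array([[1, 0, 0],
                   [0, np.cos(ax), -np.sin(ax)],
                   [0, np.sin(ax),  np.cos(ax)]])
    ry = np.array([[ np.cos(ay), 0, np.sin(ay)],
                   [0, 1, 0],
                   [-np.sin(ay), 0, np.cos(ay)]])
    rz = np.array([[np.cos(az), -np.sin(az), 0],
                   [np.sin(az),  np.cos(az), 0],
                   [0, 0, 1]])
    return rz @ ry @ rx

# Place a tooth node set (N x 3 coordinates) with measured angles and a
# translation taken from the arch form; all values below are hypothetical.
nodes = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 8.5]])
placed = nodes @ rotation_matrix(5.0, -7.0, 2.0).T + np.array([12.3, 30.1, 0.0])
print(placed)
```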

A Preliminary Study for Nonlinear Dynamic Analysis of EEG in Patients with Dementia of Alzheimer's Type Using Lyapunov Exponent (리아프노프 지수를 이용한 알쯔하이머형 치매 환자 뇌파의 비선형 역동 분석을 위한 예비연구)

  • Chae, Jeong-Ho;Kim, Dai-Jin;Choi, Sung-Bin;Bahk, Won-Myong;Lee, Chung Tai;Kim, Kwang-Soo;Jeong, Jaeseung;Kim, Soo-Yong
    • Korean Journal of Biological Psychiatry
    • /
    • v.5 no.1
    • /
    • pp.95-101
    • /
    • 1998
  • Changes in the electroencephalogram (EEG) of patients with dementia of Alzheimer's type are most commonly studied by analyzing power or magnitude in traditionally defined frequency bands. However, in the absence of a metric that quantifies the amount of complex information, such linear methods have many limitations. According to chaos theory, irregular EEG signals can also result from low-dimensional deterministic chaos. Chaotic nonlinear dynamics in the EEG can be studied by calculating the largest Lyapunov exponent ($L_1$). The authors analyzed EEG epochs from three patients with dementia of Alzheimer's type and three matched control subjects. The largest Lyapunov exponent was calculated from EEG epochs consisting of 16,384 data points per channel over 15 channels. The results showed that patients with dementia of Alzheimer's type had significantly lower $L_1$ than non-demented controls on 8 channels. Topographic analysis showed that $L_1$ was significantly lower in patients with Alzheimer's disease over the frontal, temporal, central, and occipital regions. These results indicate that the brains of patients with dementia of Alzheimer's type show a decreased chaotic quality of electrophysiological behavior. We conclude that nonlinear analysis, such as calculation of $L_1$, can be a promising tool for detecting relative changes in the complexity of brain dynamics.
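
The abstract does not state which estimator was used for $L_1$; as an illustration only, the sketch below implements one standard estimator of the largest Lyapunov exponent (Rosenstein's method) for a single-channel series. The embedding dimension, delay, Theiler window, and fit range are assumed parameters, and the test signal is a chaotic logistic map rather than EEG data.

```python
import numpy as np
from scipy.spatial.distance import cdist

def largest_lyapunov(x, dim=3, tau=1, theiler=10, k_max=10, dt=1.0):
    """Rosenstein-style estimate of the largest Lyapunov exponent of a scalar
    series x: delay-embed, pair each point with its nearest neighbour outside
    a Theiler window, and fit the slope of the mean log divergence."""
    n = len(x) - (dim - 1) * tau
    emb = np.column_stack([x[i * tau:i * tau + n] for i in range(dim)])

    dists = cdist(emb, emb)                       # pairwise distances
    idx = np.arange(n)
    dists[np.abs(idx[:, None] - idx[None, :]) < theiler] = np.inf
    nn = np.argmin(dists, axis=1)                 # nearest-neighbour indices

    mean_log_div = []
    for k in range(1, k_max):                     # follow both trajectories k steps
        valid = (idx + k < n) & (nn + k < n)
        sep = np.linalg.norm(emb[idx[valid] + k] - emb[nn[valid] + k], axis=1)
        mean_log_div.append(np.log(sep[sep > 0]).mean())
    t = dt * np.arange(1, k_max)
    slope, _ = np.polyfit(t, mean_log_div, 1)     # slope of the divergence curve
    return slope

# Sanity check on a chaotic logistic map; the estimate should be clearly positive.
x = np.empty(2000); x[0] = 0.4
for i in range(1, 2000):
    x[i] = 3.9 * x[i - 1] * (1.0 - x[i - 1])
print(largest_lyapunov(x))
```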

Urban archaeological investigations using surface 3D Ground Penetrating Radar and Electrical Resistivity Tomography methods (3차원 지표레이다와 전기비저항 탐사를 이용한 도심지 유적 조사)

  • Papadopoulos, Nikos;Sarris, Apostolos;Yi, Myeong-Jong;Kim, Jung-Ho
    • Geophysics and Geophysical Exploration
    • /
    • v.12 no.1
    • /
    • pp.56-68
    • /
    • 2009
  • Ongoing and extensive urbanisation, frequently accompanied by careless construction work, may threaten important archaeological structures that are still buried in urban areas. The Ground Penetrating Radar (GPR) and Electrical Resistivity Tomography (ERT) methods are among the most promising alternatives for resolving buried archaeological structures in urban territories. In this work, three case studies are presented, each of which involves an integrated geophysical survey employing the surface three-dimensional (3D) ERT and GPR techniques in order to archaeologically characterise the investigated areas. The test field sites are located in the historical centres of two of the most populated cities of the island of Crete, Greece. The ERT and GPR data were collected along a dense network of parallel profiles. The subsurface resistivity structure was reconstructed by processing the apparent resistivity data with a 3D inversion algorithm. The GPR sections were processed in a systematic way, applying specific filters to the data in order to enhance their information content. Finally, horizontal depth slices representing the 3D variation of the physical properties were created. The GPR and ERT images contributed significantly to reconstructing the complex subsurface properties in these urban areas. Strong GPR reflections and high-resistivity anomalies were correlated with possible archaeological structures. Subsequent excavations at specific places at both sites verified the geophysical results. These case studies demonstrate the applicability of the ERT and GPR techniques during the design and construction stages of urban infrastructure works, indicating areas of archaeological significance and guiding archaeological excavations before construction work.
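
The ERT workflow described above starts from apparent resistivities derived from measured voltage and current; as a small, hedged illustration (the 3D inversion itself is beyond a sketch), the code below computes the geometric factor and apparent resistivity for a generic surface four-electrode array. The electrode positions and readings are hypothetical.

```python
import numpy as np

def apparent_resistivity(a_pos, b_pos, m_pos, n_pos, delta_v, current):
    """Apparent resistivity (ohm-m) for a surface four-electrode array with
    current electrodes A, B and potential electrodes M, N at 1-D positions (m):
    rho_a = K * dV / I with K = 2*pi / (1/AM - 1/BM - 1/AN + 1/BN)."""
    am = abs(m_pos - a_pos); bm = abs(m_pos - b_pos)
    an = abs(n_pos - a_pos); bn = abs(n_pos - b_pos)
    k = 2.0 * np.pi / (1.0 / am - 1.0 / bm - 1.0 / an + 1.0 / bn)
    return k * delta_v / current

# Wenner array with spacing a = 2 m: K reduces to 2*pi*a, so a homogeneous
# 100 ohm-m half-space and a 1 A current give dV = 100 / (2*pi*2) volts.
print(apparent_resistivity(0.0, 6.0, 2.0, 4.0,
                           delta_v=100.0 / (2.0 * np.pi * 2.0), current=1.0))
# -> 100.0
```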

Hierarchical Overlapping Clustering to Detect Complex Concepts (중복을 허용한 계층적 클러스터링에 의한 복합 개념 탐지 방법)

  • Hong, Su-Jeong;Choi, Joong-Min
    • Journal of Intelligence and Information Systems
    • /
    • v.17 no.1
    • /
    • pp.111-125
    • /
    • 2011
  • Clustering is the process of grouping similar or related documents into a cluster and assigning a meaningful concept to that cluster. By narrowing the search down to the collection of documents belonging to related clusters, clustering facilitates fast and accurate retrieval of relevant documents. Effective clustering requires techniques for identifying similar documents and grouping them into a cluster, and for discovering the concept most relevant to the cluster. One of the problems often appearing in this context is the detection of a complex concept that overlaps with several simple concepts at the same hierarchical level. Previous clustering methods were unable to identify and represent a complex concept that belongs to several different clusters at the same level in the concept hierarchy, and could not validate the semantic hierarchical relationship between a complex concept and each of the simple concepts. To solve these problems, this paper proposes a new clustering method that identifies and represents complex concepts efficiently. We developed the Hierarchical Overlapping Clustering (HOC) algorithm, which modifies the traditional agglomerative hierarchical clustering algorithm to allow overlapping clusters at the same level in the concept hierarchy. The HOC algorithm represents the clustering result not as a tree but as a lattice in order to detect complex concepts. We developed a system that employs the HOC algorithm to carry out complex concept detection. This system operates in three phases: 1) preprocessing of the documents, 2) clustering using the HOC algorithm, and 3) validation of the semantic hierarchical relationships among the concepts in the lattice obtained as a result of clustering. The preprocessing phase represents the documents as x-y coordinate values in a two-dimensional space by considering the weights of the terms appearing in the documents. First, the documents go through a refinement process of stopword removal and stemming to extract index terms. Then, each index term is assigned a TF-IDF weight, and the x-y coordinate value for each document is determined by combining the TF-IDF values of its terms. The clustering phase uses the HOC algorithm, in which the similarity between documents is calculated using the Euclidean distance. Initially, a cluster is generated for each document by grouping the documents closest to it. Then, the distance between any two clusters is measured, and the closest clusters are merged into a new cluster. This process is repeated until the root cluster is generated. In the validation phase, feature selection is applied to validate the appropriateness of the cluster concepts built by the HOC algorithm, i.e., whether they have meaningful hierarchical relationships. Feature selection is a method of extracting key features from a document by identifying and weighting its important and representative terms. In order to correctly select key features, a method is needed to determine how each term contributes to the class of the document. Among the several methods achieving this goal, this paper adopted the $\chi^2$ statistic, which measures the degree of dependency of a term t on a class c and represents the relationship between t and c as a numerical value. To demonstrate the effectiveness of the HOC algorithm, a series of performance evaluations was carried out using the well-known Reuters-21578 news collection. The results showed that the HOC algorithm contributes greatly to detecting and producing complex concepts by generating the concept hierarchy in a lattice structure.
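
A condensed, hedged sketch of the pipeline with standard scikit-learn tools, not the HOC algorithm itself (which additionally allows overlapping clusters and records the result as a lattice): TF-IDF weighting with stopword removal, agglomerative clustering with Euclidean distance, and $\chi^2$ scoring of terms against the cluster labels as the validation step. The toy corpus and the number of clusters are placeholders; the paper evaluates on Reuters-21578 and maps documents to 2-D coordinates, which this sketch skips.

```python
from sklearn.cluster import AgglomerativeClustering
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.feature_selection import chi2

# Toy corpus standing in for Reuters-21578 documents.
docs = ["grain wheat export tonnes", "wheat corn harvest export",
        "crude oil price rises", "oil barrel price falls"]

# 1) Preprocessing: stopword removal and TF-IDF term weighting.
tfidf = TfidfVectorizer(stop_words="english")
X = tfidf.fit_transform(docs).toarray()

# 2) Clustering: ordinary agglomerative clustering with Euclidean distance
#    (the default metric for average linkage); HOC would additionally let a
#    document join several clusters at the same level.
labels = AgglomerativeClustering(n_clusters=2, linkage="average").fit_predict(X)

# 3) Validation: chi-square dependency of each term on the cluster labels,
#    i.e. the feature-selection step used to check the cluster concepts.
scores, _ = chi2(X, labels)
top_terms = sorted(zip(scores, tfidf.get_feature_names_out()), reverse=True)[:3]
print(labels, top_terms)
```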

Feasibility of Automated Detection of Inter-fractional Deviation in Patient Positioning Using Structural Similarity Index: Preliminary Results (Structural Similarity Index 인자를 이용한 방사선 분할 조사간 환자 체위 변화의 자동화 검출능 평가: 초기 보고)

  • Youn, Hanbean;Jeon, Hosang;Lee, Jayeong;Lee, Juhye;Nam, Jiho;Park, Dahl;Kim, Wontaek;Ki, Yongkan;Kim, Donghyun
    • Progress in Medical Physics
    • /
    • v.26 no.4
    • /
    • pp.258-266
    • /
    • 2015
  • Modern radiotherapy techniques that deliver large doses to patients require the positions of patients or tumors to be confirmed more accurately by using high-definition X-ray projection images. However, the rapid increase in patient exposure and in image information required for CT image acquisition may place an additional burden on the patient. In this study, by introducing the structural similarity (SSIM) index, which can effectively extract the structural information of an image, we analyze the differences between daily acquired X-ray images of a patient to verify the accuracy of patient positioning. First, to simulate a moving target, spherical computational phantoms of varying sizes and positions were created and projection images were acquired. Differences between the images were automatically detected and analyzed by extracting their SSIM values. In addition, as a clinical test, differences between the daily X-ray images of a patient acquired over 12 days were detected in the same way. As a result, we confirmed that the SSIM index varied in the range of 0.85~1 (0.006~1 when a region of interest (ROI) was applied) as the size or position of the phantom changed. The SSIM was more sensitive to changes of the phantom when the ROI was limited to the phantom itself. In the clinical test, the daily change of patient position corresponded to SSIM values of 0.799~0.853, which described the differences among the images well. Therefore, we expect that the SSIM index can provide an objective and quantitative technique for verifying patient position using simple X-ray images, instead of time- and cost-intensive three-dimensional X-ray imaging.
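
A minimal sketch of the comparison step using scikit-image (an assumed implementation, not the authors' code): it computes the SSIM between a reference setup image and a daily image, optionally restricted to an ROI, and flags scores below a threshold. The 0.85 threshold, the ROI, and the synthetic images are illustrative assumptions.

```python
import numpy as np
from skimage.metrics import structural_similarity as ssim

def positioning_check(reference, daily, roi=None, threshold=0.85):
    """Compare two x-ray projection images with SSIM; values near 1 mean the
    daily setup closely matches the reference. The 0.85 threshold is an
    illustrative choice, not a clinically validated one."""
    if roi is not None:                     # roi = (row_slice, col_slice)
        reference, daily = reference[roi], daily[roi]
    score = ssim(reference, daily, data_range=daily.max() - daily.min())
    return score, score >= threshold

# Synthetic example: a shifted copy of a random stand-in "projection image"
# simulates a 3-pixel setup shift, which drives the SSIM score down.
rng = np.random.default_rng(0)
ref = rng.random((256, 256))
moved = np.roll(ref, shift=3, axis=0)
print(positioning_check(ref, moved, roi=(slice(64, 192), slice(64, 192))))
```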