• Title/Summary/Keyword: structure detection


A novel approach for the definition and detection of structural irregularity in reinforced concrete buildings

  • S.P. Akshara;M. Abdul Akbar;T.M. Madhavan Pillai;Renil Sabhadiya;Rakesh Pasunuti
    • Structural Monitoring and Maintenance
    • /
    • v.11 no.2
    • /
    • pp.101-126
    • /
    • 2024
  • To avoid irregularities in buildings, design codes worldwide have introduced detailed guidelines for checking and rectifying them. However, the criteria used to define and identify each of the plan and vertical irregularities are specific and may vary between the codes of different countries, making their implementation difficult. This short communication paper proposes a novel approach for quantifying different types of structural irregularities using a common parameter, named the unified identification factor, which is defined exclusively for the columns based on their axial loads and tributary areas. The calculation of the identification factor is demonstrated through the analysis of rectangular and circular reinforced concrete models using ETABS v18.0.2, which are further modified to generate plan-irregular (torsional irregularity, cut-out in the floor slab and non-parallel lateral force system) and vertically irregular (mass irregularity, vertical geometric irregularity and floating columns) models. The identification factor is calculated for all the columns of a building and the range within which the values lie is identified. The results indicate that this range is much wider for an irregular building than for a regular configuration, implying a strong correlation between the identification factor and structural irregularity. Further, the identification factor is compared for different columns within a floor and between floors for each building model. The findings suggest that the value will be abnormally high or low for a column in the vicinity of an irregularity. The proposed factor could thus be used in the preliminary structural design phase to eliminate the complications that might arise from the geometry of the structure when subjected to lateral loads. The unified approach could also be incorporated in future revisions of codes as a replacement for the numerous criteria currently used for classifying different types of irregularities.
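
Below is a minimal illustrative sketch (not taken from the paper) of how a column-level identification factor and its range could be computed. The exact definition of the factor is not given in the abstract, so the load-to-tributary-area ratio normalized by the floor mean is an assumption used here purely for illustration.

```python
# Hypothetical sketch of a column-level "identification factor" check.
# The paper defines the factor from column axial loads and tributary areas;
# the exact formula is not stated in the abstract, so the normalized
# load/area ratio below is an illustrative assumption, not the paper's method.

def identification_factors(columns):
    """columns: list of dicts with 'axial_load' (kN) and 'tributary_area' (m^2)."""
    ratios = [c["axial_load"] / c["tributary_area"] for c in columns]
    mean_ratio = sum(ratios) / len(ratios)
    return [r / mean_ratio for r in ratios]

def irregularity_range(factors):
    """A wide min-to-max spread flags a likely irregularity."""
    return max(factors) - min(factors)

if __name__ == "__main__":
    floor = [
        {"axial_load": 820.0, "tributary_area": 12.0},
        {"axial_load": 790.0, "tributary_area": 12.0},
        {"axial_load": 1450.0, "tributary_area": 12.0},  # column near a hypothetical floating-column zone
    ]
    f = identification_factors(floor)
    print([round(v, 2) for v in f], round(irregularity_range(f), 2))
```

A wide spread would then flag the model for closer irregularity checks, mirroring the range-based interpretation described in the abstract.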

Hierarchical Overlapping Clustering to Detect Complex Concepts (중복을 허용한 계층적 클러스터링에 의한 복합 개념 탐지 방법)

  • Hong, Su-Jeong;Choi, Joong-Min
    • Journal of Intelligence and Information Systems
    • /
    • v.17 no.1
    • /
    • pp.111-125
    • /
    • 2011
  • Clustering is the process of grouping similar or relevant documents into a cluster and assigning a meaningful concept to that cluster. By doing so, clustering facilitates fast and accurate search for relevant documents by narrowing the search down to the collection of documents belonging to related clusters. Effective clustering requires techniques for identifying similar documents and grouping them into a cluster, and for discovering the concept that is most relevant to the cluster. One of the problems often appearing in this context is the detection of a complex concept that overlaps with several simple concepts at the same hierarchical level. Previous clustering methods were unable to identify and represent a complex concept that belongs to several different clusters at the same level in the concept hierarchy, and also could not validate the semantic hierarchical relationship between a complex concept and each of the simple concepts. In order to solve these problems, this paper proposes a new clustering method that identifies and represents complex concepts efficiently. We developed the Hierarchical Overlapping Clustering (HOC) algorithm, which modifies the traditional agglomerative hierarchical clustering algorithm to allow overlapping clusters at the same level in the concept hierarchy. The HOC algorithm represents the clustering result not by a tree but by a lattice in order to detect complex concepts. We developed a system that employs the HOC algorithm to carry out the goal of complex concept detection. This system operates in three phases: 1) preprocessing of documents, 2) clustering using the HOC algorithm, and 3) validation of the semantic hierarchical relationships among the concepts in the lattice obtained as a result of clustering. The preprocessing phase represents the documents as x-y coordinate values in a 2-dimensional space by considering the weights of the terms appearing in the documents. First, it goes through a refinement process, applying stopword removal and stemming to extract index terms. Then, each index term is assigned a TF-IDF weight, and the x-y coordinate value for each document is determined by combining the TF-IDF values of the terms in it. The clustering phase uses the HOC algorithm, in which the similarity between documents is calculated using the Euclidean distance. Initially, a cluster is generated for each document by grouping the documents that are closest to it. Then, the distance between any two clusters is measured, and the closest clusters are grouped into a new cluster. This process is repeated until the root cluster is generated. In the validation phase, a feature selection method is applied to validate the appropriateness of the cluster concepts built by the HOC algorithm, to see whether they have meaningful hierarchical relationships. Feature selection is a method of extracting key features from a document by identifying and assigning weight values to important and representative terms in the document. In order to correctly select key features, a method is needed to determine how much each term contributes to the class of the document. Among several methods achieving this goal, this paper adopted the $\chi^2$ statistic, which measures the degree of dependency of a term t on a class c and represents the relationship between t and c by a numerical value. To demonstrate the effectiveness of the HOC algorithm, a series of performance evaluations was carried out using the well-known Reuters-21578 news collection. The results showed that the HOC algorithm contributes greatly to detecting and producing complex concepts by generating the concept hierarchy in a lattice structure.
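
The validation phase rests on the $\chi^2$ term-class statistic and the clustering phase on Euclidean distances between 2-D document coordinates; the short sketch below shows both pieces in isolation. The full HOC lattice construction is not reproduced, and the contingency-table counts are made up for illustration.

```python
import math

def chi_square(A, B, C, D):
    """
    Chi-square dependency of term t on class c from a 2x2 contingency table:
      A = docs in c containing t       B = docs outside c containing t
      C = docs in c without t          D = docs outside c without t
    """
    N = A + B + C + D
    denom = (A + C) * (B + D) * (A + B) * (C + D)
    return 0.0 if denom == 0 else N * (A * D - C * B) ** 2 / denom

def euclidean(p, q):
    """Distance between two documents placed at 2-D (x, y) coordinates."""
    return math.hypot(p[0] - q[0], p[1] - q[1])

# A term concentrated inside one class scores high; nearby documents get merged first.
print(round(chi_square(A=40, B=5, C=10, D=145), 1))
print(round(euclidean((0.2, 0.7), (0.25, 0.72)), 3))
```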

A Methodology of Ship Detection Using High-Resolution Satellite Optical Image (고해상도 광학 인공위성 영상을 활용한 선박탐지 방법)

  • Park, Jae-Jin;Oh, Sangwoo;Park, Kyung-Ae;Lee, Min-Sun;Jang, Jae-Cheol;Lee, Moonjin
    • Journal of the Korean earth science society
    • /
    • v.39 no.3
    • /
    • pp.241-249
    • /
    • 2018
  • As international trade increases, vessel traffic around the Korean Peninsula is also increasing. Maritime accidents hence take place more frequently on the southern coast of Korea, where many large and small ports are located. Accidents involving ship collision and sinking result in substantial human and material damage as well as marine environmental pollution. Therefore, it is necessary to locate ships quickly when such accidents occur. In this study, we suggest a new ship detection index by comparing and analyzing the reflectivity of each channel of Korea MultiPurpose SATellite-2 (KOMPSAT-2) images of the area around Gwangyang Bay. A threshold value of 0.1 is set based on a histogram analysis, and all vessels are detected when compared with RGB composite images. After selecting a relatively large ship as a representative sample, the spatial distribution of reflectivity around the ship is studied. Uniform shadows are detected on the northwest side of the vessel. This indicates that the sun was in the southeast; the solar azimuth recorded for the actual satellite image is $144.80^{\circ}$, so the azimuth angle of the sun can be estimated from the shadow position. The reflectivity of the shadows is 0.005 lower than that of the surrounding sea and the ship. The shadow height varies with the position of the bow and the stern, perhaps due to the relative heights of the ship's deck and structures. The results of this study can support search techniques for missing vessels using optical satellite images in the event of a marine accident around the Korean Peninsula.
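
The abstract does not give the exact channel combination behind the proposed ship detection index, so the sketch below uses an assumed NIR-minus-visible contrast with the stated 0.1 threshold, purely to illustrate the thresholding step on reflectance imagery.

```python
import numpy as np

def ship_detection_mask(nir, red, green, blue, threshold=0.1):
    """
    Hypothetical ship detection index: the paper's exact channel combination
    is not given in the abstract, so a NIR-minus-visible contrast is assumed.
    Inputs are 2-D reflectance arrays (0-1); output is a boolean ship mask.
    """
    visible_mean = (red + green + blue) / 3.0
    index = nir - visible_mean      # bright hulls stand out against dark sea
    return index > threshold        # 0.1 threshold chosen from a histogram analysis

if __name__ == "__main__":
    # Synthetic reflectance patches: dark sea plus one bright ship-like target.
    sea = np.full((100, 100), 0.03)
    nir = sea.copy()
    nir[40:45, 50:60] = 0.35
    mask = ship_detection_mask(nir, sea, sea, sea)
    print(mask.sum(), "ship pixels detected")
```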

Characteristics of the Factor Structure of the Child Behavior Checklist Dysregulation Profile for School-aged Children (학령기 아동의 CBCL 조절곤란프로파일(Child Behavior Checklist Dysregulation Profile)의 요인구조와 특성)

  • Kim, Eun-young;Ha, Eun-hye
    • Korean Journal of School Psychology
    • /
    • v.17 no.1
    • /
    • pp.17-38
    • /
    • 2020
  • This study examined the factor structure of the Child Behavior Checklist Dysregulation Profile (CBCL-DP) for school-aged children in Korea and identified differences in the level of maladjustment and problematic behaviors between a clinical group, which showed the characteristics of the CBCL-DP, and a control group, which did not. Confirmatory factor analysis was performed on three alternative models from the literature to determine which was the most appropriate factor structure for the CBCL-DP. The results showed that the bi-factor model fit the sample data better than both the one-factor and second-order factor models. To confirm that the bi-factor model was the most appropriate factor structure, regression paths with relevant variables were examined. These showed that the CBCL-DP under the bi-factor model was associated with executive function difficulty as reported by parents, and with school adjustment and all sub-factors of strengths and difficulties as reported by teachers. The results also showed that this model had a different relationship with anxiety/depression, aggressive behavior, and attention problems than the other models. The clinical group was shown to have more executive function difficulty, poorer school adjustment, and a lower likelihood of engaging in desired behaviors than the control group. These results indicate that the CBCL-DP is more related to negative outcomes than any other factor, and that the bi-factor model best fit the sample data, consistent with other studies. The early discovery of the CBCL-DP can be used to provide interventions for high-risk children who exhibit emotional and behavioral problems, making its detection a significant diagnostic tool. The implications of these results, the limitations of this study, and areas for future research are discussed.
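
As a rough illustration of what a bi-factor structure implies for the data (not the authors' confirmatory factor analysis), the toy simulation below generates item scores from a general dysregulation factor plus orthogonal specific factors; all loadings, scale names, and sample sizes are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500  # hypothetical sample size

# Bi-factor structure: every item loads on a general dysregulation factor (G)
# and on exactly one orthogonal specific factor (anxiety/depression,
# attention problems, aggressive behavior). Loadings are illustrative only.
G = rng.normal(size=n)
S = {k: rng.normal(size=n) for k in ("anxdep", "attention", "aggressive")}

def items(specific, n_items=3, g_load=0.6, s_load=0.5):
    """Generate item scores = g_load*G + s_load*specific factor + noise."""
    return np.column_stack(
        [g_load * G + s_load * S[specific] + rng.normal(scale=0.6, size=n)
         for _ in range(n_items)]
    )

data = np.hstack([items("anxdep"), items("attention"), items("aggressive")])
# Items within a scale correlate through both G and their specific factor,
# while items across scales correlate only through G.
print(np.round(np.corrcoef(data, rowvar=False), 2))
```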

A study on the location of fire fighting appliances in cargo ships (화물선 소화설비 비치에 대한 연구)

  • Ha, Weon-Jae
    • Journal of Advanced Marine Engineering and Technology
    • /
    • v.40 no.9
    • /
    • pp.852-858
    • /
    • 2016
  • To safeguard the accommodation spaces on cargo ships from fire, SOLAS has introduced structural fire protection provisions, and these measures retard the propagation of flames and smoke. SOLAS also specifies provisions for fire fighting drills. These provisions are a combination of regulations dealing with structure and equipment and those dealing with the human element, for fire protection and effective response in the event of fire. Requirements related to the human element play a supporting role to the requirements for structure and equipment, because the present accommodation structure and equipment are insufficient for extinguishing a fire; fire-extinguishing activity performed by crew members is therefore essential. To reduce human error and ensure effective fire fighting, it is necessary to install a fire-fighting system and improve the fire fighting process. The fundamental concept of fire fighting exercises is to commence fire fighting before the fire grows too big to extinguish. It is essential to relocate the storage place of fire fighting equipment to expedite the fire-fighting exercise. This study was carried out to reduce human risk. For this purpose, the fire control station was relocated to a site that can be accessed from the open deck, and two sets of fire fighter's outfits were stored at the same site. This relocation eliminates the risk of the crew re-entering to operate the fire fighting system in the fire control station and allows the crew to pick up the fire fighters' outfits quickly in the event of a fire. In addition, it was proposed that the IIC method be made mandatory. This method is a combination of an automatic fire detection system and a sprinkler system, which can reduce the risk to the crew during fire fighting exercises and suppress a fire in its initial stage. This study was carried out to provide a foundation for possible amendments to the relevant SOLAS regulations and national legislation.

Effect of substrate bias voltage on a-C:H film (기판 bias 전압이 a-C:H 박막의 특성에 미치는 영향)

  • 유영조;김효근;장홍규;오재석;김근식
    • Journal of the Korean Vacuum Society
    • /
    • v.6 no.4
    • /
    • pp.348-353
    • /
    • 1997
  • Hydrogenated amorphous carbon (a-C:H) films were deposited on p-type Si(100) by DC saddle-field plasma-enhanced CVD to investigate the effect of substrate bias on optical properties and structural changes. The films were deposited using pure methane gas over a wide range of substrate bias voltages at room temperature and 90 mTorr. The substrate bias voltage ($V_s$) was varied from $V_s = 0$ V to $V_s = 400$ V. Optical properties were investigated by photoluminescence (PL) and transmittance measurements. The chemical bonding of a-C:H was explored using FT-IR and Raman spectroscopy. The thickness and relative hydrogen content of the films were measured by Rutherford backscattering spectroscopy (RBS) and the elastic recoil detection (ERD) technique. The growth rate of the a-C:H films decreased with increasing $V_s$, but the hydrogen content of the films increased with increasing $V_s$. The a-C:H films deposited at the lowest $V_s$ contain the smallest amount of hydrogen, with most C-H bonds in the $CH_2$ configuration, whereas the films produced at higher $V_s$ reveal a dominant $CH_3$ bonding structure. The emission of white photoluminescence from the films was observed even with the naked eye at room temperature, and the PL intensity reached its maximum at $V_s$ = 200 V. For $V_s$ lower than 200 V, the PL intensity of the films increased with $V_s$, but for $V_s$ higher than 200 V, the PL intensity decreased with increasing $V_s$. The peak energy of the PL spectra shifted slightly to higher energy with increasing $V_s$. The optical bandgap of the films, determined from optical transmittance, increased from 1.5 eV at $V_s$ = 0 V to 2.3 eV at $V_s$ = 400 V. However, there was no obvious relation between the PL peak energy and the optical gap determined by the Tauc method.
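
The optical bandgap quoted above is obtained from transmittance via the Tauc method; the sketch below shows a generic Tauc-plot extrapolation, $(\alpha h\nu)^{1/2} = B(h\nu - E_g)$, on synthetic data. The fit window and absorption values are assumptions, not the paper's measurements.

```python
import numpy as np

def tauc_bandgap(photon_energy_eV, absorption_coeff, fit_range=(2.0, 3.0)):
    """
    Estimate the optical bandgap by the Tauc method for amorphous films:
    (alpha * h*nu)^(1/2) is linear in h*nu above the gap, and the extrapolated
    line's intercept with the energy axis gives Eg. The fit window is assumed.
    """
    y = np.sqrt(absorption_coeff * photon_energy_eV)
    m = (photon_energy_eV >= fit_range[0]) & (photon_energy_eV <= fit_range[1])
    slope, intercept = np.polyfit(photon_energy_eV[m], y[m], 1)
    return -intercept / slope  # energy where the extrapolated line hits y = 0

# Synthetic absorption data constructed with a 1.9 eV gap for illustration.
E = np.linspace(1.0, 3.5, 200)
alpha = np.where(E > 1.9, (2.0e2 * (E - 1.9)) ** 2 / E, 0.0)
print(round(tauc_bandgap(E, alpha), 2))  # ~1.9
```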

Empirical Forecast of Corotating Interacting Regions and Geomagnetic Storms Based on Coronal Hole Information (코로나 홀을 이용한 CIR과 지자기 폭풍의 경험적 예보 연구)

  • Lee, Ji-Hye;Moon, Yong-Jae;Choi, Yun-Hee;Yoo, Kye-Hwa
    • Journal of Astronomy and Space Sciences
    • /
    • v.26 no.3
    • /
    • pp.305-316
    • /
    • 2009
  • In this study, we suggest an empirical forecast of CIRs (Corotating Interaction Regions) and geomagnetic storms based on coronal hole (CH) information. For this we used CH data obtained from He I $10830{\AA}$ maps at the National Solar Observatory-Kitt Peak from January 1996 to November 2003 and the CIR and storm data that Choi et al. (2009) identified. Considering the relationship among coronal holes, CIRs, and geomagnetic storms (Choi et al. 2009), we propose the following criteria for geoeffective coronal holes: the center of the CH is located between $N40^{\circ}$ and $S40^{\circ}$ and between $E40^{\circ}$ and $W20^{\circ}$, and its area as a percentage of the solar hemispheric area is larger than the following thresholds: (1) case 1: 0.36%, (2) case 2: 0.66%, (3) case 3: 0.36% for 1996-2000 and 0.66% for 2001-2003. We then present contingency tables between prediction and observation for the three cases and their dependence on solar cycle phase. From the contingency tables, we determined several statistical parameters for forecast evaluation, such as PODy (the probability of detection yes), FAR (the false alarm ratio), Bias (the ratio of "yes" predictions to "yes" observations) and CSI (critical success index). Considering the importance of PODy and CSI, we found that the best criterion is case 3; CH-CIR: PODy=0.77, FAR=0.66, Bias=2.28, CSI=0.30; CH-storm: PODy=0.81, FAR=0.84, Bias=5.00, CSI=0.16. It is also found that the parameters after the solar maximum are much better than those before the solar maximum. Our results show that forecasting CIRs based on coronal hole information is meaningful, but forecasting geomagnetic storms remains challenging.
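
The forecast scores quoted above follow directly from a 2x2 contingency table of predictions versus observations; the sketch below computes PODy, FAR, Bias, and CSI from hit, false-alarm, and miss counts. The counts used are illustrative and only approximately reproduce the case-3 CH-CIR scores.

```python
def forecast_scores(hits, false_alarms, misses):
    """
    Standard contingency-table scores used in the abstract:
      PODy = hits / (hits + misses)                 probability of detection (yes)
      FAR  = false_alarms / (hits + false_alarms)   false alarm ratio
      Bias = (hits + false_alarms) / (hits + misses)
      CSI  = hits / (hits + misses + false_alarms)
    """
    pody = hits / (hits + misses)
    far = false_alarms / (hits + false_alarms)
    bias = (hits + false_alarms) / (hits + misses)
    csi = hits / (hits + misses + false_alarms)
    return {"PODy": round(pody, 2), "FAR": round(far, 2),
            "Bias": round(bias, 2), "CSI": round(csi, 2)}

# Illustrative counts only; the paper reports the scores, not the raw table.
print(forecast_scores(hits=27, false_alarms=52, misses=8))
```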

Characteristics of Electrode Potential and AC Impedance of Perchlorate Ion-Selective Electrodes Based on Quaternary Phosphonium Salts in PVC Membranes (제4급 인산염을 이용한 과염소산 이온선택성 PVC막 전극의 전극전위와 AC 임피던스 특성)

  • 안형환
    • Membrane Journal
    • /
    • v.9 no.4
    • /
    • pp.230-239
    • /
    • 1999
  • Perchlorate ion-selective electrodes based on PVC membranes that respond linearly down to a concentration of $10^{-6}$ M were developed by incorporating quaternary phosphonium salts as the carrier. The effects of the chemical structure, the carrier content, the kind of plasticizer, and the membrane thickness on electrode characteristics such as the electrode slope, the linear response range, and the detection limit were studied. With these results, the detectable pH range, selectivity coefficients, and AC impedance characteristics were compared and investigated. Quaternary phosphonium perchlorate salts, namely tetraoctylphosphonium perchlorate (TOPP), tetraphenylphosphonium perchlorate (TPPP), and tetrabutylphosphonium perchlorate (TBPP), were used as carriers. The electrode characteristics improved in the order TBPP < TPPP < TOPP, with increasing carbon chain length of the alkyl group. Dioctyl sebacate (DOS) was the best plasticizer, the optimum carrier content was 11.76 wt%, and the optimum membrane thickness was 0.19 mm. Under these conditions, the electrode slope was 56.58 mV/$pClO_4$, the linear response range was $10^{-1}{\sim}10^{-6}$ M, and the detection limit was $9.66{\times}10^{-7}$ M. The performance of the electrode was better than that of the Orion electrode. The electrode potential was stable within the pH range from 3 to 11. The order of the selectivity coefficients for the perchlorate ion was ${SO_4}^{2-}$ < $F^-$ < $Br^-$ < $I^-$. From the impedance spectrum, it was found that the equivalent circuit of the electrode can be expressed as a series combination of the solution resistance, a parallel circuit consisting of the double-layer capacitance and the bulk resistance, and a Warburg impedance. The solution resistance was almost negligible, while the Warburg impedance due to diffusion was dominant. The Warburg coefficient was $1.32{\times}10^{7}\;{\Omega}{\cdot}cm^2/s^{1/2}$.
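
The equivalent circuit described above (solution resistance in series with a bulk-resistance/double-layer-capacitance parallel element and a Warburg impedance) can be evaluated as in the sketch below; the component values are placeholders, not the paper's fitted parameters.

```python
import numpy as np

def electrode_impedance(freq_hz, R_s, R_b, C_dl, sigma_w):
    """
    Equivalent circuit from the abstract: solution resistance R_s in series
    with (bulk resistance R_b || double-layer capacitance C_dl) and a Warburg
    element Z_w = sigma_w * (1 - 1j) / sqrt(omega). Values are assumptions.
    """
    omega = 2 * np.pi * np.asarray(freq_hz, dtype=float)
    z_parallel = R_b / (1 + 1j * omega * R_b * C_dl)
    z_warburg = sigma_w * (1 - 1j) / np.sqrt(omega)
    return R_s + z_parallel + z_warburg

f = np.logspace(-2, 5, 8)  # 10 mHz to 100 kHz
Z = electrode_impedance(f, R_s=50.0, R_b=2.0e6, C_dl=1.0e-10, sigma_w=1.3e4)
for fi, zi in zip(f, Z):
    print(f"{fi:10.2e} Hz  Re={zi.real:12.3e}  -Im={-zi.imag:12.3e}")
```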

The Method for Real-time Complex Event Detection of Unstructured Big data (비정형 빅데이터의 실시간 복합 이벤트 탐지를 위한 기법)

  • Lee, Jun Heui;Baek, Sung Ha;Lee, Soon Jo;Bae, Hae Young
    • Spatial Information Research
    • /
    • v.20 no.5
    • /
    • pp.99-109
    • /
    • 2012
  • Recently, due to the growth of social media and the spread of smartphones, the amount of data generated through SNS (Social Network Services) has increased considerably. Accordingly, the concept of Big Data has emerged, and many researchers are seeking solutions to make the best use of it. To maximize the creative value of the big data held by many companies, it is necessary to combine it with existing data. The physical and logical storage structures of data sources are so different that a system which can integrate and manage them is needed. In order to process big data, MapReduce was developed as a system whose advantage is fast data processing through distributed processing. However, it is difficult to construct and store indexes for all keywords, and because of the storage and search steps it is somewhat difficult to perform real-time processing. It also incurs extra cost to process complex events without a structure for handling heterogeneous data. In order to solve this problem, an existing complex event processing (CEP) system can be used. A CEP system receives data from different sources and combines them with each other, making complex event processing possible; this is especially useful for real-time processing of stream data. Nevertheless, unstructured data based on SNS text and internet articles is managed as a text type, so strings must be compared every time query processing is performed, which results in poor performance. Therefore, we make it possible to manage unstructured data and to process queries quickly in a complex event processing system. We extend the complex data function to give strings a logical schema; this is accomplished by converting string keywords into integer types with filtering that uses a keyword set. In addition, by using the complex event processing system and processing stream data in memory in real time, we reduce the time spent reading data for query processing after it has been stored on disk.
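
A minimal sketch of the keyword-to-integer encoding idea described above is given below; the keyword set and event text are hypothetical, and the sketch only illustrates replacing per-event string comparisons with integer-set operations.

```python
class KeywordEncoder:
    """
    Dictionary-encode keyword strings to integers so that stream filtering
    compares integers instead of strings, as the abstract describes for
    unstructured text in a complex event processing pipeline.
    """
    def __init__(self, keyword_set):
        self.to_id = {kw: i for i, kw in enumerate(sorted(keyword_set))}

    def encode(self, tokens):
        """Keep only registered keywords and map them to integer codes."""
        return [self.to_id[t] for t in tokens if t in self.to_id]

# Hypothetical usage: an event matches if its encoded tokens contain the
# encoded query keywords (integer-set intersection instead of string scans).
enc = KeywordEncoder({"earthquake", "flood", "traffic", "fire"})
event_tokens = "breaking news big fire near the traffic hub".split()
query = enc.encode(["fire", "traffic"])
print(set(query).issubset(set(enc.encode(event_tokens))))  # True
```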

Automated Analyses of Ground-Penetrating Radar Images to Determine Spatial Distribution of Buried Cultural Heritage (매장 문화재 공간 분포 결정을 위한 지하투과레이더 영상 분석 자동화 기법 탐색)

  • Kwon, Moonhee;Kim, Seung-Sep
    • Economic and Environmental Geology
    • /
    • v.55 no.5
    • /
    • pp.551-561
    • /
    • 2022
  • Geophysical exploration methods are very useful for generating high-resolution images of underground structures, and such methods can be applied to the investigation of buried cultural properties and to determining their exact locations. In this study, image feature extraction and image segmentation methods were applied to automatically distinguish the structures of buried relics in the high-resolution ground-penetrating radar (GPR) images obtained at the center of the Silla Kingdom, Gyeongju, South Korea. The major purpose of the image feature extraction analysis is to identify the circular features from building remains and the linear features from ancient roads and fences. Feature extraction is implemented by applying the Canny edge detection and Hough transform algorithms. We applied the Hough transform to the edge image resulting from the Canny algorithm in order to determine the locations of the target features. However, the Hough transform requires different parameter settings for each survey sector. As for image segmentation, we applied the connected-element labeling algorithm and object-based image analysis using the Orfeo Toolbox (OTB) in QGIS. The connected-component labeled image shows that the signals associated with the target buried relics are effectively connected and labeled. However, we often find that multiple labels are assigned to a single structure in the given GPR data. Object-based image analysis was conducted using Large-Scale Mean-Shift (LSMS) image segmentation. In this analysis, a vector layer containing pixel values for each segmented polygon was estimated first and then used to build a training-validation dataset by assigning the polygons to one class associated with the buried relics and another class for the background field. With the Random Forest classifier, we find that the polygons in the LSMS image segmentation layer can be successfully classified into polygons of the buried relics and those of the background. Thus, we propose that the automatic classification methods applied to the GPR images of buried cultural heritage in this study can be useful for obtaining consistent analysis results for planning excavation processes.
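
The sketch below strings together the Canny edge detection, Hough transforms, and connected-component labeling steps named above on a synthetic image; the parameter values are illustrative (the paper notes they must be re-tuned per survey sector), and the OTB LSMS segmentation and Random Forest steps are not reproduced.

```python
import numpy as np
import cv2
from scipy import ndimage

# Synthetic stand-in for a GPR amplitude slice: a ring (building remain)
# and a line (road/fence anomaly) drawn on a blank 8-bit image.
img = np.zeros((200, 200), dtype=np.uint8)
cv2.circle(img, (60, 60), 25, 255, 2)
cv2.line(img, (20, 150), (180, 160), 255, 2)

# 1) Canny edge detection followed by Hough transforms (lines and circles).
edges = cv2.Canny(img, 50, 150)
lines = cv2.HoughLinesP(edges, 1, np.pi / 180, 60,
                        minLineLength=40, maxLineGap=5)
circles = cv2.HoughCircles(img, cv2.HOUGH_GRADIENT, 1, 50,
                           param1=150, param2=20, minRadius=10, maxRadius=60)

# 2) Connected-component labeling of the binarized anomaly map.
labels, n_objects = ndimage.label(edges > 0)

print("lines:", 0 if lines is None else len(lines),
      "| circles:", 0 if circles is None else circles.shape[1],
      "| connected components:", n_objects)
```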