• Title/Summary/Keyword: 로그 구조 (log structure)


Development of Continuous Indirect Connectivity Model for Evaluation of Hub Operations at Airport (공항의 허브화 평가를 위한 연속연결성지수모형 개발)

  • Lee, Sang-Yong; Yu, Gwang-Ui; Park, Yong-Hwa
    • Journal of Korean Society of Transportation / v.27 no.4 / pp.195-206 / 2009
  • The deregulation of aviation markets in Europe and the United States led airlines to reconfigure their networks into hub-and-spoke systems. Recent "Open Skies" trends in the Asian aviation market are also expected to prompt the reformation of airlines' networks in the region. A meaningful connectivity index is a crucial tool for airlines and airport authorities to estimate the degree of hub-and-spoke operations. This paper therefore proposes a new index, the Continuous Indirect Connectivity Index (CICI), for measuring the coordination of airlines' flight schedules, and applies it to the Asian, European, and American aviation markets. CICI consists of three components: (i) temporal connectivity, which captures the attractiveness of a connection between flights; (ii) spatial connectivity, which differentiates attractiveness by de-routing distance using a continuous linear function; and (iii) relative intensity, which reflects the effect of direct flight frequency on transfer routes. CICI is evaluated for a causal relationship through regression analyses with two dependent variables: the number of transfer passengers and transfer rates. Compared with Danesi's index and Doganis' index in the same evaluation, CICI yields a higher coefficient of determination, implying that it explains the relationship between connectivity and transfer passengers more precisely.
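
For intuition, here is a minimal sketch of a CICI-style connectivity score. The paper defines the exact functional forms; the window bounds, weights, and relative-intensity discount below are assumed stand-ins for illustration only.

```python
# Illustrative sketch only: the paper's exact CICI functional forms are not
# reproduced here. All parameter values below are assumptions.
from dataclasses import dataclass

@dataclass
class Connection:
    transfer_min: float      # minutes between arrival and departure
    deroute_ratio: float     # (leg1 + leg2) / great-circle O-D distance
    direct_freq: int         # weekly direct flights on the same O-D pair

def temporal_weight(t, t_min=45.0, t_best=90.0, t_max=300.0):
    """Attractiveness of a transfer time: zero outside [t_min, t_max],
    peaking at t_best (assumed piecewise-linear shape)."""
    if t <= t_min or t >= t_max:
        return 0.0
    if t <= t_best:
        return (t - t_min) / (t_best - t_min)
    return (t_max - t) / (t_max - t_best)

def spatial_weight(r, r_max=1.6):
    """Continuous linear decay in the de-routing ratio r >= 1."""
    return max(0.0, (r_max - r) / (r_max - 1.0))

def cici_score(connections):
    """Sum of weighted connections, discounted by direct-flight intensity."""
    total = 0.0
    for c in connections:
        w = temporal_weight(c.transfer_min) * spatial_weight(c.deroute_ratio)
        total += w / (1.0 + c.direct_freq)   # assumed relative-intensity form
    return total

print(cici_score([Connection(80, 1.15, 3), Connection(200, 1.4, 0)]))
```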

Survival analysis on the business types of small business using Cox's proportional hazard regression model (콕스 비례위험 모형을 이용한 중소기업의 업종별 생존율 및 생존요인 분석)

  • Park, Jin-Kyung; Oh, Kwang-Ho; Kim, Min-Soo
    • Journal of the Korean Data and Information Science Society / v.23 no.2 / pp.257-269 / 2012
  • The global crisis has accelerated change in the industrial environment and put small enterprises in danger of mass bankruptcy, so domestic small enterprises are in urgent need of restructuring. Based on small-business data registered with the Credit Guarantee Fund, we estimated survival probabilities in the framework of survival analysis and compared survival times across business types. Financial variables were also examined with Cox regression analyses of small businesses by business type. Among the business types, the wholesale and retail trade industry and the services industry showed higher survival probabilities than the light, heavy, and construction industries; the construction industry showed the lowest survival probability. In addition, we found that in the construction industry, the higher the BIS ratio (Bank for International Settlements capital ratio) and the current ratio, the lower the default rate, whereas the larger the borrowings, the higher the default rate. In the light industry, the higher the BIS ratio and ROA (return on assets), the lower the default rate. In the wholesale and retail trade industry, the higher the BIS ratio and the current ratio, the lower the default rate. In the heavy industry, the higher the BIS ratio, ROA, and current ratio, the lower the default rate. Finally, in the services industry, the higher the current ratio, the lower the default rate.
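
As a hedged illustration of the modeling step, the sketch below fits a Cox proportional hazards model with the lifelines library; the data frame and column names (duration, default, bis_ratio, ...) are hypothetical, not the Credit Guarantee Fund data.

```python
# Minimal Cox proportional-hazards sketch using the lifelines library.
# All values and column names are invented for illustration.
import pandas as pd
from lifelines import CoxPHFitter

df = pd.DataFrame({
    "duration":      [24, 60, 36, 12, 48, 30],    # survival time in months
    "default":       [1, 0, 1, 1, 0, 0],           # 1 = firm defaulted (event)
    "bis_ratio":     [8.2, 12.5, 6.1, 5.0, 11.0, 9.8],
    "current_ratio": [1.1, 1.8, 0.9, 0.7, 1.6, 1.4],
    "roa":           [0.02, 0.07, -0.01, -0.04, 0.05, 0.03],
})

cph = CoxPHFitter()
cph.fit(df, duration_col="duration", event_col="default")
cph.print_summary()   # hazard ratio per financial covariate
```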

Analysis of extreme wind speed and precipitation using copula (코플라함수를 이용한 극단치 강풍과 강수 분석)

  • Kwon, Taeyong; Yoon, Sanghoo
    • Journal of the Korean Data and Information Science Society / v.28 no.4 / pp.797-810 / 2017
  • The Korean Peninsula is exposed to typhoons every year. Typhoons cause huge socioeconomic damage because tropical cyclones tend to bring strong winds together with heavy precipitation. To understand the complex dependence structure between strong winds and heavy precipitation, the copula, which links a set of univariate distributions into a multivariate distribution, has been actively studied in the field of hydrology. In this study, we analyzed wind speed and precipitation data collected at the weather stations in Busan and Jeju. Log-normal, gamma, and Weibull distributions were considered as marginal distributions of the copula, and the Kolmogorov-Smirnov, Cramer-von Mises, and Anderson-Darling statistics were employed to test their goodness of fit. Pseudo-observations were obtained through the inverse transformation method for fitting the copula, and elliptical, Archimedean, and extreme-value copulas were considered to describe the dependence structure between strong winds and heavy precipitation. The best copula was selected using the Cramer-von Mises test and cross-validation. In Busan, precipitation conditioned on average wind speed followed the t copula, and precipitation conditioned on maximum wind speed followed the Clayton copula. In Jeju, precipitation conditioned on maximum wind speed followed the normal copula, average wind speed conditioned on precipitation followed the Frank copula, and maximum wind speed conditioned on precipitation followed the Husler-Reiss copula.
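
The workflow the abstract describes (fit marginals, test goodness of fit, rank-transform to pseudo-observations, fit a copula) can be sketched with scipy alone; everything below uses synthetic data, and the Clayton fit uses the standard Kendall's-tau inversion rather than the paper's procedure.

```python
# Sketch of the marginal-fit + pseudo-observation + copula steps.
# Data are synthetic stand-ins, not the Busan/Jeju station records.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
wind = rng.gamma(shape=2.0, scale=3.0, size=500)       # stand-in wind speeds
rain = 0.5 * wind + rng.gamma(2.0, 2.0, size=500)      # correlated stand-in rain

# 1) Fit a candidate marginal and test goodness of fit (Kolmogorov-Smirnov).
params = stats.gamma.fit(wind, floc=0)
print("gamma marginal KS p-value:", stats.kstest(wind, "gamma", args=params).pvalue)

# 2) Pseudo-observations: rank-transform each margin to (0, 1).
def pseudo_obs(x):
    return stats.rankdata(x) / (len(x) + 1)

u, v = pseudo_obs(wind), pseudo_obs(rain)

# 3) Fit a Clayton copula by inverting Kendall's tau: theta = 2*tau / (1 - tau).
tau, _ = stats.kendalltau(u, v)      # rank-based, so pseudo-obs give the same tau
theta = 2 * tau / (1 - tau)
print("Kendall tau:", round(tau, 3), "-> Clayton theta:", round(theta, 3))
```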

Mobile App Analytics using Media Repertoire Approach (미디어 레퍼토리를 이용한 스마트폰 애플리케이션 이용 패턴 유형 분석)

  • Kwon, Sung Eun; Jang, Shu In; Hwangbo, Hyunwoo
    • The Journal of Society for e-Business Studies / v.26 no.4 / pp.133-154 / 2021
  • Today the smartphone is the most common medium, with applications as its vehicle. To understand how media users select applications and build their repertoires, this study applied a two-step approach to big data from four weeks of smartphone logs collected in November 2019 and classified users into eight media repertoire groups. Each of the eight groups differed from the others in the time spent per mobile application category and in demographic distribution. Beyond the academic contribution of identifying mobile application repertoires from large-scale behavioral data, this study proposes a two-step approach that overcomes the outlier problem in behavioral data by extracting prototype vectors with a SOM (Self-Organizing Map) and then applying k-means clustering to those prototypes to optimize the classification. The study is also meaningful in that it categorizes customers using e-commerce services, identifies the customer structure from behavioral data, and provides practical guidance to e-commerce businesses in tailoring services or marketing decisions to each customer group.
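
A minimal sketch of the two-step approach, assuming the minisom and scikit-learn packages and invented data sizes: the SOM's prototype vectors absorb outliers, then k-means groups the prototypes into eight repertoires.

```python
# Two-step sketch: SOM prototype extraction, then k-means on the prototypes.
# Data shape and the 10x10 grid are assumptions, not the paper's settings.
import numpy as np
from minisom import MiniSom
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
usage = rng.random((1000, 12))   # stand-in: per-user time share over 12 app categories

# Step 1: train a SOM; its weight vectors serve as outlier-robust prototypes.
som = MiniSom(10, 10, input_len=12, sigma=1.0, learning_rate=0.5, random_seed=1)
som.train_random(usage, num_iteration=5000)
prototypes = som.get_weights().reshape(-1, 12)   # 100 prototype vectors

# Step 2: cluster the prototypes into 8 repertoire groups.
km = KMeans(n_clusters=8, n_init=10, random_state=1).fit(prototypes)

# Assign each user the cluster of their best-matching SOM prototype.
bmu = np.array([som.winner(x) for x in usage])           # (row, col) per user
labels = km.labels_.reshape(10, 10)[bmu[:, 0], bmu[:, 1]]
print(np.bincount(labels))       # users per repertoire group
```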

Statistical Properties of Material Strength of Concrete, Re-Bar and Strand Used in Domestic Construction Site (국내 현장의 콘크리트, 철근 및 강연선 재료 강도에 대한 통계 특성 분석)

  • Paik, In-Yeol; Shim, Chang-Su; Chung, Young-Soo; Sang, Hee-Jung
    • Journal of the Korea Concrete Institute / v.23 no.4 / pp.421-430 / 2011
  • As a fundamental step toward introducing a reliability-based design code, a statistical study is conducted on material strength data collected from domestic construction sites. To develop a rational design code based on statistics and reliability theory, it is essential to obtain the statistical properties of material strength. Strength data for concrete, reinforcing bars, and prestressing strands used at domestic construction sites are collected and statistically analyzed, and the resulting statistical properties are compared with those used in the reliability-based calibration of internationally leading design codes. For the domestic data, the bias factor is relatively uniform, between 1.13 and 1.20, and the coefficient of variation is below 0.10. Reinforcing bar data differ among manufacturers but show little difference across bar diameters. For tendons, which are high-strength materials, both the domestic and foreign data show smaller bias factors and coefficients of variation than concrete and re-bar. The statistical distribution of each material strength can properly be assumed to be normal, log-normal, or Gumbel when the data are classified by individual construction site and manufacturer, rather than pooled across different sources, so that the distribution of each structure is expressed individually.
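
The two headline statistics, bias factor and coefficient of variation, and the candidate distribution fits can be sketched in a few lines; the nominal strength and sample values below are invented, not the paper's site data.

```python
# Sketch of the reported statistics and candidate distribution fits.
import numpy as np
from scipy import stats

nominal = 40.0                                   # assumed nominal strength (MPa)
measured = np.array([45.1, 47.3, 44.0, 46.8, 48.2, 45.9, 44.7, 46.1])

bias = measured.mean() / nominal                 # bias factor = mean / nominal
cov = measured.std(ddof=1) / measured.mean()     # coefficient of variation
print(f"bias factor = {bias:.3f}, COV = {cov:.3f}")

# Compare normal, log-normal, and Gumbel fits via Kolmogorov-Smirnov.
for label, dist in [("normal", stats.norm),
                    ("log-normal", stats.lognorm),
                    ("Gumbel", stats.gumbel_r)]:
    params = dist.fit(measured)
    p = stats.kstest(measured, dist.name, args=params).pvalue
    print(f"{label:10s} KS p-value = {p:.3f}")
```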

Development of Empirical Fragility Function for High-speed Railway System Using 2004 Niigata Earthquake Case History (2004 니가타 지진 사례 분석을 통한 고속철도 시스템의 지진 취약도 곡선 개발)

  • Yang, Seunghoon; Kwak, Dongyoup
    • Journal of the Korean Geotechnical Society / v.35 no.11 / pp.111-119 / 2019
  • The high-speed railway system is mainly composed of tunnels, bridges, and viaducts to provide the straightness needed to sustain speeds of up to 400 km/h. Seismic fragility of high-speed railway infrastructure can be assessed in two ways: one is to study each element of the infrastructure analytically or numerically, which requires substantial research effort because of the wide extent of the railway system; the other is an empirical method that assesses the fragility of the entire system efficiently, provided case history data are available. In this study, we collect case history data from the 2004 Mw 6.6 Niigata earthquake to develop an empirical seismic fragility function for a railway system. Five types of intensity measures (IMs) and damage levels are assigned to all segments of the target system, each with a unit length of 200 m. From statistical analysis, the probability of exceeding a given damage level (DL) is calculated as a function of IM, and a log-normal CDF is fitted to those probability data points by the MLE method, forming the fragility function for each damage level. Evaluating the resulting fragility functions, we observe that 3.0-second spectral acceleration (SAT3.0) is superior to the other IMs, with a lower standard deviation of the log-normal CDF and a lower fitting error. This indicates that long-period ground motion has more impact on railway infrastructure such as tunnels and bridges. We observe that when SAT3.0 = 0.1 g, P(DL>1) = 2%, and when SAT3.0 = 0.2 g, P(DL>1) = 23.9%.
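
A common way to obtain such a log-normal fragility curve, plausibly close to what the abstract describes, is maximum-likelihood fitting against binomial damage counts per IM bin; the counts below are invented.

```python
# Sketch of fitting a log-normal fragility curve P(DL > dl | IM) by MLE,
# assuming binomial damage counts per IM bin. All counts are invented.
import numpy as np
from scipy import stats, optimize

im    = np.array([0.05, 0.10, 0.15, 0.20, 0.30])  # e.g. SAT3.0 in g
n_seg = np.array([200, 180, 150, 120, 60])        # 200 m segments per IM bin
n_dmg = np.array([1, 4, 12, 29, 33])              # segments exceeding DL 1

def neg_log_lik(params):
    theta, beta = params                  # median (g) and log-space dispersion
    if theta <= 0 or beta <= 0:
        return np.inf
    p = stats.norm.cdf(np.log(im / theta) / beta)
    p = np.clip(p, 1e-9, 1 - 1e-9)
    return -np.sum(stats.binom.logpmf(n_dmg, n_seg, p))

res = optimize.minimize(neg_log_lik, x0=[0.2, 0.5], method="Nelder-Mead")
theta, beta = res.x
print(f"median = {theta:.3f} g, dispersion = {beta:.3f}")
print("P(DL>1 | 0.2 g) =", round(stats.norm.cdf(np.log(0.2 / theta) / beta), 3))
```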

Process development of a virally-safe dental xenograft material from porcine bones (바이러스 안전성이 보증된 돼지유래 골 이식재 제조 공정 개발)

  • Kim, Dong-Myong; Kang, Ho-Chang; Cha, Hyung-Joon; Bae, Jung Eun; Kim, In Seop
    • Korean Journal of Microbiology / v.52 no.2 / pp.140-147 / 2016
  • A process for manufacturing virally-safe porcine bone hydroxyapatite (HA) has been developed to serve as an advanced xenograft material for dental applications. Porcine bone pieces were defatted with successive treatments of 30% hydrogen peroxide and 80% ethyl alcohol. The defatted pieces were heat-treated in an oxygen-atmosphere box furnace at 1,300°C to remove collagen and organic compounds, then ground, and the resulting bone powder was sterilized by gamma irradiation. Morphological characteristics, such as SEM (scanning electron microscopy) and TEM (transmission electron microscopy) images, of the resulting porcine bone HA (THE Graft®) were similar to those of a commercial bovine bone HA (Bio-Oss®). To evaluate the efficacy of the 1,300°C heat treatment and gamma irradiation at a dose of 25 kGy for inactivating porcine viruses during manufacture, a variety of experimental porcine viruses were chosen: transmissible gastroenteritis virus (TGEV), pseudorabies virus (PRV), porcine rotavirus (PRoV), and porcine parvovirus (PPV). All four viruses were completely inactivated to undetectable levels during the 1,300°C heat treatment, with mean log reduction factors of ≥4.65 for TGEV, ≥5.81 for PRV, ≥6.28 for PRoV, and ≥5.21 for PPV. Gamma irradiation was also very effective: the four viruses were again completely inactivated to undetectable levels, with mean log reduction factors of ≥4.65 for TGEV, ≥5.87 for PRV, ≥6.05 for PRoV, and ≥4.89 for PPV. The cumulative log reduction factors achieved by the two inactivation processes were ≥9.30 for TGEV, ≥11.68 for PRV, ≥12.33 for PRoV, and ≥10.10 for PPV. These results indicate that the manufacturing process for porcine bone HA has sufficient virus-reducing capacity to achieve a high margin of virus safety.
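
For reference, the log reduction factor of one step is the base-10 logarithm of the virus-load ratio across that step, and factors from independent steps add, which is why the cumulative values equal the sums of the per-step values (e.g., for TGEV, 4.65 + 4.65 = 9.30). This is the standard definition, not quoted from the paper:

```latex
\mathrm{LRF} = \log_{10}\!\left(\frac{\text{virus load before the step}}{\text{virus load after the step}}\right),
\qquad
\mathrm{LRF}_{\mathrm{cumulative}} = \sum_i \mathrm{LRF}_i
```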

A Semantic Classification Model for e-Catalogs (전자 카탈로그를 위한 의미적 분류 모형)

  • Kim Dongkyu; Lee Sang-goo; Chun Jonghoon; Choi Dong-Hoon
    • Journal of KIISE: Databases / v.33 no.1 / pp.102-116 / 2006
  • Electronic catalogs (or e-catalogs) hold information about the goods and services offered or requested by participants and consequently form the basis of an e-commerce transaction. Catalog management is complicated by a number of factors, and product classification is at the core of these issues. The classification hierarchy is used for spend analysis, customs regulation, and product identification. Classification is the foundation on which product databases are designed and plays a central role in almost all aspects of the management and use of product information. However, product classification has received little formal treatment in terms of its underlying model, operations, and semantics. We believe the lack of a logical model for classification introduces problems not only for the classification itself but also for the product database in general. A classification needs to support diverse user views for efficient and convenient use of product information; it must change and evolve frequently, without breaking consistency, as new products are introduced, existing products disappear, and classes are reorganized or specialized; and it must be merged and mapped with other classification schemes without information loss when B2B transactions occur. To meet these requirements, a classification scheme should be dynamic enough to absorb such changes at acceptable time and cost. The classification schemes widely used today, such as UNSPSC and eClass, however, have many limitations with respect to these dynamic requirements. In this paper, we examine what it means to classify products and present how best to represent classification schemes so as to capture the semantics behind the classifications and facilitate mappings between them. Product information carries rich semantics, such as class attributes (material, time, place, etc.) and integrity constraints. We analyze the dynamic features of product databases and the limitations of existing code-based classification schemes, and we describe a semantic classification model that satisfies the requirements for the dynamic features of product databases. It provides a means to express the semantics of product classes explicitly and formally, and it organizes class relationships into a graph. We believe the proposed model satisfies the requirements and challenges raised by previous works.
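
The paper's model is not reproduced here, but as a minimal sketch of the idea, product classes can be nodes in a graph with explicit attributes and typed edges (specialization within a scheme, mappings across schemes); all class names and the mapping target below are hypothetical.

```python
# Minimal sketch (not the paper's exact model): product classes as graph nodes
# with explicit attributes and typed relationships.
from dataclasses import dataclass, field

@dataclass
class ProductClass:
    name: str
    attributes: dict = field(default_factory=dict)   # e.g. {"material": "steel"}

class ClassificationGraph:
    def __init__(self):
        self.nodes = {}
        self.edges = []                   # (child, relation, parent) triples

    def add_class(self, cls):
        self.nodes[cls.name] = cls

    def relate(self, child, relation, parent):
        """Typed edge: 'is-a' for specialization within a scheme,
        'maps-to' for a mapping into another classification scheme."""
        self.edges.append((child, relation, parent))

    def ancestors(self, name):
        """Follow is-a edges upward; inherited attributes could be merged here."""
        out = [p for c, r, p in self.edges if c == name and r == "is-a"]
        for p in list(out):
            out += self.ancestors(p)
        return out

g = ClassificationGraph()
g.add_class(ProductClass("fastener"))
g.add_class(ProductClass("bolt", {"material": "steel"}))
g.relate("bolt", "is-a", "fastener")
g.relate("bolt", "maps-to", "UNSPSC:31161500")   # hypothetical mapping target
print(g.ancestors("bolt"))
```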

Assessment of Extreme Wind Risk for Window Systems in Apartment Buildings Based on Probabilistic Model (확률 모형 기반의 아파트 창호 시스템 강풍 위험도 평가)

  • Ham, Hee Jung; Yun, Woo-Seok; Choi, Seung Hun; Lee, Sungsu; Kim, Ho-Jeong
    • Journal of the Computational Structural Engineering Institute of Korea / v.28 no.6 / pp.625-633 / 2015
  • In this study, a coupled probabilistic framework is developed to assess wind risk to apartment buildings through the convolution of wind hazard and fragility functions. Typhoon-induced extreme wind is estimated by applying the developed Monte Carlo simulation model to climatological data on typhoons affecting the Korean Peninsula from 1951 to 2013. The Monte Carlo technique is also used to derive wind fragility functions for four damage states by comparing the probability distributions of the window system's resistance and the wind load. Wind hazard and fragility are modeled by Weibull and lognormal probability distributions, respectively, based on the simulated wind speeds and failure probabilities, and the modeled functions are convoluted to obtain the wind risk for each damage level. The developed framework clearly shows that wind risk is influenced by important characteristics of the terrain and the building, such as the building's location, exposure category, topographic condition, roof angle, and height. The risk model presented in this paper can be used as a tool for estimating economic losses and establishing wind risk mitigation plans for the existing building inventory.
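
The core convolution is a one-line integral: the Weibull hazard density times the lognormal fragility CDF. A sketch with invented distribution parameters:

```python
# Sketch of the hazard-fragility convolution: risk of a damage state as the
# integral of a Weibull wind-speed density times a lognormal fragility CDF.
# All parameters below are invented for illustration.
import numpy as np
from scipy import stats
from scipy.integrate import quad

hazard = stats.weibull_min(c=2.0, scale=25.0)    # wind hazard density f(v), m/s

def fragility(v, median=45.0, beta=0.25):
    """Lognormal fragility: P(damage state reached | wind speed v)."""
    return stats.norm.cdf(np.log(v / median) / beta)

risk, _ = quad(lambda v: hazard.pdf(v) * fragility(v), 0, 150)
print(f"probability of reaching the damage state: {risk:.4e}")
```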

The Recovery Method for MySQL InnoDB Using Feature of IBD Structure (IBD 구조적 특징을 이용한 MySQL InnoDB의 레코드 복구 기법)

  • Jang, Jeewon; Jeoung, Doowon; Lee, Sang Jin
    • KIPS Transactions on Computer and Communication Systems / v.6 no.2 / pp.59-66 / 2017
  • MySQL holds the second-largest market share among databases today, and the InnoDB storage engine has been its default storage engine since MySQL 5.5; many companies run MySQL with InnoDB. Research in digital forensics on the structural features and logs of the InnoDB storage engine has been steadily underway, but how to restore deleted data on a record-by-record basis has not been studied. During digital forensic investigations, database administrators have damaged evidence for the purpose of destroying it, so recovering deleted records from the database is important to the forensic process. In this paper, we propose a method for recovering deleted data record by record by analyzing the structure of the MySQL InnoDB storage engine, and we demonstrate the method with a tool. The method can counter database anti-forensics and can be used to recover deleted data when an incident involving a MySQL InnoDB database occurs.
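
As a hedged sketch of the first step such a tool needs, the code below walks a .ibd file page by page and flags B-tree index pages, relying on two well-known InnoDB facts: the default 16 KiB page size and the 2-byte page-type field at offset 24 of the FIL header (0x45BF = FIL_PAGE_INDEX). The record-carving logic itself, the heart of the paper's method, is not reproduced here.

```python
# Hedged sketch: locate candidate record-bearing pages in an InnoDB .ibd file.
import struct

PAGE_SIZE = 16 * 1024            # InnoDB default page size
FIL_PAGE_INDEX = 0x45BF          # page type of a B-tree node (may hold records)

def index_pages(path):
    """Yield (page_number, page_bytes) for every B-tree index page."""
    with open(path, "rb") as f:
        page_no = 0
        while True:
            page = f.read(PAGE_SIZE)
            if len(page) < PAGE_SIZE:
                break
            (page_type,) = struct.unpack(">H", page[24:26])  # FIL header, offset 24
            if page_type == FIL_PAGE_INDEX:
                yield page_no, page   # candidate page for record carving
            page_no += 1

for no, _ in index_pages("example.ibd"):   # hypothetical file name
    print("index page:", no)
```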