• Title/Summary/Keyword: Data Partitioning Criteria (데이터 분할 기준)


Wind Load Analysis owing to the Computation Fluid Dynamics and Wind Tunnel Test of a Container Crane (컨테이너 크레인의 전산유동해석과 풍동실험에 의한 풍하중 분석)

  • Lee, Su-Hong;Han, Dong-Seop;Han, Geun-Jo
    • Journal of Navigation and Port Research
    • /
    • v.33 no.3
    • /
    • pp.215-220
    • /
    • 2009
  • Container cranes are structures vulnerable to severe weather conditions because there is no shielding facility to protect them from strong wind. This study was carried out to analyze the effect of wind load on the structure of a container crane according to changes in the boom shape, using wind tunnel tests and computational fluid dynamics. We provide container crane designers with data that can be used in the wind-resistant design of a container crane, assuming that a wind load at a wind velocity of 75 m/s is applied to the crane. In this study, we applied a mean wind load conforming to the 'Design Criteria of Wind Load' in the 'Load Criteria of Building Structures', and the external flow field was divided at intervals of 10 degrees to analyze the effect of wind direction. Under these conditions, we carried out the wind tunnel test and the computational fluid dynamics analysis, and then analyzed the wind load needed to design the container crane.
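The relation between design wind velocity and mean wind load underlying such estimates can be sketched with the standard drag equation. The drag coefficient and projected area below are illustrative assumptions, not values from the paper, which uses the Korean 'Design Criteria of Wind Load' and wind tunnel data.

```python
# Sketch of the drag-equation relation behind mean wind-load estimates.
# Cd and the projected area A are invented for illustration.

RHO_AIR = 1.225  # air density at sea level, kg/m^3

def wind_load(velocity_ms: float, drag_coeff: float, area_m2: float) -> float:
    """Mean wind load F = 0.5 * rho * v^2 * Cd * A, in newtons."""
    return 0.5 * RHO_AIR * velocity_ms ** 2 * drag_coeff * area_m2

# 75 m/s design wind on a hypothetical 1 m^2 member with Cd = 1.2
f = wind_load(75.0, 1.2, 1.0)
print(round(f, 1))  # dynamic pressure 0.5*1.225*75^2 ≈ 3445 Pa, so F ≈ 4134 N
```

At 75 m/s the dynamic pressure alone is about 3.4 kPa, which is why wind direction and boom shape matter so much in the structural design.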

A Performance Improvement on Navigation Applying Measurement Estimation in Urban Weak Signal Environment (도심에서의 측정치 추정을 적용한 항법성능 향상 연구)

  • Park, Sul Gee;Cho, Deuk Jae
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.18 no.11
    • /
    • pp.2745-2752
    • /
    • 2014
  • In recent years, Transport Demand Management has been conducted for the efficient management of transport. In ITS applications in particular, accurate and reliable positioning is a prerequisite. However, the major problems are satellite signal outages and multipath. This paper proposes that outage and multipath measurements can be detected and estimated using the relation between elevation angle and signal-to-noise ratio in stand-alone GPS. To verify the performance of the proposed method, it was evaluated in a car test. The evaluation environment suffers from low accuracy and unreliable positioning because of signal outages and multipath caused by steep hills and high buildings. In the evaluation, abnormal signals occurred 918 times, and the proposed method improved the horizontal positioning error by 9.48 m (RMS) compared with positioning without the proposed estimation.
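The kind of elevation/SNR screening the abstract describes can be sketched as flagging pseudoranges whose carrier-to-noise ratio falls well below what their elevation angle predicts, a common multipath cue. The threshold curve and margin below are assumptions for illustration, not the paper's actual association model.

```python
# Hypothetical sketch of elevation-vs-C/N0 measurement screening.
# The empirical curve and 6 dB margin are invented, not the paper's model.
import math

def expected_cn0(elev_deg: float) -> float:
    """Nominal C/N0 (dB-Hz) for a clean signal at a given elevation."""
    # Simple assumed curve: low-elevation satellites arrive weaker.
    return 35.0 + 10.0 * math.sin(math.radians(elev_deg))

def is_suspect(elev_deg: float, cn0_dbhz: float, margin_db: float = 6.0) -> bool:
    """Flag a measurement as likely multipath/outage-affected."""
    return cn0_dbhz < expected_cn0(elev_deg) - margin_db

sats = [(60.0, 44.0), (15.0, 30.0), (45.0, 34.0)]  # (elevation, C/N0) pairs
flags = [is_suspect(e, c) for e, c in sats]
print(flags)
```

Flagged measurements would then be excluded or replaced by estimated values in the navigation filter rather than used directly.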

Review of the Estimation Method of Methane Emission from Waste Landfill for Korean Greenhouse Gas and Energy Target Management System (온실가스·에너지 목표관리제를 위한 폐기물 매립시설 메탄배출량의 적정 산정방법에 관한 고찰)

  • Seo, Dong-Cheon;Nah, Je-Hyun;Bae, Sung-Jin;Lee, Dong-Hoon
    • Journal of Korean Society of Environmental Engineers
    • /
    • v.35 no.12
    • /
    • pp.867-876
    • /
    • 2013
  • To promote the carbon emission trading scheme and reduce greenhouse gas (GHG) emissions under the 'Korean GHG & Energy Target Management System', GHG emissions should be accurately determined in each industrial sector. The estimation method for GHG emissions from waste landfills contains several error-prone parameters, so we reviewed the method and proposed a revision. Methane generation from a landfill must currently be calculated by the selected method based on a methane recovery rate of 0.75. However, this methodology does not consider uncertainty factors. It is therefore desirable that $CH_4$ generation be estimated using a first-order decay model and that methane recovery be based on field monitoring data. Otherwise, the $CH_4$ recovery rate could be taken from other studies: 0.60 for an operational landfill with gas venting and flaring, 0.65 for an operational site with a landfill gas recovery system, and 0.90 for a closed landfill with a final cover. Other parameters such as degradable organic carbon (DOC) and the fraction of DOC decomposed ($DOC_f$) need default values derived from studies that reflect Korean waste conditions. Proper application of the MCF, which is selected according to the operation and management of the landfill, requires more precise criteria.
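The first-order decay (FOD) approach the authors recommend can be sketched as below, following the general IPCC-style structure. All parameter values (decay rate k, DOC, $DOC_f$, MCF, oxidation-free fraction F) are illustrative defaults, not the Korean country-specific values the paper calls for.

```python
# Sketch of an IPCC-style first-order decay (FOD) methane estimate.
# Parameter defaults are illustrative, not country-specific values.
import math

def fod_methane(waste_t, years, k=0.09, doc=0.15, doc_f=0.5, mcf=1.0, f=0.5):
    """CH4 generated (tonnes) over `years` from `waste_t` tonnes landfilled."""
    ddocm = waste_t * doc * doc_f * mcf           # decomposable carbon deposited
    decomposed = ddocm * (1.0 - math.exp(-k * years))
    return decomposed * f * 16.0 / 12.0           # carbon mass -> CH4 mass

gross = fod_methane(100000, 10)     # tonnes CH4 generated over 10 years
net = gross * (1.0 - 0.60)          # e.g. 0.60 recovery, vented/flared site
print(round(gross, 1), round(net, 1))
```

Replacing the fixed 0.75 recovery rate with site-class factors (0.60/0.65/0.90) or field monitoring data, as the abstract suggests, only changes the final recovery line.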

The Validity Test of Statistical Matching Simulation Using the Data of Korea Venture Firms and Korea Innovation Survey (벤처기업정밀실태조사와 한국기업혁신조사 데이터를 활용한 통계적 매칭의 타당성 검증)

  • An, Kyungmin;Lee, Young-Chan
    • Knowledge Management Research
    • /
    • v.24 no.1
    • /
    • pp.245-271
    • /
    • 2023
  • The shift to the data economy requires new analyses beyond ordinary research in the management field. Data matching refers to a technique or processing method that combines data sets collected from different samples of the same population. In this study, statistical matching was performed using random hot-deck and Mahalanobis distance functions on the 2020 Survey of Korea Venture Firms and 2020 Korea Innovation Survey data. Among the variables used for the statistical matching simulation, industry and number of workers were set to match exactly, while region, business age, listed market, and sales were set as common variables. The simulation was verified by a mean test and kernel density. The analysis confirmed that statistical matching was appropriate: although there was a difference in the mean test, a similar pattern was shown in the kernel density. This study attempts to expand the spectrum of research methods by experimenting with a data matching methodology that has not been sufficiently tried in the management field, and suggests implications in terms of data utilization and diversity.
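Distance-function statistical matching of the kind used here can be sketched as follows: each recipient record is assigned the donor record from the other survey that is nearest in Mahalanobis distance on the shared numeric variables. The variable names and toy data are invented for illustration.

```python
# Minimal sketch of Mahalanobis-distance statistical matching.
# Data and variable meanings are invented; real matching would also
# enforce exact-match cells (e.g. industry, number of workers).
import numpy as np

def mahalanobis_match(recipients: np.ndarray, donors: np.ndarray) -> list:
    """Return, for each recipient row, the index of its nearest donor."""
    cov = np.cov(np.vstack([recipients, donors]).T)   # pooled covariance
    cov_inv = np.linalg.inv(cov)
    matches = []
    for r in recipients:
        d = donors - r
        dist = np.einsum('ij,jk,ik->i', d, cov_inv, d)  # squared distances
        matches.append(int(np.argmin(dist)))
    return matches

# Toy data: columns could be (log sales, firm age)
rec = np.array([[1.0, 2.0], [5.0, 1.0]])
don = np.array([[0.9, 2.1], [5.2, 0.8], [3.0, 3.0]])
print(mahalanobis_match(rec, don))
```

Random hot-deck matching, the other method named, would instead draw a donor at random within each exact-match cell rather than taking the nearest one.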

MRS Pattern Classification Using Fusion Method based on SpPCA and MLP (SpPCA와 MLP에 기반을 둔 응합법칙에 의한 MRS 패턴분류)

  • Song Chang kyu;Lee Dae jong;Jeon Byeong seok;Ryu Jeong woong
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.30 no.9C
    • /
    • pp.922-929
    • /
    • 2005
  • In this paper, we propose an MRS pattern classification technique using a fusion scheme based on SpPCA and MLP. The conventional PCA technique for dimension reduction has the problem that it cannot find an optimal transformation matrix if the input data are nonlinear. To overcome this drawback, we extract features by the SpPCA technique, which uses local sub-patterns rather than whole patterns. In the next classification step, an individual MLP-based classifier calculates the similarity of each class for the local features. Finally, MRS patterns are classified by the fusion scheme, which effectively combines the individual information. Simulation results verifying the effectiveness show that the proposed method gives better classification results than conventional methods.
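The sub-pattern PCA (SpPCA) idea can be sketched as fitting one PCA basis per contiguous block of the input vector instead of one PCA over the whole pattern. Dimensions and data below are illustrative, not the paper's MRS spectra, and the block split is a simplifying assumption.

```python
# Sketch of sub-pattern PCA: one PCA basis per local block of features.
# Block layout, sizes, and data are invented for illustration.
import numpy as np

def sppca_fit(X: np.ndarray, n_blocks: int, n_comp: int):
    """Fit one PCA basis per contiguous sub-pattern of the columns of X."""
    blocks = np.array_split(np.arange(X.shape[1]), n_blocks)
    bases = []
    for idx in blocks:
        mu = X[:, idx].mean(axis=0)
        # principal directions = top right-singular vectors of the block
        _, _, vt = np.linalg.svd(X[:, idx] - mu, full_matrices=False)
        bases.append((idx, mu, vt[:n_comp]))
    return bases

def sppca_transform(X: np.ndarray, bases) -> np.ndarray:
    """Concatenate each block's projection into one local-feature vector."""
    feats = [(X[:, idx] - mu) @ vt.T for idx, mu, vt in bases]
    return np.hstack(feats)

rng = np.random.default_rng(0)
X = rng.normal(size=(20, 12))            # 20 samples, 12-dim patterns
bases = sppca_fit(X, n_blocks=3, n_comp=2)
Z = sppca_transform(X, bases)
print(Z.shape)                           # 3 blocks x 2 components per sample
```

In the paper's pipeline, each block's features would feed its own MLP classifier, whose per-class similarities are then combined by the fusion rule.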

Model selection method for categorical data with non-response (무응답을 가지고 있는 범주형 자료에 대한 모형 선택 방법)

  • Yoon, Yong-Hwa;Choi, Bo-Seung
    • Journal of the Korean Data and Information Science Society
    • /
    • v.23 no.4
    • /
    • pp.627-641
    • /
    • 2012
  • We consider model estimation and model selection methods for multi-way contingency table data with non-response or missing values. We adopt a hierarchical Bayesian model to handle the boundary solution problem that can occur in maximum likelihood estimation under a non-ignorable non-response model, and we address model selection to find the best model for the data. We utilize Bayes factors to handle the model selection problem in the Bayesian approach. We applied the proposed method to the pre-election survey for the 2004 Korean National Assembly race. As a result, the non-ignorable non-response model was favored, and voting intention was the most suitable variable.
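The flavor of Bayes-factor model selection for contingency tables can be illustrated with the standard BIC approximation, comparing a saturated model against independence for a single 2x2 table. This is a much simpler setting than the paper's hierarchical model with non-response; the counts are invented.

```python
# Hedged sketch: BIC-approximate log Bayes factor for a 2x2 table,
# saturated model vs. independence. Counts are invented; the paper's
# actual method uses a hierarchical Bayesian non-response model.
import math

def loglik_independence(table):
    """Log-likelihood of the independence model for a 2x2 table."""
    n = sum(map(sum, table))
    rows = [sum(r) for r in table]
    cols = [sum(c) for c in zip(*table)]
    return sum(table[i][j] * math.log(rows[i] * cols[j] / (n * n))
               for i in range(2) for j in range(2) if table[i][j] > 0)

def loglik_saturated(table):
    """Log-likelihood of the saturated model (observed proportions)."""
    n = sum(map(sum, table))
    return sum(x * math.log(x / n) for r in table for x in r if x > 0)

def log_bf_saturated_vs_indep(table):
    """log Bayes factor (BIC approximation) favouring the saturated model."""
    n = sum(map(sum, table))
    # saturated model: 3 free parameters; independence: 2
    return (loglik_saturated(table) - loglik_independence(table)
            - 0.5 * (3 - 2) * math.log(n))

table = [[40, 10], [15, 35]]     # strong row-column association
print(round(log_bf_saturated_vs_indep(table), 2))
```

A large positive value means the data strongly favor association over independence; the paper applies the same Bayes-factor logic to candidate non-response models.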

A Study on the Road Capacity Reduction Rate of Freeway Tunnel Section (고속도로 터널부 도로 용량 감소율에 관한 연구)

  • Sunhoon Kim;Dongmin Lee;Sooncheon Hwang
    • The Journal of The Korea Institute of Intelligent Transport Systems
    • /
    • v.23 no.3
    • /
    • pp.17-28
    • /
    • 2024
  • In this study, the capacities of tunnel and general sections were calculated and compared using VDS detector data, and the rate of capacity reduction in tunnel sections was analyzed by tunnel type. To compare the capacities of the tunnel and general sections, the Product Limit Method (PLM) was applied to the VDS detector data. The comparison showed that the capacity of the tunnel section decreased by about 6.5% compared with the general section. To classify tunnel types, tunnel length and the number of lanes were used as variables, and the capacity reduction rate differed between the tunnel groups classified by each criterion.
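The Product Limit Method used here is the Kaplan-Meier estimator applied to flow data: intervals followed by a traffic breakdown count as observed "failures" of capacity, while uncongested intervals are censored. The flow values below are invented; real VDS data would supply thousands of intervals.

```python
# Sketch of the Product Limit (Kaplan-Meier) capacity estimate.
# Flows followed by breakdown are events; uncongested flows are censored.
# The six observations below are invented for illustration.

def product_limit(observations):
    """observations: list of (flow_veh_h, breakdown: bool). Returns a list
    of (flow, probability that capacity exceeds this flow)."""
    obs = sorted(observations)
    n = len(obs)
    surv, curve = 1.0, []
    for i, (flow, breakdown) in enumerate(obs):
        at_risk = n - i                    # intervals with flow >= this one
        if breakdown:                      # capacity 'failed' at this flow
            surv *= (at_risk - 1) / at_risk
            curve.append((flow, surv))
    return curve

data = [(1800, False), (1950, True), (2000, False),
        (2050, True), (2100, True), (2200, False)]
for flow, s in product_limit(data):
    print(flow, round(s, 3))
```

Comparing the survival curves from tunnel and general-section detectors then yields the capacity reduction rate the study reports.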

Rough Set Analysis for Stock Market Timing (러프집합분석을 이용한 매매시점 결정)

  • Huh, Jin-Nyung;Kim, Kyoung-Jae;Han, In-Goo
    • Journal of Intelligence and Information Systems
    • /
    • v.16 no.3
    • /
    • pp.77-97
    • /
    • 2010
  • Market timing is an investment strategy used to obtain excess returns from the financial market. In general, detecting market timing means determining when to buy and sell to earn excess returns from trading. In many market timing systems, trading rules have been used as the engine that generates trade signals. On the other hand, some researchers have proposed rough set analysis as a proper tool for market timing because, by using a control function, it does not generate a trade signal when the market pattern is uncertain. The data for rough set analysis must be discretized from numeric values because rough sets only accept categorical data. Discretization searches for proper "cuts" in numeric data that determine intervals; all values that lie within an interval are transformed into the same value. In general, there are four methods for data discretization in rough set analysis: equal frequency scaling, expert's knowledge-based discretization, minimum entropy scaling, and naïve and Boolean reasoning-based discretization. Equal frequency scaling fixes a number of intervals, examines the histogram of each variable, and then determines cuts so that approximately the same number of samples falls into each interval. Expert's knowledge-based discretization determines cuts according to the knowledge of domain experts, through literature review or interviews with experts. Minimum entropy scaling recursively partitions the value set of each variable so that a local measure of entropy is optimized. Naïve and Boolean reasoning-based discretization searches for categorical values by naïvely scaling the data, then finds the optimized discretization thresholds through Boolean reasoning.
Although rough set analysis is promising for market timing, there is little research on the impact of the various data discretization methods on trading performance. In this study, we compare stock market timing models using rough set analysis with various data discretization methods. The research data used in this study are the KOSPI 200 from May 1996 to October 1998. The KOSPI 200 is the underlying index of the KOSPI 200 futures, the first derivative instrument in the Korean stock market. It is a market-value-weighted index consisting of 200 stocks selected by criteria on liquidity and their status in the corresponding industry, including manufacturing, construction, communication, electricity and gas, distribution and services, and financing. The total number of samples is 660 trading days. In addition, this study uses popular technical indicators as independent variables. The experimental results show that the most profitable method for the training sample is naïve and Boolean reasoning, but expert's knowledge-based discretization is the most profitable method for the validation sample. In addition, expert's knowledge-based discretization produced robust performance for both the training and validation samples. We also compared rough set analysis with a decision tree, using C4.5 for the comparison. The results show that rough set analysis with expert's knowledge-based discretization produced more profitable rules than C4.5.
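Equal frequency scaling, the first discretization method described above, can be sketched directly: cuts are placed so that roughly the same number of samples falls into each interval. The data values are illustrative, not the study's technical indicators.

```python
# Sketch of equal frequency scaling: choose cuts so each interval
# receives about the same number of samples. Values are invented.

def equal_frequency_cuts(values, n_intervals):
    """Return cut points giving ~equal sample counts per interval."""
    v = sorted(values)
    n = len(v)
    cuts = []
    for k in range(1, n_intervals):
        i = k * n // n_intervals
        cuts.append((v[i - 1] + v[i]) / 2)   # midpoint between neighbours
    return cuts

def discretize(x, cuts):
    """Map a numeric value to its interval index under the given cuts."""
    return sum(x > c for c in cuts)

vals = [3, 1, 4, 1, 5, 9, 2, 6, 5, 3, 5, 8]
cuts = equal_frequency_cuts(vals, 3)
print(cuts, [discretize(x, cuts) for x in vals])
```

The other three methods differ only in how the cuts are chosen (expert judgment, entropy minimization, or Boolean reasoning); the interval mapping step is the same.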

Development of a Small Animal Positron Emission Tomography Using Dual-layer Phoswich Detector and Position Sensitive Photomultiplier Tube: Preliminary Results (두층 섬광결정과 위치민감형광전자증배관을 이용한 소동물 양전자방출단층촬영기 개발: 기초실험 결과)

  • Jeong, Myung-Hwan;Choi, Yong;Chung, Yong-Hyun;Song, Tae-Yong;Jung, Jin-Ho;Hong, Key-Jo;Min, Byung-Jun;Choe, Yearn-Seong;Lee, Kyung-Han;Kim, Byung-Tae
    • The Korean Journal of Nuclear Medicine
    • /
    • v.38 no.5
    • /
    • pp.338-343
    • /
    • 2004
  • Purpose: The purpose of this study was to develop a small animal PET using a dual-layer phoswich detector to minimize the parallax error that degrades spatial resolution at the outer part of the field-of-view (FOV). Materials and Methods: The simulation tool GATE (Geant4 Application for Tomographic Emission) was used to derive optimal parameters for the small PET, and the PET was built using those parameters. Lutetium Oxyorthosilicate (LSO) and Lutetium-Yttrium Aluminate-Perovskite (LuYAP) were used to construct the dual-layer phoswich crystal. 8×8 arrays of LSO and LuYAP pixels, 2 mm × 2 mm × 8 mm in size, were coupled to a 64-channel position-sensitive photomultiplier tube. The system consisted of 16 detector modules arranged in a ring configuration (ring inner diameter 10 cm, FOV of 8 cm). The data from the phoswich detector modules were fed into an ADC board in the data acquisition and preprocessing PC via sockets, a decoder block, an FPGA board, and a bus board. These were linked to the master PC that stored the event data on hard disk. Results: In a preliminary test of the system, reconstructed images were obtained using a pair of detectors, and sensitivity and spatial resolution were measured. Spatial resolution was 2.3 mm FWHM and sensitivity was 10.9 cps/μCi at the center of the FOV. Conclusion: The radioactivity distribution patterns were accurately represented in the sinograms and images obtained by the PET with a pair of detectors. These preliminary results indicate that development of a high-performance small animal PET is promising.

Skeleton Code Generation for Transforming an XML Document with DTD using Metadata Interface (메타데이터 인터페이스를 이용한 DTD 기반 XML 문서 변환기의 골격 원시 코드 생성)

  • Choe Gui-Ja;Nam Young-Kwang
    • The KIPS Transactions:PartD
    • /
    • v.13D no.4 s.107
    • /
    • pp.549-556
    • /
    • 2006
  • In this paper, we propose a system for generating skeleton programs that directly transform an XML document into another document whose structure is defined by a target DTD, within a GUI environment. With the generated code, users can easily update or insert their own code so that they can convert the document the way they want, and the code can be connected with other classes or library files. Since most currently available code generation systems or methods for transforming XML documents use XSLT or XQuery, it is very difficult or impossible for users to manipulate the source code for further updates or refinements. Because the code generated by our system follows the XPaths of the target DTD, the resulting code is quite readable. The code generation procedure is simple: once the user maps the related elements, represented as trees in the GUI interface, the source document is transformed into the target document and its corresponding Java source program is generated, where the DTD is given or extracted automatically from the XML documents by parsing. Mappings are classified as 1:1, 1:N, and N:1, according to the structure and semantics of the DTD elements. Functions for changing the structure of elements designated by the user are amalgamated into the metadata interface. A real-world example of transforming articles written in XML into a bibliographical XML document is shown, with the transformed result and its code.
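The shape of a generated 1:1 mapping skeleton can be sketched as below: each target path is filled from a mapped source path. The element names, paths, and mapping table are invented examples, and the sketch is in Python for brevity, whereas the system itself emits Java.

```python
# Illustrative sketch of what a 1:1 element-mapping skeleton does:
# copy text from each mapped source path into the target structure.
# Paths and the mapping table are invented, not the system's output.
import xml.etree.ElementTree as ET

# target path -> source path (a simplified version of the GUI tree mapping)
MAPPING = {
    "record/title": "article/name",
    "record/year": "article/published/year",
}

def transform(src_xml: str, root_tag: str = "record") -> str:
    src = ET.fromstring(src_xml)
    out = ET.Element(root_tag)
    for tgt_path, src_path in MAPPING.items():
        node = src.find(src_path.split("/", 1)[1])  # path below source root
        child = out
        for tag in tgt_path.split("/")[1:]:         # build the target path
            child = ET.SubElement(child, tag)
        child.text = node.text if node is not None else None
    return ET.tostring(out, encoding="unicode")

doc = ("<article><name>Rough Sets</name>"
       "<published><year>2010</year></published></article>")
print(transform(doc))
```

1:N and N:1 mappings would add splitting or concatenation logic at the `child.text` assignment, which is exactly where users would insert their own code in the generated skeleton.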