• Title/Summary/Keyword: point grid method


Analytical Evaluation of Behavior of Precast PSC Box Curve Bridge Based on Design Variables (프리캐스트 PSC 중공 박스 곡선교의 설계변수에 관한 해석적 거동 평가)

  • Kim, Sung-Bae;Kim, Sung-Jae;Park, Jeong-Cheon;Uhm, Ki-Ha;Kim, Jang-Ho Jay
    • Journal of the Korea Concrete Institute
    • /
    • v.26 no.3
    • /
    • pp.267-275
    • /
    • 2014
  • Recently, construction of curved bridges has increased, and researchers have accordingly performed analytical studies on PSC curved bridges. However, the grid analysis method most commonly used in the construction industry is not adequate for a precise behavior evaluation of curved PSC bridges. Therefore, precise finite element analyses considering the relevant design variables were performed, using 3D solid elements and bar elements, to establish a basis for the design method of curved PSC bridges. The variables evaluated in this analysis were the number of girders, the loading point, the cross-section shape, and the change of prestressing force. The results show that the load carrying capacity of the 3-girder type bridge is 200% of that of the 2-girder type, and that applying the load on the outer girder makes both the load resistance capacity and the deflection deviation between the two girders smaller. The structural capacity of the bridge improves when the section size is increased, but the efficiency of this measure is insufficient compared to that of changing the prestressing forces. The prestressing-force results show that the camber and the load carrying capacity increase linearly as the PS force is increased. Moreover, when the PS force applied to the outer girder is increased relative to that of the inner girder, the deviation of deflection between the girders decreases, thereby enhancing the stability of the bridge.

The development of parallel computation method for the fire-driven-flow in the subway station (도시철도역사에서 화재유동에 대한 병렬계산방법연구)

  • Jang, Yong-Jun;Lee, Chang-Hyun;Kim, Hag-Beom;Park, Won-Hee
    • Proceedings of the KSR Conference
    • /
    • 2008.06a
    • /
    • pp.1809-1815
    • /
    • 2008
  • This study simulated the fire-driven flow of an underground station using a parallel processing method. The fire analysis program FDS (Fire Dynamics Simulator), which uses LES (Large Eddy Simulation), was run on a 6-node parallel cluster, each node equipped with two 3.0 GHz CPUs. The simulation model was based on the Kwangju Geumnam underground subway station, and the total simulated time was set to 600 s. First, the whole underground passage was divided into 1 mesh and 8 meshes in order to compare the parallel computation of a single CPU and multiple CPUs. With a number of grid points (15×10⁶) beyond what a single CPU can handle, the fire-driven flow from the center of the platform and from the train itself was analyzed. As a result, there was almost no difference between the single-CPU results and the multi-CPU ones. A 3×10⁶ grid-point case was employed to test the computing time: computations with 2 CPUs and 7 CPUs were two times and five times faster than with 1 CPU, respectively. This study confirmed that the limitations of a single CPU can be overcome by using parallel computation.
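The reported ratios (2 CPUs about twice as fast, 7 CPUs about five times as fast) correspond to standard speedup and parallel-efficiency figures. A minimal sketch, using hypothetical wall-clock times since the abstract reports only the ratios:

```python
def speedup(t_serial, t_parallel):
    """Speedup S = T1 / Tp for the same computation."""
    return t_serial / t_parallel

def efficiency(t_serial, t_parallel, n_cpus):
    """Parallel efficiency E = S / n (1.0 means ideal scaling)."""
    return speedup(t_serial, t_parallel) / n_cpus

# Hypothetical timings chosen to reproduce the reported ratios.
t1 = 100_000.0        # 1 CPU
t2 = t1 / 2.0         # 2 CPUs: ~2x faster, as reported
t7 = t1 / 5.0         # 7 CPUs: ~5x faster, as reported

print(speedup(t1, t2), efficiency(t1, t2, 2))  # ideal scaling at 2 CPUs
print(speedup(t1, t7), efficiency(t1, t7, 7))  # sublinear at 7 CPUs (~0.71)
```

The drop in efficiency at 7 CPUs is the usual signature of communication overhead between mesh partitions.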


Modeling Three-dimensional Free Surface Flow around Thin Wall Incorporation Hydrodynamic Pressure on δ-coordinate (δ-좌표계에서 동수압 계산 수중벽체 인근흐름 수치모형실험)

  • Kim, Hyo-Seob;Yoo, Ho-Jun;Jin, Jae-Yul;Jang, Chang-Hwan;Lee, Jung-Su;Baek, Seung-Won
    • Journal of Wetlands Research
    • /
    • v.16 no.3
    • /
    • pp.327-336
    • /
    • 2014
  • Submerged thin walls are an extreme case of submerged rectangular blocks and could be used for many purposes in rivers or coastal zones, e.g. against tsunamis. To understand the flow characteristics, including the flow and pressure fields, around a specific submerged thin wall, a numerical model was applied which includes computation of hydrodynamic pressure on the σ-coordinate. The σ-coordinate has strong merits for simulation of subcritical flow over mildly sloped beds; on the other hand, it is quite poor at treating sharp structures on the bed. There have been a few trials by other researchers to incorporate dynamic pressure in the σ-coordinate. One of the previous approaches includes a process of solving the Poisson equation; however, this method involves many high-order terms and requires long CPU time for simulation. Another method, SOLA, was developed by Hirt et al. for computation of dynamic pressure, but it was valid for rectilinear grid systems only. The previous SOLA scheme was modified for the σ-coordinate for the present purpose and adopted in a model system, CST3D. The computed flow field shows reasonable behaviour, including vorticity that is much stronger near the structure than upstream and downstream of it. The model was verified against laboratory experiments in a 2DV flume, where time-averaged flow vectors were measured using a one-dimensional electromagnetic velocimeter. The computed flow field agrees with the measured flow field within 10% error in terms of speed at 5 profiles. It is thought that the modified SOLA scheme is useful for the σ-coordinate system.
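The SOLA idea referenced above is to sweep the grid cell by cell, correcting each cell's pressure until the discrete divergence vanishes. The sketch below is a simplified Cartesian staggered-grid version with free boundary faces, not the σ-coordinate modification developed in the paper:

```python
import numpy as np

def sola_pressure_iteration(u, v, dx, dy, dt, omega=1.7, tol=1e-6, max_iter=2000):
    """SOLA-style iterative pressure solve on a 2D staggered grid.

    u: (ny, nx+1) x-face velocities; v: (ny+1, nx) y-face velocities.
    Each sweep adjusts the cell pressure by an over-relaxed correction
    proportional to the cell's divergence, and pushes the correction
    back onto the four cell faces, until all divergences are ~0.
    """
    ny, nx = u.shape[0], v.shape[1]
    p = np.zeros((ny, nx))
    # Over-relaxed correction coefficient (omega in (1, 2) accelerates it).
    beta = omega / (2.0 * dt * (1.0 / dx**2 + 1.0 / dy**2))
    for _ in range(max_iter):
        max_div = 0.0
        for j in range(ny):
            for i in range(nx):
                div = ((u[j, i + 1] - u[j, i]) / dx
                       + (v[j + 1, i] - v[j, i]) / dy)
                max_div = max(max_div, abs(div))
                dp = -beta * div
                p[j, i] += dp
                # Distribute the pressure correction onto the cell faces.
                u[j, i]     -= dt * dp / dx
                u[j, i + 1] += dt * dp / dx
                v[j, i]     -= dt * dp / dy
                v[j + 1, i] += dt * dp / dy
        if max_div < tol:
            break
    return p, u, v
```

The Poisson-equation alternative mentioned in the abstract solves for all pressures at once; SOLA trades that for cheap local sweeps, which is what made it attractive to adapt here.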

Experimental Study of Estimating the Optimized Parameters in OI (서남해안 관측자료를 활용한 OI 자료동화의 최적 매개변수 산정 연구)

  • Gu, Bon-Ho;Woo, Seung-Buhm;Kim, Sangil
    • Journal of Korean Society of Coastal and Ocean Engineers
    • /
    • v.31 no.6
    • /
    • pp.458-467
    • /
    • 2019
  • The purpose of this study is to suggest optimized parameters for OI (Optimal Interpolation) through an experimental study. The observations to which optimal interpolation is applied are ADCP (Acoustic Doppler Current Profiler) data from the southwestern sea of Korea, and FVCOM (Finite Volume Coastal Ocean Model) is used as the barotropic model. OI estimates the gain matrix that minimizes the analysis error from the background error covariance and the observation error covariance using the least-squares method. The scaling factor and the correlation radius are very important parameters for OI; they are used to calculate the weight between observation data and model data over the model domain. The optimized parameters were found from the experiments using the Taylor diagram. Each observation point requires its own optimized parameters for the best assimilation. Also, a highly accurate numerical model implies a low background error covariance, which in turn decreases all of the parameters in OI. In conclusion, this study is expected to provide a foundation for future research on the selection of ocean observation points and the construction of ocean prediction systems.
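The OI update described above can be illustrated with a minimal sketch. The Gaussian covariance model, the 1D grid, and the parameter names are illustrative assumptions standing in for the paper's scaling factor and correlation radius; the gain-matrix formula itself is the standard least-squares form:

```python
import numpy as np

def oi_analysis(xb, y, H, grid_pts, scale, radius, obs_var):
    """One optimal-interpolation update (minimal illustrative form).

    Background error covariance B is modeled as scale * exp(-(d/radius)^2)
    between grid points (scale ~ the scaling factor, radius ~ the
    correlation radius); observation error covariance R is diagonal.
    Gain matrix: K = B H^T (H B H^T + R)^-1; analysis: xa = xb + K(y - Hxb).
    """
    d = np.abs(grid_pts[:, None] - grid_pts[None, :])
    B = scale * np.exp(-(d / radius) ** 2)        # background error cov.
    R = obs_var * np.eye(len(y))                  # observation error cov.
    K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)  # gain matrix
    return xb + K @ (y - H @ xb)                  # analysis field

# Illustrative use: one observation of 1.0 at grid point 5, background 0.
grid = np.linspace(0.0, 10.0, 11)
H = np.zeros((1, 11)); H[0, 5] = 1.0
xa = oi_analysis(np.zeros(11), np.array([1.0]), H, grid,
                 scale=1.0, radius=2.0, obs_var=0.1)
```

A larger radius spreads the observation's influence to more grid points; a smaller obs_var pulls the analysis closer to the observation, which is exactly the weighting behavior the abstract tunes.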

A Comparison of the Gravimetric Geoid and the Geometric Geoid Using GPS/Leveling Data (GPS/Leveling 데이터를 이용한 기하지오이드와 중력지오이드의 비교 분석)

  • Kim, Young-Gil;Choi, Yun-Soo;Kwon, Jay-Hyoun;Hong, Chang-Ki
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography
    • /
    • v.28 no.2
    • /
    • pp.217-222
    • /
    • 2010
  • The geoid is the level surface that closely approximates mean sea level and is usually used as the origin of the vertical datum. For the computation of the geoid, various sources of gravity measurements are used in South Korea and, as a consequence, the geoid models may show different results. However, only a limited analysis has been performed due to a lack of control data, namely GPS/Leveling data. Therefore, in this study, the gravimetric geoids are compared with the geometric geoid obtained through GPS/Leveling procedures. The gravimetric geoids are categorized into the geoid from airborne gravimetry, the geoid from terrestrial gravimetry, the NGII geoid (published by the National Geographic Information Institute), and the NORI geoid (published by the National Oceanographic Research Institute). For the analysis, the geometric geoid is obtained at each unified national control point and the difference between the geometric and gravimetric geoids is computed. Also, the geoid height data are gridded on a regular 10×10 km grid so that the FFT method can be applied to analyze the geoid height differences in the frequency domain. The results show no significant differences in standard deviation when the geoids from airborne and terrestrial gravimetry are compared with the geometric geoid, while relatively large differences appear when the NGII and NORI geoids are compared with the geometric geoid. When the NGII and NORI geoids are analyzed in the frequency domain, the deviations occur in the long-wavelength domain.
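The grid-then-FFT step can be sketched as follows. The function name and radial-frequency bookkeeping are illustrative; it assumes the geoid-height differences are already gridded at the 10 km spacing described above, and returns a power spectrum in which long-wavelength deviations (like those found for the NGII and NORI geoids) appear as power at low radial frequency:

```python
import numpy as np

def geoid_diff_spectrum(diff_grid, spacing_km=10.0):
    """2D power spectrum of gridded geoid-height differences.

    diff_grid: (ny, nx) array of differences on a regular spacing_km grid.
    Returns (fr, power): radial frequency in cycles/km and spectral power,
    both (ny, nx), so wavelength = 1/fr at any spectral peak.
    """
    g = diff_grid - diff_grid.mean()          # remove the bias (DC) term
    F = np.fft.fftshift(np.fft.fft2(g))
    power = np.abs(F) ** 2
    fx = np.fft.fftshift(np.fft.fftfreq(g.shape[1], d=spacing_km))
    fy = np.fft.fftshift(np.fft.fftfreq(g.shape[0], d=spacing_km))
    FX, FY = np.meshgrid(fx, fy)
    return np.hypot(FX, FY), power
```

For example, a difference field dominated by a 160 km sinusoid produces its spectral peak at a radial frequency of 1/160 cycles/km.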

Prediction of a hit drama with a pattern analysis on early viewing ratings (초기 시청시간 패턴 분석을 통한 대흥행 드라마 예측)

  • Nam, Kihwan;Seong, Nohyoon
    • Journal of Intelligence and Information Systems
    • /
    • v.24 no.4
    • /
    • pp.33-49
    • /
    • 2018
  • The impact of a TV drama's success on TV ratings and channel promotion effectiveness is very high, and its cultural and business impact has also been demonstrated through the Korean Wave. Therefore, early prediction of a blockbuster TV drama is very important from the strategic perspective of the media industry. Previous studies have tried to predict the audience ratings and success of dramas by various methods; however, most have made simple predictions using intuitive factors such as the main actor and the time slot, which limits their predictive power. In this study, we propose a model that predicts the popularity of a drama by analyzing customers' viewing patterns on the basis of various theories. This is not only a theoretical contribution but also a practical one, since the model can be used by actual broadcasting companies. We collected data on 280 TV mini-series dramas broadcast over terrestrial channels for 10 years, from 2003 to 2012. From these data, we selected the most highly and least highly ranked 45 TV dramas and analyzed their viewing patterns in 11 steps. The assumptions and conditions for modeling are based on existing studies, on the opinions of actual broadcasters, and on data mining techniques. We then developed a prediction model by measuring the viewing-time distance (difference) using Euclidean and correlation methods, which we term similarity (the sum of distances). Through this similarity measure, we predicted the success of dramas from the viewers' initial viewing-time pattern distribution over episodes 1-5. To confirm how sensitive the model is to the measurement method, various distance measures were applied and the robustness of the model was checked. Once the model was established, we refined it further using a grid search.
Furthermore, we classified viewers who had watched a TV drama for more than 70% of its total airtime as "passionate viewers" when a new drama was broadcast, and compared the percentage of passionate viewers between the most highly ranked and the least highly ranked dramas, so that the possibility of a blockbuster TV mini-series could be determined. We find that the initial viewing-time pattern is the key factor for predicting blockbuster dramas: using the initial viewing-time pattern analysis, our model correctly classified blockbuster dramas with 75.47% accuracy. This paper thus shows a high prediction rate while suggesting an audience measurement method different from existing ones. Currently, broadcasters rely heavily on a few famous actors, the so-called star system, and face more severe competition than ever due to rising production costs, a long-term recession, and aggressive investment by comprehensive programming channels and large corporations; all are in a financially difficult situation. The basic revenue model of these broadcasters is advertising, and the execution of advertising is based on audience ratings as a basic index. The drama market carries demand uncertainty, since success is difficult to forecast given the nature of the commodity, while dramas contribute substantially to the financial success of a broadcaster's various contents; the risk of failure therefore needs to be minimized. By analyzing the distribution of initial viewing time, our approach can give practical help in establishing the response strategies (organization, marketing, story changes, etc.) of the related companies. We also found that audience behavior is crucial to the success of a program, and we define viewing loyalty as a measure of how enthusiastically a program is watched.
By calculating the loyalty of these passionate viewers, we can successfully predict the success of a program. This way of calculating loyalty can also be applied to other platforms, and to marketing efforts such as highlights, script previews, making-of videos, characters, games, and other marketing projects.
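The similarity measure described above, the sum of Euclidean or correlation distances from a new drama's early viewing-time pattern to each reference group, can be sketched as follows. The vector representation of the 11-step patterns and the toy data are illustrative assumptions, not the paper's actual dataset:

```python
import numpy as np

def euclidean_distance(a, b):
    """Straight-line distance between two viewing-time pattern vectors."""
    return float(np.linalg.norm(np.asarray(a, float) - np.asarray(b, float)))

def correlation_distance(a, b):
    """1 - Pearson correlation: 0 when the patterns move together."""
    return 1.0 - float(np.corrcoef(a, b)[0, 1])

def similarity_to_group(pattern, group_patterns, dist=euclidean_distance):
    """The paper's 'similarity': sum of distances to a reference group."""
    return sum(dist(pattern, g) for g in group_patterns)

def predict_hit(pattern, hit_patterns, flop_patterns, dist=euclidean_distance):
    """Classify as a hit if the pattern is nearer (in summed distance)
    to the hit group than to the flop group."""
    return (similarity_to_group(pattern, hit_patterns, dist)
            < similarity_to_group(pattern, flop_patterns, dist))
```

Swapping `dist=correlation_distance` into `predict_hit` is one way to run the sensitivity check the abstract mentions, since it compares pattern shape rather than magnitude.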

Numerical Test for the 2D Q Tomography Inversion Based on the Stochastic Ground-motion Model (추계학적 지진동모델에 기반한 2D Q 토모그래피 수치모델 역산)

  • Yun, Kwan-Hee;Suh, Jung-Hee
    • Geophysics and Geophysical Exploration
    • /
    • v.10 no.3
    • /
    • pp.191-202
    • /
    • 2007
  • To identify the detailed attenuation structure of the southern Korean Peninsula, a numerical test was conducted for the Q tomography inversion to be applied to the dataset accumulated up to 2005. In particular, the stochastic point-source ground-motion model (STGM model; Boore, 2003) was adopted for the 2D Q tomography inversion for direct application to simulating strong ground motion. Simultaneous inversion of the STGM model parameters with a regional single-Q model was performed to evaluate the source and site effects needed to generate an artificial dataset for the numerical test. The artificial dataset consists of simulated Fourier spectra that resemble the real data in the magnitude-distance-frequency-error distribution, except that the regional single-Q model is replaced with a checkerboard pattern of laterally varying high and low Q models. The total number of Q blocks used for the checkerboard test was 75 (block size 35×44 km²); a Q functional form of Q₀f^η (Q₀ = 100 or 500, 0.0 < η < 1.0) was assigned to each Q block. The checkerboard test was implemented in three steps. In the first step, initial Q-values for the 75 blocks were estimated. In the second step, the site amplification function was estimated using, as the initial guess of A(f), the mean site amplification function for each site class (Yun and Suh, 2007). The last step inverts the tomographic Q-values of the 75 blocks based on the results of the first and second steps. The checkerboard test demonstrated that Q-values can be robustly estimated by the 2D Q tomography inversion method even in the presence of source and site effects perturbed from the true input model.
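The checkerboard input model described above can be sketched as follows. The 5×15 block layout is an assumption (the abstract states only that 75 blocks of 35×44 km² were used), and the exponent value is illustrative within the stated 0 < η < 1 range:

```python
import numpy as np

def checkerboard_q(n_rows, n_cols, q_low=100.0, q_high=500.0, eta=0.5):
    """Checkerboard of Q0 values with Q(f) = Q0 * f**eta per block.

    Alternating low/high Q0 (100 or 500, as in the abstract) over a grid
    of laterally varying blocks. Returns the Q0 grid and a function that
    evaluates Q(f) for every block at the given frequencies.
    """
    # Cells where (row + col) is even get q_low, odd cells get q_high.
    idx = np.add.outer(np.arange(n_rows), np.arange(n_cols)) % 2
    q0 = np.where(idx == 0, q_low, q_high)
    return q0, (lambda f: q0[..., None] * np.asarray(f, float) ** eta)

# 75 blocks total, matching the paper's count (layout assumed).
q0, q_of_f = checkerboard_q(5, 15)
```

Recovering this alternating pattern from the inverted Q-values is what demonstrates the resolution of the tomography, which is the point of the test in the abstract.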

GIS based Development of Module and Algorithm for Automatic Catchment Delineation Using Korean Reach File (GIS 기반의 하천망분석도 집수구역 자동 분할을 위한 알고리듬 및 모듈 개발)

  • PARK, Yong-Gil;KIM, Kye-Hyun;YOO, Jae-Hyun
    • Journal of the Korean Association of Geographic Information Studies
    • /
    • v.20 no.4
    • /
    • pp.126-138
    • /
    • 2017
  • Recently, national interest in the environment has been increasing, and to deal with water-environment issues swiftly and accurately, the demand for analyzing water environment data with a GIS is growing. To meet this growing demand, a spatial-network-based stream network analysis map (Korean Reach File; KRF) supporting spatial analysis of water environment data was developed and is being provided. However, it is difficult to delineate catchment areas, which are the basis for supplying spatial data and related information frequently required by users, for example when establishing remediation measures against water pollution accidents. Therefore, a computer program was developed in this study. The development process included designing a delineation method and developing an algorithm and modules. A DEM (Digital Elevation Model) and an FDR (Flow Direction) grid were used as the major data to automatically delineate catchment areas. The delineation algorithm was developed in three stages: catchment area grid extraction, boundary point extraction, and boundary line division. Also, an add-in catchment delineation module based on ESRI ArcGIS was developed in consideration of the productivity and utility of the program. Using the developed program, catchment areas were delineated and compared to those currently used by the government. The results showed that catchment areas were delineated efficiently using the digital elevation data. Especially in regions with clear topographic slopes, they were delineated accurately and swiftly. Although in some regions with flat paddy fields, downtown areas, or well-organized drainage facilities the catchment areas were not segmented accurately, the program definitely reduces the processing time needed to delineate existing catchment areas.
In the future, more effort should be made to enhance the current algorithm to accommodate higher-precision digital elevation data and to reduce the calculation time for processing large data volumes.
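The "catchment area grid extraction" stage can be sketched as an upstream trace through a D8 flow-direction grid: starting from an outlet cell, collect every cell whose flow direction points into a cell already known to drain to the outlet. The D8 direction codes are the standard ESRI encoding, but the function itself is a simplified illustration, not the paper's module:

```python
from collections import deque

# ESRI D8 flow-direction codes -> (drow, dcol) of the downstream cell.
D8 = {1: (0, 1), 2: (1, 1), 4: (1, 0), 8: (1, -1),
      16: (0, -1), 32: (-1, -1), 64: (-1, 0), 128: (-1, 1)}

def delineate_catchment(fdr, outlet):
    """Return the set of (row, col) cells draining into `outlet`.

    Breadth-first search upstream: a neighbor belongs to the catchment
    if its D8 code points at a cell already in the catchment.
    """
    nrows, ncols = len(fdr), len(fdr[0])
    catchment = {outlet}
    frontier = deque([outlet])
    while frontier:
        r, c = frontier.popleft()
        for code, (dr, dc) in D8.items():
            ur, uc = r - dr, c - dc  # cell that would flow here via `code`
            if 0 <= ur < nrows and 0 <= uc < ncols:
                if fdr[ur][uc] == code and (ur, uc) not in catchment:
                    catchment.add((ur, uc))
                    frontier.append((ur, uc))
    return catchment
```

The boundary-point-extraction and boundary-line-division stages would then trace the edge of this cell set; the flat-terrain failure cases noted above correspond to ambiguous D8 directions in the FDR grid.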

A Mobile Landmarks Guide : Outdoor Augmented Reality based on LOD and Contextual Device (모바일 랜드마크 가이드 : LOD와 문맥적 장치 기반의 실외 증강현실)

  • Zhao, Bi-Cheng;Rosli, Ahmad Nurzid;Jang, Chol-Hee;Lee, Kee-Sung;Jo, Geun-Sik
    • Journal of Intelligence and Information Systems
    • /
    • v.18 no.1
    • /
    • pp.1-21
    • /
    • 2012
  • In recent years, the mobile phone has undergone an extremely fast evolution. It is equipped with high-quality color displays, high-resolution cameras, and real-time accelerated 3D graphics, and further features include a GPS sensor, a digital compass, and so on. This evolution significantly helps application developers harness the power of smartphones to create rich environments offering a wide range of services and exciting possibilities. To date, outdoor mobile AR research includes many popular location-based AR services, such as Layar and Wikitude. These systems have a major limitation: the AR content is hardly ever overlaid precisely on the real target. Other research concerns context-based AR services using image recognition and tracking, in which the AR content is precisely overlaid on the real target; however, real-time performance is restricted by the retrieval time, and the approach is hard to implement over a large-scale area. In our work, we combine the advantages of location-based AR with those of context-based AR: the system first finds surrounding landmarks easily and then performs recognition and tracking on them. The proposed system mainly consists of two major parts: a landmark browsing module and an annotation module. In the landmark browsing module, users can view augmented virtual information (information media) such as text, pictures, and video in their smartphone viewfinder when they point the smartphone at a certain building or landmark. For this, a landmark recognition technique is applied. SURF point-based features are used in the matching process due to their robustness. To ensure that the image retrieval and matching processes are fast enough for real-time tracking, we exploit the contextual device information (GPS and digital compass) to select from the database only the nearest landmarks in the pointed direction. The queried image is matched only against this selected data, so the matching speed is significantly increased.
The second part is the annotation module. Instead of only viewing the augmented information media, users can create virtual annotations based on linked data. Full knowledge of the landmark is not required; users can simply look for the appropriate topic by searching for it with a keyword in the linked data. This helps the system find the target URI in order to generate the correct AR content. On the other hand, in order to recognize target landmarks, images of the selected buildings or landmarks are captured from different angles and distances, a procedure similar to building a connection between the real building and the virtual information in the Linked Open Data. In our experiments, the search range in the database is reduced by clustering images into groups according to their coordinates: a grid-based clustering method and the user's location information are used to restrict the retrieval range. Whereas existing research using clustering and GPS information reports a retrieval time of around 70~80 ms, experimental results show that our approach reduces the retrieval time to around 18~20 ms on average; the total processing time is therefore reduced from 490~540 ms to 438~480 ms. The performance improvement becomes more obvious as the database grows, demonstrating that the proposed system is efficient and robust in many cases.
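The grid-based clustering used to restrict the retrieval range can be sketched as follows: landmark images are bucketed into latitude/longitude grid cells, and a query only considers images in the user's cell and its eight neighbors. The cell size and function names are illustrative assumptions, not values from the paper:

```python
from collections import defaultdict

CELL_DEG = 0.001  # grid cell size in degrees (illustrative, ~100 m)

def build_grid_index(images, cell_deg=CELL_DEG):
    """Bucket (image_id, lat, lon) records into lat/lon grid cells."""
    grid = defaultdict(list)
    for img_id, lat, lon in images:
        cell = (int(lat // cell_deg), int(lon // cell_deg))
        grid[cell].append(img_id)
    return grid

def candidates_near(grid, lat, lon, cell_deg=CELL_DEG):
    """Images in the user's cell and its 8 neighbors: the only ones
    that need SURF matching, which is what cuts the retrieval time."""
    r, c = int(lat // cell_deg), int(lon // cell_deg)
    out = []
    for dr in (-1, 0, 1):
        for dc in (-1, 0, 1):
            out.extend(grid.get((r + dr, c + dc), []))
    return out
```

Restricting SURF matching to this small candidate set (further filtered by compass orientation in the paper) is what brings the retrieval time down as the database grows.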