• Title/Summary/Keyword: 실험 및 수치계산 (Experiments and Numerical Computation)


Development and evaluation of a 2-dimensional land surface flood analysis model using uniform square grid (정형 사각 격자 기반의 2차원 지표면 침수해석 모형 개발 및 평가)

  • Choi, Yun-Seok; Kim, Joo-Hun; Choi, Cheon-Kyu; Kim, Kyung-Tak
    • Journal of Korea Water Resources Association / v.52 no.5 / pp.361-372 / 2019
  • The purpose of this study is to develop a two-dimensional land surface flood analysis model based on a uniform square grid, using the governing equations without the convective acceleration term in the momentum equation. The finite volume method and an implicit method were applied for spatial and temporal discretization, and parallel computation on the CPU was applied to reduce the execution time of the model. To verify the developed model, it was compared with the analytical solution, and its behavior was evaluated through numerical experiments in a virtual domain. In addition, inundation analyses were performed at different spatial resolutions for the Janghowon area in Korea and the Sebou river area in Morocco, and the results were compared with those of the CAESAR-Lisflood (CLF) model. In the model verification, the simulation results agreed well with the analytical solution, and the flow analyses in the virtual domain were also evaluated to be reasonable. The inundation simulations of this study and the CLF model were similar to each other in both the Janghowon and Sebou river areas, and for the Janghowon area the simulation result was also similar to the flooded area of the flood hazard map. The parts where the simulation results of this study and the CLF model differed were compared and evaluated for each case. These results suggest that the proposed model can simulate flooding in a floodplain well. However, when applying the model presented in this study to flood analysis, its characteristics and limitations with respect to domain composition, governing equations, and numerical method should be fully considered.
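Dropping the convective acceleration term from the momentum equation yields the so-called local inertial formulation, which also underlies the CAESAR-Lisflood model used for comparison. As a hedged illustration of these governing equations only, the sketch below implements the explicit local inertial update of Bates et al. (2010) on a uniform square grid in the x direction (the y direction is analogous); the paper's actual model uses a finite volume, implicit scheme with CPU parallelization, and the grid size, Manning roughness, and time step here are arbitrary assumptions.

```python
import numpy as np

g, n_man, dx, dt = 9.81, 0.03, 10.0, 0.5   # gravity, Manning n, cell size (m), step (s)

def step_x(h, z, qx):
    """One x-direction update of the local inertial shallow-water equations.

    h, z : (ny, nx) water depth and bed elevation; qx : (ny, nx-1) unit-width
    discharge on the faces between horizontally adjacent cells.
    """
    eta = h + z                                    # water surface elevation
    hf = np.maximum(h[:, 1:], h[:, :-1])           # effective depth at faces
    grad = (eta[:, 1:] - eta[:, :-1]) / dx         # water surface gradient
    num = qx - g * hf * dt * grad                  # inertia + pressure terms
    den = 1.0 + g * dt * n_man**2 * np.abs(qx) / np.maximum(hf, 1e-6)**(7.0 / 3.0)
    qx = np.where(hf > 1e-6, num / den, 0.0)       # friction treated implicitly
    dhdt = np.zeros_like(h)                        # continuity: dh/dt = -dq/dx
    dhdt[:, :-1] -= qx / dx                        # outflow through right face
    dhdt[:, 1:] += qx / dx                         # inflow through left face
    return np.maximum(h + dt * dhdt, 0.0), qx      # clip tiny negative depths

# Toy dam-break test on a flat bed: a 2 m column of water spreads rightward.
ny, nx = 3, 50
z = np.zeros((ny, nx))
h = np.zeros((ny, nx)); h[:, :10] = 2.0
qx = np.zeros((ny, nx - 1))
for _ in range(200):
    h, qx = step_x(h, z, qx)
print(h[1, ::10].round(3))
```

Treating only the friction term implicitly, as above, keeps the update cheap per cell; the paper's fully implicit discretization instead trades per-step cost for larger stable time steps.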

Anti-obesity effect of radish leaf extracts on high fat diet-induced obesity in mice (고지방식이를 통해 비만이 유발된 마우스에서 무청 추출물의 항비만 효과)

  • Lee, Yun-Seong; Seo, Young Ho; Kim, Ji Yong
    • Korean Journal of Food Science and Technology / v.54 no.3 / pp.297-305 / 2022
  • The goal of this study was to evaluate the anti-obesity effects of radish leaf extract (MU-C) and radish leaf extract with 3% citric acid (MU-CA) in high-fat diet (HFD)-induced obese C57BL/6 mice. The effects of the radish leaf extracts on adipogenesis were also investigated using 3T3-L1 adipocytes. As determined by Oil Red O staining, MU-C inhibited adipogenesis in 3T3-L1 adipocytes. Four-week-old male C57BL/6 mice were fed an HFD for 6 weeks and then treated with the radish leaf extracts (500 mg/kg, p.o.) for 6 weeks. The serum levels of aspartate aminotransferase, alanine aminotransferase, total cholesterol, triglyceride, and low-density lipoprotein cholesterol in the mice were then measured using an automatic chemical analyzer and enzyme-linked immunosorbent assays. Administration of MU-C significantly reduced fat weight compared with the HFD controls, and, as confirmed by histopathologic analysis, adipose tissue size markedly decreased in mice treated with MU-C. Therefore, this study could provide a basis for investigating the clinical use of MU-C as an agent for preventing obesity.

A Comparative Study of Subset Construction Methods in OSEM Algorithms using Simulated Projection Data of Compton Camera (모사된 컴프턴 카메라 투사데이터의 재구성을 위한 OSEM 알고리즘의 부분집합 구성법 비교 연구)

  • Kim, Soo-Mee; Lee, Jae-Sung; Lee, Mi-No; Lee, Ju-Hahn; Kim, Joong-Hyun; Kim, Chan-Hyeong; Lee, Chun-Sik; Lee, Dong-Soo; Lee, Soo-Jin
    • Nuclear Medicine and Molecular Imaging / v.41 no.3 / pp.234-240 / 2007
  • Purpose: In this study we propose a block-iterative method for reconstructing Compton scattered data. This study shows that the well-known expectation maximization (EM) approach, along with its accelerated version based on the ordered subsets principle, can be applied to the problem of image reconstruction for the Compton camera. This study also compares several methods of constructing subsets for optimal performance of our algorithms. Materials and Methods: Three reconstruction algorithms were implemented: simple backprojection (SBP), EM, and ordered subset EM (OSEM). For OSEM, the projection data were grouped into subsets in a predefined order. Three different schemes for choosing nonoverlapping subsets were considered: scatter angle-based subsets, detector position-based subsets, and subsets based on both scatter angle and detector position. EM and OSEM with 16 subsets were performed with 64 and 4 iterations, respectively. The performance of each algorithm was evaluated in terms of computation time and normalized mean-squared error. Results: Both EM and OSEM clearly outperformed SBP in all aspects of accuracy. OSEM with 16 subsets and 4 iterations, which is equivalent to standard EM with 64 iterations, was approximately 14 times faster in computation time than standard EM. In OSEM, all three schemes for choosing subsets yielded similar results in computation time as well as normalized mean-squared error. Conclusion: Our results show that the OSEM algorithm, which has proven useful in emission tomography, can also be applied to the problem of image reconstruction for the Compton camera. With properly chosen subset construction methods and a moderate number of subsets, our OSEM algorithm significantly improves computational efficiency while keeping the original quality of the standard EM reconstruction. The OSEM algorithm with both scatter angle- and detector position-based subsets appears to be the most practical choice.
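For reference, a minimal sketch of the OSEM update follows: each sub-iteration applies the multiplicative EM correction using only one subset of projection rows, so 4 iterations over 16 subsets perform roughly the same number of corrections as 64 standard EM iterations. A generic dense system matrix stands in for the Compton camera's cone-of-response model, and the strided subset grouping is only a stand-in for the paper's scatter-angle and detector-position schemes.

```python
import numpy as np

def osem(A, y, n_subsets=16, n_iters=4, eps=1e-12):
    """Ordered-subset EM for nonnegative x with y ≈ A @ x.

    A : (n_proj, n_vox) system matrix (rows = projection bins).
    y : measured counts. With n_subsets=1 this reduces to standard MLEM.
    """
    x = np.ones(A.shape[1])
    subsets = [np.arange(s, len(y), n_subsets) for s in range(n_subsets)]
    for _ in range(n_iters):
        for rows in subsets:                       # one correction per subset
            As = A[rows]
            ratio = y[rows] / (As @ x + eps)       # measured / current estimate
            x *= (As.T @ ratio) / (As.sum(axis=0) + eps)  # normalized backprojection
    return x

# Synthetic check with Poisson-noisy projections of a random phantom.
rng = np.random.default_rng(0)
A = rng.random((256, 64))
x_true = rng.random(64)
y = rng.poisson(A @ x_true * 50).astype(float)
x_hat = osem(A, y / 50.0, n_subsets=16, n_iters=4)
print(float(np.mean((x_hat - x_true) ** 2)))
```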

The Consideration about Heavy Metal Contamination of Room and Worker in a Workshop (공작실에서 실내 및 작업종사자의 중금속 오염도에 관한 고찰)

  • Kim Jeong-Ho; Kim Gha-Jung; Kim Sung-Ki; Bea Suk-Hwan
    • The Journal of Korean Society for Radiation Therapy / v.17 no.2 / pp.87-94 / 2005
  • Purpose: Heavy metals are used when producing shielding blocks in the workshop, and the heavy metal dust and fumes generated in the process pose a risk to human health. The purpose of this study is to assess the seriousness of this contamination through measurement and analysis and to seek solutions. Materials and Methods: Concentrations of four heavy metals (bismuth, lead, tin, and cadmium) were measured with an Inductively Coupled Plasma Atomic Emission Spectrometer in the workshops of the radiation oncology departments of four university hospitals in Daejeon city. Air was sampled by pumping, and the concentrations, in ppb, were compared and analyzed. Provisional airborne standards for each heavy metal were then set by working back from the standard levels in air and the standard levels in the body and blood. Results: The indoor air quality regulations for underground spaces recommend exposure limits of 3 μg/m³ for lead and 2 μg/m³ for cadmium, and provisional limits of 7 μg/m³ for bismuth and 6 μg/m³ for tin were derived from the airborne and body/blood standard levels. The levels measured in the four hospital workshops were compared according to whether block fabrication was in progress: when no work was being done, almost all measured levels were below the recommended limits, but during work the levels were high. The detection ratios were also consistent with the composition ratio of the block alloy. Conclusion: Basic countermeasures should be established to address the serious heavy metal contamination of workers. Hospitals should operate local exhaust ventilation with periodic efficiency checks and provide protective equipment; workers should properly understand heavy metal contamination and take a continuous interest in their health management; and academic and regulatory bodies should establish standard levels, carry out periodic measurements, and institute periodic special health examinations as a fundamental solution.


Sensitivity Experiment of Surface Reflectance to Error-inducing Variables Based on the GEMS Satellite Observations (GEMS 위성관측에 기반한 지면반사도 산출 시에 오차 유발 변수에 대한 민감도 실험)

  • Shin, Hee-Woo; Yoo, Jung-Moon
    • Journal of the Korean earth science society / v.39 no.1 / pp.53-66 / 2018
  • Information on surface reflectance ($R_{sfc}$) is important for heat balance and environmental/climate monitoring. The sensitivity of $R_{sfc}$ to error-inducing variables in the Geostationary Environment Monitoring Spectrometer (GEMS) retrieval at 300-500 nm from geostationary-orbit satellite observations was investigated, utilizing polar-orbit satellite data from the MODerate resolution Imaging Spectroradiometer (MODIS) and the Ozone Monitoring Instrument (OMI) together with radiative transfer model (RTM) experiments. The variables considered in this study were cloud, Rayleigh scattering, aerosol, ozone, and surface type. Cloud detection in high-resolution MODIS pixels (1 km × 1 km) was compared with that in GEMS-scale pixels (8 km × 7 km). The GEMS detection was consistent (~79%) with the MODIS result, but the detection probability in partially cloudy (≤40%) GEMS pixels decreased due to other effects (i.e., aerosol and surface type). The Rayleigh-scattering effect in RGB images was noticeable over the ocean, based on the RTM calculation. The reflectance at the top of the atmosphere ($R_{toa}$) increased with aerosol amount for $R_{sfc} < 0.2$, but decreased for $R_{sfc} \geq 0.2$. The $R_{sfc}$ errors due to aerosol increased with wavelength in the UV, but were constant or slightly decreased in the visible. Ozone absorption was most sensitive at 328 nm within the UV region (328-354 nm); the $R_{sfc}$ error was +0.1 for a negative total ozone anomaly (-100 DU) under the condition $R_{sfc} = 0.15$. This study can be useful for estimating $R_{sfc}$ uncertainties in the GEMS retrieval.
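To make the error propagation concrete, the toy sensitivity check below uses the standard Lambertian coupling $R_{toa} = R_{atm} + T R_{sfc} / (1 - s R_{sfc})$, inverts it for $R_{sfc}$, and perturbs the assumed atmospheric path reflectance as a stand-in for an aerosol or ozone error. The coefficient values are illustrative assumptions, not GEMS algorithm constants, which come from full RTM lookup tables.

```python
# Toy sensitivity experiment: how an error in the assumed atmospheric path
# reflectance R_atm propagates into the retrieved surface reflectance R_sfc.
# Coefficients (R_atm, T, s) are illustrative, not GEMS LUT values.

def invert_rsfc(r_toa, r_atm, T, s):
    """Invert R_toa = R_atm + T*R_sfc/(1 - s*R_sfc) for R_sfc (Lambertian)."""
    d = r_toa - r_atm
    return d / (T + s * d)

R_SFC_TRUE, R_ATM, T, S = 0.15, 0.10, 0.70, 0.12
r_toa = R_ATM + T * R_SFC_TRUE / (1 - S * R_SFC_TRUE)   # forward simulation

for d_atm in (-0.02, 0.0, 0.02):                        # perturb path reflectance
    r = invert_rsfc(r_toa, R_ATM + d_atm, T, S)
    print(f"R_atm error {d_atm:+.2f} -> R_sfc error {r - R_SFC_TRUE:+.4f}")
```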

HW/SW Partitioning Techniques for Multi-Mode Multi-Task Embedded Applications (멀티모드 멀티태스크 임베디드 어플리케이션을 위한 HW/SW 분할 기법)

  • Kim, Young-Jun; Kim, Tae-Whan
    • Journal of KIISE: Computer Systems and Theory / v.34 no.8 / pp.337-347 / 2007
  • An embedded system is called a multi-mode embedded system if it performs multiple applications by dynamically reconfiguring the system functionality, and a multi-mode multi-task embedded system if it additionally supports multiple tasks to be executed within a mode. In this paper, we address the HW/SW partitioning problem for multi-mode multi-task embedded applications with timing constraints on tasks. The objective of the optimization is to minimize the total system cost of allocating processing resources and mapping them to the functional modules in tasks, together with a schedule that satisfies the timing constraints. The key to solving the problem is how fully the potential parallelism among module executions is exploited. However, because the search space of this parallelism is inherently very large, and in order to keep schedulability analysis simple, prior HW/SW partitioning methods have not been able to fully exploit the potential parallel execution of modules. To overcome this limitation, we propose a set of comprehensive HW/SW partitioning techniques that solve the three subproblems of the partitioning problem simultaneously: (1) allocating processing resources, (2) mapping the processing resources to the modules in tasks, and (3) determining an execution schedule of the modules. Specifically, based on a precise measurement of the parallel execution and schedulability of modules, we develop a stepwise refinement partitioning technique for single-mode multi-task applications, which is then extended to solve the HW/SW partitioning problem of multi-mode multi-task applications. Experiments with a set of real-life applications show that the proposed techniques reduce the implementation cost by 19.0% and 17.0% for single- and multi-mode multi-task applications, respectively, compared with the conventional method.
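As a much-simplified illustration of the partitioning problem (not the paper's stepwise refinement technique), the sketch below brute-forces the HW/SW mapping of a single task's modules: HW implementations are fast and can run in parallel but add cost, SW implementations are free but serialize on one processor, and the goal is minimum HW cost subject to a deadline. The module data and the assumption that modules are independent are invented for the example.

```python
from itertools import product

modules = {            # name: (hw_cost, hw_time, sw_time) -- hypothetical data
    "fft":    (8, 2, 9),
    "filter": (5, 1, 4),
    "encode": (6, 3, 7),
}
DEADLINE = 12          # timing constraint on the task

best = None
for choice in product("HS", repeat=len(modules)):
    mapping = dict(zip(modules, choice))
    # Independent modules: HW units run in parallel with each other and with
    # the single software processor, which executes SW modules serially.
    hw_time = max((modules[m][1] for m, c in mapping.items() if c == "H"), default=0)
    sw_time = sum(modules[m][2] for m, c in mapping.items() if c == "S")
    if max(hw_time, sw_time) <= DEADLINE:
        cost = sum(modules[m][0] for m, c in mapping.items() if c == "H")
        if best is None or cost < best[0]:
            best = (cost, mapping)

print(best)   # -> (8, {'fft': 'H', 'filter': 'S', 'encode': 'S'})
```

Exhaustive enumeration is exponential in the number of modules, which is exactly why the paper develops a refinement heuristic instead.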

Numerical Simulation of the Formation of Oxygen Deficient Water-masses in Jinhae Bay (진해만의 빈산소 수괴 형성에 관한 수치실험)

  • CHOI Woo-Jeung; PARK Chung-Kill; LEE Suk-Mo
    • Korean Journal of Fisheries and Aquatic Sciences / v.27 no.4 / pp.413-433 / 1994
  • Jinhae Bay was once a productive fisheries area; now, however, it is notorious for red tides, and oxygen-deficient water masses develop extensively in summer. As a result, the shellfish production of the bay has been decreasing and mass mortality often occurs. Under these circumstances, the three-dimensional numerical hydrodynamic and material cycle models developed by the Institute for Resources and Environment of Japan were applied to analyze the processes driving oxygen depletion and to evaluate the environmental capacity for receiving pollutant loads without dissolved oxygen depletion. In the field surveys, oxygen-deficient water masses with concentrations below 2.0 mg/L formed in the bottom layer of Masan Bay and the western part of Jinhae Bay during the summer. Current directions computed for the $M_2$ constituent were mainly toward the western part of Jinhae Bay during flood flows and in the opposite direction during ebb flows, and tidal current velocities during the ebb tide were stronger than those during the flood tide. The comparison between the simulated and observed tidal ellipses showed fairly good agreement. The residual currents, obtained by averaging the simulated tidal currents over one tidal cycle, showed counterclockwise eddies in the central part of Jinhae Bay. Density-driven currents were generated southward at the surface and northward at the bottom in Masan Bay and Jindong Bay, where fresh river water enters. The material cycle model was calibrated with data surveyed in the study area from June to July 1992, and the calibrated results agreed fairly well with measured values, within a relative error of 28%. The simulated dissolved oxygen distributions of the bottom layer were relatively high, at 6.0~8.0 mg/L, at the boundaries, but oxygen-deficient water masses with concentrations below 2.0 mg/L formed in the inner part of Masan Bay and the western part of Jinhae Bay. Sensitivity analyses showed that sediment oxygen demand (SOD) was one of the most important influences on the formation of oxygen depletion; therefore, reducing the SOD by remediating the polluted sediment is an effective way to control the oxygen-deficient water masses and conserve the coastal environment. According to the simulations, the oxygen-deficient water masses in Masan Bay recovered to 5.0 mg/L when input COD loads from the Masan basin were reduced by 50% and SOD by 70%; in the western part of Jinhae Bay, they recovered to 5.0 mg/L when SOD was reduced by 95% and fecal loads from the culturing grounds by 90%.
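The role of SOD in the sensitivity analyses can be illustrated with a one-box bottom-layer oxygen balance: vertical exchange replenishes oxygen toward saturation while SOD and water-column demand consume it. This is a hedged toy, not the three-dimensional material cycle model, and all rate constants below are invented rather than calibrated Jinhae Bay values.

```python
# One-box sketch of a bottom-layer dissolved-oxygen (DO) balance:
#   dDO/dt = k_ex*(DO_sat - DO) - SOD/H - WOD
# SOD is in g O2/m^2/day (divided by layer thickness H it becomes mg/L/day),
# WOD is water-column oxygen demand. All constants are illustrative.

DO_SAT = 8.0    # saturation DO (mg/L)
H = 5.0         # bottom-layer thickness (m)
K_EX = 0.05     # vertical exchange / reaeration rate (1/day)
WOD = 0.10      # water-column oxygen demand (mg/L/day)

def summer_do(sod, days=60.0, dt=0.1):
    """Forward-Euler integration of the DO balance over one summer."""
    do = DO_SAT
    for _ in range(int(days / dt)):
        do += dt * (K_EX * (DO_SAT - do) - sod / H - WOD)
        do = max(do, 0.0)
    return do

for sod in (0.5, 1.0, 2.0):
    print(f"SOD={sod:.1f} g/m^2/day -> bottom DO after 60 days ≈ {summer_do(sod):.2f} mg/L")
```

Even with these made-up constants, the box crosses the 2.0 mg/L hypoxia threshold used in the field surveys as SOD grows, mirroring the qualitative conclusion that SOD reduction is the dominant lever.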


Analysis and Evaluation of Glycemic Indices and Glycemic Loads of Frequently Consumed Carbohydrate-Rich Snacks according to Variety and Cooking Method (탄수화물 간식류 식품 및 조리방법에 따른 혈당지수 및 혈당부하지수)

  • Kim, Do Yeon; Lee, Hansongyi; Choi, Eun Young; Lim, Hyunjung
    • Journal of the Korean Society of Food Science and Nutrition / v.44 no.1 / pp.14-23 / 2015
  • This study examined the glycemic indices (GIs) and glycemic loads of carbohydrate-rich snacks in Korea according to variety and cooking method. The most popular carbohydrate snacks (corn, potatoes, sweet potatoes, chestnuts, and red beans) in the Korean National Health and Nutrition Examination Survey nutrient database were cooked using a variety of conventional cooking methods (steaming, baking, porridge, puffing, and frying). The GIs of the foods were measured in 60 healthy males after approval by the university hospital institutional review board (KMC IRB 1306-01). Blood glucose and insulin levels were measured at 0, 15, 30, 60, 90, and 120 min after consuming glucose or each test food; every test food contained 50 g of carbohydrates (corn: 170.0 g, potatoes: 359.7 g, sweet potatoes: 160.3 g, chestnuts: 134.8 g, red beans: 73.1 g). GI values for the test foods were calculated from the increase in the area under each subject's blood glucose response curve. Steamed potatoes (93.6±11.6), corn porridge (91.8±19.5), baked sweet potatoes (90.9±9.6), baked potatoes (78.2±14.5), steamed corn (73.4±9.9), and steamed sweet potatoes (70.8±6.1) were classified as high-GI foods, whereas baked chestnuts (54.3±6.3), red bean porridge (33.1±5.5), steamed red beans (22.1±3.2), fried potatoes (41.5±7.8), and ground and pan-fried potatoes (28.0±5.1) were classified as low-GI foods. These results suggest that the cooking method of carbohydrate-rich snacks is an important determinant of GI values.
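The GI calculation described here can be sketched as follows: the incremental area under the blood glucose curve (iAUC) is computed with the trapezoidal rule while ignoring dips below the fasting baseline, and GI is the test food's iAUC expressed as a percentage of the same subject's 50 g glucose reference. The glucose readings below are hypothetical example values, not the study's data.

```python
import numpy as np

TIMES = np.array([0, 15, 30, 60, 90, 120])   # sampling minutes, as in the study

def iauc(glucose, times=TIMES):
    """Trapezoidal incremental AUC above the fasting (t=0) baseline.

    A common simplification: increments below baseline are clipped to zero
    before integration rather than excluded trapezoid by trapezoid.
    """
    inc = np.maximum(np.asarray(glucose, float) - glucose[0], 0.0)
    widths = np.diff(times)
    return float(np.sum(widths * (inc[:-1] + inc[1:]) / 2.0))

def glycemic_index(test, reference):
    """GI = 100 * iAUC(test food) / iAUC(50 g glucose reference)."""
    return 100.0 * iauc(test) / iauc(reference)

ref = [90, 130, 160, 140, 110, 95]    # hypothetical glucose-drink readings (mg/dL)
food = [90, 118, 142, 131, 108, 96]   # hypothetical test-food readings (mg/dL)
print(f"GI ≈ {glycemic_index(food, ref):.0f}")
```

Glycemic load then follows as GL = GI × available carbohydrate per serving / 100.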

Dynamic Traffic Assignment Using Genetic Algorithm (유전자 알고리즘을 이용한 동적통행배정에 관한 연구)

  • Park, Kyung-Chul; Park, Chang-Ho; Chon, Kyung-Soo; Rhee, Sung-Mo
    • Journal of Korean Society for Geospatial Information Science / v.8 no.1 s.15 / pp.51-63 / 2000
  • Dynamic traffic assignment (DTA) has been a topic of substantial research during the past decade. While DTA is gradually maturing, many aspects of it still need improvement, especially regarding its formulation and solution algorithms. Recently, with its promise for ITS (Intelligent Transportation System) and GIS (Geographic Information System) applications, DTA has received increasing attention. This potential also implies higher requirements for DTA modeling, especially regarding solution efficiency for real-time implementation. However, DTA presents many mathematical difficulties in the search process due to the complexity of its spatial and temporal variables. Although many solution algorithms have been studied, conventional methods cannot find the solution when the objective function or the constraint set is not convex. In this paper, a genetic algorithm is applied to find the solution of DTA, and the Merchant-Nemhauser model is used as the DTA model because it has a nonconvex constraint set. To handle the nonconvex constraint set, the GENOCOP III system, a genetic-algorithm-based solver for constrained problems, is used in this study. Results for a sample network are compared with those of a conventional method.
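For intuition, the sketch below shows a penalty-based genetic algorithm minimizing a toy nonconvex objective under a linear constraint. Note this is only a stand-in: GENOCOP III itself handles nonlinear constraints by repairing infeasible points against a reference population rather than by penalties, and the objective here is not the Merchant-Nemhauser formulation.

```python
import numpy as np

rng = np.random.default_rng(0)

def penalized_fitness(x):
    """Toy nonconvex (Rastrigin-like) objective with a penalty for sum(x) > 1."""
    obj = np.sum(x**2 - 10.0 * np.cos(2.0 * np.pi * x), axis=1)
    violation = np.maximum(np.sum(x, axis=1) - 1.0, 0.0)
    return obj + 1e3 * violation                 # minimize

pop = rng.uniform(-5.0, 5.0, size=(50, 4))       # 50 candidate solutions
for generation in range(300):
    f = penalized_fitness(pop)
    parents = pop[np.argsort(f)[:25]]            # truncation selection
    mates = parents[rng.permutation(25)]
    children = 0.5 * (parents + mates)           # arithmetic crossover
    children += rng.normal(0.0, 0.1, children.shape)   # Gaussian mutation
    pop = np.vstack([parents, children])

best = pop[np.argmin(penalized_fitness(pop))]
print("best x:", best.round(3), "fitness:", float(penalized_fitness(best[None])[0]))
```

Because selection needs only fitness values, no convexity of the objective or constraint set is required, which is precisely why a GA suits the Merchant-Nemhauser model's nonconvex constraint set.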


A Study on Market Size Estimation Method by Product Group Using Word2Vec Algorithm (Word2Vec을 활용한 제품군별 시장규모 추정 방법에 관한 연구)

  • Jung, Ye Lim; Kim, Ji Hui; Yoo, Hyoung Sun
    • Journal of Intelligence and Information Systems / v.26 no.1 / pp.1-21 / 2020
  • With the rapid development of artificial intelligence technology, various techniques have been developed to extract meaningful information from unstructured text data, which constitutes a large portion of big data. Over the past decades, text mining technologies have been utilized across industries for practical applications. In the field of business intelligence, text mining has been employed to discover new market and technology opportunities and to support rational decision making by business participants. Market information such as market size, market growth rate, and market share is essential for setting companies' business strategies, and there is continuous demand in various fields for market information at the specific product level. However, such information has generally been provided at the industry level or in broad categories based on classification standards, making it difficult to obtain specific and suitable information. In this regard, we propose a new methodology that can estimate the market sizes of product groups at more detailed levels than those previously offered. We applied the Word2Vec algorithm, a neural network-based semantic word embedding model, to enable automatic, bottom-up market size estimation from individual companies' product information. The overall process is as follows. First, data related to product information are collected, refined, and restructured into a form suitable for the Word2Vec model. Next, the preprocessed data are embedded into a vector space by Word2Vec, and product groups are derived by extracting similar product names based on cosine similarity. Finally, the sales data of the extracted products are summed to estimate the market size of each product group. As experimental data, product name texts from Statistics Korea's microdata (345,103 cases) were mapped into a multidimensional vector space by Word2Vec training. We optimized the training parameters and then applied a vector dimension of 300 and a window size of 15 in further experiments. We employed the index words of the Korean Standard Industry Classification (KSIC) as a product name dataset to cluster product groups more efficiently: product names similar to KSIC index words were extracted based on cosine similarity, and the market size of the extracted products, treated as one product category, was calculated from individual companies' sales data. The market sizes of 11,654 specific product lines were automatically estimated by the proposed model. For performance verification, the results were compared with the actual market sizes of some items; the Pearson correlation coefficient was 0.513. Our approach has several advantages over previous studies. First, text mining and machine learning techniques were applied to market size estimation for the first time, overcoming the limitations of traditional methods that rely on sampling or require multiple assumptions. In addition, the level of market category can be easily and efficiently adjusted to the purpose of the information by changing the cosine similarity threshold. Furthermore, the approach has high potential for practical application since it can resolve unmet needs for detailed market size information in the public and private sectors: it can be utilized in technology evaluation and technology commercialization support programs conducted by governmental institutions, as well as in business strategy consulting and market analysis reports published by private firms. The limitation of our study is that the presented model needs to be improved in terms of accuracy and reliability. The semantic word embedding module could be advanced by imposing a proper ordering on the preprocessed dataset or by combining Word2Vec with another measure such as Jaccard similarity, and the product group clustering method could be replaced with other types of unsupervised machine learning algorithms. Our group is currently working on subsequent studies, and we expect them to further improve the performance of the basic model conceptually proposed in this study.
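A condensed sketch of this pipeline is given below using the gensim library (an assumption; the paper does not name its implementation): train Word2Vec on tokenized product names, collect names whose tokens are cosine-similar to a category index word, and sum their sales. The corpus, sales figures, and similarity threshold are tiny made-up stand-ins for the study's 345,103 product names, vector size 300, and window 15.

```python
from gensim.models import Word2Vec

# Made-up mini-corpus of tokenized product names and per-product sales.
corpus = [
    ["stainless", "steel", "kitchen", "knife"],
    ["ceramic", "kitchen", "knife", "set"],
    ["cordless", "electric", "drill"],
    ["electric", "impact", "drill", "set"],
]
sales = {"kitchen knife": 120, "knife set": 80,
         "electric drill": 300, "impact drill": 150}

# Skip-gram embedding; vector_size/window are scaled down from the paper's 300/15.
model = Word2Vec(corpus, vector_size=50, window=5, min_count=1,
                 sg=1, seed=1, workers=1)

def market_size(index_word, threshold=0.3):
    """Sum the sales of products whose names share a token similar to index_word."""
    similar = {w for w, s in model.wv.most_similar(index_word, topn=10)
               if s >= threshold}
    similar.add(index_word)
    return sum(v for name, v in sales.items() if similar & set(name.split()))

print("estimated 'drill' market size:", market_size("drill"))
```

Raising or lowering the threshold widens or narrows the product category, which is the adjustment mechanism the paper highlights for tuning the level of market aggregation.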