• Title/Summary/Keyword: Gas generate


Proposal of a Pilot Plant (2T/day) for Solid Fuel Conversion of Cambodian Mango Waste Using Hybrid Hydrothermal Carbonization Technology (하이브리드 수열탄화기술을 이용한 캄보디아 망고 폐기물 고형연료화 실증플랜트 (2T/day) 제안)

  • Han, Jong-il;Lee, Kangsoo;Kang, Inkook
    • Journal of Appropriate Technology / v.7 no.1 / pp.59-71 / 2021
  • Hybrid hydrothermal carbonization (Hybrid HTC) technology is a proprietary thermochemical process for treating two or more organic wastes together. The reaction time is less than two hours, at temperatures of 180~250℃ and pressures of 20~40 bar. Because carbon is concentrated in the waste during the Hybrid HTC process, the energy value of the resulting solid fuel increases significantly at comparatively low energy consumption. The process also achieves a large volume reduction and removes odors, so it is regarded as an excellent solid fuel conversion technology for various organic wastes. In this study, the effects of temperature and reaction time on the calorific value and yield of hydrothermally carbonized Cambodian mango waste were evaluated. Through the study, the process parameters were optimized while improving the energy efficiency of the whole plant. The waste is also partly decomposed during hydrothermal carbonization to generate gas, which opens the possibility of developing production technologies for fuels such as hydrogen (H2) and methane (CH4). Based on the results of the study, a pilot plant (2 t/day) is proposed for future commercialization, together with cost analysis, mass balance and energy balance calculations.
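A minimal sketch of the kind of mass and energy bookkeeping such a pilot-plant proposal relies on. All feed rates, moisture contents, yields and heating values below are illustrative assumptions, not figures from the cited study.

```python
# Illustrative hydrothermal-carbonization (HTC) bookkeeping: mass yield,
# energy densification and energy yield for a hypothetical 2 t/day feed.
# All numbers below are placeholders, NOT values from the cited study.

def htc_balance(feed_tpd, moisture_frac, mass_yield, hhv_feed, hhv_char):
    """Return dry hydrochar output (t/day), energy densification and energy yield."""
    dry_feed = feed_tpd * (1.0 - moisture_frac)      # dry solids entering the reactor
    char_out = dry_feed * mass_yield                 # dry hydrochar produced
    densification = hhv_char / hhv_feed              # HHV ratio (char vs. raw feed)
    energy_yield = mass_yield * densification        # fraction of feed energy retained
    return char_out, densification, energy_yield

char, dens, e_yield = htc_balance(
    feed_tpd=2.0,        # pilot-scale feed rate (t/day)
    moisture_frac=0.75,  # assumed moisture of mango waste
    mass_yield=0.55,     # assumed dry-basis hydrochar yield
    hhv_feed=17.0,       # assumed HHV of dry feed (MJ/kg)
    hhv_char=24.0,       # assumed HHV of hydrochar (MJ/kg)
)
print(f"hydrochar: {char:.2f} t/day, densification: {dens:.2f}, energy yield: {e_yield:.2f}")
```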

Life Cycle Assessment (LCA) of the Wind Turbine : A case study of Korea Yeongdeok Wind Farm (한국 영덕 풍력단지 사례 연구를 통한 풍력 발전의 환경 영향 평가)

  • Jun Heon Lee;Jun Hyung Ryu
    • Korean Chemical Engineering Research / v.61 no.1 / pp.142-154 / 2023
  • As the importance of the environment is recognized worldwide, the need to calculate and reduce carbon emissions has been drawing increasing attention across industrial sectors. Accordingly, the discipline of LCA (Life Cycle Assessment), covering raw material preparation, production processes, transportation and installation, has been established. There is a clear gap between this need and actual practice for the Korean renewable energy industry, particularly wind power. To bridge the gap, this study conducted LCA research on wind power generation in the Korean area of Yeongdeok, an example of a domestic onshore wind power complex, using SimaPro, the most widely used LCA software. As a result of the study, the energy payback time (EPT) of one wind turbine is about 10 months, and the GHG emitted to generate 1 kWh of power is 15 g CO2/kWh, which is competitive compared with other energy sources. In the environmental impact assessment by component, the results showed that the tower of the wind turbine had the greatest impact across the various environmental impact categories. The experience gained in this study can be further used in strengthening the introduction of renewable energy and reducing carbon emissions in line with efforts to mitigate climate change.
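The two headline indicators in the abstract above reduce to simple ratios. The sketch below shows only the arithmetic; the embodied-energy and output figures are placeholder values chosen to land near the reported order of magnitude, not the study's inventory data.

```python
# Back-of-envelope LCA indicators: energy payback time (EPT) and life-cycle
# GHG intensity. Figures are illustrative, not the study's inventory data.

def ept_months(embodied_energy_gj, annual_output_mwh):
    """Energy payback time in months: embodied energy / annual energy output."""
    annual_output_gj = annual_output_mwh * 3.6       # 1 MWh = 3.6 GJ
    return 12.0 * embodied_energy_gj / annual_output_gj

def ghg_intensity(lifecycle_emissions_t, lifetime_output_mwh):
    """Life-cycle GHG intensity in g CO2-eq per kWh."""
    return lifecycle_emissions_t * 1e6 / (lifetime_output_mwh * 1e3)

print(f"EPT ≈ {ept_months(embodied_energy_gj=6000, annual_output_mwh=2000):.1f} months")
print(f"GHG ≈ {ghg_intensity(lifecycle_emissions_t=600, lifetime_output_mwh=40000):.1f} g CO2/kWh")
```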

Effect of plasma treatment using underwater non-thermal dielectric barrier discharge to remove antibiotics added to fish farm effluent (양식장 배출수에 첨가된 항생제 제거 위한 수중 비열 유전체장벽 방전 플라즈마 처리 효과)

  • Kyu Seok Cho;Han Seung Kang
    • Korean Journal of Environmental Biology / v.40 no.4 / pp.641-650 / 2022
  • The purpose of this study was to compare the efficiency of air and oxygen injected into an underwater non-thermal dielectric barrier discharge plasma (DBD plasma) device used to remove five types of antibiotics (tetracycline, doxycycline, oxytetracycline, clindamycin, and erythromycin) artificially added to fish farm discharge water. The voltage applied to generate the DBD plasma was 27.8 kV, and measurements were taken at 0, 0.5, 1, 2, 4, 8, 16 and 32 minutes. Tetracycline antibiotics decreased significantly within 4 minutes when air was injected and within 30 seconds when oxygen was injected. After 32 minutes of air and oxygen injection, 78.1% and 95.8% of tetracycline were removed, 77.1% and 96.3% of doxycycline, and 77.1% and 95.5% of oxytetracycline, respectively. With air and oxygen, 59.6% and 83.0% of clindamycin and 53.3% and 74.3% of erythromycin were removed, respectively; these two antibiotics showed lower removal efficiency than the tetracyclines. In conclusion, the five types of antibiotics contained in fish farm discharge water can be reduced using underwater DBD plasma, and oxygen injection outperformed air in terms of removal efficiency.
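If it helps to see how such time-series removal data are typically summarized, here is a small sketch computing removal efficiency and a pseudo-first-order rate constant. The concentration series is invented for illustration and is not the study's data.

```python
# Summarizing plasma-treatment data: removal efficiency at each sampling time
# and a pseudo-first-order rate constant. Concentrations are hypothetical.
import math

times = [0, 0.5, 1, 2, 4, 8, 16, 32]                 # minutes, as in the abstract
conc  = [100, 92, 85, 74, 58, 38, 20, 9]             # hypothetical residual concentration

removal = [(conc[0] - c) / conc[0] * 100 for c in conc]
# least-squares slope of ln(C/C0) vs t through the origin gives -k for first-order decay
xs = times[1:]
ys = [math.log(c / conc[0]) for c in conc[1:]]
k = -sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)

print("removal % at 32 min:", round(removal[-1], 1))
print("pseudo-first-order k ≈", round(k, 3), "1/min")
```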

A Study for Activation Measure of Climate Change Mitigation Movement - A Case Study of Green Start Movement - (기후변화 완화 활동 활성화 방안에 관한 연구 - 그린스타트 운동을 중심으로 -)

  • Cho, Sung Heum;Lee, Sang Hoon;Moon, Tae Hoon;Choi, Bong Seok;Park, Na Hyun;Jeon, Eui Chan
    • Journal of Climate Change Research / v.5 no.2 / pp.95-107 / 2014
  • The 'Green Start Movement' is a practical green-living movement intended to efficiently reduce greenhouse gases originating from non-industrial sectors such as households, commerce and transportation, for the realization of a low-carbon society through green growth ('Low Carbon, Green Korea'). When the new government took office, following the Lee Myeongbak Administration that had presented 'Low Carbon, Green Growth' as a national vision, it became necessary to set the direction of this practical green-living movement so that it could respond to climate change persistently and stably, and to evaluate the performance of the Green Start Movement over the previous five years. A questionnaire survey was administered to a total of 265 persons, including public servants, members of environmental and non-environmental NGOs, participants in the Green Start Movement and professionals. The survey results indicate that awareness of the Green Start Movement is increasing and that the movement has had a positive impact on individual and group behavior in terms of green living. The results also show, however, that the environmental NGOs do not cooperate sufficiently to create a 'green living' effect on a national scale. Action needs to be taken at the community level in order to generate a culture of environmental responsibility. The national administration office of the Green Start Movement Network should play the leading role between the government and environmental NGOs. The Green Start National Network should have greater autonomy, and the governance of the network needs to be restructured in order to work effectively. The Green Start Movement should also identify specific local characteristics to support activities that reduce greenhouse gas emissions, and best practices can be shared to reduce emissions by a substantial amount.

Carbon Dioxide-based Plastic Pyrolysis for Hydrogen Production Process: Sustainable Recycling of Waste Fishing Nets (이산화탄소 기반 플라스틱 열분해 수소 생산 공정: 지속가능한 폐어망 재활용)

  • Yurim Kim;Seulgi Lee;Sungyup Jung;Jaewon Lee;Hyungtae Cho
    • Korean Chemical Engineering Research / v.62 no.1 / pp.36-43 / 2024
  • Fishing net waste (FNW) constitutes over half of all marine plastic waste and is a major contributor to the degradation of marine ecosystems. Current treatment options for FNW include incineration, landfilling, and mechanical recycling, but these methods often result in low-value products and pollutant emissions. Importantly, FNW, being composed of plastic polymers, can be converted into valuable resources such as syngas and pyrolysis oil through pyrolysis. This study therefore presents a process for generating high-purity hydrogen (H2) by catalytically pyrolyzing FNW in a CO2 environment. The proposed process consists of three stages: first, the pretreated FNW undergoes Ni/SiO2 catalytic pyrolysis under CO2 conditions to produce syngas and pyrolysis oil. Second, the produced pyrolysis oil is incinerated and repurposed as an energy source for the pyrolysis reaction. Lastly, the syngas is converted into high-purity H2 via the water-gas-shift (WGS) reaction and pressure swing adsorption (PSA). The study compares the results of the proposed process with those of traditional pyrolysis conducted under N2 conditions. Simulation results show that pyrolyzing 500 kg/h of FNW produced 2.933 kmol/h of high-purity H2 under N2 conditions and 3.605 kmol/h under CO2 conditions. Pyrolysis under CO2 conditions improved CO production, which increased H2 output. In addition, CO2 emissions were reduced by 89.8% compared with N2 conditions owing to the capture and utilization of the CO2 released during the process. The proposed process under CO2 conditions can therefore efficiently recycle FNW and generate an eco-friendly hydrogen product.
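A rough sketch of the bookkeeping behind the syngas-to-hydrogen route named in the abstract (WGS reaction CO + H2O → CO2 + H2 followed by PSA). The inlet flows, conversion and recovery values are illustrative assumptions, not outputs of the study's simulation.

```python
# Rough H2-route bookkeeping: syngas from pyrolysis, then the water-gas-shift
# (WGS) reaction CO + H2O -> CO2 + H2, then PSA recovery.
# Flow rates and conversions below are illustrative placeholders.

def h2_after_wgs(h2_in, co_in, wgs_conversion, psa_recovery):
    """H2 recovered as product (kmol/h) after WGS and PSA."""
    h2_from_wgs = co_in * wgs_conversion          # each converted CO yields one H2
    return (h2_in + h2_from_wgs) * psa_recovery

# hypothetical syngas compositions for the two pyrolysis atmospheres
n2_case  = h2_after_wgs(h2_in=2.5, co_in=1.2, wgs_conversion=0.95, psa_recovery=0.85)
co2_case = h2_after_wgs(h2_in=2.6, co_in=2.0, wgs_conversion=0.95, psa_recovery=0.85)
print(f"N2 atmosphere : {n2_case:.2f} kmol/h H2")
print(f"CO2 atmosphere: {co2_case:.2f} kmol/h H2  (more CO -> more H2 via WGS)")
```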

Rough Set Analysis for Stock Market Timing (러프집합분석을 이용한 매매시점 결정)

  • Huh, Jin-Nyung;Kim, Kyoung-Jae;Han, In-Goo
    • Journal of Intelligence and Information Systems / v.16 no.3 / pp.77-97 / 2010
  • Market timing is an investment strategy used to obtain excess return from the financial market. In general, detecting market timing means determining when to buy and sell in order to earn excess return from trading. In many market timing systems, trading rules have been used as an engine to generate trade signals. Some researchers have instead proposed rough set analysis as a suitable tool for market timing because, by means of a control function, it does not generate a trade signal when the market pattern is uncertain. The numeric data for rough set analysis must be discretized, because rough sets only accept categorical data. Discretization searches for proper "cuts" in the numeric data that determine intervals, and all values that lie within an interval are transformed into the same value. In general, four discretization methods are used in rough set analysis: equal frequency scaling, expert's knowledge-based discretization, minimum entropy scaling, and naïve and Boolean reasoning-based discretization. Equal frequency scaling fixes a number of intervals, examines the histogram of each variable, and then determines cuts so that approximately the same number of samples falls into each interval. Expert's knowledge-based discretization determines cuts according to the knowledge of domain experts, gathered through literature review or interviews. Minimum entropy scaling recursively partitions the value set of each variable so that a local measure of entropy is optimized. Naïve and Boolean reasoning-based discretization finds candidate cuts by naïve scaling of the data and then selects the optimal discretization thresholds through Boolean reasoning. Although rough set analysis is promising for market timing, there is little research on how the various discretization methods affect trading performance. In this study, we compare stock market timing models that use rough set analysis with the various discretization methods. The research data are the KOSPI 200 from May 1996 to October 1998. KOSPI 200 is the underlying index of the KOSPI 200 futures, the first derivative instrument in the Korean stock market. It is a market-value-weighted index consisting of 200 stocks selected by criteria on liquidity and their status in the corresponding industry, including manufacturing, construction, communication, electricity and gas, distribution and services, and financing. The total number of samples is 660 trading days. In addition, this study uses popular technical indicators as independent variables. The experimental results show that the most profitable method for the training sample is naïve and Boolean reasoning-based discretization, but expert's knowledge-based discretization is the most profitable for the validation sample. Expert's knowledge-based discretization also produced robust performance for both the training and validation samples. We also compared rough set analysis with a decision tree, using C4.5 for the comparison. The results show that rough set analysis with expert's knowledge-based discretization produced more profitable rules than C4.5.
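For reference, a minimal sketch of the equal frequency scaling step described above: fix the number of intervals and place the cuts so that roughly the same number of samples falls into each bin. The indicator values are toy data.

```python
# Minimal equal-frequency discretization: choose k intervals and place cuts so
# that roughly the same number of samples falls into each bin.

def equal_frequency_cuts(values, k):
    """Return k-1 cut points so each interval holds ~len(values)/k samples."""
    ordered = sorted(values)
    n = len(ordered)
    return [ordered[(i * n) // k] for i in range(1, k)]

def discretize(value, cuts):
    """Map a numeric value to its interval index given the cut points."""
    return sum(value >= c for c in cuts)

prices = [101.2, 99.8, 100.5, 103.1, 98.7, 102.4, 100.9, 99.1]  # toy indicator values
cuts = equal_frequency_cuts(prices, k=4)
print("cuts:", cuts)
print("bins:", [discretize(p, cuts) for p in prices])
```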

Dosimetry of the Low Fluence Fast Neutron Beams for Boron Neutron Capture Therapy (붕소-중성자 포획치료를 위한 미세 속중성자 선량 특성 연구)

  • Lee, Dong-Han;Ji, Young-Hoon;Lee, Dong-Hoon;Park, Hyun-Joo;Lee, Suk;Lee, Kyung-Hoo;Suh, So-Heigh;Kim, Mi-Sook;Cho, Chul-Koo;Yoo, Seong-Yul;Yu, Hyung-Jun;Gwak, Ho-Shin;Rhee, Chang-Hun
    • Radiation Oncology Journal / v.19 no.1 / pp.66-73 / 2001
  • Purpose: For research on Boron Neutron Capture Therapy (BNCT), fast neutrons generated from the MC-50 cyclotron at the Korea Cancer Center Hospital, with a maximum energy of 34.4 MeV, were moderated by 70 cm of paraffin, and their dose characteristics were then investigated. Using these results, we hope to establish a protocol for dose measurement of epithermal neutrons, to provide a basis for the dose characteristics of epithermal neutrons emitted from a nuclear reactor, and to examine the feasibility of accelerator-based BNCT. Methods and Materials: To measure the absorbed dose and dose distribution of the fast neutron beams, we used a Unidos 10005 (PTW, Germany) electrometer with IC-17 (Far West, USA), IC-18, and ElC-1 ion chambers made of A-150 plastic, and an IC-17M ion chamber made of magnesium for the gamma dose. These chambers were flushed with tissue-equivalent gas and argon gas at a flow rate of 5 cc per minute. Using the Monte Carlo N-Particle (MCNP) code, a transport program for mixed neutron, photon, and electron fields, the two-dimensional dose and energy fluence distributions were calculated, and these results were compared with the measurements. Results: The absorbed dose of the fast neutron beams was 6.47×10⁻³ cGy per 1 MU at a depth of 4 cm in the water phantom, which is assumed to be the effective depth for BNCT. The gamma contamination intermingled with the fast neutron beams was 65.2±0.9% at the same depth. In the dose distribution with depth in water, the neutron dose decreased linearly and the gamma dose decreased exponentially as the depth increased. The quality index D20/D10 of the total dose, which expresses the energy level, was 0.718. Conclusion: Through direct measurement using two ion chambers made of different wall materials, and computer calculation of the isodose distribution using MCNP simulation, we determined the dose characteristics of low-fluence fast neutron beams. If a power supply and target material capable of higher voltage and current are developed, and if the gamma contamination is reduced with lead or bismuth, we believe accelerator-based BNCT may become possible.
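The "two ion chambers made of different wall materials" approach mentioned in the conclusion is conventionally treated as a pair of linear equations in the neutron and gamma dose components. Below is a small illustrative solver; the chamber sensitivities and readings are placeholder values, not the study's calibration data.

```python
# Illustrative paired-chamber dose separation: each chamber reading is modeled
# as a weighted sum of the gamma dose Dg and neutron dose Dn.
# Sensitivities and readings are placeholders, not the study's calibration.

def separate_doses(m_te, m_mg, h_te=1.00, k_te=0.95, h_mg=1.00, k_mg=0.05):
    """Solve  m_te = h_te*Dg + k_te*Dn,  m_mg = h_mg*Dg + k_mg*Dn  for (Dn, Dg)."""
    det = h_te * k_mg - h_mg * k_te
    dg = (m_te * k_mg - m_mg * k_te) / det
    dn = (h_te * m_mg - h_mg * m_te) / det
    return dn, dg

dn, dg = separate_doses(m_te=1.00, m_mg=0.60)   # hypothetical normalized readings
print(f"neutron dose fraction ≈ {dn/(dn+dg):.2f}, gamma fraction ≈ {dg/(dn+dg):.2f}")
```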


Seismic Data Processing and Inversion for Characterization of CO2 Storage Prospect in Ulleung Basin, East Sea (동해 울릉분지 CO2 저장소 특성 분석을 위한 탄성파 자료처리 및 역산)

  • Lee, Ho Yong;Kim, Min Jun;Park, Myong-Ho
    • Economic and Environmental Geology / v.48 no.1 / pp.25-39 / 2015
  • CO2 geological storage plays an important role in reducing greenhouse gas emissions, but research toward CCS demonstration is still lacking. To achieve the goal of CCS, storing CO2 safely and permanently in underground geological formations, it is essential to understand their characteristics, such as total storage capacity and stability, and to establish an injection strategy. We perform impedance inversion on seismic data acquired from the Ulleung Basin in 2012. To review the possibility of CO2 storage, we also construct porosity models and extract attributes of the prospects from the seismic data. To improve the quality of the seismic data, amplitude-preserving processing methods, SWD (Shallow Water Demultiple), SRME (Surface Related Multiple Elimination) and Radon demultiple, are applied. Three well log data sets are also analysed, and the log correlations of the wells are 0.648, 0.574 and 0.342, respectively. All wells are used in building the low-frequency model in order to generate a more robust initial model. Simultaneous pre-stack inversion is performed on all of the 2D profiles, and inverted P-impedance, S-impedance and Vp/Vs ratio are generated from the inversion process. With the porosity profiles generated from the seismic inversion, the porous and non-porous zones can be identified for the purposes of the CO2 sequestration initiative. More detailed characterization of the geological storage and simulation of CO2 migration will be essential for CCS demonstration.
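As a reminder of the quantities behind an impedance inversion like the one described above: acoustic impedance is density times velocity, and the normal-incidence reflection coefficient between layers is the normalized impedance contrast. The layer values in the sketch are illustrative, not from the Ulleung Basin data.

```python
# Core relationships behind impedance inversion: acoustic impedance
# Z = density * velocity, and the reflection coefficient between two layers
# R = (Z2 - Z1) / (Z2 + Z1). Log values below are illustrative only.

def impedance(density_gcc, vp_ms):
    """Acoustic (P) impedance in (g/cc)*(m/s)."""
    return density_gcc * vp_ms

def reflectivity(z_series):
    """Reflection coefficients between successive layers."""
    return [(z2 - z1) / (z2 + z1) for z1, z2 in zip(z_series, z_series[1:])]

layers = [(2.10, 2400.0), (2.25, 2900.0), (2.40, 3400.0)]   # (density, Vp) per layer
z = [impedance(rho, vp) for rho, vp in layers]
print("impedance:", [round(v) for v in z])
print("reflectivity:", [round(r, 3) for r in reflectivity(z)])
```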

Optimal Selection of Classifier Ensemble Using Genetic Algorithms (유전자 알고리즘을 이용한 분류자 앙상블의 최적 선택)

  • Kim, Myung-Jong
    • Journal of Intelligence and Information Systems / v.16 no.4 / pp.99-112 / 2010
  • Ensemble learning is a method for improving the performance of classification and prediction algorithms. It finds a highly accurate classifier on the training set by constructing and combining an ensemble of weak classifiers, each of which needs only to be moderately accurate on the training set. Ensemble learning has received considerable attention in machine learning and artificial intelligence because of its remarkable performance improvement and its flexible integration with traditional learning algorithms such as decision trees (DT), neural networks (NN), and SVM. In that research, DT ensemble studies have demonstrated impressive improvements in the generalization behavior of DT, while NN and SVM ensemble studies have not shown performance as remarkable as DT ensembles. Recently, several works have reported that the performance of an ensemble can be degraded when its classifiers are highly correlated with one another, resulting in a multicollinearity problem; differentiated learning strategies have been proposed to cope with this degradation. Hansen and Salamon (1990) insisted that it is necessary and sufficient for the performance enhancement of an ensemble that the ensemble contain diverse classifiers. Breiman (1996) found that ensemble learning can increase the performance of unstable learning algorithms, but does not show remarkable improvement for stable learning algorithms. Unstable learning algorithms such as decision tree learners are sensitive to changes in the training data, so small changes in the training data can yield large changes in the generated classifiers; an ensemble of unstable learners can therefore guarantee some diversity among the classifiers. In contrast, stable learning algorithms such as NN and SVM generate similar classifiers in spite of small changes in the training data, so the correlation among the resulting classifiers is very high. This high correlation results in a multicollinearity problem, which leads to performance degradation of the ensemble. Kim's work (2009) compared the performance of traditional prediction algorithms such as NN, DT, and SVM in bankruptcy prediction for Korean firms. It reports that stable learning algorithms such as NN and SVM have higher predictability than the unstable DT, whereas in ensemble learning the DT ensemble shows more improved performance than the NN and SVM ensembles. Further analysis with variance inflation factors (VIF) empirically shows that the performance degradation of the ensemble is due to multicollinearity, and that optimization of the ensemble is needed to cope with the problem. This paper proposes a hybrid system for coverage optimization of NN ensembles (CO-NN) in order to improve the performance of NN ensembles. Coverage optimization is a technique of choosing a sub-ensemble from an original ensemble to guarantee the diversity of the classifiers. CO-NN uses a GA, which has been widely used for various optimization problems, to deal with the coverage optimization problem. The GA chromosomes for coverage optimization are encoded as binary strings, each bit of which indicates an individual classifier. The fitness function is defined as maximization of error reduction, and a constraint on the variance inflation factor (VIF), one of the generally used measures of multicollinearity, is added to ensure the diversity of the classifiers by removing high correlation among them. We use Microsoft Excel and the GA software package Evolver. Experiments on company failure prediction have shown that CO-NN stably enhances the performance of NN ensembles by choosing classifiers with consideration of their correlations. The classifiers with a potential multicollinearity problem are removed by the coverage optimization process, and CO-NN thereby shows higher performance than a single NN classifier and the NN ensemble at the 1% significance level, and than the DT ensemble at the 5% significance level. However, further research issues remain. First, a decision optimization process to find the optimal combination function should be considered. Second, various learning strategies to deal with data noise should be introduced in more advanced future research.
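A compact sketch of the coverage-optimization idea just described: a chromosome is a bit string selecting a sub-ensemble, and the fitness combines sub-ensemble accuracy with a VIF-based multicollinearity check. The toy data, VIF limit and penalty are assumptions for illustration, not the paper's actual setup (which used Evolver and an error-reduction objective).

```python
# Sketch of GA-based coverage optimization: a bit string selects a sub-ensemble,
# and the fitness rewards accuracy while penalizing selections whose member
# outputs show high multicollinearity (VIF). Data and thresholds are illustrative.
import numpy as np

rng = np.random.default_rng(0)
y = rng.integers(0, 2, size=200)                       # toy true labels
base = y + rng.normal(0, 0.6, size=(8, 200))           # 8 correlated "classifier" scores
preds = (base > 0.5).astype(int)                       # binary predictions per classifier

def max_vif(outputs):
    """Largest variance inflation factor among the selected classifiers' outputs."""
    x = outputs.T.astype(float)
    vifs = []
    for j in range(x.shape[1]):
        others = np.delete(x, j, axis=1)
        a = np.column_stack([others, np.ones(len(x))])
        coef, *_ = np.linalg.lstsq(a, x[:, j], rcond=None)
        resid = x[:, j] - a @ coef
        r2 = 1 - resid.var() / x[:, j].var()
        vifs.append(1.0 / max(1e-6, 1 - r2))
    return max(vifs)

def fitness(chromosome, vif_limit=10.0):
    """Majority-vote accuracy of the selected sub-ensemble, penalized by VIF."""
    idx = np.flatnonzero(chromosome)
    if len(idx) < 2:
        return 0.0
    vote = (preds[idx].mean(axis=0) > 0.5).astype(int)
    acc = (vote == y).mean()
    return acc if max_vif(preds[idx]) <= vif_limit else acc - 0.5   # penalty term

chromosome = np.array([1, 0, 1, 1, 0, 1, 0, 1])        # one candidate sub-ensemble
print("fitness:", round(fitness(chromosome), 3))
```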

Development Strategy for New Climate Change Scenarios based on RCP (온실가스 시나리오 RCP에 대한 새로운 기후변화 시나리오 개발 전략)

  • Baek, Hee-Jeong;Cho, ChunHo;Kwon, Won-Tae;Kim, Seong-Kyoun;Cho, Joo-Young;Kim, Yeongsin
    • Journal of Climate Change Research / v.2 no.1 / pp.55-68 / 2011
  • The Intergovernmental Panel on Climate Change (IPCC) has identified the causes of climate change and come up with measures to address it at the global level. A key component of this work involves developing and assessing future climate change scenarios. The IPCC Expert Meeting in September 2007 identified a new greenhouse gas concentration scenario, the "Representative Concentration Pathway (RCP)", and established the framework and development schedules for the Climate Modeling (CM), Integrated Assessment Modeling (IAM), and Impact, Adaptation and Vulnerability (IAV) communities for the Fifth IPCC Assessment Report, with 130 researchers and users taking part. At the IPCC Expert Meeting in September 2008, the CM community agreed on a new set of coordinated climate model experiments, phase five of the Coupled Model Intercomparison Project (CMIP5), which consists of more than 30 standardized experiment protocols for short-term and long-term time scales, in order to enhance understanding of climate change for the IPCC AR5, to develop climate change scenarios, and to address major issues raised in the IPCC AR4. Since early 2009, fourteen countries including Korea have been carrying out CMIP5-related projects. With increasing interest in climate change, the COordinated Regional Downscaling EXperiment (CORDEX) was launched in 2009 to generate regional and local level information on climate change. The National Institute of Meteorological Research (NIMR) under the Korea Meteorological Administration (KMA) contributed to the IPCC AR4 by developing climate change scenarios based on the IPCC SRES using ECHO-G, and has embarked on crafting national climate change scenarios as well as RCP-based global ones by engaging in international projects such as CMIP5 and CORDEX. NIMR/KMA will contribute to the IPCC AR5 and will develop national climate change scenarios reflecting geographical factors, local climate characteristics and user needs, and provide them to the national IAV and IAM communities to assess future regional climate impacts and take action.