• Title/Summary/Keyword: Optimal Cost


Development of Greenhouse Cooling and Heating Load Calculation Program Based on Mobile (모바일 기반 온실 냉난방 부하 산정 프로그램 개발)

  • Moon, Jong Pil;Bang, Ji Woong;Hwang, Jeongsu;Jang, Jae Kyung;Yun, Sung Wook
    • Journal of Bio-Environment Control / v.30 no.4 / pp.419-428 / 2021
  • To develop a mobile-based greenhouse energy calculation program, the overall thermal transmittance of 10 major covering materials and 16 insulation materials was first measured. To estimate the overall thermal transmittance when covering and insulation materials are installed in double or triple layers, 24 double-layer and 59 triple-layer combinations were also measured using a hot box. The overall thermal transmittance of a multi-layer installation was then calculated from the single-material transmittance and thermal resistance values, and a linear regression equation was derived to correct the error against the measured values. The resulting model for estimating the transmittance of multi-layer installations from single-material values gave a model evaluation index of 0.90 (considered good at 0.5 or above), indicating that the estimates were very close to the measured values. In an on-site test, the estimated heat saving rate was smaller than the actual value, with a relative error of 2%. Based on these results, a mobile greenhouse energy calculation program was developed as an HTML5 standard web-based mobile web application designed to work with various mobile devices and PC browsers through N-Screen support. It provides the overall thermal transmittance (heating load coefficient) for each combination of greenhouse coverings and insulation materials and evaluates the energy consumption of a target greenhouse over a specified period. With the optimal selection of coverings and insulation materials according to the region and shape of the greenhouse, an energy-saving greenhouse design is expected to be possible.
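The abstract describes estimating the overall thermal transmittance of a multi-layer installation from single-material values and then correcting it with a linear regression fitted against hot-box measurements. The sketch below illustrates that general approach in Python; the series-resistance formula and the regression coefficients are assumptions for illustration, not the coefficients reported in the paper.

```python
def combined_u_value(u_values):
    """Estimate the overall thermal transmittance (W/m^2.K) of a multi-layer
    installation by summing the thermal resistances (R = 1/U) of each layer.
    This series-resistance assumption is an illustration, not the paper's exact model."""
    return 1.0 / sum(1.0 / u for u in u_values)

def corrected_u_value(u_estimated, slope, intercept):
    """Apply a linear regression correction (coefficients fitted against
    hot-box measurements) to the estimated transmittance."""
    return slope * u_estimated + intercept

# Hypothetical single-material U-values for a cover plus two insulation layers
layers = [5.8, 3.5, 2.1]          # W/m^2.K, illustrative values only
u_est = combined_u_value(layers)
u_cor = corrected_u_value(u_est, slope=0.93, intercept=0.12)  # assumed coefficients
print(f"estimated U = {u_est:.2f}, corrected U = {u_cor:.2f} W/m^2.K")
```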

A Study on Differentiation and Improvement in Arbitration Systems in Construction Disputes (건설분쟁 중재제도의 차별화 및 개선방안에 관한 연구)

  • Lee, Sun-Jae
    • Journal of Arbitration Studies / v.29 no.2 / pp.239-282 / 2019
  • The increase in arbitration cases arising from domestic and foreign construction disputes has highlighted the importance of ADR (Alternative Dispute Resolution), which offers expertise, speed, and neutrality. For the nation's arbitration system and arbitration organizations to join the ranks of advanced international arbitral institutions, it is necessary to study the characteristics and advantages of international arbitration institutions through a review of prior domestic and foreign research and of how those institutions operate. Three problems are identified: first, education for the effective development of arbitrators (compulsory education, continuing education, specialized education, seminars, etc.); second, the effectiveness of arbitration in resolving construction disputes (hearing methods, composition of the tribunal, and speed); and third, the flexibility and diversity of arbitration solutions (the practical problems of methodologies such as mediation-arbitration), which need to be addressed in arbitration laws, rules, and guidelines as well as in practice. The study therefore identifies the problems presented in the preceding literature, diagnoses the defects of the KCAB by drawing on the features and benefits of the arbitration systems operated by international arbitration institutions, and derives empirical results on arbitrators through a perception survey. The proposed improvements are as follows. First, to improve speed through an optimal combination of hearing and judgment in construction disputes: (1) improve the composition of the tribunal according to the complexity, specificity, and scale of the case, including coping with the increased role of non-lawyers (specialists and technical experts) and securing technical arbitrators in each specialty for large and special construction cases; and (2) improve how the arbitration guidelines are drafted for each area. Second, introduce an intensive hearing system for hearing efficiency, with institutional improvements covering (1) optimization of the hearing procedure and of how arbitration is resolved, and (2) management of the technical arbitrators on tribunals, including expanding the hearing work of technical arbitrators (reviewing the introduction of an assistant system for tribunal members), improving the use of alternative appraisers by tribunals (cost analysis and use of specialized institutions for calculating construction costs), and direct management of technical arbitrators to improve the reliability of appraisals and shorten the appraisal period. Third, improve the expert committee system: (1) create a non-standing technical committee for special technical affairs (supporting pre-qualification of special cases and coordination between the parties), and (2) expand the standing committee (adding expert technicians for important, special, and large cases, and providing pre-consultation, pre-coordination, and mediation-arbitration).
In addition, institutional differentiation is proposed to enhance the flexibility and diversity of arbitration. First, offer the options of "Med-Arb", "Arb-Med", and "Arb-Med-Arb". Second, revise the dispute settlement clause under the Act [Article 28-2 (Agreement on Dispute Resolution)], which is to be amended, so as to expand the methods available for resolving disputes through arbitration. Third, strengthen the status, role, and activities of expert technical arbitrators under the Arbitration Industry Promotion Act and its Enforcement Decree, enforced on June 28, 2017. Fourth, increase the role of expert technical arbitrators through legislation promoting the arbitration industry; in particular, the arbitration institution should be established as an international arbitration agency under that Act. The study accordingly proposes detailed improvement and differentiation measures together with policy, legal, and institutional improvements and legislation.

Production of Medium-chain-length Poly (3-hydroxyalkanoates) by Pseudomonas sp. EML8 from Waste Frying Oil (Pseudomonas sp. EML8 균주를 이용한 폐식용류로부터 medium-chain-length poly(3-hydroxyalkanoates) 생합성)

  • Kim, Tae-Gyeong;Kim, Jong-Sik;Chung, Chung-Wook
    • Journal of Life Science / v.31 no.1 / pp.90-99 / 2021
  • In this study, to reduce the production cost of poly(3-hydroxyalkanoates) (PHA), optimal cell growth and PHA biosynthesis conditions of the isolated strain Pseudomonas sp. EML8 were established using waste frying oil (WFO) as a cheap carbon source. Gas chromatography (GC) and GC mass spectrometry analyses of the medium-chain-length PHA (mcl-PHAWFO) produced by Pseudomonas sp. EML8 from WFO indicated that it was composed of 7.28 mol% 3-hydroxyhexanoate, 39.04 mol% 3-hydroxyoctanoate, 37.11 mol% 3-hydroxydecanoate, and 16.58 mol% 3-hydroxydodecanoate monomers. When Pseudomonas sp. EML8 was cultured in flasks, the maximum dry cell weight (DCW) and mcl-PHAWFO yield (g/l) were obtained under WFO (20 g/l), (NH4)2SO4 (0.5 g/l), pH 7, and 25℃ culture conditions. Based on this, the highest DCW, mcl-PHAWFO content, and mcl-PHAWFO yield in 3-l jar fermentation were obtained after 48 hr. Similar results were obtained using 20 g/l of fresh frying oil (FFO) as a control carbon source; in this case, the DCW, mcl-PHAFFO content, and mcl-PHAFFO yield were 2.7 g/l, 62 wt%, and 1.6 g/l, respectively. Gel permeation chromatography analysis confirmed the average molecular weights of mcl-PHAWFO and mcl-PHAFFO to be between 165 and 175 kDa. Thermogravimetric analysis showed decomposition temperatures of 260℃ and 274.7℃ for mcl-PHAWFO and mcl-PHAFFO, respectively. In conclusion, Pseudomonas sp. EML8 and WFO can be suggested as a new candidate strain and substrate for the industrial production of PHA.
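The yield figures in the abstract relate dry cell weight, polymer content, and polymer yield. The short sketch below shows that relationship as it is conventionally computed (yield ≈ DCW × content); the function name and rounding are illustrative, and the paper may report independently measured values rather than values derived this way.

```python
def pha_yield(dcw_g_per_l, content_wt_fraction):
    """Approximate PHA yield (g/l) as dry cell weight times intracellular PHA content.
    This is the conventional relationship, not necessarily how the paper derived its values."""
    return dcw_g_per_l * content_wt_fraction

# Values reported for the fresh frying oil (FFO) control: DCW 2.7 g/l, content 62 wt%
print(f"{pha_yield(2.7, 0.62):.2f} g/l")  # ~1.67 g/l, consistent with the reported 1.6 g/l
```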

A Relative Study of 3D Digital Record Results on Buried Cultural Properties (매장문화재 자료에 대한 3D 디지털 기록 결과 비교연구)

  • KIM, Soohyun;LEE, Seungyeon;LEE, Jeongwon;AHN, Hyoungki
    • Korean Journal of Heritage: History & Science / v.55 no.1 / pp.175-198 / 2022
  • With the development of technology, methods of digitally converting various forms of analog information have become common. As a result, the concepts of recording, building, and reproducing data in virtual space, such as digital heritage and digital reconstruction, have been actively used in the preservation and study of various cultural heritages. However, few existing studies suggest optimal scanners for small and medium-sized relics, and because scanners are expensive for researchers to use, related studies are scarce. The specifications of a 3D scanner strongly influence the quality of the resulting 3D model. In particular, since the light reflected from an object's surface varies with the type of light source used by the scanner, using a scanner suited to the characteristics of the object is the way to increase the efficiency of the work. This paper therefore examined nine small and medium-sized buried cultural properties of various materials and periods, including earthenware and porcelain, to compare the quality obtained with four types of 3D scanners. The study found that optical scanners and small and medium-sized object scanners were the most suitable for digitally recording small and medium-sized relics. Optical scanners excel in both mesh and texture but have the disadvantage of being very expensive and not portable. Handheld scanners had the advantage of excellent portability and speed. Considering the results relative to price, the small and medium-sized object scanner was the best, while photogrammetry obtained a 3D model at the lowest cost. 3D scanning technology can be used broadly to produce digital drawings of relics, to restore and duplicate cultural properties, and to build databases. This study is meaningful in that it contributes to selecting the scanner best suited to buried cultural properties by material and period, supporting the active use of 3D scanning technology for cultural heritage.

Optimum Management Plan for Soil Contamination Facilities (특정토양오염관리대상시설의 최적 관리방안에 관한 연구)

  • Park, Jae-Soo;Kim, Ki-Ho;Kim, Hae-Keum;Choi, Sang-Il
    • Korean Journal of Soil Science and Fertilizer / v.45 no.2 / pp.293-300 / 2012
  • This study investigated the unsuitability rate of petroleum storage facilities, changes in corrosion over time after installation, installation timing, years elapsed since installation, and inspection methods and motivations, based on inspection results from domestic soil-related specialized agencies, in order to derive optimal management plans suited to the status of soil contamination facilities. The results showed that facilities more than five years past the initial leak test performed at installation need to be inspected periodically, considering the costs of leak testing and of remediating polluted soil. The inspection period can be decided by cost and by the leak test method, since direct and indirect tests show discrepancies in their results. To compensate for this, we suggest recommending a direct inspection method on a regular schedule; alternatively, inspection may be completed voluntarily, using this or an equivalent inspection method, to ease the burden of the results. Improved construction supervision and performance test systems are also needed to minimize defects arising when facilities are installed, together with an upgrade program for facilities between inspection periods.

A Study on Web-based Technology Valuation System (웹기반 지능형 기술가치평가 시스템에 관한 연구)

  • Sung, Tae-Eung;Jun, Seung-Pyo;Kim, Sang-Gook;Park, Hyun-Woo
    • Journal of Intelligence and Information Systems / v.23 no.1 / pp.23-46 / 2017
  • Although cases of evaluating the value of specific companies or projects have been concentrated in developed countries in North America and Europe since the early 2000s, systems and methodologies for estimating the economic value of individual technologies or patents have been increasingly put into use. There are, of course, several online systems that qualitatively evaluate a technology's grade or the patent rating of the technology to be evaluated, such as 'KTRS' of the KIBO and 'SMART 3.1' of the Korea Invention Promotion Association. Recently, however, a web-based technology valuation system referred to as the 'STAR-Value system', which calculates quantitative values of a subject technology for various purposes such as business feasibility analysis, investment attraction, and tax/litigation, has been officially opened and is spreading. In this study, we introduce the types of methodology and evaluation models, the reference information supporting these theories, and how the associated databases are utilized, focusing on the various modules and frameworks embedded in the STAR-Value system. In particular, there are six valuation methods, including the discounted cash flow (DCF) method, a representative income-approach method that discounts anticipated future economic income to present value, and the relief-from-royalty method, which calculates the present value of royalties, taking the royalty rate as the contribution of the subject technology to the business value created. We examine how the models and related supporting information (technology life, corporate/business financial information, discount rate, industrial technology factors, etc.) can be used and linked in an intelligent manner. Based on classifications of the technology to be evaluated, such as the International Patent Classification (IPC) or the Korea Standard Industry Classification (KSIC), the STAR-Value system automatically returns metadata such as technology cycle time (TCT), the sales growth rate and profitability of similar companies or industry sectors, the weighted average cost of capital (WACC), and indices of industrial technology factors, and applies adjustment factors to them so that the calculated technology value is highly reliable and objective. Furthermore, if information on the potential market size of the target technology and the market share of the commercializing entity is drawn from data, or if estimated value ranges of similar technologies by industry sector are provided from completed evaluation cases accumulated in the database, the STAR-Value system is anticipated to present highly accurate value ranges in real time by intelligently linking its various support modules. Beyond the explanation of the various valuation models and their primary variables presented in this paper, the STAR-Value system aims to be used more systematically and in a data-driven way by supporting an optimal model selection guideline module, an intelligent technology value range reasoning module, a market share prediction module based on similar company selection, and so on.
In addition, the research on the development and intelligence of the web-based STAR-Value system is significant in that it widely disseminates a web-based system that can be used to validate and apply the theory of technology valuation in practice, and it is expected to be utilized in various fields of technology commercialization.
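The abstract names the discounted cash flow (DCF) method and the relief-from-royalty method among the six valuation models. The sketch below is a minimal illustration of those two calculations in Python; the cash flows, royalty rate, tax rate, and discount rate are hypothetical inputs, not values or formulas taken from the STAR-Value system.

```python
def dcf_value(cash_flows, discount_rate):
    """Present value of projected cash flows over the technology's economic life."""
    return sum(cf / (1 + discount_rate) ** t
               for t, cf in enumerate(cash_flows, start=1))

def relief_from_royalty(revenues, royalty_rate, tax_rate, discount_rate):
    """Present value of after-tax royalties the owner is 'relieved' from paying,
    where the royalty rate stands for the technology's contribution to business value."""
    royalties = [r * royalty_rate * (1 - tax_rate) for r in revenues]
    return dcf_value(royalties, discount_rate)

# Hypothetical 5-year projections (in million KRW) and parameters
revenues = [1000, 1200, 1400, 1500, 1500]
cash_flows = [120, 150, 180, 200, 200]
wacc = 0.12  # discount rate, e.g. a WACC looked up for the industry sector
print(f"DCF value: {dcf_value(cash_flows, wacc):.1f}")
print(f"Relief-from-royalty value: {relief_from_royalty(revenues, 0.03, 0.22, wacc):.1f}")
```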

The Impact of the Internet Channel Introduction Depending on the Ownership of the Internet Channel (도입주체에 따른 인터넷경로의 도입효과)

  • Yoo, Weon-Sang
    • Journal of Global Scholars of Marketing Science / v.19 no.1 / pp.37-46 / 2009
  • The Census Bureau of the Department of Commerce announced in May 2008 that U.S. retail e-commerce sales for 2006 reached $107 billion, up from $87 billion in 2005, an increase of 22 percent. From 2001 to 2006, retail e-sales increased at an average annual growth rate of 25.4 percent. The explosive growth of e-commerce has caused profound changes in marketing channel relationships and structures in many industries. Despite the great potential implications for both academicians and practitioners, a great deal of uncertainty still exists about the impact of Internet channel introduction on distribution channel management. The purpose of this study is to investigate how the ownership of a new Internet channel affects the existing channel members and consumers. To explore these research questions, this study conducts well-controlled mathematical experiments that isolate the impact of the Internet channel by comparing the situations before and after Internet channel entry. The model consists of a monopolist manufacturer selling its product through a channel system that includes one independent physical store before the entry of an Internet store. Adding the Internet store to this channel system results in a mixed channel comprising two different types of channels. The new Internet store can be launched by the independent physical retailer, such as Best Buy; in this case, the physical retailer coordinates the two types of stores to maximize the joint profits from both. The Internet store can also be introduced by an independent Internet retailer such as Amazon; in this case, retail-level competition occurs between the two types of stores. Although the manufacturer sells only one product, consumers view each product-outlet pair as a unique offering, so the introduction of the Internet channel provides two product offerings for consumers. The channel structures analyzed in this study are illustrated in Fig. 1. The manufacturer is assumed to act as a Stackelberg leader maximizing its own profits with foresight of the independent retailer's optimal responses, as is typical in previous analytical channel studies. As a Stackelberg follower, the independent physical retailer or independent Internet retailer maximizes its own profits, conditional on the manufacturer's wholesale price. Price competition between the two independent retailers is modeled as a Bertrand-Nash game. For simplicity, the marginal cost is set at zero, as is typical in this type of study. To explore the research questions above, this study develops a game-theoretic model with three key characteristics. First, the model explicitly captures the fact that an Internet channel and a physical store exist in two independent dimensions (one in physical space and the other in cyberspace), which allows the model to demonstrate that the effect of adding an Internet store differs from that of adding another physical store. Second, the model reflects the fact that consumers are heterogeneous in their preferences for using a physical store versus an Internet channel. Third, the model captures the vertical strategic interactions between an upstream manufacturer and a downstream retailer, making it possible to analyze the channel structure issues discussed in this paper. Although numerous previous models capture this vertical dimension of marketing channels, none simultaneously incorporates all three characteristics reflected in this model.
The analysis results are summarized in Table 1. When the new Internet channel is introduced by the existing physical retailer and the retailer coordinates both types of stores to maximize the joint profits from both, retail prices increase due to a combination of retail price coordination and wider market coverage. The quantity sold does not increase significantly despite the wider market coverage, because the excessively high retail prices partly offset the market coverage effect. Interestingly, the coordinated total retail profits are lower than the combined retail profits of two competing independent retailers. This implies that when a physical retailer opens an Internet channel, the retailer could be better off managing the two channels separately rather than coordinating them, unless it can anticipate the manufacturer's pricing behavior. The introduction of an Internet channel is also found to affect the power balance of the channel. Retail competition is strong when an independent Internet store joins a channel with an independent physical retailer, which implies that each retailer in this structure has weak channel power. Due to intense retail competition, the manufacturer uses its channel power to increase its wholesale price and extract more of the total channel profit. The retailers, however, cannot raise retail prices accordingly because of the intense retail-level competition, leaving them with lower channel power. In this case, consumer welfare increases due to the wider market coverage and the lower retail prices caused by retail competition. The model employed in this study is not designed to capture all the characteristics of the Internet channel, and it can also be applied to any store that is not geographically constrained, such as TV home shopping or catalog sales by mail. The model is nonetheless labeled an "Internet" model for two reasons: first, the most representative example of a store that is not geographically constrained is the Internet; second, catalog sales usually determine their target markets using pre-specified mailing lists, so the model used here is closer to the Internet than to catalog sales. It would, however, be a desirable future research direction to distinguish mathematically and theoretically the core differences among stores that are not geographically constrained. The model is simplified by a set of assumptions to maintain mathematical tractability. First, this study assumes that price is the only strategic tool for competition; in the real world, various marketing variables can be used, so a more realistic model could incorporate other variables such as service levels or operating costs. Second, this study assumes a market with one monopolist manufacturer, so the results should be interpreted carefully in light of this limitation; future research could relax it by introducing manufacturer-level competition. Finally, some of the results rest on the assumption that the monopolist manufacturer is the Stackelberg leader. Although this is a standard assumption in game-theoretic studies of this kind, deeper understanding and more general findings could be gained if the model were analyzed under different game rules.
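The abstract describes a setup in which the manufacturer acts as a Stackelberg leader setting the wholesale price and two independent retailers compete in a Bertrand-Nash price game with zero marginal cost. The sketch below solves a deliberately simplified linear-demand version of such a two-stage game with sympy; the demand system and its parameters are assumptions for illustration, not the paper's model.

```python
import sympy as sp

w, p1, p2, b = sp.symbols('w p1 p2 b', positive=True)

# Illustrative linear demands for two substitutable retail outlets (0 < b < 1)
q1 = 1 - p1 + b * p2
q2 = 1 - p2 + b * p1

# Stage 2: Bertrand-Nash retail prices given the wholesale price w
foc1 = sp.diff((p1 - w) * q1, p1)
foc2 = sp.diff((p2 - w) * q2, p2)
retail = sp.solve([foc1, foc2], [p1, p2], dict=True)[0]

# Stage 1: the Stackelberg leader (manufacturer) chooses w, marginal cost = 0
total_q = (q1 + q2).subs(retail)
w_star = sp.solve(sp.diff(w * total_q, w), w)[0]

prices = {str(sym): sp.simplify(expr.subs(w, w_star)) for sym, expr in retail.items()}
print("wholesale price:", sp.simplify(w_star))
print("retail prices:", prices)
```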


Automatic Quality Evaluation with Completeness and Succinctness for Text Summarization (완전성과 간결성을 고려한 텍스트 요약 품질의 자동 평가 기법)

  • Ko, Eunjung;Kim, Namgyu
    • Journal of Intelligence and Information Systems / v.24 no.2 / pp.125-148 / 2018
  • Recently, as the demand for big data analysis increases, cases of analyzing unstructured data and using the results are also increasing. Among the various types of unstructured data, text is used as a means of communicating information in almost all fields. Moreover, many analysts are interested in text because the amount of data is very large and it is relatively easy to collect compared to other unstructured and structured data. Among the various text analysis applications, document classification, which assigns documents to predetermined categories; topic modeling, which extracts major topics from a large number of documents; sentiment analysis or opinion mining, which identifies emotions or opinions contained in texts; and text summarization, which summarizes the main contents of one or several documents, have been actively studied. In particular, text summarization is actively applied in business through news summary services, privacy policy summary services, etc. Much academic research has also been done on the extraction approach, which selectively presents the main elements of a document, and the abstraction approach, which extracts elements of the document and composes new sentences by combining them. However, techniques for evaluating the quality of automatically summarized documents have not progressed as much as automatic summarization itself. Most existing studies on summarization quality evaluation manually summarized documents, used them as reference documents, and measured the similarity between the automatic summary and the reference document. Specifically, automatic summarization is performed on the full text through various techniques, and quality is measured by comparison with the reference document, which serves as an ideal summary. Reference documents are provided in two major ways; the most common is manual summarization, in which a person creates an ideal summary by hand. Because this method requires human intervention, preparing the summary takes much time and cost, and the evaluation result may differ depending on who writes the summary. To overcome these limitations, attempts have been made to measure the quality of summary documents without human intervention. As a representative attempt, a method has recently been devised that reduces the size of the full text and measures the similarity between the reduced full text and the automatic summary. In this method, the more often a frequent term from the full text appears in the summary, the better the quality of the summary is judged to be. However, since summarization essentially means condensing a large amount of content while minimizing omissions, a summary judged "good" on frequency alone is not necessarily a good summary in this essential sense. To overcome the limitations of these previous evaluation studies, this study proposes an automatic quality evaluation method for text summarization based on the essential meaning of summarization. Specifically, succinctness is defined as an element indicating how little duplicated content there is among the sentences of the summary, and completeness as an element indicating how little of the original content is missing from the summary.
In this paper, we propose a method for automatic quality evaluation of text summarization based on the concepts of succinctness and completeness. To evaluate the practical applicability of the proposed methodology, 29,671 sentences were extracted from TripAdvisor hotel reviews, the reviews for each hotel were summarized, and the quality of the summaries was evaluated according to the proposed methodology. We also provide a way to integrate completeness and succinctness, which are in a trade-off relationship, into an F-score, and propose a method to perform optimal summarization by changing the threshold of sentence similarity.
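Completeness, succinctness, and their combination into an F-score are the core quantities described above. The sketch below computes similarity-based versions of these quantities from sentence vectors; the cosine-similarity threshold and the exact way coverage and redundancy are counted are assumptions for illustration, not the paper's definitions.

```python
import numpy as np

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def completeness(doc_vecs, sum_vecs, threshold=0.7):
    """Fraction of source sentences covered by (similar to) at least one
    summary sentence. Higher means fewer omissions."""
    covered = sum(any(cosine(d, s) >= threshold for s in sum_vecs) for d in doc_vecs)
    return covered / len(doc_vecs)

def succinctness(sum_vecs, threshold=0.7):
    """Fraction of summary sentence pairs that are NOT near-duplicates.
    Higher means less redundancy within the summary."""
    n = len(sum_vecs)
    if n < 2:
        return 1.0
    pairs = [(i, j) for i in range(n) for j in range(i + 1, n)]
    dup = sum(cosine(sum_vecs[i], sum_vecs[j]) >= threshold for i, j in pairs)
    return 1.0 - dup / len(pairs)

def f_score(comp, succ, beta=1.0):
    """Combine the two trade-off quantities into a single F-score."""
    if comp + succ == 0:
        return 0.0
    return (1 + beta**2) * comp * succ / (beta**2 * comp + succ)

# doc_vecs and sum_vecs would be sentence vectors, e.g. TF-IDF or averaged embeddings
```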

Application of Support Vector Regression for Improving the Performance of the Emotion Prediction Model (감정예측모형의 성과개선을 위한 Support Vector Regression 응용)

  • Kim, Seongjin;Ryoo, Eunchung;Jung, Min Kyu;Kim, Jae Kyeong;Ahn, Hyunchul
    • Journal of Intelligence and Information Systems / v.18 no.3 / pp.185-202 / 2012
  • Since the value of information has been recognized in the information society, the usage and collection of information has become important. A facial expression, like an artistic painting, contains a wealth of information that can be described in thousands of words. Following this idea, there have recently been a number of attempts to provide customers and companies with intelligent services that perceive human emotions from facial expressions. For example, MIT Media Lab, a leading organization in this research area, has developed a human emotion prediction model and applied its studies to commercial business. In the academic area, conventional methods such as Multiple Regression Analysis (MRA) or Artificial Neural Networks (ANN) have been applied to predict human emotion in prior studies. However, MRA is generally criticized for its low prediction accuracy, which is inevitable since MRA can only explain the linear relationship between the dependent variable and the independent variables. To mitigate the limitations of MRA, some studies, like Jung and Kim (2012), have used ANN as an alternative and reported that ANN generated more accurate predictions than statistical methods like MRA. However, ANN has also been criticized for overfitting and the difficulty of network design (e.g., setting the number of layers and the number of nodes in the hidden layers). Against this background, we propose a novel model using Support Vector Regression (SVR) to increase prediction accuracy. SVR is an extended version of the Support Vector Machine (SVM) designed to solve regression problems. The model produced by SVR depends only on a subset of the training data, because the cost function for building the model ignores any training data that is close (within a threshold ε) to the model prediction. Using SVR, we tried to build a model that measures the level of arousal and valence from facial features. To validate the usefulness of the proposed model, we collected data on facial reactions to appropriate visual stimuli and extracted features from the data. Next, preprocessing steps were taken to choose statistically significant variables. In total, 297 cases were used for the experiment. As comparative models, we also applied MRA and ANN to the same data set. For SVR, we adopted the ε-insensitive loss function and a grid search to find the optimal values of the parameters C, d, σ², and ε. For ANN, we adopted a standard three-layer backpropagation network with a single hidden layer. The learning rate and momentum rate of the ANN were set to 10%, and the sigmoid function was used as the transfer function of the hidden and output nodes. We repeated the experiments, varying the number of hidden nodes over n/2, n, 3n/2, and 2n, where n is the number of input variables. The stopping condition for the ANN was set to 50,000 learning events. We used MAE (Mean Absolute Error) as the measure for performance comparison. From the experiment, we found that SVR achieved the highest prediction accuracy on the hold-out data set compared to MRA and ANN. Regardless of the target variable (the level of arousal or the level of positive/negative valence), SVR showed the best performance on the hold-out data set.
ANN also outperformed MRA, but it showed considerably lower prediction accuracy than SVR for both target variables. The findings of our research are expected to be useful to researchers or practitioners who wish to build models for recognizing human emotions.
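The abstract describes tuning an SVR with an ε-insensitive loss via grid search and comparing it against other regressors using MAE on a hold-out set. The sketch below reproduces that general workflow with scikit-learn; the feature matrix, parameter grid, and train/test split are placeholders, not the paper's data or settings.

```python
import numpy as np
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.svm import SVR
from sklearn.metrics import mean_absolute_error

# Placeholder facial-feature matrix X and arousal target y (the paper used 297 cases)
rng = np.random.default_rng(0)
X = rng.normal(size=(297, 10))
y = rng.normal(size=297)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# Grid search over C, gamma (related to the RBF kernel width sigma^2) and epsilon
grid = GridSearchCV(
    SVR(kernel="rbf"),
    param_grid={"C": [1, 10, 100], "gamma": [0.01, 0.1, 1.0], "epsilon": [0.01, 0.1, 0.5]},
    scoring="neg_mean_absolute_error",
    cv=5,
)
grid.fit(X_tr, y_tr)

pred = grid.best_estimator_.predict(X_te)
print("best params:", grid.best_params_)
print("hold-out MAE:", mean_absolute_error(y_te, pred))
```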

Geochemical Equilibria and Kinetics of the Formation of Brown-Colored Suspended/Precipitated Matter in Groundwater: Suggestion to Proper Pumping and Turbidity Treatment Methods (지하수내 갈색 부유/침전 물질의 생성 반응에 관한 평형 및 반응속도론적 연구: 적정 양수 기법 및 탁도 제거 방안에 대한 제안)

  • 채기탁;윤성택;염승준;김남진;민중혁
    • Journal of the Korean Society of Groundwater Environment / v.7 no.3 / pp.103-115 / 2000
  • The formation of brown-colored precipitates is one of the serious problems frequently encountered in the development and supply of groundwater in Korea, because the water then exceeds drinking water standards in terms of color, taste, turbidity, and dissolved iron concentration, and often causes scaling within the water supply system. In groundwaters from the Pajoo area, brown precipitates typically form within a few hours after pumping. In this paper we examine the process of brown precipitate formation using equilibrium thermodynamic and kinetic approaches, in order to understand the origin and geochemical pathway of turbidity generation in groundwater. The results are used to suggest both a proper pumping technique to minimize the formation of precipitates and an optimal design of water treatment to improve water quality. The bedrock groundwater in the Pajoo area belongs to the Ca-HCO3 type, evolved through water/rock (gneiss) interaction. Based on SEM-EDS and XRD analyses, the precipitates are identified as amorphous Fe-bearing oxides or hydroxides. Using multi-step filtration with pore sizes of 6, 4, 1, 0.45, and 0.2 μm, the precipitates mostly fall in the colloidal size range (1 to 0.45 μm) but are concentrated (about 81%) in the 1 to 6 μm range in terms of mass (weight) distribution. The large amounts of dissolved iron likely originated from dissolution of clinochlore in cataclasite, which contains up to 3 wt.% Fe. Calculation of saturation indices (using the computer code PHREEQC), as well as examination of pH-Eh stability relations, also indicates that the final precipitates are Fe-oxy-hydroxides formed by the change in water chemistry (mainly oxidation) due to exposure to oxygen during pumping of Fe(II)-bearing, reduced groundwater. After pumping, the groundwater shows progressive decreases of pH, DO, and alkalinity with elapsed time, while turbidity first increases and then decreases. The decrease of dissolved Fe concentration as a function of elapsed time after pumping is expressed by the regression equation Fe(II) = 10.1 exp(-0.0009t). The oxidation reaction driven by the influx of free oxygen during pumping and storage results in the formation of brown precipitates, which depends on time, PO2, and pH. To obtain drinkable water quality, therefore, the precipitates should be removed by filtering after stepwise storage and aeration in tanks of sufficient volume for a sufficient time. The particle size distribution data also suggest that stepwise filtration would be cost-effective. To minimize scaling within wells, continued pumping (if possible) within the optimum pumping rate is recommended, because this is most effective for minimizing mixing between deep Fe(II)-rich water and shallow O2-rich water. Simultaneous pumping of shallow O2-rich water in different wells is also recommended.
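The abstract reports a first-order decay of dissolved Fe(II) after pumping, Fe(II) = 10.1 exp(-0.0009 t). The sketch below evaluates that regression and, as an illustration, estimates the elapsed time needed for Fe(II) to fall below a target concentration; the target value, concentration units, and time units are assumptions, since the abstract does not state them.

```python
import numpy as np

def fe2_concentration(t, c0=10.1, k=0.0009):
    """Dissolved Fe(II) at elapsed time t after pumping, from the reported
    regression Fe(II) = 10.1 exp(-0.0009 t). Units follow the original data
    (assumed mg/l and the paper's time unit here)."""
    return c0 * np.exp(-k * t)

def time_to_reach(target, c0=10.1, k=0.0009):
    """Elapsed time at which the regression predicts Fe(II) falls to 'target'."""
    return np.log(c0 / target) / k

# Hypothetical target of 0.3 mg/l, a commonly used drinking-water guideline for iron
target = 0.3
print(f"Fe(II) after 1000 time units: {fe2_concentration(1000):.2f}")
print(f"time to reach {target}: {time_to_reach(target):.0f} time units")
```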
