• Title/Summary/Keyword: mathematical experiment

Search Result 590, Processing Time 0.029 seconds

Zoning Permanent Basic Farmland Based on Artificial Immune System coupling with spatial constraints

  • Hua, Wang;Mengyu, Wang;Yuxin, Zhu;Jiqiang, Niu;Xueye, Chen;Yang, Zhang
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.15 no.5
    • /
    • pp.1666-1689
    • /
    • 2021
  • The red line of Permanent Basic Farmland is the most important part in the "three-line" demarcation of China's national territorial development plan. The scientific and reasonable delineation of the red line is a major strategic measure being taken by China to improve its ability to safeguard the practical interests of farmers and guarantee national food security. The delineation of Permanent Basic Farmland zoning (DPBFZ) is essentially a multi-objective optimization problem. However, the traditional method of demarcation does not take into account the synergistic development goals of conservation of cultivated land utilization, ecological conservation, or urban expansion. Therefore, this research introduces the idea of artificial immune optimization and proposes a multi-objective model of DPBFZ red line delineation based on a clone selection algorithm. This research proposes an objective functional system consisting of these three sub-objectives: optimal quality of cropland, spatially concentrated distribution, and stability of cropland. It also takes into consideration constraints such as the red line of ecological protection, topography, and space for major development projects. The mathematical formal expressions for the objectives and constraints are given in the paper, and a multi-objective optimal decision model with multiple constraints for the DPBFZ problem is constructed based on the clone selection algorithm. An antibody coding scheme was designed according to the spatial pattern of DPBFZ zoning. In addition, the antibody-antigen affinity function, the clone mechanism, and mutation strategy were constructed and improved to solve the DPBFZ problem with a spatial optimization feature. Finally, Tongxu County in Henan province was selected as the study area, and a controlled experiment was set up according to different target preferences. The results show that the model proposed in this paper is operational in the work of delineating DPBFZ. 
It not only avoids the adverse effects of subjective factors in the delineation process but also provides decision makers with multiple DPBFZ layout scenarios by adjusting the weights of the objective functions.
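The clone-selection machinery the paper builds on can be sketched in a few lines. The loop below is a generic clonal selection algorithm on a toy real-valued objective, not the paper's antibody coding, affinity function, or spatial constraints; every name and parameter here is illustrative.

```python
import random

random.seed(0)  # deterministic toy run

def clonal_selection(affinity, init_pop, generations=100,
                     clones_per_antibody=5, mutation_rate=0.1):
    """Generic clonal selection: clone the fittest antibodies, mutate the
    clones, then keep the best survivors for the next generation."""
    population = [list(ab) for ab in init_pop]
    for _ in range(generations):
        population.sort(key=affinity, reverse=True)
        clones = []
        for antibody in population[:len(population) // 2]:
            for _ in range(clones_per_antibody):
                clones.append([g + random.gauss(0, mutation_rate)
                               for g in antibody])
        population = sorted(population + clones, key=affinity,
                            reverse=True)[:len(init_pop)]
    return population[0]

# Toy antigen: affinity peaks at (3, -1)
best = clonal_selection(
    lambda ab: -((ab[0] - 3) ** 2 + (ab[1] + 1) ** 2),
    [[random.uniform(-5, 5), random.uniform(-5, 5)] for _ in range(20)])
```

In the paper, the antibody encodes a spatial zoning pattern and the affinity combines the three sub-objectives under the constraints; this sketch only shows the clone-mutate-select skeleton those pieces plug into.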

A Study on Pose Control for Inverted Pendulum System using PID Algorithm (PID 알고리즘을 이용한 역 진자 시스템의 자세 제어에 관한 연구)

  • Jin-Gu Kang
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology
    • /
    • v.16 no.6
    • /
    • pp.400-405
    • /
    • 2023
Currently, inverted pendulums are studied in many fields, including posture control of missiles, rockets, and bipedal robots. In this study, the vertical posture control of the pendulum was studied by constructing a rotary inverted pendulum using a 256-pulse rotary encoder and a DC motor. Nonlinear systems generally require complex algorithms and controllers, but here the classic and relatively simple PID (Proportional-Integral-Derivative) algorithm was applied to the rotary inverted pendulum system, and a simple yet effective control method was studied. The rotary inverted pendulum used in this study is a nonlinear and unstable system; a PID controller based on Microchip's dsPIC30F4013 embedded processor was designed and implemented on a linearized model. PID controllers are usually designed by combining one or more of the three terms, and they have the advantages of a simple structure, good control performance, and relatively easy gain tuning compared with other controllers. In this study, the physical structure of the system was analyzed mathematically, and control of the vertical balance of the rotary inverted pendulum was realized through modeling. In addition, the feasibility of PID control of the rotary inverted pendulum was verified through simulation and experiment.
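The PID law the paper applies can be sketched as a discrete-time controller. The plant below is a toy first-order system, not the rotary pendulum model, and all gains are illustrative.

```python
class PID:
    """Discrete PID controller: u = Kp*e + Ki*integral(e) + Kd*de/dt."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, measured):
        error = setpoint - measured
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return (self.kp * error + self.ki * self.integral
                + self.kd * derivative)

# Toy plant x' = -x + u driven toward a setpoint of 1.0 (gains illustrative)
pid = PID(kp=2.0, ki=1.0, kd=0.1, dt=0.01)
x = 0.0
for _ in range(2000):           # simulate 20 seconds
    u = pid.update(1.0, x)
    x += (-x + u) * 0.01        # forward-Euler integration
```

On the real pendulum the measured quantity would be the encoder angle and the output a motor command, with gains tuned for the linearized dynamics.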

Study on the Heat Transfer Phenomenon around Underground Concrete Digesters for Biogas Production Systems (생물개스 발생시스템을 위한 지하매설콘크리트 다이제스터의 열전달에 관한 연구)

  • 김윤기;고재균
    • Magazine of the Korean Society of Agricultural Engineers
    • /
    • v.22 no.1
    • /
    • pp.53-66
    • /
    • 1980
This research work is concerned with analytical and experimental studies on the heat transfer phenomenon around underground concrete digesters used for biogas production systems. A mathematical and computational method was developed to estimate heat losses from an underground cylindrical concrete digester. To test its feasibility and to evaluate the thermal parameters of the materials involved, the method was applied to six physical model digesters. The cylindrical concrete digester was taken as the physical model to which the mathematical heat-balance model was applied. The mathematical model was transformed by means of the finite element method and used to analyze temperature distributions under several boundary conditions and design parameters. The design parameters of the experimental digesters were: three sizes (40 cm by 80 cm, 80 cm by 160 cm, and 100 cm by 200 cm in diameter and height); two levels of insulation material (plain concrete, and vermiculite mixed into the concrete); and two types of installation (underground and half-exposed). For the purposes of this study, the liquid within the digester was substituted by water, its temperature controlled at five levels (35°C, 30°C, 25°C, 20°C, and 15°C); the ambient air temperature and ground temperature were measured outside the system under natural winter climate conditions. The following results were drawn from the study.
1. The analytical method, which produced estimated temperature distributions around a cylindrical digester, was judged generally acceptable from the comparison of estimated with measured values. However, the difference between estimated and measured temperatures tended to increase considerably when the ambient temperature was relatively low. This was mainly related to variations in the input parameters, including the thermal conductivity of the soil, applied in the numerical analysis. Consequently, improving these input data is expected to yield better refined estimates.
2. The difference between estimated and measured heat losses showed a trend similar to that of the temperature distribution discussed above.
3. A map of isothermal lines drawn from the estimated temperature distribution was found to be very useful for observing the direction and rate of heat transfer within the boundary. From this analysis, it was interpreted that most of the heat loss passes through the triangular section bounded within 45 degrees toward the wall at the bottom edge of the digester. Therefore, any effective insulation should be concentrated within this region.
4. It was verified by experiment that heat loss per unit volume of liquid decreased as the digester became larger. For instance, at a liquid temperature of 35°C, the heat loss per unit volume from the 0.1 m³ digester was 1,050 kcal/hr·m³, while that from the 1.57 m³ digester was 150 kcal/hr·m³.
5. In terms of insulation, the vermiculite concrete was consistently superior to the plain concrete. At liquid temperatures ranging from 15°C to 35°C, the reduction in heat loss ranged from 5% to 25% for the half-exposed digester and from 10% to 28% for the fully underground digester.
6. Comparing the half-exposed and underground digesters, the heat loss from the former was 1.6 to 2.6 times that from the latter. This is evidence that the underground digester has an advantage in heat conservation during winter.
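The paper's finite element model is not reproduced here, but a back-of-envelope steady-state check for conduction through a cylindrical digester wall can be sketched as follows. All material values and dimensions below are illustrative, not taken from the paper.

```python
import math

def radial_heat_loss(k, length, r_inner, r_outer, t_inner, t_outer):
    """Steady-state conduction through a cylindrical wall (watts):
    Q = 2*pi*k*L*(Ti - To) / ln(ro/ri)."""
    return (2 * math.pi * k * length * (t_inner - t_outer)
            / math.log(r_outer / r_inner))

# Illustrative numbers only: a 1.0 m tall digester, 0.5 m inner radius,
# 0.1 m thick wall, liquid at 35 C, surrounding soil at 5 C
q_plain = radial_heat_loss(k=1.4, length=1.0, r_inner=0.5, r_outer=0.6,
                           t_inner=35.0, t_outer=5.0)   # plain concrete
q_insulated = radial_heat_loss(k=0.6, length=1.0, r_inner=0.5, r_outer=0.6,
                               t_inner=35.0, t_outer=5.0)  # lower-k mix
```

The lower-conductivity wall loses proportionally less heat, consistent with the paper's finding that vermiculite concrete outperformed plain concrete.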


A Study on the Nightsoil Treatment by BFB (BFB에 의한 분뇨처리(糞尿處理)의 연구(研究))

  • Kim, Hwan Gi;Lee, Young Dong
    • KSCE Journal of Civil and Environmental Engineering Research
    • /
    • v.3 no.2
    • /
    • pp.1-15
    • /
    • 1983
This paper concentrated on assessing the feasibility of, and providing a mathematical analysis for, the application of BFB to the treatment of nightsoil at a low dilution rate. The experiment was conducted in a continuous-type reactor at $20^{\circ}C$, varying the F/M ratio from 0.12 to 0.37 and the dilution ratio from 2 to 10, with matted reticulated polypropylene sheets provided as solid supports. The results showed that the application of BFB to the treatment of nightsoil would be more effective than any other biological treatment process. It was also observed that the optimum dilution ratio was about 5 and the optimum HRT about 17 hours, and it was estimated that the reactor volume and the quantity of weak water could be reduced by up to 70 percent and 80 percent, respectively. The experimental results of BFB could be analyzed with the mathematical models applied to the complete-mixing activated sludge process. The substrate removal rates obtained from McKinney's ($K_m$) and Eckenfelder's ($K_e$) equations were 1.784/hr and $2.0{\times}10l/mg{\cdot}day$, and substrate was removed very rapidly compared with conventional biological treatment processes. The biomass yield coefficient ($a_5$), the endogenous respiration rate (b), the synthesis oxygen demand rate ($a{_5}^{\prime}$), and the endogenous respiration oxygen demand rate (b') were 0.349, 0.0237/day, 0.495, and 0.0336, respectively.
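The complete-mix substrate-removal models mentioned above follow a standard form. The sketch below uses an Eckenfelder-type first-order relation with assumed parameter values, not the paper's; the numbers are for illustration only.

```python
def effluent_substrate(s0, x, t, ke):
    """Eckenfelder-type first-order removal for a complete-mix reactor:
    (S0 - Se) / (X * t) = Ke * Se  =>  Se = S0 / (1 + Ke * X * t)."""
    return s0 / (1 + ke * x * t)

# All values assumed for illustration: influent substrate 5000 mg/L,
# biomass 3000 mg/L, HRT of 17 hours (17/24 day), Ke in L/(mg*day)
se = effluent_substrate(s0=5000, x=3000, t=17 / 24, ke=2.0e-4)
removal = 1 - se / 5000
```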


Comparison of Deep Learning Frameworks: About Theano, Tensorflow, and Cognitive Toolkit (딥러닝 프레임워크의 비교: 티아노, 텐서플로, CNTK를 중심으로)

  • Chung, Yeojin;Ahn, SungMahn;Yang, Jiheon;Lee, Jaejoon
    • Journal of Intelligence and Information Systems
    • /
    • v.23 no.2
    • /
    • pp.1-17
    • /
    • 2017
The deep learning framework is software designed to help develop deep learning models. Some of its important functions include automatic differentiation and utilization of GPUs. The list of popular deep learning frameworks includes Caffe (BVLC) and Theano (University of Montreal). Recently, Microsoft's deep learning framework, Microsoft Cognitive Toolkit, was released under an open-source license, following Google's Tensorflow a year earlier. The early deep learning frameworks were developed mainly for research at universities. Beginning with the release of Tensorflow, however, companies such as Microsoft and Facebook have joined the competition in framework development. Given the trend, Google and other companies are expected to continue investing in deep learning frameworks to take the initiative in the artificial intelligence business. From this point of view, we think it is a good time to compare deep learning frameworks, so we compare three that can be used as Python libraries: Google's Tensorflow, Microsoft's CNTK, and Theano, which is in some sense a predecessor of the other two. The most common and important function of deep learning frameworks is the ability to perform automatic differentiation. Basically, all the mathematical expressions of deep learning models can be represented as computational graphs, which consist of nodes and edges. Partial derivatives on each edge of a computational graph can then be obtained. With the partial derivatives, software can compute the derivative of any node with respect to any variable by applying the chain rule of calculus. First of all, the convenience of coding is, in order, CNTK, Tensorflow, and Theano. The criterion is simply based on the length of the code; the learning curve and the ease of coding are not the main concern.
According to these criteria, Theano was the most difficult to implement with, and CNTK and Tensorflow were somewhat easier. With Tensorflow, we need to define weight variables and biases explicitly. The reason that CNTK and Tensorflow are easier to implement with is that those frameworks provide more abstraction than Theano. We should mention, however, that low-level coding is not always bad: it gives flexibility. With low-level coding such as in Theano, we can implement and test any new deep learning model or any new search method we can think of. Our assessment of the execution speed of each framework is that there is no meaningful difference. According to the experiment, the execution speeds of Theano and Tensorflow are very similar, although the experiment was limited to a CNN model. In the case of CNTK, the experimental environment was not kept the same: the code written in CNTK had to be run in a PC environment without a GPU, where code executes as much as 50 times slower than with a GPU. But we concluded that the difference in execution speed was within the range of variation caused by the different hardware setup. In this study, we compared three deep learning frameworks: Theano, Tensorflow, and CNTK. According to Wikipedia, there are 12 available deep learning frameworks, and 15 different attributes differentiate them. Some of the important attributes include the interface language (Python, C++, Java, etc.) and the availability of libraries for various deep learning models such as CNN, RNN, and DBN. If a user implements a large-scale deep learning model, support for multiple GPUs or multiple servers will also be important. And for someone learning deep learning models, the availability of enough examples and references also matters.
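The automatic differentiation described above, computational graphs with a partial derivative on each edge combined by the chain rule, can be sketched with a minimal reverse-mode autodiff node. This is a toy illustration of the principle, not how Theano, Tensorflow, or CNTK are actually implemented.

```python
class Node:
    """Tiny reverse-mode autodiff: each node stores its value and the
    (parent, local_gradient) pairs of the computational-graph edges."""
    def __init__(self, value, parents=()):
        self.value = value
        self.parents = parents
        self.grad = 0.0

    def __add__(self, other):
        return Node(self.value + other.value, [(self, 1.0), (other, 1.0)])

    def __mul__(self, other):
        return Node(self.value * other.value,
                    [(self, other.value), (other, self.value)])

    def backward(self, seed=1.0):
        # Chain rule: push the upstream gradient along each incoming edge
        self.grad += seed
        for parent, local in self.parents:
            parent.backward(seed * local)

# f(x, y) = x*y + x  =>  df/dx = y + 1, df/dy = x
x, y = Node(3.0), Node(4.0)
f = x * y + x
f.backward()
```

After `f.backward()`, `x.grad` is 5.0 and `y.grad` is 3.0, exactly the partial derivatives at (3, 4); the gradient of every intermediate node falls out of the same traversal.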

A Ranking Algorithm for Semantic Web Resources: A Class-oriented Approach (시맨틱 웹 자원의 랭킹을 위한 알고리즘: 클래스중심 접근방법)

  • Rho, Sang-Kyu;Park, Hyun-Jung;Park, Jin-Soo
    • Asia pacific journal of information systems
    • /
    • v.17 no.4
    • /
    • pp.31-59
    • /
    • 2007
  • We frequently use search engines to find relevant information in the Web but still end up with too much information. In order to solve this problem of information overload, ranking algorithms have been applied to various domains. As more information will be available in the future, effectively and efficiently ranking search results will become more critical. In this paper, we propose a ranking algorithm for the Semantic Web resources, specifically RDF resources. Traditionally, the importance of a particular Web page is estimated based on the number of key words found in the page, which is subject to manipulation. In contrast, link analysis methods such as Google's PageRank capitalize on the information which is inherent in the link structure of the Web graph. PageRank considers a certain page highly important if it is referred to by many other pages. The degree of the importance also increases if the importance of the referring pages is high. Kleinberg's algorithm is another link-structure based ranking algorithm for Web pages. Unlike PageRank, Kleinberg's algorithm utilizes two kinds of scores: the authority score and the hub score. If a page has a high authority score, it is an authority on a given topic and many pages refer to it. A page with a high hub score links to many authoritative pages. As mentioned above, the link-structure based ranking method has been playing an essential role in World Wide Web(WWW), and nowadays, many people recognize the effectiveness and efficiency of it. On the other hand, as Resource Description Framework(RDF) data model forms the foundation of the Semantic Web, any information in the Semantic Web can be expressed with RDF graph, making the ranking algorithm for RDF knowledge bases greatly important. The RDF graph consists of nodes and directional links similar to the Web graph. As a result, the link-structure based ranking method seems to be highly applicable to ranking the Semantic Web resources. 
However, the information space of the Semantic Web is more complex than that of WWW. For instance, WWW can be considered as one huge class, i.e., a collection of Web pages, which has only a recursive property, i.e., a 'refers to' property corresponding to the hyperlinks. However, the Semantic Web encompasses various kinds of classes and properties, and consequently, ranking methods used in WWW should be modified to reflect the complexity of the information space in the Semantic Web. Previous research addressed the ranking problem of query results retrieved from RDF knowledge bases. Mukherjea and Bamba modified Kleinberg's algorithm in order to apply their algorithm to rank the Semantic Web resources. They defined the objectivity score and the subjectivity score of a resource, which correspond to the authority score and the hub score of Kleinberg's, respectively. They concentrated on the diversity of properties and introduced property weights to control the influence of a resource on another resource depending on the characteristic of the property linking the two resources. A node with a high objectivity score becomes the object of many RDF triples, and a node with a high subjectivity score becomes the subject of many RDF triples. They developed several kinds of Semantic Web systems in order to validate their technique and showed some experimental results verifying the applicability of their method to the Semantic Web. Despite their efforts, however, there remained some limitations which they reported in their paper. First, their algorithm is useful only when a Semantic Web system represents most of the knowledge pertaining to a certain domain. In other words, the ratio of links to nodes should be high, or overall resources should be described in detail, to a certain degree for their algorithm to properly work. 
Second, a Tightly-Knit Community (TKC) effect, the phenomenon that pages which are less important but densely connected get higher scores than pages that are more important but sparsely connected, remains problematic. Third, a resource may have a high score not because it is actually important, but simply because it is very common and consequently has many links pointing to it. In this paper, we examine such ranking problems from a novel perspective and propose a new algorithm which can solve the problems left by the previous studies. Our proposed method is based on a class-oriented approach. In contrast to the predicate-oriented approach entertained by the previous research, under our approach a user determines the weight of a property by comparing its significance relative to the other properties when evaluating the importance of resources in a specific class. This approach stems from the idea that most queries are supposed to find resources belonging to the same class in the Semantic Web, which consists of many heterogeneous classes in RDF Schema. This approach closely reflects the way that people evaluate things in the real world, and it turns out to be superior to the predicate-oriented approach for the Semantic Web. Our proposed algorithm can resolve the TKC effect and can further shed light on other limitations posed by the previous research. In addition, we propose two ways to incorporate data-type properties, which had not been employed even when they had some significance for resource importance. We designed an experiment to show the effectiveness of our proposed algorithm and the validity of the ranking results, which had not been attempted in previous research. We also conducted a comprehensive mathematical analysis, which was overlooked in previous research. The mathematical analysis enabled us to simplify the calculation procedure.
Finally, we summarize our experimental results and discuss further research issues.
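Kleinberg's authority/hub iteration discussed above can be sketched on a toy graph. This is the textbook HITS loop, not the authors' class-oriented algorithm; the graph and node names are illustrative.

```python
def hits(graph, iterations=50):
    """Kleinberg's HITS: a node's authority score sums the hub scores of
    nodes linking to it; its hub score sums the authority scores of the
    nodes it links to. Scores are L2-normalized each round."""
    nodes = list(graph)
    auth = {n: 1.0 for n in nodes}
    hub = {n: 1.0 for n in nodes}
    for _ in range(iterations):
        auth = {n: sum(hub[m] for m in nodes if n in graph[m])
                for n in nodes}
        norm = sum(v * v for v in auth.values()) ** 0.5
        auth = {n: v / norm for n, v in auth.items()}
        hub = {n: sum(auth[m] for m in graph[n]) for n in nodes}
        norm = sum(v * v for v in hub.values()) ** 0.5
        hub = {n: v / norm for n, v in hub.items()}
    return auth, hub

# Toy graph: A and B both link to C, and C links to D
auth, hub = hits({'A': ['C'], 'B': ['C'], 'C': ['D'], 'D': []})
```

Here C ends up with the top authority score (two pages point to it) and A and B with the top hub scores, matching the intuition in the abstract. The class-oriented approach replaces the uniform edge treatment with per-class, user-weighted properties.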

Optimization of Ingredients for the Preparation of Chinese Quince (Chaenomelis sinensis) Jam by Mixture Design (모과잼 제조시 혼합물 실험계획법에 의한 재료 혼합비율의 최적화)

  • Lee, Eun-Young;Jang, Myung-Sook
    • Journal of the Korean Society of Food Science and Nutrition
    • /
    • v.38 no.7
    • /
    • pp.935-945
    • /
    • 2009
This study was performed to find the optimum ratio of ingredients in Chinese quince jam. The experiment followed the D-optimal mixture design, with 14 experimental points including 4 replicates for three independent variables (Chinese quince paste $45{\sim}60%$, pectin $1.5{\sim}4.5%$, sugar $45.5{\sim}63.5%$). A mathematical analytical tool was employed for the optimization of the ingredients. The canonical form and trace plot showed the influence of each ingredient in the mixture on the final product. By the F-test, sweetness, pH, L, b, ${\Delta}E$, and firmness were expressed by a linear model, while the spreadmeter value, a, and the sensory characteristics (appearance, color, smell, taste, and overall acceptability) were expressed by a quadratic model. The optimum formulations found by the numerical and graphical methods were similar: Chinese quince paste 54.48%, pectin 2.45%, and sugar 53.07%. The optimized formulation is expected to improve the use of Chinese quince and contribute to the commercialization of high-quality Chinese quince jam.
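The linear and quadratic mixture models referred to above follow the Scheffé canonical form. The sketch below fits the quadratic form by least squares on made-up data; the proportions and responses are illustrative, not the paper's measurements.

```python
import numpy as np

def fit_scheffe_quadratic(X, y):
    """Fit the Scheffe quadratic mixture model
    y = sum_i b_i*x_i + sum_{i<j} b_ij*x_i*x_j  (components sum to 1)."""
    A = np.column_stack([X[:, 0], X[:, 1], X[:, 2],
                         X[:, 0] * X[:, 1],
                         X[:, 0] * X[:, 2],
                         X[:, 1] * X[:, 2]])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coef

# Made-up proportions (paste, pectin, sugar), each row summing to 1,
# and made-up responses standing in for a sensory score
X = np.array([[0.50, 0.020, 0.480], [0.55, 0.030, 0.420],
              [0.45, 0.040, 0.510], [0.60, 0.015, 0.385],
              [0.52, 0.025, 0.455], [0.48, 0.045, 0.475]])
y = np.array([6.2, 6.5, 5.9, 6.8, 6.4, 6.0])
coef = fit_scheffe_quadratic(X, y)
```

In a real mixture study the design points would come from the D-optimal design and the fitted surface would then be searched for the optimum blend.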

Comparative Analysis of the New and Old Secondary School Science Textbooks (중학교 과학교과서의 비교분석)

  • Kim, Seong-Jin;Pak, Sung-Jae
    • Journal of The Korean Association For Science Education
    • /
    • v.5 no.1
    • /
    • pp.49-61
    • /
    • 1985
In this study, I compared and analyzed the new and old secondary school science textbooks to find their characteristics and the differences between them. The results of the study are as follows. Major concepts in the new textbook are almost the same as those in the old one. The new textbook reinforces the functions of the introduction and of checking the results of learning, presents more diverse learning materials, and reduces the degree of learning difficulty by omitting several abstract pieces of knowledge and mathematical formulas that can only be understood through formal operational thinking. The results show that the new textbook is more effective in arousing students' interest and curiosity, and therefore increases the efficiency of learning. But the new textbook is less suitable for inquiry because it is mainly composed of explanation and fact rather than experiment and observation. I think this results from a practical accommodation to the real conditions of schools when the curriculum was reformed and the new textbook was written.


A comparison study of 76Se, 77Se and 78Se isotope spikes in isotope dilution method for Se (셀레늄의 동위원소 희석분석법에서 첨가 스파이크 동위원소 76Se, 77Se 및 78Se들의 비교분석)

  • Kim, Leewon;Lee, Seoyoung;Pak, Yong-Nam
    • Analytical Science and Technology
    • /
    • v.29 no.4
    • /
    • pp.170-178
    • /
    • 2016
The accuracy and precision of isotope dilution (ID) methods with different spike isotopes, 76Se, 77Se, and 78Se, were compared for the analysis of selenium using a quadrupole ICP/MS equipped with an octopole reaction cell (ORC). For a Se inorganic standard solution, all three spikes showed less than 1 % error and 1 % RSD over both short-term (a day) and long-term (several months) periods; the three gave similar results, with 78Se slightly better than 76Se and 77Se. However, the spikes gave different results when NIST SRM 1568a and SRM 2967 were analyzed, because of several interferences on the m/z values measured and calculated. Interference from SeH generated in the ORC was considered, as well as As and Br in the matrix. For SRM 1568a, which has a simple background matrix, all three spikes gave similar accuracy and precision, with a steady recovery rate of about 80 %. The %RSD was a bit higher than for the inorganic standard (1.8 %, 8.6 %, and 6.3 % for 78Se, 76Se, and 77Se, respectively) but low enough to conclude that the experiment is reliable. However, mussel tissue, which has a complex matrix, gave inaccurate results with the 78Se spike (over 100 % RSD). 76Se and 77Se showed relatively good recovery rates of around 98.6 % and 104.2 %; the errors were less than 5 %, but the precision was somewhat poorer, at 15 % RSD. This clearly shows that the Br interferences are so large that a simple mathematical calibration is not sufficient for a complex-matrix sample. In conclusion, all three spikes give similar results when the matrix is simple; however, 78Se should be avoided when a large amount of Br is present in the matrix, while either 76Se or 77Se provides accurate results.
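The isotope-dilution calculation underlying this comparison can be sketched with the textbook two-isotope equation. The abundances below are illustrative placeholders, not actual Se isotope values.

```python
def sample_amount(n_spike, a_sample, b_sample, a_spike, b_spike, r_measured):
    """Two-isotope dilution equation. With R_m the measured ratio of
    isotope a to isotope b in the sample/spike blend, and A/B the
    abundances of isotopes a and b in sample and spike:
        N_x = N_s * (R_m*B_s - A_s) / (A_x - R_m*B_x)"""
    return (n_spike * (r_measured * b_spike - a_spike)
            / (a_sample - r_measured * b_sample))

# Illustrative abundances (not real Se values): sample enriched in
# isotope a, spike enriched in isotope b, blend ratio measured as 1.0
n_x = sample_amount(n_spike=1.0, a_sample=0.8, b_sample=0.2,
                    a_spike=0.1, b_spike=0.9, r_measured=1.0)
# Consistency check: rebuilding the blend ratio recovers r_measured
r_check = (n_x * 0.8 + 1.0 * 0.1) / (n_x * 0.2 + 1.0 * 0.9)
```

The abstract's point about Br follows directly from this form: any interference that biases the measured ratio R_m propagates straight into the computed amount, which is why a heavily interfered isotope such as 78Se must be avoided in Br-rich matrices.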

Analysis of Greenhouse Thermal Environment by Model Simulation (시뮬레이션 모형에 의한 온실의 열환경 분석)

  • 서원명;윤용철
    • Journal of Bio-Environment Control
    • /
    • v.5 no.2
    • /
    • pp.215-235
    • /
    • 1996
Thermal analysis by mathematical model simulation makes it possible to reasonably predict the heating and/or cooling requirements of greenhouses located in various geographical and climatic environments. Another advantage of the model simulation technique is that it makes it possible to select an appropriate heating system, set up an energy utilization strategy, schedule seasonal crop patterns, and site new greenhouse ranges. In this study, the control pattern for the greenhouse microclimate is categorized as cooling and heating. A dynamic model was adopted to simulate heating requirements and energy conservation effects such as energy saving by a night-time thermal curtain, estimation of heating degree-hours (HDH), and long-term prediction of greenhouse thermal behavior. The cooling effects of ventilation, shading, and a pad & fan system were partly analyzed by a static model. Experimental work with a small model greenhouse of 1.2 m × 2.4 m showed that cooling the greenhouse by spraying cold water directly on the cover surface, or by recirculating cold water through heat exchangers, would be effective for summer cooling. The mathematical model developed for greenhouse simulation is highly applicable because it reflects various climatic factors such as temperature, humidity, beam and diffuse solar radiation, and wind velocity. The model was closely verified against weather data obtained through long-period greenhouse experiments. Most of the material relating to greenhouse heating or cooling components was obtained from the model greenhouse simulated mathematically using typical-year (1987) data for Jinju, Gyeongnam; some of the material relating to cooling was obtained from model experiments, including analysis of the cooling effect of water sprayed directly on the greenhouse roof surface. The results are summarized as follows:
1. The heating requirements of the model greenhouse were strongly related to the minimum temperature set for the greenhouse. The night-time setting temperature is much more influential on heating energy requirements than the day-time setting; it is therefore highly recommended that the night-time setting temperature be carefully determined and controlled.
2. HDH data obtained by the conventional method are estimated on the basis of a considerably long-term average weather temperature together with the standard base temperature (usually 18.3°C). Such data can merely serve as relative comparison criteria for heating load; they are not applicable to the calculation of greenhouse heating requirements because of the limited consideration of climatic factors and the inappropriate base temperature. Comparing the HDH data with the simulation results shows that a heating system designed from HDH data will probably overshoot the actual heating requirement.
3. The energy saving effect of the night-time thermal curtain, like the estimated heating requirement, is sensitively related to weather conditions: the thermal curtain adopted in the simulation was highly effective, saving more than 50% of the annual heating requirement.
4. Ventilation performance during warm seasons is mainly influenced by the air exchange rate, with some variation depending on greenhouse structure, weather, and cropping conditions. For air exchange rates above 1 volume per minute, the reduction in temperature rise in both greenhouse types becomes modest with additional ventilation capacity. Therefore the desirable ventilation capacity is taken to be 1 air change per minute, which is the recommended rate for common greenhouses.
5. In a fully cropped glass greenhouse, under clear weather at 50% RH and continuous ventilation of 1 air change per minute, the temperature drop in the 50% shaded greenhouse and in the pad & fan greenhouse was 2.6°C and 6.1°C, respectively. The temperature in the control greenhouse under continuous air change at this time was 36.6°C, which was 5.3°C above ambient. As a result, the greenhouse temperature can be maintained 3°C below ambient. But at 80% RH, it was impossible to bring the greenhouse temperature below ambient, because the possible temperature reduction by the pad & fan system is then no more than 2.4°C.
6. During the three hot summer months, assuming the greenhouse is cooled only when its temperature rises above 27°C, the relationship between the RH of the ambient air and the greenhouse temperature drop (ΔT) was formulated as: ΔT = -0.077·RH + 7.7.
7. Time-dependent cooling effects of each of, or combinations of, ventilation, 50% shading, and a pad & fan system of 80% efficiency were predicted continuously over one typical summer day. When the greenhouse was cooled only by 1 air change per minute, the greenhouse air temperature was 5°C above the outdoor temperature. Neither method alone can bring the greenhouse air temperature below the outdoor temperature under fully cropped conditions, but when both systems were operated together, the greenhouse air temperature could be held about 2.0-2.3°C below ambient.
8. When cool water of 6.5-8.5°C was sprayed on the greenhouse roof surface at a flow rate of 1.3 liter/min per unit of greenhouse floor area, the greenhouse air temperature could be brought down to 16.5-18.0°C, about 10°C below the ambient temperature of 26.5-28.0°C at that time. The most important factor in cooling greenhouse air effectively with a water spray may be securing a plentiful source of cool water, such as ground water or cold water produced by a heat pump. Future work will focus not only on analyzing the feasibility of heat-pump operation but also on finding the relationships between greenhouse air temperature (T_g), spray water temperature (T_w), water flow rate (Q), and ambient temperature (T_o).
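The empirical relation reported above, ΔT = -0.077·RH + 7.7, can be applied directly; a minimal helper (the function name is ours):

```python
def greenhouse_temperature_drop(rh_percent):
    """Empirical relation from the study: greenhouse temperature drop
    (in degrees C) as a function of ambient relative humidity (%):
    dT = -0.077 * RH + 7.7"""
    return -0.077 * rh_percent + 7.7

# At 50% RH the relation gives a drop of 3.85 C; at 100% RH it gives
# 0 C, i.e. no cooling margin at saturation.
drop_at_50 = greenhouse_temperature_drop(50)
drop_at_100 = greenhouse_temperature_drop(100)
```

The zero drop at 100% RH is consistent with the abstract's observation that at high humidity the pad & fan system could not bring the greenhouse below ambient temperature.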
