• Title/Summary/Keyword: generating

Search results: 7,295

Preparation and Characterization of Rutile-anatase Hybrid TiO2 Thin Film by Hydrothermal Synthesis

  • Kwon, Soon Jin; Song, Hoon Sub; Im, Hyo Been; Nam, Jung Eun; Kang, Jin Kyu; Hwang, Taek Sung; Yi, Kwang Bok
    • Clean Technology / v.20 no.3 / pp.306-313 / 2014
  • Nanoporous TiO2 films are commonly used as working electrodes in dye-sensitized solar cells (DSSCs). So far, there have been attempts to synthesize films with various TiO2 nanostructures to increase the power-conversion efficiency. In this work, vertically aligned rutile TiO2 nanorods were grown on fluorine-doped tin oxide (FTO) glass by hydrothermal synthesis, followed by deposition of an anatase TiO2 film. This new method of anatase TiO2 growth avoided the use of a seed layer, which is usually required in hydrothermal synthesis of TiO2 electrodes. The dense anatase TiO2 layer was designed to act as the electron-generating layer, while the less dense rutile nanorods acted as electron-transfer pathways to the FTO glass. To facilitate electron transfer, the rutile-phase nanorods were treated with a TiCl4 solution so that they became coated with an anatase TiO2 film after heat treatment. Compared with an electrode consisting of only rutile TiO2, the power-conversion efficiency of the rutile-anatase hybrid TiO2 electrode was found to be much higher. The total thickness of the rutile-anatase hybrid TiO2 structures was around 4.5-5.0 μm, and the highest power efficiency of a cell assembled with the structured TiO2 electrode was around 3.94%.

Performance Analysis of Implementation on IoT based Smart Wearable Mine Detection Device

  • Kim, Chi-Wook
    • Journal of the Korea Society of Computer and Information / v.24 no.12 / pp.51-57 / 2019
  • In this paper, we analyze the performance of an IoT-based smart wearable mine detection device. The military currently uses various mine detection methods, but in the field, mine detection is generally performed by visual inspection, probing, detector sweeps, and other methods. Detector-based detection uses a GPR sensor, which can detect metal objects but has difficulty identifying non-metallic ones, and it is hard to tell which areas have already been swept. In addition, a great deal of manpower and time is wasted, and if the user does not move the sensor at a constant speed, or moves it too fast, it is difficult to detect landmines accurately. We therefore studied a smart wearable mine detection device, composed of a human-body antenna, a main microprocessor, smart glasses, a body-mounted LCD monitor, wireless data transmission, a belt-type power supply, and a black-box camera, which is intended to reduce mine-detection errors by using a unidirectional ultrasonic sensing signal. Based on the results of this study, we will conduct an experiment to confirm the possibility of detecting buried mines using the Internet of Things (IoT). The paper consists of an introduction, a description of the experimental environment, a simulation analysis, and a conclusion. The introduction covers mines, mine detectors, and the progress of related research. The experimental environment consists of a large anti-personnel mine, an M16A1 fragmentation anti-personnel mine, M15 and M19 anti-tank mines, and mine-like objects such as plastic bottles and aluminum cans. The simulation analysis uses MATLAB to evaluate the implemented mine detection device: IoT signals are generated and transmitted, and each received signal is analyzed to verify landmine detection performance. We then measure performance through simulation of the IoT-based mine detection algorithm to demonstrate the feasibility of IoT-based landmine detection.
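
The abstract describes a simulation in which signals are generated, transmitted, and analyzed to verify detection performance, but gives no implementation details. Below is a minimal, illustrative pulse-echo sketch of that kind of analysis in Python; the sampling rate, carrier frequency, reflectivities, and matched-filter threshold are assumptions, not values from the paper.

```python
# Illustrative sketch (not the paper's code): simulate a unidirectional ultrasonic
# burst, an echo from a buried object, and matched-filter detection.
import numpy as np

fs = 1_000_000          # sampling rate [Hz] (assumed)
f0 = 40_000             # ultrasonic carrier frequency [Hz] (assumed)
pulse_len = 200         # samples in the transmitted burst

t = np.arange(pulse_len) / fs
tx_pulse = np.sin(2 * np.pi * f0 * t) * np.hanning(pulse_len)

def received_signal(delay, reflectivity, n=4000, noise_std=0.05, seed=0):
    """Echo = delayed, attenuated copy of the pulse buried in Gaussian noise."""
    rng = np.random.default_rng(seed)
    rx = rng.normal(0.0, noise_std, n)
    rx[delay:delay + pulse_len] += reflectivity * tx_pulse
    return rx

def detect(rx, threshold=8.0):
    """Matched filter; flag a target if the peak rises well above the noise floor."""
    mf = np.abs(np.correlate(rx, tx_pulse, mode="valid"))
    score = mf / (np.median(mf) + 1e-12)
    peak = int(np.argmax(score))
    return score[peak] > threshold, peak

hit, where = detect(received_signal(delay=1200, reflectivity=0.8))   # strong reflector
miss, _ = detect(received_signal(delay=1200, reflectivity=0.0))      # noise only
print(hit, where, miss)    # expected: True, near sample 1200, False
```

In a real device the synthetic echo would be replaced by data streamed from the wearable sensor over the IoT link.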

Low-Power CMOS On-Chip Voltage Reference Circuits (저전력 CMOS On-Chip 기준전압 발생회로)

  • Kwon, Duck-Ki; Park, Jong-Tae; Yu, Chong-Gun
    • Journal of IKEEE / v.4 no.2 s.7 / pp.181-191 / 2000
  • In this paper, two schemes for generating reference voltages using enhancement-mode MOS transistors and resistors are proposed. The first is a voltage-mode scheme in which temperature compensation is achieved by summing a voltage component proportional to a threshold voltage and a voltage component proportional to the thermal voltage. The second is a current-mode scheme in which temperature compensation is achieved by summing a current component proportional to a threshold voltage and a current component proportional to the thermal voltage. The designed circuits have been simulated using 0.65 μm n-well CMOS process parameters. The voltage-mode circuit has a temperature coefficient of less than 48.0 ppm/°C and a power-supply (VDD) coefficient of less than 0.21%/V over a temperature range of -30°C to 130°C and a VDD range of 3 V to 12 V. The current-mode circuit has a temperature coefficient of less than 38.2 ppm/°C and a VDD coefficient of less than 0.8%/V over -30°C to 130°C and 4 V to 12 V. The power consumption of the voltage-mode and current-mode circuits is 27 μW and 65 μW, respectively, at 5 V and 30°C. Measurement results show that the voltage-mode reference circuit has a VDD coefficient of less than 0.63%/V over 30°C to 100°C and a temperature coefficient of less than 490 ppm/°C over 3 V to 6 V. The proposed reference circuits are simple and thus easy to design. The proposed current-mode reference circuit can be designed to generate a wide range of reference voltages.
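
Both schemes rest on the same first-order idea: a threshold-voltage term with a negative temperature coefficient is summed with a thermal-voltage (kT/q) term with a positive one, and the weights are chosen so the coefficients cancel. A minimal numeric sketch of that cancellation, using illustrative coefficient values that are not taken from the paper:

```python
# Sketch of first-order temperature compensation: a*V_TH (negative TC) + b*kT/q
# (positive TC), with b chosen so the linear temperature terms cancel.
k_over_q = 8.617e-5                 # Boltzmann constant / electron charge [V/K]
VTH0, dVTH_dT = 0.75, -1.2e-3       # threshold voltage at 300 K and its TC (assumed values)

def v_th(T):                        # threshold voltage vs. absolute temperature [K]
    return VTH0 + dVTH_dT * (T - 300.0)

def v_t(T):                         # thermal voltage kT/q
    return k_over_q * T

a = 1.0
b = -a * dVTH_dT / k_over_q         # weight that nulls the first-order TC (about 13.9 here)

def v_ref(T):
    return a * v_th(T) + b * v_t(T)

for T in (243.15, 303.15, 403.15):  # -30, 30, 130 °C
    print(f"{T - 273.15:6.1f} °C   Vref = {v_ref(T):.6f} V")
# With this purely linear model Vref is flat; real circuits show the residual
# curvature that the ppm/°C figures in the abstract quantify.
```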


Use of Nuclear Power Sources in Outer Space and Space Law (우주에서의 핵연료(NPS)사용과 우주법)

  • Kim, Han-Taek
    • The Korean Journal of Air & Space Law and Policy / no.spc / pp.35-58 / 2007
  • Nuclear power sources (NPS) have been used since 1961 to generate energy for space objects and have since been recognized as particularly suited, even essential, to some space operations. In January 1978 a malfunctioning Soviet nuclear-powered satellite, Cosmos 954, re-entered the earth's atmosphere and disintegrated, scattering radioactive debris over a wide area of the Canadian Northwest Territories. This incident gave international legal scholars reason to formulate principles regulating the use of NPS in outer space. In 1992 the UN General Assembly adopted the "Principles Relevant to the Use of Nuclear Power Sources in Outer Space". These NPS Principles set out certain legal and regulatory requirements on the use of nuclear and radioactive power sources for non-propulsive purposes. Although these principles, as 'soft law', are not binding legal norms, they exert considerable influence on state practice, much like the 1983 DBS Principles (Principles Governing the Use by States of Artificial Earth Satellites for International Direct Television Broadcasting), the 1986 RS Principles (Principles Relating to Remote Sensing of the Earth from Space), and the 1996 Declaration on International Cooperation in the Exploration and Use of Outer Space for the Benefit and in the Interests of All States, Taking into Particular Account the Needs of Developing Countries. As for the 1963 Declaration of Legal Principles Governing the Activities of States in the Exploration and Use of Outer Space, its main points, such as the free use of outer space, the non-appropriation of celestial bodies, and the application of international law to outer space, have become customary international law binding all states. The NPS Principles may acquire a similar character, depending on states' willingness to respect them.


Development of Control Algorithm for Greenhouse Cooling Using Two-fluid Fogging System (이류체 포그 냉방시스템의 제어알고리즘 개발)

  • Nam, Sang-Woon; Kim, Young-Shik; Sung, In-Mo
    • Journal of Bio-Environment Control / v.22 no.2 / pp.138-145 / 2013
  • In order to develop an efficient control algorithm for the two-fluid fogging system, cooling experiments with many different fogging cycles were conducted in tomato greenhouses. The cooling effect was 1.2 to 4.0°C and the cooling efficiency was 8.2 to 32.9% on average. Among the fogging intervals tested, cooling efficiency was highest for a fogging cycle of 90 seconds, and it tended to increase as the fogging time increased and the stopping time decreased. As the spray rate of fog in the two-fluid fogging system increased, the cooling efficiency tended to improve. However, as the inside air approaches saturation, a higher spray rate no longer leads to further evaporation; it can therefore be inferred that increasing the spray rate before the inside air reaches saturation could raise the cooling efficiency. As cooling efficiency increased, the saturation deficit of the inside air decreased and the difference in absolute humidity between inside and outside air increased. The more fog evaporated, the larger this absolute-humidity difference became; as a result, water vapor is discharged more easily by ventilation, which in turn increases the evaporation rate and ultimately the cooling efficiency. Regression analysis of the inside-air saturation deficit showed that the fogging time needed to change the saturation deficit by 10 g·kg⁻¹ was 120 seconds, with a stopping time of 60 seconds. However, to reduce the temperature fluctuation and increase the cooling efficiency, the allowed range of saturation deficit was set to 5 g·kg⁻¹, and a fogging-stopping time of 60-30 seconds was judged more appropriate. Control types of two-fluid fogging systems were classified as computer control or simple control, and control algorithms were derived for each. If the two-fluid fogging system is controlled by manipulating only the set points of temperature, humidity, and on-off time, we recommend setting the on-off time to 60-30 seconds in time control, the lower limit of air temperature to 30 to 32°C, and the upper limit of relative humidity to 85 to 90%.
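
As a rough illustration of the recommended simple (time-based) control, the sketch below encodes the 60-30 second on-off cycle with the temperature and humidity limits given above; the sensor and valve callbacks are hypothetical placeholders, not an interface described in the paper.

```python
# Minimal sketch of the simple on/off rule the abstract recommends:
# fog 60 s / stop 30 s, enabled only while the air is hot enough and not yet
# near saturation. Hardware callbacks are assumed, not from the paper.
import time

FOG_ON_S, FOG_OFF_S = 60, 30      # on-off cycle recommended in the abstract
TEMP_LOWER_C = 30.0               # lower air-temperature limit (30-32 °C range)
RH_UPPER_PCT = 90.0               # upper relative-humidity limit (85-90% range)

def fogging_enabled(temp_c: float, rh_pct: float) -> bool:
    """Fog only when cooling is needed and the air can still absorb vapor."""
    return temp_c >= TEMP_LOWER_C and rh_pct <= RH_UPPER_PCT

def control_loop(read_temp, read_rh, set_valve):
    """read_temp/read_rh/set_valve are hypothetical callbacks to the greenhouse hardware."""
    while True:
        if fogging_enabled(read_temp(), read_rh()):
            set_valve(True)               # spray fog
            time.sleep(FOG_ON_S)
            set_valve(False)              # let the fog evaporate
            time.sleep(FOG_OFF_S)
        else:
            set_valve(False)
            time.sleep(FOG_OFF_S)         # re-check conditions periodically
```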

Effect of Plasma-activated Water Process on the Growth and Functional Substance Content of Lettuce during the Cultivation Period in a Deep Flow Technique System (담액수경재배 시스템에서 플라즈마수 처리가 상추의 생육 및 페놀류 함량에 미치는 영향)

  • Noh, Seung Won; Park, Jong Seok; Kim, Sung Jin; Kim, Dae-Woong; Kang, Woo Seok
    • Journal of Bio-Environment Control / v.29 no.4 / pp.464-472 / 2020
  • We propose a hydroponic cultivation system combined with a plasma generator and investigate changes in the growth and functional substance content of lettuce during the cultivation period. Lettuce seedlings of uniform size were planted in a semi-DFT system three weeks after seeding, and plasma-activated water treatment was operated intermittently for 1 hour in an 8-hour cycle for 4 weeks. Lettuce was grown with or without plasma-activated water added to the nutrient solution in the hydroponic culture system. Among the reactive oxygen species generated during plasma-activated water treatment, O3 caused brown spots and necrosis on plants closer to the plasma-generating device, but there was no significant difference in growth parameters. While the rutin and total phenolic contents of lettuce shoots grown in the nutrient solution alone were higher than those grown with plasma-activated water, the epicatechin content with plasma-activated water was significantly greater than with the nutrient solution alone. In the roots, however, all of the secondary metabolites measured in this work (rutin, epicatechin, quercetin, and total phenolics) were significantly higher with plasma-activated water than in the control. These results indicate that lettuce growth decreased owing to reactive oxygen species such as ozone in the plasma-activated water, while secondary metabolites in the root zone increased significantly. A modified plasma-activated water system could therefore be applied to the cultivation of root vegetables to increase secondary metabolites in the roots.

The Prediction of DEA based Efficiency Rating for Venture Business Using Multi-class SVM (다분류 SVM을 이용한 DEA기반 벤처기업 효율성등급 예측모형)

  • Park, Ji-Young; Hong, Tae-Ho
    • Asia Pacific Journal of Information Systems / v.19 no.2 / pp.139-155 / 2009
  • For the last few decades, many studies have tried to explore venture companies' success factors and unique features in order to identify the sources of their competitive advantage over rivals. Venture companies, which generally make the best use of information technology, have tended to give high returns to investors, and many are therefore keen on attracting investors' attention. Investors make their investment decisions by carefully examining the evaluation criteria of the alternatives. To them, credit-rating information provided by international rating agencies such as Standard & Poor's, Moody's, and Fitch is a crucial source on such pivotal concerns as a company's stability, growth, and risk status. But this type of information is generated only for companies issuing corporate bonds, not for venture companies. This study therefore proposes a method for evaluating venture businesses, presenting recent empirical results based on financial data of Korean venture companies listed on KOSDAQ in the Korea Exchange. In addition, this paper used multi-class SVM to predict the DEA-based efficiency rating for venture businesses derived from the proposed method. Our approach sheds light on ways to locate efficient companies that generate high levels of profit. Above all, in determining effective ways to evaluate a venture firm's efficiency, it is important to understand the major factors contributing to that efficiency. This paper is therefore built on the following two ideas for classifying which companies are the more efficient venture companies: i) constructing a DEA-based multi-class rating for the sample companies, and ii) developing a multi-class SVM-based efficiency prediction model for classifying all companies. First, Data Envelopment Analysis (DEA) is a non-parametric multiple input-output efficiency technique that measures the relative efficiency of decision-making units (DMUs) using a linear-programming-based model. It is non-parametric because it requires no assumption about the shape or parameters of the underlying production function. DEA has already been widely applied to evaluating the relative efficiency of DMUs. Recently, a number of DEA-based studies have evaluated the efficiency of various types of companies, such as internet companies and venture companies, and DEA has also been applied to corporate credit ratings. In this study we utilized DEA to sort venture companies into efficiency-based ratings. The Support Vector Machine (SVM), on the other hand, is a popular technique for solving data classification problems. In this paper, we employed SVM to classify the efficiency ratings of IT venture companies according to the results of DEA. The SVM method was first developed by Vapnik (1995). As one of many machine learning techniques, SVM is based on statistical learning theory and has shown good performance, especially in generalization capacity, in classification tasks, resulting in numerous applications in many areas of business. SVM is basically an algorithm that finds the maximum-margin hyperplane, i.e., the hyperplane giving the maximum separation between classes; the support vectors are the training points closest to this hyperplane. When the classes cannot be separated directly, a kernel function can be used.
In the case of nonlinear class boundaries, the original input space is mapped into a high-dimensional dot-product feature space. Many studies have applied SVM to bankruptcy prediction, financial time-series forecasting, and credit-rating estimation. In this study we employed SVM to develop a data-mining-based efficiency prediction model, using the Gaussian radial basis function as the kernel. For multi-class SVM, we adopted the one-against-one decomposition of binary classifiers and two all-together methods, proposed by Weston and Watkins (1999) and Crammer and Singer (2000), respectively. We used corporate information on 154 companies listed on the KOSDAQ market in the Korea Exchange, with financial information for 2005 obtained from KIS (Korea Information Service, Inc.). Using these data, we constructed a multi-class rating from the DEA efficiency scores and built a data-mining-based multi-class prediction model. Among the three multi-classification approaches, the Weston and Watkins method achieved the best hit ratio on the test data set. In multi-class problems such as efficiency ratings of venture businesses, it is very useful for investors to know the class even with a one-class error, since it is difficult to pin down the exact class in the actual market. We therefore also report accuracy within one-class errors, where the Weston and Watkins method showed 85.7% accuracy on our test samples. We conclude that the DEA-based multi-class approach for venture businesses generates more information than a binary classification, regardless of efficiency level. We believe this model can help investors in decision making, as it provides a reliable tool for evaluating venture companies in the financial domain. For future research, we see the need to improve the variable selection process, the selection of kernel parameters, generalization, and the sample size for multi-class problems.
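
For readers who want to reproduce the general shape of this pipeline, the sketch below pairs DEA-style ordered efficiency classes (here faked from a latent score, since the KOSDAQ data are not public) with a one-against-one RBF-kernel SVM, and reports both the exact hit ratio and accuracy within a one-class error. All data and hyperparameters are illustrative assumptions, not the paper's.

```python
# Illustrative sketch: ordered "efficiency classes" predicted by a one-against-one
# multi-class SVM with a Gaussian RBF kernel, scored exactly and within one class.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n, n_features = 154, 6                        # 154 firms as in the abstract; feature count assumed
X = rng.normal(size=(n, n_features))          # stand-in for financial ratios

# Stand-in for the DEA step: bin a latent "efficiency score" into four ordered classes.
latent = X @ rng.normal(size=n_features) + 0.3 * rng.normal(size=n)
y = np.digitize(latent, np.quantile(latent, [0.25, 0.5, 0.75]))

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0, stratify=y)

model = make_pipeline(StandardScaler(),
                      SVC(kernel="rbf", C=10.0, gamma="scale",
                          decision_function_shape="ovo"))   # one-against-one decomposition
model.fit(X_tr, y_tr)
pred = model.predict(X_te)

exact = np.mean(pred == y_te)
within_one = np.mean(np.abs(pred - y_te) <= 1)   # "accuracy within a one-class error"
print(f"exact hit ratio: {exact:.3f}   within one class: {within_one:.3f}")
```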

Corporate Bond Rating Using Various Multiclass Support Vector Machines (다양한 다분류 SVM을 적용한 기업채권평가)

  • Ahn, Hyun-Chul; Kim, Kyoung-Jae
    • Asia Pacific Journal of Information Systems / v.19 no.2 / pp.157-178 / 2009
  • Corporate credit rating is a very important factor in the market for corporate debt. Information concerning corporate operations is often disseminated to market participants through changes in the credit ratings published by professional rating agencies such as Standard & Poor's (S&P) and Moody's Investors Service. Since these agencies generally charge a large fee for the service, and the periodically provided ratings sometimes do not reflect the default risk of the company at the time, it may be advantageous for bond-market participants to be able to classify credit ratings before the agencies actually publish them. As a result, it is very important for companies (especially financial companies) to develop a proper credit-rating model. From a technical perspective, credit rating constitutes a typical multiclass classification problem, because rating agencies generally have ten or more rating categories; for example, S&P's ratings range from AAA for the highest-quality bonds to D for the lowest-quality bonds. The professional rating agencies emphasize the importance of analysts' subjective judgments in determining credit ratings. In practice, however, a mathematical model that uses companies' financial variables plays an important role in determining credit ratings, since it is convenient to apply and cost-efficient. These financial variables include ratios that represent a company's leverage, liquidity, and profitability. Several statistical and artificial intelligence (AI) techniques have been applied as tools for predicting credit ratings. Among them, artificial neural networks are the most prevalent in finance because of their broad applicability to many business problems and their preeminent ability to adapt. However, artificial neural networks also have many defects, including the difficulty of determining the values of the control parameters and the number of processing elements per layer, as well as the risk of over-fitting. More recently, because of their robustness and high accuracy, support vector machines (SVMs) have become popular for problems that require accurate prediction. An SVM's solution may be globally optimal because SVMs seek to minimize structural risk, whereas artificial neural network models may tend to find locally optimal solutions because they seek to minimize empirical risk. In addition, no parameters need to be tuned in SVMs, barring the upper bound for non-separable cases in linear SVMs. Since SVMs were originally devised for binary classification, however, they are not intrinsically geared to multiclass classification, as in credit rating. Thus, researchers have tried to extend the original SVM to multiclass classification, and a variety of techniques for extending standard SVMs to multiclass SVMs (MSVMs) have been proposed in the literature. However, only a few types of MSVM have been tested in prior studies that apply MSVMs to credit rating. In this study, we examined six different MSVM techniques: (1) One-Against-One, (2) One-Against-All, (3) DAGSVM, (4) ECOC, (5) the method of Weston and Watkins, and (6) the method of Crammer and Singer. In addition, we examined the prediction accuracy of some modified versions of conventional MSVM techniques. To find the most appropriate MSVM technique for corporate bond rating, we applied all of these techniques to a real-world case of credit rating in Korea.
The primary application is corporate bond rating, the most frequently studied area of credit rating for specific debt issues or other financial obligations. The research data were collected from National Information and Credit Evaluation, Inc., a major bond-rating company in Korea. The data set comprises the bond ratings for the year 2002 and various financial variables for 1,295 companies from the manufacturing industry in Korea. We compared the results of these techniques with one another and with those of traditional methods for credit rating, such as multiple discriminant analysis (MDA), multinomial logistic regression (MLOGIT), and artificial neural networks (ANNs). As a result, we found that DAGSVM with an ordered list was the best approach for predicting bond ratings. In addition, we found that a modified version of the ECOC approach can yield higher prediction accuracy for cases showing clear patterns.
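
Since DAGSVM with an ordered class list is reported as the best approach, the sketch below illustrates that prediction scheme on synthetic data: pairwise binary SVMs are trained one-against-one, and each test point walks a decision DAG that eliminates one end of the ordered rating list per comparison. The data, features, and hyperparameters are assumptions for illustration only, not the paper's.

```python
# Illustrative DAGSVM-style prediction over an ordered class list, built from
# pairwise (one-against-one) binary SVMs on synthetic "rating" data.
import numpy as np
from itertools import combinations
from sklearn.svm import SVC

rng = np.random.default_rng(1)
classes = [0, 1, 2, 3]                       # ordered rating buckets (e.g. high to low grade)
X = rng.normal(size=(400, 5))                # stand-in for financial variables
y = np.digitize(X @ rng.normal(size=5), [-1.0, 0.0, 1.0])   # stand-in ratings 0..3

# Train one binary SVM per class pair.
pair_clf = {}
for a, b in combinations(classes, 2):
    mask = np.isin(y, [a, b])
    pair_clf[(a, b)] = SVC(kernel="rbf", gamma="scale").fit(X[mask], y[mask])

def dag_predict(x):
    """Walk the DAG: each pairwise test removes one end of the ordered list."""
    remaining = list(classes)
    while len(remaining) > 1:
        a, b = remaining[0], remaining[-1]
        pred = pair_clf[(a, b)].predict(x.reshape(1, -1))[0]
        # The losing class of the (a, b) comparison is eliminated.
        remaining = remaining[1:] if pred == b else remaining[:-1]
    return remaining[0]

preds = np.array([dag_predict(x) for x in X])
print("training-set hit ratio:", np.mean(preds == y))
```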

Catastrophic Art and Its Instrumentalized Selection System : From work by Hunter Jonakin and Dan Perjovschi (재앙적 예술과 그 도구화된 선별체계: 헌터 조너킨과 댄 퍼잡스키의 작품으로부터)

  • Shim, Sang-Yong
    • The Journal of Art Theory & Practice / no.13 / pp.73-95 / 2012
  • In terms of both elements and process, art today is already fully systemized, and tends to become even more so. All phases of creation and exhibition, appreciation and education, promotion and marketing are planned, adjusted, and decided within the order of a globalized, networked system, and each phase is executed through the means of management and control that correspond to that system. From the stage of their education, artists are guided in determining their styles, motivated neither by the desire to become star artists nor by the urge to run counter to mainstream tendency and fashion. In planning an exhibition, the level of an artist's name recognition is considered more significant than the quality of the work. It is impossible to avoid such systems and institutions today; no one can escape or be freed from their influence. This discussion addresses a serious distortion in the selection system that forms part of what is connotatively called the "art museum system," especially as it evaluates artistic achievement and aesthetic quality. Called the "studio system" or "art star system," it distinguishes a successful minority from a failed absolute majority, justifies the results, and decides on discriminatory compensation. The discussion begins from work by Hunter Jonakin and Dan Perjovschi. The key point is not their art worlds as such but the shared truth both refer to: the collusive "art market" and "art star system." Through works based on their own experiences, the two artists point to the systems that restrict and confine them. Jonakin's Jeff Koons Must Die! is a video game conveying a critical comment on the authoritative operation of the museum system and the star system. In this work, participants, whether viewer or artist, are destined to lose: the game is unwinnable. Players take the role of a person locked in a museum where a retrospective of the artist Jeff Koons is being held. The player can either look around and quietly observe the works, which causes a game-over, or blow the classical paintings to pieces, which causes Koons to come out and reprimand the player, also resulting in a game-over. Like Jonakin's game, some of Dan Perjovschi's drawings focus on the status of the artist shrunken by the system. Most artists are ruined in the process of competing to survive within the museum system. As John Berger aptly pointed out, among today's art systems, public collections (art museums) and private collections have become "something unbearable." The system justifies the selection of art stars and its frame of reference while disregarding the numerous victims produced in the process. What should be underlined above all is that the present selection system seriously shrinks art's creative function and its function of generating meaning. In this situation, art may sink to the level of entertainment, accessible to more people and compromising with popularity. This discussion is based on the assumption, and the awareness, that this situation may bring catastrophic results not only for the explicit victims of the system but also for the winners, or those defined as winners. The system of art is probably possible only through desire, or the distortion that stems from such desire, and it can flourish only under an economy of avarice: a quantitatively expanding economy, abundance of style, the resort economies of Venice and Miami, and luxurious shopping malls with up-to-date facilities. The catastrophe here is not a sudden emergence but ongoing and dynamic, leading the system itself to a devastating end.


Study on true nature of the Fung(風) and that of application to the medicine (풍(風)의 본질(本質)과 의학(醫學)에서의 운용(運用)에 관(關)한 고찰(考察))

  • Back, Sang Ryong; Park, Chan Kug
    • Journal of Korean Medical Classics / v.7 / pp.198-231 / 1994
  • Having examined the relation between the origin of Fung (風) and Gi (氣) and the meaning of Fung in medicine, I reached the following conclusions. First, Fung (風) means a flux of Gi (氣), and Gi shows its process in the form of Fung; in other words, Fung means the motion of Gi, the flow of power. Accordingly, the process of any power can be called Fung. Second, Samul (事物) ceaselessly interchange with the external world to sustain their own existence and life, and they mount an adequate confrontation against outside pressure. This motive power of life activity (生命活動) is Gi, and it shows its process through Fung. Third, Samul incessantly release the power they possess to the outside. The power released outward forms a territory of established power in their environment and maintains their substance (實體) in space-time (時空); because the field (場) of this power flows incessantly, it too can be called Fung. Fourth, as an independent and self-supporting living body (生命體), man sustains life by creating his own vigor (生氣). The generation of this vigor and its adjustment process (調節作用) are governed by Gan (肝); that is, Gan, as Fung-Zang (風臟), regulates and manages the process of Fung, the action of vigor. Fifth, because the Gi-Gi adjustment process (氣機調節作用) of Gan is the same as the process of Fung, Fung as a cause of disease is attributed to disharmony in the body's Gi-Gi. The resulting pathological change is therefore attributed to abnormal function arising from incongruity of Gi-Gi (氣機) or from disorder in the circulation of Gi-Hyul (氣血). Because incongruity of the body's Gi-Gi gives rise to abnormality of Zung-Gi (正氣), the body cannot properly cope with the invasion of Oi-Sa (外邪). Furthermore, because of its dynamic nature, Fung serves as the medium for the invasion of other Sa-Gi (邪氣); for this reason, Fung is called the head of all diseases. And because the incongruity of Gi-Gi takes different forms according to Zang-Bu (臟腑), Kyung-Lak (經絡), and region, the symptoms of a disease likewise appear differently. Sixth, Fung-byung (風病) is roughly divided into Zung-Fung (中風) and Fung-byung in the narrow sense (狹義의 風病), attributed mainly to Jung-gi (正氣) and to the invasion of Fung-sa (風邪), respectively; but both disturb the circulation of Gi-hyul (氣血) and the harmony of Gi-Gi in the human body. In treatment, therefore, Zung-Fung requires rectifying Gi-Gi and the circulation of Gi-hyul on the basis of supplementing Jung-gi (正氣), while Fung-byung requires harmonizing Gi-Gi through Gu-fung (驅風): Go-gi (調氣), Sun-Gi (順氣), and Hang-Gi (行氣). All living things, including man, maintain life on the basis of proper harmony between the spirit (精神) and the body (肉體); as soon as that harmony collapses, life disappears. And Fung, which denotes the outward process between Gi and Gi, makes their life activity cooperative and unified. Accordingly, the understanding of Fung must begin from the holistic view that not only all Samul (事物) but also the spirit and the body are one.
