• Title/Summary/Keyword: quantify

Search results: 2,944

A study on quantification of α-quartz, cristobalite, kaolinite mixture in respirable dust using by FTIR (FTIR를 이용한 호흡성 분진중 α-quartz, cristobalite, kaolinite 혼합물 정량 분석 연구)

  • Eun Cheol Choi;Seung Ho Lee
    • Analytical Science and Technology
    • /
    • v.36 no.6
    • /
    • pp.315-323
    • /
    • 2023
  • This study quantifies α-quartz, cristobalite, and kaolinite by FTIR in respirable dust generated in mining workplaces. Various minerals in mines can interfere with the peaks used when quantifying respirable crystalline silica by FTIR. Therefore, for accurate quantification, it is necessary to remove interfering substances or correct the peaks that suffer interference. To identify the peaks produced by α-quartz, cristobalite, and kaolinite, each standard material was diluted with KBr and scanned from 400 cm-1 to 4000 cm-1 by FTIR. Based on these scans, the peaks at 797.66 cm-1 and 695.25 cm-1 were chosen for α-quartz, 621.58 cm-1 for cristobalite, and 3696.47 cm-1 for kaolinite. When these materials are mixed, interference occurs at the quantification peaks; this is corrected with a calculation formula. The analysis of the α-quartz and cristobalite mixture gave average biases (%) of 2.64 (corrected) for α-quartz (797.66 cm-1), 5.61 (uncorrected) for α-quartz (695.25 cm-1), and 1.51 (uncorrected) for cristobalite (621.58 cm-1). The analysis of the α-quartz and kaolinite mixture gave average biases (%) of 1.79 (corrected) for α-quartz (797.66 cm-1), 3.92 (corrected) for α-quartz (695.25 cm-1), and 2.58 (uncorrected) for kaolinite (3696.47 cm-1). The analysis of the cristobalite and kaolinite mixture gave average biases (%) of 2.15 (corrected) for cristobalite (621.58 cm-1) and 4.32 (uncorrected) for kaolinite (3696.47 cm-1). The analysis of the α-quartz, cristobalite, and kaolinite mixture gave average biases (%) of 1.93 (corrected) for α-quartz (797.66 cm-1), 6.47 (corrected) for α-quartz (695.25 cm-1), 1.77 (corrected) for cristobalite (621.58 cm-1), and 2.61 (uncorrected) for kaolinite (3696.47 cm-1). The experimental results showed that the deviation caused by peak interference between two or three substances could be corrected to an average deviation of less than 6%. This study shows that quantification with correction is feasible when several interfering substances that are difficult to remove are present together.
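The interference-correction formula itself is not reproduced in the abstract; the sketch below is a minimal illustration of one generic way to handle such peak interference, assuming a linear (Beer-Lambert-like) mixing model in which each measured peak absorbance is a weighted sum of contributions from the three minerals. The calibration matrix K and the example absorbances are hypothetical placeholders, not values from the paper.

```python
import numpy as np

# Hypothetical calibration slopes (absorbance per mg) at each analytical peak.
# Rows (before transpose): minerals; columns: peaks. Off-diagonal entries model
# interference. These numbers are illustrative placeholders, not paper values.
K = np.array([
    # quartz@797  quartz@695  cristobalite@621  kaolinite@3696
    [0.050,      0.038,      0.006,            0.000],   # alpha-quartz
    [0.004,      0.003,      0.045,            0.000],   # cristobalite
    [0.007,      0.005,      0.002,            0.060],   # kaolinite
]).T  # after transpose: rows = peaks, columns = minerals

def quantify(absorbances):
    """Estimate mineral masses (mg) from the four peak absorbances by
    solving the linear interference system in a least-squares sense."""
    a = np.asarray(absorbances, dtype=float)          # shape (4,)
    masses, *_ = np.linalg.lstsq(K, a, rcond=None)    # shape (3,)
    return dict(zip(["alpha_quartz", "cristobalite", "kaolinite"], masses))

# Example: peak heights read at 797.66, 695.25, 621.58 and 3696.47 cm-1
print(quantify([0.031, 0.024, 0.047, 0.012]))
```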

Equilibrium Fractionation of Clumped Isotopes in H2O Molecule: Insights from Quantum Chemical Calculations (양자화학 계산을 이용한 H2O 분자의 Clumped 동위원소 분배특성 분석)

  • Sehyeong Roh;Sung Keun Lee
    • Korean Journal of Mineralogy and Petrology
    • /
    • v.36 no.4
    • /
    • pp.355-363
    • /
    • 2023
  • In this study, we explore the nature of clumped isotopes of the H2O molecule using quantum chemical calculations. In particular, we estimated the relative clumping strength among diverse isotopologues consisting of oxygen (16O, 17O, and 18O) and hydrogen (hydrogen, deuterium, and tritium) isotopes, and quantified the effect of temperature on the extent of isotope clumping. The optimized equilibrium bond lengths and bond angles of the molecules are 0.9631-0.9633 Å and 104.59-104.62°, respectively, and show negligible variation among the isotopologues. The calculated mode frequencies of the H2O molecules decrease as the isotope mass number increases, and they change more markedly with hydrogen isotope substitution than with oxygen isotope substitution. The equilibrium constants of isotope substitution reactions involving these isotopologues reveal a greater effect of hydrogen mass number than of oxygen mass number. The calculated equilibrium constants of the clumping reaction for four heavy isotopologues show a strong correlation; in particular, the relative clumping strengths of three isotopologues were 1.86 times (HT18O), 1.16 times (HT17O), and 0.703 times (HD17O) that of HD18O, respectively. The relative clumping strength decreases with increasing temperature and therefore has potential as a novel paleo-temperature proxy. The current calculations constitute the first theoretical study to establish the nature of clumped isotope fractionation in H2O including 17O and tritium, and the results help to account for diverse geochemical processes in Earth's surface environments. Future work includes calculating isotope fractionation among various phases of H2O isotopologues with full consideration of the effect of anharmonicity in molecular vibration.
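As a rough illustration of how an isotope-exchange equilibrium constant varies with temperature through harmonic vibrational frequencies, the sketch below evaluates only the harmonic vibrational part of the partition functions for a clumping-type reaction HD_16O + H2_18O ⇌ HD_18O + H2_16O (naming follows the code). The frequencies are approximate literature-style placeholders rather than the paper's computed values, and a complete treatment, as in the study, also needs the rotational and translational contributions, e.g. via the Teller-Redlich product rule.

```python
import numpy as np

H = 6.62607015e-34   # Planck constant, J s
KB = 1.380649e-23    # Boltzmann constant, J/K
C = 2.99792458e10    # speed of light, cm/s

def q_vib(freqs_cm, T):
    """Harmonic vibrational partition function (including zero-point energy)
    for a set of frequencies given in cm^-1."""
    u = H * C * np.asarray(freqs_cm) / (KB * T)
    return np.prod(np.exp(-u / 2.0) / (1.0 - np.exp(-u)))

# Illustrative placeholder frequencies (cm^-1) for the three normal modes of
# four water isotopologues; not the values computed in the paper.
freqs = {
    "H2_16O": [1595.0, 3657.0, 3756.0],
    "H2_18O": [1588.0, 3650.0, 3740.0],
    "HD_16O": [1402.0, 2724.0, 3707.0],
    "HD_18O": [1396.0, 2718.0, 3692.0],
}

def k_clump(T):
    """Vibrational-only equilibrium constant of
    HD_16O + H2_18O <=> HD_18O + H2_16O at temperature T (K)."""
    return (q_vib(freqs["HD_18O"], T) * q_vib(freqs["H2_16O"], T)) / \
           (q_vib(freqs["HD_16O"], T) * q_vib(freqs["H2_18O"], T))

for T in (273.15, 298.15, 373.15):
    print(f"T = {T:.2f} K, K_vib = {k_clump(T):.6f}")
```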

Optimization and Applicability Verification of Simultaneous Chlorogenic acid and Caffeine Analysis in Health Functional Foods using HPLC-UVD (HPLC-UVD를 이용한 건강기능식품에서 클로로겐산과 카페인 동시분석법 최적화 및 적용성 검증)

  • Hee-Sun Jeong;Se-Yun Lee;Kyu-Heon Kim;Mi-Young Lee;Jung-Ho Choi;Jeong-Sun Ahn;Jae-Myoung Oh;Kwang-Il Kwon;Hye-Young Lee
    • Journal of Food Hygiene and Safety
    • /
    • v.39 no.2
    • /
    • pp.61-71
    • /
    • 2024
  • In this study, we analyzed the chlorogenic acid indicator components in preparation for the additional listing of green coffee bean extract in the Health Functional Food Code and optimized a method for the simultaneous analysis of caffeine. We extracted chlorogenic acid and caffeine using 30% methanol, phosphoric acid solution, and acetonitrile containing phosphoric acid, and analyzed them at 330 and 280 nm, respectively, by liquid chromatography. Method validation yielded correlation coefficients (R2) of at least 0.999 within the linear quantitative range. The detection limits of chlorogenic acid and caffeine were 0.5 and 0.2 ㎍/mL, and the quantification limits were 1.4 and 0.4 ㎍/mL, respectively. We confirmed that the precision and accuracy results were suitable according to the AOAC validation guidelines. Finally, we developed a simultaneous analysis approach for chlorogenic acid and caffeine. In addition, we confirmed that the approach could simultaneously quantify chlorogenic acid and caffeine by examining its applicability to each formulation through prototypes and distributed products. In conclusion, the results of this study demonstrate that the standardized analysis is expected to increase the reliability of quality control for chlorogenic acid-containing health functional foods.
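The limits quoted above come from the authors' AOAC-based validation; as a generic, hedged illustration of how detection and quantification limits are commonly estimated from a calibration curve (LOD ≈ 3.3·σ/S and LOQ ≈ 10·σ/S, with σ the residual standard deviation and S the slope), the sketch below uses made-up calibration data.

```python
import numpy as np

# Hypothetical calibration data: concentration (ug/mL) vs. peak area.
conc = np.array([1.0, 2.5, 5.0, 10.0, 25.0, 50.0])
area = np.array([12.1, 30.4, 60.9, 121.7, 305.2, 609.8])

# Least-squares line: area = slope * conc + intercept
slope, intercept = np.polyfit(conc, area, 1)
residuals = area - (slope * conc + intercept)
sigma = residuals.std(ddof=2)          # residual standard deviation

r2 = np.corrcoef(conc, area)[0, 1] ** 2
lod = 3.3 * sigma / slope              # limit of detection
loq = 10.0 * sigma / slope             # limit of quantification

print(f"R^2 = {r2:.4f}, LOD = {lod:.2f} ug/mL, LOQ = {loq:.2f} ug/mL")
```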

Assessment of Methane Production Rate Based on Factors of Contaminated Sediments (오염퇴적물의 주요 영향인자에 따른 메탄발생 생성률 평가)

  • Dong Hyun Kim;Hyung Jun Park;Young Jun Bang;Seung Oh Lee
    • Journal of Korean Society of Disaster and Security
    • /
    • v.16 no.4
    • /
    • pp.45-59
    • /
    • 2023
  • The global focus on mitigating climate change has traditionally centered on carbon dioxide, but attention has recently shifted towards methane as a crucial factor in climate change adaptation. Natural settings, particularly aquatic environments such as wetlands, reservoirs, and lakes, play a significant role as sources of greenhouse gases. The accumulation of organic contaminants on lake and reservoir beds can lead to microbial decomposition of the sedimentary material, generating greenhouse gases, notably methane, under anaerobic conditions. The escalation of methane emissions in freshwater is attributed to the growing impact of non-point sources, alterations of water bodies for diverse purposes, and the introduction of structures such as river crossings that disrupt natural flow patterns. Furthermore, the effects of climate change, including rising water temperatures and the ensuing hydrological and water quality challenges, contribute to an acceleration of methane emissions into the atmosphere. Methane emissions occur through various pathways, with ebullition fluxes, in which methane bubbles form in and are released from bed sediments, recognized as a major mechanism. This study employs Biochemical Methane Potential (BMP) tests to analyze and quantify the factors influencing methane gas emissions. Methane production rates are measured under diverse conditions, including temperature, substrate type (glucose), shear velocity, and sediment properties. Additionally, numerical simulations are conducted to analyze the relationship between fluid shear stress on the sand bed and methane ebullition rates. The findings reveal that biochemical factors significantly influence methane production, whereas shear velocity primarily affects methane ebullition. Sediment properties are identified as influential factors affecting both methane production and ebullition. Overall, this study establishes empirical relationships between bubble dynamics, the Weber number, and methane emissions, presenting a formula to estimate methane ebullition flux. Future research incorporating specific conditions such as water depth, effective shear stress below the sediment's tensile strength, and organic matter is expected to contribute to the development of biogeochemical and hydro-environmental impact assessment methods suitable for in-situ applications.
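The paper's empirical formula for ebullition flux is not given in the abstract; the sketch below only shows how the Weber number We = ρ·u²·d/σ of a rising methane bubble would be computed, with illustrative input values, and leaves the fitted flux relation as an explicit placeholder.

```python
# Weber number of a methane bubble rising through water:
# We = rho * u**2 * d / sigma  (inertial forces vs. surface tension).
RHO_WATER = 998.0       # kg/m^3, water density at ~20 degC
SIGMA = 0.0728          # N/m, air/water surface tension (approximate)

def weber_number(velocity_m_s: float, diameter_m: float) -> float:
    return RHO_WATER * velocity_m_s**2 * diameter_m / SIGMA

# Example: a 5 mm bubble rising at 0.25 m/s (illustrative values only)
we = weber_number(0.25, 0.005)
print(f"We = {we:.2f}")

def ebullition_flux(we: float) -> float:
    """Placeholder for the paper's empirical flux relation, which is not
    reproduced here; a real application would substitute the fitted formula."""
    raise NotImplementedError("empirical relation not given in the abstract")
```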

Gd(DTPA)²⁻-enhanced and Quantitative MR Imaging in Articular Cartilage (관절연골의 Gd(DTPA)²⁻-조영증강 및 정량적 자기공명영상에 대한 실험적 연구)

  • Eun Choong-Ki;Lee Yeong-Joon;Park Auh-Whan;Park Yeong-Mi;Bae Jae-Ik;Ryu Ji Hwa;Baik Dae-Il;Jung Soo-Jin;Lee Seon-Joo
    • Investigative Magnetic Resonance Imaging
    • /
    • v.8 no.2
    • /
    • pp.100-108
    • /
    • 2004
  • Purpose: Early degeneration of articular cartilage is accompanied by a loss of glycosaminoglycan (GAG) and a consequent change in its integrity. The purpose of this study was to biochemically quantify the loss of GAG and to evaluate Gd(DTPA)²⁻-enhanced imaging and T1, T2, and rho relaxation maps for detecting early cartilage degeneration. Materials and Methods: A cartilage-bone block of 8 mm × 10 mm was acquired from the patella of each of three pigs. Quantitative analysis of cartilage GAG was performed by spectrophotometry using dimethylmethylene blue. Each cartilage block was cultured in one of three media: two culture media (0.2 mg/mL trypsin solution and 1 mM Gd(DTPA)²⁻ mixed with trypsin solution) and a control medium (phosphate-buffered saline, PBS). The cartilage blocks were cultured for 5 hours, during which MR images of the blocks were obtained at one-hour intervals (0, 1, 2, 3, 4, and 5 h). Additional culture was then done for 24 h and 48 h. Both a T1-weighted image (TR/TE, 450/22 ms) and a mixed-echo sequence (TR/TE, 760/21-168 ms; 8 echoes) were obtained at all time points using a field of view of 50 mm, a slice thickness of 2 mm, and a 256 × 512 matrix. The MRI data were analyzed with pixel-by-pixel comparisons. The cultured cartilage-bone blocks were examined microscopically using hematoxylin & eosin, toluidine blue, alcian blue, and trichrome stains. Results: On quantitative analysis, the GAG concentration in the culture solutions was proportional to the culture duration. The T1 signal of the cartilage-bone block cultured in the Gd(DTPA)²⁻-mixed solution was significantly higher (42% on average, p<0.05) than that of the block cultured in trypsin solution alone. The T1, T2, and rho relaxation times of the cultured tissue were not significantly correlated with culture duration (p>0.05). However, a focal increase in T1 relaxation time in the superficial and transitional layers of the cartilage was seen in the Gd(DTPA)²⁻-mixed culture. Toluidine blue and alcian blue stains revealed multiple defects through the whole thickness of the cartilage cultured in trypsin media. Conclusion: The quantitative analysis showed gradual loss of GAG proportional to the culture duration. Microimaging of cartilage with Gd(DTPA)²⁻ enhancement and relaxation maps was available at a pixel size of 97.9 × 195 μm. Loss of GAG over time was better demonstrated with Gd(DTPA)²⁻-enhanced images than with T1, T2, or rho relaxation maps. Therefore, Gd(DTPA)²⁻-enhanced T1-weighted imaging is superior for detecting early cartilage degeneration.
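Relaxation maps such as those used above are typically produced by fitting a mono-exponential decay S(TE) = S0·exp(-TE/T2) pixel by pixel across multi-echo data; the following is a minimal, hedged sketch of that fit for a single pixel using the reported echo-time range (21-168 ms, 8 echoes) and synthetic signal values, not the authors' actual processing pipeline.

```python
import numpy as np
from scipy.optimize import curve_fit

def mono_exp(te, s0, t2):
    """Mono-exponential T2 decay model."""
    return s0 * np.exp(-te / t2)

# Eight echo times (ms) spanning 21-168 ms, as in the mixed-echo sequence.
te = np.linspace(21.0, 168.0, 8)

# Synthetic signal for one pixel: S0 = 1200, T2 = 45 ms, plus noise.
rng = np.random.default_rng(0)
signal = mono_exp(te, 1200.0, 45.0) + rng.normal(0.0, 15.0, te.size)

(p_s0, p_t2), _ = curve_fit(mono_exp, te, signal, p0=[signal[0], 50.0])
print(f"fitted S0 = {p_s0:.1f}, T2 = {p_t2:.1f} ms")
```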


Econometric Analysis on Factors of Food Demand in the Household : Comparative Study between Korea and Japan (가계 식품수요 요인의 계량분석 - 한국과 일본의 비교 -)

  • Jho, Kwang-Hyun
    • Journal of the Korean Society of Food Culture
    • /
    • v.14 no.4
    • /
    • pp.371-383
    • /
    • 1999
  • This report analyzes food demand in both Korea and Japan by introducing the concept of cohort analysis into the conventional demand model. The research was done to clarify the factors that determine household food demand. The distinctive trait of the new demand model is that it considers and quantifies the effects on food demand not only of economic factors such as expenditure and price but also of non-economic factors such as the age and birth cohort of the householder. The results of the analysis can be summarized as follows: 1) The comparison of item-wise elasticities of food demand demonstrates that expenditure elasticity is higher in Korea than in Japan and that the expenditure elasticity is -0.1 for cereal and more than 1 for eating out in both countries. With respect to price elasticity, the absolute values for all items except alcohol and cooked food are higher in Korea than in Japan, and the price elasticities of beverages, dairy products, and fruit are markedly higher in Japan. In this way, both expenditure and price elasticities for a large number of items are higher in Korea than in Japan, which may be explained by the fact that the level of expenditure is higher in Japan than in Korea. 2) In both Korea and Japan, as the householder grows older, the expenditure on each item increases and the composition of expenditure changes in ways that may be regarded as an age effect. However, there are both similarities and differences in the details of these shifts between Korea and Japan. The two countries have one trait in common: young householder age groups spend more on dairy products and middle age groups spend more on cake than other age groups. In Korea, however, higher age groups tend to spend more on a large number of items, reflecting the fact that there are more two-generation families in the higher age groups. Japan differs from Korea in that expenditure in Japan is diversified depending upon the age group; for example, middle age groups spend more on cake, cereal, high-calorie food like meat, and eating out, while older age groups spend more on Japanese-style food such as fish/shellfish, vegetables/seaweed, and cooked food. 3) The birth cohort effect was also demonstrated. It was introduced under the supposition that the food circumstances under which the householder was born and brought up would determine current expenditure. The following was made clear: older generations in both countries placed more emphasis on staple food in their composition of food consumption; the share of livestock products, oils/fats, and externalized food was higher in the food composition of younger generations; differences in food composition among generations were extremely large in Korea while they were relatively small in Japan; and Westernization and externalization of the diet increased rapidly with generational change in Korea, while they increased only gradually in Japan over the same period. 4) The four major factors that affect the long-term change of household food demand are expenditure, price, the age of the householder, and the birth cohort of the householder. We investigated which factor had the largest impact. As a result, the price effect was found to be the smallest in both countries, and the relative importance of the factor-by-factor effects differed between the two countries: in Korea the expenditure effect was greater than the effects of age and birth cohort, while in Japan the effects of non-economic factors such as the age and birth cohort of the householder were greater than those of economic factors such as expenditure.
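The elasticities discussed above come from a cohort-augmented demand model whose exact specification is not reproduced in the abstract; as a generic, hedged sketch of how expenditure and price elasticities are read off a double-log demand regression (with an age-group dummy standing in for the age/cohort terms), the following uses simulated data rather than the Korean or Japanese household survey data.

```python
import numpy as np

# Simulated household data: log quantity demanded as a function of
# log total expenditure, log own price, and an age-group dummy.
rng = np.random.default_rng(1)
n = 500
log_exp = rng.normal(7.0, 0.5, n)          # log expenditure
log_price = rng.normal(1.0, 0.3, n)        # log price
old_head = rng.integers(0, 2, n)           # 1 if householder is in the older group

# "True" elasticities used to simulate the data (illustrative only).
log_q = 0.8 * log_exp - 0.6 * log_price + 0.15 * old_head + rng.normal(0, 0.2, n)

# OLS via least squares: columns = [1, log_exp, log_price, old_head]
X = np.column_stack([np.ones(n), log_exp, log_price, old_head])
beta, *_ = np.linalg.lstsq(X, log_q, rcond=None)

print(f"expenditure elasticity ~ {beta[1]:.2f}")
print(f"price elasticity       ~ {beta[2]:.2f}")
print(f"age-group effect       ~ {beta[3]:.2f}")
```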


A STUDY ON THE IONOSPHERE AND THERMOSPHERE INTERACTION BASED ON NCAR-TIEGCM: DEPENDENCE OF THE INTERPLANETARY MAGNETIC FIELD (IMF) ON THE MOMENTUM FORCING IN THE HIGH-LATITUDE LOWER THERMOSPHERE (NCAR-TIEGCM을 이용한 이온권과 열권의 상호작용 연구: 행성간 자기장(IMF)에 따른 고위도 하부 열권의 운동량 강제에 대한 연구)

  • Kwak, Young-Sil;Richmond, Arthur D.;Ahn, Byung-Ho;Won, Young-In
    • Journal of Astronomy and Space Sciences
    • /
    • v.22 no.2
    • /
    • pp.147-174
    • /
    • 2005
  • To understand the physical processes that control high-latitude lower thermospheric dynamics, we quantify the forces that are mainly responsible for maintaining the high-latitude lower thermospheric wind system with the aid of the National Center for Atmospheric Research Thermosphere-Ionosphere Electrodynamics General Circulation Model (NCAR-TIEGCM). Momentum forcing is statistically analyzed in magnetic coordinates, and its behavior with respect to the magnitude and orientation of the interplanetary magnetic field (IMF) is further examined. By subtracting the values obtained with zero IMF from those with non-zero IMF, we obtained the difference winds and forces in the high-latitude lower thermosphere (<180 km). They show a simple structure over the polar cap and auroral regions for positive (By > 0.8|B̄z|) or negative (By < -0.8|B̄z|) IMF-B̄y conditions, with maximum values appearing around -80° magnetic latitude. Difference winds and difference forces for negative and positive B̄y have opposite signs and similar strengths. For positive (Bz > 0.3125|B̄y|) or negative (Bz < -0.3125|B̄y|) IMF-B̄z conditions, the difference winds and difference forces extend to subauroral latitudes. Difference winds and difference forces for negative B̄z have the opposite sign to those for the positive B̄z condition, and those for negative B̄z are stronger, indicating that negative B̄z has a stronger effect on the winds and momentum forces than does positive B̄z. At higher altitudes (>125 km) the primary forces that determine the variations of the neutral winds are the pressure gradient, Coriolis, and rotational Pedersen ion drag forces; however, at various locations and times significant contributions can be made by the horizontal advection force. At lower altitudes (108-125 km) the pressure gradient, Coriolis, and non-rotational Hall ion drag forces determine the variations of the neutral winds. Below 108 km the balance between the pressure gradient and Coriolis forces tends to generate geostrophic motion. The northward components of the IMF B̄y-dependent average momentum forces act more significantly on the neutral motion, except for the ion drag. At lower altitudes (108-125 km), for the negative IMF-B̄y condition the ion drag force tends to generate a warm clockwise circulation with downward vertical motion associated with adiabatic compressional heating in the polar cap region, while for the positive IMF-B̄y condition it tends to generate a cold anticlockwise circulation with upward vertical motion associated with adiabatic expansion cooling in the polar cap region. For negative IMF-B̄z the ion drag force tends to generate a cold anticlockwise circulation with upward vertical motion in the dawn sector, whereas for positive IMF-B̄z it tends to generate a warm clockwise circulation with downward vertical motion in the dawn sector.
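For the geostrophic balance noted at the lowest altitudes, a minimal sketch of the relation u_g = -(1/ρf)·∂p/∂y, v_g = (1/ρf)·∂p/∂x is given below with purely illustrative input values; it is not a TIEGCM diagnostic from the study.

```python
import numpy as np

OMEGA = 7.2921e-5           # Earth's rotation rate, rad/s

def geostrophic_wind(dp_dx, dp_dy, rho, lat_deg):
    """Geostrophic wind (u_g, v_g) in m/s from horizontal pressure gradients
    (Pa/m), air density (kg/m^3), and latitude (degrees)."""
    f = 2.0 * OMEGA * np.sin(np.radians(lat_deg))   # Coriolis parameter
    u_g = -dp_dy / (rho * f)
    v_g = dp_dx / (rho * f)
    return u_g, v_g

# Illustrative input values only (not model output), for a high southern latitude.
u_g, v_g = geostrophic_wind(dp_dx=2.0e-9, dp_dy=-1.5e-9, rho=2.0e-7, lat_deg=-80.0)
print(f"u_g = {u_g:.1f} m/s, v_g = {v_g:.1f} m/s")
```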

Correlation between High-Resolution CT and Pulmonary Function Tests in Patients with Emphysema (폐기종환자에서 고해상도 CT와 폐기능검사와의 상관관계)

  • Ahn, Joong-Hyun;Park, Jeong-Mee;Ko, Seung-Hyeon;Yoon, Jong-Goo;Kwon, Soon-Seug;Kim, Young-Kyoon;Kim, Kwan-Hyoung;Moon, Hwa-Sik;Park, Sung-Hak;Song, Jeong-Sup
    • Tuberculosis and Respiratory Diseases
    • /
    • v.43 no.3
    • /
    • pp.367-376
    • /
    • 1996
  • Background: The diagnosis of emphysema during life is based on a combination of clinical, functional, and radiographic findings, but this combination is relatively insensitive and nonspecific. The development of rapid, high-resolution third- and fourth-generation CT scanners has enabled us to resolve pulmonary parenchymal abnormalities with great precision. We compared chest HRCT findings with pulmonary function tests and arterial blood gas analysis in pulmonary emphysema patients to test the ability of HRCT to quantify the degree of pulmonary emphysema. Methods: From October 1994 to October 1995, the study group consisted of 20 subjects in whom HRCT of the thorax and pulmonary function studies had been obtained at St. Mary's Hospital. The analysis was based on scans at preselected anatomic levels and incorporated both lungs. On each HRCT slice the lung parenchyma was assessed for two aspects of emphysema: severity and extent. The five levels were graded and scored separately for the left and right lung, giving a total of 10 lung fields. A combination of severity and extent gave the degree of emphysema. We compared the HRCT quantitation of emphysema, pulmonary function tests, ABGA, CBC, and patient characteristics (age, sex, height, weight, smoking amount, etc.) in the 20 patients. Results: 1) There was a significant inverse correlation between HRCT scores for emphysema and the percentage predicted values of DLco (r = -0.68, p < 0.05), DLco/VA (r = -0.49, p < 0.05), FEV1 (r = -0.53, p < 0.05), and FVC (r = -0.47, p < 0.05). 2) There was a significant correlation between the HRCT scores and the percentage predicted values of TLC (r = 0.50, p < 0.05) and RV (r = 0.64, p < 0.05). 3) There was a significant inverse correlation between the HRCT scores and PaO2 (r = -0.48, p < 0.05) and a significant correlation with D(A-a)O2 (r = -0.48, p < 0.05), but no significant correlation between the HRCT scores and PaCO2. 4) There was no significant correlation between the HRCT scores and age, sex, height, weight, smoking amount, hemoglobin, hematocrit, or WBC count. Conclusion: High-resolution CT provides a useful method for the early detection and quantification of emphysema during life and correlates significantly with pulmonary function tests and arterial blood gas analysis.
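The reported relationships are Pearson correlations between HRCT emphysema scores and percent-predicted lung-function values; a minimal sketch of that calculation on made-up data follows (the actual patient data are not reproduced here).

```python
import numpy as np
from scipy.stats import pearsonr

# Hypothetical data for ten patients: HRCT emphysema score (0-100)
# and DLco as a percentage of the predicted value.
hrct_score = np.array([12, 25, 33, 41, 48, 57, 63, 72, 80, 91])
dlco_pct = np.array([88, 81, 74, 70, 66, 58, 55, 49, 41, 35])

r, p = pearsonr(hrct_score, dlco_pct)
print(f"r = {r:.2f}, p = {p:.4f}")   # expect a strong inverse correlation
```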


Expression of Decidual Natural Killer (NK) Cells in Women of Recurrent Abortion with Increased Peripheral NK Cells (말초혈액자연살해세포가 증가된 반복유산 환자의 탈락막자연살해세포의 발현)

  • Yeon, Myeong-Jin;Yang, Kwang-Moon;Park, Chan-Woo;Song, In-Ok;Kang, Inn-Soo;Hong, Sung-Ran;Cho, Dong-Hee;Cho, Yong-Kyoon
    • Clinical and Experimental Reproductive Medicine
    • /
    • v.35 no.2
    • /
    • pp.119-129
    • /
    • 2008
  • Objective: The purpose of this study was to quantify the decidual CD56+ and CD16+ NK cell subtype populations and to evaluate the correlation between decidual NK cell expression and peripheral CD56+ NK cell expression in women with a history of recurrent abortion and increased peripheral NK cells. Methods: Twenty-nine women with recurrent abortion and an elevated peripheral CD56+ NK cell percentage who had a chromosomally normal conceptus were included in this study. Thirty-two women with recurrent abortion who had a chromosomally abnormal conceptus were used as controls. The distribution of CD56+ and CD16+ NK cells in decidual tissues including implantation sites was examined by immunohistochemical staining, and the degree of staining was interpreted by score and percentage. Results: There was a significant difference in decidual CD56+ NK cell score (43.6 ± 24.5 vs. 23.9 ± 16.3, P=0.001) and CD56+ NK cell percentage (42.1 ± 11.7 vs. 33.9 ± 15.8, P=0.027) between the increased peripheral NK cell group and the control group. However, there was no statistically significant difference in decidual CD16+ NK cell score (18.7 ± 9.5 vs. 13.2 ± 39.4, P=0.108) or CD16+ NK cell percentage (24.7 ± 5.9 vs. 23.4 ± 11.7, P=0.599). There was no significant correlation between decidual NK cell score and peripheral NK cell percentage in the increased peripheral NK cell group (peripheral CD56+ NK cell percentage vs. decidual CD56+ NK cell score, r=-0.016, P=0.932; peripheral CD16+ NK cell percentage vs. decidual CD16+ NK cell score, r=0.008, P=0.968). Conclusion: This study shows that CD56+ decidual NK cells are increased in the decidua of women with a history of recurrent abortion and increased CD56+ peripheral NK cells. There was no significant correlation between the decidual and peripheral NK cell increments in the increased peripheral NK cell group. This study suggests the possibility that decidual NK cells may play an important role in the immune mechanism of recurrent abortion.
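A hedged sketch of the kind of two-group comparison and correlation reported above, using scores simulated from the quoted group means and standard deviations; the actual data and the exact statistical tests used by the authors are not reproduced here, and Welch's t-test below is only a stand-in.

```python
import numpy as np
from scipy.stats import ttest_ind, pearsonr

rng = np.random.default_rng(2)

# Simulated decidual CD56+ NK cell scores (illustrative, not the study data),
# drawn from the group means/SDs quoted in the abstract (n=29 and n=32).
increased_pnk = rng.normal(43.6, 24.5, 29)   # group with elevated peripheral NK
controls = rng.normal(23.9, 16.3, 32)

t, p = ttest_ind(increased_pnk, controls, equal_var=False)
print(f"group comparison: t = {t:.2f}, P = {p:.4f}")

# Correlation between peripheral NK percentage and decidual NK score (simulated).
peripheral_pct = rng.normal(18.0, 4.0, 29)
r, p_corr = pearsonr(peripheral_pct, increased_pnk)
print(f"correlation: r = {r:.3f}, P = {p_corr:.3f}")
```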

Memory Organization for a Fuzzy Controller.

  • Jee, K.D.S.;Poluzzi, R.;Russo, B.
    • Proceedings of the Korean Institute of Intelligent Systems Conference
    • /
    • 1993.06a
    • /
    • pp.1041-1043
    • /
    • 1993
  • Fuzzy-logic-based control theory has gained much interest in the industrial world, thanks to its ability to formalize and solve in a very natural way many problems that are very difficult to quantify at an analytical level. This paper shows a solution for treating membership functions inside hardware circuits. The proposed hardware structure optimizes the memory size by using a particular form of vectorial representation. The process of memorizing fuzzy sets, i.e. their membership functions, has always been one of the more problematic issues for hardware implementation, due to the quite large memory space that is needed. To simplify the implementation, it is common [1,2,8,9,10,11] to limit the membership functions either to those having a triangular or trapezoidal shape, or to predefined shapes. These kinds of functions cover a large spectrum of applications with limited memory usage, since they can be memorized by specifying very few parameters (height, base, critical points, etc.). This, however, results in a loss of computational power due to computation on the intermediate points. A solution to this problem is obtained by discretizing the universe of discourse U, i.e. by fixing a finite number of points and memorizing the value of the membership functions at those points [3,10,14,15]. Such a solution provides satisfying computational speed and very high precision of definition, and gives users the opportunity to choose membership functions of any shape. However, significant memory waste can also occur: it is possible that, for each of the given fuzzy sets, many elements of the universe of discourse have a membership value equal to zero. It has also been noticed that in almost all cases the common points among fuzzy sets, i.e. points with non-null membership values, are very few. More specifically, in many applications, for each element u of U there exist at most three fuzzy sets for which the membership value is not null [3,5,6,7,12,13]. Our proposal is based on these hypotheses. Moreover, we use a technique that, even though it does not restrict the shapes of membership functions, strongly reduces the computational time for the membership values and optimizes the function memorization. Figure 1 shows a term set whose characteristics are typical of fuzzy controllers and to which we refer in the following. This term set has a universe of discourse with 128 elements (to have a good resolution), 8 fuzzy sets that describe the term set, and 32 levels of discretization for the membership values. Clearly, the numbers of bits necessary for these specifications are 5 for 32 truth levels, 3 for 8 membership functions, and 7 for 128 levels of resolution. The memory depth is given by the dimension of the universe of discourse (128 in our case) and is represented by the memory rows. The length of a memory word is defined by Length = nfm × (dm(m) + dm(fm)), where nfm is the maximum number of non-null values for any element of the universe of discourse, dm(m) is the number of bits for a membership value m, and dm(fm) is the number of bits needed to represent the index of the corresponding membership function. In our case Length = 24, and the memory dimension is therefore 128 × 24 bits. If we had chosen to memorize all values of the membership functions, we would have needed to memorize on each memory row the membership value of every fuzzy set: the fuzzy-set word dimension would have been 8 × 5 bits, and the memory dimension 128 × 40 bits. Consistently with our hypothesis, in Fig. 1 each element of the universe of discourse has a non-null membership value on at most three fuzzy sets; focusing on elements 32, 64, and 96 of the universe of discourse, they are memorized in this compact form. The computation of the rule weights is done by comparing the bits that represent the index of the membership function with the word of the program memory. The output bus of the program memory (μCOD) is given as input to a comparator (combinatory net); if the index is equal to the bus value, then one of the non-null weights derives from the rule and is produced as output, otherwise the output is zero (Fig. 2). It is clear that the memory dimension of the antecedent is reduced in this way, since only non-null values are memorized; moreover, the time performance of the system is equivalent to that of a system using vectorial memorization of all weights. The dimensioning of the word is influenced by some parameters of the input variable. The most important parameter is the maximum number of membership functions (nfm) having a non-null value for each element of the universe of discourse. From our study in the field of fuzzy systems, we see that typically nfm ≤ 3 and there are at most 16 membership functions; at any rate, this value can be increased up to the physical dimensional limit of the antecedent memory. A less important role in the optimization of the word dimension is played by the number of membership functions defined for each linguistic term. The table below shows the required word dimension as a function of these parameters and compares our proposed method with the vectorial memorization method [10]. Summing up, the characteristics of our method are as follows: users are not restricted to membership functions with specific shapes; the number of fuzzy sets and the resolution of the vertical axis have very little influence on memory space; weight computations are done by a combinatorial network, and therefore the time performance of the system is equivalent to that of the vectorial method; and the number of non-null membership values for any element of the universe of discourse is limited, a constraint that is usually not very restrictive, since many controllers obtain good precision with only three non-null weights. The method briefly described here has been adopted by our group in the design of an optimized version of the coprocessor described in [10].
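The memory-sizing arithmetic in the abstract can be checked directly; the sketch below reproduces the word-length formula Length = nfm × (dm(m) + dm(fm)) and compares the compact scheme with full vectorial memorization for the quoted term set (128 elements, 8 fuzzy sets, 32 truth levels, nfm = 3).

```python
import math

def bits(n_values: int) -> int:
    """Number of bits needed to distinguish n_values values."""
    return math.ceil(math.log2(n_values))

# Term-set parameters quoted in the abstract.
n_elements = 128      # size of the universe of discourse (memory rows)
n_fuzzy_sets = 8      # membership functions in the term set
n_levels = 32         # discretization levels for membership values
nfm = 3               # max non-null membership values per element

dm_m = bits(n_levels)        # bits per membership value              -> 5
dm_fm = bits(n_fuzzy_sets)   # bits for the membership-function index -> 3

word_length = nfm * (dm_m + dm_fm)          # -> 24 bits
compact_memory = n_elements * word_length   # -> 3072 bits (128 x 24)

vector_word = n_fuzzy_sets * dm_m           # -> 40 bits
vector_memory = n_elements * vector_word    # -> 5120 bits (128 x 40)

print(f"compact word: {word_length} bits, memory: {compact_memory} bits")
print(f"vectorial word: {vector_word} bits, memory: {vector_memory} bits")
```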
