• Title/Summary/Keyword: engineering technique

Search Results: 22,457

A study on the Degradation and By-products Formation of NDMA by the Photolysis with UV: Setup of Reaction Models and Assessment of Decomposition Characteristics by the Statistical Design of Experiment (DOE) based on the Box-Behnken Technique (UV 공정을 이용한 N-Nitrosodimethylamine (NDMA) 광분해 및 부산물 생성에 관한 연구: 박스-벤켄법 실험계획법을 이용한 통계학적 분해특성평가 및 반응모델 수립)

  • Chang, Soon-Woong;Lee, Si-Jin;Cho, Il-Hyoung
    • Journal of Korean Society of Environmental Engineers
    • /
    • v.32 no.1
    • /
    • pp.33-46
    • /
    • 2010
  • We investigated the decomposition characteristics and by-product formation of N-Nitrosodimethylamine (NDMA) in a UV process using a statistical design of experiment (DOE) based on the Box-Behnken design. The main factors (variables) were UV intensity ($X_1$, range: $1.5{\sim}4.5\;mW/cm^2$), NDMA concentration ($X_2$, range: 100~300 uM), and pH ($X_3$, range: 3~9), each at 3 levels, and 4 responses were set up to estimate the prediction models and optimization conditions: $Y_1$ (% of NDMA removal), $Y_2$ (dimethylamine (DMA) formation, uM), $Y_3$ (dimethylformamide (DMF) formation, uM), and $Y_4$ ($NO_2$-N formation, uM). The prediction models and optimum points obtained by canonical analysis were: $Y_1$ [% of NDMA removal] = $117+21X_1-0.3X_2-17.2X_3+2.43X_1^2+0.001X_2^2+3.2X_3^2-0.08X_1X_2-1.6X_1X_3-0.05X_2X_3$ ($R^2$ = 96%, Adjusted $R^2$ = 88%), with an optimum of 99.3% at $X_1$: $4.5\;mW/cm^2$, $X_2$: 190 uM, $X_3$: 3.2; $Y_2$ [DMA conc.] = $-101+18.5X_1+0.4X_2+21X_3-3.3X_1^2-0.01X_2^2-1.5X_3^2-0.01X_1X_2+0.07X_1X_3-0.01X_2X_3$ ($R^2$ = 99.4%, Adjusted $R^2$ = 95.7%), with an optimum of 35.2 uM at $X_1$: $3\;mW/cm^2$, $X_2$: 220 uM, $X_3$: 6.3; $Y_3$ [DMF conc.] = $-6.2+0.2X_1+0.02X_2+2X_3-0.26X_1^2-0.01X_2^2-0.2X_3^2-0.004X_1X_2+0.1X_1X_3-0.02X_2X_3$ ($R^2$ = 98%, Adjusted $R^2$ = 94.4%), with an optimum of 3.7 uM at $X_1$: $4.5\;mW/cm^2$, $X_2$: 290 uM, $X_3$: 6.2; and $Y_4$ [$NO_2$-N conc.] = $-25+12.2X_1+0.15X_2+7.8X_3+1.1X_1^2+0.001X_2^2-0.34X_3^2+0.01X_1X_2+0.08X_1X_3-3.4X_2X_3$ ($R^2$ = 98.5%, Adjusted $R^2$ = 95.7%), with an optimum of 74.5 uM at $X_1$: $4.5\;mW/cm^2$, $X_2$: 220 uM, $X_3$: 3.1. This study has demonstrated that response surface methodology and the Box-Behnken statistical experiment design can provide statistically reliable results for the decomposition and by-products of NDMA by UV photolysis, and also for the determination of optimum conditions. Predictions obtained from the response functions were in good agreement with the experimental results, indicating the reliability of the methodology used.
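The fitted second-order surface for $Y_1$ can be evaluated directly; a minimal Python sketch, with the coefficients copied from the rounded values quoted in the abstract (so the value at the reported optimum differs somewhat from the quoted 99.3%):

```python
# Second-order (Box-Behnken) response surface for Y1 = % NDMA removal,
# using the rounded coefficients quoted in the abstract.
# x1: UV intensity (mW/cm^2), x2: NDMA concentration (uM), x3: pH.
def y1_removal(x1, x2, x3):
    return (117 + 21*x1 - 0.3*x2 - 17.2*x3
            + 2.43*x1**2 + 0.001*x2**2 + 3.2*x3**2
            - 0.08*x1*x2 - 1.6*x1*x3 - 0.05*x2*x3)

# Evaluate at the reported optimum (X1 = 4.5 mW/cm^2, X2 = 190 uM, X3 = 3.2).
# With the rounded coefficients this gives ~95.7%, close to the quoted 99.3%.
removal_at_optimum = y1_removal(4.5, 190, 3.2)
```

The gap between ~95.7% and the reported 99.3% is expected from coefficient rounding in the published model.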

Performance Analysis of Frequent Pattern Mining with Multiple Minimum Supports (다중 최소 임계치 기반 빈발 패턴 마이닝의 성능분석)

  • Ryang, Heungmo;Yun, Unil
    • Journal of Internet Computing and Services
    • /
    • v.14 no.6
    • /
    • pp.1-8
    • /
    • 2013
  • Data mining techniques are used to find important and meaningful information in huge databases, and pattern mining is one of the most significant data mining techniques. Pattern mining is a method of discovering useful patterns in huge databases. Frequent pattern mining, one branch of pattern mining, extracts patterns whose frequencies exceed a minimum support threshold; such patterns are called frequent patterns. Traditional frequent pattern mining applies a single minimum support threshold to the whole database. This single-support model implicitly assumes that all items in the database have the same nature. In real-world applications, however, each item can have its own characteristics, so an appropriate pattern mining technique that reflects those characteristics is required. In the single-support framework, where the natures of items are not considered, the threshold must be set very low to mine patterns containing rare items, which yields too many patterns including meaningless items; conversely, if the threshold is set too high, no patterns containing rare items can be mined at all. This dilemma is called the rare item problem. To solve it, initial studies proposed approximate approaches that split the data into several groups according to item frequencies, or that group related rare items. However, these methods cannot find all frequent patterns, including rare frequent patterns, because they rely on approximation. Hence, a pattern mining model with multiple minimum supports was proposed to solve the rare item problem. In this model, each item has its own minimum support threshold, called MIS (Minimum Item Support), calculated from item frequencies in the database.
The multiple minimum supports model finds all rare frequent patterns, without generating meaningless patterns or losing significant ones, by applying the MIS values. Meanwhile, candidate patterns are extracted during the mining process, and in the single minimum support model only the single threshold is compared with the frequencies of the candidate patterns; the characteristics of the items that constitute a candidate pattern are therefore not reflected, and the rare item problem arises. To address this in the multiple minimum supports model, the minimum MIS value among the items in a candidate pattern is used as that pattern's support threshold, so that its characteristics are considered. To efficiently mine frequent patterns, including rare frequent patterns, with this concept, tree-based algorithms of the multiple minimum supports model sort items in the tree in MIS-descending order, in contrast to those of the single minimum support model, where items are ordered by frequency. In this paper, we study the characteristics of frequent pattern mining based on multiple minimum supports and compare its performance with a general frequent pattern mining algorithm in terms of runtime, memory usage, and scalability. Experimental results show that the multiple minimum supports based algorithm outperforms the single minimum support based one, while demanding more memory for the MIS information. Moreover, both algorithms show good scalability.
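The core rule above, that a candidate pattern is checked against the minimum MIS among its items, can be sketched with a naive enumeration (illustrative only; the tree-based algorithms the paper studies avoid enumerating all itemsets):

```python
from itertools import combinations

def mine_ms_frequent(transactions, mis):
    """Naive multiple-minimum-supports mining: a candidate pattern is frequent
    when its support count reaches the *minimum* MIS among its items."""
    items = sorted({i for t in transactions for i in t})
    frequent = {}
    for k in range(1, len(items) + 1):
        for cand in combinations(items, k):
            threshold = min(mis[i] for i in cand)
            support = sum(1 for t in transactions if set(cand) <= t)
            if support >= threshold:
                frequent[cand] = support
    return frequent

# A rare item ("caviar") survives via its low MIS, while a pattern of two
# common items is pruned because their MIS values are higher.
transactions = [{"bread", "milk"}, {"bread", "milk", "caviar"}, {"bread"}, {"milk"}]
mis = {"bread": 3, "milk": 3, "caviar": 1}
patterns = mine_ms_frequent(transactions, mis)
```

With a single global threshold of 3, "caviar" patterns would be lost; with a global threshold of 1, every combination would survive, illustrating the rare item problem the MIS model resolves.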

Adaptive RFID anti-collision scheme using collision information and m-bit identification (충돌 정보와 m-bit인식을 이용한 적응형 RFID 충돌 방지 기법)

  • Lee, Je-Yul;Shin, Jongmin;Yang, Dongmin
    • Journal of Internet Computing and Services
    • /
    • v.14 no.5
    • /
    • pp.1-10
    • /
    • 2013
  • RFID (Radio Frequency Identification) is a non-contact identification technology. A basic RFID system consists of a reader and a set of tags. RFID tags can be divided into active and passive tags. Active tags carry their own power source and can operate on their own, while passive tags are small and low-cost, which makes them more suitable for the distribution industry. A reader processes the information received from tags. An RFID system achieves fast identification of multiple tags using radio frequency. RFID systems have been applied in a variety of fields such as distribution, logistics, transportation, inventory management, access control, and finance. To encourage the adoption of RFID systems, several problems (price, size, power consumption, security) should be resolved. In this paper, we propose an algorithm that significantly alleviates the collision problem caused by simultaneous responses of multiple tags. Anti-collision schemes in RFID systems fall into three categories: probabilistic, deterministic, and hybrid. We introduce ALOHA-based protocols as probabilistic methods and tree-based protocols as deterministic ones. In ALOHA-based protocols, time is divided into multiple slots, and each tag randomly selects a slot in which to transmit its ID. Being probabilistic, ALOHA-based protocols cannot guarantee that all tags are identified. In contrast, tree-based protocols guarantee that a reader identifies all tags within its transmission range. In a tree-based protocol, the reader sends a query and tags respond to it with their IDs; when two or more tags respond, a collision occurs, and the reader generates and sends a new query. Frequent collisions degrade identification performance, so collisions must be reduced efficiently to identify tags quickly. Each RFID tag has a 96-bit EPC (Electronic Product Code) ID.
Tags from the same company or manufacturer have similar IDs sharing a common prefix, so unnecessary collisions occur when multiple tags are identified with the Query Tree protocol; the resulting growth in query-responses and idle time significantly increases the identification time. To solve this problem, the Collision Tree protocol and the M-ary Query Tree protocol have been proposed. However, in the Collision Tree and Query Tree protocols, only one bit is resolved per query-response, and when similar tag IDs exist, the M-ary Query Tree protocol generates unnecessary query-responses. In this paper, we propose an Adaptive M-ary Query Tree protocol that improves identification performance using m-bit recognition, collision information of tag IDs, and a prediction technique. We compare the proposed scheme with other tree-based protocols under the same conditions and show that it outperforms them in terms of identification time and identification efficiency.
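For reference, the baseline binary Query Tree protocol that such schemes improve on can be simulated in a few lines (a sketch of the standard protocol, not the paper's adaptive M-ary variant; note how a shared prefix inflates the query count):

```python
def query_tree_identify(tag_ids):
    """Simulate the basic binary Query Tree anti-collision protocol.
    The reader broadcasts a prefix query; tags whose ID starts with the
    prefix respond. A collision (2+ responders) splits the query into
    prefix+'0' and prefix+'1'; a single responder is identified."""
    identified, queries = [], 0
    stack = [""]
    while stack:
        prefix = stack.pop()
        queries += 1
        responders = [t for t in tag_ids if t.startswith(prefix)]
        if len(responders) == 1:
            identified.append(responders[0])
        elif len(responders) > 1:          # collision: refine the query
            stack.append(prefix + "1")
            stack.append(prefix + "0")
    return identified, queries

# Three tags sharing the long prefix "0" force repeated collisions
# before any tag is resolved.
tags = ["0001", "0010", "0011", "0111"]
found, num_queries = query_tree_identify(tags)
```

Here 4 tags cost 9 query-responses (including one idle query on prefix "1"); m-bit recognition and collision-information techniques aim to cut exactly this overhead.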

Bankruptcy Forecasting Model using AdaBoost: A Focus on Construction Companies (적응형 부스팅을 이용한 파산 예측 모형: 건설업을 중심으로)

  • Heo, Junyoung;Yang, Jin Yong
    • Journal of Intelligence and Information Systems
    • /
    • v.20 no.1
    • /
    • pp.35-48
    • /
    • 2014
  • According to the 2013 construction market outlook report, the liquidation of construction companies is expected to continue due to the ongoing residential construction recession. Bankruptcies of construction companies have a greater social impact than those in other industries, yet because of their distinctive capital structure and debt-to-equity ratios, they are harder to forecast. The construction industry operates on greater leverage, with high debt-to-equity ratios and project cash flow concentrated in the second half of a project, and it is strongly influenced by the economic cycle, so downturns tend to rapidly increase construction bankruptcy rates. High leverage, coupled with increased bankruptcy rates, can place a greater burden on banks lending to construction companies. Nevertheless, bankruptcy prediction models have concentrated mainly on financial institutions, and construction-specific studies are rare. Bankruptcy forecasting models based on corporate financial data have been studied for many years in various ways, but these models target companies in general and may not be appropriate for forecasting bankruptcies of construction companies, which typically carry high liquidity risks. The construction industry is capital-intensive, operates on long timelines with large-scale investment projects, and has comparatively longer payback periods than other industries; given this unique capital structure, criteria used to judge the financial risk of companies in general are difficult to apply to construction firms. The Altman Z-score, first published in 1968, is a commonly used bankruptcy forecasting model. It forecasts the likelihood of a company going bankrupt using a simple formula that classifies results into three categories, evaluating corporate status as dangerous, moderate, or safe. A company in the "dangerous" category has a high likelihood of bankruptcy within two years, while those in the "safe" category have a low likelihood; for companies in the "moderate" category, the risk is difficult to forecast, and many of the construction firms in this study fell into that category. With the development of machine learning, recent studies of corporate bankruptcy forecasting have adopted it: pattern recognition, a representative application area of machine learning, is applied by analyzing patterns in a company's financial information and judging whether the pattern belongs to the bankruptcy-risk group or the safe group. The representative machine learning models previously used in bankruptcy forecasting are Artificial Neural Networks, Adaptive Boosting (AdaBoost), and the Support Vector Machine (SVM), along with many hybrid studies combining these models. Existing studies, whether using the traditional Z-score technique or machine learning, focus on companies in non-specific industries, so industry-specific characteristics are not considered. In this paper, we confirm that adaptive boosting (AdaBoost) is the most appropriate forecasting model for construction companies, analyzed by company size. We classified construction companies into three groups (large, medium, and small) based on capital and analyzed the predictive ability of AdaBoost for each group. The experimental results showed that AdaBoost has greater predictive ability than the other models, especially for the group of large companies with capital of more than 50 billion won.
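The boosting mechanism the paper relies on can be shown with a toy 1-D sketch (plain AdaBoost with decision stumps on made-up data; the paper's actual model is trained on construction firms' financial ratios, which are not reproduced here):

```python
import math

def best_stump(xs, ys, w):
    """Exhaustively search 1-D decision stumps (threshold + polarity)
    for the lowest weighted classification error."""
    thresholds = [min(xs) - 0.5] + [x + 0.5 for x in sorted(set(xs))]
    best = None
    for t in thresholds:
        for p in (1, -1):
            preds = [p if x < t else -p for x in xs]
            err = sum(wi for wi, pr, y in zip(w, preds, ys) if pr != y)
            if best is None or err < best[0]:
                best = (err, t, p, preds)
    return best

def adaboost(xs, ys, rounds=3):
    """Plain AdaBoost: reweight samples toward the current mistakes and
    combine stumps with weights alpha = 0.5 * ln((1 - err) / err)."""
    n = len(xs)
    w = [1.0 / n] * n
    ensemble = []
    for _ in range(rounds):
        err, t, p, preds = best_stump(xs, ys, w)
        err = min(max(err, 1e-12), 1 - 1e-12)   # guard the log
        alpha = 0.5 * math.log((1 - err) / err)
        ensemble.append((alpha, t, p))
        w = [wi * math.exp(-alpha * y * pr) for wi, y, pr in zip(w, ys, preds)]
        s = sum(w)
        w = [wi / s for wi in w]
    return ensemble

def predict(ensemble, x):
    score = sum(a * (p if x < t else -p) for a, t, p in ensemble)
    return 1 if score >= 0 else -1

# Labels + - - + cannot be fit by any single stump (best: 75% accuracy),
# but three boosted stumps classify all four points correctly.
xs, ys = [1.0, 2.0, 3.0, 4.0], [1, -1, -1, 1]
ens = adaboost(xs, ys, rounds=3)
```

The reweighting step is what lets the ensemble correct the "moderate" cases a single weak rule gets wrong, which is the intuition behind applying it to hard-to-classify construction firms.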

A Methodology of Customer Churn Prediction based on Two-Dimensional Loyalty Segmentation (이차원 고객충성도 세그먼트 기반의 고객이탈예측 방법론)

  • Kim, Hyung Su;Hong, Seung Woo
    • Journal of Intelligence and Information Systems
    • /
    • v.26 no.4
    • /
    • pp.111-126
    • /
    • 2020
  • Most industries have recently become aware of the importance of customer lifetime value as they are exposed to a competitive environment. As a result, preventing customer churn is becoming a more important business issue than acquiring new customers, because retaining customers is far more economical: the acquisition cost of a new customer is known to be five to six times higher than the cost of retaining an existing one. Companies that effectively prevent customer churn and improve retention rates benefit not only from increased profitability but also from an improved brand image through higher customer satisfaction. Customer churn prediction, once a sub-area of CRM research, has become more important as a big-data-based performance marketing theme with the development of business machine learning technology. Until now, research on churn prediction has been carried out actively in sectors such as mobile telecommunications, finance, distribution, and gaming, which are highly competitive and where churn management is urgent. These studies focused on improving the performance of the churn prediction model itself, such as comparing the performance of various models, exploring features that are effective in forecasting churn, or developing new ensemble techniques, and were limited in practical utility because most treated the entire customer base as a single group when developing a predictive model. In short, the main purpose of existing related research was to improve the predictive model itself, and relatively little research aimed to improve the overall churn prediction process.
In fact, customers exhibit different behavioral characteristics due to heterogeneous transaction patterns, and their churn rates differ accordingly, so it is unreasonable to treat the entire customer base as a single group. Therefore, to carry out effective churn prediction in heterogeneous industries, it is desirable to segment customers according to classification criteria such as loyalty and to operate an appropriate churn prediction model for each segment. Some studies have indeed subdivided customers using clustering techniques and applied a churn prediction model to each group. Although this can produce better predictions than a single model for the entire customer population, there is still room for improvement: clustering is a mechanical, exploratory grouping technique that computes distances over input variables and does not reflect the strategic intent of the firm, such as loyalty. This study proposes a segment-based customer churn prediction process (CCP/2DL: Customer Churn Prediction based on Two-Dimensional Loyalty segmentation), assuming that successful churn management is better achieved through improvements in the overall process than through model performance alone. CCP/2DL is a churn prediction process that segments customers along two loyalty dimensions, quantitative and qualitative, performs secondary grouping of the segments according to churn patterns, and then independently applies heterogeneous churn prediction models to each churn pattern group. To assess the relative merit of the proposed process, performance comparisons were carried out against the general churn prediction process and the clustering-based churn prediction process.
The general churn prediction process used in this study refers to applying a single machine learning model to the entire customer base, the most common approach, while the clustering-based process first segments customers with clustering techniques and builds a churn prediction model for each group. In an application with a global NGO, the proposed CCP/2DL showed better predictive performance than the other methodologies. The proposed process is not only effective in predicting churn but can also serve as a strategic basis for obtaining a variety of customer insights and carrying out other related performance marketing activities.
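The first stage of such a process, placing each customer in a two-dimensional loyalty quadrant so that a separate model can be trained per segment, can be sketched as follows (segment names, score meanings, and cutoffs are hypothetical illustrations, not the paper's definitions):

```python
# Illustrative sketch of two-dimensional loyalty segmentation: customers are
# placed in quadrants by a quantitative loyalty score (e.g. transaction
# frequency) and a qualitative one (e.g. engagement). The names and cutoffs
# below are hypothetical, not taken from the CCP/2DL paper.
def loyalty_segment(quantitative, qualitative, q_cut=0.5, l_cut=0.5):
    hi_q = quantitative >= q_cut
    hi_l = qualitative >= l_cut
    if hi_q and hi_l:
        return "core"
    if hi_q:
        return "habitual"      # transacts often, weak attitudinal loyalty
    if hi_l:
        return "latent"        # positive attitude, few transactions
    return "at-risk"

def segment_customers(customers):
    """Group (id, quantitative, qualitative) records by segment, so that a
    separate churn model can be trained per group instead of one for all."""
    groups = {}
    for cid, quant, qual in customers:
        groups.setdefault(loyalty_segment(quant, qual), []).append(cid)
    return groups

sample = [("a", 0.9, 0.9), ("b", 0.9, 0.1), ("c", 0.1, 0.9), ("d", 0.1, 0.1)]
groups = segment_customers(sample)
```

Unlike distance-based clustering, the quadrant boundaries here encode a deliberate business definition of loyalty, which is the strategic-intent point the abstract makes.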

The Comparison of Basic Science Research Capacity of OECD Countries

  • Lim, Yang-Taek;Song, Choong-Han
    • Journal of Technology Innovation
    • /
    • v.11 no.1
    • /
    • pp.147-176
    • /
    • 2003
  • This paper presents a new measurement technique to derive a BSRC (Basic Science and Research Capacity) index using factor analysis, extended with the assumption that the selected explanatory variables follow the standard normal probability distribution. The method is used to estimate the gap between Korea's BSRC level and those of major OECD countries in terms of time lag, and to compare them internationally over the period 1981~1999, on the assumption that each country's BSRC progress function takes the form of a logistic curve. The US BSRC index is estimated at 0.9878 in 1981, 0.9996 in 1990, and 0.99991 in 1999, taking first place. The US has consistently ranked top among the 16 selected countries, followed by Japan, Germany, France, and the United Kingdom, in that order. Korea's BSRC is estimated at 0.2293 in 1981, the lowest among the 16 OECD countries; however, it is estimated to have risen to 0.3216 in 1990 and 0.44652 in 1999, reaching 10th place. Korea's 1999 BSRC level (0.44652) is estimated to reach those of the US and Japan in 2233 and 2101, respectively, meaning that Korea is 234 years behind the US and 102 years behind Japan. Korea is also estimated to lag 34 years behind Germany, 16 years behind France and the UK, 15 years behind Sweden, 11 years behind Canada, 7 years behind Finland, and 5 years behind the Netherlands. For 1981~1999, the BSRC development speed of the US is estimated at 0.29700, the fastest among the selected OECD countries, followed by Japan (0.12800), Korea (0.04443), and Germany (0.04029). The US speed (0.2970) is estimated to be 2.3 times that of Japan (0.1280) and 6.7 times that of Korea.
Germany's BSRC development speed (0.04029) is estimated to be the fastest in Europe, but 7.4 times slower than that of the US. The estimated speeds of Belgium, Finland, Italy, Denmark, and the UK lie between 0.01 and 0.02, which is very slow. In particular, Spain's BSRC development speed is estimated at minus 0.0065, remaining at almost the same BSRC level over 1981~1999. Since Korea's BSRC development speed is much slower than those of the US and Japan but relatively faster than those of other countries, the gaps between Korea and those countries may narrow considerably over time, and Korea may even surpass several of them. Korea's BSRC level held 10th place until 1993, but is estimated to reach 6th place in 2010 by catching up with the UK, Sweden, Finland, and the Netherlands, and 4th place in 2020 by catching up with France and Canada. The empirical results are consistent with OECD (2001a)'s computation that Korea had the highest R&D expenditure growth among all OECD countries during 1991~1999, and that the value added of ICT industries in total business-sector value added is 12% in Korea but only 8% in Japan. OECD (2001b) likewise observed that Korea, together with the US, Sweden, and Finland, is already among the four most knowledge-based countries, where the rank of a knowledge-based country is measured by investment in knowledge, defined as public and private spending on higher education, expenditure on R&D, and investment in software.
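The time-lag construction, reading off the year at which a logistic BSRC curve reaches a given level, reduces to inverting the logistic function; a sketch with hypothetical curve parameters (the k and t0 below are illustrative, not the paper's estimated fits):

```python
import math

# The study models each country's BSRC progress as a logistic curve and
# expresses gaps between countries as time lags along those curves.
def logistic_bsrc(t, k, t0):
    """BSRC index at year t for a logistic progress function with slope k
    and midpoint year t0."""
    return 1.0 / (1.0 + math.exp(-k * (t - t0)))

def catch_up_year(target_level, k, t0):
    """Year at which a country's logistic BSRC curve reaches target_level
    (the closed-form inverse of logistic_bsrc)."""
    return t0 - math.log(1.0 / target_level - 1.0) / k

# Example: a follower whose curve has k = 0.02 and midpoint 2010 would reach
# a 0.99 index only in the 2200s -- a lag of roughly 240 years from 1999,
# the same order as the abstract's 234-year Korea-US gap (parameters here
# are illustrative, so the numbers are not the paper's).
lag_years = catch_up_year(0.99, k=0.02, t0=2010) - 1999
```

Because the logistic saturates, small index gaps near the top of the curve translate into very large time lags, which is why the estimated gaps run to centuries.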


A Study on Movement of the Free Face During Bench Blasting (전방 자유면의 암반 이동에 관한 연구)

  • Lee, Ki-Keun;Kim, Gab-Soo;Yang, Kuk-Jung;Kang, Dae-Woo;Hur, Won-Ho
    • Explosives and Blasting
    • /
    • v.30 no.2
    • /
    • pp.29-42
    • /
    • 2012
  • Variables influencing the movement of the free face during rock blasting include the physical and mechanical properties of the rock (in particular its discontinuity characteristics), explosive type, charge weight, burden, blast-hole spacing, delay time between blast-holes or rows, and stemming conditions. These variables also affect blast vibration, air blast, and fragmentation size. In surface blast design, priority is given to the safety of nearby structures; blast vibration therefore has to be controlled by analyzing free face movement at the blasting site, and the blasting operation needs to be optimized to improve fragmentation size. High-speed digital image analysis enables analysis of the initial movement of the rock free face, stemming optimality, fragment trajectories, and face movement direction and velocity, as well as of the optimal detonator initiation system. Although high-speed image analysis has been widely used abroad, its applications can hardly be found in Korea. This study carries out fundamental work on optimizing blast design and evaluation using high-speed digital image analysis. A series of experiments was performed at two large surface blasting sites, in shale and granite respectively, using emulsion and ANFO explosives. Based on the digital image analysis, the displacement and velocity of the free face were scrutinized along with the fragment size distribution. In addition, AUTODYN, a 2-D FEM model, was applied to simulate detonation pressure, detonation velocity, the response time for the initiation of free face movement, and the shape of the face movement. The results show that, regardless of rock type, the free face becomes curved like a bow because displacement and movement velocity reach their maximum near the center of the charged section. Compared with ANFO, the emulsion cases produced larger detonation pressure and velocity and a faster response in displacement initiation.

Evaluation of the CO2 Storage Capacity by the Measurement of the scCO2 Displacement Efficiency for the Sandstone and the Conglomerate in Janggi Basin (장기분지 사암과 역암 공극 내 초임계 이산화탄소 대체저장효율 측정에 의한 이산화탄소 저장성능 평가)

  • Kim, Seyoon;Kim, Jungtaek;Lee, Minhee;Wang, Sookyun
    • Economic and Environmental Geology
    • /
    • v.49 no.6
    • /
    • pp.469-477
    • /
    • 2016
  • To evaluate the $CO_2$ storage capacity of reservoir rock, a laboratory-scale technique was developed in this study to measure the amount of $scCO_2$ replacing the pore water of the reservoir rock after $CO_2$ injection. Laboratory experiments were performed to measure the $scCO_2$ displacement efficiency of the conglomerate and the sandstone in Janggi basin, which are classified as available $CO_2$ storage rocks in Korea. A high-pressure stainless steel cell with two different walls was designed, and undisturbed rock cores acquired from the deep drilling site around Janggi basin were used for the experiments. From the experiments, the average $scCO_2$ displacement efficiency of the conglomerate and the sandstone in Janggi basin was measured at 31.2% and 14.4%, respectively, values that can be used to evaluate the feasibility of the Janggi basin as a $scCO_2$ storage site in Korea. Assuming that the effective radius of the $CO_2$ storage formations is 250 m and that the average thickness of the conglomerate and sandstone formations below 800 m depth is 50 m each (from the drilling profile and the geophysical survey), the $scCO_2$ storage capacity of the reservoir rocks around the probable injection site in Janggi basin was calculated at 264,592 metric tons, demonstrating that the conglomerate and sandstone formations in Janggi basin have great potential for use as a pilot-scale test site for $CO_2$ storage in Korea.
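The capacity estimate follows a standard volumetric calculation; a sketch of the arithmetic, where the porosity and scCO2 density are assumed values for illustration (the abstract does not report them, so this does not reproduce the 264,592 t figure):

```python
import math

def sc_co2_capacity_tons(radius_m, thickness_m, porosity,
                         displacement_eff, sc_co2_density_kg_m3):
    """Volumetric scCO2 storage capacity for a cylindrical formation:
    bulk volume x porosity x measured displacement efficiency x scCO2 density."""
    bulk_volume_m3 = math.pi * radius_m**2 * thickness_m
    co2_mass_kg = bulk_volume_m3 * porosity * displacement_eff * sc_co2_density_kg_m3
    return co2_mass_kg / 1000.0

# r = 250 m and h = 50 m per formation come from the abstract; the measured
# displacement efficiencies are 31.2% (conglomerate) and 14.4% (sandstone).
# Porosity 0.20 and scCO2 density 600 kg/m^3 are assumptions for this sketch.
conglomerate_t = sc_co2_capacity_tons(250, 50, 0.20, 0.312, 600)
sandstone_t = sc_co2_capacity_tons(250, 50, 0.20, 0.144, 600)
```

The displacement efficiency enters linearly, which is why the conglomerate's higher measured efficiency dominates the site's estimated capacity.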

Effect of Calvarial Cell Inoculated Onto the Biodegradable Barrier Membrane on the Bone Regeneration (흡수성 차폐막에 접목된 두개관골세포의 골조직 재생에 미치는 영향)

  • Yu, Bu-Young;Lee, Man-Sup;Kwon, Young-Hyuk;Park, Joon-Bong;Herr, Yeek
    • Journal of Periodontal and Implant Science
    • /
    • v.29 no.3
    • /
    • pp.483-509
    • /
    • 1999
  • Biodegradable barrier membranes have been demonstrated in animal studies to have guided bone regeneration capacity. The purpose of this study is to evaluate the effect of cultured calvarial cells inoculated onto a biodegradable barrier membrane on the regeneration of artificial bone defects. In this experiment, 35 Sprague-Dawley male rats (mean body weight 150 g) were used, of which 30 were divided into 3 groups. In group I, defects were covered with periosteum without a membrane. In group II, defects were repaired with a biodegradable barrier membrane. In group III, defects were repaired with a biodegradable barrier membrane seeded with cultured calvarial cells. Every surgical procedure was performed under general anesthesia by intravenous injection of pentobarbital sodium (30 mg/kg). After anesthesia, 5 rats were sacrificed by decapitation to obtain calvaria for bone cell culture. Calvarial cells were cultured in Dulbecco's Modified Essential Medium containing 10% Fetal Bovine Serum under conventional conditions. The number of cells inoculated onto the membrane was $1{\times}10^6$ cells/ml, and the membranes were inserted into the artificial bone defects after 3 days of culture. A single 3-mm-diameter full-thickness calvarial defect was made in each animal with a bone trephine drill. After surgery, the animals were sacrificed at 1, 2, and 3 weeks using a perfusion technique. For histological sections, tissues were fixed in 2.5% glutaraldehyde (0.1 M cacodylate buffer, pH 7.2) and Karnovsky's fixative solution, and decalcified with 0.1 M disodium ethylenediaminetetraacetate for 3 weeks. Tissues were embedded in paraffin and cut parallel to the surface of the calvaria; sections of 7 ${\mu}m$ thickness were stained with Hematoxylin-Eosin, and all specimens were observed under light microscopy. The following results were obtained. 1. Fibrous connective tissue was revealed at 1 week after surgery, indicating rapid soft tissue recovery; new bone formation in the defect area showed a more rapid tendency in the test group than in the other two groups. 2. The order of healing rate of the bone defect area was: test group, positive control, then negative control. 3. During the experiment, no osteoclastic cells were found around the pre-existing bone; new bone formation originated from the periphery of the remaining bone wall and gradually extended into the central portion of the defect. 4. The biodegradable barrier membrane showed favorable biocompatibility throughout the experimental period without any noticeable foreign body reaction, and mineralization in the newly formed osteoid tissue was relatively more rapid than in the other groups from the early stage of healing. Conclusively, cultured bone cells inoculated onto a biodegradable barrier membrane may play an important role in the regeneration of artificial defects of alveolar bone. This study thus demonstrates a tissue-engineering approach to the repair of bone defects, which may have applications in clinical fields of dentistry, including periodontics.


Prediction of the Gold-silver Deposits from Geochemical Maps - Applications to the Bayesian Geostatistics and Decision Tree Techniques (지화학자료를 이용한 금${\cdot}$은 광산의 배태 예상지역 추정-베이시안 지구통계학과 의사나무 결정기법의 활용)

  • Hwang, Sang-Gi;Lee, Pyeong-Koo
    • Economic and Environmental Geology
    • /
    • v.38 no.6 s.175
    • /
    • pp.663-673
    • /
    • 2005
  • This study investigates the relationship between geochemical maps and gold-silver deposit locations. Geochemical maps of 21 elements published by KIGAM, locations of gold-silver deposits, and the 1:1,000,000-scale geological map of Korea are utilized for this investigation. The pixel size of the basic geochemical maps is 250 m, and the data are resampled at 1 km spacing for the statistical analyses. The relationship between mine locations and the geochemical data is investigated using Bayesian statistics and decision tree algorithms. For the Bayesian statistics, each geochemical map is reclassified by percentile divisions, which split the data into the 5, 25, 50, 75, 95, and 100% groups. The number of mine locations in each division is counted and the corresponding probabilities are calculated. The posterior probability of each pixel is then calculated from the probabilities of the 21 geochemical maps and the geological map, and a prediction map of mining locations is made by plotting the posterior probability. The input parameters for the decision tree construction are the 21 geochemical elements and lithology, and the output parameters are 5 types of mines (Ag/Au, Cu, Fe, Pb/Zn, W) and absence of a mine. The locations for the absence of a mine are selected by resampling the overall area at 1 km spacing and eliminating any resampled points within 750 m of a mine location. A prediction map for each mine type is produced by applying the decision tree to every pixel. The prediction by the Bayesian method is slightly better than that of the decision tree, but both prediction maps show a reasonable match with the input mine locations. We interpret this match as indicating that the rules produced by both methods are reasonable, and that the geochemical data therefore have a strong relation with the mine locations. This implies that the geochemical rules could be used as background values for mine locations, and therefore for the evaluation of mine contamination. Bayesian statistics indicated that the probability of an Au/Ag deposit increases as CaO, Cu, MgO, MnO, Pb, and Li increase and as Zr decreases.
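The per-pixel posterior update described above can be sketched as an odds product over the element maps (an illustrative simplification that assumes independence across maps, which the study's full procedure need not; the numeric values below are made up):

```python
# Combine a prior deposit probability with one likelihood ratio per
# geochemical map, P(observed class | deposit) / P(observed class | no deposit),
# under a naive independence assumption across the 21 element maps.
def posterior_probability(prior, likelihood_ratios):
    odds = prior / (1.0 - prior)
    for lr in likelihood_ratios:
        odds *= lr
    return odds / (1.0 + odds)

# A pixel whose percentile classes are each 10x more common around known
# deposits than elsewhere (illustrative ratios) is promoted well above the
# low base-rate prior of deposits per pixel.
pixel_posterior = posterior_probability(0.01, [10.0, 10.0])
```

Plotting such a posterior for every pixel yields the kind of prediction map the study compares against the decision tree output.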