• Title/Summary/Keyword: Site-Specific Performance

An Objective Procedure to Decide the Scale Factors for Applying Land-form Classification Methodology Using TPI (TPI 응용에 의한 산악지형 분류기법의 적용을 위한 scale factor 선정방법 개발)

  • Jang, Kwangmin;Song, Jungeun;Park, Kyeung;Chung, Joosang
    • Journal of Korean Society of Forest Science / v.98 no.6 / pp.639-645 / 2009
  • The objective of this research was to introduce the TPI approach for interpreting land-forms of mountain forests in South Korea. We developed an objective procedure for deciding the scale factor used as the basic analytical unit in TPI-based land-form classification of rugged mountain areas. To determine the scale factor associated with the pattern of slope profiles, a gradient variance curve was derived from a revised hypsometric curve built on the relief energy of topographic profiles. From the gradient variance curve, the grid size at which the change in relief energy peaked was identified and adopted as the scale factor for the study area. To evaluate the performance of this procedure, it was applied to the determination of site-specific scale factors for three different terrain conditions: highly rugged, moderately rugged, and relatively less rugged. The TPI associated with the corresponding scale factor for each study site was then computed and used to classify the land-forms. The results show that the scale factor becomes smaller as the terrain becomes more rugged. The numbers of valleys and ridges estimated with TPI follow almost the same trends as the observed numbers, and the scale factor tends to approach the mean distance between ridges.
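
The core quantity in this classification, the topographic position index, is each cell's elevation minus the mean elevation of a neighborhood whose width is the scale factor. A minimal sketch of that computation follows; the synthetic DEM, neighborhood sizes, and ridge/valley thresholds are illustrative assumptions, and the sketch does not reproduce the paper's gradient-variance procedure for choosing the scale factor.

```python
# A minimal sketch of TPI computation and land-form labelling for candidate
# scale factors. The synthetic DEM, neighborhood sizes, and ridge/valley
# thresholds are illustrative assumptions, not the paper's procedure.
import numpy as np
from scipy.ndimage import uniform_filter

def tpi(dem: np.ndarray, scale_cells: int) -> np.ndarray:
    """Topographic Position Index: cell elevation minus the mean elevation of a
    square neighborhood whose width (in cells) is the candidate scale factor."""
    return dem - uniform_filter(dem, size=scale_cells, mode="nearest")

def classify_landform(dem: np.ndarray, scale_cells: int,
                      ridge_thr: float = 2.0, valley_thr: float = -2.0) -> np.ndarray:
    """Label each cell as ridge (+1), valley (-1), or mid-slope (0)."""
    index = tpi(dem, scale_cells)
    labels = np.zeros(index.shape, dtype=int)
    labels[index >= ridge_thr] = 1
    labels[index <= valley_thr] = -1
    return labels

if __name__ == "__main__":
    # Smoothed random field standing in for a 30 m DEM of rugged terrain (m).
    rng = np.random.default_rng(0)
    dem = uniform_filter(rng.normal(800.0, 200.0, (200, 200)), size=15)
    # Counting ridges/valleys per candidate scale factor mirrors the comparison
    # with observed counts reported in the study.
    for scale in (3, 9, 21):
        labels = classify_landform(dem, scale)
        print(scale, int((labels == 1).sum()), int((labels == -1).sum()))
```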

Identification of catalytic acidic residues of levan fructotransferase from Microbacterium sp. AL-210 (Microbacterium sp. AL-210이 생산하는 levan fructotransferase의 효소활성에 중요한 아미노산의 동정)

  • Sung, Hee-Kyung;Moon, Keum-Ok;Choi, Ki-Won;Choi, Kyung-Hwa;Hwang, Kyung-Ju;Kim, Myo-Jung;Cha, Jae-Ho
    • Journal of Life Science / v.17 no.1 s.81 / pp.6-11 / 2007
  • $\beta$-Fructofuranosidases, family 32 of the glycoside hydrolases (GH32), share three conserved domains containing the W(L/M)(C/N)DP(Q/N), FRDPK, and ECP(D/G) motifs. The functional role of the conserved acidic residues within the three domains of levan fructotransferase, one of the $\beta$-fructofuranosidases, from Microbacterium sp. AL-210 was studied by site-directed mutagenesis. Each mutant was overexpressed in E. coli BL21(DE3) and purified using Hi-Trap chelating affinity chromatography and fast performance liquid chromatography. Substitution of Asp-63 by Ala, Asp-195 by Asn, and Glu-245 by either Ala or Asp decreased the enzyme activity approximately 100-fold compared with the wild-type enzyme. This result indicates that the three acidic residues Asp-63, Asp-195, and Glu-245 play a major role in catalysis. Since these residues occupy conserved positions in inulinase, levanase, levan fructotransferase, and invertase, they are likely to share common functional roles as nucleophile, transition-state stabilizer, and general acid in the $\beta$-fructofuranosidases.

Development of Neural Network Model for Estimation of Undrained Shear Strength of Korean Soft Soil Based on UU Triaxial Test and Piezocone Test Results (비압밀-비배수(UU) 삼축실험과 피에조콘 실험결과를 이용한 국내 연약지반의 비배수전단강도 추정 인공신경망 모델 개발)

  • Kim Young-Sang
    • Journal of the Korean Geotechnical Society / v.21 no.8 / pp.73-84 / 2005
  • A three-layered neural network model was developed using the back-propagation algorithm to estimate the UU undrained shear strength of Korean soft soil, based on a database of measured undrained shear strengths and piezocone measurements compiled from 8 sites across Korea. The developed model was validated by comparing its predictions with measured values for new piezocone data that had not been used during model development. The performance of the neural network model was also compared with conventional empirical methods. It was found that the optimal number of neurons in the hidden layer differs for different combinations of transfer functions. Nevertheless, all of the piezocone neural network models successfully infer the complex relationship between piezocone measurements and the undrained shear strength of Korean soft soils, giving relatively high coefficients of determination ranging from 0.69 to 0.72. Since the neural network model is generalized by self-learning from a database of piezocone measurements and undrained shear strengths across various sites, the developed models give more precise and more generally reliable undrained shear strengths than empirical approaches, which still require site-specific calibration.
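
For readers unfamiliar with the model class, the sketch below shows a three-layer (single hidden layer) network trained with back-propagation on synthetic piezocone-style inputs. The feature set, hidden-layer size, learning rate, and data are illustrative assumptions, not the paper's database, architecture search, or transfer-function combinations.

```python
# A minimal sketch of a three-layer network (one tanh hidden layer, linear
# output) trained by back-propagation. Inputs stand in for standardized
# piezocone measurements; all values here are synthetic assumptions.
import numpy as np

rng = np.random.default_rng(1)

def train_mlp(X, y, n_hidden=5, lr=0.01, epochs=5000):
    """Fit y ~ f(X) with a tanh hidden layer and a linear output unit."""
    n_features = X.shape[1]
    W1 = rng.normal(0, 0.1, (n_features, n_hidden)); b1 = np.zeros(n_hidden)
    W2 = rng.normal(0, 0.1, (n_hidden, 1));          b2 = np.zeros(1)
    for _ in range(epochs):
        h = np.tanh(X @ W1 + b1)             # hidden layer activations
        y_hat = (h @ W2 + b2).ravel()        # predicted undrained shear strength
        err = y_hat - y
        # back-propagate the mean squared-error gradient
        dW2 = h.T @ err[:, None] / len(y);   db2 = err.mean(keepdims=True)
        dh = err[:, None] @ W2.T * (1 - h ** 2)
        dW1 = X.T @ dh / len(y);             db1 = dh.mean(axis=0)
        W1 -= lr * dW1; b1 -= lr * db1; W2 -= lr * dW2; b2 -= lr * db2
    return lambda Xnew: (np.tanh(Xnew @ W1 + b1) @ W2 + b2).ravel()

# Synthetic data: 3 standardized piezocone-style inputs, target in kPa.
X = rng.normal(size=(120, 3))
y = 40 + 10 * X[:, 0] - 5 * X[:, 1] + rng.normal(0, 2, 120)
predict = train_mlp(X, y)
print(np.corrcoef(predict(X), y)[0, 1] ** 2)  # in-sample R^2, illustrative only
```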

Solid-phase PEGylation for Site-Specific Modification of Recombinant Interferon ${\alpha}$-2a: Process Performance, Characterization, and In-vitro Bioactivity (재조합 인터페론 알파-2a의 부위 특이적 수식을 위한 고체상 PEGylation : 공정 성능, 특성화 및 생물학적 활성)

  • Lee, Byung-Kook;Kwon, Jin-Sook;Lee, E.K.
    • KSBB Journal / v.21 no.2 / pp.133-139 / 2006
  • In 'solid-phase' PEGylation, the conjugation reaction occurs while the protein is attached to a solid matrix, and thus it can have distinct advantages over the conventional solution-phase process. We report a case study: rhIFN-${\alpha}$-2a was first adsorbed to a cation exchange resin and then N-terminally PEGylated with aldehyde mPEG of 5, 10, or 20 kDa through reductive alkylation. After the PEGylation, salt gradient elution efficiently recovered the mono-PEGylate in purified form from the unwanted species such as unmodified IFN and unreacted PEG, so the mono-PEGylation and its purification were integrated into a single chromatographic step. Depending on the molecular weight of the mPEG aldehyde used, the mono-PEGylation yield ranged from 50 to 64%. We could thereby overcome the major problems of random, or uncontrollable, multi-PEGylation and the post-PEGylation purification difficulties associated with the solution-phase process. N-terminal sequencing and MALDI-TOF MS confirmed that a PEG molecule was conjugated only to the N-terminus. Compared with the unmodified IFN, the mono-PEGylate showed reduced anti-viral activity as measured by a cell proliferation assay, and the bioactivity decreased further as higher molecular weight PEG was conjugated. Immunoreactivity, evaluated indirectly by antibody binding activity using a surface plasmon resonance biosensor, also decreased. Nevertheless, trypsin resistance as well as thermal stability was considerably improved.

The Significance of Traditional Storytelling in the sense of Performance Theory (연행론의 관점에서 본 전통 스토리텔링)

  • Kim, Kyung-Seop;Kim, Jeong-Lae
    • The Journal of the Convergence on Culture Technology / v.4 no.2 / pp.123-130 / 2018
  • Storytelling is a compound of two words, 'story' and 'telling'. While the static aspect of 'story' has been emphasized, the dynamic and specific aspect of 'telling' has been neglected. In discussions of 'storytelling', it is therefore necessary to move beyond talking about 'story' and address the matter in terms of 'telling' and 'interaction'. To meet this need, the term 'traditional storytelling' is coined to engage the oral-storytelling situation more actively. When storytelling is delivered by a storyteller, it can be referred to as 'traditional storytelling', or 'oral performance'. In fact, storytelling, which has a long history, originated in oral forms such as oral narration. Our current storytelling is naturally not identical to the storytelling of the oral period, but traditional storytelling offers several crucial viewpoints for the present, when 'telling' carries significant meaning. The first point to note in traditional storytelling is 'reflexivity', a kind of self-reference in which the teller reflects on and looks back at himself or herself within the narration. Through this reflexivity, the storytelling is frequently affected by the site of telling and takes on a one-off character. Another point to note is the frame; through this frame, traditional storytelling comes to hold the time and space of narration.

Ordinary kriging approach to predicting long-term particulate matter concentrations in seven major Korean cities

  • Kim, Sun-Young;Yi, Seon-Ju;Eum, Young Seob;Choi, Hae-Jin;Shin, Hyesop;Ryou, Hyoung Gon;Kim, Ho
    • Environmental Analysis Health and Toxicology / v.29 / pp.12.1-12.8 / 2014
  • Objectives: Cohort studies of associations between air pollution and health have used exposure prediction approaches to estimate individual-level concentrations. A common prediction method used in Korean cohort studies is ordinary kriging. In this study, the performance of ordinary kriging models for long-term concentrations of particulate matter less than or equal to $10{\mu}m$ in diameter ($PM_{10}$) in seven major Korean cities was investigated, with a focus on spatial prediction ability. Methods: We obtained hourly $PM_{10}$ data for 2010 at 226 urban-ambient monitoring sites in South Korea and computed annual average $PM_{10}$ concentrations at each site. Given the annual averages, we developed ordinary kriging prediction models for each of the seven major cities and for the entire country, using an exponential covariance reference model and maximum likelihood estimation. For model evaluation, cross-validation was performed and mean square error and R-squared ($R^2$) statistics were computed. Results: Mean annual average $PM_{10}$ concentrations in the seven major cities ranged between 45.5 and $66.0{\mu}g/m^3$ (standard deviations of 2.40 and $9.51{\mu}g/m^3$, respectively). Cross-validated $R^2$ values in Seoul and Busan were 0.31 and 0.23, respectively, whereas the other five cities had $R^2$ values of zero. The national model produced a higher cross-validated $R^2$ (0.36) than the city-specific models. Conclusions: In general, the ordinary kriging models performed poorly across the seven major cities and the entire country, although the national model performed relatively better. To improve model performance, future studies should examine prediction approaches that incorporate $PM_{10}$ source characteristics.
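
The sketch below illustrates, under simplified assumptions, the kind of evaluation described: ordinary kriging with an exponential covariance and a leave-one-out cross-validated $R^2$. The site coordinates, annual means, and covariance parameters (sill, range, nugget) are fixed toy values rather than maximum likelihood estimates, so it is a schematic of the approach rather than a reproduction of the paper's models.

```python
# A minimal sketch of ordinary kriging with an exponential covariance and a
# leave-one-out cross-validated R^2. Coordinates, annual means, and covariance
# parameters are illustrative assumptions, not fitted values.
import numpy as np

def exp_cov(d, sill=25.0, rang=30.0, nugget=1.0):
    """Exponential covariance with a small nugget on the diagonal."""
    return sill * np.exp(-d / rang) + nugget * (d == 0)

def ok_predict(xy_obs, z_obs, xy_new, cov=exp_cov):
    """Ordinary kriging: solve [C 1; 1' 0][w; mu] = [c0; 1] for the weights."""
    d_obs = np.linalg.norm(xy_obs[:, None, :] - xy_obs[None, :, :], axis=-1)
    d_new = np.linalg.norm(xy_obs[:, None, :] - xy_new[None, :, :], axis=-1)
    n = len(z_obs)
    A = np.ones((n + 1, n + 1)); A[:n, :n] = cov(d_obs); A[-1, -1] = 0.0
    b = np.ones((n + 1, d_new.shape[1])); b[:n, :] = cov(d_new)
    w = np.linalg.solve(A, b)[:n, :]
    return z_obs @ w

# Leave-one-out cross-validation over synthetic "monitoring sites".
rng = np.random.default_rng(2)
xy = rng.uniform(0, 100, size=(40, 2))           # site coordinates (km)
z = 55 + 0.1 * xy[:, 0] + rng.normal(0, 3, 40)   # annual mean PM10 (ug/m^3)
pred = np.array([
    ok_predict(np.delete(xy, i, 0), np.delete(z, i), xy[i:i + 1])[0]
    for i in range(len(z))
])
ss_res = np.sum((z - pred) ** 2)
print("LOOCV R^2:", 1 - ss_res / np.sum((z - z.mean()) ** 2))
```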

Low Temperature Thermal Desorption (LTTD) Treatment of Contaminated Soil

  • Alistair Montgomery;Joo, Wan-Ho;Shin, Won-Sik
    • Proceedings of the Korean Society of Soil and Groundwater Environment Conference / 2002.09a / pp.44-52 / 2002
  • Low temperature thermal desorption (LTTD) has become one of the cornerstone technologies used for the treatment of contaminated soils and sediments in the United States. LTTD technology was first used in the mid-1980s for soil treatment on sites managed under the Comprehensive Environmental Response, Compensation and Liability Act (CERCLA), or Superfund. Implementation was facilitated by CERCLA regulations that require only that applicable regulations be met, thus avoiding the need for protracted and expensive permit applications for thermal treatment equipment. The initial equipment designs typically came from technology transfer sources: asphalt manufacturing plants were converted to direct-fired LTTD systems, and conventional calciners were adapted for use as indirect-fired LTTD systems. Other innovative designs included hot sand recycle technology (initially developed for synfuels production from tar sand and oil shale), recycle sweep gas, travelling belts, and batch-charged vacuum chambers, among others. These systems were used to treat soil contaminated with total petroleum hydrocarbons (TPH), polycyclic aromatic hydrocarbons (PAHs), pesticides, polychlorinated biphenyls (PCBs), and dioxin with varying degrees of success. Ultimately, performance and cost considerations established the suite of systems used for LTTD soil treatment applications today. This paper briefly reviews the development of LTTD systems and summarizes the design, performance, and cost characteristics of the equipment in use today. Designs reviewed include continuous-feed direct-fired and indirect-fired equipment, batch-feed systems, and in-situ equipment. Performance is compared in terms of before-and-after contaminant levels in the soil and permissible emission levels in the stack gas vented to the atmosphere. The review of air emission standards covers regulations in the U.S. and the European Union (EU). Key cost centers for the mobilization and operation of LTTD equipment are identified and compared for the different types of LTTD systems in use today. A work chart is provided for the selection of the optimum LTTD system for site-specific applications. LTTD technology continues to be a cornerstone technology for soil treatment in the U.S. and elsewhere. Examples of leading-edge LTTD technologies developed in the U.S. that are now being delivered locally in global projects are described.

Assessment of Discoidal Polymeric Nanoconstructs as a Drug Carrier (약물 운반체로서의 폴리머 디스크 나노 입자에 대한 평가)

  • BAE, J.Y.;OH, E.S.;AHN, H.J.;KEY, Jaehong
    • Journal of Biomedical Engineering Research / v.38 no.1 / pp.43-48 / 2017
  • Chemotherapy, radiation therapy, and surgery are the major methods used to treat cancer. However, current cancer treatments are associated with severe side effects and high recurrence rates. Recent studies on engineering nanoparticles as drug carriers suggest possibilities for specific targeting and spatiotemporal release of drugs. While many nanoparticles demonstrate lower toxicity and better targeting than free drugs, their performance still needs to improve dramatically in terms of targeting accuracy, immune responses, and non-specific accumulation in organs. One possible way to overcome these challenges is to make nanoparticles that are precisely controlled with respect to size, shape, surface properties, and mechanical stiffness. Here, we demonstrate $500{\times}200nm$ discoidal polymeric nanoconstructs (DPNs) as a drug delivery carrier. DPNs were prepared using a top-down fabrication method that we previously reported, which controls shape as well as size. Moreover, DPNs carry multiple payloads: poly(lactic-co-glycolic acid) (PLGA), polyethylene glycol (PEG), lipid-Rhodamine B dye (RhB), and salinomycin. In this study, we demonstrated the potential of DPNs as a drug carrier to treat cancer.

The Effect of Meta-Features of Multiclass Datasets on the Performance of Classification Algorithms (다중 클래스 데이터셋의 메타특징이 판별 알고리즘의 성능에 미치는 영향 연구)

  • Kim, Jeonghun;Kim, Min Yong;Kwon, Ohbyung
    • Journal of Intelligence and Information Systems / v.26 no.1 / pp.23-45 / 2020
  • Big data is being created in a wide variety of fields such as medical care, manufacturing, logistics, sales, and SNS, and the characteristics of these datasets are equally diverse. To secure competitiveness, companies need to improve their decision-making capacity using classification algorithms. However, most practitioners do not have sufficient knowledge of which classification algorithm is appropriate for a specific problem area. In other words, determining which classification algorithm suits a given dataset has been a task requiring expertise and effort, because the relationship between the characteristics of datasets (called meta-features) and the performance of classification algorithms has not been fully understood. Moreover, there has been little research on meta-features that reflect the characteristics of multi-class data. Therefore, the purpose of this study is to empirically analyze whether the meta-features of multi-class datasets have a significant effect on the performance of classification algorithms. In this study, the meta-features of multi-class datasets were grouped into two factors, data structure and data complexity, and seven representative meta-features were selected. Among these, we included the Herfindahl-Hirschman Index (HHI), originally a measure of market concentration, as a meta-feature to replace the imbalance ratio (IR), and we developed a new index, the Reverse ReLU Silhouette Score, and added it to the meta-feature set. From the UCI Machine Learning Repository, six representative datasets (Balance Scale, PageBlocks, Car Evaluation, User Knowledge-Modeling, Wine Quality (red), and Contraceptive Method Choice) were selected. Each dataset was classified using the algorithms chosen for the study (KNN, Logistic Regression, Naïve Bayes, Random Forest, and SVM). For each dataset, 10-fold cross-validation was applied; oversampling at levels from 10% to 100% was applied to each fold, and the meta-features of the resulting data were measured. The meta-features selected were HHI, number of classes, number of features, entropy, Reverse ReLU Silhouette Score, nonlinearity of a linear classifier, and hub score. F1-score was used as the dependent variable. The results show that the six meta-features, including the Reverse ReLU Silhouette Score and the HHI proposed in this study, have a significant effect on classification performance: (1) the HHI meta-feature proposed in this study was significant for classification performance; (2) unlike the number of classes, the number of features has a significant and positive effect on classification performance; (3) the number of classes has a negative effect on classification performance; (4) entropy has a significant effect on classification performance; (5) the Reverse ReLU Silhouette Score also significantly affects classification performance at the 0.01 significance level; and (6) the nonlinearity of linear classifiers has a significant negative effect on classification performance. The analyses by individual classification algorithm were consistent with these results, except that in the regression for the Naïve Bayes algorithm the number of features, unlike in the other algorithms, did not have a significant effect.
This study makes two theoretical contributions: (1) two new meta-features (HHI and the Reverse ReLU Silhouette Score) were shown to be significant, and (2) the effects of data characteristics on classification performance were investigated using meta-features. As practical contributions, (1) the findings can be utilized in developing a system that recommends classification algorithms according to dataset characteristics, and (2) because data characteristics differ, many data scientists repeatedly test algorithms while adjusting their parameters to find the optimal one for a given situation, which wastes hardware, cost, time, and manpower; this study is expected to reduce that waste and to be useful for machine learning and data mining researchers, practitioners, and developers of machine learning-based systems. The paper consists of an introduction, related research, the research model, experiments, and a conclusion and discussion.
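
As a concrete illustration of the class-balance meta-feature discussed above, the sketch below computes the Herfindahl-Hirschman Index from class labels and contrasts it with the imbalance ratio it is proposed to replace. The label vectors are toy examples, and the Reverse ReLU Silhouette Score is not reproduced here.

```python
# A minimal sketch of HHI as a class-imbalance meta-feature, compared with the
# imbalance ratio. Label vectors below are toy examples, not the study's data.
from collections import Counter

def hhi(labels) -> float:
    """Sum of squared class proportions: 1/k for k perfectly balanced classes,
    approaching 1.0 as a single class dominates."""
    counts = Counter(labels)
    n = sum(counts.values())
    return sum((c / n) ** 2 for c in counts.values())

def imbalance_ratio(labels) -> float:
    """Majority-class count divided by minority-class count."""
    counts = Counter(labels).values()
    return max(counts) / min(counts)

balanced = ["a"] * 50 + ["b"] * 50 + ["c"] * 50
skewed = ["a"] * 120 + ["b"] * 20 + ["c"] * 10
print(hhi(balanced), imbalance_ratio(balanced))  # ~0.33, 1.0
print(hhi(skewed), imbalance_ratio(skewed))      # ~0.66, 12.0
```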

Performance of Northern Exposure Index in Reducing Estimation Error for Daily Maximum Temperature over a Rugged Terrain (북향개방지수가 복잡지형의 일 최고기온 추정오차 저감에 미치는 영향)

  • Chung, U-Ran;Lee, Kwang-Hoe;Yun, Jin-I.
    • Korean Journal of Agricultural and Forest Meteorology / v.9 no.3 / pp.195-202 / 2007
  • The normalized difference in incident solar energy between a target surface and a level surface (overheating index, OHI) is useful for eliminating the estimation error of site-specific maximum temperature in complex terrain. Because of the complexity of its calculation, however, an empirical proxy variable called the northern exposure index (NEI), which combines slope and aspect, has been used to estimate OHI based on empirical relationships between the two. An experiment with real-world landscape and temperature data was carried out to evaluate the performance of the NEI-derived OHI (N-OHI) in reducing the spatial interpolation error of daily maximum temperature, compared with the original OHI. We collected daily maximum temperature data from 7 sites in a mountainous watershed with a $149 km^2$ area and a 795 m elevation range ($651{\sim}1,445m$) in Pyongchang, Kangwon province. The northern exposure index was calculated for all 166,050 grid cells constituting the watershed, based on a 30-m digital elevation model. Daily OHI was calculated for the same watershed and regressed against the variation in NEI, and the regression equations were used to estimate the N-OHI for the 15th of each month. Deviations of daily maximum temperature at the 7 sites from those measured at the nearby synoptic station were calculated from June 2006 to February 2007 and regressed against the N-OHI; the same procedure was repeated with the original OHI values. The ratios of the sum of squared errors accounted for by the N-OHI were 0.46 (winter), 0.24 (fall), and 0.01 (summer), while those for the original OHI were 0.52, 0.37, and 0.15, respectively.
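
For illustration, the sketch below derives slope, aspect, and a northness-type exposure index from a DEM grid and regresses synthetic temperature deviations on it. The formula sin(slope) * cos(aspect) is one common definition of such an index and may differ from the NEI formulation used in the paper; the DEM, site locations, and temperature deviations are toy values.

```python
# A minimal sketch of a northness-type exposure index from a DEM, with a toy
# regression of daily Tmax deviations on it. The index formula, axis/aspect
# convention, and all data here are illustrative assumptions.
import numpy as np

def slope_aspect(dem: np.ndarray, cell_size: float):
    """Slope (rad) and aspect (rad) from a DEM via finite differences."""
    dz_dy, dz_dx = np.gradient(dem, cell_size)
    slope = np.arctan(np.hypot(dz_dx, dz_dy))
    aspect = np.arctan2(-dz_dx, dz_dy)  # 0 rad taken as north; convention is an assumption
    return slope, aspect

def northern_exposure_index(dem: np.ndarray, cell_size: float) -> np.ndarray:
    """Northness-type index: positive on north-facing, negative on south-facing slopes."""
    slope, aspect = slope_aspect(dem, cell_size)
    return np.sin(slope) * np.cos(aspect)

# Toy use: regress daily Tmax deviations at a few "sites" on their index values.
rng = np.random.default_rng(3)
dem = np.cumsum(rng.normal(0, 1, (100, 100)), axis=0) * 5 + 800  # synthetic terrain (m)
nei = northern_exposure_index(dem, cell_size=30.0)
site_idx = rng.integers(0, 100, size=(7, 2))
site_nei = nei[site_idx[:, 0], site_idx[:, 1]]
tmax_dev = -2.0 * site_nei + rng.normal(0, 0.3, 7)  # synthetic deviations (deg C)
slope_coef, intercept = np.polyfit(site_nei, tmax_dev, 1)
print(slope_coef, intercept)
```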