• Title/Summary/Keyword: Current detection

Search Results: 2,524

Gateway RFP-Fusion Vectors for High Throughput Functional Analysis of Genes

  • Park, Jae-Yong;Hwang, Eun Mi;Park, Nammi;Kim, Eunju;Kim, Dong-Gyu;Kang, Dawon;Han, Jaehee;Choi, Wan Sung;Ryu, Pan-Dong;Hong, Seong-Geun
    • Molecules and Cells
    • /
    • v.23 no.3
    • /
    • pp.357-362
    • /
    • 2007
  • There is an increasing demand for high throughput (HTP) methods for gene analysis on a genome-wide scale. However, the current repertoire of HTP detection methodologies allows only a limited range of cellular phenotypes to be studied. We have constructed two HTP-optimized expression vectors based on the red fluorescent protein (RFP) reporter gene. These vectors produce RFP-tagged target proteins in a multiple expression system using Gateway cloning technology (GCT). The RFP tag was fused to the cloned genes, thereby allowing us to localize the expressed proteins in mammalian cells. The effectiveness of the vectors was evaluated using an HTP-screening system. Sixty representative human C2 domains were tagged with RFP and overexpressed in HiB5 neuronal progenitor cells, and two C2 domains that promoted the neuronal differentiation of HiB5 cells were studied in detail. Our results show that the two vectors developed in this study are useful for functional gene analysis with an HTP-screening system on a genome-wide scale.

Degradation Degree Evaluation of Heat Resisting Steel by Electrochemical Technique (Part I : Mechanism and Its Possibility of Field Application) (電氣化學的 方法에 의한 耐熱鋼의 劣化度 測定 제1보)

  • 정희돈;권녕각
    • Transactions of the Korean Society of Mechanical Engineers
    • /
    • v.16 no.3
    • /
    • pp.598-607
    • /
    • 1992
  • The environmental degradation of structural steel at high temperature is one of the key phenomena governing the availability and life of a plant. This degradation, which results from microstructural changes caused by long exposure to high temperature, affects mechanical properties such as creep strength and toughness. For instance, boiler tube materials usually tend to degrade after long-term operation through precipitation, spheroidization, and coarsening of carbides and through changes in their chemical composition. In this study, material degradation under high-temperature exposure was investigated by evaluating carbide precipitation. The electrochemical polarization method was used to investigate the precipitation and coarsening of carbides. Modified electrochemical potentiokinetic reactivation (EPR) tests showed that passivation of Mo-rich carbides did not occur, and an anodic peak current (Ip) indicating the precipitation of Mo$_{6}$C was also observed. It was also confirmed that the special electrolytic cell assembled in this research can be used for the detection of Mo$_{6}$C precipitation in the field.

Unsupervised Motion Pattern Mining for Crowded Scenes Analysis

  • Wang, Chongjing;Zhao, Xu;Zou, Yi;Liu, Yuncai
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.6 no.12
    • /
    • pp.3315-3337
    • /
    • 2012
  • Crowded scene analysis is a challenging topic in the field of computer vision. Detecting diverse motion patterns in crowded scenarios from video is the critical yet difficult part of this problem. In this paper, we propose a novel approach to mining motion patterns by simultaneously utilizing motion information over both long-term periods and short intervals. To capture long-term motion effectively, we introduce the Motion History Image (MHI) representation to obtain a global perspective of the crowd motion. The combination of MHI and optical flow, which is used to obtain instantaneous motion information, gives rise to discriminative spatio-temporal motion features. Benefiting from the robustness and efficiency of this novel motion representation, the subsequent motion pattern mining is carried out in a completely unsupervised way. The motion vectors are clustered hierarchically through an automatic hierarchical clustering algorithm built on a graphical model. This method overcomes the instability of optical flow in dealing with temporal continuity in crowded scenes. The clustering results reveal the distribution of motion patterns in the crowded videos. To validate the performance of the proposed approach, we conduct experimental evaluations on several challenging videos including vehicles and pedestrians. The reliable detection results demonstrate the effectiveness of our approach.
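
The Motion History Image representation mentioned above can be sketched in a few lines. The following is a minimal NumPy illustration of the standard MHI update rule (pixels judged to be moving are stamped with a maximum duration tau, all others decay); it is not the authors' implementation, and the frame-differencing threshold, tau, and the synthetic frame sequence are assumptions for demonstration only.

```python
import numpy as np

def update_mhi(mhi, prev_frame, cur_frame, tau=30.0, motion_thresh=25):
    """Update a Motion History Image with one new grayscale frame.

    Moving pixels (frame difference above motion_thresh) are stamped with the
    maximum duration tau; all other pixels decay by 1 (never below zero).
    """
    diff = np.abs(cur_frame.astype(np.int16) - prev_frame.astype(np.int16))
    moving = diff > motion_thresh
    return np.where(moving, tau, np.maximum(mhi - 1.0, 0.0))

# Usage: accumulate long-term motion over a synthetic frame sequence.
h, w = 120, 160
mhi = np.zeros((h, w), dtype=np.float32)
prev = np.zeros((h, w), dtype=np.uint8)
for t in range(60):
    cur = np.zeros((h, w), dtype=np.uint8)
    cur[:, (2 * t) % w] = 255          # a vertical bar sweeping across the frame
    mhi = update_mhi(mhi, prev, cur)
    prev = cur
print("pixels with recent motion:", int((mhi > 0).sum()))
```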

A Comparison Study between Uniform Testing Effort and Weibull Testing Effort during Software Development (소프트웨어 개발시 일정테스트노력과 웨이불 테스트 노력의 비교 연구)

  • 최규식;장원석;김종기
    • Journal of Information Technology Application
    • /
    • v.3 no.3
    • /
    • pp.91-106
    • /
    • 2001
  • We propose a software reliability growth model that incorporates the amount of uniform and Weibull testing effort expended during the software testing phase. The time-dependent behavior of the testing effort is described by uniform and Weibull curves. Assuming that the error detection rate with respect to the amount of testing effort spent during the testing phase is proportional to the current error content, the model is formulated as a nonhomogeneous Poisson process. Using this model, a method of data analysis for software reliability measurement is developed. The optimum release time is determined by considering the initial reliability $R(x_0)$. For uniform testing effort the cases are $R(x_0) > R_0$, $R_0 > R(x_0) > R_0^d$, and $R(x_0) < R_0^d$; the ideal case is $R_0 > R(x_0) > R_0^d$. Likewise, for Weibull testing effort the cases are $R(x_0) \geq R_0$, $R_0 > R(x_0) > R$ (equation omitted), and $R(x_0) < R$ (equation omitted); the ideal case is $R_0 > R(x_0) > R$ (equation omitted).

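
As a rough illustration of the class of model described in the abstract above, and not the authors' exact formulation, a testing-effort-dependent NHPP growth model is often written with mean value function m(t) = a(1 - exp(-r * W(t))), where W(t) is the cumulative testing effort consumed by time t. The sketch below evaluates this mean value function for a uniform and a Weibull effort curve; all parameter values (a, r, and the effort-curve constants) are assumptions chosen purely for demonstration.

```python
import numpy as np

def cumulative_effort_uniform(t, rate=2.0):
    """Cumulative testing effort for a uniform (constant-rate) effort function."""
    return rate * t

def cumulative_effort_weibull(t, alpha=50.0, beta=0.1, m=2.0):
    """Cumulative testing effort for a Weibull-type effort function:
    W(t) = alpha * (1 - exp(-beta * t**m))."""
    return alpha * (1.0 - np.exp(-beta * t**m))

def expected_errors(t, W, a=100.0, r=0.05):
    """NHPP mean value function m(t) = a * (1 - exp(-r * W(t))): the expected
    number of errors detected by time t, assuming the detection rate per unit
    effort is proportional to the remaining error content."""
    return a * (1.0 - np.exp(-r * W(t)))

t = np.linspace(0.0, 20.0, 5)
print("uniform effort :", expected_errors(t, cumulative_effort_uniform).round(1))
print("weibull effort :", expected_errors(t, cumulative_effort_weibull).round(1))
```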

Automatic Fire Detector Spacing Calculation for Performance Based Design (성능위주설계를 위한 화재감지기배치의 공학적연구)

  • Park, Dong-Ha
    • Fire Science and Engineering
    • /
    • v.24 no.1
    • /
    • pp.15-23
    • /
    • 2010
  • The placement method for fire detectors prescribed in the current fire safety regulations amounts to installing a prescribed number of detectors according to floor area. However, this regulation has no scientific basis; standards from foreign countries were simply adopted, and fire detectors are installed to comply with them. There are two approaches to designing fire protection systems: Prescriptive-Based Design, which follows stipulated regulations such as the fire safety standards, and Performance-Based Design, which rests on engineering knowledge such as fire dynamics, structural dynamics, mechanics of materials, fluid mechanics, and thermodynamics. Recently, the Fire Protection System Construction Business Act was revised so that fire protection systems can be designed using the Performance-Based Design method ('05. 8. 4), although the method has not yet been put into practice. In addition, the enforcement decree defines the range of fire protection targets to which Performance-Based Design applies ('07. 1. 24). This study builds a simulator in which spacing formulas are implemented in software so that the detectors of an automatic fire detection system can be placed at optimized spacing, compares the results with detector layouts produced by Performance-Based Design, and analyzes them, with the aim of establishing the Performance-Based Design method in the future.
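
For orientation, the kind of spacing calculation such a simulator automates can be sketched as follows. This is a generic square-grid layout based on a detector's circular coverage radius, not the formula used in the paper; the room dimensions and the 5.8 m coverage radius are assumed values for illustration only.

```python
import math

def square_grid_spacing(coverage_radius_m):
    """Maximum center-to-center spacing S for a square grid such that a detector
    with circular coverage radius r still covers its grid cell: the cell diagonal
    must not exceed 2r, so S = r * sqrt(2)."""
    return coverage_radius_m * math.sqrt(2.0)

def detectors_needed(room_length_m, room_width_m, coverage_radius_m):
    """Number of detectors on a square grid covering a rectangular room."""
    s = square_grid_spacing(coverage_radius_m)
    nx = math.ceil(room_length_m / s)
    ny = math.ceil(room_width_m / s)
    return nx * ny, s

n, spacing = detectors_needed(30.0, 20.0, coverage_radius_m=5.8)  # assumed values
print(f"grid spacing = {spacing:.2f} m, detectors required = {n}")
```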

A Design of the Ontology-based Situation Recognition System to Detect Risk Factors in a Semiconductor Manufacturing Process (반도체 공정의 위험요소 판단을 위한 온톨로지 기반의 상황인지 시스템 설계)

  • Baek, Seung-Min;Jeon, Min-Ho;Oh, Chang-Heon
    • Journal of Advanced Navigation Technology
    • /
    • v.17 no.6
    • /
    • pp.804-809
    • /
    • 2013
  • Current state-monitoring systems in semiconductor manufacturing processes are based on manually collected sensor data, which limits complex malfunction detection and real-time monitoring. This study designs a situation recognition algorithm that builds a network over time from a domain ontology, and proposes a system that serves users by generating events when risk factors are found in the semiconductor process. To this end, a multiple-sensor node for situational inference was designed and tested. In the experiment, events governed by the temporal inference rule were generated for patterns that formed over time in the collected sensor data, whereas events related to malfunctions and external time factors produced log data only.
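
A minimal, purely illustrative sketch of a temporal inference rule of the kind the abstract describes is given below; it is not the authors' ontology or system. It raises a risk event only when a sensor reading stays above a threshold for a sustained window, and merely logs isolated exceedances. All names, thresholds, and data are assumptions.

```python
from collections import deque

def monitor(readings, threshold=70.0, window=5):
    """Yield ('RISK_EVENT', t) when the last `window` readings all exceed
    `threshold` (a condition sustained over time); otherwise yield ('LOG', t)
    for isolated exceedances."""
    recent = deque(maxlen=window)
    for t, value in enumerate(readings):
        recent.append(value > threshold)
        if len(recent) == window and all(recent):
            yield ("RISK_EVENT", t)
        elif value > threshold:
            yield ("LOG", t)

# Example: temperature-like readings from a process chamber (assumed data).
data = [60, 72, 65, 71, 73, 74, 75, 76, 68]
for event in monitor(data):
    print(event)
```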

Reasonability of Logistic Curve on S/W (로지스틱 곡선을 이용한 타당성)

  • Kim, Sun-Il;Che, Gyu-Shik;Jo, In-June
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.12 no.1
    • /
    • pp.1-9
    • /
    • 2008
  • The logistic curve is studied as the most desirable description of software testing effort. Assuming that the error detection rate with respect to the amount of testing effort spent during the testing phase is proportional to the current error content, a software reliability growth model is formulated as a nonhomogeneous Poisson process. Using this model, a method of data analysis for software reliability measurement is developed. After defining software reliability, this paper studies the relations between testing time and reliability and between the duration following failure fixing and reliability. SRGMs in the literature have used the exponential, Rayleigh, or Weibull curve for the amount of testing effort during the software testing phase. However, in some software development environments it might not be appropriate to represent the testing-effort consumption curve by one of these already proposed curves. Therefore, this paper shows that a logistic testing-effort function can adequately express the software development/testing effort curve and that it gives good predictive capability based on real failure data.
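
The logistic testing-effort function referred to above is commonly written as W(t) = N / (1 + A * exp(-alpha * t)). The sketch below simply evaluates this curve and its consumption rate dW/dt = alpha * W * (1 - W/N); the parameter values are assumptions for illustration and are not taken from the paper.

```python
import numpy as np

def logistic_effort(t, N=100.0, A=20.0, alpha=0.5):
    """Cumulative testing effort consumed by time t (logistic curve)."""
    return N / (1.0 + A * np.exp(-alpha * t))

def effort_rate(t, N=100.0, A=20.0, alpha=0.5):
    """Instantaneous effort consumption rate dW/dt of the logistic curve."""
    w = logistic_effort(t, N, A, alpha)
    return alpha * w * (1.0 - w / N)

t = np.linspace(0.0, 20.0, 5)
print("W(t)  :", logistic_effort(t).round(2))
print("dW/dt :", effort_rate(t).round(2))
```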

Optimal Starting Age for Colorectal Cancer Screening in an Era of Increased Metabolic Unhealthiness: A Nationwide Korean Cross-Sectional Study

  • Choi, Yoon Jin;Lee, Dong Ho;Han, Kyung-Do;Kim, Hyun Soo;Yoon, Hyuk;Shin, Cheol Min;Park, Young Soo;Kim, Nayoung
    • Gut and Liver
    • /
    • v.12 no.6
    • /
    • pp.655-663
    • /
    • 2018
  • Background/Aims: The association between metabolic syndrome and colorectal cancer (CRC) has been suggested as one of the causes of the increasing incidence of CRC, particularly in younger age groups. The present study examined whether the current age threshold (50 years) for CRC screening in Korea requires modification when the increase in metabolic unhealthiness is taken into account. Methods: We analyzed data from the National Health Insurance Corporation database, which covers approximately 97% of the population in Korea. CRC risk was evaluated with stratification by age and the presence or absence of relevant metabolic syndrome components (diabetes, dyslipidemia, and hypertension). Results: A total of 51,612,316 subjects enrolled during 2014 to 2015 were analyzed. Among them, 19.3% had diabetes, hypertension, dyslipidemia, or some combination thereof. This population had a higher incidence of CRC than those without these conditions, and the difference was more prominent in subjects <40 years of age. The optimal cutoff age for detecting CRC, based on the highest Youden index, was 45 years among individuals without diabetes, dyslipidemia, and hypertension. Individuals with at least one of these components of metabolic syndrome had the highest Youden index at 62 years of age, but the value was only 0.2. Resetting the cutoff age from 50 years to 45 years achieved a 6% increase in sensitivity for CRC detection in the total population. Conclusions: Starting CRC screening earlier, namely at 45 rather than 50 years of age, may improve secondary prevention of CRC in Korea.
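
The optimal cutoff in the abstract is selected by maximizing the Youden index J = sensitivity + specificity - 1 over candidate thresholds. The snippet below demonstrates that selection procedure on synthetic data; it is a sketch of the statistic only, not the study's analysis, and the simulated age-risk relationship is an assumption.

```python
import numpy as np

def youden_optimal_cutoff(ages, has_crc):
    """Return the age cutoff maximizing Youden's J = sensitivity + specificity - 1,
    where a subject is 'screen positive' if age >= cutoff."""
    best_cut, best_j = None, -1.0
    for cut in np.unique(ages):
        positive = ages >= cut
        tp = np.sum(positive & has_crc)
        fn = np.sum(~positive & has_crc)
        tn = np.sum(~positive & ~has_crc)
        fp = np.sum(positive & ~has_crc)
        sens = tp / (tp + fn)
        spec = tn / (tn + fp)
        j = sens + spec - 1.0
        if j > best_j:
            best_cut, best_j = cut, j
    return best_cut, best_j

# Synthetic example: disease probability rises with age (illustrative only).
rng = np.random.default_rng(0)
ages = rng.integers(30, 80, size=5000)
has_crc = rng.random(5000) < (ages - 25) / 400.0
print(youden_optimal_cutoff(ages, has_crc))
```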

Complexity Estimation Based Work Load Balancing for a Parallel Lidar Waveform Decomposition Algorithm

  • Jung, Jin-Ha;Crawford, Melba M.;Lee, Sang-Hoon
    • Korean Journal of Remote Sensing
    • /
    • v.25 no.6
    • /
    • pp.547-557
    • /
    • 2009
  • LIDAR (LIght Detection And Ranging) is an active remote sensing technology that provides 3D coordinates of the Earth's surface by performing range measurements from the sensor. Early small-footprint LIDAR systems recorded multiple discrete returns from the back-scattered energy. Recent advances in LIDAR hardware now make it possible to record the full digital waveform of the returned energy. LIDAR waveform decomposition involves separating the return waveform into a mixture of components, which are then used to characterize the original data. The most common statistical mixture model used for this process is the Gaussian mixture. Waveform decomposition plays an important role in LIDAR waveform processing, since the resulting components are expected to represent reflection surfaces within waveform footprints; hence the decomposition results ultimately affect the interpretation of LIDAR waveform data. Computational requirements in the waveform decomposition process result from two factors: (1) estimation of the number of components in a mixture and the resulting parameter estimates, which are inter-related and cannot be solved separately, and (2) parameter optimization has no closed-form solution and thus must be solved iteratively. The current state-of-the-art airborne LIDAR system acquires more than 50,000 waveforms per second, so decomposing this enormous number of waveforms is challenging on a traditional single-processor architecture. To tackle this issue, four parallel LIDAR waveform decomposition algorithms with different work-load balancing schemes - (1) no weighting, (2) decomposition-results-based linear weighting, (3) decomposition-results-based squared weighting, and (4) decomposition-time-based linear weighting - were developed and tested with varying numbers of processors (8-256). The results were compared in terms of efficiency. Overall, the decomposition-time-based linear weighting approach yielded the best performance among the four.
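
The work-load balancing idea, assigning waveforms to processors in proportion to an estimated decomposition cost, can be sketched as a greedy longest-processing-time-first assignment. This is an illustrative scheme in the spirit of the weighting strategies compared in the paper, not the authors' implementation, and the per-waveform cost values are assumptions.

```python
import heapq

def balance_workload(waveform_costs, n_processors):
    """Greedy longest-processing-time-first assignment: repeatedly give the most
    expensive remaining waveform to the currently least-loaded processor."""
    heap = [(0.0, p) for p in range(n_processors)]   # (current load, processor id)
    heapq.heapify(heap)
    assignment = {p: [] for p in range(n_processors)}
    for idx, cost in sorted(enumerate(waveform_costs), key=lambda x: -x[1]):
        load, p = heapq.heappop(heap)
        assignment[p].append(idx)
        heapq.heappush(heap, (load + cost, p))
    return assignment

# Assumed cost model: cost grows with the expected number of Gaussian components.
costs = [1.0, 3.2, 0.4, 2.7, 5.1, 0.9, 1.8, 4.3]
print(balance_workload(costs, n_processors=3))
```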

Study of the Optimization and the Depth Profile Using a Flat Type Ion Source in Glow Discharge Mass Spectrometry

  • Woo Jin Chun;Kim, Hyo Jin;Lim Heoung Bin;Moon Dae Won;Lee Kwang Woo
    • Bulletin of the Korean Chemical Society
    • /
    • v.13 no.6
    • /
    • pp.620-624
    • /
    • 1992
  • The analytical performance of a glow discharge mass spectrometer (GD-MS) using a flat-type ion source is discussed. The efficiency of ion extraction was maximized at an anode-cathode distance of 6 mm. At source operating conditions of 4 mA, -1000 V, and 1 mbar, the optimum voltages for the sampler and skimmer were 40 V and -280 V, respectively. The intensities of Cu, Zn, and Mn increased approximately as the square root of the current. Korea standard reference materials (KSRM) were tested in an application study. The detection limits of most elements were in the range of several ppm at the optimized operating conditions. The peaks of aluminum and chromium were interfered with by those of residual gases. Depth profiles of nickel-coated copper specimens (3, 5, and 10 ${\mu}m$ thick) were obtained by plotting the intensities of Ni and Cr against time, after checking the thickness of the nickel coating with a scanning electron microscope (SEM). A sputtering rate of 0.2 ${\mu}m/min$ at the optimum operating conditions was determined from the slope of the plot of coating thickness versus sputtering time. Roughness profiles of the specimen craters after 16 min of discharge were also obtained using a Talysurf 5M-120 roughness tester.
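
The sputtering-rate determination described at the end of the abstract amounts to a linear fit of coating thickness against the sputtering time needed to reach the substrate. The sketch below shows that calculation; the breakthrough times are assumed values chosen only to be consistent with a rate of about 0.2 um/min, not measurements from the paper.

```python
import numpy as np

# Nickel coating thicknesses (um) and assumed times (min) at which the coating
# was sputtered through in the depth profile (illustrative values only).
thickness_um = np.array([3.0, 5.0, 10.0])
breakthrough_min = np.array([15.0, 25.0, 50.0])

# Sputtering rate = slope of thickness vs. time (um/min).
rate, intercept = np.polyfit(breakthrough_min, thickness_um, 1)
print(f"sputtering rate ~ {rate:.2f} um/min")
```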