• Title/Summary/Keyword: multiple target detection


Gateway RFP-Fusion Vectors for High Throughput Functional Analysis of Genes

  • Park, Jae-Yong;Hwang, Eun Mi;Park, Nammi;Kim, Eunju;Kim, Dong-Gyu;Kang, Dawon;Han, Jaehee;Choi, Wan Sung;Ryu, Pan-Dong;Hong, Seong-Geun
    • Molecules and Cells / v.23 no.3 / pp.357-362 / 2007
  • There is an increasing demand for high throughput (HTP) methods for gene analysis on a genome-wide scale. However, the current repertoire of HTP detection methodologies allows only a limited range of cellular phenotypes to be studied. We have constructed two HTP-optimized expression vectors based on the red fluorescent reporter protein (RFP) gene. These vectors produce RFP-tagged target proteins in a multiple expression system using Gateway cloning technology (GCT). The RFP tag was fused to the cloned genes, thereby allowing us to localize the expressed proteins in mammalian cells. The effectiveness of the vectors was evaluated using an HTP-screening system. Sixty representative human C2 domains were tagged with RFP and overexpressed in HiB5 neuronal progenitor cells, and we studied in detail two C2 domains that promoted the neuronal differentiation of HiB5 cells. Our results show that the two vectors developed in this study are useful for functional gene analysis using an HTP-screening system on a genome-wide scale.

A Mobile P2P Message Platform Enabling the Energy-Efficient Handover between Heterogeneous Networks (이종 네트워크 간 에너지 효율적인 핸드오버를 지원하는 모바일 P2P 메시지 플랫폼)

  • Kim, Tae-Yong;Kang, Kyung-Ran;Cho, Young-Jong
    • Journal of KIISE: Computing Practices and Letters / v.15 no.10 / pp.724-739 / 2009
  • This paper proposes an energy-efficient message delivery scheme and a software platform that exploit the multiple network interfaces and GPS receivers of current mobile devices. A mobile terminal selects a delivery method among 'direct', 'indirect', and 'WAN' based on the position information of itself and other terminals. The 'direct' method sends a message straight to the target terminal using a local radio access technology (RAT). The 'indirect' method extends the service area by exploiting intermediate terminals as relay nodes. If the target terminal is too far to reach through the 'direct' or 'indirect' method, the message is sent using wireless WAN technology. Because the proposed scheme exploits position information in determining handover time and direction, power consumption is drastically reduced. Network simulation results show that the proposed delivery scheme improves message transfer efficiency and handover detection latency. We implemented a message platform realizing the proposed delivery scheme on a smartphone and compared it with other typical message platforms in terms of energy efficiency, both by measuring real power consumption and by applying mathematical modeling. The comparison results show that our platform requires significantly less power.
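As a rough illustration of the position-based method selection described above, the following Python sketch chooses among the three delivery methods from two GPS fixes; the range thresholds and function names are illustrative assumptions, not values from the paper.

```python
import math

# Illustrative thresholds (assumptions, not from the paper): local radio
# access technology (RAT) range and the maximum relay extension.
DIRECT_RANGE_M = 100.0    # assumed one-hop range of the local RAT
INDIRECT_RANGE_M = 200.0  # assumed reachable range via a relay terminal

def gps_distance_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two GPS fixes (haversine formula)."""
    r = 6371000.0  # mean Earth radius in meters
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def choose_delivery(src_pos, dst_pos):
    """Pick 'direct', 'indirect', or 'WAN' from the two terminals' positions."""
    d = gps_distance_m(*src_pos, *dst_pos)
    if d <= DIRECT_RANGE_M:
        return "direct"    # one hop over the local RAT
    if d <= INDIRECT_RANGE_M:
        return "indirect"  # relay through an intermediate terminal
    return "WAN"           # fall back to wireless WAN

print(choose_delivery((37.5665, 126.9780), (37.5670, 126.9785)))  # -> 'direct'
```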

Analysis Method for Full-length LiDAR Waveforms (라이다 파장 분석 방법론에 대한 연구)

  • Jung, Myung-Hee;Yun, Eui-Jung;Kim, Cheon-Shik
    • Journal of the Institute of Electronics Engineers of Korea CI / v.44 no.4 s.316 / pp.28-35 / 2007
  • Airborne laser altimeters have been utilized for 3D topographic mapping of the Earth, Moon, and planets with high resolution and accuracy; such altimetry is a rapidly growing remote sensing technique that determines topography by measuring the round-trip time of an emitted laser pulse. The traveling time from the laser scanner to the Earth's surface and back is directly related to the distance from the sensor to the ground. When there are several objects within the travel path of the laser pulse, the reflected laser pulses are distorted by surface variation within the footprint, generating multiple echoes because each target transforms the emitted pulse. The shapes of the received waveforms also contain important information about surface roughness, slope, and reflectivity. Waveform processing algorithms parameterize and model the return signal resulting from the interaction of the transmitted laser pulse with the surface, so that each of the multiple targets within the footprint can be identified. Assuming each response is Gaussian, the returns are modeled as a Gaussian mixture, and the parameters of the model are estimated by the least-mean-squares (LMS) method or the EM algorithm. However, each response actually shows right-side skewness with a slowly decaying tail. For applications that require more accurate analysis, the tail information must be quantified by an approach that decomposes the tail. A method to handle this problem is proposed in this study.
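As a minimal sketch of the mixture-modeling step, the following Python example fits two Gaussian echoes to a synthetic full-waveform return by least squares; all amplitudes, centers, and widths are synthetic, and the paper's actual estimator additionally handles the skewed tail.

```python
import numpy as np
from scipy.optimize import curve_fit

def two_gaussians(t, a1, m1, s1, a2, m2, s2):
    """Sum of two Gaussian echoes, one per target within the footprint."""
    g = lambda a, m, s: a * np.exp(-0.5 * ((t - m) / s) ** 2)
    return g(a1, m1, s1) + g(a2, m2, s2)

# Synthetic full-waveform return: two overlapping echoes plus noise.
t = np.linspace(0, 100, 400)                      # time bins (ns)
true = two_gaussians(t, 1.0, 40, 4.0, 0.6, 58, 6.0)
rng = np.random.default_rng(0)
wave = true + rng.normal(0, 0.02, t.size)

# Least-squares fit (the 'LMS' route); p0 is a rough initial guess.
p0 = [0.8, 35, 5, 0.5, 60, 5]
popt, _ = curve_fit(two_gaussians, t, wave, p0=p0)
print("echo centers (ns):", popt[1], popt[4])     # ~40 and ~58
```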

Optimization of solid-phase extraction for the liquid chromatography-tandem mass spectrometry analysis of basic drugs in equine urine (액체크로마토그래피-텐덤질량분석법을 위한 경주마 소변 중 염기성 약물의 고체상 추출법 최적화)

  • Shin, Hyun Du;Yang, Ji Suk;Jung, Mihye;Kim, Hyung-Seung;Youm, Jeong-Rok;Hu, Man Bae;Kim, Sung Jean;Han, Sang Beom
    • Analytical Science and Technology / v.21 no.5 / pp.412-423 / 2008
  • A procedure based on solid-phase extraction (SPE) followed by liquid chromatography-tandem mass spectrometry has been developed for the simultaneous analysis of 55 basic drugs in equine urine. The test scope covers diverse classes of drugs, including β-blockers, β-agonists, antihypotensives, CNS stimulants, sedatives, tranquilizers, antidepressants, antihypertensives, and so on. LC-MS/MS separation and quantification were carried out in positive electrospray ionization and multiple reaction monitoring (MRM) mode. Four different brands of mixed-mode cation exchange SPE sorbents were compared: UCT XTRACT® XRDAH, Supelco DSC-MCAX®, Varian Bond Elut Certify®, and Waters Oasis® MCX. The UCT XTRACT® XRDAH sorbent provided the best results in the preconcentration of samples, yielding relative recoveries higher than 80 % except for terbutaline (41.3 %), salbutamol (71.5 %), heptaminol (70.7 %), and phenylpropanolamine (66.3 %). Detection limits of the target drugs provided by the proposed analytical procedure were between 0.2 and 8.3 ng/mL.
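For readers unfamiliar with the recovery figures quoted above, a relative recovery is just a peak-area ratio; the tiny sketch below shows the arithmetic with invented area values, not data from the paper.

```python
# Relative recovery for one analyte: peak area of a sample spiked before
# extraction divided by that of a sample spiked after extraction, times 100.
# The area values below are invented for illustration.
area_spiked_before = 8.1e4   # analyte spiked into urine, then extracted
area_spiked_after = 9.8e4    # analyte spiked into a blank extract
relative_recovery = 100.0 * area_spiked_before / area_spiked_after
print(f"relative recovery = {relative_recovery:.1f} %")  # -> 82.7 %
```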

Comparison of Association Rule Learning and Subgroup Discovery for Mining Traffic Accident Data (교통사고 데이터의 마이닝을 위한 연관규칙 학습기법과 서브그룹 발견기법의 비교)

  • Kim, Jeongmin;Ryu, Kwang Ryel
    • Journal of Intelligence and Information Systems / v.21 no.4 / pp.1-16 / 2015
  • Traffic accidents have been one of the major causes of death worldwide for the last several decades. According to World Health Organization statistics, approximately 1.24 million deaths occurred on the world's roads in 2010. In order to reduce future traffic accidents, multipronged approaches have been adopted, including traffic regulations, injury-reducing technologies, driver training programs, and so on. Records on traffic accidents are generated and maintained for this purpose. To make these records meaningful and effective, it is necessary to analyze the relationships between traffic accidents and related factors, including vehicle design, road design, weather, driver behavior, etc. Insight derived from such analysis can be used for accident prevention. Traffic accident data mining is an activity to find useful knowledge about such relationships that is not yet well known and that users may be interested in. Many studies about mining accident data have been reported over the past two decades. Most of them focused on predicting the risk of accidents using accident-related factors. Supervised learning methods like decision trees, logistic regression, k-nearest neighbors, and neural networks are used for such prediction. However, the prediction models derived from these algorithms are too complex for humans to understand, because the main purpose of these algorithms is prediction, not explanation of the data. Some studies use unsupervised clustering algorithms to divide the data into several groups, but the derived groups themselves are still not easy for humans to understand, so additional analytic work is necessary. Rule-based learning methods are adequate when we want to derive a comprehensible form of knowledge about the target domain. They derive a set of if-then rules that represent relationships between the target feature and other features. Rules are fairly easy for humans to interpret, and therefore they can provide insight and comprehensible results. Association rule learning and subgroup discovery are representative rule-based learning methods for descriptive tasks. These two approaches have been used in a wide range of areas, from transaction analysis and accident data analysis to the detection of statistically significant patient risk groups and the discovery of key persons in social communities. We use both the association rule learning method and the subgroup discovery method to discover useful patterns from a traffic accident dataset consisting of many features, including the profile of the driver, the location of the accident, the type of accident, information on the vehicle, violations of regulations, and so on. The association rule learning method, which is an unsupervised learning method, searches for frequent item sets in the data and translates them into rules. In contrast, the subgroup discovery method is a supervised learning method that discovers rules for user-specified concepts satisfying a certain degree of generality and unusualness. Depending on which aspect of the data we focus our attention on, we may combine multiple relevant features of interest into a synthetic target feature and give it to the rule learning algorithms. After a set of rules is derived, some postprocessing steps are taken to make the rule set more compact and easier to understand by removing uninteresting or redundant rules.
We conducted a set of experiments mining our traffic accident data in both unsupervised mode and supervised mode to compare these rule-based learning algorithms. The experiments reveal that association rule learning, in its pure unsupervised mode, can discover some hidden relationships among the features. Under a supervised learning setting with a combinatorial target feature, however, the subgroup discovery method finds good rules much more easily than the association rule learning method, which requires a lot of effort to tune the parameters.
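To make the two rule-quality viewpoints concrete, the following self-contained Python sketch computes support and confidence (the association-rule view) and weighted relative accuracy, WRAcc (a standard subgroup-discovery quality measure), over a tiny synthetic accident dataset; the features and thresholds are illustrative assumptions.

```python
from itertools import combinations

# Tiny synthetic accident records (illustrative features, not the paper's data).
records = [
    {"night", "rain", "severe"},
    {"night", "rain", "severe"},
    {"day", "clear", "minor"},
    {"night", "clear", "minor"},
    {"day", "rain", "severe"},
    {"night", "rain", "minor"},
]

def support(itemset):
    """Fraction of records containing every item in the itemset."""
    return sum(itemset <= r for r in records) / len(records)

def confidence(antecedent, consequent):
    """P(consequent | antecedent); 0 if the antecedent never occurs."""
    s = support(antecedent)
    return support(antecedent | consequent) / s if s else 0.0

def wracc(antecedent, target):
    """Weighted relative accuracy, a common subgroup-discovery measure."""
    return support(antecedent) * (confidence(antecedent, target) - support(target))

# Association-rule view: frequent antecedents with high confidence for 'severe'.
items = {i for r in records for i in r} - {"severe", "minor"}
for k in (1, 2):
    for ante in (set(c) for c in combinations(sorted(items), k)):
        if support(ante) >= 0.3 and confidence(ante, {"severe"}) >= 0.7:
            print(sorted(ante), "-> severe, conf =",
                  round(confidence(ante, {"severe"}), 2))

# Subgroup-discovery view: rank the same candidates by WRAcc instead.
best = max((set(c) for c in combinations(sorted(items), 2)),
           key=lambda s: wracc(s, {"severe"}))
print("best subgroup:", sorted(best), "WRAcc =", round(wracc(best, {"severe"}), 3))
```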

Automated Analyses of Ground-Penetrating Radar Images to Determine Spatial Distribution of Buried Cultural Heritage (매장 문화재 공간 분포 결정을 위한 지하투과레이더 영상 분석 자동화 기법 탐색)

  • Kwon, Moonhee;Kim, Seung-Sep
    • Economic and Environmental Geology / v.55 no.5 / pp.551-561 / 2022
  • Geophysical exploration methods are very useful for generating high-resolution images of underground structures, and such methods can be applied to the investigation of buried cultural properties and the determination of their exact locations. In this study, image feature extraction and image segmentation methods were applied to automatically distinguish the structures of buried relics in high-resolution ground-penetrating radar (GPR) images obtained at the center of the Silla Kingdom, Gyeongju, South Korea. The major purpose of the image feature extraction analyses is to identify the circular features from building remains and the linear features from ancient roads and fences. Feature extraction is implemented by applying the Canny edge detection and Hough transform algorithms: we applied the Hough transform to the edge image resulting from the Canny algorithm in order to determine the locations of the target features. However, the Hough transform requires different parameter settings for each survey sector. As for image segmentation, we applied the connected-component labeling algorithm and object-based image analysis using the Orfeo Toolbox (OTB) in QGIS. The connected-component labeled image shows that the signals associated with the target buried relics are effectively connected and labeled. However, we often find multiple labels assigned to a single structure in the given GPR data. Object-based image analysis was conducted using Large-Scale Mean-Shift (LSMS) image segmentation. In this analysis, a vector layer containing pixel values for each segmented polygon was estimated first and then used to build a train-validation dataset by assigning the polygons to one class associated with the buried relics and another class for the background field. With a random forest classifier, we find that the polygons of the LSMS segmentation layer can be successfully classified into polygons of buried relics and polygons of background. Thus, we propose that the automatic classification methods applied to the GPR images of buried cultural heritage in this study can be useful for obtaining consistent analysis results when planning excavation processes.
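A minimal sketch of the feature-extraction stage using OpenCV is shown below; the file name and all Canny/Hough parameters are illustrative assumptions that, as the study notes, would need retuning per survey sector.

```python
import cv2
import numpy as np

# Hypothetical input: a grayscale GPR depth slice exported as an image file.
img = cv2.imread("gpr_slice.png", cv2.IMREAD_GRAYSCALE)

# Step 1: Canny edge detection; the two thresholds typically need tuning.
edges = cv2.Canny(img, 50, 150)

# Step 2a: probabilistic Hough transform for linear features (roads, fences).
lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=80,
                        minLineLength=30, maxLineGap=5)

# Step 2b: Hough circle transform for circular features (building remains).
circles = cv2.HoughCircles(img, cv2.HOUGH_GRADIENT, dp=1, minDist=40,
                           param1=150, param2=30, minRadius=10, maxRadius=60)

print(0 if lines is None else len(lines), "line segments")
print(0 if circles is None else circles.shape[1], "circles")
```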

Performance Testing of Satellite Image Processing based on OGC WPS 2.0 in the OpenStack Cloud Environment (오픈스택 클라우드 환경 OGC WPS 2.0 기반 위성영상처리 성능측정 시험)

  • Yoon, Gooseon;Kim, Kwangseob;Lee, Kiwon
    • Korean Journal of Remote Sensing / v.32 no.6 / pp.617-627 / 2016
  • Many kinds of OGC-based web standards have been utilized in a variety of geospatial application fields for sharing and interoperable processing of large data sets containing satellite images. At the same time, the number of cloud-based application services built on on-demand provisioning of virtual machines is increasing. However, remote sensing applications combining these two major trends are still at an early stage worldwide. This study presents a practical linkage of both aspects, OGC-based standards and cloud computing. A performance test was carried out on an implementation of cloud detection processing. The test targets were WPS 2.0 and two types of geo-based service environment: a web server on a single core, and multiple virtual servers implemented in an OpenStack cloud computing environment. The JMeter test plan consists of the five WPS 2.0 requests: GetCapabilities, DescribeProcess, Execute, GetStatus, and GetResult. The results show that the measured response times in the cloud-based environment are shorter than those of the single server. It is expected that extending the processing algorithms exposed through WPS 2.0 and virtualized processing will enable target-oriented applications at a practical level.
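A bare-bones timing sketch of the kind of KVP requests exercised by the test is shown below; JMeter was used in the study, and this uses Python's requests purely for illustration, with a hypothetical endpoint and process identifier.

```python
import time
import requests  # plain HTTP client, standing in for the JMeter test plan

WPS_URL = "http://example.org/wps"  # hypothetical endpoint

# KVP forms of the first two operations; Execute is normally an XML POST,
# and GetStatus/GetResult additionally need the job id that Execute returns.
ops = [
    {"service": "WPS", "version": "2.0.0", "request": "GetCapabilities"},
    {"service": "WPS", "version": "2.0.0", "request": "DescribeProcess",
     "identifier": "cloud.detection"},  # hypothetical process identifier
]

for params in ops:
    t0 = time.perf_counter()
    resp = requests.get(WPS_URL, params=params, timeout=30)
    elapsed_ms = (time.perf_counter() - t0) * 1000
    print(f"{params['request']}: HTTP {resp.status_code} in {elapsed_ms:.1f} ms")
```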

Development and validation of an LC-MS/MS method for the simultaneous analysis of 26 anti-diabetic drugs in adulterated dietary supplements and its application to a forensic sample

  • Kim, Nam Sook;Yoo, Geum Joo;Kim, Kyu Yeon;Lee, Ji Hyun;Park, Sung-Kwan;Baek, Sun Young;Kang, Hoil
    • Analytical Science and Technology / v.32 no.2 / pp.35-47 / 2019
  • In this study, high-performance liquid chromatography-tandem mass spectrometry (HPLC-MS/MS) was employed to detect 26 antidiabetic compounds in adulterated dietary supplements using a simple, selective method. The work presented herein may help prevent incidents related to food adulteration and restrict the illegal food market. The best separation was obtained on a Shiseido Capcell Pak® C18 MG-II column (2.0 mm × 100 mm, 3 μm), which improved the peak shape and MS detection sensitivity of the target compounds. A gradient elution system composed of 0.1 % (v/v) formic acid in distilled water and methanol at a flow rate of 0.3 mL/min for 18 min was utilized. A triple quadrupole mass spectrometer with an electrospray ionization source operated in positive or negative mode was employed as the detector. The developed method was validated as follows: specificity was confirmed in the multiple reaction monitoring mode using the precursor and product ion pairs. For both solid and liquid samples, the LOD ranged from 0.16 to 20.00 ng/mL and the LOQ ranged from 0.50 to 60.00 ng/mL. Satisfactory linearity was obtained from calibration curves, with R² > 0.99. Both intra- and inter-day precision were less than 13.19 %. Accuracies ranged from 80.69 to 118.81 % (intra/inter-day), with a stability of less than 14.88 %. Mean recovery was found to be 80.6-119.0 %, with an RSD of less than 13.4 %. Using the validated method, glibenclamide and pioglitazone were simultaneously determined in one capsule at concentrations of 1.52 and 0.53 mg per capsule, respectively.
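As a small worked example of the linearity check, the following sketch fits an unweighted calibration line and computes R² for one hypothetical analyte; the concentration and peak-area values are invented for illustration.

```python
import numpy as np

# Hypothetical calibration points for one analyte (concentration vs. peak area).
conc = np.array([0.5, 1, 5, 10, 30, 60])      # ng/mL
area = np.array([0.9, 2.1, 10.2, 19.8, 61.0, 119.5])

slope, intercept = np.polyfit(conc, area, 1)  # weighted fits are also common
pred = slope * conc + intercept
ss_res = np.sum((area - pred) ** 2)
ss_tot = np.sum((area - area.mean()) ** 2)
r2 = 1 - ss_res / ss_tot
print(f"y = {slope:.3f}x + {intercept:.3f}, R^2 = {r2:.4f}")  # expect R^2 > 0.99
```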

Technology Development for Non-Contact Interface of Multi-Region Classifier based on Context-Aware (상황 인식 기반 다중 영역 분류기 비접촉 인터페이스기술 개발)

  • Jin, Songguo;Rhee, Phill-Kyu
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.20 no.6 / pp.175-182 / 2020
  • Non-contact eye tracking is a nonintrusive human-computer interface providing hands-free communication for people with severe disabilities. Recently, it has also been expected to play an important role in non-contact systems due to the coronavirus (COVID-19) pandemic. This paper proposes a novel approach to an eye mouse, using an eye tracking method based on a context-aware AdaBoost multi-region classifier and an active semi-supervised learning (ASSL) algorithm. The conventional AdaBoost algorithm cannot provide sufficiently reliable performance in face tracking for eye cursor pointing estimation, because it cannot take advantage of the spatial context relations among facial features. Therefore, we propose an eye-region-context-based AdaBoost multiple classifier for efficient non-contact gaze tracking and mouse implementation. The proposed method detects, tracks, and aggregates various eye features to estimate the gaze, and adjusts active and semi-supervised learning based on the on-screen cursor. The proposed system has been successfully employed for eye localization, and it can also be used to detect and track eye features. The system moves the computer cursor along the user's gaze, and the cursor trajectory is post-processed with Gaussian modeling and a Kalman filter to prevent shaking during real-time tracking. In this system, target objects were randomly generated, and the eye tracking performance was analyzed in real time according to Fitts' law. The utilization of non-contact interfaces is expected to increase.
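A minimal constant-velocity Kalman filter of the kind used for cursor de-jittering might look like the following sketch; the sampling rate and noise covariances are illustrative assumptions, not the paper's tuning.

```python
import numpy as np

# Minimal constant-velocity Kalman filter for 2D gaze-cursor smoothing.
dt = 1 / 30                                   # assumed 30 Hz tracking rate
F = np.block([[np.eye(2), dt * np.eye(2)],    # state transition: x, y, vx, vy
              [np.zeros((2, 2)), np.eye(2)]])
H = np.hstack([np.eye(2), np.zeros((2, 2))])  # we observe position only
Q = 1e-3 * np.eye(4)                          # process noise (assumed)
R = 5e-2 * np.eye(2)                          # measurement (jitter) noise (assumed)

x = np.zeros(4)                               # initial state
P = np.eye(4)

def smooth(measurement):
    """One predict/update step; returns the de-jittered cursor position."""
    global x, P
    x = F @ x                                 # predict
    P = F @ P @ F.T + Q
    S = H @ P @ H.T + R                       # update
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (measurement - H @ x)
    P = (np.eye(4) - K @ H) @ P
    return x[:2]

for raw in [np.array([100.0, 100.0]), np.array([103.0, 98.0])]:
    print(smooth(raw))
```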

Development of a complex failure prediction system using Hierarchical Attention Network (Hierarchical Attention Network를 이용한 복합 장애 발생 예측 시스템 개발)

  • Park, Youngchan;An, Sangjun;Kim, Mintae;Kim, Wooju
    • Journal of Intelligence and Information Systems / v.26 no.4 / pp.127-148 / 2020
  • A data center is a physical facility for accommodating computer systems and related components, and it is an essential foundation for next-generation core industries such as big data, smart factories, wearables, and smart homes. In particular, with the growth of cloud computing, proportional expansion of data center infrastructure is inevitable. Monitoring the health of data center facilities is a way to maintain and manage the system and prevent failure. If a failure occurs in some element of the facility, it may affect not only the relevant equipment but also other connected equipment, and may cause enormous damage. In particular, failures of IT facilities occur irregularly because of their interdependence, and it is difficult to identify their causes. Previous studies on predicting failures in a data center treated a single server as a single state, without assuming that devices interact. Therefore, in this study, data center failures were classified into failures occurring inside a server (Outage A) and failures occurring outside a server (Outage B), and we focused on analyzing complex failures occurring within servers. Server-external failures include power, cooling, user errors, and so on; since such failures can be prevented in the early stages of data center facility construction, various solutions are being developed. On the other hand, the causes of failures occurring inside a server are difficult to determine, and adequate prevention has not yet been achieved. In particular, server failures do not occur singly: a failing server may cause other server failures, or be affected by whatever causes failures in other servers. In other words, while existing studies analyzed failures on the assumption of a single server that does not affect other servers, this study assumes that failures have effects between servers. In order to define the complex failure situation in the data center, failure history data for each piece of equipment in the data center was used. Four major failure types are considered in this study: Network Node Down, Server Down, Windows Activation Services Down, and Database Management System Service Down. The failures occurring on each device are sorted in chronological order, and when a failure occurs on one piece of equipment, any failure occurring on another piece of equipment within 5 minutes is defined as occurring simultaneously. After configuring sequences for the devices that failed at the same time, the 5 devices that most frequently failed simultaneously within the configured sequences were selected, and the cases where the selected devices failed at the same time were confirmed through visualization. Since the server resource information collected for failure analysis is a time series with temporal flow, we used Long Short-Term Memory (LSTM), a deep learning algorithm that can predict the next state from previous states. In addition, unlike in the single-server case, a Hierarchical Attention Network deep learning model structure was used, in consideration of the fact that the contribution of each server to a complex failure differs. This architecture increases prediction accuracy by giving a server more weight as its impact on the failure increases. The study began by defining the types of failure and selecting the analysis targets.
In the first experiment, the same collected data was treated first as a single-server state and then as a multiple-server state, and the two settings were compared and analyzed. The second experiment improved the prediction accuracy for complex failures by optimizing a threshold for each server. In the first experiment, under the single-server assumption, three of the five servers were predicted to have no failure even though failures actually occurred; under the multiple-server assumption, all five servers were correctly predicted to have failed. These results support the hypothesis that there are effects between servers. Overall, this study confirmed that prediction performance is superior when multiple servers are assumed rather than a single server. In particular, applying the Hierarchical Attention Network algorithm, under the assumption that the effect of each server differs, improved the analysis, and applying a different threshold for each server further improved prediction accuracy. This study showed that failures whose causes are hard to determine can be predicted from historical data, and it presents a model that can predict failures occurring on servers in data centers. It is expected that failures can be prevented in advance using the results of this study.
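A compact sketch of the hierarchical-attention idea, assuming PyTorch and invented tensor dimensions: each server's metric time series is encoded by an LSTM, and attention weights over the server embeddings let servers with more impact on the complex failure contribute more to the prediction.

```python
import torch
import torch.nn as nn

class ServerAttentionNet(nn.Module):
    """Sketch of the hierarchical idea: one LSTM pass per server's metric
    time series, then attention over servers so that servers with more
    impact on the failure receive larger weights. Dimensions are assumed."""
    def __init__(self, n_features=8, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.attn = nn.Linear(hidden, 1)   # scores one weight per server
        self.head = nn.Linear(hidden, 1)   # failure-probability logit

    def forward(self, x):
        # x: (batch, n_servers, seq_len, n_features)
        b, s, t, f = x.shape
        _, (h, _) = self.lstm(x.reshape(b * s, t, f))
        h = h[-1].reshape(b, s, -1)              # one embedding per server
        w = torch.softmax(self.attn(h), dim=1)   # attention over servers
        ctx = (w * h).sum(dim=1)                 # weighted server context
        return torch.sigmoid(self.head(ctx)).squeeze(-1), w.squeeze(-1)

model = ServerAttentionNet()
x = torch.randn(4, 5, 60, 8)   # 4 samples, 5 servers, 60 time steps, 8 metrics
p_fail, weights = model(x)
print(p_fail.shape, weights.shape)   # torch.Size([4]) torch.Size([4, 5])
```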