• Title/Summary/Keyword: Performance Information (성능정보)


Efficacy of I-123/I-131 Metaiodobenzylguanidine Scan as A Single Initial Diagnostic Modality in Pheochromocytoma: Comparison with Biochemical Test and Anatomic Imaging (갈색세포종의 초기 진단에서 I-123/I-131 Metaiodobenzylguanidine 스캔의 단일 검사로써의 진단 성능: 생화학적 검사, 해부학적 영상과 비교)

  • Moon, Eun-Ha;Lim, Seok-Tae;Jeong, Young-Jin;Kim, Dong-Wook;Jeong, Hwan-Jeong;Sohn, Myung-Hee
    • Nuclear Medicine and Molecular Imaging
    • /
    • v.43 no.5
    • /
    • pp.436-442
    • /
    • 2009
  • Purpose: We performed this study to evaluate the diagnostic potential of I-123/I-131 metaiodobenzylguanidine (MIBG) scintigraphy alone in the initial diagnosis of pheochromocytoma, compared with the biochemical test and anatomic imaging. Materials & Methods: Twenty-two patients (M:F=13:9, age $44.3{\pm}19.3$ years) undergoing clinical evaluation for suspected pheochromocytoma received the biochemical test, anatomic imaging (CT and/or MRI), and the I-123/I-131 MIBG scan for the diagnosis of pheochromocytoma prior to histopathological confirmation. MIBG scans were independently reviewed by two nuclear medicine physicians. Results: All patients were confirmed histopathologically by operation or biopsy (incisional or excisional). When the findings of each diagnostic modality were compared with the final diagnosis, the sensitivities of the biochemical test, anatomic imaging, and MIBG scan were 88.9%, 55.6%, and 88.9%, respectively, and the specificities were 69.2%, 69.2%, and 92.3%, respectively. The MIBG scan showed one false positive (neuroblastoma) and one false negative finding. One patient had a positive MIBG scan with negative biochemical test and anatomic imaging findings. Conclusion: Our data suggest that the I-123/I-131 MIBG scan has higher sensitivity, specificity, positive predictive value, negative predictive value, and accuracy than the biochemical test and anatomic imaging. Thus, we expect that the MIBG scan can be used effectively as a single initial diagnostic modality for pheochromocytoma, as well as in combination with the biochemical test and anatomic imaging.
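
The reported percentages can be cross-checked with a small confusion-matrix calculation. The counts below (9 histologically positive and 13 negative patients, one false positive and one false negative on MIBG) are inferred from the abstract's figures (88.9% = 8/9, 92.3% = 12/13) and should be read as an illustrative reconstruction rather than data taken from the paper's tables.

```python
# Sketch: sensitivity/specificity/PPV/NPV/accuracy from a 2x2 confusion matrix.
# Counts are inferred from the abstract (8/9 = 88.9%, 12/13 = 92.3%), illustrative only.
def diagnostic_metrics(tp: int, fp: int, tn: int, fn: int) -> dict:
    return {
        "sensitivity": tp / (tp + fn),          # true positive rate
        "specificity": tn / (tn + fp),          # true negative rate
        "ppv":         tp / (tp + fp),          # positive predictive value
        "npv":         tn / (tn + fn),          # negative predictive value
        "accuracy":    (tp + tn) / (tp + fp + tn + fn),
    }

# Inferred MIBG counts: 9 pheochromocytomas, 13 non-pheochromocytomas,
# one false positive (the neuroblastoma) and one false negative.
mibg = diagnostic_metrics(tp=8, fp=1, tn=12, fn=1)
for name, value in mibg.items():
    print(f"{name}: {value:.1%}")   # sensitivity 88.9%, specificity 92.3%, accuracy 90.9%
```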

Handover Functional Architecture for Next Generation Wireless Networks (차세대 무선 네트워크를 위한 핸드오버 기능 구조 제안)

  • Baek, Joo-Young;Kim, Dong-Wook;Kim, Hyun-Jin;Choi, Yoon-Hee;Kim, Duk-Jin;Kim, Woo-Jae;Suh, Young-Joo;Kang, Suk-Yang;Kim, Kyung-Suk;Shin, Kyung-Chul
    • Proceedings of the Korean Information Science Society Conference
    • /
    • 2006.10d
    • /
    • pp.268-273
    • /
    • 2006
  • The next-generation wireless network (4G) is a field that requires extensive research along with the development of new radio access technologies. Among these topics, handover technology for providing seamless mobility to mobile terminals is the most important. The next-generation wireless network is expected to be used together with existing networks such as wireless LAN and cellular networks in addition to new radio access technologies, and is expected to use Mobile IPv6 for mobility support at the network layer. To provide seamless mobility in such a network, a comprehensive handover function that takes the more diverse network environments and QoS into account must be studied, building on the handover functions and architectures investigated so far. In this paper, we identify the functions required to provide seamless handover for terminals in next-generation wireless networks, define the organic relationships among them, and propose a comprehensive handover functional architecture that considers diverse network environments, user priorities, and application QoS requirements (see the sketch below). The proposed handover architecture is divided into three modules, Monitoring, Triggering, and Handover, each of which is further subdivided into sub-modules as needed. Its most distinctive feature is that it comprehensively considers the various factors that can trigger a handover and performs a multi-stage comparison rather than a flat comparison among them, enabling more accurate triggering. In addition, functions for guaranteeing the terminal's QoS requirements and for handling network congestion and load balancing are added to the handover function so that network resources can be used efficiently.
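
As a rough illustration of the module split described above, the sketch below lays out Monitoring, Triggering, and Handover components with a staged (rather than flat) comparison in the trigger decision. All class names, fields, and thresholds are hypothetical; the abstract does not specify an API or concrete sub-modules.

```python
# Illustrative skeleton of the three-module handover architecture described above.
# Names, fields, and thresholds are hypothetical stand-ins for the paper's sub-modules.
from dataclasses import dataclass
from typing import Optional

@dataclass
class LinkReport:
    network_id: str
    signal_quality: float   # 0..1, measured by the Monitoring module
    congestion: float       # 0..1, network load estimate
    qos_score: float        # 0..1, fit to the application's QoS requirements

class Monitoring:
    def collect(self) -> list:
        ...  # gather measurements from the available access networks

class Triggering:
    """Multi-stage comparison: filter on link quality first, then rank by QoS and load."""
    def select(self, current: LinkReport, candidates: list) -> Optional[LinkReport]:
        usable = [c for c in candidates if c.signal_quality > 0.3]           # stage 1
        ranked = sorted(usable, key=lambda c: (c.qos_score, -c.congestion),  # stage 2
                        reverse=True)
        best = ranked[0] if ranked else None
        if best and best.qos_score > current.qos_score + 0.1:                # hysteresis
            return best
        return None

class Handover:
    def execute(self, target: LinkReport) -> None:
        ...  # Mobile IPv6 binding update, resource reservation, etc.
```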


A Methodology for Extracting Shopping-Related Keywords by Analyzing Internet Navigation Patterns (인터넷 검색기록 분석을 통한 쇼핑의도 포함 키워드 자동 추출 기법)

  • Kim, Mingyu;Kim, Namgyu;Jung, Inhwan
    • Journal of Intelligence and Information Systems
    • /
    • v.20 no.2
    • /
    • pp.123-136
    • /
    • 2014
  • Recently, online shopping has further developed as the use of the Internet and a variety of smart mobile devices becomes more prevalent. The increase in the scale of such shopping has led to the creation of many Internet shopping malls. Consequently, there is a tendency for increasingly fierce competition among online retailers, and as a result, many Internet shopping malls are making significant attempts to attract online users to their sites. One such attempt is keyword marketing, whereby a retail site pays a fee to expose its link to potential customers when they insert a specific keyword on an Internet portal site. The price related to each keyword is generally estimated by the keyword's frequency of appearance. However, it is widely accepted that the price of keywords cannot be based solely on their frequency because many keywords may appear frequently but have little relationship to shopping. This implies that it is unreasonable for an online shopping mall to spend a great deal on some keywords simply because people frequently use them. Therefore, from the perspective of shopping malls, a specialized process is required to extract meaningful keywords. Further, the demand for automating this extraction process is increasing because of the drive to improve online sales performance. In this study, we propose a methodology that can automatically extract only shopping-related keywords from the entire set of search keywords used on portal sites. We define a shopping-related keyword as a keyword that is used directly before shopping behaviors. In other words, only search keywords that direct the search results page to shopping-related pages are extracted from among the entire set of search keywords. A comparison is then made between the extracted keywords' rankings and the rankings of the entire set of search keywords. Two types of data are used in our study's experiment: web browsing history from July 1, 2012 to June 30, 2013, and site information. The experimental dataset was from a web site ranking site, and the biggest portal site in Korea. The original sample dataset contains 150 million transaction logs. First, portal sites are selected, and search keywords in those sites are extracted. Search keywords can be easily extracted by simple parsing. The extracted keywords are ranked according to their frequency. The experiment uses approximately 3.9 million search results from Korea's largest search portal site. As a result, a total of 344,822 search keywords were extracted. Next, by using web browsing history and site information, the shopping-related keywords were taken from the entire set of search keywords. As a result, we obtained 4,709 shopping-related keywords. For performance evaluation, we compared the hit ratios of all the search keywords with the shopping-related keywords. To achieve this, we extracted 80,298 search keywords from several Internet shopping malls and then chose the top 1,000 keywords as a set of true shopping keywords. We measured precision, recall, and F-scores of the entire amount of keywords and the shopping-related keywords. The F-Score was formulated by calculating the harmonic mean of precision and recall. The precision, recall, and F-score of shopping-related keywords derived by the proposed methodology were revealed to be higher than those of the entire number of keywords. This study proposes a scheme that is able to obtain shopping-related keywords in a relatively simple manner. 
We could easily extract shopping-related keywords simply by examining transactions whose next visit is a shopping mall. The resultant shopping-related keyword set is expected to be a useful asset for many shopping malls that participate in keyword marketing. Moreover, the proposed methodology can be easily applied to the construction of special area-related keywords as well as shopping-related ones.
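
A minimal sketch of the extraction idea, assuming a browsing log in which each search transaction records the keyword and the site visited next; the field names, the shopping-domain list, and the toy data are placeholders, not the paper's dataset.

```python
# Sketch: keep only keywords whose next visit after the search is a shopping site,
# then score the extracted set against a reference set of "true" shopping keywords.
# Field names and the shopping-domain list are illustrative placeholders.
from collections import Counter

def extract_shopping_keywords(logs, shopping_domains):
    """logs: iterable of dicts like {"keyword": "...", "next_domain": "..."}."""
    counts = Counter(
        rec["keyword"] for rec in logs
        if rec["next_domain"] in shopping_domains
    )
    # rank by frequency, as the keywords are ranked before comparison in the paper
    return [kw for kw, _ in counts.most_common()]

def precision_recall_f1(extracted, reference):
    extracted, reference = set(extracted), set(reference)
    hits = len(extracted & reference)
    precision = hits / len(extracted) if extracted else 0.0
    recall = hits / len(reference) if reference else 0.0
    f1 = (2 * precision * recall / (precision + recall)) if (precision + recall) else 0.0
    return precision, recall, f1

# Toy example (not the 150-million-transaction dataset used in the study).
logs = [
    {"keyword": "running shoes", "next_domain": "mall.example.com"},
    {"keyword": "weather today", "next_domain": "news.example.com"},
]
print(extract_shopping_keywords(logs, {"mall.example.com"}))  # ['running shoes']
```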

Selection Model of System Trading Strategies using SVM (SVM을 이용한 시스템트레이딩전략의 선택모형)

  • Park, Sungcheol;Kim, Sun Woong;Choi, Heung Sik
    • Journal of Intelligence and Information Systems
    • /
    • v.20 no.2
    • /
    • pp.59-71
    • /
    • 2014
  • System trading has recently become more popular among Korean traders. System traders use automatic order systems based on system-generated buy and sell signals. These signals are generated from predetermined entry and exit rules coded by the traders. Most research on system trading has focused on designing profitable entry and exit rules using technical indicators. However, market conditions, strategy characteristics, and money management also influence the profitability of system trading. Unexpected price deviations from the predetermined trading rules can incur large losses for system traders. Therefore, most professional traders use strategy portfolios rather than a single strategy. Building a good strategy portfolio is important because trading performance depends on it. Despite the importance of designing strategy portfolios, rule-of-thumb methods have been used to select trading strategies. In this study, we propose an SVM-based strategy portfolio management system. SVM was introduced by Vapnik and is known to be effective in the data mining area. It can build good portfolios within a very short period of time. Since SVM minimizes structural risk, it is well suited to the futures trading market, in which prices do not move exactly as they did in the past. Our system trading strategies include the moving-average cross system, MACD cross system, trend-following system, buy-dips-and-sell-rallies system, DMI system, Keltner channel system, Bollinger Bands system, and Fibonacci system. These strategies are well known and frequently used by many professional traders. We program these strategies to generate automated system signals for entry and exit. We propose an SVM-based strategy selection system together with a portfolio construction and order routing system. The strategy selection system is a portfolio training system: it generates training data and builds an SVM model using the optimal portfolio. We construct an $m{\times}n$ data matrix by dividing the KOSPI 200 index futures data into periods of equal length. The optimal strategy portfolio is derived by analyzing each strategy's performance, and the SVM model is generated from this data and the optimal strategy portfolio. We use 80% of the data for training and the remaining 20% for testing the strategy. For training, we select the two strategies that show the highest profit on the next day. Selection method 1 selects two strategies, and method 2 selects at most two strategies whose profit exceeds 0.1 point. We use the one-against-all method, which has fast processing time. We analyse the daily data of KOSPI 200 index futures contracts from January 1990 to November 2011. Price change rates over 50 days are used as SVM input data. The training period is from January 1990 to March 2007, and the test period is from March 2007 to November 2011. We suggest three benchmark strategy portfolios. BM1 holds two contracts of KOSPI 200 index futures for the testing period. BM2 is constructed from the two strategies that show the largest cumulative profit during the 30 days before testing starts. BM3 consists of the two strategies that show the best profits during the testing period. Trading costs include brokerage commission and slippage. The proposed strategy portfolio management system shows profit more than double that of the benchmark portfolios. BM1 shows a 103.44-point profit, BM2 a 488.61-point profit, and BM3 a 502.41-point profit after deducting trading costs.
The best benchmark is BM3, the portfolio of the two most profitable strategies during the test period. The proposed system 1 shows a 706.22-point profit and the proposed system 2 a 768.95-point profit after deducting trading costs. The equity curves for the entire period show a stable pattern. Together with the higher profit, this suggests a useful trading direction for system traders. We can build more stable and more profitable portfolios if we add a money management module to the system.
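
A minimal sketch of the one-against-all SVM selection step, using scikit-learn as a stand-in implementation: the feature construction (50-day price-change rates), the label definition (index of the best-performing strategy for the next period), and the 80/20 split follow the abstract, while the strategy names, toy data, and the "take the top two scores" rule are illustrative assumptions.

```python
# Sketch: one-against-all SVM that maps recent price-change rates to the strategy
# expected to perform best next. scikit-learn is a stand-in (the paper does not
# specify an implementation), and the data below is random toy data.
import numpy as np
from sklearn.multiclass import OneVsRestClassifier
from sklearn.svm import SVC

STRATEGIES = ["ma_cross", "macd_cross", "trend", "buy_dips", "dmi",
              "keltner", "bollinger", "fibonacci"]

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 50))              # 50-day price change rates per sample
y = rng.integers(0, len(STRATEGIES), 500)   # index of the best strategy on the next day

split = int(0.8 * len(X))                   # 80% training / 20% testing, as in the paper
X_train, X_test = X[:split], X[split:]
y_train, y_test = y[:split], y[split:]

model = OneVsRestClassifier(SVC(kernel="rbf", C=1.0))
model.fit(X_train, y_train)

# Illustrative selection rule: take the two strategies ranked highest by the
# one-vs-rest decision scores for the most recent observation.
scores = model.decision_function(X_test[-1:])[0]
top_two = [STRATEGIES[i] for i in np.argsort(scores)[-2:][::-1]]
print("selected strategies:", top_two)
```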

A Study on Implementation and Performance of the Power Control High Power Amplifier for Satellite Mobile Communication System (위성통신용 전력제어 고출력증폭기의 구현 및 성능평가에 관한 연구)

  • 전중성;김동일;배정철
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.4 no.1
    • /
    • pp.77-88
    • /
    • 2000
  • In this paper, a 3-mode variable-gain high power amplifier for an INMARSAT-B transmitter operating at L-band (1626.5-1646.5 MHz) was developed. This SSPA can deliver 42 dBm in high power mode, 38 dBm in medium power mode, and 36 dBm in low power mode for INMARSAT-B. The allowable error is set to +1 dBm as the upper limit and -2 dBm as the lower limit. To simplify the fabrication process, the whole system is designed in two parts, a driving amplifier and a high power amplifier. HP's MGA-64135 and Motorola's MRF-6401 were used for the driving amplifier, and ERICSSON's PTE-10114 and PTF-10021 for the high power amplifier. The SSPA was fabricated with the RF circuits, the temperature compensation circuits, the 3-mode variable gain control circuits, and a 20 dB parallel coupled-line directional coupler in an aluminum housing. In addition, a gain control method using a digital attenuator was proposed for the 3-mode amplifier. It has been experimentally verified that the gain is controlled for a single-tone signal as well as for two-tone signals. In this case, the SSPA detects the output power through the 20 dB parallel coupled-line directional coupler and a phase non-splitter amplifier. The realized SSPA has small-signal gains of 41.6 dB, 37.6 dB, and 33.2 dB within a 20 MHz bandwidth, and the VSWR of the input and output ports is less than 1.3:1. The minimum 1 dB compression point is more than 12 dBm for the 3-mode variable-gain high power amplifier. A typical two-tone intermodulation level is at most 36.5 dBc with the single carrier backed off 3 dB from the 1 dB compression point. A maximum output power of 43 dBm was achieved at 1636.5 MHz. These results correspond to a high power of 20 Watt, which was the design target.
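
For reference, the quoted 43 dBm maximum output corresponds to the 20 W design target through the standard dBm-to-watt conversion (a worked check using only the two values quoted above):

$$P[\mathrm{W}] = 10^{(P[\mathrm{dBm}]-30)/10}, \qquad P(43\ \mathrm{dBm}) = 10^{(43-30)/10} = 10^{1.3} \approx 20\ \mathrm{W}.$$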


A Variable Latency Newton-Raphson's Floating Point Number Reciprocal Computation (가변 시간 뉴톤-랍손 부동소수점 역수 계산기)

  • Kim Sung-Gi;Cho Gyeong-Yeon
    • The KIPS Transactions:PartA
    • /
    • v.12A no.2 s.92
    • /
    • pp.95-102
    • /
    • 2005
  • The Newton-Raphson iterative algorithm for finding a floating point reciprocal, which is widely used for floating point division, calculates the reciprocal by performing a fixed number of multiplications. In this paper, a variable latency Newton-Raphson reciprocal algorithm is proposed that performs multiplications a variable number of times until the error becomes smaller than a given value. To find the reciprocal of a floating point number $F$, the algorithm repeats the following operation: $X_{i+1}=X_i(2-e_r-F{\cdot}X_i),\;i\in\{0,1,2,\ldots,n-1\}$ with the initial value $X_0=\frac{1}{F}{\pm}e_0$. The bits to the right of the $p$ fractional bits in intermediate multiplication results are truncated, and this truncation error is less than $e_r=2^{-p}$. The value of $p$ is 27 for single precision floating point and 57 for double precision floating point. Let $X_i=\frac{1}{F}+e_i$; then $X_{i+1}=\frac{1}{F}-e_{i+1}$, where $e_{i+1}$ is less than the smallest number representable in floating point, so $X_{i+1}$ is an approximation of $\frac{1}{F}$. Since the number of multiplications performed by the proposed algorithm depends on the input values, the average number of multiplications per operation is derived from many reciprocal tables $(X_0=\frac{1}{F}{\pm}e_0)$ of varying sizes. The superiority of this algorithm is proved by comparing this average number with the fixed number of multiplications of the conventional algorithm. Since the proposed algorithm only performs multiplications until the error becomes smaller than a given value, it can be used to improve the performance of a reciprocal unit. Also, it can be used to construct optimized approximate reciprocal tables. The results of this paper can be applied to many areas that utilize floating point numbers, such as digital signal processing, computer graphics, multimedia, and scientific computing.
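
A minimal software sketch of the variable-latency idea, assuming a plain double-precision model rather than the paper's truncated fixed-point multiplier: iterate the standard form $X_{i+1}=X_i(2-F{\cdot}X_i)$ from a seed and stop as soon as the change falls below a tolerance. The seed construction, tolerance, and function names are illustrative, and the truncation term $e_r$ is not modeled here.

```python
# Sketch: variable-latency Newton-Raphson reciprocal. This is a plain double-precision
# illustration of the early-stopping idea; the paper works with truncated fixed-point
# multiplications and an error bound e_r = 2**-p, which are not modeled here.
import math

def reciprocal_newton(f: float, tol: float = 2.0**-50, max_iter: int = 8):
    """Return (approximation of 1/f, number of multiply iterations used)."""
    if f == 0.0:
        raise ZeroDivisionError("reciprocal of zero")
    # Crude seed X0 ~ 1/f: scale f into [0.5, 1) and invert the mantissa.
    mant, exp = math.frexp(f)          # f = mant * 2**exp, mant in [0.5, 1)
    x = math.ldexp(1.0 / mant, -exp)   # a real unit would use a small ROM table instead
    x *= 1.0 + 1e-3                    # perturb the seed so the iterations are visible
    for i in range(1, max_iter + 1):
        x_next = x * (2.0 - f * x)     # quadratic convergence: the error squares each step
        if abs(x_next - x) <= tol * abs(x_next):
            return x_next, i           # stop early once the change is below tolerance
        x = x_next
    return x, max_iter

approx, iters = reciprocal_newton(3.7)
print(approx, iters, 1.0 / 3.7)        # converges to 1/3.7 in a few iterations
```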

Response Modeling for the Marketing Promotion with Weighted Case Based Reasoning Under Imbalanced Data Distribution (불균형 데이터 환경에서 변수가중치를 적용한 사례기반추론 기반의 고객반응 예측)

  • Kim, Eunmi;Hong, Taeho
    • Journal of Intelligence and Information Systems
    • /
    • v.21 no.1
    • /
    • pp.29-45
    • /
    • 2015
  • Response modeling is a well-known research issue for those who seek better performance in predicting customers' response to a marketing promotion. A response model reduces marketing cost by identifying prospective customers from a very large customer database and predicting the purchasing intention of the selected customers, whereas a promotion derived from an undifferentiated marketing strategy results in unnecessary cost. In addition, the big data environment has accelerated the development of response models with data mining techniques such as CBR, neural networks, and support vector machines. CBR is one of the major tools in business because it is simple and robust to apply to response modeling, and it remains an attractive data mining technique for business applications even though it has not shown high performance compared to other machine learning techniques. Thus many studies have tried to improve CBR and utilize it in business data mining with enhanced algorithms or with the support of other techniques such as genetic algorithms, decision trees, and AHP (Analytic Hierarchy Process). Ahn and Kim (2008) utilized logit, neural networks, and CBR to predict which customers would purchase the items promoted by the marketing department, and tried to optimize the number k for the k-nearest neighbor search with a genetic algorithm in order to improve the performance of the integrated model. Hong and Park (2009) noted that an integrated approach combining CBR with logit, neural networks, and Support Vector Machine (SVM) showed better prediction of customers' response to a marketing promotion than the individual data mining models such as logit, neural networks, and SVM. This paper presents an approach to predict customers' response to a marketing promotion with Case Based Reasoning. The proposed model was developed by applying different weights to each feature. We fitted a logit model on a database including the promotion and purchasing data of bath soap, and the resulting coefficients were used as feature weights in CBR. We empirically compared the performance of the proposed weighted CBR-based model with neural networks and a pure CBR-based model, and found that the proposed weighted CBR-based model showed superior performance to the pure CBR model. Imbalanced data is a common problem when building data mining models for classification with real data, as in bankruptcy prediction, intrusion detection, fraud detection, churn management, and response modeling. Imbalanced data means that the number of instances in one class is remarkably small or large compared to the number of instances in the other classes. A classification model such as a response model has a lot of trouble recognizing patterns from such data, because the model tends to ignore the minority class while classifying the majority class correctly. To resolve the problem caused by an imbalanced data distribution, sampling is one of the most representative approaches; sampling methods can be categorized into undersampling and oversampling. However, CBR is not sensitive to the data distribution because, unlike other machine learning algorithms, it does not learn a model from the data.
In this study, we investigated the robustness of our proposed model while changing the ratio of response customers to nonresponse customers for the promotion program, because in the real world the response customers for a given promotion are always a small fraction of the nonresponse customers. We simulated the proposed model 100 times to validate its robustness with different ratios of response customers to nonresponse customers under an imbalanced data distribution. Finally, we found that our proposed CBR-based model showed superior performance to the compared models on the imbalanced data sets. Our study is expected to improve the performance of response models for promotion programs with CBR under imbalanced data distributions in the real world.
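
A minimal sketch of the weighting idea described above, assuming the absolute logit coefficients are used as per-feature weights in a weighted-distance k-nearest-neighbor retrieval; the toy data, the use of absolute coefficient values, and the choice of k are illustrative assumptions, not the paper's exact design.

```python
# Sketch: weighted CBR as weighted k-NN, with feature weights taken from a logit model.
# Toy data and the use of |coefficient| as the weight are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
X_train = rng.normal(size=(200, 5))                       # customer features
y_train = (X_train[:, 0] + 0.5 * X_train[:, 2]
           + rng.normal(scale=0.5, size=200) > 0).astype(int)   # 1 = responded

# Step 1: fit a logit model and turn its coefficients into feature weights.
logit = LogisticRegression().fit(X_train, y_train)
weights = np.abs(logit.coef_[0])                          # larger |coef| -> more influence

# Step 2: weighted CBR retrieval: weighted Euclidean distance, majority vote over k cases.
def predict_response(query, X, y, w, k=5):
    d = np.sqrt(((X - query) ** 2 * w).sum(axis=1))       # weighted distance to each case
    nearest = np.argsort(d)[:k]
    return int(y[nearest].mean() >= 0.5)                  # 1 = predicted responder

new_customer = rng.normal(size=5)
print(predict_response(new_customer, X_train, y_train, weights))
```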

A Match-Making System Considering Symmetrical Preferences of Matching Partners (상호 대칭적 만족성을 고려한 온라인 데이트시스템)

  • Park, Yoon-Joo
    • Journal of Intelligence and Information Systems
    • /
    • v.18 no.2
    • /
    • pp.177-192
    • /
    • 2012
  • This is a study of match-making systems that considers the mutual satisfaction of matching partners. Recently, recommendation systems have been applied to people recommendation, such as recommending new friends, employees, or dating partners. One of the prominent domain areas is match-making systems that recommend suitable dating partners to customers. A match-making system, however, is different from a product recommender system. First, a match-making system needs to satisfy the recommended partners as well as the customer, whereas a product recommender system only needs to satisfy the customer. Second, match-making systems need to include as many participants in a matching pool as possible for their recommendation results, even with unpopular customers. In other words, recommendations should not be focused only on a limited number of popular people; unpopular people should also be listed on someone else's matching results. In product recommender systems, it is acceptable to recommend the same popular items to many customers, since these items can easily be additionally supplied. However, in match-making systems, there are only a few popular people, and they may become overburdened with too many recommendations. Also, a successful match could cause a customer to drop out of the matching pool. Thus, match-making systems should provide recommendation services equally to all customers without favoring popular customers. The suggested match-making system, called Mutually Beneficial Matching (MBM), considers the reciprocal satisfaction of both the customer and the matched partner and also considers the number of customers who are excluded in the matching. A brief outline of the MBM method is as follows: First, it collects a customer's profile information, his/her preferable dating partner's profile information and the weights that he/she considers important when selecting dating partners. Then, it calculates the preference score of a customer to certain potential dating partners on the basis of the difference between them. The preference score of a certain partner to a customer is also calculated in this way. After that, the mutual preference score is produced by the two preference values calculated in the previous step using the proposed formula in this study. The proposed formula reflects the symmetry of preferences as well as their quantities. Finally, the MBM method recommends the top N partners having high mutual preference scores to a customer. The prototype of the suggested MBM system is implemented by JAVA and applied to an artificial dataset that is based on real survey results from major match-making companies in Korea. The results of the MBM method are compared with those of the other two conventional methods: Preference-Based Matching (PBM), which only considers a customer's preferences, and Arithmetic Mean-Based Matching (AMM), which considers the preferences of both the customer and the partner (although it does not reflect their symmetry in the matching results). We perform the comparisons in terms of criteria such as average preference of the matching partners, average symmetry, and the number of people who are excluded from the matching results by changing the number of recommendations to 5, 10, 15, 20, and 25. The results show that in many cases, the suggested MBM method produces average preferences and symmetries that are significantly higher than those of the PBM and AMM methods. 
Moreover, in every case, MBM produces a smaller pool of excluded people than the PBM method.
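
The abstract does not give the exact formula for the mutual preference score, so the sketch below uses a stand-in that reflects the two stated requirements (the score should grow with both one-way preferences and shrink as they become asymmetric): the mean of the two scores penalized by their absolute difference. All profile fields, weights, and the combination rule are illustrative assumptions.

```python
# Sketch: mutual preference scoring for match-making. The combination rule here
# (mean minus an asymmetry penalty) is a stand-in for the paper's proposed formula,
# which the abstract does not give; profiles, weights, and data are toy values.
def one_way_preference(preferrer_ideal: dict, weights: dict, candidate: dict) -> float:
    """Score in [0, 1]: weighted closeness of a candidate to the preferrer's ideal."""
    total = sum(weights.values())
    score = 0.0
    for attr, ideal in preferrer_ideal.items():
        diff = abs(ideal - candidate[attr]) / max(abs(ideal), 1e-9)
        score += weights[attr] * max(0.0, 1.0 - diff)
    return score / total

def mutual_preference(p_ab: float, p_ba: float, alpha: float = 0.5) -> float:
    """Reward high scores in both directions, penalize asymmetry between them."""
    return (p_ab + p_ba) / 2.0 - alpha * abs(p_ab - p_ba)

# Toy example: customer A and candidate B score each other on age and height.
a_ideal, a_w, b_profile = {"age": 30, "height": 175}, {"age": 2, "height": 1}, {"age": 33, "height": 170}
b_ideal, b_w, a_profile = {"age": 28, "height": 165}, {"age": 1, "height": 1}, {"age": 29, "height": 163}
p_ab = one_way_preference(a_ideal, a_w, b_profile)
p_ba = one_way_preference(b_ideal, b_w, a_profile)
print(round(mutual_preference(p_ab, p_ba), 3))   # higher when both sides are satisfied
```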

Object Tracking Based on Exactly Reweighted Online Total-Error-Rate Minimization (정확히 재가중되는 온라인 전체 에러율 최소화 기반의 객체 추적)

  • JANG, Se-In;PARK, Choong-Shik
    • Journal of Intelligence and Information Systems
    • /
    • v.25 no.4
    • /
    • pp.53-65
    • /
    • 2019
  • Object tracking is one of the important steps in building video-based surveillance systems, and is considered an essential task alongside object detection and recognition. In order to perform object tracking, various machine learning methods (e.g., least-squares, perceptron, and support vector machine) can be applied in different designs of tracking systems. In general, generative methods (e.g., principal component analysis) have been utilized because of their simplicity and effectiveness. However, generative methods focus only on modeling the target object. Due to this limitation, discriminative methods (e.g., binary classification) were adopted to distinguish the target object from the background. Among the machine learning methods for binary classification, total error rate minimization is one of the successful approaches. Total error rate minimization can achieve a global minimum thanks to a quadratic approximation to the step function, while other methods (e.g., support vector machine) seek local minima using nonlinear functions (e.g., the hinge loss function). Due to this quadratic approximation, total error rate minimization has appropriate properties for solving optimization problems in binary classification. However, total error rate minimization was originally formulated in a batch mode setting. The batch mode setting is limited to applications under offline learning, and with limited computing resources, offline learning cannot handle large scale data sets. Compared to offline learning, online learning can update its solution without storing all training samples during the learning process. With the growth of large scale data sets, online learning has become essential for various applications. Since object tracking needs to handle data samples in real time, online learning based total error rate minimization methods are necessary to address object tracking problems efficiently. To meet this need, an online learning based total error rate minimization method was developed; however, it relied on an approximately reweighted technique. Although an approximation is used, this online version of total error rate minimization achieved good performances in biometric applications. However, this method assumes that total error rate minimization is achieved only asymptotically, as the number of training samples goes to infinity. Under this assumption, the approximation can continuously accumulate learning errors as the training samples increase. For this reason, the approximated online learning solution can lead to a wrong solution, which can cause significant errors when applied to surveillance systems. In this paper, we propose an exactly reweighted technique to recursively update the solution of total error rate minimization in an online learning manner. Compared to the approximately reweighted online total error rate minimization, an exactly reweighted online total error rate minimization is achieved. The proposed exact online learning method based on total error rate minimization is then applied to object tracking problems. In our object tracking system, particle filtering is adopted.
In particle filtering, our observation model consists of both generative and discriminative methods to leverage the advantages of generative and discriminative properties. In our experiments, our proposed object tracking system achieves promising performances on 8 public video sequences compared to competing object tracking systems. A paired t-test is also reported to evaluate the quality of the results. Our proposed online learning method can be extended to deep learning architectures covering both shallow and deep networks. Moreover, online learning methods that need an exact reweighting process can use our proposed reweighting technique. In addition to object tracking, the proposed online learning method can be easily applied to object detection and recognition. Therefore, our proposed methods can contribute to the online learning community and to the object tracking, detection, and recognition communities.
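
As a rough illustration of what exact recursive reweighting of a quadratic objective looks like in general (not the authors' specific total-error-rate update, which the abstract does not give), the sketch below maintains the exact minimizer of a weighted least-squares objective online via the Sherman-Morrison identity, so each new weighted sample updates the solution without revisiting stored history.

```python
# Sketch: exact online update of a weighted least-squares solution via Sherman-Morrison.
# This is a generic illustration of exact recursive reweighting for a quadratic
# objective, not the paper's total-error-rate update, which the abstract does not give.
import numpy as np

class ExactOnlineWLS:
    def __init__(self, dim: int, ridge: float = 1e-3):
        self.P = np.eye(dim) / ridge        # inverse of the (regularized) Gram matrix
        self.w = np.zeros(dim)              # current exact minimizer

    def update(self, x: np.ndarray, y: float, sample_weight: float = 1.0):
        """Incorporate one weighted sample (x, y) exactly, in O(dim^2)."""
        xw = x * np.sqrt(sample_weight)     # fold the per-sample weight into the sample
        yw = y * np.sqrt(sample_weight)
        Px = self.P @ xw
        denom = 1.0 + xw @ Px
        self.P -= np.outer(Px, Px) / denom  # Sherman-Morrison rank-1 update of the inverse
        self.w += (self.P @ xw) * (yw - xw @ self.w)   # exact new minimizer, no history kept

rng = np.random.default_rng(2)
true_w = np.array([1.0, -2.0, 0.5])
model = ExactOnlineWLS(dim=3)
for _ in range(500):
    x = rng.normal(size=3)
    y = true_w @ x + rng.normal(scale=0.1)
    model.update(x, y, sample_weight=1.0)
print(np.round(model.w, 2))                 # close to [ 1.  -2.   0.5]
```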

A Hardware Implementation of the Underlying Field Arithmetic Processor based on Optimized Unit Operation Components for Elliptic Curve Cryptosystems (타원곡선을 암호시스템에 사용되는 최적단위 연산항을 기반으로 한 기저체 연산기의 하드웨어 구현)

  • Jo, Seong-Je;Kwon, Yong-Jin
    • Journal of KIISE:Computing Practices and Letters
    • /
    • v.8 no.1
    • /
    • pp.88-95
    • /
    • 2002
  • In recent years, the security of hardware and software systems has become one of the most essential factors of a safe network community. Since Elliptic Curve Cryptosystems, proposed independently by N. Koblitz and V. Miller in 1985, require fewer bits for the same security as existing cryptosystems such as RSA, there is a net reduction in cost, size, and time. In this thesis, we propose an efficient hardware architecture of an underlying field arithmetic processor for Elliptic Curve Cryptosystems, and a very useful method for implementing the architecture, especially the multiplicative inverse operator over $GF(2^m)$, on an FPGA and furthermore in VLSI, where the method is based on optimized unit operation components. We optimize the arithmetic processor for speed so that it has a reasonable number of gates to implement. The proposed architecture can be applied to any finite field $F_{2^m}$. According to the simulation results, although the number of gates is increased by a factor of 8.8, the multiplication speed and inversion speed have been improved 150 times and 480 times, respectively, compared with the work presented by Sarwono Sutikno et al. [7]. The designed underlying arithmetic processor can also be applied to implementing other crypto-processors and various finite field applications.
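
As a software-level reference for the field operations such a processor implements, the sketch below performs $GF(2^m)$ multiplication (carry-less multiplication followed by reduction modulo an irreducible polynomial) and inversion by exponentiation, using the small field $GF(2^8)$ with the polynomial $x^8+x^4+x^3+x+1$ as a stand-in; the paper's hardware design is, of course, structured very differently.

```python
# Sketch: GF(2^m) multiplication and inversion in software, here for m = 8 with the
# well-known irreducible polynomial x^8 + x^4 + x^3 + x + 1 (0x11B) as a small stand-in.
# A hardware field processor implements the same arithmetic with dedicated logic.
M = 8
IRRED = 0x11B  # x^8 + x^4 + x^3 + x + 1

def gf_mul(a: int, b: int) -> int:
    """Carry-less multiply of a and b, reduced modulo the irreducible polynomial."""
    result = 0
    while b:
        if b & 1:
            result ^= a          # add (XOR) the current shifted copy of a
        b >>= 1
        a <<= 1
        if a & (1 << M):         # degree reached m: reduce by the field polynomial
            a ^= IRRED
    return result

def gf_inv(a: int) -> int:
    """Multiplicative inverse via a^(2^m - 2), valid for nonzero a."""
    result, base, exp = 1, a, (1 << M) - 2
    while exp:                   # square-and-multiply exponentiation
        if exp & 1:
            result = gf_mul(result, base)
        base = gf_mul(base, base)
        exp >>= 1
    return result

x = 0x57
print(hex(gf_mul(x, 0x83)))      # 0xc1, the classic GF(2^8) worked example
print(gf_mul(x, gf_inv(x)))      # 1: inverse check
```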