• Title/Summary/Keyword: Information input algorithm

Feature selection for text data via sparse principal component analysis (희소주성분분석을 이용한 텍스트데이터의 단어선택)

  • Won Son
    • The Korean Journal of Applied Statistics / v.36 no.6 / pp.501-514 / 2023
  • When analyzing high-dimensional data such as text data, using all variables as explanatory variables can lead statistical learning procedures to over-fit. Furthermore, computational efficiency deteriorates as the number of variables grows. Dimensionality reduction techniques such as feature selection or feature extraction are useful for dealing with these problems. Sparse principal component analysis (SPCA) is a regularized least squares method that employs an elastic-net-type objective function. SPCA can be used to remove insignificant principal components and to identify important variables from noisy observations. In this study, we propose a dimension reduction procedure for text data based on SPCA. Applying the proposed procedure to real data, we find that the reduced feature set retains sufficient information from the text data while its size is reduced by removing redundant variables. As a result, the proposed procedure can improve classification accuracy and computational efficiency, especially for classifiers such as the k-nearest neighbors algorithm.
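
As an illustration of the idea (not the paper's code), the sketch below fits scikit-learn's SparsePCA, whose objective combines an l1 penalty (alpha) with a ridge term (ridge_alpha), on a toy TF-IDF matrix and keeps only the words that receive a nonzero loading in at least one component; the corpus and parameter values are placeholders.

    # Minimal SPCA-based word selection sketch (assumed toy corpus).
    import numpy as np
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.decomposition import SparsePCA

    docs = ["cheap loan offer", "meeting at noon",
            "cheap offer now", "lunch meeting today"]

    vec = TfidfVectorizer()
    X = vec.fit_transform(docs).toarray()          # documents x words

    # l1 penalty (alpha) plus ridge term (ridge_alpha): an elastic-net-type
    # objective, as the abstract describes.
    spca = SparsePCA(n_components=2, alpha=1.0, ridge_alpha=0.01, random_state=0)
    spca.fit(X)

    # Keep only words with a nonzero loading in at least one component.
    mask = np.any(spca.components_ != 0, axis=0)
    selected = np.array(vec.get_feature_names_out())[mask]
    print(selected)                                # the reduced feature set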

Controls Methods Review of Single-Phase Boost PFC Converter : Average Current Mode Control, Predictive Current Mode Control, and Model Based Predictive Current Control

  • Hyeon-Joon Ko;Yeong-Jun Choi
    • Journal of the Korea Society of Computer and Information / v.28 no.12 / pp.231-238 / 2023
  • For boost PFC (Power Factor Correction) converters, various control methods are being studied to achieve unity power factor and low THD (Total Harmonic Distortion) of the AC input current. Among them, average current mode control, which regulates the average inductor current to follow the current reference, is the most widely used. Nowadays, however, as advanced digital control becomes possible with the development of digital processors, predictive control of boost PFC converters is receiving attention. Predictive control is classified into predictive current mode control, which generates the duty cycle in advance using a predictive algorithm, and model predictive current control, which selects switching operations by evaluating a model-based cost function. This paper therefore briefly reviews average current mode control, predictive current mode control, and model predictive current control of the boost PFC converter. In addition, current control over the entire load range and under disturbance conditions is compared and analyzed through simulation.
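
To make the model-based predictive idea concrete, here is a minimal Python sketch (illustrative values and a one-step Euler model, not the paper's design): at each sampling step, the next inductor current is predicted from the boost-converter model for both switch states, and the state minimizing a simple current-tracking cost is applied.

    # Model-based predictive current control sketch for a boost converter.
    def predict_iL(iL, s, vin, vout, L=1e-3, Ts=10e-6):
        """One-step Euler prediction of the inductor current.
        s = 1: switch on (inductor charges), s = 0: switch off (diode conducts)."""
        diL = (vin - (1 - s) * vout) / L
        return iL + diL * Ts

    def mpc_step(iL, i_ref, vin, vout):
        """Evaluate the cost |i_ref - i_pred| for each switch state and
        return the state with the lowest cost."""
        costs = {s: abs(i_ref - predict_iL(iL, s, vin, vout)) for s in (0, 1)}
        return min(costs, key=costs.get)

    # Example: current below its reference -> the controller turns the switch on.
    print(mpc_step(iL=1.0, i_ref=2.0, vin=100.0, vout=400.0))   # -> 1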

Boot storm Reduction through Artificial Intelligence Driven System in Virtual Desktop Infrastructure

  • Heejin Lee;Taeyoung Kim
    • Journal of the Korea Society of Computer and Information / v.29 no.7 / pp.1-9 / 2024
  • In this paper, we propose BRAIDS, a boot storm mitigation scheme consisting of an AI-based VDI usage prediction system and a virtual machine boot scheduler, to alleviate boot storms and improve service stability. Virtual Desktop Infrastructure (VDI) is an important technology for improving an organization's work productivity and increasing IT infrastructure efficiency. Boot storms, which occur when many virtual desktops boot simultaneously, cause poor performance and increased latency. The xgboost algorithm is trained on existing VDI usage data to predict future VDI usage. The scheduler then takes the predicted usage as input, defines a boot storm in terms of the hardware specifications of the VDI servers and virtual machines, and produces a schedule that boots virtual machines sequentially to alleviate boot storms. In a case study, the VDI usage prediction model showed high prediction accuracy and performance improvement, and we confirmed that the virtual machine boot scheduler can alleviate the boot storm phenomenon in a virtual desktop environment and utilize the IT infrastructure efficiently.
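
A rough sketch of the two stages follows (toy data and assumed feature choices, not the BRAIDS implementation): an xgboost regressor forecasts concurrent sessions per time slot, and a scheduler staggers virtual machine boots so each batch stays below a capacity limit derived from hardware specifications.

    # VDI usage forecast + sequential boot scheduling sketch.
    import numpy as np
    from xgboost import XGBRegressor

    # Toy history: features = [hour, weekday], target = concurrent VDI sessions.
    X = np.array([[9, d] for d in range(5)] + [[14, d] for d in range(5)])
    y = np.array([80, 85, 90, 88, 82, 30, 28, 35, 31, 29])
    model = XGBRegressor(n_estimators=50).fit(X, y)

    predicted = int(model.predict(np.array([[9, 0]]))[0])   # expected 9 a.m. demand

    def boot_schedule(n_vms, batch_size, interval_s=60):
        """Boot VMs sequentially in batches so that no more than
        batch_size machines boot at once (avoiding a boot storm)."""
        return [(t * interval_s, list(range(i, min(i + batch_size, n_vms))))
                for t, i in enumerate(range(0, n_vms, batch_size))]

    for start, vms in boot_schedule(predicted, batch_size=20):
        print(f"t+{start}s: boot VMs {vms[0]}..{vms[-1]}")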

The Optimal Turbo Coded V-BLAST Technique in the Adaptive Modulation System corresponding to each MIMO Scheme (적응 변조 시스템에서 각 MIMO 기법에 따른 최적의 터보 부호화된 V-BLAST 기법)

  • Lee, Kyung-Hwan;Ryoo, Sang-Jin;Choi, Kwang-Wook;You, Cheol-Woo;Hong, Dae-Ki;Kim, Dae-Jin;Hwang, In-Tae;Kim, Cheol-Sung
    • Journal of the Institute of Electronics Engineers of Korea TC / v.44 no.6 s.360 / pp.40-47 / 2007
  • In this paper, we propose and analyze an Adaptive Modulation System with an optimal Turbo Coded V-BLAST (Vertical Bell Labs Layered Space-Time) technique that adopts the extrinsic information from a MAP (Maximum A Posteriori) decoder with iterative decoding as the a priori probability in the two decoding procedures of V-BLAST: ordering and slicing. We also compare it against an Adaptive Modulation System using a conventional Turbo Coded V-BLAST technique that simply concatenates V-BLAST with Turbo coding, and against one whose Turbo Coded V-BLAST is decoded by the ML (Maximum Likelihood) decoding algorithm, observing both throughput performance and complexity. The comparison shows that the complexity of the proposed decoding algorithm is lower than that of ML decoding but higher than that of conventional V-BLAST decoding. However, the proposed system achieves better throughput than the conventional system over the whole SNR (Signal to Noise Ratio) range, and its throughput is close to that of the ML-decoded system. Specifically, simulations show maximum throughput improvements of about 350 kbps, 460 kbps, and 740 kbps over the conventional system for the respective MIMO schemes, suggesting that the benefit of the proposed decoding algorithm grows as the number of antennas increases.
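
For orientation, the following is a plain Python sketch of conventional V-BLAST detection (ordering, nulling, slicing, cancellation) under a zero-forcing assumption with BPSK symbols; the paper's contribution, feeding the MAP decoder's extrinsic information into the ordering and slicing steps, is not reproduced here.

    # Conventional zero-forcing V-BLAST (OSIC) sketch, noiseless BPSK example.
    import numpy as np

    def vblast_osic(H, y, symbols=np.array([1.0, -1.0])):
        """Ordered successive interference cancellation with ZF nulling."""
        y = y.astype(float).copy()
        remaining = list(range(H.shape[1]))
        x_hat = np.zeros(H.shape[1])
        for _ in range(H.shape[1]):
            G = np.linalg.pinv(H[:, remaining])           # nulling matrix
            k = int(np.argmin(np.sum(G**2, axis=1)))      # ordering: best post-detection SNR
            z = G[k] @ y                                   # nulling
            s = symbols[np.argmin(np.abs(symbols - z))]    # slicing (hard decision)
            idx = remaining[k]
            x_hat[idx] = s
            y -= H[:, idx] * s                             # cancellation
            remaining.pop(k)
        return x_hat

    H = np.array([[1.0, 0.5], [0.3, 1.2], [0.8, -0.4]])    # toy 3x2 channel
    x = np.array([1.0, -1.0])
    print(vblast_osic(H, H @ x))                           # recovers [1., -1.]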

A Dynamic Management Method for FOAF Using RSS and OLAP cube (RSS와 OLAP 큐브를 이용한 FOAF의 동적 관리 기법)

  • Sohn, Jong-Soo;Chung, In-Jeong
    • Journal of Intelligence and Information Systems / v.17 no.2 / pp.39-60 / 2011
  • Since the introduction of Web 2.0 technology, social network services have been recognized as a foundation of important future information technology. The advent of Web 2.0 has changed who creates content: in the existing web, content creators were service providers, whereas in the recent web they are service users. Users share experiences with other users and improve content quality, thereby increasing the importance of social networks. As a result, diverse forms of social network service have emerged from the relations and experiences of users. A social network is a network for constructing and expressing social relations among people who share interests and activities. Today's social network services are not confined to showing user interactions; they have developed to a level at which content generation and evaluation interact with each other. As the volume of content generated by social network services and the number of connections between users have drastically increased, social network extraction has become more complicated, and the following problems arise. First, objects in the social network have insufficient representational power. Second, the diverse connections among users cannot be fully expressed. Third, it is difficult to capture dynamic changes in the social network caused by changes in user interests. Lastly, there is no method for integrating and processing data efficiently in a heterogeneous distributed computing environment. The first and last problems can be solved by using FOAF, a tool for describing ontology-based user profiles for the construction of social networks. Solving the second and third problems, however, requires a novel technique that reflects dynamic changes in user interests and relations. In this paper, we propose a method that overcomes these problems of existing social network extraction by applying FOAF (a tool for describing user profiles) and RSS (a web content syndication mechanism) to an OLAP system in order to dynamically update and manage FOAF. We exploit data interoperability, an important characteristic of FOAF, and use RSS to reflect changes over time and in user interests. RSS provides a standard vocabulary for distributing site and content information in RDF/XML form. We collect personal information and relations of users with FOAF, collect user contents with RSS, and insert the collected data into a database organized as a star schema. The proposed system generates an OLAP cube from the data in the database, and the cube is processed by the Dynamic FOAF Management Algorithm. The algorithm consists of two functions: find_id_interest(), which extracts user interests during the input period, and find_relation(), which extracts users matching those interests. Finally, the proposed system reconstructs FOAF by reflecting the extracted relationships and interests of users. To justify the suggested idea, we present the implemented result together with its analysis. We used the C# language and an MS-SQL database, with FOAF and RSS data collected from livejournal.com. The results show that users' foaf:interest entries increased by an average of 19 percent over four weeks, and, in proportion to this change, the number of users' foaf:knows entries grew by an average of 9 percent over the same period. Because FOAF and RSS are basic data formats with wide support in Web 2.0 and social network services, the method has a definite advantage in utilizing user data distributed across diverse web sites and services regardless of language and computer type. Using the suggested method, we can provide better services that cope with rapid changes in user interests through the automatic updating of FOAF.
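
To make the two functions concrete, here is a minimal Python sketch with a hypothetical data layout (the paper's implementation uses C# and MS-SQL): the OLAP cube is flattened into (user, interest, week, count) facts, and the two functions of the Dynamic FOAF Management Algorithm query it.

    # Dynamic FOAF Management Algorithm sketch over flattened cube facts.
    facts = [
        ("alice", "semantic-web", 1, 4), ("alice", "olap", 2, 6),
        ("bob",   "olap",         2, 3), ("carol", "semantic-web", 1, 5),
    ]

    def find_id_interest(user, period):
        """Interests a user posted about (via RSS items) during the period."""
        return {i for u, i, w, c in facts if u == user and w in period and c > 0}

    def find_relation(user, period):
        """Users sharing at least one interest with `user` in the period,
        i.e., candidates for new foaf:knows links."""
        mine = find_id_interest(user, period)
        return {u for u, i, w, c in facts
                if u != user and w in period and i in mine}

    print(find_id_interest("alice", {1, 2}))   # {'semantic-web', 'olap'}
    print(find_relation("alice", {1, 2}))      # {'bob', 'carol'}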

A Study for Hybrid Honeypot Systems (하이브리드 허니팟 시스템에 대한 연구)

  • Lee, Moon-Goo
    • Journal of the Institute of Electronics and Information Engineers / v.51 no.11 / pp.127-133 / 2014
  • Honeypot systems are implemented to protect information assets from various malicious code. A honeypot is designed to elicit attacks so that internal systems are not attacked, or to collect information about malicious code. Existing honeypot systems, however, are designed for the purpose of collecting information, so they actively induce attackers by establishing a disguised server or a disguised client server and by providing disguised contents. When a disguised server is established, its hardware must be reinstalled on roughly a yearly cycle because of frequent disk input and output. When a disguised client server is established, there are operational problems such as securing professional personnel, because the analysis of the acquired information can only partially be automated. To solve these operational problems and the hardware problems of previous honeypots, this paper proposes a hybrid honeypot. The suggested hybrid honeypot has a honeywall, an analysis server, and an integrated console, and it processes attacks by categorizing them into two types: decoying (inducement) and false response (emulation) are connected to a common switch area so that a high-interaction server (type 1) and a low-interaction server (type 2) operate together. This hybrid honeypot thus runs both a low-interaction and a high-interaction honeypot. The analysis server converts attack types into hash values, classifies them with a correlation analysis algorithm, and sends them to the honeywall. The integrated monitoring console provides continuous monitoring, so the system is expected not only to analyze information about recent hacking methods and attack tools but also to enable anticipatory security responses.
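
As an illustration of that analysis-server flow (the record format, fields, and forwarding channel are all assumptions), the sketch below hashes each captured attack payload, counts recurring hashes as correlated events, and forwards summaries to a honeywall stub.

    # Attack-hashing and correlation sketch for the analysis server.
    import hashlib
    from collections import Counter

    events = [
        {"src": "10.0.0.5", "payload": "GET /admin.php"},
        {"src": "10.0.0.7", "payload": "GET /admin.php"},
        {"src": "10.0.0.5", "payload": "SSH brute force"},
    ]

    def attack_hash(event):
        """Stable hash of the attack payload, used as a correlation key."""
        return hashlib.sha256(event["payload"].encode()).hexdigest()[:12]

    correlated = Counter(attack_hash(e) for e in events)

    def send_to_honeywall(summary):
        print("honeywall <-", summary)     # placeholder for the real channel

    for h, count in correlated.items():
        send_to_honeywall({"hash": h, "occurrences": count})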

The Region-of-Interest Based Pixel Domain Distributed Video Coding With Low Decoding Complexity (관심 영역 기반의 픽셀 도메인 분산 비디오 부호)

  • Jung, Chun-Sung;Kim, Ung-Hwan;Jun, Dong-San;Park, Hyun-Wook;Ha, Jeong-Seok
    • Journal of the Institute of Electronics Engineers of Korea SP / v.47 no.4 / pp.79-89 / 2010
  • Recently, distributed video coding (DVC) has been actively studied for low-complexity video encoders. The encoder in DVC is much simpler than that of traditional video coding schemes such as H.264/AVC, but the decoder complexity increases. In this paper, we propose a Region-Of-Interest (ROI) based DVC scheme with low decoding complexity. The proposed scheme uses the ROI, the region in which objects are moving quickly, as the input of the Wyner-Ziv (WZ) encoder instead of the whole WZ frame. As a result, the complexity of both the encoder and the decoder is reduced, and the bit rate decreases. Experimental results show that the proposed scheme obtains a maximum PSNR gain of 0.95 dB on the Hall Monitor sequence and 1.87 dB on the Salesman sequence. Moreover, the encoder and decoder complexity of the proposed scheme is significantly reduced, by 73.7% and 63.3% respectively, over the traditional DVC scheme. In addition, for low decoding complexity we employ the layered belief propagation (LBP) algorithm, whose decoding convergence is 1.73 times faster than that of belief propagation, as the Low-Density Parity-Check (LDPC) decoder.
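
A simplified sketch of the ROI selection idea follows (an illustration, not the paper's codec): only blocks whose temporal difference energy exceeds a threshold are treated as the fast-moving region and passed on to the Wyner-Ziv encoder.

    # ROI block selection by frame-difference energy.
    import numpy as np

    def roi_blocks(prev, curr, block=8, thresh=50.0):
        """Return (row, col) indices of blocks with large temporal change."""
        diff = np.abs(curr.astype(float) - prev.astype(float))
        rois = []
        for r in range(0, curr.shape[0], block):
            for c in range(0, curr.shape[1], block):
                if diff[r:r+block, c:c+block].mean() > thresh:
                    rois.append((r // block, c // block))
        return rois

    rng = np.random.default_rng(0)
    prev = rng.integers(0, 255, (64, 64))
    curr = prev.copy()
    curr[16:24, 32:40] = 255                 # a moving object
    print(roi_blocks(prev, curr))            # only the changed block(s)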

Eye Tracking Using Neural Network and Mean-shift (신경망과 Mean-shift를 이용한 눈 추적)

  • Kang, Sin-Kuk;Kim, Kyung-Tai;Shin, Yun-Hee;Kim, Na-Yeon;Kim, Eun-Yi
    • Journal of the Institute of Electronics Engineers of Korea CI / v.44 no.1 / pp.56-63 / 2007
  • In this paper, an eye tracking method is presented that uses a neural network (NN) and the mean-shift algorithm to accurately detect and track a user's eyes against a cluttered background. In the proposed method, to deal with rigid head motion, the facial region is first obtained using a skin-color model and connected-component analysis. Thereafter, the eye regions are localized using an NN-based texture classifier that discriminates the facial region into eye and non-eye classes, which enables the method to accurately detect users' eyes even if they wear glasses. Once the eye region is localized, it is continuously and correctly tracked by the mean-shift algorithm. To assess the validity of the proposed method, it was applied to an interface system using eye movement and tested with a group of 25 users playing an 'aligns' game. The results show that the system processes more than 30 frames/sec on a PC for 320×240 input images and supplies user-friendly, convenient access to a computer in real-time operation.
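
As an illustration of the tracking stage only, here is a minimal OpenCV mean-shift sketch; the NN-based detector that supplies the initial eye window, the camera source, and the window coordinates are assumptions.

    # Mean-shift tracking of an eye window via hue back-projection.
    import cv2

    cap = cv2.VideoCapture(0)                        # assumed camera input
    ok, frame = cap.read()
    track_window = (100, 80, 40, 20)                 # (x, y, w, h) from the eye detector

    # Hue histogram of the initial eye region, used for back-projection.
    x, y, w, h = track_window
    hsv_roi = cv2.cvtColor(frame[y:y+h, x:x+w], cv2.COLOR_BGR2HSV)
    hist = cv2.calcHist([hsv_roi], [0], None, [180], [0, 180])
    cv2.normalize(hist, hist, 0, 255, cv2.NORM_MINMAX)

    criteria = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
        prob = cv2.calcBackProject([hsv], [0], hist, [0, 180], 1)
        _, track_window = cv2.meanShift(prob, track_window, criteria)
        x, y, w, h = track_window
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
        cv2.imshow("eye tracking", frame)
        if cv2.waitKey(30) == 27:                    # Esc quits
            break
    cap.release()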

Matching and Geometric Correction of Multi-Resolution Satellite SAR Images Using SURF Technique (SURF 기법을 활용한 위성 SAR 다중해상도 영상의 정합 및 기하보정)

  • Kim, Ah-Leum;Song, Jung-Hwan;Kang, Seo-Li;Lee, Woo-Kyung
    • Korean Journal of Remote Sensing / v.30 no.4 / pp.431-444 / 2014
  • As applications of spaceborne SAR imagery are extended, there is increased demand for accurate registration to support better understanding and fusion of radar images. It has become common to adopt multi-resolution SAR images for wide-area reconnaissance. Geometric correction of SAR images can be performed using satellite orbit and attitude information; however, inherent errors in the SAR sensor's attitude and in the ground geographical data tend to cause geometric errors in the produced SAR image. These errors should be corrected when the SAR images are used for multi-temporal analysis, change detection, or image fusion with other sensors. The undesirable registration errors can be corrected against true ground control points to produce complete SAR products. The Speeded Up Robust Features (SURF) technique is an efficient algorithm for extracting ground control points from images, but it has been considered inappropriate for SAR images because of their high speckle noise. In this paper, an attempt is made to apply the SURF algorithm to SAR images for registration and fusion. Matched points are extracted for varying Hessian and SURF matching thresholds, and the performance is analyzed by measuring image matching accuracy. A number of performance measures concerning image registration are suggested to validate the use of SURF for spaceborne SAR images. Various simulation methodologies are suggested to validate the use of SURF for geometric correction and image registration, and it is shown that a careful choice of input parameters to the SURF algorithm is required when applying it to spaceborne SAR images of moderate resolution.
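
As a rough illustration of such a pipeline, the sketch below applies OpenCV's SURF (available in opencv-contrib-python) with a Hessian threshold and a ratio-test matching threshold, then estimates a geometric correction with RANSAC; the file names and parameter values are placeholders, not the paper's settings.

    # SURF matching and homography estimation between two SAR images.
    import cv2
    import numpy as np

    img1 = cv2.imread("sar_ref.png", cv2.IMREAD_GRAYSCALE)   # assumed file names
    img2 = cv2.imread("sar_new.png", cv2.IMREAD_GRAYSCALE)

    surf = cv2.xfeatures2d.SURF_create(hessianThreshold=400)  # Hessian parameter
    k1, d1 = surf.detectAndCompute(img1, None)
    k2, d2 = surf.detectAndCompute(img2, None)

    # Ratio-test matching (the "SURF matching threshold" being varied).
    matches = cv2.BFMatcher().knnMatch(d1, d2, k=2)
    good = [m for m, n in matches if m.distance < 0.7 * n.distance]

    # Estimate the geometric correction from matched control points.
    src = np.float32([k1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([k2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    H, inliers = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    print(f"{len(good)} matches, {int(inliers.sum())} RANSAC inliers")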

Self-Diagnosing Disease Classification System for Oriental Medical Science with Refined Fuzzy ART Algorithm (Refined Fuzzy ART 알고리즘을 이용한 한방 자가 질병 분류 시스템)

  • Kim, Kwang-Baek
    • The Journal of the Korea Contents Association / v.9 no.7 / pp.1-8 / 2009
  • In this paper, we propose a home medical system that integrates a self-diagnosing disease classification system with a tele-consulting system through communication technology. The proposed disease classification system lets users self-diagnose their health condition based on oriental medical science using a fuzzy neural network algorithm. The prepared database includes 72 diseases and their associated symptoms based on the famous medical text "Dong-eui-bo-gam". The system extracts the three most probable diseases from a user's symptoms by analyzing the disease database with fuzzy neural network technology. Technically, the user's symptoms are used as an input vector, and a clustering algorithm based on a fuzzy neural network is performed. The degree of fuzzy membership is computed for each probable cluster, and the system infers the three most probable diseases together with their degrees of membership. This information is then sent to medical doctors via the tele-consulting module, so the user can finally receive an appropriate consultation from a medical doctor via video. Oriental medical doctors verified the accuracy of the disease diagnosis and the practicality of the overall system in the real world.
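
For reference, below is a compact Python sketch of standard fuzzy ART clustering on complement-coded symptom vectors; the paper's refined variant, its membership-degree inference, and the 72-disease database are not reproduced, and the toy symptom vectors are invented for illustration.

    # Fuzzy ART clustering sketch (choice function, vigilance test, learning).
    import numpy as np

    def fuzzy_art(inputs, rho=0.7, alpha=0.001, beta=1.0):
        """Return a cluster label per input using the fuzzy ART rules."""
        weights = []                                   # one weight vector per cluster
        labels = []
        for x in inputs:
            I = np.concatenate([x, 1.0 - x])           # complement coding
            # Choice function for each committed category.
            T = [np.minimum(I, w).sum() / (alpha + w.sum()) for w in weights]
            for j in np.argsort(T)[::-1]:
                w = weights[j]
                if np.minimum(I, w).sum() / I.sum() >= rho:    # vigilance test
                    weights[j] = beta * np.minimum(I, w) + (1 - beta) * w
                    labels.append(j)
                    break
            else:                                      # no category passed: new cluster
                weights.append(I.copy())
                labels.append(len(weights) - 1)
        return labels

    # Toy symptom vectors (1 = symptom present); similar users share a cluster.
    symptoms = np.array([[1, 0, 1, 0], [1, 0, 0.9, 0], [0, 1, 0, 1]])
    print(fuzzy_art(symptoms))                         # e.g. [0, 0, 1]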