• Title/Summary/Keyword: 한국적 (Korean)

Search Results: 342,669

Comparative Analysis of ViSCa Platform-based Mobile Payment Service with other Cases (스마트카드 가상화(ViSCa) 플랫폼 기반 모바일 결제 서비스 제안 및 타 사례와의 비교분석)

  • Lee, June-Yeop; Lee, Kyoung-Jun
    • Journal of Intelligence and Information Systems / v.20 no.2 / pp.163-178 / 2014
  • This research proposes Virtualization of Smart Cards (ViSCa), a security system that aims to provide a multi-device platform for deploying services that require a strong security protocol, both for access and authentication and for the execution of applications, and analyzes a ViSCa platform-based mobile payment service by comparing it with similar cases. Today, the emergence of new ICT, the diffusion of new user devices (smartphones, tablet PCs, and so on), and the growth of Internet penetration are creating many innovative services, yet most of these applications require private information to be shared, which means that security breaches and illegal access to that information are real threats that have to be addressed. Mobile payment, one of these innovative services, faces the same threats because it often requires user identification, an authentication procedure, and the sharing of confidential data; thus, an extra layer of security is needed in its communication and execution protocols. ViSCa is a holistic, centrally managed security concept that pursues a ubiquitous multi-device platform for mobile payment services that demand a powerful security protocol, both for access and authentication and for application execution. In this sense, ViSCa offers full interoperability and full access from any user device without any loss of security. The concept prevents possible attacks by third parties and guarantees the confidentiality of personal data, bank accounts, and private financial information. The ViSCa concept is split into two phases: execution of the user authentication protocol on the user device, and a cloud architecture that executes the secure application. Secure service access is therefore guaranteed at any time, anywhere, and through any device that supports the required security mechanisms. The security level is further improved by using virtualization technology in the cloud: terminal virtualization is used to virtualize the smart card hardware, and the virtualized smart cards are managed as a whole through mobile cloud technology in the ViSCa platform-based mobile payment service. This entire process is referred to as Smart Card as a Service (SCaaS). The ViSCa platform-based mobile payment service virtualizes the smart card used as a payment instrument and loads it into the mobile cloud; the user authenticates through an application, logs on to the mobile cloud, and chooses one of the virtualized smart cards as a payment method. To define the scope of the research, which compares the ViSCa platform-based mobile payment service with similar cases, we categorized the mobile payment services of prior research by feature and service type. Both groups store credit card data in the mobile device and settle the payment at an offline merchant. By the location where the electronic financial transaction data are stored, the services can be divided into two main types: the "App Method," which loads the data onto a server connected to the application, and the "Mobile Card Method," which stores the financial transaction data in an integrated circuit (IC) chip built into the mobile device's secure element (SE). From prior research on the acceptance factors of mobile payment services and their market environment, we derived six key factors for comparative analysis: economy, generality, security, convenience (ease of use), applicability, and efficiency. Within the chosen group, we compared and analyzed the selected cases against the ViSCa platform-based mobile payment service.

A Methodology for Extracting Shopping-Related Keywords by Analyzing Internet Navigation Patterns (인터넷 검색기록 분석을 통한 쇼핑의도 포함 키워드 자동 추출 기법)

  • Kim, Mingyu; Kim, Namgyu; Jung, Inhwan
    • Journal of Intelligence and Information Systems / v.20 no.2 / pp.123-136 / 2014
  • Recently, online shopping has developed further as the use of the Internet and a variety of smart mobile devices becomes more prevalent. The increase in the scale of such shopping has led to the creation of many Internet shopping malls. Consequently, competition among online retailers is increasingly fierce, and as a result many Internet shopping malls make significant efforts to attract online users to their sites. One such effort is keyword marketing, whereby a retail site pays a fee to expose its link to potential customers when they enter a specific keyword on an Internet portal site. The price of each keyword is generally estimated from the keyword's frequency of appearance. However, it is widely accepted that the price of a keyword cannot be based solely on its frequency, because many keywords appear frequently yet have little relationship to shopping. This implies that it is unreasonable for an online shopping mall to spend a great deal on some keywords simply because people use them frequently. Therefore, from the perspective of shopping malls, a specialized process is required to extract meaningful keywords. Further, the demand for automating this extraction process is increasing because of the drive to improve online sales performance. In this study, we propose a methodology that can automatically extract only shopping-related keywords from the entire set of search keywords used on portal sites. We define a shopping-related keyword as a keyword that is used directly before shopping behavior; in other words, only search keywords that lead from the search results page to shopping-related pages are extracted from the entire set of search keywords. A comparison is then made between the rankings of the extracted keywords and the rankings of the entire set of search keywords. Two types of data are used in the experiment: web browsing history from July 1, 2012 to June 30, 2013, and site information, obtained from a website-ranking service and the biggest portal site in Korea. The original sample dataset contains 150 million transaction logs. First, portal sites are selected and the search keywords used on those sites are extracted; search keywords can be extracted easily by simple parsing. The extracted keywords are ranked according to their frequency. The experiment uses approximately 3.9 million search results from Korea's largest search portal site, from which a total of 344,822 search keywords were extracted. Next, using the web browsing history and site information, the shopping-related keywords were taken from the entire set of search keywords, yielding 4,709 shopping-related keywords. For performance evaluation, we compared the hit ratios of all search keywords with those of the shopping-related keywords. To do this, we extracted 80,298 search keywords from several Internet shopping malls and chose the top 1,000 keywords as the set of true shopping keywords. We measured the precision, recall, and F-score of the entire keyword set and of the shopping-related keywords, where the F-score is the harmonic mean of precision and recall. The precision, recall, and F-score of the shopping-related keywords derived by the proposed methodology were higher than those of the entire keyword set. This study proposes a scheme that can obtain shopping-related keywords in a relatively simple manner.
We could easily extract shopping-related keywords simply by examining transactions whose next visit is a shopping mall. The resultant shopping-related keyword set is expected to be a useful asset for many shopping malls that participate in keyword marketing. Moreover, the proposed methodology can be easily applied to the construction of special area-related keywords as well as shopping-related ones.
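
As an illustration of the extraction rule described in this abstract (a keyword counts as shopping-related when the visit immediately following the search lands on a shopping-mall site), the following Python sketch uses a hypothetical transaction layout; it is not the authors' code.

```python
# Minimal sketch of the extraction rule and of the precision/recall/F-score
# evaluation described above. The (keyword, next_domain) layout is assumed.
from collections import Counter
from typing import List, Tuple, Set

# each transaction: (search_keyword, domain_of_next_visited_site)
Transaction = Tuple[str, str]

def extract_shopping_keywords(transactions: List[Transaction],
                              shopping_domains: Set[str]) -> Counter:
    """Count only keywords whose next visit is a known shopping mall."""
    hits = Counter()
    for keyword, next_domain in transactions:
        if next_domain in shopping_domains:
            hits[keyword] += 1
    return hits

def precision_recall_f1(extracted: Set[str], true_shopping: Set[str]):
    tp = len(extracted & true_shopping)
    precision = tp / len(extracted) if extracted else 0.0
    recall = tp / len(true_shopping) if true_shopping else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

# toy usage
logs = [("sneakers", "mall-a.example"), ("weather", "news.example"),
        ("laptop bag", "mall-b.example")]
malls = {"mall-a.example", "mall-b.example"}
ranked = extract_shopping_keywords(logs, malls)
print(ranked.most_common())
print(precision_recall_f1(set(ranked), {"sneakers", "laptop bag", "headphones"}))
```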

A Method for Evaluating News Value based on Supply and Demand of Information Using Text Analysis (텍스트 분석을 활용한 정보의 수요 공급 기반 뉴스 가치 평가 방안)

  • Lee, Donghoon; Choi, Hochang; Kim, Namgyu
    • Journal of Intelligence and Information Systems / v.22 no.4 / pp.45-67 / 2016
  • Given the recent development of smart devices, users are producing, sharing, and acquiring a variety of information via the Internet and social network services (SNSs). Because users tend to use multiple media simultaneously according to their goals and preferences, domestic SNS users use around 2.09 media concurrently on average. Since the information provided by such media is usually textually represented, recent studies have been actively conducting textual analysis in order to understand users more deeply. Earlier studies using textual analysis focused on analyzing a document's contents without substantive consideration of the diverse characteristics of the source medium. However, current studies argue that analytical and interpretive approaches should be applied differently according to the characteristics of a document's source. Documents can be classified into the following types: informative documents for delivering information, expressive documents for expressing emotions and aesthetics, operational documents for inducing the recipient's behavior, and audiovisual media documents for supplementing the above three functions through images and music. Further, documents can be classified according to their contents, which comprise facts, concepts, procedures, principles, rules, stories, opinions, and descriptions. Documents have unique characteristics according to the source media by which they are distributed. In terms of newspapers, only highly trained people tend to write articles for public dissemination. In contrast, with SNSs, various types of users can freely write any message and such messages are distributed in an unpredictable way. Again, in the case of newspapers, each article exists independently and does not tend to have any relation to other articles. However, messages (original tweets) on Twitter, for example, are highly organized and regularly duplicated and repeated through replies and retweets. There have been many studies focusing on the different characteristics between newspapers and SNSs. However, it is difficult to find a study that focuses on the difference between the two media from the perspective of supply and demand. We can regard the articles of newspapers as a kind of information supply, whereas messages on various SNSs represent a demand for information. By investigating traditional newspapers and SNSs from the perspective of supply and demand of information, we can explore and explain the information dilemma more clearly. For example, there may be superfluous issues that are heavily reported in newspaper articles despite the fact that users seldom have much interest in these issues. Such overproduced information is not only a waste of media resources but also makes it difficult to find valuable, in-demand information. Further, some issues that are covered by only a few newspapers may be of high interest to SNS users. To alleviate the deleterious effects of information asymmetries, it is necessary to analyze the supply and demand of each information source and, accordingly, provide information flexibly. Such an approach would allow the value of information to be explored and approximated on the basis of the supply-demand balance. Conceptually, this is very similar to the price of goods or services being determined by the supply-demand relationship. Adopting this concept, media companies could focus on the production of highly in-demand issues that are in short supply. 
In this study, we selected Internet news sites and Twitter as representative media of information supply and demand, respectively. We present the News Value Index (NVI), which evaluates the value of a news issue in terms of the volume of Twitter messages associated with it, and we visualize the change of information value over time using the NVI. We conducted an analysis using 387,014 news articles and 31,674,795 Twitter messages. The results revealed an interesting pattern: most issues show a lower NVI than the average over all issues, whereas a few issues show a steadily higher NVI than the average.
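
The abstract does not reproduce the exact NVI formula, so the Python sketch below only illustrates the underlying supply-demand idea under an assumed definition: the Twitter message count (demand) divided by the news article count (supply) for each issue and time window, normalized by the average ratio over all issues.

```python
# Illustrative sketch only: the formula (demand/supply normalized by the
# average) and the (issue, week) keying are assumptions, not the paper's NVI.
from typing import Dict, Tuple

def news_value_index(article_counts: Dict[Tuple[str, int], int],
                     tweet_counts: Dict[Tuple[str, int], int]) -> Dict[Tuple[str, int], float]:
    raw = {}
    for key, supply in article_counts.items():
        demand = tweet_counts.get(key, 0)
        raw[key] = demand / supply if supply else 0.0
    mean = sum(raw.values()) / len(raw) if raw else 1.0
    return {key: value / mean for key, value in raw.items()}  # >1.0 = in-demand issue

# toy usage: issue A is oversupplied, issue B is undersupplied relative to demand
articles = {("A", 1): 120, ("B", 1): 10}
tweets = {("A", 1): 3000, ("B", 1): 2500}
print(news_value_index(articles, tweets))
```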

Mapping Categories of Heterogeneous Sources Using Text Analytics (텍스트 분석을 통한 이종 매체 카테고리 다중 매핑 방법론)

  • Kim, Dasom; Kim, Namgyu
    • Journal of Intelligence and Information Systems / v.22 no.4 / pp.193-215 / 2016
  • In recent years, the proliferation of diverse social networking services has led users to use many media simultaneously depending on their individual purposes and tastes. Moreover, while collecting information about particular themes, they usually employ various media such as social networking services, Internet news, and blogs. In terms of management, however, each document circulated through these media is placed in a different category on the basis of each source's policy and standards, hindering any attempt to conduct research on a specific category across different kinds of sources. For example, documents about "applying for foreign travel" can be classified into "Information Technology," "Travel," or "Life and Culture" according to the particular standard of each source. Likewise, with different definitions and levels of specificity, similar categories can be named and structured differently from source to source. To overcome these limitations, this study proposes a plan for category mapping between different sources and media while maintaining each medium's existing category system as it is. Specifically, by re-classifying individual documents from the viewpoint of diverse sources and storing the results of such classification as extra attributes, this study proposes a logical layer by which users can search for a specific document from multiple heterogeneous sources with different category names as if they belonged to the same source. In addition, experiments were conducted on 6,000 news articles collected from two Internet news portals to compare accuracy across sources, between supervised and semi-supervised learning, and between homogeneous and heterogeneous learning data. It is particularly interesting that in some categories the classification accuracy of semi-supervised learning using heterogeneous learning data proved to be higher than that of supervised and semi-supervised learning using homogeneous learning data. This study is significant in the following respects. First, it proposes a logical plan for building a system that integrates and manages heterogeneous media with different classification systems while maintaining the existing physical classification systems as they are. The results exhibit very different classification accuracies depending on the heterogeneity of the learning data, which is expected to spur further studies on enhancing the performance of the proposed methodology through the analysis of per-category characteristics. In addition, with increasing demand for the search, collection, and analysis of documents from diverse media, the scope of Internet search is no longer restricted to one medium; however, since each medium has a different categorical structure and naming, it is actually very difficult to search for a specific category across heterogeneous media. The proposed methodology is also significant in that, when a user selects a desired site, it allows documents from other sources to be retrieved according to that site's category standards while keeping each site's existing characteristics and structure intact. The proposed methodology needs to be further complemented in the following respects. First, since only an indirect comparison and evaluation of its performance was made, future studies need to conduct more direct tests of its accuracy.
That is, after documents of a target source are re-classified according to the category system of an existing source, the extent to which the classification is accurate needs to be verified through evaluation by actual users. In addition, classification accuracy needs to be increased by making the methodology more sophisticated. Furthermore, a better understanding of the categories in which heterogeneous semi-supervised learning showed higher classification accuracy than supervised learning could help in obtaining heterogeneous documents from diverse media and in devising ways to use them to enhance the accuracy of document classification.
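
The "extra attribute" idea described above can be illustrated with a small Python sketch: documents keep their source's own category, and a category under another source's scheme is attached by re-classifying the text. The data layout and the choice of a TF-IDF/Naive Bayes classifier are assumptions made for illustration, not the paper's exact method.

```python
# Minimal sketch, under assumed data structures, of the "logical layer":
# a source-A category is added to source-B documents as an extra attribute,
# while source B's original category system is left untouched.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# labeled documents from source A (its own category scheme)
source_a_texts = ["visa application and passport renewal guide",
                  "new smartphone released with faster chip"]
source_a_labels = ["Travel", "Information Technology"]

# documents from source B, categorized under B's own scheme
source_b_docs = [{"text": "tips for applying for a foreign travel visa",
                  "category_b": "Life and Culture"}]

# train a classifier on source A's scheme and re-classify source B's documents
clf = make_pipeline(TfidfVectorizer(), MultinomialNB())
clf.fit(source_a_texts, source_a_labels)

for doc in source_b_docs:
    # extra attribute: the document now also carries a source-A category
    doc["category_a"] = clf.predict([doc["text"]])[0]

print(source_b_docs)
```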

Job Preference Analysis and Job Matching System Development for the Middle Aged Class (중장년층 일자리 요구사항 분석 및 인력 고용 매칭 시스템 개발)

  • Kim, Seongchan; Jang, Jincheul; Kim, Seong Jung; Chin, Hyojin; Yi, Mun Yong
    • Journal of Intelligence and Information Systems / v.22 no.4 / pp.247-264 / 2016
  • With the rapid acceleration of the low birth rate and population aging, the employment of neglected groups of people, including the middle-aged class, is a crucial issue in South Korea. In particular, in the 2010s the number of middle-aged people who want to find a new job after retirement is increasing significantly with the arrival of the retirement of the baby boom generation (born 1955-1963). Despite the importance of matching jobs to this emerging middle-aged class, neither private job portals nor the Korean government provides any online job service tailored for them. A gigantic amount of job information is available online; however, current recruiting systems do not meet the demand of the middle-aged class, as their primary targets are young workers. A specially designed recruiting system for the middle-aged is therefore urgently needed. Meanwhile, when users search for desired occupations on the Worknet website provided by the Korean Ministry of Employment and Labor, they find it inconvenient to search for similar jobs because Worknet filters search results on the basis of exact matches of a preferred job code. According to our analysis of Worknet data, only about 24% of job seekers landed on a job position consistent with their initial preferred job code, while the rest landed on a position different from their initial preference. To improve the situation, particularly for the middle-aged class, we investigate a soft job matching technique by performing the following: 1) we review user behavior logs of Worknet, a public job recruiting system set up by the Korean government, and point out key system design implications for the middle-aged. Specifically, we analyze the job postings that include preferential tags for the middle-aged in order to disclose what types of jobs are in favor of the middle-aged; 2) we develop a new occupation classification scheme for the middle-aged, the Korea Occupation Classification for the Middle-aged (KOCM), based on the similarity between jobs, by reorganizing and modifying a general occupation classification scheme. From the perspective of job placement, an occupation classification scheme is a way to connect enterprises and job seekers and a basic mechanism for placement. A key feature of KOCM is the establishment of the Simple Labor category, which is the category most requested by enterprises; and 3) we design MOMA (Middle-aged Occupation Matching Algorithm), a hybrid job matching algorithm comprising constraint-based reasoning and case-based reasoning. MOMA incorporates KOCM to expand queries so as to search for similar jobs in the database, and uses cosine similarity between the user requirement and each job posting to rank a set of postings in terms of preferred job code, salary, distance, and job type. The developed system using MOMA demonstrates about a 20-fold improvement over hard matching performance. In implementing the algorithm for a web-based recruiting application for the middle-aged, we also considered the usability issue of making the system easier to use, which is especially important for this particular class of users.
That is, we improved the usability of the system during the job search process for middle-aged users by asking them to enter only a few simple, core pieces of information, such as preferred job (job code), salary, and allowable distance to the workplace, enabling them to find a suitable job efficiently. The website implemented with MOMA should contribute to improving job search for the middle-aged class, and we expect the overall approach to be applicable to other groups of people to improve job matching results.
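
The following Python sketch illustrates the kind of soft matching described above: hard constraints filter postings, the preferred job code is expanded to similar codes through a KOCM-like mapping, and the remaining postings are ranked by cosine similarity. All data structures, codes, and normalizations are hypothetical; this is not the authors' MOMA implementation.

```python
# Illustrative soft-matching sketch (not MOMA itself): constraint filtering,
# job-code expansion via a KOCM-like mapping, then cosine-similarity ranking.
import math
from typing import Dict, List

# hypothetical KOCM-style expansion: a job code maps to itself plus similar codes
SIMILAR_CODES = {"A101": {"A101", "A102", "A110"}}

def cosine(u: List[float], v: List[float]) -> float:
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def featurize(code_match: bool, salary: float, distance_km: float, full_time: bool) -> List[float]:
    # crude normalization so all features fall roughly into [0, 1]
    return [1.0 if code_match else 0.0, salary / 5000.0,
            1.0 / (1.0 + distance_km), 1.0 if full_time else 0.0]

def match(user: Dict, postings: List[Dict]) -> List[Dict]:
    wanted = SIMILAR_CODES.get(user["job_code"], {user["job_code"]})  # soft expansion
    target = featurize(True, user["salary"], 0.0, user["full_time"])
    ranked = []
    for p in postings:
        if p["distance_km"] > user["max_distance_km"]:      # hard constraint
            continue
        vec = featurize(p["job_code"] in wanted, p["salary"], p["distance_km"], p["full_time"])
        ranked.append((cosine(target, vec), p))
    return [p for _, p in sorted(ranked, key=lambda t: t[0], reverse=True)]

user = {"job_code": "A101", "salary": 2500, "max_distance_km": 20, "full_time": True}
postings = [
    {"id": 1, "job_code": "A110", "salary": 2400, "distance_km": 5, "full_time": True},
    {"id": 2, "job_code": "B200", "salary": 2600, "distance_km": 8, "full_time": True},
    {"id": 3, "job_code": "A101", "salary": 2300, "distance_km": 35, "full_time": True},
]
print([p["id"] for p in match(user, postings)])   # posting 3 is filtered out by distance
```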

ATM Cell Encipherment Method using Rijndael Algorithm in Physical Layer (Rijndael 알고리즘을 이용한 물리 계층 ATM 셀 보안 기법)

  • Im Sung-Yeal; Chung Ki-Dong
    • The KIPS Transactions: Part C / v.13C no.1 s.104 / pp.83-94 / 2006
  • This paper describes an ATM cell encipherment method using the Rijndael algorithm, adopted as the AES (Advanced Encryption Standard) by NIST in 2001. ISO 9160 specifies the requirements for physical-layer data processing in encryption/decryption. To demonstrate the method, we implemented ATM data encipherment equipment that satisfies the requirements of ISO 9160 and verified encipherment/decipherment processing at the ATM STM-1 rate (155.52 Mbps). The DES algorithm processes data in 64-bit blocks with a 64-bit key (56 effective bits), whereas the Rijndael algorithm processes data in 128-bit blocks with a selectable key length of 128, 192, or 256 bits; it is therefore more flexible for high-bit-rate data processing and stronger in encryption strength than DES. For real-time encryption of the high-bit-rate data stream, the Rijndael algorithm was implemented in an FPGA in this experiment. The boundary of each serial UNI cell is detected by the CRC method, and for a user data cell the 48-octet (384-bit) payload is converted to parallel form and transferred to three Rijndael encipherment modules in 128-bit blocks. After encryption is complete, the header stored in a buffer is attached to the enciphered payload and the cell is retransmitted. At the receiving end, the cell boundary is again detected by the CRC method and the payload type is determined. If the payload type is user data, the payload is transferred to the three Rijndael decryption modules in 128-bit blocks for decryption. In the case of a maintenance cell, the payload is extracted without decryption.
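
For illustration, the per-cell processing described above can be sketched in software: the 5-octet header is kept in the clear while the 48-octet payload is enciphered as three 128-bit Rijndael (AES-128) blocks, mirroring the three parallel FPGA modules. Treating each block independently (ECB-like) is an assumption of this sketch rather than a statement about the paper's design; the example uses the pycryptodome library.

```python
# Software sketch (illustration only) of per-cell Rijndael encipherment:
# 53-octet ATM cell = 5-octet header (kept clear) + 48-octet payload,
# enciphered as three 128-bit AES blocks. Requires pycryptodome.
from Crypto.Cipher import AES

HEADER_LEN, PAYLOAD_LEN, BLOCK_LEN = 5, 48, 16

def encipher_cell(cell: bytes, key: bytes) -> bytes:
    assert len(cell) == HEADER_LEN + PAYLOAD_LEN and len(key) == 16
    header, payload = cell[:HEADER_LEN], cell[HEADER_LEN:]
    aes = AES.new(key, AES.MODE_ECB)
    # 48 octets = exactly three 128-bit blocks, one per "module"
    blocks = [payload[i:i + BLOCK_LEN] for i in range(0, PAYLOAD_LEN, BLOCK_LEN)]
    return header + b"".join(aes.encrypt(b) for b in blocks)

def decipher_cell(cell: bytes, key: bytes) -> bytes:
    header, payload = cell[:HEADER_LEN], cell[HEADER_LEN:]
    aes = AES.new(key, AES.MODE_ECB)
    blocks = [payload[i:i + BLOCK_LEN] for i in range(0, PAYLOAD_LEN, BLOCK_LEN)]
    return header + b"".join(aes.decrypt(b) for b in blocks)

key = bytes(range(16))                       # 128-bit key (toy value)
cell = bytes(5) + bytes(range(48))           # 5-octet header + 48-octet payload
assert decipher_cell(encipher_cell(cell, key), key) == cell
```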

Etherification of n-Butanol to Di-n-Butyl Ether over H3+xPW12-xNbxO40 (x=0, 1, 2, 3) Keggin and H6+xP2W18-xNbxO62 (x=0, 1, 2, 3) Wells-Dawson Heteropolyacid Catalysts (Keggin형 H3+xPW12-xNbxO40 (x=0, 1, 2, 3) 및 Wells-Dawson형 H6+xP2W18-xNbxO62 (x=0, 1, 2, 3) 헤테로폴리산 촉매를 이용한 n-Butanol로부터 Di-n-Butyl Ether의 제조)

  • Kim, Jeong Kwon; Choi, Jung Ho; Yi, Jongheop; Song, In Kyu
    • Korean Chemical Engineering Research / v.50 no.2 / pp.251-256 / 2012
  • Etherification of n-butanol to di-n-butyl ether was carried out over Keggin-type $H_{3+x}PW_{12-x}Nb_xO_{40}$ (x=0, 1, 2, 3) and Wells-Dawson-type $H_{6+x}P_2W_{18-x}Nb_xO_{62}$ (x=0, 1, 2, 3) heteropolyacid catalysts. Niobium-substituted Keggin and Wells-Dawson heteropolyacid catalysts with different niobium contents were prepared, and their successful preparation was confirmed by FT-IR, ICP-AES, and $^{31}P$ NMR analyses. Their acid properties were determined by $NH_3$-TPD (temperature-programmed desorption) measurements. In both series, the heteropolyacid catalysts showed different acid properties depending on the niobium content, and a correlation between the acid properties and the catalytic activity was then established. The acidity of the Keggin and Wells-Dawson heteropolyacid catalysts decreased with increasing niobium content, and the conversion of n-butanol and the yield of di-n-butyl ether increased with increasing acidity of the catalysts, regardless of the identity of the heteropolyacid catalyst (i.e., without structural sensitivity). Thus, the acidity of the heteropolyacid catalysts served as an important factor determining catalytic performance in the etherification of n-butanol to di-n-butyl ether.

A Study of Hydrodynamics and Reaction Characteristics in Relation to the Desulfurization Temperatures of Zn-Based Solid Sorbent in the Lab-scale High Pressure and High Temperature Desulfurization Process (실험실규모 고온고압건식탈황공정의 수력학적 특성 및 탈황온도에 따른 아연계 탈황제의 반응특성 연구)

  • Kyung, Dae-Hyun; Kim, Jae-Young; Jo, Sung-Ho; Park, Young Cheol; Moon, Jong-Ho; Yi, Chang-Keun; Baek, Jeom-In
    • Korean Chemical Engineering Research / v.50 no.3 / pp.492-498 / 2012
  • In this study, hydrodynamic characteristics such as the solid circulation rate and the voidage in the desulfurizer, together with the reaction characteristics of Zn-based solid sorbents, were investigated using a lab-scale high-pressure, high-temperature desulfurization process. The continuous HGD (hot gas desulfurization) process consists of a fast-fluidized-bed-type desulfurizer (a 6.2 m tall pipe of 0.015 m i.d.), a bubbling-fluidized-bed-type regenerator (a 1.6 m tall bed of 0.053 m i.d.), a loop-seal, and pressure control valves. The solid circulation rate was measured while varying the slide-gate opening position and the gas velocity and temperature of the desulfurizer, and the voidage in the desulfurizer was derived in the same way. At the same gas velocities and slide-gate openings, the solid circulation rate was similar at $300^{\circ}C$ and $550^{\circ}C$, and lower at those temperatures than at room temperature. The voidage in the desulfurizer indicated a fast fluidized bed regime when the slide-gate opening was 10~20%, and a turbulent fluidized bed regime when the opening was 30~40%. The reaction characteristics of the Zn-based solid sorbent were investigated at different desulfurization temperatures at 20 atm during continuous operation. The $H_2S$ removal efficiency tended to decrease below a desulfurization temperature of $450^{\circ}C$; therefore, a 10-hour continuous operation was performed at a desulfurization temperature of $500^{\circ}C$ in order to maintain high $H_2S$ removal efficiency. During the 10-hour continuous operation, the $H_2S$ removal efficiency was above 99.99%, because no $H_2S$ was detected after desulfurization at an inlet $H_2S$ concentration of 5,000 ppmv, as measured by a UV analyzer (Radas2) and a detector tube (GASTEC) whose lower detection limit is 1 ppmv.

Performance and Economic Analysis of Domestic Supercritical Coal-Fired Power Plant with Post-Combustion CO2 Capture Process (국내 초임계 석탄화력발전소에 연소 후 CO2 포집공정 설치 시 성능 및 경제성 평가)

  • Lee, Ji-Hyun; Kwak, No-Sang; Lee, In-Young; Jang, Kyung-Ryoung; Shim, Jae-Goo
    • Korean Chemical Engineering Research / v.50 no.2 / pp.365-370 / 2012
  • In this study, an economic analysis of a supercritical coal-fired power plant with a $CO_2$ capture process was performed. For this purpose, a chemical absorption method using an amine solvent, which is commercially available and most suitable for existing thermal power plants, was studied. To evaluate the economics of a coal-fired power plant with a post-combustion $CO_2$ capture process in Korea, the energy penalty after $CO_2$ capture was calculated using the power equivalent factor suggested by Bolland et al., and the overnight cost of the power plant (i.e., the plant construction cost) and the operating costs reported by the IEA (International Energy Agency) were used. Based on the chemical absorption method using an amine solvent and a stripper regeneration energy of 3.31 GJ/$tonCO_2$, the net power efficiency was reduced from 41.0% (without $CO_2$ capture) to 31.6% (with $CO_2$ capture), the levelized cost of electricity increased from 45.5 USD/MWh (reference case, without $CO_2$ capture) to 73.9 USD/MWh (with $CO_2$ capture), and the cost of $CO_2$ avoided was estimated at 41.3 USD/$tonCO_2$.
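
For reference, the cost of $CO_2$ avoided quoted above is conventionally defined from the levelized cost of electricity (LCOE) and the specific $CO_2$ emissions of the plants with and without capture. The formula below is the standard definition used in the CCS literature rather than an equation reproduced from the paper, and the emission-intensity symbols $e_{ref}$ and $e_{cap}$ (tonCO2/MWh) are not given in this abstract.

```latex
% Standard definition of the cost of CO2 avoided (AC), stated for reference;
% e_ref and e_cap are the specific emissions (tonCO2/MWh) of the plants
% without and with capture, which this abstract does not report.
\[
  AC \;=\; \frac{LCOE_{\mathrm{cap}} - LCOE_{\mathrm{ref}}}
               {e_{\mathrm{ref}} - e_{\mathrm{cap}}}
  \quad \left[\mathrm{USD/tonCO_2}\right]
\]
```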

Study on LiFePO4 Composite Cathode Materials to Enhance Thermal Stability of Hybrid Capacitor (하이브리드 커패시터의 열안정성 개선을 위한 LiFePO4 복합양극 소재에 관한 연구)

  • Kwon, Tae-Soon; Park, Ji-Hyun; Kang, Seok-Won; Jeong, Rag-Gyo; Han, Sang-Jin
    • Korean Chemical Engineering Research / v.55 no.2 / pp.242-246 / 2017
  • The application of composite cathode materials including $LiFePO_4$ (lithium iron phosphate), which has an olivine crystal structure and high thermal stability, was investigated as an alternative for hybrid battery-capacitors with a $LiMn_2O_4$ (spinel crystal structure) cathode, whose performance decreases at high temperatures due to Mn dissolution. However, life-cycle experiments, in which a $LiFePO_4$/activated carbon cell was charged and discharged between 1.0 V and 2.3 V at $25^{\circ}C$ and $60^{\circ}C$, showed a reduction in capacity caused by degradation of the anode due to the lowered anode voltage. To avoid this anode degradation, composite cathodes of $LiFePO_4/LiMn_2O_4$ (50:50 wt%), $LiFePO_4$/activated carbon (50:50 wt%), and $LiFePO_4/LiNi_{1/3}Co_{1/3}Mn_{1/3}O_2$ (50:50 wt%) were prepared and the life-cycle experiments were conducted on these cells. The composite cathode including $LiNi_{1/3}Co_{1/3}Mn_{1/3}O_2$, which has a layered crystal structure, showed stable voltage behavior. In a thermal stability experiment in which the cells were held charged at 2.3 V at $80^{\circ}C$ for 1,000 hours, the discharge capacity retention ratio of the $LiNi_{1/3}Co_{1/3}Mn_{1/3}O_2$ composite cell was about twice that of the $LiFePO_4/LiMn_2O_4$ cell.