• Title/Summary/Keyword: New Industry

Characteristics of Petroleum Geology of the Marine Basins in North Korea and Mutual Cooperative Plans for MT (Marine Technology) (북한 해양분지의 석유지질학적인 특징과 남북한 해양과학기술 협력 방안)

  • Huh, Sik;Yoo, Hai-Soo;Kwon, Suk-Jae;Oh, Wee-Yeong;Pae, Seong-Hwan
    • The Korean Journal of Petroleum Geology / v.12 no.1 / pp.27-33 / 2006
  • The possibility of an oil reserve has been confirmed in the West Korea Bay Basin of North Korea, where oil has been produced at 450 barrels per day. There is also the possibility of a giant oil reserve, since the area is geographically close to the Bohai Basin of China, one of the largest oil fields in the region. Based on ongoing oil exploration and the present state of investment, three areas are under active exploration: the West Korea Bay B&C prospects, explored by Taurus of Sweden; the northern West Korea Bay and the Anju Basin, explored by SOCO of Canada; and East Korea Bay, explored by Beach Petroleum of Australia. Little or no oil potential is expected in the remaining sea areas. Although oil was discovered in some onshore areas such as Kilju and Myungcheon, those finds are presumed to be uneconomical. The geology of West Korea Bay off North Korea is similar to that of Bohai Bay off China. The basement consists of thick carbonate rocks of the Late Proterozoic and Early Paleozoic, overlain by Mesozoic (6,000~10,000 m) and Cenozoic (4,000~5,000 m) units. Source rocks are Jurassic black shale (3,000 m or more), Cretaceous black shale (1,000~2,000 m), and pre-Mesozoic carbonates (several thousand meters). Reservoir rocks are Mesozoic-Cenozoic sandstones with high porosity and pre-Mesozoic fractured carbonate rocks. Petroleum traps are of the anticline, fault-sealed, buried-hill, and stratigraphic types. Considering the emerging international ocean order, in which coastal nations are establishing 200-nautical-mile EEZs (exclusive economic zones), a positive attitude toward cooperation, active exchange in ocean science and technology, and joint research and development of modern MT (Marine Technology) are urgently needed. South and North Korea share adjacent sea areas and expanding ocean jurisdictions, so the more urgent task is to find a way for North Korea to participate, rather than for each side to develop ocean management and the ocean industry separately.

Understanding the Mismatch between ERP and Organizational Information Needs and Its Responses: A Study based on Organizational Memory Theory (조직의 정보 니즈와 ERP 기능과의 불일치 및 그 대응책에 대한 이해: 조직 메모리 이론을 바탕으로)

  • Jeong, Seung-Ryul;Bae, Uk-Ho
    • Asia pacific journal of information systems / v.22 no.2 / pp.21-38 / 2012
  • Until recently, successful implementation of ERP systems has been a popular topic among ERP researchers, who have attempted to identify its various contributing factors. None of these efforts, however, explicitly recognizes the need to identify disparities that can exist between organizational information requirements and ERP systems. Since ERP systems are in fact "packages" (that is, software programs developed by independent software vendors for sale to the organizations that use them), they are designed to meet the general needs of numerous organizations, rather than the unique needs of a particular organization, as is the case with custom-developed software. By adopting standard packages, organizations can substantially reduce many of the potential implementation risks commonly associated with custom-developed software. However, it is also true that the nature of the package itself can be a risk factor, as the features and functions of ERP systems may not completely match a particular organization's information requirements. In this study, based on the organizational memory mismatch perspective derived from organizational memory theory and cognitive dissonance theory, we define the nature of these disparities, which we call "mismatches," and propose that the mismatch between organizational information requirements and ERP systems is one of the primary determinants of the successful implementation of ERP systems. Furthermore, we suggest that customization efforts, as a coping strategy for mismatches, can play a significant role in increasing the possibility of success. In order to examine this contention, we employed a survey-based field study of ERP project team members, resulting in a total of 77 responses. The results of this study show that, as anticipated from the organizational memory mismatch perspective, the mismatch between organizational information requirements and ERP systems has a significantly negative impact on the implementation success of ERP systems. This finding confirms our hypothesis that the greater the mismatch, the more difficult successful ERP implementation is, and thus more attention needs to be drawn to mismatch as a major source of failure in ERP implementation. This study also found that, as a coping strategy for mismatch, the effects of customization are significant. In other words, utilizing the appropriate customization method can lead to the implementation success of ERP systems. This is somewhat interesting because it runs counter to the argument of some literature and ERP vendors that minimized customization (or even the lack thereof) is required for successful ERP implementation. In many ERP projects, there is a tendency among ERP developers to adopt default ERP functions without any customization, adhering to the slogan of "the introduction of best practices." However, this study asserts that we cannot expect successful implementation if we do not attempt to customize ERP systems when mismatches exist. For a more detailed analysis, we identified three types of mismatches: Non-ERP, Non-Procedure, and Hybrid. Among these, only Non-ERP mismatches (a situation in which ERP systems cannot support existing information needs that are currently fulfilled) were found to have a direct influence on the implementation of ERP systems. Neither Non-Procedure nor Hybrid mismatches were found to have a significant impact in the ERP context.
These findings provide meaningful insights since they could serve as the basis for discussing how the ERP implementation process should be defined and what activities should be included in it. They show that ERP developers may not want to include organizational (or business process) changes in the implementation process, suggesting that doing so could lead to failed implementation. In fact, this suggestion eventually turned out to be true when we found that the application of process customization led to a higher possibility of failure. From these discussions, we are convinced that Non-ERP is the only type of mismatch we need to focus on during the implementation process, implying that organizational changes must be made before, rather than during, the implementation process. Finally, this study found that among the various customization approaches, bolt-on development methods in particular seemed to have significantly positive effects. Interestingly again, this finding is not in line with the thinking of vendors in the ERP industry. The vendors' recommendation is to apply as many best practices as possible, thereby minimizing both customization and the use of bolt-on development methods. They particularly advise against changing the source code and instead recommend, when necessary, programming additional software code using the vendor's computer language. As previously stated, however, our study found active customization, especially bolt-on development methods, to have positive effects on ERP, and found source code changes in particular to have the most significant effects. Moreover, our study found programming additional software to be ineffective, suggesting that there is much difference between ERP developers and vendors in their viewpoints and strategies toward ERP customization. In summary, mismatches are inherent in the ERP implementation context and play an important role in determining its success. Considering the significance of mismatches, this study proposes a new model for successful ERP implementation, developed from the organizational memory mismatch perspective, and provides many insights by empirically confirming the model's usefulness.

Production of a hypothetical polyene substance by activating a cryptic fungal PKS-NRPS hybrid gene in Monascus purpureus (홍국Monascus purpureus에서 진균 PKS-NRPS 하이브리드 유전자의 발현 유도를 통한 미지 polyene 화합물의 생성)

  • Suh, Jae-Won;Balakrishnan, Bijinu;Lim, Yoon Ji;Lee, Doh Won;Choi, Jeong Ju;Park, Si-Hyung;Kwon, Hyung-Jin
    • Journal of Applied Biological Chemistry / v.61 no.1 / pp.83-91 / 2018
  • Advances in bacterial and fungal genome mining have uncovered a plethora of cryptic secondary metabolite biosynthetic gene clusters. Guided by genome information, targeted transcriptional derepression can be employed to determine the product of a cryptic gene cluster and to explore its biological role. Monascus spp. are food-grade filamentous fungi popular in East Asia, and several genome sequences for them are now available. We achieved transcriptional activation of a cryptic fungal polyketide synthase-nonribosomal peptide synthetase gene, Mpfus1, in Monascus purpureus ΔMpPKS5 by inserting the Aspergillus gpdA promoter upstream of Mpfus1 through double-crossover gene replacement. The gene cluster containing Mpfus1 shows high similarity to those for the biosynthesis of conjugated polyene derivatives with a 2-pyrrolidone ring, of which the mycotoxin fusarin is the representative member. The ΔMpPKS5 strain is incapable of producing azaphilone pigments, providing an excellent background for identifying chromogenic and UV-absorbing compounds. Activation of Mpfus1 resulted in a yellow hue on the mycelia, and the methanol extract exhibited a maximum absorption at 365 nm. HPLC analysis of the organic extracts indicated the presence of a variety of yellow compounds, implying that the product of MpFus1 is metabolically or chemically unstable. LC-MS analysis guided us to predict the MpFus1 product and to propose that the Mpfus1-containing gene cluster encodes the biosynthesis of a desmethyl analogue of fusarin. This study showcases genome mining in Monascus and the possibility of unveiling new biological activities embedded in its genome.

Development of Algorithm in Analysis of Single Trait Animal Model for Genetic Evaluation of Hanwoo (단형질 개체모형을 이용한 한우 육종가 추정프로그램 개발)

  • Koo, Yangmo;Kim, Jungil;Song, Chieun;Lee, Kihwan;Shin, Jaeyoung;Jang, Hyungi;Choi, Taejeong;Kim, Sidong;Park, Byoungho;Cho, Kwanghyun;Lee, Seungsoo;Choy, Yunho;Kim, Byeongwoo;Lee, Junggyu;Song, Hoon
    • Journal of Animal Science and Technology / v.55 no.5 / pp.359-365 / 2013
  • A program for estimating breeding values with a single-trait animal model was developed directly in the Fortran language. The program solves the mixed model equations iteratively, and a new algorithm was developed to improve efficiency over the existing approach; the efficiency of the two algorithms was then compared. The estimated solutions can readily support farm consulting and branding services and can be linked to an improved pedigree database system. The existing program based on the single-trait animal model is inefficient because the conventional algorithm, programmed through the regular formulation, requires many repeated passes over the data to estimate the solutions; the newly developed algorithm therefore improves speed by reducing this repetition. The single-trait animal model was solved by the Gauss-Seidel iteration method, and the two algorithms were compared on the mixed model equations most commonly used for estimating breeding values, following the usual procedures such as preparing the information needed for the model, removing duplicate records, verifying parent information of the base population in the pedigree data, and assigning sequential numbers. The conventional algorithm reads and records the data with successive repetitive loops, whereas the new algorithm directly builds the left-hand side of the equations by effect. Two programs were developed, and their accuracy was verified by comparing the estimated solutions with BLUPF90 and MTDFREML. The Pearson and Spearman correlation coefficients of the estimated breeding values exceeded 99.5% for all traits; given these high correlations between Model I and Model II, the evaluations can be regarded as accurate. The number of iterations to convergence was 2,568 in Model I and 1,038 in Model II, and the solving times were 256.008 seconds for Model I and 235.729 seconds for Model II, so Model II was approximately 10% faster than Model I. The improved algorithm is therefore considered much more effective for analyzing large data sets than the existing method. If the program is systemized and used for farm consulting and industrial services, it should contribute to earlier selection of individuals, shorter generation intervals, and the cultivation of superior groups, helping to develop the Hanwoo industry further through breeding-value-based improvement and ultimately paving the way for the country to become an advanced livestock nation.
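
The abstract describes the solver only at a high level. As a rough, hypothetical sketch of the Gauss-Seidel iteration it refers to (not the authors' Fortran program), the following solves a small mixed-model-style system Cx = r; the matrix, tolerance, and variable names are illustrative assumptions.

```python
import numpy as np

def gauss_seidel(C, r, tol=1e-8, max_iter=10000):
    """Solve C x = r (e.g., mixed model equations) by Gauss-Seidel iteration."""
    n = len(r)
    x = np.zeros(n)
    for it in range(1, max_iter + 1):
        max_change = 0.0
        for i in range(n):
            # The update for x[i] already uses the freshly updated components
            # x[0..i-1], which is what distinguishes Gauss-Seidel from Jacobi.
            residual = r[i] - C[i, :] @ x + C[i, i] * x[i]
            new_xi = residual / C[i, i]
            max_change = max(max_change, abs(new_xi - x[i]))
            x[i] = new_xi
        if max_change < tol:  # convergence criterion on the largest change
            return x, it
    return x, max_iter

# Tiny worked example: a 3x3 diagonally dominant system stands in for the
# much larger, sparse mixed model equations used in breeding value estimation.
C = np.array([[4.0, 1.0, 0.0],
              [1.0, 5.0, 2.0],
              [0.0, 2.0, 6.0]])
r = np.array([1.0, 2.0, 3.0])
solution, n_iter = gauss_seidel(C, r)
print(solution, n_iter)
```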

An Identity Analysis of Mechanic Design through the Japanese Animation <Neon Genesis Evangelion> (일본 애니메이션<신세기 에반게리온>으로 본 메카닉 디자인의 정체성 분석)

  • Lee, Jong-Han;Liu, Si-Jie
    • Cartoon and Animation Studies / s.50 / pp.275-297 / 2018
  • Japan's mechanic animation is widely known throughout the world. Japan's first mechanic animation and first TV animation, <Atom>, has been popular since its creation in 1952. Atom, a big hit at the time, influenced many people, and Japanese mechanic animations have continued to convey their unique traits and world view to the public. In this paper, we discuss the change in Japanese mechanical design by comparing the mechanical design of <Neon Genesis Evangelion>, which has been booming in Japan since the 1990s, with that of a representative mechanic animation of the 1980s. We expect the results of this analysis to depict the Japanese culture and thought reflected in animation, which is a good indication of the worldwide cultural view of animation. <Neon Genesis Evangelion> unexpectedly influenced the Japanese animation industry after it screened in 1995, and people are still constantly reinterpreting and analyzing it. This is the audience's reaction to the mystery and open-ended conclusions of the work itself. The design elements of Evangelion are distinguished from other mechanical objects. Mechanic design based on human biotechnology can overcome the limitations of the machine and feel more human. The pilot's boarding structure, which can contain human nature, is reinforced in the form of the entry plug, and its posture makes humanity more prominent than that of an upright robot. Thus, <Neon Genesis Evangelion> pursues a mechanic design that can reflect human identity. The representative mechanic animation of the 1980s and "Neon Genesis Evangelion" of the 1990s show completely different designs. By comparing the mechanical design of the two works, we therefore examine the correlation between the message of a work and its design. <Neon Genesis Evangelion> presents the close relationship between the identity of the mechanical design and the content. We would like to point out that mechanical design can be a good example and theoretical basis for future work.

Olympic Advertisers Win Gold, Experience Stock Price Gains During and After the Games (오운선수작위엄고대언인영득금패(奥运选手作为广告代言人赢得金牌), 비새중화비새후적고표개격상양(比赛中和比赛后的股票价格上扬))

  • Tomovick, Chuck;Yelkur, Rama
    • Journal of Global Scholars of Marketing Science / v.20 no.1 / pp.80-88 / 2010
  • There has been considerable research examining the relationship between stockholders' equity and various marketing strategies. These include studies linking stock price performance to advertising, customer service metrics, new product introductions, research and development, celebrity endorsers, brand perception, brand extensions, brand evaluation, company name changes, and sports sponsorships. Another facet of marketing investment which has received heightened scrutiny for its purported influence on stockholder equity is television advertising embedded within specific sporting events such as the Super Bowl. Research indicates that firms which advertise in Super Bowls experience stock price gains. Given this reported relationship between advertising investment and increased shareholder value, for both general and special events, it is surprising that relatively little research attention has been paid to investigating the relationship between advertising in the Olympic Games and its subsequent impact on stockholder equity. While attention has been directed at examining the effectiveness of sponsoring the Olympic Games, much less focus has been placed on the financial soundness of advertising during the telecasts of these Games. Notable exceptions include Peters (2008), Pfanner (2008), Saini (2008), and Keller Fay Group (2009). This paper presents a study of Olympic advertisers who ran TV ads on NBC in the American telecasts of the 2000, 2004, and 2008 Summer Olympic Games. Five hypotheses were tested: H1: The stock prices of firms which advertised on American telecasts of the 2008, 2004 and 2000 Olympics (referred to as O-Stocks) will outperform the S&P 500 during this same period of time (i.e., the Monday before the Games through to the Friday after the Games). H2: O-Stocks will outperform the S&P 500 during the medium term, that is, for the period of the Monday before the Games through to the end of each Olympic calendar year (December 31st of 2000, 2004, and 2008 respectively). H3: O-Stocks will outperform the S&P 500 in the longer term, that is, for the period of the Monday before the Games through to the midpoint of the following years (June 30th of 2001, 2005, and 2009 respectively). H4: There will be no difference in the performance of these O-Stocks vs. the S&P 500 in the non-Olympic control periods (i.e., three months earlier for each of the Olympic years). H5: The annual revenue of firms which advertised on American telecasts of the 2008, 2004 and 2000 Olympics will be higher for those years than the revenue for those same firms in the years preceding those three Olympics respectively. In this study, we recorded stock prices of the companies that advertised during the last three Summer Olympic Games (i.e., Beijing in 2008, Athens in 2004, and Sydney in 2000). We identified these advertisers using Google searches as well as with the help of the television network (i.e., NBC) that broadcast the Games. NBC held the American broadcast rights to all three Olympic Games studied. We used Internet sources to verify the parent companies of the brands that were advertised each year. Stock prices of these parent companies were found using Yahoo! Finance. Only companies that were publicly held and traded were used in the study. We identified changes in Olympic advertisers' stock prices over the four-week period from the Monday before through the Friday after the Games.
In total, there were 117 advertisers on the U.S. telecasts of the 2008, 2004, and 2000 Olympics. Figure 1 provides a breakdown of those advertisers by industry sector. Results indicate the stocks of the firms that advertised (O-Stocks) outperformed the S&P 500 during the periods of interest and underperformed the S&P 500 during the earlier control periods. These same O-Stocks also outperformed the S&P 500 from the start of the Games through to the end of each Olympic year, and for six months beyond that. Price pressure linkage, signaling theory, high-involvement viewers, and corporate activation strategies are believed to contribute to these positive results. Implications for advertisers and researchers are discussed, as are study limitations and future research directions.
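
As a rough illustration of the comparison behind H1 (not the authors' actual procedure or data), the sketch below computes a holding-period return for one hypothetical O-Stock and for the S&P 500 over the window from the Monday before the 2008 Games to the Friday after; the price series and dates here are synthetic placeholders.

```python
import pandas as pd

def holding_period_return(prices: pd.Series, start, end) -> float:
    """Simple holding-period return between two dates, e.g. the Monday
    before the Games and the Friday after the Games."""
    window = prices.loc[start:end]
    return window.iloc[-1] / window.iloc[0] - 1.0

# Illustrative inputs: daily closes indexed by business day for one
# advertiser's parent company and for the S&P 500 index.
dates = pd.date_range("2008-08-04", "2008-08-29", freq="B")
o_stock = pd.Series(range(100, 100 + len(dates)), index=dates, dtype=float)
sp500 = pd.Series(range(1260, 1260 + len(dates)), index=dates, dtype=float)

r_stock = holding_period_return(o_stock, "2008-08-04", "2008-08-29")
r_index = holding_period_return(sp500, "2008-08-04", "2008-08-29")
print(f"O-stock return: {r_stock:.2%}, S&P 500 return: {r_index:.2%}, "
      f"excess: {r_stock - r_index:.2%}")
```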

Examining the Relationships among Attitude toward Luxury Brands, Customer Equity, and Customer Lifetime Value in a Korean Context (측시이한국위배경적사치품패태도(测试以韩国为背景的奢侈品牌态度), 고객자산화고객종신개치지간적관계(顾客资产和顾客终身价值之间的关系))

  • Kim, Kyung-Hoon;Park, Seong-Yeon;Lee, Seung-Hee;Knight, Dee K.;Xu, Bing;Jeon, Byung-Joo;Moon, Hak-Il
    • Journal of Global Scholars of Marketing Science / v.20 no.1 / pp.27-34 / 2010
  • During the past 10 years, sales of luxury goods increased significantly, to more than US$130 billion in 2007. In this industry, more than half of the revenue comes from Asia, where the average income has risen significantly and the demand for luxury products is forecast to grow rapidly. Purchasing luxury brands appears to be an intriguing social phenomenon that is profitable for companies in this region. As a newly developed country, Korea is one of the most attractive luxury markets in Asia. Currently, a total of 120 luxury fashion brands have entered the Korean market, primarily in luxury districts in Seoul where competition is fierce. The purposes of this study are to: (1) identify antecedents of attitude toward luxury brands, (2) examine the effect of attitude toward luxury brands on customer equity, (3) determine the impact of attitude toward luxury brands on customer lifetime value, and (4) investigate the influence of customer equity on customer lifetime value. Previous studies have examined materialism, social need, experiential need, need for uniqueness, conformity, and fashion involvement as antecedents of attitude toward luxury brands. Richins and Dawson (1992) suggested that materialism influences consumption behavior relative to the quantity of goods purchased. Nueno and Quelch (1998) reported that the ownership of luxury brands conveys information related to the owner's social status, communicates an image of success and prestige, and is a determinant of purchase behavior. Experiential need is recognized as an important aspect of consumption, especially for new products developed to meet consumer demand. Since luxury goods are, by definition, relatively scarce, ownership of these types of products may fulfill consumers' need for uniqueness. In this study, value equity, relationship equity, and brand equity are examined as drivers of customer equity. The sample (n = 114) consisted of undergraduate and graduate students at two private women's universities in Seoul, Korea. Data collection was conducted using a self-administered questionnaire survey in March 2009. Data analysis included descriptive statistics, factor analysis, reliability analysis, and regression analysis using SPSS 15.0 software. Data analysis resulted in a number of conclusions. First, experiential need and fashion involvement positively influence participants' attitude toward luxury brands. Second, attitude toward luxury brands positively influences brand equity, followed by value equity and relationship equity. However, there is no significant relationship between attitude toward luxury brands and customer lifetime value. Finally, relationship equity positively influences customer lifetime value. In conclusion, young consumers are an important potential consumer group that tries different brands to discover the ones most suitable for them. Luxury marketers that use effective marketing strategies to attract and engender loyalty among this potentially lucrative consumer group may increase customer equity and lifetime value.

A Framework on 3D Object-Based Construction Information Management System for Work Productivity Analysis for Reinforced Concrete Work (철근콘크리트 공사의 작업 생산성 분석을 위한 3차원 객체 활용 정보관리 시스템 구축방안)

  • Kim, Jun;Cha, Heesung
    • Korean Journal of Construction Engineering and Management / v.19 no.2 / pp.15-24 / 2018
  • Despite recognition of the need for productivity information and its importance, feedback of productivity information is not well established in the construction industry. Effective use of productivity information is required to improve the reliability of construction planning. In many cases, however, on-site productivity information is hardly managed effectively; instead, planning relies on the experience and/or intuition of project participants. Based on a literature review and expert interviews, the authors recognized that one possible solution is to develop a systematic approach to dealing with productivity information at construction job sites. The new system should not be burdensome to users and should provide purpose-oriented information management, an easy-to-follow information structure, real-time information feedback, and recognition of productivity-related factors. Based on these preliminary investigations, this study proposes a framework for a novel system that facilitates the effective management of construction productivity information. The system utilizes SketchUp software, which offers good user accessibility, and minimizes additional data input and the related workload. The proposed system is designed to handle the pertinent information through a four-stage process: preparation, input, processing, and output. The inputted construction information is classified into a Task Breakdown Structure (TBS) and a Material Breakdown Structure (MBS), which are constructed by referring to the contents of the standard specification of building construction, and is converted into productivity information. The converted information is also graphically visualized on screen, allowing users to work with the productivity information from the job site. The productivity information management system proposed in this study has been pilot-tested for practical applicability and information availability in a real construction project. Very positive results were obtained for the usability and applicability of the system, and benefits are expected based on the validity test. If the proposed system is used in the planning stage of construction and productivity information is continuously accumulated, the expected effectiveness of this study would conceivably be further enhanced.
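
The abstract outlines the information structure only at a framework level. Purely as an illustrative sketch (not the authors' implementation), the following shows how job-site records tied to 3D object IDs and TBS/MBS codes might be aggregated into productivity figures; all class names, codes, and numbers are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class WorkRecord:
    """One job-site record linking a 3D object to task/material classifications."""
    object_id: str        # id of the 3D object in the model (illustrative)
    tbs_code: str         # Task Breakdown Structure code, e.g. rebar placement
    mbs_code: str         # Material Breakdown Structure code, e.g. rebar D13
    quantity: float       # installed quantity (e.g. tons of rebar)
    labor_hours: float    # crew hours spent on this object

def productivity_by_task(records: list[WorkRecord]) -> dict[str, float]:
    """Aggregate quantity per labor-hour for each TBS code."""
    qty: dict[str, float] = {}
    hrs: dict[str, float] = {}
    for r in records:
        qty[r.tbs_code] = qty.get(r.tbs_code, 0.0) + r.quantity
        hrs[r.tbs_code] = hrs.get(r.tbs_code, 0.0) + r.labor_hours
    return {code: qty[code] / hrs[code] for code in qty if hrs[code] > 0}

records = [
    WorkRecord("slab_01", "TBS-REBAR", "MBS-D13", 2.4, 16.0),
    WorkRecord("slab_02", "TBS-REBAR", "MBS-D16", 3.1, 20.0),
    WorkRecord("wall_01", "TBS-FORM", "MBS-PLY12", 85.0, 24.0),
]
print(productivity_by_task(records))  # ~{'TBS-REBAR': 0.153, 'TBS-FORM': 3.542}
```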

Technical Efficiency in Korea: Interindustry Determinants and Dynamic Stability (기술적(技術的) 효율성(效率性)의 결정요인(決定要因)과 동태적(動態的) 변화(變化))

  • Yoo, Seong-min
    • KDI Journal of Economic Policy / v.12 no.4 / pp.21-46 / 1990
  • This paper, a sequel to Yoo and Lee (1990), attempts to investigate the interindustry determinants of technical efficiency in Korea's manufacturing industries, and also to conduct an exploratory analysis of the stability of technical efficiency over time. The hypotheses set forth in this paper are mostly found in the existing literature on technical efficiency. They are, however, revised and viewed in a new light, whenever possible, to accommodate Korea-specific conditions. The set of regressors used in the cross-sectional analysis is chosen, and the hypotheses are posed, in such a way that our results can be made comparable to those of similar studies conducted for the U.S. and Japan by Caves and Barton (1990) and Uekusa and Torii (1987), respectively. It is interesting to observe a certain degree of similarity as well as differentiation between the cross-section evidence on Korea's manufacturing industries and that on the U.S. and Japanese industries. As for the similarities, we find positive and significant effects on technical efficiency of the relative size of production and the extent of specialization in production, and a negative and significant effect of variations in the capital-labor ratio within industries. The curvature influence of the concentration ratio on technical efficiency is also confirmed in the Korean case. There are differences, too. We cannot find any significant effects of capital vintage, R&D, or foreign competition on technical efficiency, all of which were shown to be robust determinants of technical efficiency in the U.S. case. We note, however, that the variables measuring the capital vintage effect, R&D, and the degree of foreign competition in Korean markets are suspected to suffer from serious measurement errors incurred in data collection and/or conversion of the industrial classification system into the KSIC (Korea Standard Industrial Classification) system. Thus, we are reluctant to accept the findings on the effects of these variables as definitive conclusions about Korea's industrial organization. Another finding that interests us is that the cross-industry evidence becomes consistently stronger when we use efficiency estimates based on gross output instead of value added, which provides an ex post empirical criterion for choosing between the two output measures in estimating the production frontier. We also conduct exploratory analyses of the stability of the estimates of technical efficiency in Korea's manufacturing industries. Though the method of testing stability employed in this paper is by no means a complete one, we cannot find strong evidence that our efficiency estimates are stable over time. The outcome is both surprising and disappointing. We can also show that the instability of technical efficiency over time is partly explained by the way we constructed our measures of technical efficiency. To the extent that our efficiency estimates depend on the shape of the empirical distribution of plants in the input-output space, any movements of the production frontier over time are not reflected in the estimates, and it is possible to associate a higher level of technical efficiency with a downward movement of the production frontier over time, and so on. Thus, we find that efficiency measures that take into account not only the distributional changes but also the shifts of the production frontier over time increase the extent of stability and are more appropriate for use in a dynamic context.
The remaining portion of the instability of technical efficiency over time is not explained satisfactorily in this paper, and future research should address this question.
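
The abstract refers to plant-level efficiency measured against an estimated production frontier. As a rough, hypothetical illustration of one standard way to obtain such measures (corrected OLS on a Cobb-Douglas frontier, not necessarily the estimator used in the paper), consider the sketch below on synthetic data.

```python
import numpy as np

# Corrected OLS (COLS) sketch: estimate a log-linear production function by OLS,
# shift the intercept to envelop the data, and read off technical efficiency.
rng = np.random.default_rng(0)
n = 200
log_k = rng.normal(3.0, 0.5, n)          # log capital
log_l = rng.normal(2.0, 0.4, n)          # log labor
ineff = rng.exponential(0.2, n)          # one-sided inefficiency term
log_y = 1.0 + 0.4 * log_k + 0.6 * log_l - ineff + rng.normal(0, 0.05, n)

# OLS on the Cobb-Douglas (log-linear) production function.
X = np.column_stack([np.ones(n), log_k, log_l])
beta, *_ = np.linalg.lstsq(X, log_y, rcond=None)
resid = log_y - X @ beta

# Efficiency relative to the shifted (frontier) intercept lies in (0, 1].
efficiency = np.exp(resid - resid.max())
print(f"mean technical efficiency: {efficiency.mean():.3f}")
```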

Building battery deterioration prediction model using real field data (머신러닝 기법을 이용한 납축전지 열화 예측 모델 개발)

  • Choi, Keunho;Kim, Gunwoo
    • Journal of Intelligence and Information Systems / v.24 no.2 / pp.243-264 / 2018
  • Although the worldwide battery market has recently spurred the development of lithium secondary batteries, lead-acid batteries (rechargeable batteries), which perform well and can be reused, are still consumed across a wide range of industries. However, lead-acid batteries have a serious problem: once degradation begins in even one of the several cells packed in a battery, deterioration of the whole battery progresses quickly. To overcome this problem, previous research has attempted to identify the mechanism of battery deterioration in many ways. However, most previous studies have used data obtained in the laboratory rather than data obtained in the real world. Using real data can increase the feasibility and applicability of research findings. Therefore, this study aims to develop a model that predicts battery deterioration using data obtained in the real world. To this end, we collected data representing changes in battery state by attaching sensors that monitor battery condition in real time to dozens of golf carts operated on a real golf course. In total, 16,883 samples were obtained. We then developed a model that predicts a precursor phenomenon of battery deterioration by analyzing the sensor data with machine learning techniques. As initial independent variables, we used 1) inbound time of a cart, 2) outbound time of a cart, 3) duration (from outbound time to charge time), 4) charge amount, 5) used amount, 6) charge efficiency, 7) lowest temperature of battery cells 1 to 6, 8) lowest voltage of battery cells 1 to 6, 9) highest voltage of battery cells 1 to 6, 10) voltage of battery cells 1 to 6 at the beginning of operation, 11) voltage of battery cells 1 to 6 at the end of charge, 12) used amount of battery cells 1 to 6 during operation, 13) used amount of the battery during operation (max-min), 14) duration of battery use, and 15) highest current during operation. Since the values of the cell-level variables (lowest temperature, lowest voltage, highest voltage, voltage at the beginning of operation, voltage at the end of charge, and used amount during operation for cells 1 to 6) are similar across cells, we conducted principal component analysis with varimax orthogonal rotation to mitigate the multicollinearity problem. Based on the results, we created new variables by averaging the values of the independent variables clustered together and used them as the final independent variables instead of the original variables, thereby reducing the dimensionality. We used decision trees, logistic regression, and Bayesian networks as algorithms for building prediction models, and also built models using bagging and boosting of each of them, as well as random forest. Experimental results show that the prediction model using bagging of decision trees yields the best accuracy, 89.3923%. This study has a limitation in that additional variables that affect battery deterioration, such as weather (temperature, humidity) and driving habits, were not considered; we would like to consider them in future research.
Nevertheless, the battery deterioration prediction model proposed in the present study is expected to enable effective and efficient management of batteries used in the real field and to dramatically reduce the cost incurred when battery deterioration goes undetected.
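
As a rough illustration of the modeling pipeline described (dimensionality reduction followed by a bagged decision tree), here is a minimal scikit-learn sketch on synthetic data. It is not the authors' code: the feature matrix is a placeholder for the sensor variables, and plain PCA is used without the varimax rotation mentioned in the abstract.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.ensemble import BaggingClassifier
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in for the sensor data: columns would correspond to features
# such as charge amount, charge efficiency, and per-cell voltages/temperatures.
rng = np.random.default_rng(42)
X = rng.normal(size=(1000, 15))
y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(scale=0.5, size=1000) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=42, stratify=y)

# Scaling, PCA-based dimensionality reduction, then a bagged decision tree,
# the combination reported to perform best in the abstract.
model = make_pipeline(
    StandardScaler(),
    PCA(n_components=6),
    BaggingClassifier(estimator=DecisionTreeClassifier(), n_estimators=100,
                      random_state=42),
)
model.fit(X_train, y_train)
print(f"holdout accuracy: {model.score(X_test, y_test):.4f}")
```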