• Title/Summary/Keyword: optimization design


Performance assessment of an urban stormwater infiltration trench considering facility maintenance (침투도랑 유지관리를 통한 도시 강우유출수 처리 성능 평가)

  • Reyes, N.J. D.G.;Geronimo, F.K.F.;Choi, H.S.;Kim, L.H.
    • Journal of Wetlands Research / v.20 no.4 / pp.424-431 / 2018
  • Stormwater runoff containing considerable amounts of pollutants such as particulates, organics, nutrients, and heavy metals contaminates natural bodies of water. At present, best management practices (BMPs) intended to reduce the volume of stormwater runoff and treat its pollutants serve as cost-effective measures of stormwater management. However, improper design and lack of proper maintenance can lead to degradation of a facility, making it unable to perform its intended function. This study evaluated an infiltration trench (IT) that went through a series of maintenance operations. A total of 41 monitored rainfall events from 2009 to 2016 were used to evaluate the pollutant removal capabilities of the IT. Assessment of the water quality and hydrological data revealed that the inflow volume was the factor most strongly related to the unit pollutant loads (UPL) entering the facility. Seasonal variations also affected the pollutant removal capabilities of the IT. During the summer season, increased rainfall depths and runoff volumes diminished the pollutant removal efficiency (RE) of the facility, as the larger volumes washed off greater pollutant loads and caused the IT to overflow. The system also exhibited reduced pollutant RE during the winter season, due to frozen media layers and chemical mechanisms impaired by low winter temperatures. Maintenance operations also had considerable effects on the performance of the IT. During the first two years of operation, the IT exhibited a decrease in pollutant RE due to aging and lack of proper maintenance. However, some events also showed reduced pollutant RE following maintenance, as a result of disturbed sediments that were not removed from the geotextile. Ultimately, the presented effects of maintenance operations on the pollutant RE of the system may lead to the optimization of maintenance schedules and procedures for BMPs of the same structure.
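
The abstract above reports event-based removal efficiencies (RE) and unit pollutant loads (UPL) without stating how they are computed. The sketch below illustrates the conventional summation-of-loads formulation used in stormwater BMP monitoring; the function names, catchment area, and event loads are hypothetical, not values from the paper.

```python
# Hypothetical sketch: event-based pollutant removal efficiency (RE) and
# unit pollutant load (UPL) as conventionally computed for stormwater BMPs.
# All names and numbers are illustrative, not taken from the paper.

def removal_efficiency(inflow_loads_kg, outflow_loads_kg):
    """Summation-of-loads RE (%) over a set of monitored events."""
    total_in = sum(inflow_loads_kg)
    total_out = sum(outflow_loads_kg)
    return 100.0 * (total_in - total_out) / total_in

def unit_pollutant_load(event_load_kg, catchment_area_ha):
    """Pollutant mass entering the facility per unit catchment area (kg/ha)."""
    return event_load_kg / catchment_area_ha

if __name__ == "__main__":
    tss_in = [1.8, 2.4, 0.9]    # kg per event (hypothetical)
    tss_out = [0.4, 1.1, 0.2]   # kg per event (hypothetical)
    print(f"TSS removal efficiency: {removal_efficiency(tss_in, tss_out):.1f}%")
    print(f"UPL, event 1: {unit_pollutant_load(tss_in[0], 0.5):.2f} kg/ha")
```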

Optimization and Scale-up of Fish Skin Peptide Loaded Liposome Preparation and Its Storage Stability (어피 펩타이드 리포좀 대량생산 최적 조건 및 저장 안정성)

  • Lee, JungGyu;Lee, YunJung;Bai, JingJing;Kim, Soojin;Cho, Youngjae;Choi, Mi-Jung
    • Food Engineering Progress / v.21 no.4 / pp.360-366 / 2017
  • Fish skin peptide-loaded liposomes were prepared in 100 mL and 1 L solutions at laboratory scale and in a 10 L solution at prototype scale. The particle size and zeta potential were measured to determine the optimal conditions for producing fish skin peptide-loaded liposomes. The liposomes were manufactured under the following conditions: (1) primary homogenization at 4,000 rpm, 8,000 rpm, and 12,000 rpm for 3 minutes; (2) secondary homogenization at 40 watts (W), 60 W, and 80 W for 3 minutes. From this experimental design, the optimal homogenization conditions were selected as 4,000 rpm and 60 W. In the next step, fish peptides were prepared at concentrations of 3, 6, and 12% under the optimal liposome manufacturing conditions and stored at 4 °C. The particle size, polydispersity index (PdI), and zeta potential of the peptide-loaded liposomes were measured to assess stability. Particle size increased significantly as the manufacturing scale and peptide concentration increased, and decreased over storage time. The zeta potential increased with storage time at the 10 L scale. In addition, the 12% peptide formulation showed the formation of a sediment layer after 3 weeks, so the 6% peptide was considered the most suitable for industrial application.

Water Digital Twin for High-tech Electronics Industrial Wastewater Treatment System (II): e-ASM Calibration, Effluent Prediction, Process selection, and Design (첨단 전자산업 폐수처리시설의 Water Digital Twin(II): e-ASM 모델 보정, 수질 예측, 공정 선택과 설계)

  • Heo, SungKu;Jeong, Chanhyeok;Lee, Nahui;Shim, Yerim;Woo, TaeYong;Kim, JeongIn;Yoo, ChangKyoo
    • Clean Technology / v.28 no.1 / pp.79-93 / 2022
  • In this study, an electronics industrial wastewater activated sludge model (e-ASM), to be used as a Water Digital Twin, was calibrated against real high-tech electronics industrial wastewater treatment measurements from lab-scale and pilot-scale reactors and examined for its treatment performance, effluent quality prediction, and optimal process selection. For specialized modeling of a high-tech electronics industrial wastewater treatment system, the kinetic parameters of the e-ASM were identified by a sensitivity analysis and calibrated by the multiple response surface method (MRS). The calibrated e-ASM showed a high compatibility of more than 90% with the experimental data from the lab-scale and pilot-scale processes. Four electronics industrial wastewater treatment processes (MLE, A2/O, 4-stage MLE-MBR, and Bardenpho-MBR) were implemented with the proposed Water Digital Twin to compare their removal efficiencies for various electronics industrial wastewater characteristics. Bardenpho-MBR stably removed more than 90% of the chemical oxygen demand (COD) and showed the highest nitrogen removal efficiency. Furthermore, an influent with a high TMAH concentration of 1,800 mg L-1 could be 98% removed when the HRT of the Bardenpho-MBR process was more than 3 days. Hence, it is expected that the e-ASM in this study can be used as a Water Digital Twin platform with high compatibility in a variety of situations, including plant optimization, Water AI, and the selection of best available technology (BAT) for a sustainable high-tech electronics industry.
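
As a rough illustration of the parameter-screening step described above (a sensitivity analysis prior to calibration), the following sketch ranks kinetic parameters by a normalized local sensitivity computed with finite differences. The toy effluent-COD function and parameter names are stand-ins, not the paper's e-ASM equations.

```python
# Hypothetical sketch of a one-at-a-time (local) sensitivity analysis used to
# rank kinetic parameters before calibration. The effluent model below is a
# stand-in; the paper's e-ASM equations are not reproduced here.
import numpy as np

def effluent_cod(params):
    """Toy effluent-COD model: placeholder for an e-ASM simulation run."""
    mu_max, K_s, b = params
    return 120.0 * K_s / (K_s + 35.0 * mu_max) + 4.0 * b

def normalized_sensitivity(model, base_params, delta=0.01):
    """S_i = (dY/Y) / (dp_i/p_i) estimated by forward finite differences."""
    base = np.asarray(base_params, dtype=float)
    y0 = model(base)
    sens = []
    for i in range(base.size):
        perturbed = base.copy()
        perturbed[i] *= (1.0 + delta)
        sens.append(((model(perturbed) - y0) / y0) / delta)
    return np.array(sens)

if __name__ == "__main__":
    names = ["mu_max", "K_s", "b"]   # hypothetical kinetic parameters
    s = normalized_sensitivity(effluent_cod, [6.0, 20.0, 0.62])
    for n, v in sorted(zip(names, s), key=lambda t: -abs(t[1])):
        print(f"{n}: {v:+.3f}")
```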

A Study on Formulation Optimization for Improving Skin Absorption of Glabridin-Containing Nanoemulsion Using Response Surface Methodology (반응표면분석법을 활용한 Glabridin 함유 나노에멀젼의 피부흡수 향상을 위한 제형 최적화 연구)

  • Se-Yeon Kim;Won Hyung Kim;Kyung-Sup Yoon
    • Journal of the Society of Cosmetic Scientists of Korea / v.49 no.3 / pp.231-245 / 2023
  • In the cosmetics industry, it is important to develop new materials for functional cosmetics, such as whitening, anti-wrinkle, anti-oxidation, and anti-aging products, as well as technologies to increase absorption when they are applied to the skin. Therefore, in this study, we sought to optimize a nanoemulsion formulation using response surface methodology (RSM), an experimental design method. A nanoemulsion was prepared by a high-pressure emulsification method using glabridin as the active ingredient, and the skin absorption of the optimized nanoemulsion was finally evaluated. Nanoemulsions were prepared by varying the surfactant content, cholesterol content, oil content, polyol content, high-pressure homogenization pressure, and number of high-pressure homogenization cycles as RSM factors. Among them, the surfactant content, oil content, homogenization pressure, and number of homogenization cycles, the factors with the greatest influence on particle size, were used as independent variables, and the particle size and skin absorption rate of the nanoemulsion were used as response variables. A total of 29 experiments were conducted in random order, including 5 repetitions of the center point, and the particle size and skin absorption of the prepared nanoemulsions were measured. Based on the results, the formulation with the minimum particle size and maximum skin absorption was optimized, and a surfactant content of 5.0 wt%, oil content of 2.0 wt%, homogenization pressure of 1,000 bar, and 4 homogenization passes were derived as the optimal conditions. The nanoemulsion prepared under the optimal conditions had a particle size of 111.6 ± 0.2 nm, a PDI of 0.247 ± 0.014, and a zeta potential of -56.7 ± 1.2 mV. The skin absorption of the nanoemulsion was compared with that of a conventional emulsion as a control. After 24 h, the cumulative absorption of the nanoemulsion was 79.53 ± 0.23%, about 13 percentage points higher than the 66.54 ± 1.45% of the control emulsion.
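
To make the response-surface step above concrete, the sketch below fits a second-order model of particle size to the four factors named in the abstract and searches a coarse grid for the predicted minimum. The 29-row design matrix here is randomly generated stand-in data; the paper's measured runs and fitted coefficients are not reproduced.

```python
# Hedged sketch of a response-surface fit over the four factors named in the
# abstract (surfactant wt%, oil wt%, pressure, number of passes). Data are
# synthetic; only the workflow (quadratic fit + grid search) is illustrated.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import PolynomialFeatures

rng = np.random.default_rng(0)
# columns: surfactant wt%, oil wt%, pressure (bar), number of passes
X = rng.uniform([1.0, 1.0, 600, 2], [5.0, 3.0, 1200, 6], size=(29, 4))
y = (150 - 6 * X[:, 0] - 4 * X[:, 1] - 0.02 * X[:, 2] - 3 * X[:, 3]
     + 0.4 * X[:, 0] ** 2 + rng.normal(0, 2, 29))   # synthetic particle size (nm)

quad = PolynomialFeatures(degree=2, include_bias=False)
model = LinearRegression().fit(quad.fit_transform(X), y)

# coarse grid search for the factor setting predicted to minimize particle size
grid = np.array(np.meshgrid(
    np.linspace(1, 5, 9), np.linspace(1, 3, 9),
    np.linspace(600, 1200, 7), np.arange(2, 7))).T.reshape(-1, 4)
pred = model.predict(quad.transform(grid))
best = grid[np.argmin(pred)]
print("predicted optimum (surfactant, oil, pressure, passes):", best)
```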

A Study on the Development of Ultra-precision Small Angle Spindle for Curved Processing of Special Shape Pocket in the Fourth Industrial Revolution of Machine Tools (공작기계의 4차 산업혁명에서 특수한 형상 포켓 곡면가공을 위한 초정밀 소형 앵글 스핀들 개발에 관한 연구)

  • Lee Ji Woong
    • Journal of Practical Engineering Education / v.15 no.1 / pp.119-126 / 2023
  • Today, to improve the fuel efficiency and dynamic behavior of automobiles, automotive parts are being made lighter and simpler. To simplify the design and manufacture of product shapes, various components are integrated; for example, to commercialize three parts as a single product, machining must be performed in very narrow areas. Existing parts are produced by precision die casting or casting for ease of machining, but this multi-piece approach requires many processes and reduces the precision and strength of the parts. Integral manufacturing is very advantageous for simplifying the machining steps and securing part strength, but when a deep and narrow pocket section must be machined, it cannot be reached with the machine's own spindle. To solve this problem, research on cutting processes is being actively conducted, and multi-axis composite machining technology not only solves it but also offers many advantages, such as the ability to flexibly cut composite shapes, which until now required several separate processes, on a single machine tool. However, the reality is that the expensive equipment raises manufacturing costs, and engineers who can operate such machines are scarce. When producing products with deep and narrow sections on a five-axis machining center, cycle times increase because of tool interference, and many problems occur during machining. Therefore, dedicated machine tools and multi-axis composite machines should be used; alternatively, an angle spindle may be used as a special tool that enables multi-axis composite machining of five or more axes on a three-axis machining center. Various and continuous studies are needed in areas such as absorption of machining vibration, low heat generation and operational stability, excellent dimensional stability, and securing strength when using the angle spindle.

Design and Implementation of MongoDB-based Unstructured Log Processing System over Cloud Computing Environment (클라우드 환경에서 MongoDB 기반의 비정형 로그 처리 시스템 설계 및 구현)

  • Kim, Myoungjin;Han, Seungho;Cui, Yun;Lee, Hanku
    • Journal of Internet Computing and Services / v.14 no.6 / pp.71-84 / 2013
  • Log data, which record the multitude of information created when operating computer systems, are utilized in many processes, from carrying out computer system inspection and process optimization to providing customized user optimization. In this paper, we propose a MongoDB-based unstructured log processing system in a cloud environment for processing the massive amount of log data of banks. Most of the log data generated during banking operations come from handling a client's business. Therefore, in order to gather, store, categorize, and analyze the log data generated while processing the client's business, a separate log data processing system needs to be established. However, the realization of flexible storage expansion functions for processing a massive amount of unstructured log data and executing a considerable number of functions to categorize and analyze the stored unstructured log data is difficult in existing computer environments. Thus, in this study, we use cloud computing technology to realize a cloud-based log data processing system for processing unstructured log data that are difficult to process using the existing computing infrastructure's analysis tools and management system. The proposed system uses the IaaS (Infrastructure as a Service) cloud environment to provide a flexible expansion of computing resources and includes the ability to flexibly expand resources such as storage space and memory under conditions such as extended storage or rapid increase in log data. Moreover, to overcome the processing limits of the existing analysis tool when a real-time analysis of the aggregated unstructured log data is required, the proposed system includes a Hadoop-based analysis module for quick and reliable parallel-distributed processing of the massive amount of log data. Furthermore, because the HDFS (Hadoop Distributed File System) stores data by generating copies of the block units of the aggregated log data, the proposed system offers automatic restore functions for the system to continually operate after it recovers from a malfunction. Finally, by establishing a distributed database using the NoSQL-based Mongo DB, the proposed system provides methods of effectively processing unstructured log data. Relational databases such as the MySQL databases have complex schemas that are inappropriate for processing unstructured log data. Further, strict schemas like those of relational databases cannot expand nodes in the case wherein the stored data are distributed to various nodes when the amount of data rapidly increases. NoSQL does not provide the complex computations that relational databases may provide but can easily expand the database through node dispersion when the amount of data increases rapidly; it is a non-relational database with an appropriate structure for processing unstructured data. The data models of the NoSQL are usually classified as Key-Value, column-oriented, and document-oriented types. Of these, the representative document-oriented data model, MongoDB, which has a free schema structure, is used in the proposed system. MongoDB is introduced to the proposed system because it makes it easy to process unstructured log data through a flexible schema structure, facilitates flexible node expansion when the amount of data is rapidly increasing, and provides an Auto-Sharding function that automatically expands storage. 
The proposed system is composed of a log collector module, a log graph generator module, a MongoDB module, a Hadoop-based analysis module, and a MySQL module. When the log data generated over the entire client business process of each bank are sent to the cloud server, the log collector module collects and classifies the data according to the type of log data and distributes it to the MongoDB module and the MySQL module. The log graph generator module generates the results of the log analysis of the MongoDB module, the Hadoop-based analysis module, and the MySQL module per analysis time and type of the aggregated log data, and provides them to the user through a web interface. Log data that require real-time analysis are stored in the MySQL module and provided in real time by the log graph generator module. The aggregated log data per unit time are stored in the MongoDB module and plotted in a graph according to the user's various analysis conditions. The aggregated log data in the MongoDB module are processed in a parallel-distributed manner by the Hadoop-based analysis module. A comparative evaluation is carried out against a log data processing system that uses only MySQL for inserting log data and estimating query performance; this evaluation demonstrates the proposed system's superiority. Moreover, an optimal chunk size is confirmed through the log data insert performance evaluation of MongoDB for various chunk sizes.
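
A minimal sketch of the MongoDB-module write/aggregate path described above, assuming a local mongod instance and the pymongo driver; the database, collection, and field names are hypothetical, not the paper's.

```python
# Minimal sketch of the log-collector path into MongoDB, assuming a local
# mongod instance; collection/field names are hypothetical, not the paper's.
from datetime import datetime, timezone
from pymongo import MongoClient, ASCENDING

client = MongoClient("mongodb://localhost:27017")
logs = client["bank_logs"]["raw_events"]          # schema-free document store

# index on timestamp so per-unit-time aggregation for the graph module is cheap
logs.create_index([("ts", ASCENDING)])

logs.insert_many([
    {"ts": datetime.now(timezone.utc), "branch": "A01",
     "type": "transfer", "latency_ms": 182, "raw": "TRX OK ..."},
    {"ts": datetime.now(timezone.utc), "branch": "B07",
     "type": "login", "latency_ms": 45, "raw": "AUTH OK ..."},
])

# aggregate event counts per type, the kind of summary the graph module plots
for row in logs.aggregate([{"$group": {"_id": "$type", "n": {"$sum": 1}}}]):
    print(row)
```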

A Comparative Analysis of Social Commerce and Open Market Using User Reviews in Korean Mobile Commerce (사용자 리뷰를 통한 소셜커머스와 오픈마켓의 이용경험 비교분석)

  • Chae, Seung Hoon;Lim, Jay Ick;Kang, Juyoung
    • Journal of Intelligence and Information Systems / v.21 no.4 / pp.53-77 / 2015
  • Mobile commerce provides a convenient shopping experience in which users can buy products without the constraints of time and space. Mobile commerce has already set off a mega trend in Korea, with a market size estimated at approximately 15 trillion won (KRW) for 2015. In the Korean market, social commerce and the open market are the key channels. Social commerce overwhelms the open market in terms of the number of users in the Korean mobile commerce market. From the industry's point of view, quick market entry and content curation are considered the major success factors, reflecting the rapid growth of social commerce in the market. However, empirical research and analysis by academics to explain the success of social commerce is still insufficient. Going forward, social commerce and the open market are expected to compete intensively in Korean mobile commerce, so it is important to conduct an empirical analysis of the differences in user experience between the two. This paper is an exploratory study that compares the user experience of social commerce and the open market based on mobile users' reviews. First, approximately 10,000 user reviews of social commerce and open market applications listed on Google Play were collected. The collected reviews were classified into topics corresponding to perceived usefulness and perceived ease of use through LDA topic modeling. Then, sentiment analysis and co-occurrence analysis were conducted on these topics. The results demonstrated that social commerce users have a more positive experience than open market users in terms of service usefulness and convenience in the mobile commerce market. Social commerce has provided positive user experiences in service areas such as 'delivery,' 'coupon,' and 'discount,' while the open market has faced user complaints about technical problems and inconveniences such as 'login error,' 'view details,' and 'stoppage.' This result indicates that social commerce performs well in terms of service experience, owing to aggressive marketing campaigns and investments in logistics infrastructure. However, the open market still has mobile optimization problems, as it has not yet resolved user complaints and inconveniences stemming from technical issues. This study presents an exploratory research method for analyzing user experience through an empirical approach to user reviews. In contrast to previous studies, which relied on surveys, this study uses an empirical analysis of user reviews, which reflect users' vivid, actual experiences. Specifically, by combining an LDA topic model with TAM, this study presents a methodology that analyzes user reviews effectively by dividing them into service and technical areas from a new perspective. The methodology has not only demonstrated the differences in user experience between social commerce and the open market, but also provided a deeper understanding of user experience in Korean mobile commerce.
In addition, the results of this study have important implications for social commerce and the open market, showing that user insights can be utilized to establish competitive, groundbreaking strategies in the market. The limitations and directions for follow-up studies are as follows. Follow-up studies will require a more elaborate text-analysis technique; this study could not fully clean the user reviews, since online reviews contain inherent typos and mistakes. This study has shown that user reviews are an invaluable source for analyzing user experience, and its methodology can be expected to further expand comparative research on services using user reviews. Even at this moment, users around the world are posting reviews of their service experiences with mobile game, commerce, and messenger applications.
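
The LDA topic-modeling step described above can be sketched as follows with scikit-learn; the toy English reviews, two-topic setting, and preprocessing are illustrative only and do not reproduce the paper's Korean-language pipeline or its TAM mapping.

```python
# Hedged sketch of the review-topic step: LDA over app-store review text using
# scikit-learn. The reviews, topic count, and vocabulary here are illustrative.
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

reviews = [
    "fast delivery and great coupon discount",
    "login error again, app keeps stopping",
    "cannot view details of the product page",
    "discount coupon applied, delivery arrived next day",
]

vec = CountVectorizer(stop_words="english")
dtm = vec.fit_transform(reviews)                       # document-term matrix

lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(dtm)

terms = vec.get_feature_names_out()
for k, topic in enumerate(lda.components_):
    top = [terms[i] for i in topic.argsort()[::-1][:4]]
    print(f"topic {k} (e.g. service vs. technical):", ", ".join(top))
```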

Memory Organization for a Fuzzy Controller.

  • Jee, K.D.S.;Poluzzi, R.;Russo, B.
    • Proceedings of the Korean Institute of Intelligent Systems Conference / 1993.06a / pp.1041-1043 / 1993
  • Fuzzy logic based control theory has gained much interest in the industrial world, thanks to its ability to formalize and solve in a very natural way many problems that are very difficult to quantify at an analytical level. This paper shows a solution for handling membership functions inside hardware circuits. The proposed hardware structure optimizes the memory size by using a particular form of vectorial representation. The process of memorizing fuzzy sets, i.e. their membership functions, has always been one of the more problematic issues for hardware implementation, due to the quite large memory space that is needed. To simplify the implementation, it is common [1,2,8,9,10,11] to limit the membership functions either to those having a triangular or trapezoidal shape, or to pre-defined shapes. These kinds of functions can cover a large spectrum of applications with limited memory usage, since they can be memorized by specifying very few parameters (height, base, critical points, etc.). This however results in a loss of computational power due to computation on the intermediate points. A solution to this problem is obtained by discretizing the universe of discourse U, i.e. by fixing a finite number of points and memorizing the value of the membership functions on those points [3,10,14,15]. Such a solution provides satisfying computational speed, a very high precision of definition, and gives users the opportunity to choose membership functions of any shape. However, significant memory waste can also occur: it is quite possible that, for each of the given fuzzy sets, many elements of the universe of discourse have a membership value equal to zero. It has also been noticed that in almost all cases the common points among fuzzy sets, i.e. points with non-null membership values, are very few. More specifically, in many applications, for each element u of U there exist at most three fuzzy sets for which the membership value is not null [3,5,6,7,12,13]. Our proposal is based on these hypotheses. Moreover, we use a technique that, even though it does not restrict the shapes of membership functions, strongly reduces the computational time for the membership values and optimizes the function memorization. Figure 1 represents a term set whose characteristics are common for fuzzy controllers and to which we will refer in the following. This term set has a universe of discourse with 128 elements (so as to have good resolution), 8 fuzzy sets that describe the term set, and 32 levels of discretization for the membership values. Clearly, the numbers of bits necessary for the given specifications are 5 for 32 truth levels, 3 for 8 membership functions, and 7 for 128 levels of resolution. The memory depth is given by the dimension of the universe of discourse (128 in our case) and is represented by the memory rows. The length of a word of memory is defined by Length = nfm * (dm(m) + dm(fm)), where nfm is the maximum number of non-null membership values for any element of the universe of discourse, dm(m) is the dimension of the values of the membership function m, and dm(fm) is the dimension of the word representing the index of the corresponding membership function. In our case, Length = 3 * (5 + 3) = 24. The memory dimension is therefore 128 * 24 bits. If we had chosen to memorize all values of the membership functions, we would have needed to memorize on each memory row the membership value of every fuzzy set, for a word dimension of 8 * 5 bits.
Therefore, the dimension of the memory would have been 128 * 40 bits. Consistently with our hypothesis, in fig. 1 each element of the universe of discourse has a non-null membership value on at most three fuzzy sets. Focusing on the elements 32, 64, and 96 of the universe of discourse, they will be memorized as follows. The computation of the rule weights is done by comparing the bits that represent the index of the membership function with the word of the program memory. The output bus of the Program Memory (μCOD) is given as input to a comparator (Combinatory Net). If the index is equal to the bus value, then one of the non-null weights derived from the rule is produced as output; otherwise the output is zero (fig. 2). It is clear that the memory dimension of the antecedent is reduced in this way, since only non-null values are memorized. Moreover, the time performance of the system is equivalent to that of a system using vectorial memorization of all weights. The dimensioning of the word is influenced by some parameters of the input variable. The most important parameter is the maximum number of membership functions (nfm) having a non-null value in each element of the universe of discourse. From our study in the field of fuzzy systems, we see that typically nfm <= 3 and there are at most 16 membership functions. At any rate, this value can be increased up to the physical dimensional limit of the antecedent memory. A less important role in the optimization of the word dimension is played by the number of membership functions defined for each linguistic term. The table below shows the required word dimension as a function of these parameters and compares our proposed method with the method of vectorial memorization [10]. Summing up, the characteristics of our method are: users are not restricted to membership functions with specific shapes; the number of fuzzy sets and the resolution of the vertical axis have a very small influence on memory space; weight computations are done by a combinatorial network, so the time performance of the system is equivalent to that of the vectorial method; and the number of non-null membership values for any element of the universe of discourse is limited, a constraint that is usually not very restrictive since many controllers obtain good precision with only three non-null weights. The method briefly described here has been adopted by our group in the design of an optimized version of the coprocessor described in [10].
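
The word-length and memory figures quoted above can be checked directly; the short sketch below recomputes them from the stated term-set parameters (128 elements, 8 fuzzy sets, 32 levels, nfm = 3) and compares the proposed packing against full vectorial memorization.

```python
# Worked check of the memory sizing in the abstract: 128-element universe,
# 8 fuzzy sets, 32 discretization levels, at most nfm = 3 non-null memberships
# per universe element (figures taken from the text above).
universe = 128          # rows of the antecedent memory
n_sets = 8              # fuzzy sets in the term set
levels = 32             # discretization levels of the membership values
nfm = 3                 # max non-null membership values per element

dm_m = levels.bit_length() - 1     # 5 bits to encode one membership value
dm_fm = n_sets.bit_length() - 1    # 3 bits to encode a fuzzy-set index

word_proposed = nfm * (dm_m + dm_fm)   # Length = nfm * (dm(m) + dm(fm)) = 24
word_vectorial = n_sets * dm_m         # store all 8 values per element = 40

print(f"proposed word length : {word_proposed} bits -> {universe * word_proposed} bits total")
print(f"vectorial word length: {word_vectorial} bits -> {universe * word_vectorial} bits total")
```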


An Ontology Model for Public Service Export Platform (공공 서비스 수출 플랫폼을 위한 온톨로지 모형)

  • Lee, Gang-Won;Park, Sei-Kwon;Ryu, Seung-Wan;Shin, Dong-Cheon
    • Journal of Intelligence and Information Systems / v.20 no.1 / pp.149-161 / 2014
  • The export of domestic public services to overseas markets contains many potential obstacles, stemming from different export procedures, target services, and socio-economic environments. In order to alleviate these problems, a business incubation platform, as an open business ecosystem, can be a powerful instrument to support the decisions taken by participants and stakeholders. In this paper, we propose an ontology model and its implementation processes for a business incubation platform with an open and pervasive architecture to support public service exports. For the conceptual model of the platform ontology, export case studies are used for requirements analysis. The conceptual model shows the basic structure, with vocabulary and its meaning, the relationships between ontologies, and key attributes. For the implementation and testing of the ontology model, the logical structure is edited using the Protégé editor. The core engine of the business incubation platform is the simulator module, where the various contexts of export businesses should be captured, defined, and shared with other modules through ontologies. It is well known that an ontology, in which concepts and their relationships are represented using a shared vocabulary, is an efficient and effective tool for organizing meta-information to develop structural frameworks in a particular domain. The proposed model consists of five ontologies derived from a requirements survey of major stakeholders and their operational scenarios: service, requirements, environment, enterprise, and country. The service ontology contains several components that can find and categorize public services through a case analysis of public service exports. Key attributes of the service ontology are composed of categories including objective, requirements, activity, and service. The objective category, which has sub-attributes including operational body (organization) and user, acts as a reference to search and classify public services. The requirements category relates to the functional needs at a particular phase of system (service) design or operation; its sub-attributes are user, application, platform, architecture, and social overhead. The activity category represents business processes during the operation and maintenance phase, with sub-attributes including facility, software, and project unit. The service category, with sub-attributes such as target, time, and place, acts as a reference to sort and classify the public services. The requirements ontology is derived from the basic and common components of public services and target countries. The key attributes of the requirements ontology are business, technology, and constraints. Business requirements represent the needs of processes and activities for public service export; technology represents the technological requirements for the operation of public services; and constraints represent the business laws, regulations, or cultural characteristics of the target country. The environment ontology is derived from case studies of target countries for public service operation. Key attributes of the environment ontology are user, requirements, and activity. A user includes stakeholders in public services, from citizens to operators and managers; the requirements attribute represents the managerial and physical needs during operation; and the activity attribute represents business processes in detail.
The enterprise ontology is introduced from a previous study, and its attributes are activity, organization, strategy, marketing, and time. The country ontology is derived from the demographic and geopolitical analysis of the target country, and its key attributes are economy, social infrastructure, law, regulation, customs, population, location, and development strategies. The priority list of target services for a certain country and/or the priority list of target countries for a certain public service are generated by a matching algorithm. These lists are used as input seeds to simulate the consortium partners and the government's policies and programs. In the simulation, the environmental differences between Korea and the target country can be customized through a gap analysis and work-flow optimization process. When the process gap between Korea and the target country is too large for a single corporation to cover, a consortium is considered as an alternative, and various alternatives are derived from the capability index of enterprises. For financial packages, a mix of various foreign aid funds can be simulated during this stage. It is expected that the proposed ontology model and the business incubation platform can be used by various participants in the public service export market. It could be especially beneficial to small and medium businesses that have relatively fewer resources and less experience with public service export. We also expect that the open and pervasive service architecture in a digital business ecosystem will help stakeholders find new opportunities through information sharing and collaboration on business processes.
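
As an illustration of how such an ontology fragment might be encoded, the sketch below builds a few RDF triples for the five top-level ontologies and two service-ontology categories using rdflib; the namespace, class, and property names are invented for this example and are not the paper's actual vocabulary.

```python
# Hedged sketch: encoding a fragment of the service ontology (objective and
# activity categories) as RDF triples with rdflib. Names are illustrative only.
from rdflib import Graph, Literal, Namespace, RDF, RDFS

EX = Namespace("http://example.org/public-service-export#")
g = Graph()
g.bind("ex", EX)

# top-level ontologies named in the abstract
for cls in ("Service", "Requirements", "Environment", "Enterprise", "Country"):
    g.add((EX[cls], RDF.type, RDFS.Class))

# two service-ontology attribute categories
for cat in ("Objective", "Activity"):
    g.add((EX[cat], RDF.type, RDFS.Class))
    g.add((EX[cat], RDFS.subClassOf, EX.Service))

# one illustrative instance: an e-government service targeted at a country
g.add((EX.eProcurement, RDF.type, EX.Service))
g.add((EX.eProcurement, EX.hasTarget, EX.Country))
g.add((EX.eProcurement, RDFS.label, Literal("e-procurement export case")))

print(g.serialize(format="turtle"))
```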

An Empirical Study on the Influencing Factors for Big Data Intended Adoption: Focusing on the Strategic Value Recognition and TOE Framework (빅데이터 도입의도에 미치는 영향요인에 관한 연구: 전략적 가치인식과 TOE(Technology Organizational Environment) Framework을 중심으로)

  • Ka, Hoi-Kwang;Kim, Jin-soo
    • Asia pacific journal of information systems / v.24 no.4 / pp.443-472 / 2014
  • To survive in the global competitive environment, enterprises should be able to solve various problems and find optimal solutions effectively. Big data is perceived as a tool for solving enterprise problems effectively and improving competitiveness through its varied problem-solving and advanced predictive capabilities. Owing to this remarkable performance, the implementation of big data systems has increased in many enterprises around the world. Big data is now called the 'crude oil' of the 21st century and is expected to provide competitive superiority. Big data is in the limelight because, whereas conventional IT technology has reached the limits of what it can offer, big data goes beyond those technological limits and can be used to create new value, such as business optimization and new business creation, through data analysis. However, because big data has often been introduced hastily, without considering the strategic value to be deduced and achieved through it, enterprises face difficulties in deducing strategic value and utilizing the data. According to a survey of 1,800 IT professionals from 18 countries worldwide, only 28% of corporations were utilizing big data well, and many respondents reported difficulties in deducing strategic value and operating through big data. To introduce big data, the strategic value should be identified and environmental factors such as internal and external regulations and systems should be considered, but these factors have not been well reflected. The cause of failure turned out to be that big data was introduced because of IT trends and the surrounding environment, but hastily, in situations where the conditions for introduction were not well arranged. For a successful introduction, the strategic value obtainable through big data should be clearly understood and a systematic analysis of the environment and applicability is very important; however, because corporations consider only partial achievements and technological aspects, successful introductions are not being made. Previous studies show that most big data research focuses on concepts, cases, and practical suggestions without empirical study. The purpose of this study is to provide a theoretically and practically useful implementation framework and strategies for big data systems by conducting a comprehensive literature review, identifying the factors influencing successful big data system implementation, and analyzing empirical models. To do this, the elements that can affect the intention to introduce big data were deduced by reviewing information systems success factors, strategic value perception factors, factors concerning the environment for information system introduction, and the big data literature, and a structured questionnaire was developed. The questionnaire was then administered to the people in charge of big data inside corporations, and statistical analysis was performed.
According to the statistical analysis, the strategic value perception factors and the intra-industry environmental factors positively affected the intention to introduce big data. The theoretical, practical, and policy implications deduced from the results are as follows. The first theoretical implication is that this study has proposed the factors that affect the intention to introduce big data by reviewing strategic value perception, environmental factors, and precedent big data studies, and has proposed variables and measurement items that were empirically analyzed and verified. The study is meaningful in that it has measured the influence of each variable on the introduction intention by verifying the relationships between the independent and dependent variables through a structural equation model. Second, this study has defined the independent variables (strategic value perception, environment), the dependent variable (introduction intention), and the moderating variables (type of business and corporate size) for big data introduction intention, and has laid a theoretical base for subsequent empirical studies in the big data field by developing measurement items with established reliability and validity. Third, by verifying the significance of the strategic value perception factors and environmental factors proposed in precedent studies, this study will be able to support later empirical research on the factors affecting big data introduction. The practical implications are as follows. First, this study has laid an empirical base for the big data field by investigating the cause-and-effect relationships between the strategic value perception factor, the environmental factor, and the introduction intention, and by proposing measurement items with established reliability and validity. Second, this study has shown that the strategic value perception factor positively affects the intention to introduce big data, which is meaningful in that the importance of strategic value perception has been demonstrated. Third, the study proposes that a corporation introducing big data should do so on the basis of a precise analysis of its industry's internal environment. Fourth, this study proposes that the size and type of business of the corporation should be considered when introducing big data, by showing how the effect factors of big data introduction differ depending on corporate size and business type. The policy implications are as follows. First, greater variety in the utilization of big data is needed. The strategic value of big data can be pursued in various ways in products, services, productivity, decision making, and so on, and it can be utilized in all business fields on that basis, but the areas that major domestic corporations are considering are limited to some parts of the product and service fields. Accordingly, when introducing big data, it will be necessary to review the utilization phase in detail and to design the big data system in a form that maximizes the utilization rate. Second, the study identifies the burden of system introduction costs, difficulty in utilizing the system, and a lack of credibility of supplier corporations as problems in the big data introduction phase.
Since global IT corporations dominate the big data market, the big data introduction of domestic corporations cannot but depend on foreign corporations. Considering that our country does not have global IT corporations even though it is a powerful IT country, big data can be seen as a chance to foster world-class corporations; accordingly, the government will need to foster leading corporations through active policy support. Third, corporations lack the internal and external professional manpower for big data introduction and operation. Big data is a field in which how valuable insights can be deduced from data matters more than the system construction itself. For this, talent equipped with academic knowledge and experience in various fields, such as IT, statistics, strategy, and management, is needed, and such talent should be trained through systematic education. This study has laid a theoretical base for empirical studies in big data related fields by identifying and verifying the main variables that affect the intention to introduce big data, and it is expected to propose useful guidelines for corporations and policy makers who are considering big data implementation by analyzing that theoretical base empirically.
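
The moderation effect described above (corporate size and business type conditioning the strategic-value path) is tested in the paper with a structural equation model; as a rough, simplified proxy only, the sketch below fits an OLS model with an interaction term on synthetic survey-style data using statsmodels. It is not the authors' model or data.

```python
# Hedged sketch: approximating the moderation effect (firm size on the
# strategic-value -> adoption-intention path) with an OLS interaction model
# via statsmodels; the survey data frame is synthetic, not the paper's.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 200
df = pd.DataFrame({
    "strategic_value": rng.normal(4.0, 0.8, n),   # perceived strategic value (1-7 scale)
    "environment": rng.normal(3.5, 0.9, n),       # internal/external environment factor
    "large_firm": rng.integers(0, 2, n),          # moderator: 1 = large enterprise
})
df["adoption_intent"] = (0.6 * df.strategic_value + 0.3 * df.environment
                         + 0.2 * df.strategic_value * df.large_firm
                         + rng.normal(0, 0.5, n))

model = smf.ols("adoption_intent ~ strategic_value * large_firm + environment",
                data=df).fit()
print(model.summary().tables[1])   # coefficients, including the interaction term
```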