Development of Systematic Process for Estimating Commercialization Duration and Cost of R&D Performance (기술가치 평가를 위한 기술사업화 기간 및 비용 추정체계 개발)
Journal of Intelligence and Information Systems, v.23 no.2, pp.139-160, 2017
Technology commercialization creates economic value by linking a company's R&D processes and outputs to the market. Commercialization is important because it allows a company to build and maintain a sustained competitive advantage. For a specific technology to be commercialized, it must pass through the stages of technology planning, research and development, and commercialization, a process that requires considerable time and money. The duration and cost of technology commercialization are therefore important decision inputs for determining a market entry strategy, and they are even more important for a technology investor seeking to evaluate a technology's value rationally. Estimating the duration and cost of commercialization scientifically is thus critical. However, research on technology commercialization is scarce and related methodologies are lacking. In this study, we propose an evaluation model that estimates the duration and cost of commercializing R&D technology for small and medium-sized enterprises (SMEs). To this end, we collected public data from the National Science & Technology Information Service (NTIS) and survey data on SME technical statistics provided by the Small and Medium Business Administration. Using these data, we developed an estimation model of the commercialization duration and cost of R&D performance based on the market approach, one of the standard technology valuation methods. Specifically, we defined the commercialization process as consisting of development planning, development progress, and commercialization, and derived key variables such as stage-wise R&D costs and durations, characteristics of the technology itself, characteristics of the technology development, and environmental factors.
First, given the data, we estimated the costs and duration at each technology readiness level (basic research, applied research, development research, prototype production, commercialization) for each industry classification. We then developed and verified a research model for each industry classification. The results of this study can be summarized as follows. First, the model can be incorporated into a technology valuation model and used to estimate the objective economic value of a technology. The duration and cost from the technology development stage to the commercialization stage are critical factors with a great influence on the discounted value of the future sales from the technology. Because the proposed model estimates commercialization duration and cost scientifically from past data, it can contribute to more reliable technology valuation. Second, we verified models of several kinds, including statistical models and data mining models. The statistical models help identify the important factors for estimating the duration and cost of technology commercialization, while the data mining models yield rules or algorithms that can be applied in an advanced technology valuation system. Finally, this study reaffirms the importance of commercialization costs and durations, which have not been actively studied in previous research. The results confirm the significant factors affecting commercialization costs and duration, and show that these factors differ across industry classifications. Practically, the results can be reflected in technology valuation systems provided by national research institutes and R&D staff to support sophisticated technology valuation. The relevant logic or algorithms can be implemented independently and reflected directly in such a system, so practitioners can apply the results immediately.
In conclusion, the results of this study offer both theoretical and practical contributions.
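The per-stage, per-industry estimation described above can be sketched as simple grouped aggregation over survey records. This is an illustrative sketch, not the authors' implementation; the field names, sample records, and use of group means are our assumptions.

```python
# Illustrative sketch (not the paper's actual model): estimate mean
# commercialization cost and duration per technology readiness level
# within each industry classification. Field names and sample records
# are hypothetical.
from collections import defaultdict

def estimate_by_industry(records):
    """Group (cost, duration) observations by (industry, TRL stage)
    and return per-group means."""
    groups = defaultdict(list)
    for r in records:
        groups[(r["industry"], r["stage"])].append(
            (r["cost"], r["duration_months"])
        )
    estimates = {}
    for key, obs in groups.items():
        n = len(obs)
        estimates[key] = {
            "mean_cost": sum(c for c, _ in obs) / n,
            "mean_duration_months": sum(d for _, d in obs) / n,
        }
    return estimates

# Hypothetical survey rows: industry code, TRL stage,
# cost (KRW millions), duration (months).
sample = [
    {"industry": "C26", "stage": "applied", "cost": 120.0, "duration_months": 10},
    {"industry": "C26", "stage": "applied", "cost": 180.0, "duration_months": 14},
    {"industry": "C26", "stage": "prototype", "cost": 300.0, "duration_months": 8},
]
est = estimate_by_industry(sample)
```

A real model would replace the group means with the statistical or data-mining estimators the abstract mentions, but the grouping by industry and readiness level is the same.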
Log data, which record the wealth of information created while operating computer systems, are used in many processes, from computer system inspection and process optimization to customized user services. In this paper, we propose a MongoDB-based unstructured log processing system in a cloud environment for processing the massive log data of banks. Most log data generated during banking operations come from handling clients' business. Therefore, a separate log data processing system is needed to gather, store, categorize, and analyze the log data generated while processing clients' business. However, in existing computing environments it is difficult both to realize flexible storage expansion for a massive amount of unstructured log data and to execute the many functions needed to categorize and analyze the stored data. Thus, in this study, we use cloud computing technology to build a cloud-based log data processing system for unstructured log data that are difficult to process with the existing infrastructure's analysis tools and management systems. The proposed system uses an IaaS (Infrastructure as a Service) cloud environment to provide flexible expansion of computing resources such as storage space and memory under conditions such as storage extension or a rapid increase in log data. Moreover, to overcome the processing limits of existing analysis tools when real-time analysis of the aggregated unstructured log data is required, the proposed system includes a Hadoop-based analysis module for fast and reliable parallel-distributed processing of the massive log data.
Furthermore, because HDFS (Hadoop Distributed File System) stores data by replicating blocks of the aggregated log data, the proposed system offers automatic restore functions so that it can continue to operate after recovering from a malfunction. Finally, by establishing a distributed database using the NoSQL-based MongoDB, the proposed system can process unstructured log data effectively. Relational databases such as MySQL have rigid schemas that are ill-suited to unstructured log data; moreover, such strict schemas make it difficult to expand nodes when rapidly growing data must be distributed across many nodes. NoSQL does not provide the complex computations that relational databases offer, but it can easily expand the database through node dispersion when the data volume grows rapidly; it is a non-relational database with a structure appropriate for processing unstructured data. NoSQL data models are usually classified as key-value, column-oriented, or document-oriented. Of these, the proposed system uses MongoDB, a representative document-oriented store with a free schema structure. MongoDB was chosen because its flexible schema makes it easy to process unstructured log data, it facilitates node expansion when the data volume grows rapidly, and it provides an auto-sharding function that automatically expands storage. The proposed system is composed of a log collector module, a log graph generator module, a MongoDB module, a Hadoop-based analysis module, and a MySQL module.
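The schema-flexibility argument above can be illustrated with two hypothetical bank log records: a document store accepts both in one collection even though their fields differ, while a relational table would need NULL-padded columns or a table per log type. The records below are made up for illustration.

```python
# Hypothetical heterogeneous log records: a document-oriented store such
# as MongoDB can hold both in the same collection despite their
# different fields, which is what "free schema structure" buys here.
atm_log = {"type": "atm", "ts": "2017-03-01T09:12:00",
           "branch": 12, "amount": 50000}
web_log = {"type": "web", "ts": "2017-03-01T09:12:03",
           "session": "a8f3", "url": "/transfer", "latency_ms": 42}

# A plain list stands in for a MongoDB collection in this sketch.
collection = [atm_log, web_log]
fields_per_doc = [set(d) for d in collection]
```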
When the log data generated over each bank's entire client business process are sent to the cloud server, the log collector module collects the data, classifies them by log type, and distributes them to the MongoDB module and the MySQL module. The log graph generator module generates the log analysis results of the MongoDB module, the Hadoop-based analysis module, and the MySQL module per analysis time and type of aggregated log data, and presents them to the user through a web interface. Log data that require real-time analysis are stored in the MySQL module and provided in real time by the log graph generator module. The log data aggregated per unit time are stored in the MongoDB module and plotted according to the user's various analysis conditions. The log data aggregated in the MongoDB module are processed in a parallel-distributed manner by the Hadoop-based analysis module. A comparative evaluation of log data insertion and query performance against a system that uses only MySQL demonstrates the proposed system's superiority. Moreover, an optimal chunk size is identified through MongoDB log data insert performance evaluations over various chunk sizes.
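The collector's routing step can be sketched as below. This is our own minimal sketch, not the paper's code: we assume each record carries a "realtime" flag deciding whether it goes to the MySQL path or the MongoDB path, and plain lists stand in for the actual database clients.

```python
# Sketch of the log collector's routing decision (assumed interface:
# a boolean "realtime" field per record). Records needing real-time
# analysis go to the MySQL module; the rest go to the MongoDB module
# for batch and Hadoop-based processing.
def route_logs(records):
    """Split records into (mysql_batch, mongo_batch) by destination."""
    mysql_batch, mongo_batch = [], []
    for rec in records:
        (mysql_batch if rec.get("realtime") else mongo_batch).append(rec)
    return mysql_batch, mongo_batch

logs = [
    {"type": "auth_failure", "realtime": True},
    {"type": "page_view", "realtime": False},
    {"type": "transfer", "realtime": True},
]
to_mysql, to_mongo = route_logs(logs)
```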
Scarcity is a pervasive aspect of human life and a fundamental precondition of consumers' economic behavior. The scarcity message is also a powerful social influence principle used by marketers to increase the subjective desirability of products. Because valuable objects are often scarce, consumers tend to infer that scarce objects are valuable. Marketers often base promotional appeals on the principle of scarcity to increase the subjective desirability of their products. In particular, advertisers and retailers often promote their products using restrictions. These restrictions constrain consumers' ability to take advantage of the promotion and can assume several forms. For example, some promotions are advertised as limited-time offers, while others limit the quantity that can be bought at the deal price, using statements such as 'limit one per consumer,' 'limit 5 per customer,' or 'limited products for a special commemoration celebration.' Some retailers use such statements extensively; a recent weekly flyer by a prominent retailer limited purchase quantities on 50% of the specials advertised on its front page. When consumers see these phrases, they often infer value in a product that has limited availability or is promoted as scarce. However, past research has explored only a direct relationship between purchase quantity or time limits and deal purchase intention, and has not recognized that not all restriction messages are created equal. We propose instead that different restrictions signal deal value in different ways or through different mechanisms. Consumers appear to perceive that time limits are used to attract consumers to the brand, while quantity limits are necessary to reduce stockpiling. This suggests other possible differences across restrictions. For example, quantity limits could imply product quality (i.e., this product at this price is so good that purchases must be limited).
In contrast, purchase preconditions force the consumer to spend a certain amount to qualify for the deal, which suggests that inferences about the absolute quality of the promoted item decline from purchase limits (highest quality) to time limits to purchase preconditions (lowest quality). This might be expected to be particularly true for unfamiliar brands. However, a critical but elusive issue in scarcity message research is the impact of inferred motives on responses to the promoted scarcity message. Past researchers have not explored the possibility of inferred motives in the scarcity message context, and despite the variety of quantity-limit messages, they did not distinguish among types of scarcity messages within quantity limits. Therefore, we apply a stricter definition of the scarcity message (i.e., quantity limits) and consider the scarcity message type (general vs. special scarcity message) and the scarcity depth (high vs. low). The purpose of this study is to examine the effect of the scarcity message on consumers' purchase intention. Specifically, we investigate the effect of general versus special scarcity messages on purchase intention, using the level of scarcity depth as a moderator. In other words, we postulate that the scarcity message type and scarcity depth play an essential moderating role in the relationship between inferred motives and purchase intention. Unlike past studies, we examine the interplay between perceived motives and scarcity type, and between perceived motives and scarcity depth. Both of these constructs have been examined in isolation, but a key question is whether they interact to produce an effect as the scarcity message type changes or the scarcity depth increases. The motive consumers infer behind the scarcity message will have an important impact on their reactions as the scarcity depth increases.
In relation to this general question, we investigate the following specific issues. First, do consumers' inferred motives weaken the positive relationship between a decrease in scarcity depth and purchase intention, and if so, how much do they attenuate this relationship? Second, we examine the interplay between the scarcity message type and purchase intention in the context of a scarcity depth decrease. Third, we study whether scarcity message type and scarcity depth directly affect purchase intention. To answer these questions, this research is composed of 2 (intention inference: existence vs. nonexistence)
This study analyzes how and why Chosun Ilbo and Hankyoreh Shinmun produce particular social discourses about media reform in different ways. In doing so, this paper attempts to disclose the ideological nature of media reform discourses in their social contexts. For this purpose, a content analysis method was applied to straight news, while an interpretive discourse analysis was applied to editorials and columns. As a theoretical framework, articulation theory was used to explain the relationships among the social forces, ideological elements, discourse practices, and subjects that produce the media reform discourses, in an attempt to understand the overall conjuncture of media reform in its social context. The period of analysis was limited to January 10th through August 10th of this year. Newspaper articles related to media reform were obtained from "KINDS," the newspaper article database produced by the Korea Press Foundation, by searching the keyword "media reform." A total of 765 articles were analyzed, including 429 from Hankyoreh Shinmun and 236 from Chosun Ilbo. The results first show empirically that both Chosun Ilbo and Hankyoreh Shinmun used straight news for their firms' interests and value judgments, selecting and excluding events related to media reform or exaggerating and reducing the meanings of those events, although to differing degrees. Accordingly, this paper argues that the monopoly of newspaper subscribers by the three major newspapers in Korean society could produce a one-sided social consensus about various social issues through their distorted and unequal reporting.
Second, the paper's discourse analysis of media reform indicates that the discourse of ideological confrontation between the right and the left produced by Chosun Ilbo functioned as a mechanism to enforce the position of the right by articulating the demand for media reform with anti-communist ideology. This had the discursive effect of suppressing the demands for media reform made by civic groups and scholars, and it led many people to regard media reform as an ideological matter in Korean society.
Recently, Korean universities have shown very rapid increases in both patents and R&D (research and development) expenditures. From 1970 to 2008, university R&D spending increased on average by 15.3% annually. Along with steady increases in R&D spending, universities' research outputs have also continuously increased. In 1990, Korea as a whole published 1,613 SCI-level scientific papers, and Korean universities filed 27 patent applications with the Korean patent office. In 2008, Korea published more than 35,000 SCI papers and Korean universities filed about 7,300 patent applications. The growth in scientific articles began in the early 1990s, whereas the growth in patents took off entering the 2000s. This paper investigates university research through the window of patents. Patents lie between invention and innovation and represent the potential value of an invention, which will be realized in the marketplace. Since Korean patents do not contain citation information, the paper used US patents, namely the NBER patent database, as the main data. The key empirical question is whether Korean university patents granted by the USPTO are characteristically different from other Korean patents granted by the USPTO. Previous studies on the US and Europe show that corporate patents are distinguished by the appropriability of the invention, whereas university patents are distinguished by basicness. In the case of Korea, the paper confirmed the appropriability characteristic of corporate patents, but Korean university patents were not distinguishable in terms of basicness. The paper estimated the citation frequency function, an empirical model first developed by Caballero and Jaffe (1993) and later articulated by Jaffe and Trajtenberg (1996, 2002). The model is composed mainly of two interacting parts: the diffusion effect and the obsolescence effect of new ideas or innovations.
Estimation results show that differences in forward citations between university and corporate patents are not statistically significant after controlling for self-citation. Since forward citations represent the quality of patents, this result implies that there are no statistically significant quality differences between university and corporate patents. Prior results based on the same citation frequency function for the US and some European cases show that, in terms of forward citations, university patents are generally superior to corporate patents (in the US case) or at least not inferior to them (in most of Europe). It is argued that some important and significant policy changes caused the rapid rise of university patents in Korea, including the revision of the technology transfer act allowing ownership of publicly funded research results to pass to researchers, and changes in faculty evaluation that give more credit to the number of patents. These policy changes triggered the rapid growth in the number of university patents. The empirical results of this paper indicate that Korea now needs to make further efforts to enhance the quality of university patents, not just to produce larger numbers of them.
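The interaction of the diffusion and obsolescence effects mentioned above can be sketched with the lag-distribution form used in this literature: the expected citation frequency at lag L is proportional to exp(-β₁L)·(1 − exp(-β₂L)), where β₁ governs obsolescence (decay) and β₂ governs diffusion (uptake). The parameter values below are purely illustrative, not estimates from the paper.

```python
# Sketch of the citation frequency function's shape (illustrative
# parameters, not the paper's estimates): an obsolescence term that
# decays with lag times a diffusion term that rises from zero, so
# citations per year peak a few years after grant and then decline.
import math

def citation_frequency(lag, alpha=1.0, beta1=0.08, beta2=0.5):
    """Expected citation frequency at citation lag `lag` (years).
    alpha absorbs cohort/field effects in the full model."""
    return alpha * math.exp(-beta1 * lag) * (1.0 - math.exp(-beta2 * lag))

curve = [citation_frequency(t) for t in range(31)]
peak_lag = max(range(31), key=lambda t: curve[t])
```

Fitting α, β₁, and β₂ per patent class (e.g. university vs. corporate) is what allows quality comparisons via the implied forward-citation profiles.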
The wall shear stress in the vicinity of end-to-end anastomoses under steady flow conditions was measured using a flush-mounted hot-film anemometer (FMHFA) probe. The experimental measurements were in good agreement with numerical results except in flows with low Reynolds numbers. The wall shear stress increased proximal to the anastomosis in flow from the Penrose tubing (simulating an artery) to the PTFE graft. In flow from the PTFE graft to the Penrose tubing, low wall shear stress was observed distal to the anastomosis. Abnormal distributions of wall shear stress in the vicinity of the anastomosis, resulting from the compliance mismatch between the graft and the host artery, might be an important factor in ANFH formation and graft failure. The present study suggests a correlation between regions of low wall shear stress and the development of anastomotic neointimal fibrous hyperplasia (ANFH) in end-to-end anastomoses.
Air pressure decay (APD) rate and ultrafiltration rate (UFR) tests were performed on new and saline-rinsed dialyzers as well as those reused in patients several times. C-DAK 4000 (Cordis Dow) and CF IS-11 (Baxter Travenol) reused dialyzers obtained from the dialysis clinic were used in the present study. The new dialyzers exhibited a relatively flat APD, whereas saline-rinsed and reused dialyzers showed a considerable amount of decay. C-DAK dialyzers had a larger APD (11.70