

Exhibition Hall Lighting Design that Fulfills High CRI Based on Natural Light Characteristics - Focusing on CRI Ra, R9, R12 (자연광 특성 기반 고연색성 실현 전시관 조명 설계 - CRI Ra, R9, R12를 중심으로)

  • Ji-Young Lee;Seung-Teak Oh;Jae-Hyun Lim
    • Journal of Internet Computing and Services
    • /
    • v.25 no.4
    • /
    • pp.65-72
    • /
    • 2024
  • To faithfully represent the intention of works in an exhibition space, lighting that provides high color reproduction, like natural light, is required. Many lighting technologies have been introduced to improve CRI, but most have been evaluated only on the general color rendering index (CRI Ra), which considers eight pastel colors. Natural light provides excellent color rendering for all colors, including the red and blue expressed by the indices R9 and R12, whereas most artificial lighting renders R9 and R12 far worse than natural light does. Lighting that provides CRI at the level of natural light is needed to realistically express the colors of works, including primary colors, but related research remains scarce. Therefore, this paper proposes exhibition hall lighting that fulfills high CRI, focusing on CRI Ra, R9, and R12, based on the characteristics of natural light. First, reinforcement wavelength bands for improving R9 and R12 are selected by analyzing measured SPDs of natural and artificial lighting. Next, virtual SPDs with peak wavelengths within the reinforcement bands are created, and SPD combination conditions satisfying CRI Ra ≥ 95 and R9, R12 ≥ 90 are derived through combination simulation with a commercial LED light source. From this, two light sources with peak wavelengths of 405 nm and 630 nm, which had the greatest impact on improving R9 and R12, are specified; exhibition hall lighting combining them with two W/C white LEDs is designed, and a control index DB for the lighting is constructed. Experiments with the proposed method achieved CRI at the level of natural light, averaging CRI Ra 96.5, R9 96.2, and R12 94.0, across illuminance of 300-1,000 lux and color temperature of 3,000-5,000 K.
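The SPD combination step described above can be sketched as follows. Everything here is an illustrative assumption: the Gaussian SPD shapes, bandwidths, and mixing weights stand in for the paper's measured spectra, and the colorimetric CRI evaluation itself is omitted.

```python
import numpy as np

def gaussian_spd(wavelengths, peak_nm, fwhm_nm):
    """Model a virtual SPD as a Gaussian centered on peak_nm."""
    sigma = fwhm_nm / 2.355  # convert FWHM to standard deviation
    return np.exp(-0.5 * ((wavelengths - peak_nm) / sigma) ** 2)

wl = np.arange(380, 781)  # visible range, 1 nm steps

# Base white-LED SPD: blue pump plus a broad phosphor band (illustrative shape only)
base = gaussian_spd(wl, 450, 20) + 0.8 * gaussian_spd(wl, 560, 100)

# Reinforce the bands around the 405 nm and 630 nm peaks the paper identifies;
# the weights 0.3 and 0.5 are arbitrary placeholders for the simulated search
combined = base + 0.3 * gaussian_spd(wl, 405, 15) + 0.5 * gaussian_spd(wl, 630, 18)
combined /= combined.max()  # normalize for comparison
```

In the actual study, each candidate combination would then be scored colorimetrically (CRI Ra, R9, R12) and only combinations meeting CRI Ra ≥ 95 and R9, R12 ≥ 90 retained.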

Optimization of Conditions for Conidial Production in Bipolaris oryzae Isolated from Rice (벼 깨씨무늬병 Bipolaris oryzae의 포자 형성 방법 개선)

  • Seol-Hwa Jang;Seyeon Kim;Shinhwa Kim;Hyunjung Chung;Sook-Young Park
    • Research in Plant Disease
    • /
    • v.30 no.3
    • /
    • pp.229-235
    • /
    • 2024
  • Conidial production is a critical factor in testing pathogenicity and in studying the physiology and ecology of fungal pathogens. Therefore, selecting appropriate conditions and media for consistent conidia production is essential. In this study, we investigated light conditions and suitable media using the slide culture method to establish optimal conditions for continuous spore acquisition of Bipolaris oryzae. First, we observed conidial production in two B. oryzae isolates, CM23-042 and 23CM10, under two light conditions: (1) constant near-ultraviolet (NUV) with fluorescent light, and (2) a 12-hr NUV-dark cycle. Second, we examined conidial formation on seven media: potato dextrose agar (PDA), V8-juice agar, minimal medium (MM), sucrose-proline agar (SPA), rabbit food agar (RFA), rice bran agar (RBA), and rice leaf agar (RLA). Under constant NUV with fluorescent light, conidia were induced in both isolates, whereas no conidia were produced under the other condition at 7 days post-inoculation (dpi). Moreover, isolate CM23-042 produced the highest number of conidia on MM, while isolate 23CM10 yielded the highest number on PDA at 7 dpi. In summary, our data demonstrate that constant NUV with fluorescent light was most conducive to conidia induction in B. oryzae. The choice of medium for conidiation may vary with the isolate, but MM and PDA, or SPA and RFA, could be effective for spore induction. These findings will contribute to improving conidiation according to the characteristics of collected B. oryzae isolates.

Development and application of prediction model of hyperlipidemia using SVM and meta-learning algorithm (SVM과 meta-learning algorithm을 이용한 고지혈증 유병 예측모형 개발과 활용)

  • Lee, Seulki;Shin, Taeksoo
    • Journal of Intelligence and Information Systems
    • /
    • v.24 no.2
    • /
    • pp.111-124
    • /
    • 2018
  • This study develops a classification model for predicting the occurrence of hyperlipidemia, one of the chronic diseases. Prior studies applying data mining to disease prediction can be divided into model-design studies for predicting cardiovascular disease and studies comparing disease prediction results. In the foreign literature, studies predicting cardiovascular disease predominate; domestic studies are similar but have focused mainly on hypertension and diabetes. Since hyperlipidemia, like hypertension and diabetes, is a chronic disease of high importance, this study selected it as the disease to analyze. We developed a model for predicting hyperlipidemia using SVM and meta-learning algorithms, which are known for their predictive power. We used the 2012 Korea Health Panel data set. The Korea Health Panel produces basic data on health expenditure, health level, and health behavior, and has been surveyed annually since 2008. From the 2012 hospitalization, outpatient, emergency, and chronic disease data, 1,088 patients with hyperlipidemia and 1,088 non-patients were randomly selected, for a total of 2,176 subjects. Three methods were used to select input variables for predicting hyperlipidemia.
First, a stepwise method was performed using logistic regression: among the 17 variables, the categorical variables (except length of smoking) were expressed as dummy variables relative to a reference group, and six variables (age, BMI, education level, marital status, smoking status, gender), excluding income level and smoking period, were selected at the 0.1 significance level. Second, the C4.5 decision tree algorithm was used; its significant input variables were age, smoking status, and education level. Finally, genetic algorithms were used: for SVM they selected six variables (age, marital status, education level, economic activity, smoking period, physical activity status), and for the artificial neural network three variables (age, marital status, education level). Based on the selected variables, we compared SVM, the meta-learning algorithm, and other prediction models for hyperlipidemia, evaluating classification performance with TP rate and precision. The main results are as follows. First, the accuracy of SVM was 88.4% and that of the artificial neural network 86.7%. Second, classification models using the variables selected by the stepwise method were slightly more accurate than models using all variables. Third, the precision of the artificial neural network exceeded that of SVM when only the three variables selected by the decision tree were used. With the variables selected by the genetic algorithm, classification accuracy was 88.5% for SVM and 87.9% for the artificial neural network. Finally, stacking, the meta-learning algorithm proposed in this study, performed best when the predicted outputs of SVM and MLP were used as input variables of an SVM meta-classifier.
The accuracy of the stacking model in classifying hyperlipidemia was higher than that of the other meta-learning algorithms, though its predictive performance equaled that of the best single model, SVM (88.6%). The limitations of this study are as follows. Although various variable selection methods were tried, most variables used were categorical dummies; with many categorical variables, results may differ if continuous variables are used, because models such as decision trees can suit categorical variables better than models such as neural networks. Despite these limitations, this study is significant in predicting hyperlipidemia with hybrid models such as meta-learning algorithms, which had not been studied previously, and in improving model accuracy by applying various variable selection techniques. We expect the proposed model to be effective for the prevention and management of hyperlipidemia.
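The stacking scheme the abstract reports as best (SVM and MLP base learners whose outputs feed an SVM meta-classifier) can be sketched with scikit-learn. The synthetic data is a stand-in for the Korea Health Panel variables, and all hyperparameters here are illustrative assumptions.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import StackingClassifier
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Synthetic stand-in for the study's six selected variables (age, BMI, etc.)
X, y = make_classification(n_samples=600, n_features=6, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Base learners (SVM, MLP); their out-of-fold predictions feed the SVM meta-classifier
stack = StackingClassifier(
    estimators=[
        ("svm", make_pipeline(StandardScaler(), SVC(probability=True, random_state=0))),
        ("mlp", make_pipeline(StandardScaler(), MLPClassifier(max_iter=1000, random_state=0))),
    ],
    final_estimator=SVC(),
    cv=5,  # cross-validated base predictions avoid leaking training labels
)
stack.fit(X_tr, y_tr)
acc = stack.score(X_te, y_te)
```

The `cv` argument matters: the meta-classifier must be trained on out-of-fold base predictions, otherwise it simply learns to trust overfitted base outputs.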

Resolving the 'Gray sheep' Problem Using Social Network Analysis (SNA) in Collaborative Filtering (CF) Recommender Systems (소셜 네트워크 분석 기법을 활용한 협업필터링의 특이취향 사용자(Gray Sheep) 문제 해결)

  • Kim, Minsung;Im, Il
    • Journal of Intelligence and Information Systems
    • /
    • v.20 no.2
    • /
    • pp.137-148
    • /
    • 2014
  • Recommender systems have become one of the most important technologies in e-commerce these days. For many consumers, the ultimate reason to shop online is to reduce the effort of information search and purchase, and recommender systems are a key technology serving this need. Many past studies of recommender systems have been devoted to developing and improving recommendation algorithms, and collaborative filtering (CF) is known to be the most successful one. Despite its success, however, CF has several shortcomings, such as the cold-start, sparsity, and gray sheep problems. To generate recommendations, ordinary CF algorithms require evaluations or preference information directly from users; for new users without such information, CF cannot produce recommendations (the cold-start problem). As the numbers of products and customers increase, the scale of the data increases exponentially and most data cells are empty; this sparse dataset makes the computation for recommendation extremely hard (the sparsity problem). Since CF is based on the assumption that there are groups of users sharing common preferences or tastes, it becomes inaccurate when there are many users with rare and unique tastes (the gray sheep problem). This study proposes a new algorithm that utilizes social network analysis (SNA) techniques to resolve the gray sheep problem. We use 'degree centrality' in SNA to identify users with unique preferences (gray sheep). Degree centrality refers to the number of direct links to and from a node. In a network of users connected through common preferences or tastes, those with unique tastes have fewer links to other users (nodes) and are isolated from them. Therefore, gray sheep can be identified by calculating the degree centrality of each node. We divide the dataset into two parts, gray sheep and others, based on the users' degree centrality.
Then, different similarity measures and recommendation methods are applied to the two datasets. The detailed algorithm is as follows. Step 1: Convert the initial data, a two-mode network (user to item), into a one-mode network (user to user). Step 2: Calculate the degree centrality of each node and separate the nodes whose degree centrality is below a pre-set threshold; the threshold is determined by simulation so that the accuracy of CF on the remaining dataset is maximized. Step 3: Apply an ordinary CF algorithm to the remaining dataset. Step 4: Since the separated dataset consists of users with unique tastes, for whom an ordinary CF algorithm cannot generate recommendations, use a 'popular item' method for these users. The F measures of the two datasets, weighted by their numbers of nodes, are summed as the final performance metric. To test the performance improvement of the new algorithm, an empirical study was conducted using a publicly available dataset, the MovieLens data from the GroupLens research team: 100,000 evaluations by 943 users on 1,682 movies. The proposed algorithm was compared with an ordinary CF algorithm using 'Best-N-neighbors' and cosine similarity. The empirical results show that the F measure improved by about 11% on average when the proposed algorithm was used.

Past studies to improve CF performance typically used additional information beyond users' evaluations, such as demographic data, and some applied SNA techniques as a new similarity metric. This study is novel in using SNA to separate the dataset, and it shows that CF performance can be improved, without any additional information, when SNA techniques are used as proposed. The study has several theoretical and practical implications. It empirically shows that the characteristics of a dataset can affect the performance of CF recommender systems, which helps researchers understand the factors affecting CF performance, and it opens a door for future studies applying SNA to CF to analyze dataset characteristics. In practice, it provides guidelines for improving the performance of CF recommender systems with a simple modification.
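Steps 1 and 2 of the algorithm above can be sketched as follows. The toy ratings and the threshold value are illustrative assumptions; in the paper the threshold is tuned by simulation against CF accuracy.

```python
from collections import defaultdict

# Toy ratings: user -> set of rated items (a two-mode, user-to-item network)
ratings = {
    "u1": {"i1", "i2", "i3"},
    "u2": {"i1", "i2"},
    "u3": {"i2", "i3"},
    "u4": {"i9"},  # unique taste: shares no items with anyone (a gray sheep)
}

# Step 1: project to a one-mode (user-to-user) network via co-rated items
edges = defaultdict(set)
users = list(ratings)
for i, a in enumerate(users):
    for b in users[i + 1:]:
        if ratings[a] & ratings[b]:
            edges[a].add(b)
            edges[b].add(a)

# Step 2: degree centrality = links / (n - 1); split users on a threshold
n = len(users)
centrality = {u: len(edges[u]) / (n - 1) for u in users}
THRESHOLD = 0.4  # placeholder; determined by simulation in the paper
gray_sheep = {u for u, c in centrality.items() if c < THRESHOLD}
others = set(users) - gray_sheep
```

Ordinary CF would then run on `others`, while `gray_sheep` users receive popular-item recommendations.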

Ensemble Learning with Support Vector Machines for Bond Rating (회사채 신용등급 예측을 위한 SVM 앙상블학습)

  • Kim, Myoung-Jong
    • Journal of Intelligence and Information Systems
    • /
    • v.18 no.2
    • /
    • pp.29-45
    • /
    • 2012
  • Bond rating is regarded as an important event for measuring the financial risk of companies and for determining investors' returns. As a result, predicting companies' credit ratings by applying statistical and machine learning techniques has been a popular research topic. The statistical techniques traditionally used in bond rating include multiple regression, multiple discriminant analysis (MDA), logistic models (LOGIT), and probit analysis. One major drawback, however, is that they rest on strict assumptions: linearity, normality, independence among predictor variables, and pre-existing functional forms relating the criterion and predictor variables. These strict assumptions have limited their application to the real world. Machine learning techniques used in bond rating prediction include decision trees (DT), neural networks (NN), and the support vector machine (SVM). SVM in particular is recognized as a new and promising classification and regression method. SVM learns a separating hyperplane that maximizes the margin between two categories; it is simple enough to be analyzed mathematically yet achieves high performance in practical applications. SVM implements the structural risk minimization principle and minimizes an upper bound on the generalization error, so its solution tends to be a global optimum and overfitting is unlikely. In addition, SVM does not require many training samples, since it builds prediction models using only representative samples near the boundaries, called support vectors. Many experimental studies have shown that SVM has been successfully applied in a variety of pattern recognition fields. However, three major drawbacks can degrade SVM's performance.
First, SVM was originally proposed for binary classification. Methods for combining SVMs for multi-class classification, such as One-Against-One and One-Against-All, have been proposed, but they do not perform as well in multi-class problems as SVM does in binary classification. Second, approximation algorithms (e.g., decomposition methods, the sequential minimal optimization algorithm) can reduce computation time for multi-class problems but may deteriorate classification performance. Third, multi-class prediction suffers from the data imbalance problem, which occurs when the instances of one class greatly outnumber those of another; such data sets skew the decision boundary toward a default classifier and reduce classification accuracy. SVM ensemble learning is one machine learning approach to coping with these drawbacks. Ensemble learning is a method for improving the performance of classification and prediction algorithms. AdaBoost, one of the most widely used ensemble techniques, constructs a composite classifier by sequentially training classifiers while increasing the weights on misclassified observations over iterations: observations incorrectly predicted by previous classifiers are chosen more often than correctly predicted ones, so boosting produces new classifiers better able to predict the examples on which the current ensemble performs poorly. In this way it can reinforce training on the misclassified observations of the minority class. This paper proposes multiclass Geometric Mean-based Boosting (MGM-Boost) to resolve the multiclass prediction problem.
Since MGM-Boost introduces the notion of the geometric mean into AdaBoost, its learning process considers the geometric mean-based accuracy and errors across classes. This study applies MGM-Boost to a real-world bond rating case for Korean companies to examine its feasibility. 10-fold cross-validation was performed three times with different random seeds to ensure that the comparison among the three classifiers did not happen by chance: for each run, the entire data set was first partitioned into ten equal-sized sets, and each set was in turn used as the test set while the classifier trained on the other nine, so cross-validated folds were tested independently for each algorithm, yielding results for each classifier on 30 experiments. In arithmetic mean-based prediction accuracy, MGM-Boost (52.95%) outperforms both AdaBoost (51.69%) and SVM (49.47%); in geometric mean-based prediction accuracy, MGM-Boost (28.12%) also outperforms AdaBoost (24.65%) and SVM (15.42%). A t-test was used to examine whether the performance of each classifier over the 30 folds differed significantly; the results indicate that MGM-Boost's performance is significantly different from the AdaBoost and SVM classifiers at the 1% level. These results show that MGM-Boost can provide robust and stable solutions to multi-class problems such as bond rating.
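The geometric mean-based accuracy reported above can be illustrated as the geometric mean of per-class recalls, which collapses to zero whenever any class is entirely missed. This is a minimal sketch of the metric only, not of MGM-Boost's boosting weights, and the confusion matrix is invented for illustration.

```python
import numpy as np

def geometric_mean_accuracy(conf):
    """Geometric mean of per-class recalls from a confusion matrix
    (rows = true class, columns = predicted class)."""
    conf = np.asarray(conf, dtype=float)
    recalls = np.diag(conf) / conf.sum(axis=1)
    return float(recalls.prod() ** (1.0 / len(recalls)))

# Skewed 3-class example: the classifier ignores the minority class C entirely
conf = [
    [90, 10, 0],  # class A, recall 0.9
    [20, 80, 0],  # class B, recall 0.8
    [ 5,  5, 0],  # class C, recall 0.0 -> geometric mean collapses to 0
]
```

Here the arithmetic accuracy is about 0.81 while the geometric-mean accuracy is 0, which is exactly why the metric penalizes classifiers that sacrifice minority classes.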

Machine learning-based corporate default risk prediction model verification and policy recommendation: Focusing on improvement through stacking ensemble model (머신러닝 기반 기업부도위험 예측모델 검증 및 정책적 제언: 스태킹 앙상블 모델을 통한 개선을 중심으로)

  • Eom, Haneul;Kim, Jaeseong;Choi, Sangok
    • Journal of Intelligence and Information Systems
    • /
    • v.26 no.2
    • /
    • pp.105-129
    • /
    • 2020
  • This study uses corporate data from 2012 to 2018, when K-IFRS was applied in earnest, to predict default risks. The data totaled 10,545 rows and 160 columns: 38 from the statement of financial position, 26 from the statement of comprehensive income, 11 from the statement of cash flows, and 76 financial ratio indices. Unlike most prior studies, which used default events as the basis for learning about default risk, this study calculated default risk from each company's market capitalization and stock price volatility based on the Merton model. This resolves the data imbalance caused by the scarcity of default events, which has been pointed out as a limitation of the existing methodology, and reflects the differences in default risk that exist among ordinary companies. Because learning used only corporate information available for unlisted companies, the default risks of unlisted companies without stock price information can be appropriately derived, providing stable default risk assessment for companies, such as small and medium-sized companies and startups, whose default risk is difficult to determine with traditional credit rating models. Although machine learning has recently been actively applied to predicting corporate default risk, most studies make predictions with a single model, so model bias is an issue. Given that a company's default risk information is very widely used in the market and sensitivity to differences in default risk is high, a stable and reliable valuation methodology, and strict standards for the calculation method, are required.
The credit rating method stipulated by the Financial Services Commission in the Financial Investment Business Regulations calls for evaluation methods to be prepared, including verification of their adequacy, in consideration of past statistical data and experience with credit ratings and of changes in future market conditions. This study reduced individual models' bias by utilizing stacking ensemble techniques that synthesize various machine learning models. This captures the complex nonlinear relationships between default risk and various corporate information while maximizing the advantage of machine learning-based default risk prediction models, which take less time to calculate. To produce the sub-model forecasts used as input to the stacking ensemble model, the training data were divided into seven pieces and the sub-models were trained on the divided sets. To compare predictive power, Random Forest, MLP, and CNN models were trained on the full training data, and each model's predictive power was verified on the test set. The analysis showed that the stacking ensemble model exceeded the predictive power of the Random Forest model, the best-performing single model. Next, to check for statistically significant differences between the stacking ensemble model and each individual model, pairs of their forecasts were constructed. Because Shapiro-Wilk normality tests showed that none of the pairs followed normality, the nonparametric Wilcoxon rank-sum test was used to check whether the two forecasts in each pair differed significantly; the stacking ensemble model's forecasts differed significantly from those of the MLP and CNN models.
In addition, since traditional credit rating models can also be reflected as sub-models in calculating the final default probability, this study provides a methodology by which existing credit rating agencies can apply machine learning-based bankruptcy risk prediction. The stacking ensemble techniques proposed here can also help designs meet the requirements of the Financial Investment Business Regulations through the combination of various sub-models. We hope this research will be used to overcome the limitations of existing machine learning-based models and increase their practical use.
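The Merton-model default risk the study computes from market capitalization and stock price volatility can be sketched as follows. This is the standard Merton structural model (solve for unobserved asset value and volatility from observed equity value and volatility, then take N(-d2) as the risk-neutral default probability); the input figures, rate, and horizon are illustrative assumptions, not the paper's data.

```python
import numpy as np
from scipy.optimize import fsolve
from scipy.stats import norm

def merton_pd(equity, sigma_e, debt, r=0.02, T=1.0):
    """Solve the Merton system for asset value V and asset volatility sv,
    then return the risk-neutral default probability N(-d2)."""
    def equations(x):
        V, sv = x
        d1 = (np.log(V / debt) + (r + 0.5 * sv**2) * T) / (sv * np.sqrt(T))
        d2 = d1 - sv * np.sqrt(T)
        # Equity as a call option on assets; equity vol linked to asset vol
        eq1 = V * norm.cdf(d1) - debt * np.exp(-r * T) * norm.cdf(d2) - equity
        eq2 = norm.cdf(d1) * sv * V - sigma_e * equity
        return [eq1, eq2]

    V, sv = fsolve(equations, x0=[equity + debt, sigma_e * equity / (equity + debt)])
    d2 = (np.log(V / debt) + (r - 0.5 * sv**2) * T) / (sv * np.sqrt(T))
    return float(norm.cdf(-d2))

pd_low = merton_pd(equity=120.0, sigma_e=0.3, debt=50.0)   # well-capitalized firm
pd_high = merton_pd(equity=20.0, sigma_e=0.8, debt=100.0)  # highly leveraged firm
```

A continuous default probability like this, rather than a rare binary default label, is what lets the study avoid the class-imbalance problem it describes.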

Self-optimizing feature selection algorithm for enhancing campaign effectiveness (캠페인 효과 제고를 위한 자기 최적화 변수 선택 알고리즘)

  • Seo, Jeoung-soo;Ahn, Hyunchul
    • Journal of Intelligence and Information Systems
    • /
    • v.26 no.4
    • /
    • pp.173-198
    • /
    • 2020
  • For a long time, many academic studies have predicted the success of campaigns targeting customers, and prediction models applying various techniques are still being studied. Recently, as campaign channels have expanded with the rapid growth of online business, companies carry out campaigns of a variety and volume incomparable to the past. However, customers increasingly perceive campaigns as spam as fatigue from duplicate exposure grows, and from a corporate standpoint the effectiveness of campaigns is decreasing while investment costs rise, leading to low actual campaign success rates. Accordingly, various studies aim to improve campaign effectiveness in practice. A campaign system's ultimate purpose is to increase the success rate of campaigns by collecting and analyzing various customer-related data, and recent work attempts to predict campaign responses using machine learning. Selecting appropriate features is very important because campaign data have many features: if all input data are used to classify a large volume of data, learning time grows as the number of classification classes expands, so a minimal input data set should be extracted from the whole. In addition, training a model on too many features can degrade prediction accuracy through overfitting or correlation between features. Therefore, to improve accuracy, a feature selection technique that removes near-noise features should be applied; feature selection is a necessary step in analyzing a high-dimensional data set.
Among greedy algorithms, SFS (Sequential Forward Selection), SBS (Sequential Backward Selection), and SFFS (Sequential Floating Forward Selection) are widely used as traditional feature selection techniques, but when there are many features they suffer poor classification performance and long learning times. Therefore, this study proposes an improved feature selection algorithm to enhance the effectiveness of existing campaigns. The purpose is to improve the existing SFFS sequential method in the search for the feature subsets that underlie machine learning model performance, using the statistical characteristics of the data processed in the campaign system: features with strong influence on performance are derived first, features with negative effects are removed, and the sequential method is then applied, increasing search efficiency and enabling generalized prediction. The proposed model showed better search and prediction performance than the traditional greedy algorithm, and campaign success prediction was higher than with the original data set, the greedy algorithm, the genetic algorithm (GA), and recursive feature elimination (RFE). In addition, the improved feature selection algorithm helped analyze and interpret prediction results by providing the importance of the derived features, which include features already known statistically to matter, such as age, customer rating, and sales.
Unlike what previous campaign planners used to select campaign targets, features such as the combined product name, the average three-month data consumption rate, and the last three months' wireless data usage were unexpectedly selected as important features for campaign response. It was confirmed that base attributes can also be very important features depending on the type of campaign, making it possible to analyze and understand the important characteristics of each campaign type.
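The baseline the proposed algorithm improves on, sequential forward selection, can be sketched as follows. Note this omits SFFS's backward "floating" step, and the synthetic data and logistic-regression scorer are illustrative assumptions rather than the study's campaign data or model.

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for campaign feature data
X, y = make_classification(n_samples=400, n_features=10, n_informative=4, random_state=1)

def sfs(X, y, k, model=None):
    """Plain sequential forward selection: greedily add the feature that
    most improves cross-validated accuracy, until k features are chosen."""
    model = model or LogisticRegression(max_iter=1000)
    selected, remaining = [], list(range(X.shape[1]))
    while len(selected) < k:
        scores = {
            f: cross_val_score(model, X[:, selected + [f]], y, cv=5).mean()
            for f in remaining
        }
        best = max(scores, key=scores.get)  # feature with the highest CV score
        selected.append(best)
        remaining.remove(best)
    return selected

subset = sfs(X, y, k=4)
```

SFFS adds a conditional backward pass after each addition (drop a feature if that improves the score), and the study's contribution is pre-filtering the candidate set with statistical characteristics before this sequential search runs.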

Research Framework for International Franchising (국제프랜차이징 연구요소 및 연구방향)

  • Kim, Ju-Young;Lim, Young-Kyun;Shim, Jae-Duck
    • Journal of Global Scholars of Marketing Science
    • /
    • v.18 no.4
    • /
    • pp.61-118
    • /
    • 2008
    • The purpose of this research is to construct research framework for international franchising based on existing literature and to identify research components in the framework. Franchise can be defined as management styles that allow franchisee use various management assets of franchisor in order to make or sell product or service. It can be divided into product distribution franchise that is designed to sell products and business format franchise that is designed for running it as business whatever its form is. International franchising can be defined as a way of internationalization of franchisor to foreign country by providing its business format or package to franchisee of host country. International franchising is growing fast for last four decades but academic research on this is quite limited. Especially in Korea, research about international franchising is carried out on by case study format with single case or empirical study format with survey based on domestic franchise theory. Therefore, this paper tries to review existing literature on international franchising research, providing research framework, and then stimulating new research on this field. International franchising research components include motives and environmental factors for decision of expanding to international franchising, entrance modes and development plan for international franchising, contracts and management strategy of international franchising, and various performance measures from different perspectives. First, motives of international franchising are fee collection from franchisee. Also it provides easier way to expanding to foreign country. The other motives including increase total sales volume, occupying better strategic position, getting quality resources, and improving efficiency. Environmental factors that facilitating international franchising encompasses economic condition, trend, and legal or political factors in host and/or home countries. 
In addition, control power and risk management capability of franchisor plays critical role in successful franchising contract. Final decision to enter foreign country via franchising is determined by numerous factors like history, size, growth, competitiveness, management system, bonding capability, industry characteristics of franchisor. After deciding to enter into foreign country, franchisor needs to set entrance modes of international franchising. Within contractual mode, there are master franchising and area developing franchising, licensing, direct franchising, and joint venture. Theories about entrance mode selection contain concepts of efficiency, knowledge-based approach, competence-based approach, agent theory, and governance cost. The next step after entrance decision is operation strategy. Operation strategy starts with selecting a target city and a target country for franchising. In order to finding, screening targets, franchisor needs to collect information about candidates. Critical information includes brand patent, commercial laws, regulations, market conditions, country risk, and industry analysis. After selecting a target city in target country, franchisor needs to select franchisee, in other word, partner. The first important criteria for selecting partners are financial credibility and capability, possession of real estate. And cultural similarity and knowledge about franchisor and/or home country are also recognized as critical criteria. The most important element in operating strategy is legal document between franchisor and franchisee with home and host countries. Terms and conditions in legal documents give objective information about characteristics of franchising agreement for academic research. Legal documents have definitions of terminology, territory and exclusivity, agreement of term, initial fee, continuing fees, clearing currency, and rights about sub-franchising. 
Legal documents may also contain terms about softer elements, such as training programs and operation manuals, and harder elements, such as the competent court of law and terms of expiration. The next element in operating strategy concerns product and service. Especially for business format franchising, product/service deliverables, benefit communicators, system identifiers (architectural features), and format facilitators are listed as product/service strategic elements. Another important decision on product/service is standardization versus customization. The rationale behind standardization is cost reduction, efficiency, consistency, image congruence, brand awareness, and price competitiveness. Standardization also enables large-scale R&D and innovative change in management style. Another element in operating strategy is control management. The simplest way to control a franchise contract is to rely on legal terms, the contractual control system. There are other control systems as well: the administrative control system and the ethical control system. The contractual control system is a coercive source of power, but franchisors usually do not want to use legal power, since it does not help build a positive relationship; instead, self-regulation is widely used. The administrative control system uses control mechanisms from the ordinary working relationship. Its main components are supporting activities for the franchisee and communication methods. For example, the franchisor provides advertising, training, manuals, and delivery, and the franchisee follows the franchisor's direction. Another component is building the franchisor's brand power. The last research element is the performance factor of international franchising. Performance elements can be divided into the franchisor's performance and the franchisee's performance. The conceptual performance measures of the franchisor are simple but not easy to obtain objectively: profit, sales, cost, experience, and brand power. The performance measures of the franchisee are mostly about benefits to the host country. 
They include small business development, promotion of employment, introduction of new business models, and upgrading of the technology level. There are also indirect benefits, such as increased tax revenue, refinement of corporate citizenship, regional economic clustering, and improvement of the international balance of payments. Beyond these economic effects, the host country experiences socio-cultural change, including demographic change, social trends, changes in customer values, social communication, and social globalization; this is sometimes called the westernization or McDonaldization of society. In addition, the paper reviews theories that have been frequently applied to international franchising research, such as agency theory, the resource-based view, transaction cost theory, organizational learning theory, and international expansion theories. Resource-based theory is used for strategic decisions based on resources, such as decisions about entry and cooperation depending on the resources of franchisee and franchisor. Transaction cost theory can be applied in examining mutual trust or the satisfaction of franchising players. Agency theory tries to explain strategic decisions aimed at reducing the problems caused by utilizing agents, for example research on control systems in franchising agreements. Organizational learning theory is relatively new in franchising research; it assumes that the organization tries to maximize performance and learning. In addition, internalization theory advocates the strategic decision of direct investment to remove the inefficiency of market transactions and is applied in research on contract terms. Oligopolistic competition theory is used to explain the various entry modes for international expansion, and competency theory supports the strategic decision to utilize key competitive advantages. Furthermore, research methodologies, both qualitative and quantitative, are suggested for more rigorous international franchising research. 
Quantitative research needs more real data beyond survey data, which usually reflects respondents' judgment. To verify theory more rigorously, research based on real data is essential; however, real quantitative data are quite hard to obtain. Qualitative research other than single case studies is also highly recommended. Since international franchising has a limited number of cases, scientific research based on grounded theory and ethnographic study can be used. A scientific case study is differentiated from a single case study by its data collection and analysis methods; the key concepts are triangulation in measurement, logical coding, and comparison. Finally, the paper provides an overall research direction for international franchising after summarizing research trends in Korea. International franchising research in Korea comes in two types: studies of Korean franchisors going overseas and studies of Korean franchisees of foreign franchisors. Among research on Korean franchisors, two common patterns are observed. First, such studies usually deal with the success story of a single franchisor. The other common pattern is a focus on the same industry and country. Therefore, international franchising research needs to extend its focus to broader subjects with scientific research methodology, as well as to the development of new theory.


    Correlations between the Capacity of In Vitro Fertilization and the Assays of Sperm Function and Characteristics in Frozen-thawed Bovine Spermatozoa (소 동결-융해 정자에 있어서 체외수정능력과 정자 기능 및 성상 분석법간의 상관관계)

    • Ryu, B.Y.;Chung, Y.C.;Kim, C.K.;Shin, H.A.;Han, J.H.;Kim, S.H.;Moon, S.Y.;Kim, H.R.;Choi, H.
      • Korean Journal of Animal Reproduction
      • /
      • v.26 no.3
      • /
      • pp.275-289
      • /
      • 2002
    • The objective of this study was to develop an in vitro assessment of the sperm fertilizing capacity of bulls and to investigate the factors influencing sperm function and characteristics in frozen-thawed bovine spermatozoa. In vitro fertilization (IVF), evaluation of motility and normal morphology, the hypoosmotic swelling test (HOST), the Ca-ionophore-induced acrosome reaction, luminol- and lucigenin-dependent chemiluminescence for the measurement of reactive oxygen species (ROS), measurement of malondialdehyde formation for the analysis of lipid peroxidation (LPO), and evaluation of DNA fragmentation by flow cytometry using TdT-mediated nick end labelling (TUNEL) were performed on frozen-thawed bovine spermatozoa. Correlations between the rates of fertilization and blastocyst formation after IVF and the values of the respective assays were investigated. 1. The IVF rate and blastocyst formation rate averaged 64.4% and 34.3% for spermatozoa from the high-fertility bull group and 18.5% and 6.2% for spermatozoa from the low-fertility bull group, respectively; these values were significantly different between the two groups. Sperm motility and the percentage of acrosome reaction averaged 79.0% and 66.2% for spermatozoa from the high-fertility bull group and 40.7% and 22.9% for spermatozoa from the low-fertility bull group, respectively; these did not differ between the two groups. 2. Luminol-dependent chemiluminescence, LPO, and DNA fragmentation averaged 6.4, 2.0 nmol, and 2.6% for spermatozoa from the high-fertility bull group and 6.5, 3.1 nmol, and 7.4% for spermatozoa from the low-fertility bull group, respectively; these values were significantly different between the two groups. There was no significant difference in lucigenin-dependent chemiluminescence between the two groups. 3. 
The fertilization rate was positively correlated with motility and the rate of Ca-ionophore-induced acrosome reaction, but negatively correlated with luminol-dependent chemiluminescence, the rate of LPO, and the percentage of sperm with DNA fragmentation. There was no correlation between fertilization rate and the percentage of swollen spermatozoa, normal morphology, or lucigenin-dependent chemiluminescence. 4. The blastocyst formation rate was positively correlated with the rate of Ca-ionophore-induced acrosome reaction, but negatively correlated with luminol-dependent chemiluminescence, the rate of LPO, and the percentage of sperm with DNA fragmentation. There was no correlation between blastocyst formation rate and motility, the percentage of swollen spermatozoa, normal morphology, or lucigenin-dependent chemiluminescence. In conclusion, these data suggest that ROS significantly affect semen quality. The assays in this study may provide a basis for improving the in vitro assessment of sperm fertilizing capacity.

    On the present bamboo groves of Cholla-nam-do and their proper treatment -No. 1. On the growing stock of representative Phyllostachys reticulata groves by county (전라남도(全羅南道)의 죽림현황(竹林現況)과 그 개선대책(改善對策) -제일(第一), 각군별대표고죽림(各郡別代表苦竹林)의 몇가지 죽간형질(竹桿形質)과 축적(蓄積)에 대하여)

    • Chung, Dong Oh
      • Journal of Korean Society of Forest Science
      • /
      • v.2 no.1
      • /
      • pp.19-28
      • /
      • 1962
    • The total area of bamboo groves in Korea, which are limited to below 37° north latitude, i.e., to the southern parts of Chungchung-nam-do Province and Kangwon-do Province, is 3,235 ha, yet the country must import about 3,000 metric tons of bamboo culms from Japan every year. It may be true that the country is not well suited to the economical cultivation of bamboo groves from the viewpoint of climate, but the author believes that self-sufficiency in bamboo is not impossible if scientific methods for improving bamboo groves are introduced into our primitive groves. Keeping this point in mind, the author set out to study the bamboo groves of the country and, as a first step, investigated the actual state of twenty good bamboo groves located in Cholla-nam-do Province from March 1961 to January 1962. This is a report on some characters of bamboo culms and on growing stock, based on samples collected in the present investigation. 1) The number of bamboo culms per 0.1 ha before harvesting is 1,183 on average, 1,840 at maximum and 87.5 at minimum. 2) According to the owners, 1960 was such an off-year that they could hardly see any yearling bamboos in the groves, but in 1961 a great many new bamboos were produced: the proportion of yearling bamboos produced that year to mature bamboos (over 2 years old) is 58.7% on average, with a maximum of 110.5% and a minimum of 16.8%. 3) The average diameter of culms at eye height is 6.5 cm, while the largest diameter reaches 11.2 cm; the average diameters of yearling and mature bamboos are 6.5 cm and 6.6 cm, respectively. 4) Internode length is 29.4 cm on average, with the shortest 21.3 cm and the longest 38.4 cm. The average internode lengths of new culms and mature culms are 27.6 cm and 29.4 cm, respectively, showing that the internode length of new culms is shorter than that of mature ones. 
5) Through this investigation, it was found that internode length is influenced by the exposure and density of the bamboo grove: the higher the density of the grove and the closer the exposure is to the north-east, the longer the internode length becomes (see Tables 7 and 8). 6) As for the growing stock of the bamboo groves, bundles per 0.1 ha amount to 271 sok (a unit of bundle) on overall average, 445 sok at maximum and 126 sok at minimum. 7) Among the twenty typical bamboo groves, chosen in each county of Cholla-nam-do Province, only one fully meets Ueda's standard rule* prescribing a good bamboo grove, but the eight groves shown in Table 9 could be recommended as good ones in Cholla-nam-do Province, because the author believes those groves can be improved further if more attention is paid to their management. 8) Considering that the groves have been managed carelessly and primitively, and that they unfortunately faced almost complete clear felling over the entire area at the time of the Korean War, we can surely expect much greater increments in the bamboo groves if scientific methods of management are introduced.


    (34141) Korea Institute of Science and Technology Information, 245, Daehak-ro, Yuseong-gu, Daejeon
    Copyright (C) KISTI. All Rights Reserved.