• Title/Summary/Keyword: Studies


A Conceptual Review of the Transaction Costs within a Distribution Channel (유통경로내의 거래비용에 대한 개념적 고찰)

  • Kwon, Young-Sik; Mun, Jang-Sil
    • Journal of Distribution Science, v.10 no.2, pp.29-41, 2012
  • This paper undertakes a conceptual review of transaction cost to broaden the understanding of the transaction cost analysis (TCA) approach. More than 40 years have passed since Coase's fundamental insight that transaction, coordination, and contracting costs must be considered explicitly in explaining the extent of vertical integration. Coase (1937) forced economists to identify previously neglected constraints on the trading process to foster efficient intrafirm, rather than interfirm, transactions. The transaction cost approach to the study of economic organization regards transactions as the basic units of analysis and holds that an understanding of transaction cost economizing is central to organizational study. The approach applies to determining efficient boundaries, as between firms and markets, and to the internal organization of transactions, including the design of employment relations. TCA, developed principally by Oliver Williamson (1975, 1979, 1981a), blends institutional economics, organizational theory, and contract law. Further progress in transaction cost research awaits the identification of the critical dimensions in which transaction costs differ and an examination of the economizing properties of alternative institutional modes for organizing transactions. The crucial investment distinction is: to what degree are transaction-specific (non-marketable) expenses incurred? Unspecialized items pose few hazards, since buyers can turn to alternative sources and suppliers can sell output intended for one order to other buyers. Non-marketability problems arise when the identities of specific parties have important cost-bearing consequences; transactions of this kind are labeled idiosyncratic. The summarized results of the review are as follows. First, firms' distribution decisions often prompt examination of the make-or-buy question: should a marketing activity be performed within the organization by company employees or contracted to an external agent? Second, manufacturers introducing an industrial product to a foreign market face a difficult decision: should the product be marketed primarily by captive agents (the company sales force and distribution division) or by independent intermediaries (outside sales agents and distributors)? Third, the authors develop a theoretical extension to the basic transaction cost model by combining insights from various theories with the TCA approach. Fourth, other such extensions are likely required for the general model to be applied to different channel situations; it is naive to assume the basic model applies across markedly different channel contexts without modifications and extensions. Although this study contributes to scholarly research, it is limited by several factors. First, the theoretical perspective of TCA has attracted considerable recent interest in the area of marketing channels, where the analysis aims to match the properties of efficient governance structures with the attributes of the transaction. Second, empirical evidence on TCA's basic propositions is sketchy. Apart from Anderson's (1985) study of the vertical integration of the selling function and John's (1984) study of opportunism by franchised dealers, virtually no marketing studies involving the constructs implicated in the analysis have been reported. We hope, therefore, that further research will clarify the distinctions between the different aspects of specific assets. Another important line of future research is the integration of efficiency-oriented TCA with organizational approaches that emphasize the conceptual definition of specific assets and industry structure. Finally, research on transaction costs, uncertainty, opportunism, and switching costs is critical to future study.


STUDIES ON THE PROPAGATION OF ABALONE (전복의 증식에 관한 연구)

  • PYEN Choong-Kyu
    • Korean Journal of Fisheries and Aquatic Sciences, v.3 no.3, pp.177-186, 1970
  • The spawning of the abalone, Haliotis discus hannai, was induced in October 1969 by air exposure for about 30 minutes. At temperatures of 14.0 to $18.8^{\circ}C$, the earliest trochophore stage was reached within 22 hours after the eggs were laid, and the trochophore transformed into the veliger stage within 34 hours after fertilization. For 7 to 9 days after oviposition the veliger floated in sea water and then settled to the bottom. The peristomal shell was secreted along the outer lip of the aperture of the larval shell, and the first respiratory pore appeared at about 110 days after fertilization. The shell attained a length of 0.40 mm in 15 days, 1.39 mm in 49 days, 2.14 mm in 110 days, 5.20 mm in 170 days, and 10.00 mm in 228 days. The monthly growth rate of the shell length is expressed by the equation $L = 0.9981\,e^{0.18659M}$, where L is shell length and M is time in months. The density of floating larvae in the culture tank was about 10 larvae per 100 cc. The number of larvae attached to a polyethylene collector ($30 \times 20$ cm) ranged from 10 to 600, and mortality of the settled larvae on the polyethylene collectors was about 87.0% during the 170 days following settlement. Navicula sp. was cultured on rough polyethylene collectors hung at three depths: 5 cm, 45 cm, and 85 cm. At each depth the highest cell concentration appeared after 15 to 17 days: $34.3 \times 10^4$ cells/cm² at 5 cm, $27.2 \times 10^4$ cells/cm² at 45 cm, and $26.3 \times 10^4$ cells/cm² at 85 cm. At temperatures of 13.0 to $14.3^{\circ}C$, the distance travelled by the larvae (3.0 mm in shell length) averaged 11.36 mm over a period of 30 days. Their locomotion was relatively active between 6 p.m. and 9 p.m., when 52.2% of them moved. When larvae 2.0 mm in shell length were kept in water at 0 to $1.8^{\circ}C$, they moved 1.15 cm between 4 p.m. and 8 p.m. and 0.10 cm between midnight and 8 a.m. The relationships between shell length and body weight of abalone sampled from three localities are: Dolsan-do, $W = 0.2479\,L^{2.5721}$; Huksan-do, $W = 0.1001\,L^{3.1021}$; Pohang, $W = 0.9632\,L^{2.0611}$.
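
The fitted relations above can be evaluated directly. A minimal Python sketch using only the coefficients reported in the abstract; units beyond "shell length" and "months" (and the valid age range) are not stated there, so the helper names and sample ages are illustrative assumptions:

```python
import math

# Monthly growth curve from the abstract: L = 0.9981 * e^(0.18659 * M),
# where L is shell length and M is age in months.
def shell_length(months: float) -> float:
    return 0.9981 * math.exp(0.18659 * months)

# Length-weight relation reported for the Dolsan-do sample:
# W = 0.2479 * L^2.5721 (units for W and L are not given in the abstract).
def weight_dolsando(length: float) -> float:
    return 0.2479 * length ** 2.5721

for m in (3, 6, 12):  # sample ages in months, chosen for illustration
    print(m, round(shell_length(m), 2))
```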


A Study of 'Emotion Trigger' by Text Mining Techniques (텍스트 마이닝을 이용한 감정 유발 요인 'Emotion Trigger'에 관한 연구)

  • An, Juyoung; Bae, Junghwan; Han, Namgi; Song, Min
    • Journal of Intelligence and Information Systems, v.21 no.2, pp.69-92, 2015
  • The explosion of social media data has led researchers to apply text-mining techniques to analyze big social media data in a more rigorous manner. Although social media text analysis algorithms have improved, previous approaches still have limitations. In the field of sentiment analysis of social media written in Korean, there are two typical approaches. One is the linguistic approach using machine learning, which is the most common; some studies have added grammatical factors to the feature sets used to train classification models. The other approach applies semantic analysis to sentiment analysis, but it has mainly been applied to English texts. To overcome these limitations, this study applies the Word2Vec algorithm, an extension of neural network algorithms, to capture the broader semantic features that have been underestimated in existing sentiment analysis. The result of adopting the Word2Vec algorithm is compared with the result of co-occurrence analysis to identify the difference between the two approaches. The results show that the Word2Vec algorithm extracts about three times as many words expressing emotion about each keyword as co-occurrence analysis does. This difference stems from Word2Vec's vectorization of semantic features: the algorithm can catch hidden related words that traditional analysis has not found. In addition, Part-of-Speech (POS) tagging for Korean is used to detect adjectives as "emotion words." The emotion words extracted from the text are converted into word vectors by the Word2Vec algorithm to find related words, and among these related words, nouns are selected because they may have a causal relationship with the emotion word in the sentence. The process of extracting these trigger factors of emotion words is named "Emotion Trigger" in this study. As a case study, the datasets were collected by searching with three keywords rich in public emotion and opinion: professor, prosecutor, and doctor. Preliminary data collection was conducted to select secondary keywords for data gathering. The secondary keywords for each keyword were: professor (sexual assault, misappropriation of research money, recruitment irregularities, polifessor); doctor (Shin Hae-chul Sky Hospital, drinking and plastic surgery, rebates); prosecutor (lewd behavior, sponsor). The text data comprise about 100,000 documents (professor: 25,720; doctor: 35,110; prosecutor: 43,225), gathered from news, blogs, and Twitter to reflect various levels of public emotion. Gephi (http://gephi.github.io) was used for visualization, and all text processing and analysis programs were written in Java. The contributions of this study are as follows. First, different approaches to sentiment analysis are integrated to overcome the limitations of existing approaches. Second, finding Emotion Triggers can reveal hidden connections to public emotion that existing methods cannot detect. Finally, the approach used in this study can be generalized regardless of the type of text data. The limitation of this study is that it is hard to establish that a word extracted by Emotion Trigger processing has a significant causal relationship with the emotion word in a sentence. Future work will clarify the causal relationship between emotion words and the words extracted by Emotion Trigger by comparing them with manually tagged relationships. Furthermore, part of the text data used for Emotion Trigger comes from Twitter, which has a number of distinct features not dealt with in this study; these features will be considered in further work.
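
The Word2Vec step described above can be sketched briefly. The paper's own pipeline was written in Java; the following is a minimal, hypothetical Python sketch with gensim, where the toy corpus, the emotion adjective "angry", and all parameter values are placeholder assumptions rather than the authors' data or settings:

```python
from gensim.models import Word2Vec

# Toy corpus of pre-tokenized posts (placeholders, not the study's data).
sentences = [
    ["professor", "research", "money", "angry"],
    ["professor", "assault", "angry", "public"],
    ["doctor", "surgery", "drinking", "upset"],
] * 50  # repeated so that min_count is satisfied

model = Word2Vec(sentences, vector_size=50, window=5, min_count=5, seed=1)

# Words most similar to an emotion adjective; in the paper, the nouns among
# such neighbors become "Emotion Trigger" candidates.
print(model.wv.most_similar("angry", topn=5))
```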

A Time Series Graph based Convolutional Neural Network Model for Effective Input Variable Pattern Learning : Application to the Prediction of Stock Market (효과적인 입력변수 패턴 학습을 위한 시계열 그래프 기반 합성곱 신경망 모형: 주식시장 예측에의 응용)

  • Lee, Mo-Se; Ahn, Hyunchul
    • Journal of Intelligence and Information Systems, v.24 no.1, pp.167-181, 2018
  • Over the past decade, deep learning has been in the spotlight among machine learning algorithms. In particular, CNN (Convolutional Neural Network), known as an effective solution for recognizing and classifying images or voices, has been widely applied to classification and prediction problems. In this study, we investigate how to apply CNN to business problem solving. Specifically, we propose to apply CNN to stock market prediction, one of the most challenging tasks in machine learning research. As mentioned, CNN's strength lies in interpreting images; thus, the proposed model adopts CNN as a binary classifier that predicts stock market direction (upward or downward) using time series graphs as its inputs. That is, our proposal is to build a machine learning algorithm that mimics the experts called "technical analysts," who examine graphs of past price movements to predict future price movements. Our proposed model, named CNN-FG (Convolutional Neural Network using Fluctuation Graph), consists of five steps. In the first step, it divides the dataset into intervals of five days. In step 2, it creates time series graphs for the divided dataset; each graph is drawn as a $40 \times 40$ pixel image, with each independent variable drawn in a different color. In step 3, the model converts the images into matrices: each image becomes a combination of three matrices expressing the color values on the R (red), G (green), and B (blue) scales. In the next step, it splits the graph-image dataset into training and validation sets; we used 80% of the total dataset for training and the remaining 20% for validation. In the final step, CNN classifiers are trained on the training images. Regarding the parameters of CNN-FG, we adopted two convolution filters ($5 \times 5 \times 6$ and $5 \times 5 \times 9$) in the convolution layers and a $2 \times 2$ max pooling filter in the pooling layer. The numbers of nodes in the two hidden layers were set to 900 and 32 respectively, and the number of nodes in the output layer was set to 2 (one for the prediction of an upward trend, the other for a downward trend). The activation function for the convolution and hidden layers was ReLU (Rectified Linear Unit), and that for the output layer was the Softmax function. To validate CNN-FG, we applied it to the prediction of KOSPI200 over 2,026 days in eight years (2009 to 2016). To match the proportions of the two classes of the dependent variable (i.e., tomorrow's stock market movement), we selected 1,950 samples by random sampling. Finally, we built the training dataset from 80% of the total dataset (1,560 samples) and the validation dataset from the remaining 20% (390 samples). The independent variables of the experimental dataset comprised twelve technical indicators popularly used in previous studies, including Stochastic %K, Stochastic %D, Momentum, ROC (rate of change), LW %R (Larry Williams' %R), A/D oscillator (accumulation/distribution oscillator), OSCP (price oscillator), and CCI (commodity channel index). To confirm the superiority of CNN-FG, we compared its prediction accuracy with those of other classification models. Experimental results showed that CNN-FG outperforms LOGIT (logistic regression), ANN (artificial neural network), and SVM (support vector machine) with statistical significance. These empirical results imply that converting time series business data into graphs and building CNN-based classification models on these graphs can be effective in terms of prediction accuracy. Thus, this paper sheds light on how to apply deep learning techniques to the domain of business problem solving.
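
The described architecture can be summarized in a short Keras sketch. This is a hedged reconstruction from the abstract alone: it assumes the two filter specifications ($5 \times 5 \times 6$ and $5 \times 5 \times 9$) mean two convolution layers with 6 and 9 filters, and the input shape follows the stated 40 x 40 RGB graph images; training details are omitted.

```python
from tensorflow.keras import layers, models

# CNN-FG-like binary classifier over 40x40 RGB time series graph images.
model = models.Sequential([
    layers.Conv2D(6, (5, 5), activation="relu", input_shape=(40, 40, 3)),
    layers.Conv2D(9, (5, 5), activation="relu"),
    layers.MaxPooling2D((2, 2)),            # 2x2 max pooling
    layers.Flatten(),
    layers.Dense(900, activation="relu"),   # hidden layers: 900 and 32 nodes
    layers.Dense(32, activation="relu"),
    layers.Dense(2, activation="softmax"),  # upward vs. downward trend
])
model.compile(optimizer="adam", loss="categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```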

Construction of Event Networks from Large News Data Using Text Mining Techniques (텍스트 마이닝 기법을 적용한 뉴스 데이터에서의 사건 네트워크 구축)

  • Lee, Minchul; Kim, Hea-Jin
    • Journal of Intelligence and Information Systems, v.24 no.1, pp.183-203, 2018
  • News articles are the most suitable medium for examining events occurring at home and abroad. In particular, as the development of information and communication technology has brought various kinds of online news media, news about events occurring in society has increased greatly. Automatically summarizing key events from massive amounts of news data would therefore help users survey many events at a glance, and building an event network based on the relevance of events can greatly help readers understand current events. In this study, we propose a method for extracting event networks from large news text data. To this end, we first collected Korean political and social articles from March 2016 to March 2017 and, in preprocessing, kept only meaningful words and integrated synonyms using NPMI and Word2Vec. Latent Dirichlet allocation (LDA) topic modeling was used to calculate topic distributions by date, find peaks in those distributions, and detect events. A total of 32 topics were extracted by the topic modeling, and the occurrence time of each event was deduced from the point at which the corresponding topic distribution surged. As a result, a total of 85 events were detected, from which the final 16 events were filtered and presented using a Gaussian smoothing technique. We also calculated relevance scores between the detected events to construct the event network: using the cosine coefficient between co-occurring events, we calculated the relevance between events and connected them accordingly. Finally, we built the event network by setting each event as a vertex and the relevance score between events as the weight of the edge connecting them. The event network constructed by our method allowed us to sort out the major political and social events in Korea over the past year in chronological order and, at the same time, to identify which events are related to which. Our approach differs from existing event detection methods in that LDA topic modeling makes it possible to analyze large amounts of data easily and to identify the relevance of events that were difficult to detect with existing methods. We also applied various text mining techniques and Word2Vec in preprocessing to improve the extraction of proper nouns and compound nouns, which has been difficult in analyzing Korean texts. The proposed event detection and network construction techniques have the following advantages in practical application. First, LDA topic modeling, which is unsupervised learning, can easily extract topics, topic words, and their distributions from a huge amount of data, and by using the date information of the collected news articles, the distribution of each topic can be expressed as a time series. Second, by calculating relevance scores and constructing an event network from the co-occurrence of topics, which is difficult to grasp with existing event detection, we can present the connections between events in a summarized form. This is supported by the fact that the inter-event relevance-based event network proposed in this study was actually constructed in order of occurrence time; it is also possible to identify, through the event network, which event served as the starting point for a series of events. The limitation of this study is that LDA topic modeling produces different results depending on the initial parameters and the number of topics, and the topic and event names in the analysis results must be assigned by the subjective judgment of the researcher. Also, since each topic is assumed to be exclusive and independent, the relevance between topics is not taken into account. Subsequent studies need to calculate the relevance between events not covered in this study, or between events belonging to the same topic.
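
The LDA step at the core of the pipeline can be sketched briefly. A minimal gensim example under stated assumptions: the tokenized "articles," topic count, and parameters are toy placeholders, not the authors' corpus or settings (they extracted 32 topics from a year of news):

```python
from gensim import corpora
from gensim.models import LdaModel

# Toy tokenized articles (placeholders for the preprocessed news corpus).
docs = [["election", "party", "vote"], ["strike", "union", "wage"]] * 20

dictionary = corpora.Dictionary(docs)
bow = [dictionary.doc2bow(d) for d in docs]

lda = LdaModel(bow, num_topics=4, id2word=dictionary, random_state=1)

# Per-document topic distribution; aggregating these by article date yields
# the per-topic time series whose surges mark candidate events.
print(lda.get_document_topics(bow[0]))
```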

Product Evaluation Criteria Extraction through Online Review Analysis: Using LDA and k-Nearest Neighbor Approach (온라인 리뷰 분석을 통한 상품 평가 기준 추출: LDA 및 k-최근접 이웃 접근법을 활용하여)

  • Lee, Ji Hyeon; Jung, Sang Hyung; Kim, Jun Ho; Min, Eun Joo; Yeo, Un Yeong; Kim, Jong Woo
    • Journal of Intelligence and Information Systems, v.26 no.1, pp.97-117, 2020
  • Product evaluation criteria are indicators describing the attributes or values of products, which enable users or manufacturers to measure and understand the products. When companies analyze their products or compare them with competitors', appropriate criteria must be selected for objective evaluation. The criteria should reflect the product features consumers considered when they purchased, used, and evaluated the products. However, current evaluation criteria do not reflect how consumers' opinions differ from product to product. Previous studies tried to use online reviews from e-commerce sites, which reflect consumer opinions, to extract product features and topics and use them as evaluation criteria. However, they still produce criteria irrelevant to the products, because extracted or improper words are not refined. To overcome this limitation, this research suggests an LDA-k-NN model that extracts candidate criteria words from online reviews using LDA and refines them with k-nearest neighbors. The proposed approach starts with a preparation phase consisting of six steps. First, it collects review data from e-commerce websites. Most e-commerce websites classify the items they sell into high-level, middle-level, and low-level categories; review data for the preparation phase are gathered from each middle-level category and later combined to represent a single high-level category. Next, nouns, adjectives, adverbs, and verbs are extracted from the reviews using part-of-speech information from a morphological analysis module. After preprocessing, per-topic words are derived from the reviews with LDA, and only the nouns among the topic words are chosen as candidate criteria words. The words are then tagged according to their suitability as criteria for each middle-level category. Next, every tagged word is vectorized by a pre-trained word embedding model. Finally, a k-nearest-neighbor, case-based approach is used to classify each word against the tags. After the preparation phase, the criteria extraction phase is conducted on low-level categories. This phase starts by crawling the reviews in the corresponding low-level category. The same preprocessing as in the preparation phase is conducted using the morphological analysis module and LDA. Candidate criteria words are extracted by taking the nouns from the data and vectorizing them with the pre-trained word embedding model. Finally, evaluation criteria are extracted by refining the candidate words using the k-nearest-neighbor approach and the reference proportion of each word in the word set. To evaluate the performance of the proposed model, an experiment was conducted with reviews from '11st', one of the biggest e-commerce companies in Korea. The review data came from the 'Electronics/Digital' section, one of the high-level categories in 11st. For performance evaluation, three other models were compared with the suggested model: the actual criteria of 11st, a model that extracts nouns with the morphological analysis module and refines them by word frequency, and a model that extracts nouns from LDA topics and refines them by word frequency. The evaluation was set up to predict the evaluation criteria of 10 low-level categories with the suggested model and the 3 models above. The criteria words extracted by each model were combined into a single word set used for survey questionnaires; in the survey, respondents chose every item they considered an appropriate criterion for each category, and each model scored when chosen words had been extracted by that model. The suggested model had higher scores than the other models in 8 out of 10 low-level categories. By conducting paired t-tests on the scores of each model, we confirmed that the suggested model shows better performance in 26 of 30 tests. In addition, the suggested model was the best model in terms of accuracy. This research proposes an evaluation criteria extraction method that combines topic extraction using LDA with refinement by a k-nearest-neighbor approach, overcoming the limits of previous dictionary-based models and frequency-based refinement models. This study can contribute to improving review analysis for deriving business insights in the e-commerce market.
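
The k-NN refinement step can be sketched compactly. A minimal scikit-learn example under stated assumptions: the embedding vectors and criterion tags are random placeholders standing in for the tagged, word-embedded vocabulary of the preparation phase:

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
tagged_vecs = rng.random((200, 100))    # embeddings of tagged words
tags = rng.integers(0, 2, 200)          # 1 = plausible criterion word

knn = KNeighborsClassifier(n_neighbors=5)
knn.fit(tagged_vecs, tags)

candidate_vecs = rng.random((30, 100))  # embeddings of new candidate nouns
keep = knn.predict(candidate_vecs)      # retain words classified as criteria
print(keep)
```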

A Study on Risk Parity Asset Allocation Model with XGBoost (XGBoost를 활용한 리스크패리티 자산배분 모형에 관한 연구)

  • Kim, Younghoon; Choi, HeungSik; Kim, SunWoong
    • Journal of Intelligence and Information Systems, v.26 no.1, pp.135-149, 2020
  • Artificial intelligence is changing the world, and the financial market is no exception. Robo-advisors are actively being developed, making up for the weaknesses of traditional asset allocation methods and replacing the parts that are difficult for those methods. They make automated investment decisions with artificial intelligence algorithms and are used with various asset allocation models such as the mean-variance model, the Black-Litterman model, and the risk parity model. The risk parity model is a typical risk-based asset allocation model focused on the volatility of assets. It avoids investment risk structurally, so it offers stability in the management of large funds and has been widely used in finance. XGBoost is a parallel tree-boosting method: an optimized gradient boosting model designed to be highly efficient and flexible. It not only scales to billions of examples in limited memory environments but also learns much faster than traditional boosting methods, and it is frequently used across many fields of data analysis. In this study, we therefore propose a new asset allocation model that combines the risk parity model with the XGBoost machine learning model. The model uses XGBoost to predict the risk of assets and applies the predicted risk to the covariance estimation process. Because an optimized asset allocation model estimates investment proportions from historical data, there are estimation errors between the estimation period and the actual investment period, and these errors adversely affect the optimized portfolio's performance. This study aims to improve the stability and performance of the model by predicting the volatility of the next investment period and reducing the estimation errors of the optimized asset allocation model. As a result, it narrows the gap between theory and practice and proposes a more advanced asset allocation model. For the empirical test of the suggested model, we used Korean stock market price data covering a total of 17 years, from 2003 to 2019. The datasets comprise the energy, finance, IT, industrial, material, telecommunication, utility, consumer, health care, and staples sectors. We accumulated predictions using a moving-window method with 1,000 in-sample and 20 out-of-sample observations, producing a total of 154 rebalancing back-testing results, and analyzed portfolio performance in terms of cumulative rate of return over this long period. Compared with the traditional risk parity model, the experiment recorded improvements in both cumulative return and reduction of estimation errors. The total cumulative return is 45.748%, about 5% higher than that of the risk parity model, and the estimation errors are reduced in 9 out of 10 industry sectors. The reduction of estimation errors increases the stability of the model and makes it easier to apply in practical investment. Many financial and asset allocation models are limited in practical investment by the fundamental question of whether the past characteristics of assets will persist into the future in a changing financial market. This study, however, not only takes advantage of traditional asset allocation models but also supplements their limitations and increases stability by predicting the risks of assets with a state-of-the-art algorithm. There are various studies on parametric estimation methods for reducing estimation errors in portfolio optimization; we suggest a new method that reduces the estimation errors of an optimized asset allocation model using machine learning. This study is therefore meaningful in that it proposes an advanced artificial intelligence asset allocation model for fast-developing financial markets.
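
The core idea of feeding XGBoost risk forecasts into the allocation can be sketched. A simplified Python example under stated assumptions: random placeholder features stand in for the paper's inputs, and inverse-volatility weighting stands in for the paper's covariance-based risk parity optimization:

```python
import numpy as np
import xgboost as xgb

rng = np.random.default_rng(1)
X = rng.normal(size=(1000, 8))      # placeholder lagged return/vol features
y = np.abs(rng.normal(size=1000))   # placeholder realized next-period vols

# Predict next-period volatility per asset with gradient-boosted trees.
model = xgb.XGBRegressor(n_estimators=200, max_depth=3, learning_rate=0.05)
model.fit(X, y)

pred_vol = model.predict(X[-10:])                # 10 assets' predicted risk
weights = (1 / pred_vol) / (1 / pred_vol).sum()  # inverse-volatility weights
print(weights)
```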

Developmental Plans and Research on Private Security in Korea (한국 민간경비 실태 및 발전방안)

  • Kim, Tea-Hwan; Park, Ok-Cheol
    • Korean Security Journal, no.9, pp.69-98, 2005
  • The security industry for civilians (private security) was first introduced to Korea via the US Army's security system in the early 1960s. Shortly afterwards, official police laws were enforced in 1973, and private security began to develop in earnest with the passing of the 'service security industry' law in 1976. Korea's private security industry grew rapidly in the 1980s with the support of foreign funds and products, and there are now thought to be approximately 2,000 private security enterprises operating in Korea. Nowadays, however, the majority of these enterprises experience difficulties such as lack of funds, insufficient management, and lack of control over employees; as a result, it seems difficult for some of them to avoid low output and bankruptcy. Such enterprises often deal with these problems illegally, for example through excessive dumping or by hiring inappropriate employees who lack the right skills or qualifications for the job. The root problem is that it is too easy to make inroads into the private security market. All these factors inhibit market growth and impede qualitative development. For these reasons, I researched this area, analyzing and criticizing the present condition of Korean private security and presenting a possible development plan by referring to cases from the US and Japan. My research method was to investigate related documentary records and articles and to interview people for the necessary evidence. The theoretical study involved examining books and dissertations published inside and outside the country, the complete collection of laws and regulations, internet data, various study reports, and the documentary records and statistical data of institutions such as the National Police Office, the judicial training institute, and private security enterprises. In addition, I consulted professionals in charge of practical affairs in the field to overcome the limitations of documentary records. I tried to get a firm grasp of the problems and difficulties that people in these enterprises experience, which I thought would be done most effectively by interviewing the workers: how do they feel in their workplaces, and what elements impede development? I also interviewed police officers in charge of supervising private security enterprises, to identify the problems and differences in opinion between the domestic private security service and the police. From this investigation and research I pinpoint the major problems of private security and present a development plan. First, the private police law and the private security service law should be unified. Second, it is essential to introduce a 'specialty certificate' system to improve the quality of private security services. Third, a new private security market must be opened up by improving the old system. Fourth, the competitive power of security service enterprises must be built up on the basis of efficient management. Fifth, a special marketing strategy is needed to hold customers. Sixth, positive research based on theoretical studies is needed. Seventh, consistent training matched to effective market demand is needed. Eighth, an interrelationship with the police department must be maintained. Ninth, the system of the Korean private security service association must be reinforced. Tenth, a private security laboratory must be established. Based on these suggestions, private security services should improve.


Studies on Ripening Physiology of Rice plant. -I Difference in Ripening Structure between Jinheung and IR667 (수도(水稻)의 등숙생리(登熟生理)에 관(關)한 연구(硏究) -I 진흥(振興)과 IR667의 등숙구조비교(登熟構造比較))

  • Kwon, Hang Gwang; Park, Hoon
    • Korean Journal of Soil Science and Fertilizer, v.5 no.2, pp.65-74, 1972
  • A local rice variety, Jinheung, and the newly bred IR667-Suwon 214 were grown in 5 m² concrete pots with two spacings and two nitrogen levels, and their ripening structure and its function were comparatively investigated to elucidate the causes of the unusually low ripened-grain ratio of IR667 lines. The following differences between the two varieties were found. 1. Though IR667 had a much lower ripened-grain ratio (64%) than Jinheung (85%), the grain yield of IR667 (790 kg/10a) was higher than that of Jinheung (760 kg/10a). 2. The number of ripened grains per net assimilation rate (NAR) at 10 days after heading was a little higher in IR667 (6,490) than in Jinheung (6,360), consistent with the lower grain weight of IR667 ($29.9 \times 10^{-3}$ g versus $31.2 \times 10^{-3}$ g for Jinheung). But the number of total grains per NAR was much higher in IR667 (10,530) than in Jinheung (7,290), indicating a probable cause of the low ripened-grain ratio of IR667. 3. The extinction coefficient (K) was 0.115 in IR667 and 0.200 in Jinheung; thus IR667 could construct a greater ripening structure per unit area. 4. The number of grains per LAI decreased with increasing LAI at heading, and the rate of decrease was similar for both IR667 and Jinheung. 5. The critical leaf area index, at which the crop growth rate (CGR) is maximal, was 6.5 for IR667 and 5.2 for Jinheung. Below an LAI of 5.2, the net assimilation rate was always higher in Jinheung throughout the growing season. 6. The estimated optimum leaf area index for maximum grain yield was 7.4 for IR667 and 6.2 for Jinheung at 10 days after heading. However, the actual leaf area index was 6.2 for IR667 and 4.7 for Jinheung, both below even the critical leaf area index. 7. The decrease of LAI during the ripening period was greater in IR667, but photosynthesis per m² decreased more rapidly in Jinheung. 8. The net assimilation rate (NAR) decreased with increasing LAI at any time during the ripening period, and its rate of decrease with increasing LAI was greater in IR667 as ripening progressed. The greater decrease of NAR in IR667 seemed attributable to low photosynthetic activity and high respiratory loss due to its requirement for a higher optimum ripening temperature. 9. The grain yield versus ripened-grain ratio curve showed a smaller contribution of post-heading dry matter to grain yield in IR667 than in Jinheung, owing to an unfavorable ripening environment (especially air temperature), indicating that the yield of IR667 could be increased most effectively through improvement of the ripening environment.
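
For context on point 3: in crop physiology, the canopy extinction coefficient K is conventionally defined by the Monsi-Saeki light-extinction relation (a standard formula, not restated in the abstract):

```latex
% Light intensity I beneath cumulative leaf area index LAI, given intensity
% I_0 at the top of the canopy and extinction coefficient K:
I = I_0 \, e^{-K \cdot \mathrm{LAI}}
```

Under this relation, the smaller K of IR667 (0.115 versus 0.200 for Jinheung) means light penetrates deeper into the canopy, which is why IR667 can support a greater ripening structure per unit area.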


The Comparative Studies on Hatched Silkworm Dominance Separation against Sex Separation to meet Silk Promotion (잠견생산성 개선을 위한 의잠우열분리와 자웅분리의 비교연구)

  • Choe, Byong-Hee
    • Journal of Sericultural and Entomological Science, v.15 no.2, pp.21-28, 1973
  • This report was prepared to improve cocoon characteristics for use as silk reeling material. It is easily understandable that commercial silkworm populations are non-uniform, composed of a superior group and an inferior group. If these groups could be separated by some method, it would be a great contribution to cocoon production. For comparison, silkworm sex separation was also carried out, because male silkworms produce more silk than females. The author has developed a new chemical reagent for separating the superior and inferior groups of commercial silkworms, which he has named Better Hybrid Controller (BHC). The comparative results obtained are summarized as follows. 1. Basic investigation of BHC application. a) When BHC was applied to hybrid worms and a pure line, the hybrids started to take mulberry leaves earlier than the pure-line variety. b) The distribution of mulberry-adoption intervals of pure-line worms after BHC application was U-shaped, while that of hybrid worms was L-shaped or followed a Poisson distribution. c) The longer the period of BHC application, the flatter the resulting distribution. d) When silkworms were separated into two groups by BHC application, the group that took to mulberry earlier appeared stronger than the other, with a group ratio of 2:1. 2. Comparison of sex separation with Better Hybrid Controller (BHC) separation. a) The cocoon shell percentage of the male group was better than that of the female group, but only by 0.4%. b) The cocoon shell percentage of the superior group separated by BHC was 0.7% higher than that of the inferior group. c) The average cocoon shell percentage of BHC-treated cocoons was much higher than that of the control group, an increase of 1.6~2.4%; even the inferior group showed a better result than the control. d) This unexpected result suggests that BHC application activates something in silkworm physiology. e) On the other hand, the sex-separated groups and the male group did not give a desirable result as far as cocoon shell percentage is concerned. f) However, when the male group was reeled into silk, it showed a much better silk yield (silk percentage of cocoon) than the female group, with about one percent difference between the sexes; this was brought about by the superior silk yield from the cocoon shell, as high as 87.4%. g) On the other hand, the male group showed the lowest non-breaking reelable ratio (63%) of all groups compared. h) Comparing the cocoon qualities of the sex-separated and BHC-separated groups against the control, there was no qualitative change, but the BHC group showed a quantitative improvement in cocoon bave length of about one hundred meters. i) In calculating the productive income of cocoon production, the BHC-treated group showed about ten percent more income than the control. The sex-separated group, however, showed a rather negative result, because the male cocoons gave a poor weight per box of eggs, which the increase in silk yield could not cover. j) Thus, BHC application at the hatched-worm stage brought about a large improvement in cocoon production. k) The BHC method may be used either for separation purposes or for the quantitative improvement of whole silkworm lots. 3. Rearing only male silkworms did not show desirable productivity, so there is no reason to carry it out at the hatched-worm stage.
