• Title/Summary/Keyword: Fitness Selection


Establishment of Korean Native Chicken Auto-Sexing Lines Using Sex-Linked Feathering Gene (한국토종닭의 깃털 발육성 반성 유전자를 이용한 자가성감별 계통 조성)

  • Kwon, Jae Hyun; Choi, Eun Sik; Sohn, Sea Hwan
    • Korean Journal of Poultry Science / v.48 no.1 / pp.41-50 / 2021
  • Although feather-sexing using sex-linked genes related to feather development is a widely used chick sexing method in the poultry industry, the feather-sexing method has yet to be used for Korean native chickens (KNCs). The purpose of this study was to construct a KNC feather-sexing line using the early-feathering (EF) and late-feathering (LF) genes for industrial application. Using 557 reddish-brown KNCs as the basal flock, the frequencies of the EF (k) and LF (K) genes were estimated to be 0.814 and 0.186, respectively. This indicates that it would be feasible to construct a feather-sexing line from this chicken group, and we accordingly constructed an EF paternal line and an LF maternal line. On the basis of test-crosses for the selection of LF homozygous (KK) males in the maternal line, we confirmed that three of 40 chickens were homozygous males. The survival rate, body weight, days at first egg-laying, hen-day egg production, and egg weight were analyzed to compare the production performance of EF and LF chickens. The results revealed that EF chickens were characterized by a superior survival rate, whereas LF chickens were superior in terms of egg production rate. No differences between LF and EF chickens were detected with respect to the other production performance parameters. In addition, assessment of the fitness of sexed chicks produced in the established KNC feather-sexing lines revealed a sexing accuracy of 98.6%. Collectively, these findings indicate the feasibility of constructing effective KNC feather-sexing lines with potential industrial application.
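
The sexing scheme rests on Z-linked inheritance: chickens are ZZ (male) or ZW (female), and the dominant late-feathering allele K sits on the Z chromosome. The toy simulation below sketches why crossing EF sires (k/k) with LF dams (K/W) sexes chicks at hatch; the function and its names are illustrative assumptions, not code from the paper.

```python
# A minimal sketch of the sex-linked cross behind feather-sexing.
# Males are ZZ, females are ZW; K (late feathering, dominant) and
# k (early feathering) are carried on the Z chromosome.
import random

def cross(sire_z_alleles, dam_z_allele, n=10):
    """Mate an EF sire (k/k) with an LF dam (K/W) and phenotype the chicks."""
    chicks = []
    for _ in range(n):
        paternal_z = random.choice(sire_z_alleles)     # sire always passes a Z
        maternal = random.choice([dam_z_allele, "W"])  # dam passes her Z or her W
        if maternal == "W":
            sex, genotype = "female", (paternal_z,)
        else:
            sex, genotype = "male", (paternal_z, maternal)
        phenotype = "LF" if "K" in genotype else "EF"
        chicks.append((sex, phenotype))
    return chicks

# EF paternal line (k/k) x LF maternal line (K/W): every male chick is LF
# and every female chick is EF, so feather development at hatch reveals sex.
for sex, feathering in cross(("k", "k"), "K"):
    print(sex, feathering)
```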

Simultaneous Optimization of KNN Ensemble Model for Bankruptcy Prediction (부도예측을 위한 KNN 앙상블 모형의 동시 최적화)

  • Min, Sung-Hwan
    • Journal of Intelligence and Information Systems / v.22 no.1 / pp.139-157 / 2016
  • Bankruptcy involves considerable costs, so it can have significant effects on a country's economy. Thus, bankruptcy prediction is an important issue. Over the past several decades, many researchers have addressed topics associated with bankruptcy prediction. Early research on bankruptcy prediction employed conventional statistical methods such as univariate analysis, discriminant analysis, multiple regression, and logistic regression. Later on, many studies began utilizing artificial intelligence techniques such as inductive learning, neural networks, and case-based reasoning. Currently, ensemble models are being utilized to enhance the accuracy of bankruptcy prediction. Ensemble classification involves combining multiple classifiers to obtain more accurate predictions than those obtained using individual models. Ensemble learning techniques are known to be very useful for improving the generalization ability of a classifier. Base classifiers in the ensemble must be as accurate and diverse as possible in order to enhance the generalization ability of an ensemble model. Commonly used methods for constructing ensemble classifiers include bagging, boosting, and random subspace. The random subspace method selects a random feature subset for each classifier from the original feature space to diversify the base classifiers of an ensemble. Each ensemble member is trained on a randomly chosen feature subspace from the original feature set, and the predictions of the ensemble members are combined by an aggregation method. The k-nearest neighbors (KNN) classifier is robust with respect to variations in the dataset but is very sensitive to changes in the feature space. For this reason, KNN is a good classifier for the random subspace method. The KNN random subspace ensemble model has been shown to be very effective in improving an individual KNN model. The k parameters of the KNN base classifiers and the feature subsets selected for the base classifiers play an important role in determining the performance of the KNN ensemble model. However, few studies have focused on optimizing the k parameters and feature subsets of the base classifiers in the ensemble. This study proposed a new ensemble method that improves the performance of the KNN ensemble model by optimizing both the k parameters and the feature subsets of the base classifiers. A genetic algorithm was used to optimize the KNN ensemble model and improve the prediction accuracy of the ensemble model. The proposed model was applied to a bankruptcy prediction problem using a real dataset from Korean companies. The research data included 1800 externally non-audited firms that filed for bankruptcy (900 cases) or non-bankruptcy (900 cases). Initially, the dataset consisted of 134 financial ratios. Prior to the experiments, 75 financial ratios were selected based on an independent-sample t-test of each financial ratio as an input variable and bankruptcy or non-bankruptcy as the output variable. Of these, 24 financial ratios were selected using a logistic regression backward feature selection method. The complete dataset was separated into two parts: training and validation. The training dataset was further divided into two portions: one for training the model and the other for guarding against overfitting. The prediction accuracy on the latter portion was used as the fitness value in order to avoid overfitting. The validation dataset was used to evaluate the effectiveness of the final model.
A 10-fold cross-validation was implemented to compare the performances of the proposed model and other models. To evaluate the effectiveness of the proposed model, the classification accuracy of the proposed model was compared with that of other models. The Q-statistic values and average classification accuracies of base classifiers were investigated. The experimental results showed that the proposed model outperformed other models, such as the single model and random subspace ensemble model.
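
As a concrete sketch of the idea, the fragment below evolves per-classifier k values and feature masks for a KNN random-subspace ensemble with a bare-bones genetic loop. The dataset, population size, mutation rates, and the mutation-only loop are illustrative assumptions; the paper's GA (which also uses crossover) and its 24-ratio Korean firm dataset are not reproduced here.

```python
# A hedged sketch: jointly optimizing each KNN base classifier's k and
# feature subset with a simple evolutionary loop (mutation-only for brevity).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=600, n_features=24, random_state=0)
# Split as in the paper: a fitting portion, a hold-out whose accuracy serves
# as the GA fitness (guarding against overfitting), and a final validation set.
X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.3, random_state=0)
X_fit, X_hold, y_fit, y_hold = train_test_split(X_tr, y_tr, test_size=0.3, random_state=0)

N_BASE, N_FEAT, K_CHOICES = 10, X.shape[1], (1, 3, 5, 7, 9)

def random_genome():
    # One gene block per base classifier: a boolean feature mask plus an index into K_CHOICES.
    return [(rng.random(N_FEAT) < 0.5, rng.integers(len(K_CHOICES))) for _ in range(N_BASE)]

def ensemble_predict(genome, X_train, y_train, X_eval):
    votes = []
    for mask, k_idx in genome:
        clf = KNeighborsClassifier(n_neighbors=K_CHOICES[k_idx])
        clf.fit(X_train[:, mask], y_train)
        votes.append(clf.predict(X_eval[:, mask]))
    return (np.mean(votes, axis=0) > 0.5).astype(int)   # majority vote

def fitness(genome):
    if any(mask.sum() == 0 for mask, _ in genome):
        return 0.0  # degenerate member: no features selected
    return (ensemble_predict(genome, X_fit, y_fit, X_hold) == y_hold).mean()

def mutate(genome):
    child = []
    for mask, k_idx in genome:
        mask = mask.copy()
        flip = rng.random(N_FEAT) < 0.05        # flip a few feature bits
        mask[flip] = ~mask[flip]
        if rng.random() < 0.2:                  # occasionally re-draw k
            k_idx = rng.integers(len(K_CHOICES))
        child.append((mask, k_idx))
    return child

population = [random_genome() for _ in range(20)]
for generation in range(15):
    population.sort(key=fitness, reverse=True)           # elitist selection
    population = population[:10] + [mutate(p) for p in population[:10]]

best = max(population, key=fitness)
pred = ensemble_predict(best, X_fit, y_fit, X_val)
print("validation accuracy of best ensemble:", (pred == y_val).mean())
```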

Optimal Selection of Classifier Ensemble Using Genetic Algorithms (유전자 알고리즘을 이용한 분류자 앙상블의 최적 선택)

  • Kim, Myung-Jong
    • Journal of Intelligence and Information Systems / v.16 no.4 / pp.99-112 / 2010
  • Ensemble learning is a method for improving the performance of classification and prediction algorithms. It is a method for finding a highly accurate classifier on the training set by constructing and combining an ensemble of weak classifiers, each of which needs only to be moderately accurate on the training set. Ensemble learning has received considerable attention from the machine learning and artificial intelligence fields because of its remarkable performance improvement and flexible integration with traditional learning algorithms such as decision trees (DT), neural networks (NN), and SVM. In this research stream, DT ensemble studies have demonstrated impressive improvements in the generalization behavior of DT, while NN and SVM ensemble studies have not shown performance improvements as remarkable as those of DT ensembles. Recently, several works have reported that ensemble performance can be degraded when the multiple classifiers of an ensemble are highly correlated with one another; the resulting multicollinearity problem leads to performance degradation of the ensemble. They have also proposed differentiated learning strategies to cope with this performance degradation problem. Hansen and Salamon (1990) insisted that it is necessary and sufficient for the performance enhancement of an ensemble that the ensemble contain diverse classifiers. Breiman (1996) found that ensemble learning can increase the performance of unstable learning algorithms, but does not show remarkable performance improvement on stable learning algorithms. Unstable learning algorithms such as decision tree learners are sensitive to changes in the training data, and thus small changes in the training data can yield large changes in the generated classifiers. Therefore, an ensemble of unstable learning algorithms can guarantee some diversity among the classifiers. On the contrary, stable learning algorithms such as NN and SVM generate similar classifiers in spite of small changes in the training data, and thus the correlation among the resulting classifiers is very high. This high correlation results in a multicollinearity problem, which leads to performance degradation of the ensemble. Kim's work (2009) presented a performance comparison in bankruptcy prediction for Korean firms using traditional prediction algorithms such as NN, DT, and SVM. It reports that stable learning algorithms such as NN and SVM have higher predictability than the unstable DT. Meanwhile, with respect to their ensemble learning, the DT ensemble shows more improved performance than the NN and SVM ensembles. Further analysis with variance inflation factor (VIF) analysis empirically proves that the performance degradation of an ensemble is due to the multicollinearity problem. It also proposes that optimization of the ensemble is needed to cope with such a problem. This paper proposes a hybrid system for coverage optimization of NN ensembles (CO-NN) in order to improve the performance of NN ensembles. Coverage optimization is a technique of choosing a sub-ensemble from an original ensemble so as to guarantee the diversity of classifiers in the coverage optimization process. CO-NN uses a GA, which has been widely used for various optimization problems, to deal with the coverage optimization problem. The GA chromosomes for the coverage optimization are encoded into binary strings, each bit of which indicates an individual classifier.
The fitness function is defined as the maximization of error reduction, and a constraint on the variance inflation factor (VIF), one of the generally used measures of multicollinearity, is added to ensure the diversity of classifiers by removing high correlation among them. We use Microsoft Excel and the GA software package called Evolver. Experiments on company failure prediction have shown that CO-NN achieves stable performance enhancement of NN ensembles by choosing classifiers with the correlations within the ensemble taken into account. Classifiers with a potential multicollinearity problem are removed by the coverage optimization process of CO-NN, and CO-NN has thereby shown higher performance than a single NN classifier and the NN ensemble at the 1% significance level, and than the DT ensemble at the 5% significance level. However, there remain further research issues. First, a decision optimization process to find the optimal combination function should be considered in further research. Second, various learning strategies to deal with data noise should be introduced in more advanced future research.
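
A hedged sketch of the coverage-optimization step follows. The paper evolves the sub-ensemble with a GA (via the Evolver package); since the point here is the VIF constraint, this sketch simply enumerates the sub-ensembles of a small NN pool, which is tractable for 8 members. The pool construction, the VIF cutoff of 10, and the scoring split are my assumptions, not the authors' settings.

```python
# A minimal sketch of VIF-constrained coverage optimization of an NN ensemble:
# pick the sub-ensemble with the best majority-vote accuracy whose member
# outputs are not too collinear (max VIF <= 10).
from itertools import combinations
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=500, n_features=20, random_state=1)
X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.3, random_state=1)

# A pool of NN base classifiers, diversified only by their random seeds.
pool = [MLPClassifier(hidden_layer_sizes=(8,), max_iter=500, random_state=s).fit(X_tr, y_tr)
        for s in range(8)]
outputs = np.column_stack([m.predict_proba(X_val)[:, 1] for m in pool])

def max_vif(cols):
    """Largest variance inflation factor among the chosen classifier outputs."""
    vifs = []
    for i in range(cols.shape[1]):
        others = np.column_stack([np.delete(cols, i, axis=1), np.ones(len(cols))])
        coef = np.linalg.lstsq(others, cols[:, i], rcond=None)[0]
        resid = cols[:, i] - others @ coef
        r2 = 1 - resid.var() / cols[:, i].var()
        vifs.append(1 / max(1 - r2, 1e-9))
    return max(vifs)

best_acc, best_subset = 0.0, None
for size in range(2, len(pool) + 1):
    for subset in combinations(range(len(pool)), size):
        cols = outputs[:, subset]
        if max_vif(cols) > 10:       # reject highly collinear sub-ensembles
            continue
        acc = ((cols.mean(axis=1) > 0.5).astype(int) == y_val).mean()
        if acc > best_acc:
            best_acc, best_subset = acc, subset

print("selected classifiers:", best_subset, "validation accuracy:", best_acc)
```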

A Study on Users' Resistance toward ERP in the Pre-adoption Context (ERP 도입 전 구성원의 저항)

  • Park, Jae-Sung; Cho, Yong-Soo; Koh, Joon
    • Asia Pacific Journal of Information Systems / v.19 no.4 / pp.77-100 / 2009
  • Information Systems (IS) are an essential tool for any organization. The last decade has seen an increasing body of knowledge on IS usage. Yet, IS often fail because of misuse or non-use. In general, decisions regarding the selection of a system, which involve the evaluation of many IS vendors and an enormous initial investment, are made not through the consensus of employees but through top-down decision making by top managers. In situations where the selected system does not satisfy the needs of the employees, the forced use of the selected IS will only result in their resistance to it. Many organizations have been either integrating legacy systems dispersed like an archipelago or adopting a new ERP (Enterprise Resource Planning) system to enhance employee efficiency. This study examines user resistance prior to the adoption of the selected IS or ERP system. As such, this study identifies the importance of managing organizational resistance that may appear in the pre-adoption context of an integrated IS or ERP system, explores key factors influencing user resistance, and investigates how prior experience with other integrated IS or ERP systems may change the relationship between the affecting factors and user resistance. This study focuses on organizational members' resistance and the affecting factors in the pre-adoption context of an integrated IS or ERP system rather than in the context of ERP adoption itself or ERP post-adoption. Based on prior literature, this study proposes a research model that considers six key variables: perceived benefit, system complexity, fitness with existing tasks, attitude toward change, the psychological reactance trait, and perceived IT competence. They are considered as independent variables affecting user resistance toward an integrated IS or ERP system. This study also introduces the concept of prior experience (i.e., whether a user has prior experience with an integrated IS or ERP system) as a moderating variable to examine the impact of perceived benefit and attitude toward change on user resistance. As such, we propose eight hypotheses with respect to the model. For the empirical validation of the hypotheses, we developed relevant instruments for each research variable based on prior literature and surveyed 95 professional researchers and administrative staff of the Korea Photonics Technology Institute (KOPTI). We examined the organizational characteristics of KOPTI, the reasons behind its adoption of an ERP system, process changes caused by the introduction of the system, and employees' resistance/attitude toward the system at the time of the introduction. The results of the multiple regression analysis suggest that, among the six variables, perceived benefit, complexity, attitude toward change, and the psychological reactance trait significantly influence user resistance. These results further suggest that top management should manage the psychological states of their employees in order to minimize their resistance to the forced IS, even in the new system pre-adoption context. In addition, the moderating variable, prior experience, was found to change the strength of the relationship between attitude toward change and system resistance. That is, the effect of attitude toward change on user resistance was significantly stronger in those with prior experience than in those without.
This result implies that those with prior experience should be identified and provided with some type of attitude training or change management program to minimize their resistance to the adoption of a system. This study contributes to the IS field by providing practical implications for IS practitioners. It identifies the system resistance stimuli of users, focusing on the pre-adoption context in a forced ERP system environment. We have empirically validated the proposed research model by examining several significant factors affecting user resistance against the adoption of an ERP system. In particular, we find a clear and significant role of the moderating variable, prior ERP usage experience, in the relationship between the affecting factors and user resistance. The results of the study suggest the importance of appropriately managing the factors that affect user resistance in organizations that plan to introduce a new ERP system or integrate legacy systems. Moreover, this study offers practitioners several specific strategies (in particular, the categorization of users by their prior usage experience) for alleviating the resistant behaviors of users in the process of ERP adoption before a system becomes available to them. Despite the valuable contributions of this study, some limitations remain, which are discussed in this paper to make the study more complete and consistent.
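
For readers who want to see the moderation test in code, the sketch below fits the kind of regression the study reports, with an attitude-by-prior-experience interaction term. All data and variable names are synthetic stand-ins, not the KOPTI survey; only the sample size of 95 echoes the abstract.

```python
# A hedged sketch of moderated multiple regression: does prior ERP experience
# strengthen the effect of attitude toward change on user resistance?
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(7)
n = 95  # the study surveyed 95 KOPTI employees
df = pd.DataFrame({
    "benefit":    rng.normal(size=n),
    "complexity": rng.normal(size=n),
    "attitude":   rng.normal(size=n),
    "reactance":  rng.normal(size=n),
    "prior_exp":  rng.integers(0, 2, size=n),  # 1 = prior integrated-IS/ERP experience
})
# Simulated outcome in which prior experience amplifies the attitude effect.
df["resistance"] = (-0.4 * df.benefit + 0.3 * df.complexity + 0.5 * df.reactance
                    + (0.2 + 0.4 * df.prior_exp) * df.attitude
                    + rng.normal(scale=0.5, size=n))

# 'attitude * prior_exp' expands to both main effects plus their interaction;
# a significant interaction coefficient indicates moderation.
model = smf.ols("resistance ~ benefit + complexity + reactance + attitude * prior_exp",
                data=df).fit()
print(model.summary())
```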

A hybrid algorithm for the synthesis of computer-generated holograms

  • Nguyen The Anh; An Jun Won; Choe Jae Gwang; Kim Nam
    • Proceedings of the Optical Society of Korea Conference / 2003.07a / pp.60-61 / 2003
  • A new approach to reduce the computation time of the genetic algorithm (GA) for making binary phase holograms is described. Synthesized holograms having a diffraction efficiency of 75.8% and a uniformity of 5.8% are proven in computer simulation and experimentally demonstrated. Recently, computer-generated holograms (CGHs) having high diffraction efficiency and flexibility of design have been widely developed for many applications such as optical information processing, optical computing, optical interconnection, etc. Among the proposed optimization methods, the GA has become popular due to its capability of reaching nearly global optima. However, there exists a drawback to consider when we use the genetic algorithm: the large amount of computation time needed to construct the desired holograms. One of the major reasons that the GA's operation may be time intensive is the expense of computing the cost function, which must Fourier transform the parameters encoded on the hologram into the fitness value. In trying to remedy this drawback, the Artificial Neural Network (ANN) has been put forward, allowing CGHs to be created easily and quickly [1], but the quality of the reconstructed images is not high enough for applications requiring high precision. Therefore, we attempt to find a new approach that combines the good properties and performance of both the GA and the ANN to make CGHs of high diffraction efficiency in a short time. The optimization of a CGH using the genetic algorithm is essentially a process of iteration, including selection, crossover, and mutation operators [2]. It is worth noting that the evaluation of the cost function, with the aim of selecting better holograms, plays an important role in the implementation of the GA. However, this evaluation process consumes much time Fourier-transforming the parameters encoded on the hologram into the fitness value. Depending on the speed of the computer, this process can even last up to ten minutes. It would be more effective if, instead of merely generating random holograms in the initial process, a set of approximately desired holograms were employed. By doing so, the initial population will contain fewer random trial holograms, which is equivalent to reducing the GA's computation time. Accordingly, a hybrid algorithm that utilizes a trained neural network to initiate the GA's procedure is proposed. Consequently, the initial population contains fewer random holograms and is complemented by approximately desired holograms. Figure 1 is the flowchart of the hybrid algorithm in comparison with the classical GA. The procedure of synthesizing a hologram on a computer is divided into two steps. First, the simulation of holograms based on the ANN method [1] to acquire approximately desired holograms is carried out. With a teaching data set of 9 characters obtained from the classical GA, 3 layers, 100 hidden nodes, a learning rate of 0.3, and a momentum of 0.5, the trained artificial neural network enables us to attain the approximately desired holograms, which are in fairly good agreement with what we suggested in the theory. In the second step, the effect of several parameters on the operation of the hybrid algorithm is investigated. In principle, the operation of the hybrid algorithm and the GA are the same except for the modification of the initial step. Hence, the verified results in Ref. [2] for parameters such as the probability of crossover and mutation, the tournament size, and the crossover block size remain unchanged, apart from the reduced population size.
The reconstructed image of 76.4% diffraction efficiency and 5.4% uniformity is achieved when the population size is 30, the iteration number is 2000, the probability of crossover is 0.75, and the probability of mutation is 0.001. A comparison between the hybrid algorithm and the GA in terms of diffraction efficiency and computation time is also evaluated, as shown in Fig. 2. With a 66.7% reduction in computation time and a 2% increase in diffraction efficiency compared to the GA method, the hybrid algorithm demonstrates its efficient performance. In the optical experiment, the phase holograms were displayed on a programmable phase modulator (model XGA). Figure 3 shows pictures of the diffracted patterns of the letter "0" from the holograms generated using the hybrid algorithm. A diffraction efficiency of 75.8% and a uniformity of 5.8% are measured. We see that the simulation and experiment results are in fairly good agreement with each other. In this paper, the Genetic Algorithm and the Neural Network have been successfully combined in designing CGHs. This method gives a significant reduction in computation time compared to the GA method while still allowing holograms of high diffraction efficiency and uniformity to be achieved. This work was supported by grant No. M01-2001-000-00324-0 (2002) from the Korea Science & Engineering Foundation.
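
The bottleneck the authors describe, Fourier-transforming each candidate hologram to score it, can be made concrete with a small sketch. The definitions of diffraction efficiency and uniformity below are common textbook choices and the target pattern is a toy; the paper may compute both figures differently.

```python
# A minimal sketch of the FFT-based fitness evaluation for a binary phase CGH.
import numpy as np

N = 64
target = np.zeros((N, N))
target[24:40, 24:40] = 1.0                        # a toy "signal window" target

def fitness(hologram_bits):
    """hologram_bits: N x N array of 0/1, mapped to phases 0 or pi."""
    field = np.exp(1j * np.pi * hologram_bits)    # binary phase modulation
    recon = np.abs(np.fft.fftshift(np.fft.fft2(field))) ** 2
    signal = recon[target > 0]
    efficiency = signal.sum() / recon.sum()       # power landing in the signal window
    uniformity = (signal.max() - signal.min()) / (signal.max() + signal.min())
    return efficiency - uniformity                # reward bright, even reconstructions

# Each GA generation must run this FFT-based evaluation for every candidate
# hologram, which is why seeding the population with ANN-generated holograms
# (instead of purely random ones) shortens the search.
population = [np.random.randint(0, 2, (N, N)) for _ in range(10)]
print("best initial fitness:", max(fitness(h) for h in population))
```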


Mature Market Sub-segmentation and Its Evaluation by the Degree of Homogeneity (동질도 평가를 통한 실버세대 세분군 분류 및 평가)

  • Bae, Jae-ho
    • Journal of Distribution Science / v.8 no.3 / pp.27-35 / 2010
  • As the population, buying power, and intensity of self-expression of the elderly generation increase, its importance as a market segment is also growing. Therefore, the mass marketing strategy for the elderly generation must be changed to a micro-marketing strategy based on the results of sub-segmentation that suitably captures the characteristics of this generation. Furthermore, as a customer access strategy is decided by sub-segmentation, proper segmentation is one of the key success factors for micro-marketing. Segments or sub-segments are different from sectors, because segmentation or sub-segmentation for micro-marketing is based on the homogeneity of customer needs. Theoretically, complete segmentation would reveal a single voice. However, it is impossible to achieve complete segmentation because of economic factors, factors that affect effectiveness, etc. To obtain a single voice from a segment, we sometimes need to divide it into many individual cases. In such a case, there would be many segments to deal with. On the other hand, to maximize market access performance, fewer segments are preferred. In this paper, we use the term "sub-segmentation" instead of "segmentation," because we divide a specific segment into more detailed segments. To sub-segment the elderly generation, this paper takes their lifestyles and life stages into consideration. In order to reflect these aspects, various surveys and several rounds of expert interviews and focus group interviews (FGIs) were performed. Using the results of these qualitative surveys, we define six sub-segments of the elderly generation. This paper uses five rules to divide the elderly generation: (1) mutually exclusive and collectively exhaustive (MECE) sub-segmentation, (2) important life stages, (3) notable lifestyles, (4) a minimum number of easily classifiable sub-segments, and (5) significant differences in voices among the sub-segments. The most critical point for dividing the elderly market is whether children are married. The other points are source of income, gender, and occupation. In this paper, the elderly market is divided into six sub-segments. As mentioned, the number of sub-segments is a key point for a successful marketing approach. Too many sub-segments would lead to narrow substantiality or a lack of actionability. On the other hand, too few sub-segments would have no effect. Therefore, the creation of the optimum number of sub-segments is a critical problem faced by marketers. This paper presents a method of evaluating the fitness of sub-segments that was deduced from the preceding surveys. The presented method uses the degree of homogeneity (DoH) to measure the adequacy of sub-segments, calculated from quantitative survey questions. The ratio of significantly homogeneous questions to the total number of survey questions indicates the DoH. A significantly homogeneous question is defined as a question in which one case is selected significantly more often than the others. To show whether a case is selected significantly more often than the others, we use a hypothesis test. In this case, the null hypothesis (H0) is that there is no significant difference between the selection of one case and that of the others. Thus, the total number of significantly homogeneous questions is the total number of cases in which the null hypothesis is rejected.
To calculate the DoH, we conducted a quantitative survey (total sample size of 400; 60 questions; 4~5 cases for each question). The first sub-segment (no unmarried offspring, earns a living independently) has a sample size of 113. The second sub-segment (no unmarried offspring, economically supported by its offspring) has a sample size of 57. The third sub-segment (unmarried offspring, male, employed) has a sample size of 70. The fourth sub-segment (unmarried offspring, male, not employed) has a sample size of 45. The fifth sub-segment (unmarried offspring, female, employed, either the female herself or her husband) has a sample size of 63. The last sub-segment (unmarried offspring, female, not employed, not even the husband) has a sample size of 52. Statistically, the sample size of each sub-segment is sufficiently large; therefore, we use the z-test for testing hypotheses. When the significance level is 0.05, the DoHs of the six sub-segments are 1.00, 0.95, 0.95, 0.87, 0.93, and 1.00, respectively. When the significance level is 0.01, the DoHs of the six sub-segments are 0.95, 0.87, 0.85, 0.80, 0.88, and 0.87, respectively. These results show that the first sub-segment is the most homogeneous category, while the fourth has more variety in terms of its needs. If the sample size is sufficiently large, further segmentation would be better within a given sub-segment. However, as the fourth sub-segment is smaller than the others, more detailed segmentation was not performed. A very critical point for a successful micro-marketing strategy is measuring the fit of a sub-segment. However, until now, there have been no robust rules for measuring fit. This paper presents a method of evaluating the fit of sub-segments. This method will be very helpful for deciding the adequacy of sub-segmentation. However, it has some limitations that prevent it from being robust: (1) the method is restricted to quantitative questions; (2) the type of questions to be involved in the calculation poses difficulties; (3) DoH values depend on content formation. Despite these limitations, this paper has presented a useful method for conducting adequate sub-segmentation. We believe that the present method can be applied widely in many areas. Furthermore, the results of the sub-segmentation of the elderly generation can serve as a reference for mature marketing.
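
A small sketch of the DoH computation may help. The exact null model is an assumption on my part: each question's most-chosen case is tested against uniform selection across that question's cases with a one-sided z-test for a proportion, and the DoH is the rejection rate across questions.

```python
# A hedged sketch of the degree-of-homogeneity (DoH) measure: the share of
# survey questions in which one answer case dominates significantly.
import numpy as np
from scipy.stats import norm

def question_is_homogeneous(counts, alpha=0.05):
    """counts: number of responses per case for one survey question."""
    counts = np.asarray(counts, dtype=float)
    n, c = counts.sum(), len(counts)
    p_hat, p0 = counts.max() / n, 1.0 / c     # observed top share vs. uniform null
    z = (p_hat - p0) / np.sqrt(p0 * (1 - p0) / n)
    return z > norm.ppf(1 - alpha)            # reject H0: no dominant case

def degree_of_homogeneity(questions, alpha=0.05):
    flags = [question_is_homogeneous(q, alpha) for q in questions]
    return sum(flags) / len(flags)

# Toy sub-segment: 3 questions answered by ~113 respondents each.
questions = [
    [80, 15, 10, 8],    # one case clearly dominates -> homogeneous
    [32, 30, 28, 23],   # no dominant case
    [70, 20, 15, 8],
]
print("DoH:", degree_of_homogeneity(questions))
```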


Studies on the Mechanical Properties of Weathered Granitic Soil -On the Elements of Shear Strength and Hardness- (화강암질풍화토(花崗岩質風化土)의 역학적(力學的) 성질(性質)에 관(關)한 연구(硏究) -전단강도(剪斷强度)의 영향요소(影響要素)와 견밀도(堅密度)에 대(對)하여-)

  • Cho, Hi Doo
    • Journal of Korean Society of Forest Science / v.66 no.1 / pp.16-36 / 1984
  • It is very important in forestry to study the shear strength of weathered granitic soil, because this soil covers 66% of our country and the majority of landslides occur in it. In general, the causes of landslides can be classified into external and internal factors. The external factors are known to be vegetation, geography, and climate, while the internal factors are the engineering properties originating from parent rocks and weathering. Soil engineering properties are controlled by the skeleton structure, texture, consistency, cohesion, permeability, water content, mineral components, porosity, density, etc. of soils. The effects of these internal factors on sliding can be summarized as the shear strength, i.e., the resistance of the soil mass against sliding. Among these properties, shear strength basically depends upon effective stress, the kind of soil, density (void ratio), water content, and the structure and arrangement of soil particles. These elements of shear strength do not act alone, but together. The purpose of this thesis is to clarify the characteristics of shear strength and the related elements, such as water content (w_o), void ratio (e_o), dry density (γ_d), and specific gravity (G_s), and the interrelationships among the related elements, in order to determine the dominant element chiefly influencing shear strength in the natural/undisturbed state of weathered granitic soil, in addition to the characteristics of soil hardness of weathered granitic soil and the root distribution of Pinus rigida Mill. and Pinus rigida × taeda planted in erosion-controlled lands. For the characteristics of shear strength of weathered granitic soil and the related elements of shear strength, three sites were selected from the Kwangju district. The outlines of the sampling sites in the district were: average specific gravity, 2.63~2.79; average natural water content, 24.3~28.3%; average dry density, 1.31~1.43 g/cm³; average void ratio, 0.93~1.001; cohesion, 0.2~0.75 kg/cm²; angle of internal friction, 29°~45°; soil texture, SL. The shear strength of the soil at the different sites was measured by a direct shear apparatus (type B; shear box size, 62.5 × 20 mm; σ, 1.434 kg/cm²; speed, 1/100 mm/min). For the related element analyses, water content was adjusted through a series of drainage experiments with 4 levels of drainage period, specific gravity was measured by KS F 308, particle size distribution was analyzed by KS F 2302, and soil samples were dried at 110±5°C for more than 12 hours in a drying oven. Soil hardness reflects physical properties such as particle size distribution, porosity, bulk density, and water content of the soil, and testing hardness with a soil hardness tester is the simplest and most broadly indicative method of grasping the mechanical properties of soil. It is important to understand the mechanical properties of soil as well as the chemical ones in order to understand the fundamental phenomena in the growth and distribution of tree roots. The writer intended to study the correlation between soil hardness and the distribution of the tree roots of Pinus rigida Mill. planted in 1966 and Pinus rigida × taeda planted in 1959 to 1960 in denuded forest lands subjected to several erosion control works. The soil texture of the investigated sites was SL, originating from weathered granitic soil.
The former site is situated at Pyŏngchang-ri, Kyŏm-myŏn, Kogsŏng-gun, Chŏllanam-do (3.63 ha; slope, 17°~41°; soil depth, thin or medium; humidity, dry or optimum; height, 5.66/3.73~7.63 m; D.B.H., 9.7/8.00~12.00 cm) and the latter at Changun-dong, Kwangju-shi (3.50 ha; slope, 12°~23°; soil depth, thin; humidity, dry; height, 10.47/7.3~12.79 m; D.B.H., 16.94/14.3~19.4 cm). The sampling areas were 24 quadrats (10 m × 10 m) in the former area and 12 in the latter, extending from summit to foot. The sample trees for the hardness test and the investigation of root distribution were chosen by purposive selection, and soil profiles for these trees were made at a downward distance of 50 cm from the trees in each quadrat. The soil layers of each profile were separated at intervals of 10 cm from the surface (layers I, II, ...). Soil hardness was measured with a Yamanaka soil hardness tester and expressed as the indicated soil hardness of the different soil layers. The distribution of the number of tree roots per unit area at different soil depths was investigated, and the relationship between soil hardness and the number of tree roots was discussed. The results obtained from the experiments are summarized as follows.
1. Analyses of the simple relationships between shear strength and its elements, water content (w_o), void ratio (e_o), dry density (γ_d), and specific gravity (G_s):
1) Negative correlation coefficients were found between shear strength and water content, and between shear strength and void ratio.
2) Positive correlation coefficients were found between shear strength and dry density.
3) The correlation coefficients between shear strength and specific gravity were not significant.
2. Analyses of partial and multiple correlation coefficients between shear strength and the related elements:
1) From the analyses of the partial correlation coefficients among water content (x_1), void ratio (x_2), and dry density (x_3), the direct effect of water content on shear strength was the highest, followed in order by void ratio and dry density. A similar trend was seen in the results of the multiple correlation coefficient analyses.
2) Multiple linear regression equations derived from the two independent variables water content (x_1) and dry density (x_2) were found to be ineffective in estimating shear strength (Ŷ). However, the simple linear regression equations with the single independent variable water content (x) were highly efficient in estimating shear strength (Ŷ), with relatively high fitness.
3. The relationship between soil hardness and the distribution of root numbers:
1) Soil hardness increased proportionally to soil depth. Negative correlation coefficients were found between the indicated soil hardness and the number of tree roots in both plantations.
2) The majority of the tree roots of Pinus rigida Mill. and Pinus rigida × taeda planted in erosion-controlled lands were distributed within 20 cm of the surface.
3) Simple linear regression equations were derived from the indicated hardness (x) and the number of tree roots (Y) to estimate root numbers in both plantations.
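
As a minimal illustration of the regression analyses summarized above, the sketch below fits a simple linear regression of shear strength on water content and checks a negative hardness-roots correlation. Every number is invented for illustration; only the water-content range echoes the sampling-site summary.

```python
# A hedged sketch of the study's two regression analyses on synthetic data.
import numpy as np
from scipy.stats import linregress, pearsonr

water_content = np.array([24.3, 25.1, 26.0, 26.8, 27.5, 28.3])   # w, %
shear_strength = np.array([1.32, 1.27, 1.18, 1.10, 1.02, 0.95])  # kg/cm^2

# Simple linear regression of shear strength on water content (negative slope,
# matching the reported negative correlation).
fit = linregress(water_content, shear_strength)
print(f"shear strength = {fit.slope:.3f} * w + {fit.intercept:.3f}, r = {fit.rvalue:.3f}")

# Negative correlation between indicated hardness and root numbers, as the
# study found in both plantations.
hardness = np.array([12, 15, 18, 21, 24, 27])   # indicated soil hardness
roots = np.array([46, 38, 30, 22, 14, 9])       # roots per unit area
r, p = pearsonr(hardness, roots)
print(f"hardness vs. roots: r = {r:.3f}, p = {p:.4f}")
```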
