• Title/Summary/Keyword: Initial data

Prediction of commitment and persistence in heterosexual involvements according to the styles of loving using a datamining technique (데이터마이닝을 활용한 사랑의 형태에 따른 연인관계 몰입수준 및 관계 지속여부 예측)

  • Park, Yoon-Joo
    • Journal of Intelligence and Information Systems
    • /
    • v.22 no.4
    • /
    • pp.69-85
    • /
    • 2016
  • A successful relationship with a loving partner is one of the most important factors in life. In psychology, there has been previous research on the factors influencing romantic relationships. However, most of this research was based on statistical analysis, which has limitations in analyzing complex non-linear relationships or rule-based reasoning. This research analyzes commitment and persistence in heterosexual involvements according to styles of loving, using a data mining technique as well as statistical methods. In this research, we consider six different styles of loving - 'eros', 'ludus', 'storge', 'pragma', 'mania' and 'agape' - which influence romantic relationships between lovers, in addition to the factors suggested by previous research. These six types of love are defined by Lee (1977) as follows: 'eros' is romantic, passionate love; 'ludus' is a game-playing or uncommitted love; 'storge' is a slowly developing, friendship-based love; 'pragma' is a pragmatic, practical, mutually beneficial relationship; 'mania' is an obsessive or possessive love; and, lastly, 'agape' is a gentle, caring, giving type of love, brotherly love, not concerned with the self. For this research, data from 105 heterosexual couples were collected. Using the data, linear regression was first performed to identify the important factors associated with commitment to a partner. The result shows that 'satisfaction', 'eros' and 'agape' are significant factors associated with the commitment level for both males and females. Interestingly, for males, 'agape' has a greater effect on commitment than 'eros'. For females, on the other hand, 'eros' is a more significant factor for commitment than 'agape'. In addition, the male's 'investment' is also a crucial factor for male commitment. Next, decision tree analysis was performed to find the characteristics of high-commitment and low-commitment couples. To build the decision tree models in this experiment, the 'decision tree' operator in the data mining tool RapidMiner was used. The experimental result shows that males with a high satisfaction level in the relationship show a high commitment level. Even if a male does not have a high satisfaction level, if he has made a large financial or emotional investment in the relationship and his partner shows him a certain amount of 'agape', he also shows a high commitment level. In the case of females, a woman with high 'eros' and 'satisfaction' levels shows a high commitment level. Even if a female does not have a high satisfaction level, if her partner shows a certain amount of 'mania', she also shows a high commitment level. Finally, this research built a prediction model, using a decision tree, to establish whether the relationship will persist or break up. The result shows that the most important factor influencing a break-up is the male's 'narcissistic tendency'. In addition, the 'satisfaction', 'investment' and 'mania' of both the male and the female also affect a break-up. Interestingly, while the 'mania' level of a male works positively to maintain the relationship, that of a female has a negative influence. The contribution of this research is the adoption of a new analysis technique, data mining, for psychology. In addition, the results of this research can provide useful advice to couples for building a harmonious relationship with each other. This research has several limitations. First, the experimental data were sampled using an oversampling technique to balance the size of each class, which limits the objective evaluation of the predictive models' performance. Second, the outcome data - whether the relationship persisted or not - were collected over a relatively short period, 6 months after the initial data collection. Lastly, most of the survey respondents are in their 20s. To obtain more general results, we would like to extend this research to the general population.
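The decision-tree step described above can be reproduced in outline with any standard tree learner. The following is a minimal sketch in Python rather than the authors' RapidMiner workflow; the file name, column names, and binary 'persisted' label are illustrative assumptions, not the study's actual data.

```python
# Minimal sketch of a decision-tree model over love-style scores, assuming a
# hypothetical CSV with the six love styles, satisfaction, investment, and a
# binary 'persisted' label. All names are illustrative only.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text

df = pd.read_csv("couples.csv")  # hypothetical file
features = ["eros", "ludus", "storge", "pragma", "mania", "agape",
            "satisfaction", "investment"]
X, y = df[features], df["persisted"]

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=42, stratify=y)

# A shallow tree, analogous to the rule extraction reported in the paper
tree = DecisionTreeClassifier(max_depth=3, random_state=42)
tree.fit(X_train, y_train)

print("test accuracy:", tree.score(X_test, y_test))
print(export_text(tree, feature_names=features))  # human-readable rules
```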

S-wave Velocity Derivation Near the BSR Depth of the Gas-hydrate Prospect Area Using Marine Multi-component Seismic Data (해양 다성분 탄성파 자료를 이용한 가스하이드레이트 유망지역의 BSR 상하부 S파 속도 도출)

  • Kim, Byoung-Yeop;Byun, Joong-Moo
    • Economic and Environmental Geology
    • /
    • v.44 no.3
    • /
    • pp.229-238
    • /
    • 2011
  • S-wave velocity, which provides lithology and pore-fluid information, plays a key role in estimating gas hydrate saturation. In general, P- and S-wave velocities increase in the presence of gas hydrate, and the P-wave velocity decreases in the presence of free gas under the gas hydrate layer. In contrast, the S-wave velocity changes very little, or even increases slightly, in the free-gas layer, because S-waves are not affected by the pore fluid when propagating through it. To verify these velocity properties near the BSR (bottom-simulating reflector) depth in the gas hydrate prospect area of the Ulleung Basin, P- and S-wave velocity profiles were derived from multi-component ocean-bottom seismic data acquired by the Korea Institute of Geoscience and Mineral Resources (KIGAM) in May 2009. OBS (ocean-bottom seismometer) hydrophone component data were first modeled and inverted through a traveltime inversion method to derive the P-wave velocity and depth model of the survey area. 2-D multichannel stacked data were incorporated as an initial model. The two horizontal geophone component data were then polarization-filtered and rotated to produce radial component sections. Traveltimes of the main S-wave events were picked and used for forward modeling incorporating Poisson's ratio. This modeling provides S-wave velocity profiles and Poisson's ratio profiles at every OBS site. The results show that P-wave velocities at most OBS sites decrease beneath the BSR, whereas S-wave velocities increase slightly. Consequently, Poisson's ratio decreases strongly beneath the BSR, indicating the presence of a free-gas layer under the BSR.
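The link between the two velocities and Poisson's ratio is the standard elastic relation $\nu = (V_p^2 - 2V_s^2) / (2(V_p^2 - V_s^2))$. A minimal sketch with purely illustrative velocity values (not from the survey) shows how free gas, which lowers Vp while barely changing Vs, pulls the ratio down below the BSR.

```python
import numpy as np

def poissons_ratio(vp, vs):
    """Poisson's ratio from P- and S-wave velocities (standard elastic relation)."""
    vp, vs = np.asarray(vp, float), np.asarray(vs, float)
    return (vp**2 - 2.0 * vs**2) / (2.0 * (vp**2 - vs**2))

# Illustrative values only (m/s): above vs. below a BSR where free gas
# lowers Vp but barely changes Vs, so Poisson's ratio drops sharply.
print(poissons_ratio(2000.0, 700.0))   # above BSR, ~0.43
print(poissons_ratio(1500.0, 720.0))   # below BSR (free gas), ~0.35
```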

The GOCI-II Early Mission Ocean Color Products in Comparison with the GOCI Toward the Continuity of Chollian Multi-satellite Ocean Color Data (천리안해양위성 연속자료 구축을 위한 GOCI-II 임무 초기 주요 해색산출물의 GOCI 자료와 비교 분석)

  • Park, Myung-Sook;Jung, Hahn Chul;Lee, Seonju;Ahn, Jae-Hyun;Bae, Sujung;Choi, Jong-Kuk
    • Korean Journal of Remote Sensing
    • /
    • v.37 no.5_2
    • /
    • pp.1281-1293
    • /
    • 2021
  • The recent launch of GOCI-II gives South Korea the world's first capability to derive ocean color data from geostationary orbit over a span of about 20 years. It is necessary to develop a consistent long-term ocean color time series spanning the GOCI and GOCI-II missions and to improve its accuracy through validation with in situ data. To assess GOCI-II's early mission performance, this study compares the GOCI-II chlorophyll-a concentration (Chl-a), colored dissolved organic matter (CDOM), and remote sensing reflectances (Rrs) with the GOCI data. Overall, the distribution of GOCI-II Chl-a corresponds with that of the GOCI over the Yellow Sea, Korea Strait, and the Ulleung Basin. In particular, a small RMSE value (0.07) between GOCI and GOCI-II over the summer Ulleung Basin confirms the reliability of the GOCI-II data. However, despite the excellent correlation, GOCI-II tends to overestimate Chl-a relative to the GOCI over the Yellow Sea and Korea Strait. A similar overestimation bias of GOCI-II is also notable in CDOM. Whereas no significant bias or error is found for Rrs at 490 nm and 550 nm (RMSE ~ 0), the underestimation of Rrs at 443 nm contributes to the overestimation of GOCI-II Chl-a and CDOM over the Yellow Sea and the Korea Strait. We also show that the overestimation of GOCI-II Rrs at 660 nm relative to GOCI may cause a bias in total suspended sediment. In conclusion, this study confirms the initial reliability of the GOCI-II ocean color products, and an upcoming update of the GOCI-II radiometric calibration is expected to lessen the inconsistency between GOCI and GOCI-II ocean color products.
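The product comparison rests on standard matchup statistics (RMSE and mean bias between collocated retrievals). A minimal sketch follows, assuming hypothetical arrays of collocated GOCI and GOCI-II Chl-a values; the numbers are placeholders, not values from the study.

```python
import numpy as np

def matchup_stats(ref, test):
    """RMSE and mean bias between collocated ocean color retrievals."""
    ref, test = np.asarray(ref, float), np.asarray(test, float)
    ok = np.isfinite(ref) & np.isfinite(test)   # drop cloud/flagged pixels
    diff = test[ok] - ref[ok]
    return np.sqrt(np.mean(diff**2)), np.mean(diff)

# Illustrative matchups only (mg m^-3): GOCI as reference, GOCI-II as test.
goci    = [0.35, 0.52, 0.41, 0.60]
goci_ii = [0.38, 0.57, 0.45, 0.66]
rmse, bias = matchup_stats(goci, goci_ii)
print(f"RMSE={rmse:.3f}, bias={bias:.3f}")
```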

Comparative assessment and uncertainty analysis of ensemble-based hydrologic data assimilation using airGRdatassim (airGRdatassim을 이용한 앙상블 기반 수문자료동화 기법의 비교 및 불확실성 평가)

  • Lee, Garim;Lee, Songhee;Kim, Bomi;Woo, Dong Kook;Noh, Seong Jin
    • Journal of Korea Water Resources Association
    • /
    • v.55 no.10
    • /
    • pp.761-774
    • /
    • 2022
  • Accurate hydrologic prediction is essential to analyze the effects of drought, flood, and climate change on flow rates, water quality, and ecosystems. Disentangling the uncertainty of the hydrological model is one of the important issues in hydrology and water resources research. Hydrologic data assimilation (DA), a technique that updates the states or parameters of a hydrological model to produce the most likely estimates of its initial conditions, is one way to minimize uncertainty in hydrological simulations and improve predictive accuracy. In this study, two ensemble-based sequential DA techniques, the ensemble Kalman filter and the particle filter, are comparatively analyzed for daily discharge simulation at the Yongdam catchment using airGRdatassim. The results showed that the Kling-Gupta efficiency (KGE) improved from 0.799 in the open-loop simulation to 0.826 with the ensemble Kalman filter and 0.933 with the particle filter. In addition, we analyzed the effects of hyper-parameters related to the data assimilation methods, such as the precipitation and potential evaporation forcing error parameters and the selection of perturbed and updated states. Under the forcing error conditions tested, the particle filter was superior to the ensemble Kalman filter in terms of the KGE index, and the optimal forcing noise was relatively smaller for the particle filter. In addition, with more state variables included in the updating step, the performance of data assimilation improved, implying that an adequate selection of updated states can itself be considered a hyper-parameter. The simulation experiments in this study indicate that DA hyper-parameters need to be carefully optimized to exploit the potential of DA methods.
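The KGE scores quoted above follow the commonly used Gupta et al. (2009) formulation, $KGE = 1 - \sqrt{(r-1)^2 + (\alpha-1)^2 + (\beta-1)^2}$, where $r$ is the correlation, $\alpha$ the ratio of standard deviations, and $\beta$ the ratio of means between simulated and observed discharge (the paper may use a variant). A minimal sketch with illustrative discharge values:

```python
import numpy as np

def kge(sim, obs):
    """Kling-Gupta efficiency (Gupta et al., 2009 formulation)."""
    sim, obs = np.asarray(sim, float), np.asarray(obs, float)
    r = np.corrcoef(sim, obs)[0, 1]          # linear correlation
    alpha = np.std(sim) / np.std(obs)        # variability ratio
    beta = np.mean(sim) / np.mean(obs)       # bias ratio
    return 1.0 - np.sqrt((r - 1)**2 + (alpha - 1)**2 + (beta - 1)**2)

# Illustrative daily discharge series only (m^3/s), not from the Yongdam catchment.
obs = [10.2, 12.5, 30.1, 25.4, 18.0, 14.3]
sim = [11.0, 13.0, 27.5, 24.0, 19.2, 15.1]
print(f"KGE = {kge(sim, obs):.3f}")
```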

Geomagnetic Paleosecular Variation in the Korean Peninsula during the First Six Centuries (기원후 600년간 한반도 지구 자기장 고영년변화)

  • Park, Jong kyu;Park, Yong-Hee
    • The Journal of Engineering Geology
    • /
    • v.32 no.4
    • /
    • pp.611-625
    • /
    • 2022
  • One application of geomagnetic paleosecular variation (PSV) is the age dating of archeological remains (i.e., the archeomagnetic dating technique). This application requires a local PSV model that reflects non-dipole fields with regional differences. Until now, the tentative Korean paleosecular variation (t-KPSV), calculated based on JPSV (SW Japanese PSV), has been applied as a reference curve for individual archeomagnetic directions in Korea. However, it is less reliable due to regional differences in the non-dipole magnetic field. Here, we present PSV curves for AD 1 to 600, corresponding to the Korean Three Kingdoms (including the Proto Three Kingdoms) Period, using the results of archeomagnetic studies in the Korean Peninsula and published research data. We then compare our PSV with global geomagnetic prediction models and the t-KPSV. A total of 49 reliable archeomagnetic directional data from 16 regions were compiled for our PSV. In detail, each datum showed statistical consistency (N > 6, α95 < 7.8°, and k > 57.8) and had radiocarbon or archeological ages in the range AD 1 to 600 with an error range of less than ±200 years. The compiled PSV for the first six centuries (KPSV0.6k) showed declination and inclination in the ranges of 341.7° to 20.1° and 43.5° to 60.3°, respectively. Compared to the t-KPSV, our curve revealed different variation patterns in both declination and inclination. On the other hand, KPSV0.6k and the global geomagnetic prediction models (ARCH3K.1, CALS3K.4, and SED3K.1) revealed consistent variation trends during the first six centuries. In particular, ARCH3K.1 showed the best fit with our KPSV0.6k. These results indicate that the contribution of the non-dipole field to Korea and Japan is quite different, despite their geographical proximity. Moreover, the compilation of archeomagnetic data from Korean territory is essential for building a reliable PSV curve for use as an age dating tool. Lastly, we double-checked the reliability of our KPSV0.6k by showing a good fit of newly acquired, age-controlled archeomagnetic data to our curve.
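The acceptance criteria quoted above (N > 6, α95 < 7.8°, k > 57.8, ages within AD 1 to 600, age error under ±200 years) amount to a simple filter over candidate site records. A minimal sketch follows; the records and field names are hypothetical, not the authors' actual data format.

```python
# Minimal sketch of the data-selection step, assuming hypothetical site
# records with Fisher statistics (n, alpha95, k) and age information.
records = [
    {"site": "A", "n": 8, "alpha95": 4.2, "k": 120.0, "age": 250, "age_err": 80},
    {"site": "B", "n": 5, "alpha95": 6.0, "k": 95.0,  "age": 410, "age_err": 150},
    {"site": "C", "n": 9, "alpha95": 9.1, "k": 40.0,  "age": 520, "age_err": 90},
]

def is_reliable(r):
    """Apply the acceptance criteria quoted in the abstract."""
    return (r["n"] > 6 and r["alpha95"] < 7.8 and r["k"] > 57.8
            and 1 <= r["age"] <= 600 and r["age_err"] < 200)

accepted = [r["site"] for r in records if is_reliable(r)]
print(accepted)  # only site A passes in this toy example
```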

Semantic Visualization of Dynamic Topic Modeling (다이내믹 토픽 모델링의 의미적 시각화 방법론)

  • Yeon, Jinwook;Boo, Hyunkyung;Kim, Namgyu
    • Journal of Intelligence and Information Systems
    • /
    • v.28 no.1
    • /
    • pp.131-154
    • /
    • 2022
  • Recently, research on unstructured data analysis has been actively conducted with the development of information and communication technology. In particular, topic modeling is a representative technique for discovering core topics from massive text data. In the early stages of topic modeling, most studies focused only on topic discovery. As the field matured, studies on how topics change over time began to be carried out. Accordingly, interest in dynamic topic modeling, which handles changes in the keywords constituting a topic, is also increasing. Dynamic topic modeling identifies major topics from the data of the initial period and manages the change and flow of topics by utilizing topic information of the previous period to derive topics in subsequent periods. However, it is very difficult to understand and interpret the results of dynamic topic modeling. The results of traditional dynamic topic modeling simply reveal changes in keywords and their rankings, and this information is insufficient to represent how the meaning of a topic has changed. Therefore, in this study, we propose a method to visualize topics by period that reflects the meaning of the keywords in each topic. In addition, we propose a method for intuitively interpreting changes in topics and the relationships among topics. The detailed method of visualizing topics by period is as follows. In the first step, dynamic topic modeling is implemented to derive the top keywords of each period and their weights from the text data. In the second step, we derive vectors for the top keywords of each topic from a pre-trained word embedding model and perform dimension reduction on the extracted vectors. We then formulate a semantic vector for each topic by calculating the weighted sum of the keyword vectors using each keyword's topic weight. In the third step, we visualize the semantic vector of each topic using matplotlib and analyze the relationships among the topics based on the visualized result. The change of a topic can be interpreted as follows: from the result of dynamic topic modeling, we identify the top 5 rising and top 5 descending keywords for each period to show how the topic changes. Many existing topic visualization studies visualize the keywords of each topic, but the approach proposed in this study differs in that it attempts to visualize each topic itself. To evaluate the practical applicability of the proposed methodology, we performed an experiment on 1,847 abstracts of artificial intelligence-related papers, divided into three periods (2016-2017, 2018-2019, 2020-2021). We selected seven topics based on the coherence score and utilized a pre-trained Word2vec word embedding model trained on Wikipedia, an Internet encyclopedia. Based on the proposed methodology, we generated a semantic vector for each topic and, by reflecting the meaning of the keywords, visualized and interpreted the topics by period. Through these experiments, we confirmed that the rise and fall of a keyword's topic weight can be used to interpret the semantic change of the corresponding topic and to grasp the relationships among topics.
In this study, to overcome the limitations of dynamic topic modeling results, we used word embedding and dimension reduction techniques to visualize topics by period. The results of this study are meaningful in that they broaden the scope of topic understanding through the visualization of dynamic topic modeling results. In addition, the study lays the foundation for follow-up studies that use various word embeddings and dimensionality reduction techniques to improve the performance of the proposed methodology.
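The semantic-vector step described above (keyword embeddings, dimension reduction, weighted sum per topic, matplotlib plot) can be sketched as follows. The keyword lists, topic weights, and embedding-model path are illustrative assumptions, not the authors' data or exact pipeline.

```python
import numpy as np
import matplotlib.pyplot as plt
from gensim.models import KeyedVectors
from sklearn.decomposition import PCA

# Hypothetical pre-trained embeddings and topic keyword/weight lists.
wv = KeyedVectors.load_word2vec_format("wiki_word2vec.bin", binary=True)

topics = {
    "topic_1": [("network", 0.09), ("training", 0.07), ("image", 0.05)],
    "topic_2": [("language", 0.10), ("translation", 0.06), ("corpus", 0.04)],
    "topic_3": [("robot", 0.08), ("control", 0.05), ("sensor", 0.04)],
}

# 1) Collect embeddings of all top keywords and reduce them to 2-D.
keywords = [k for kws in topics.values() for k, _ in kws if k in wv]
reduced = dict(zip(keywords, PCA(n_components=2).fit_transform(
    np.vstack([wv[k] for k in keywords]))))

# 2) Semantic vector of a topic = weighted sum of its reduced keyword vectors.
def semantic_vector(kws):
    return np.sum([w * reduced[k] for k, w in kws if k in reduced], axis=0)

# 3) Plot each topic's semantic vector for visual comparison across topics.
for name, kws in topics.items():
    x, y = semantic_vector(kws)
    plt.scatter(x, y)
    plt.annotate(name, (x, y))
plt.show()
```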

Sentiment Analysis of Korean Reviews Using CNN: Focusing on Morpheme Embedding (CNN을 적용한 한국어 상품평 감성분석: 형태소 임베딩을 중심으로)

  • Park, Hyun-jung;Song, Min-chae;Shin, Kyung-shik
    • Journal of Intelligence and Information Systems
    • /
    • v.24 no.2
    • /
    • pp.59-83
    • /
    • 2018
  • With the increasing importance of sentiment analysis for grasping the needs of customers and the public, various types of deep learning models have been actively applied to English texts. In deep learning based sentiment analysis of English texts, natural language sentences included in the training and test datasets are usually converted into sequences of word vectors before being fed into the deep learning models. In this case, word vectors generally refer to vector representations of words obtained by splitting a sentence on space characters. There are several ways to derive word vectors, one of which is Word2Vec, used to produce the 300-dimensional Google word vectors from about 100 billion words of Google News data. They have been widely used in studies of sentiment analysis of reviews from various fields such as restaurants, movies, laptops, and cameras. Unlike in English, morphemes play an essential role in sentiment analysis and sentence structure analysis in Korean, a typical agglutinative language with developed postpositions and endings. A morpheme can be defined as the smallest meaningful unit of a language, and a word consists of one or more morphemes. For example, for the word '예쁘고', the morphemes are '예쁘' (adjective) and '고' (connective ending). Reflecting the significance of Korean morphemes, it seems reasonable to adopt the morpheme as the basic unit in Korean sentiment analysis. Therefore, in this study, we use 'morpheme vectors' as the input to a deep learning model rather than the 'word vectors' mainly used for English text. A morpheme vector is a vector representation of a morpheme and can be derived by applying an existing word vector derivation mechanism to sentences divided into their constituent morphemes. At this point, several questions arise. What is the desirable range of POS (part-of-speech) tags when deriving morpheme vectors for improving the classification accuracy of a deep learning model? Is it proper to apply a typical word vector model, which relies primarily on the form of words, to Korean, which has a high homonym ratio? Will text preprocessing such as correcting spelling or spacing errors affect the classification accuracy, especially when deriving morpheme vectors from Korean product reviews with many grammatical mistakes and variations? We seek empirical answers to these fundamental issues, which may be the first ones encountered when applying various deep learning models to Korean texts. As a starting point, we summarize these issues in three central research questions. First, which is more effective as the initial input of a deep learning model: morpheme vectors from grammatically correct texts of a domain other than the analysis target, or morpheme vectors from considerably ungrammatical texts of the same domain? Second, what is an appropriate morpheme vector derivation method for Korean regarding the range of POS tags, homonyms, text preprocessing, and minimum frequency? Third, can we achieve a satisfactory level of classification accuracy when applying deep learning to Korean sentiment analysis? To address these research questions, we generate various types of morpheme vectors reflecting them and then compare the classification accuracy through a non-static CNN (Convolutional Neural Network) model that takes the morpheme vectors as input. As training and test datasets, 17,260 cosmetics product reviews from Naver Shopping are used.
To derive morpheme vectors, we use data from the same domain as the target and data from another domain: about 2 million Naver Shopping cosmetics product reviews and 520,000 Naver News articles, roughly corresponding to Google's News data. The six primary sets of morpheme vectors constructed in this study differ in terms of the following three criteria. First, they come from two types of data source: Naver News, with high grammatical correctness, and Naver Shopping cosmetics product reviews, with low grammatical correctness. Second, they are distinguished by the degree of preprocessing, namely, only sentence splitting, or additional spelling and spacing corrections after sentence separation. Third, they vary in the form of input fed into the word vector model: either the morphemes themselves or the morphemes with their POS tags attached. The morpheme vectors further vary depending on the range of POS tags considered, the minimum frequency of morphemes included, and the random initialization range. All morpheme vectors are derived through the CBOW (Continuous Bag-Of-Words) model with a context window of 5 and a vector dimension of 300. It appears that utilizing text of the same domain even with a lower degree of grammatical correctness, performing spelling and spacing corrections as well as sentence splitting, and incorporating morphemes of any POS tag, including the incomprehensible category, lead to better classification accuracy. POS tag attachment, devised for the high proportion of homonyms in Korean, and the minimum frequency threshold for a morpheme to be included appear to have no definite influence on the classification accuracy.
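The morpheme-vector derivation step (CBOW, window 5, dimension 300) can be sketched with a standard Word2Vec implementation. The toy corpus below is a hypothetical placeholder with POS tags already attached; the real pipeline would first run a Korean morphological analyzer over the review sentences.

```python
# Minimal sketch of deriving morpheme vectors with CBOW (window 5, dim 300),
# assuming sentences are already split into morphemes (here with POS tags).
from gensim.models import Word2Vec

morpheme_sentences = [
    ["예쁘/VA", "고/EC", "배송/NNG", "빠르/VA", "다/EF"],
    ["향/NNG", "이/JKS", "좋/VA", "다/EF"],
]

model = Word2Vec(
    sentences=morpheme_sentences,
    vector_size=300,   # vector dimension used in the paper
    window=5,          # context window used in the paper
    sg=0,              # 0 = CBOW
    min_count=1,       # minimum frequency (a tuned hyper-parameter in the paper)
    seed=42,
)

print(model.wv["예쁘/VA"][:5])  # first few components of one morpheme vector
```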

A Study on the Application of Outlier Analysis for Fraud Detection: Focused on Transactions of Auction Exception Agricultural Products (부정 탐지를 위한 이상치 분석 활용방안 연구 : 농수산 상장예외품목 거래를 대상으로)

  • Kim, Dongsung;Kim, Kitae;Kim, Jongwoo;Park, Steve
    • Journal of Intelligence and Information Systems
    • /
    • v.20 no.3
    • /
    • pp.93-108
    • /
    • 2014
  • To support business decision making, interest and efforts to analyze and use transaction data from different perspectives are increasing. Such efforts are not limited to customer management or marketing; they are also used for monitoring and detecting fraud transactions. Fraud transactions are evolving into various patterns by taking advantage of information technology. To reflect this evolution, there have been many efforts to develop fraud detection methods and advanced application systems that improve the accuracy and ease of fraud detection. As a case of fraud detection, this study aims to provide effective fraud detection methods for auction exception agricultural products in the largest Korean agricultural wholesale market. The auction exception products policy exists to complement auction-based trades in the agricultural wholesale market. That is, most trades of agricultural products are performed by auction; however, specific products are designated as auction exception products when their total volumes are relatively small, the number of wholesalers is small, or wholesalers have difficulty purchasing the products. However, the auction exception products policy creates several problems regarding the fairness and transparency of transactions, which calls for fraud detection. In this study, to generate fraud detection rules, real large-scale agricultural product trade transaction data from 2008 to 2010 in the market are analyzed, comprising more than 1 million transactions and over 1 billion US dollars in transaction volume. Agricultural transaction data have unique characteristics such as frequent changes in supply volumes and turbulent time-dependent changes in price. Since this was the first attempt to identify fraud transactions in this domain, there was no training data set for supervised learning, so fraud detection rules are generated using an outlier detection approach. We assume that outlier transactions are more likely to be fraudulent than normal transactions. Outlier transactions are identified by comparing the daily, weekly, and quarterly average unit prices of product items. Quarterly average unit prices of product items of specific wholesalers are also used to identify outlier transactions. The reliability of the generated fraud detection rules is confirmed by domain experts. To determine whether a transaction is fraudulent or not, the normal distribution and the normalized Z-value concept are applied. That is, the unit price of a transaction is transformed into a Z-value to calculate its occurrence probability, approximating the distribution of unit prices by a normal distribution. A modified Z-value of the unit price is used rather than the original Z-value, because in the case of auction exception agricultural products, Z-values are influenced by the outlier fraud transactions themselves, as the number of wholesalers is small. The modified Z-values are called Self-Eliminated Z-scores because they are calculated by excluding the unit price of the specific transaction being checked. To show the usefulness of the proposed approach, a prototype fraud transaction detection system is developed using Delphi. The system consists of five main menus and related submenus. The first functionality of the system is to import transaction databases. The next important functions are to set up fraud detection parameters.
By changing the fraud detection parameters, system users can control the number of potential fraud transactions. Execution functions provide fraud detection results based on the fraud detection parameters. The potential fraud transactions can be viewed on screen or exported as files. The study is an initial attempt to identify fraud transactions in auction exception agricultural products, and many research topics on this issue remain. First, the scope of the analysis data was limited due to the availability of data. It is necessary to include more data on transactions, wholesalers, and producers to detect fraud transactions more accurately. Next, we need to extend the scope of fraud transaction detection to fishery products. There are also many possibilities for applying different data mining techniques to fraud detection. For example, a time series approach is a potential technique to apply to the problem. Although outlier transactions are detected here based on the unit prices of transactions, it is also possible to derive fraud detection rules based on transaction volumes.
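The Self-Eliminated Z-score described above can be read as a leave-one-out standardization: each unit price is scored against the mean and standard deviation of the remaining transactions for the same item. A minimal sketch follows; the unit prices are illustrative only, not data from the market.

```python
import numpy as np

def self_eliminated_z(prices, i):
    """Z-score of prices[i] computed against the mean and standard deviation
    of the remaining prices, i.e. excluding the checked transaction itself
    (a leave-one-out reading of the Self-Eliminated Z-score idea)."""
    others = np.delete(np.asarray(prices, float), i)
    return (prices[i] - others.mean()) / others.std(ddof=1)

# Illustrative unit prices for one product item; the last one looks suspicious.
unit_prices = [1000, 1050, 980, 1020, 990, 2500]
for i in range(len(unit_prices)):
    print(i, round(self_eliminated_z(unit_prices, i), 2))
```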

A Fast Processor Architecture and 2-D Data Scheduling Method to Implement the Lifting Scheme 2-D Discrete Wavelet Transform (리프팅 스킴의 2차원 이산 웨이브릿 변환 하드웨어 구현을 위한 고속 프로세서 구조 및 2차원 데이터 스케줄링 방법)

  • Kim Jong Woog;Chong Jong Wha
    • Journal of the Institute of Electronics Engineers of Korea SD
    • /
    • v.42 no.4 s.334
    • /
    • pp.19-28
    • /
    • 2005
  • In this paper, we proposed a fast parallel 2-D discrete wavelet transform hardware architecture based on the lifting scheme. The proposed architecture improved the 2-D processing speed and reduced the internal memory buffer size. Previous lifting-scheme-based parallel 2-D wavelet transform architectures consisted of row-direction and column-direction modules, each a pair of prediction and update filter modules. In the 2-D wavelet transform, column-direction processing uses the row-direction results, which are generated in row-direction order rather than column-direction order, so most hardware architectures need internal buffer memory. The proposed architecture focuses on reducing the internal memory buffer size and the total calculation time. To reduce the total calculation time, we propose 4-way data flow scheduling and a memory-based parallel hardware architecture. The 4-way data flow scheduling increases the row-direction parallelism and reduces the initial latency of the row-direction calculation. In this architecture, the internal buffer memory is not used to store the results of the row-direction calculation; instead, it holds intermediate values of the column-direction calculation. This method is very effective in column-direction processing, because the input data of the column direction are not generated in column-direction order. The proposed architecture was implemented in VHDL on an Altera Stratix device. The implementation results showed that the overall calculation time was reduced from $N^2/2+\alpha$ to $N^2/4+\beta$ and the internal buffer memory size was reduced to around $50\%$ of that of previous works.
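The paper targets hardware, but the predict/update structure that the row and column modules implement can be illustrated in software. The sketch below uses the well-known CDF 5/3 (LeGall) lifting steps with periodic boundary handling as an example; it is not necessarily the exact filter or boundary treatment of the paper.

```python
import numpy as np

def lifting_53_forward(x):
    """One level of a CDF 5/3 lifting transform (predict, then update),
    shown purely to illustrate the predict/update module structure.
    Boundary handling is periodic for simplicity."""
    x = np.asarray(x, float)
    even, odd = x[0::2].copy(), x[1::2].copy()
    # Predict step: detail = odd sample minus the average of neighboring evens
    odd -= 0.5 * (even + np.roll(even, -1))
    # Update step: approximation = even sample plus a quarter of neighboring details
    even += 0.25 * (np.roll(odd, 1) + odd)
    return even, odd   # approximation (low-pass), detail (high-pass)

# In a 2-D transform, this 1-D step is applied along rows, then along columns.
approx, detail = lifting_53_forward([3, 7, 1, 8, 2, 6, 9, 4])
print(approx, detail)
```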

Effects of cutting Frequency and the Last cutting Date on Regrowth and Production in Timothy-dominated Sward (티머시 우점초지에서 예취빈도와 최종예취시기가 목초의 재생 및 생산성에 미치는 영향)

  • 신재순;이병석;신기준;이효원
    • Journal of The Korean Society of Grassland and Forage Science
    • /
    • v.6 no.2
    • /
    • pp.84-90
    • /
    • 1986
  • This experiment was carried out to evaluate the effect of cutting frequency and the last cutting date on dry matter yield, the initial characteristics of spring growth, the yield of the first crop after winter, crude protein and crude fiber yields, and the correlation coefficients among these items in a timothy-dominated sward. Cutting frequency was scheduled at 2, 3, and 4 times a year as the main plot, and the last cutting dates in autumn were Sept. 30, Oct. 10, and Oct. 20 as the subplot. The experiment was arranged as a split-plot design with three replications and was carried out for 4 years, from 1980 to 1983, in an alpine area. The results obtained are summarized as follows: 1. The start of spring growth was somewhat earlier as cutting frequency increased, though not significantly, and was not influenced by the last cutting date. 2. The dry matter yield was decreased by cutting frequency, but was not affected by the last cutting date. 3. The dry matter yield of the first crop after winter decreased significantly with cutting frequency, but showed no significant differences by the last cutting date. 4. Crude protein yield was increased by cutting frequency, while dry matter percentage was decreased. Crude fiber yield did not show the same trend. 5. There was a significant positive correlation between DM yield and both DM percentage and the yield of the first crop after winter, and between DM percentage and the yield of the first crop after winter. However, there was a significant negative correlation between crude protein yield and both DM percentage and the yield of the first crop after winter. 6. It may be concluded from the above results that a cutting frequency of three times a year with Sept. 30 as the last cutting date was desirable for DM yield, whereas four times a year with Sept. 30 as the last cutting date was desirable for crude protein yield.
