• Title/Summary/Keyword: System of Systems (SoS)

Search Results: 2,231

COATED PARTICLE FUEL FOR HIGH TEMPERATURE GAS COOLED REACTORS

  • Verfondern, Karl;Nabielek, Heinz;Kendall, James M.
    • Nuclear Engineering and Technology / v.39 no.5 / pp.603-616 / 2007
  • Roy Huddle, having invented the coated particle at Harwell in 1957, stated in the early 1970s that we now know everything about particles and coatings and should move on to other problems. This was on the occasion of the Dragon fuel performance information meeting in London in 1973: how wrong a genius can be! It took until 1978 before really good particles were made in Germany, then during the Japanese HTTR production in the 1990s, and finally the Chinese 2000-2001 campaign for HTR-10. Here, we present a review of the history and present status. Today, good fuel is measured by different standards from the seventies: where a $9*10^{-4}$ initial free heavy metal fraction was typical for early AVR carbide fuel and $3*10^{-4}$ was acceptable for oxide fuel in THTR, we insist today on values more than an order of magnitude below this. Half a percent particle failure at end-of-irradiation, another old standard, is not acceptable today, even for the most severe accidents. While legislation and licensing have not changed, one of the reasons we insist on these improvements is the preference for passive systems over the active controls of earlier times. With renewed HTGR interest, we report on the start of new or reactivated coated particle work in several parts of the world, considering the aspects of designs, traditional and new materials, manufacturing technologies, quality control/quality assurance, irradiation and accident performance, modeling and performance predictions, and fuel cycle aspects and spent fuel treatment. In very general terms, the coated particle should be strong, reliable, retentive, and affordable. These properties have to be quantified and will eventually be optimized for a specific application system. Results obtained so far indicate that the same particle can be used for steam cycle applications with $700-750^{\circ}C$ helium coolant gas exit, for gas turbine applications at $850-900^{\circ}C$, and for process heat/hydrogen generation applications with $950^{\circ}C$ outlet temperatures. There is a clear set of standards for modern high-quality fuel in terms of low levels of heavy metal contamination, manufacture-induced particle defects during fuel body and fuel element making, irradiation/accident-induced particle failures, and limits on fission product release from intact particles. While gas-cooled reactor design is still open-ended, with block-type fuel elements for the prismatic design and spherical fuel elements for the pebble-bed design, there is near-worldwide agreement on high-quality fuel: a $500{\mu}m$ diameter $UO_2$ kernel of 10% enrichment is surrounded by a $100{\mu}m$ thick sacrificial buffer layer, followed by a dense inner pyrocarbon layer, a high-quality silicon carbide layer of $35{\mu}m$ thickness at theoretical density, and an outer pyrocarbon layer. Good performance has been demonstrated under both operational and accident conditions, i.e. to 10% FIMA and a maximum of $1600^{\circ}C$ afterwards. It is this wide-ranging demonstration experience that makes this particle superior. Recommendations are made for further work: 1. Generation of data for presently manufactured materials, e.g. SiC strength and strength distribution, PyC creep and shrinkage, and many more material data sets. 2. Renewed start of irradiation and accident testing of modern coated particle fuel. 3. Analysis of existing and newly created data with a view to demonstrating satisfactory performance at burnups beyond 10% FIMA and complete fission product retention even in accidents that exceed $1600^{\circ}C$ for a short period of time. This work should proceed at both national and international levels.

Enhancing the performance of the facial keypoint detection model by improving the quality of low-resolution facial images (저화질 안면 이미지의 화질 개선를 통한 안면 특징점 검출 모델의 성능 향상)

  • KyoungOok Lee;Yejin Lee;Jonghyuk Park
    • Journal of Intelligence and Information Systems / v.29 no.2 / pp.171-187 / 2023
  • When a person's face is recognized through a recording device such as a low-pixel surveillance camera, it is difficult to capture the face due to low image quality. In situations where a person's face is hard to recognize, problems such as failing to identify a criminal suspect or a missing person may occur. Existing studies on face recognition used refined datasets, so performance could not be measured across diverse environments. Therefore, to address poor face recognition performance on low-quality images, this paper proposes a method that first improves the quality of low-quality facial images from various environments to generate high-quality images, and then improves the performance of facial keypoint detection. To confirm the practical applicability of the proposed architecture, an experiment was conducted on a dataset in which people appear relatively small within the entire image. In addition, by choosing a facial image dataset that includes mask-wearing situations, the possibility of extension to real-world problems was explored. Measuring the performance of the keypoint detection model after image-quality improvement confirmed that face detection was enhanced by an average factor of 3.47 for images without a mask and 9.92 for images with a mask, and that the RMSE of the facial keypoints decreased by an average factor of 8.49 with a mask and 2.02 without a mask. Therefore, the applicability of the proposed method was verified by increasing the recognition rate for facial images captured at low quality through image-quality improvement.
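A minimal sketch of the two-stage pipeline the abstract describes: enhance a low-resolution face crop, then detect keypoints and score them with RMSE. The paper's actual enhancement and detection models are not named here, so `super_resolve` is a plain bicubic-upscaling stand-in and `detect_keypoints` in the usage note is a hypothetical placeholder for any facial-landmark detector.

```python
import numpy as np
import cv2  # opencv-python

def super_resolve(img: np.ndarray, scale: int = 4) -> np.ndarray:
    # Stand-in for the paper's image-quality-improvement model:
    # plain bicubic upscaling (the actual SR model is an assumption here).
    h, w = img.shape[:2]
    return cv2.resize(img, (w * scale, h * scale), interpolation=cv2.INTER_CUBIC)

def keypoint_rmse(pred: np.ndarray, gt: np.ndarray) -> float:
    # Root-mean-square error over (x, y) keypoint coordinates,
    # the before/after metric the abstract reports.
    return float(np.sqrt(np.mean(np.sum((pred - gt) ** 2, axis=1))))

# Usage sketch (detect_keypoints is hypothetical):
#   enhanced = super_resolve(low_res_face)
#   error    = keypoint_rmse(detect_keypoints(enhanced), gt_points)
```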

Classification Algorithm-based Prediction Performance of Order Imbalance Information on Short-Term Stock Price (분류 알고리즘 기반 주문 불균형 정보의 단기 주가 예측 성과)

  • Kim, S.W.
    • Journal of Intelligence and Information Systems / v.28 no.4 / pp.157-177 / 2022
  • Investors trade stocks while keeping a close watch, in real time, on the order information submitted by domestic and foreign investors through Limit Order Book information, the so-called price current provided by securities firms. Is the order information released in the Limit Order Book useful for stock price prediction? This study analyzes whether order imbalances, which appear when investors' buy and sell orders are concentrated on one side during intra-day trading, are significant predictors of future stock price movement. Using classification algorithms, this study improved the accuracy with which order imbalance information predicts the short-term price trend, that is, whether the day's closing price moves up or down. Day trading strategies are proposed using the price trends predicted by the classification algorithms, and their trading performance is analyzed empirically. Five-minute KOSPI200 Index Futures data were analyzed over 4,564 days, from January 19, 2004 to June 30, 2022. The results of the empirical analysis are as follows. First, order imbalance information has a significant impact on current stock prices. Second, order imbalance information observed in the early morning has significant forecasting power for price trends from the early morning to the market close. Third, the Support Vector Machines algorithm showed the highest prediction accuracy for the day's closing price trend using order imbalance information, at 54.1%. Fourth, order imbalance information measured early in the day had higher prediction accuracy than that measured later in the day. Fifth, day trading strategies using the classification algorithms' predicted price trends outperformed the benchmark trading strategy. Sixth, except for the K-Nearest Neighbor algorithm, all strategies using the classification algorithms showed higher average total profits than the benchmark strategy. Seventh, strategies using the predictions of the Logistic Regression, Random Forest, Support Vector Machines, and XGBoost algorithms outperformed the benchmark strategy on the Sharpe Ratio, which evaluates both profitability and risk. This study differs academically from existing studies in that it documents the economic value of the total buy and sell order volume information within the Limit Order Book. The empirical results are also valuable to market participants from a trading perspective. Future studies should improve trading-strategy performance with more accurate price predictions by extending the approach to deep learning models, which are being actively studied for stock price prediction.
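A minimal sketch, on synthetic data, of the abstract's core idea: compute an order-imbalance feature from buy/sell volumes and train an SVM to classify the closing-price direction. The imbalance formula and the data-generating process below are illustrative assumptions, not the paper's exact specification.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(42)
n = 2000
buy_vol = rng.uniform(100, 1000, n)
sell_vol = rng.uniform(100, 1000, n)

# Order imbalance: (buy - sell) / (buy + sell), one common formulation.
imbalance = (buy_vol - sell_vol) / (buy_vol + sell_vol)

# Synthetic label: imbalance weakly drives the close-up / close-down outcome.
y = (imbalance + rng.normal(0, 0.5, n) > 0).astype(int)

X = imbalance.reshape(-1, 1)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

clf = SVC(kernel="rbf").fit(X_tr, y_tr)
print("direction accuracy:", accuracy_score(y_te, clf.predict(X_te)))
```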

A Recidivism Prediction Model Based on XGBoost Considering Asymmetric Error Costs (비대칭 오류 비용을 고려한 XGBoost 기반 재범 예측 모델)

  • Won, Ha-Ram;Shim, Jae-Seung;Ahn, Hyunchul
    • Journal of Intelligence and Information Systems / v.25 no.1 / pp.127-137 / 2019
  • Recidivism prediction has been a subject of constant research by experts since the early 1970s, but it has become more important as crimes committed by recidivists steadily increase. In particular, after the US and Canada adopted the 'Recidivism Risk Assessment Report' as a decisive criterion in trials and parole screening in the 1990s, research on recidivism prediction became more active, and in the same period empirical studies on 'recidivism factors' began in Korea as well. Although most recidivism prediction studies have so far focused on the factors of recidivism or the accuracy of prediction, it is important to minimize the misclassification cost, because recidivism prediction has an asymmetric error cost structure. In general, the cost of misclassifying people who will not reoffend as likely recidivists is lower than the cost of misclassifying people who will reoffend as safe: the former only adds monitoring costs, while the latter incurs substantial social and economic costs. Therefore, in this paper, we propose an XGBoost (eXtreme Gradient Boosting; XGB) based recidivism prediction model that considers asymmetric error costs. In the first step of the model, XGB, recognized as a high-performance ensemble method in the field of data mining, was applied, and its results were compared with various prediction models such as LOGIT (logistic regression analysis), DT (decision trees), ANN (artificial neural networks), and SVM (support vector machines). In the next step, the classification threshold is optimized to minimize the total misclassification cost, the weighted average of FNE (False Negative Error) and FPE (False Positive Error). To verify the usefulness of the model, it was applied to a real recidivism prediction dataset. As a result, the XGB model not only showed better prediction accuracy than the other models but also reduced the misclassification cost most effectively.
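A minimal sketch of the abstract's second step: given predicted recidivism probabilities, scan thresholds and pick the one that minimizes the total misclassification cost under asymmetric weights (a missed recidivist, i.e. a false negative, costing more than a false positive). The cost weights below are illustrative assumptions, not the paper's values.

```python
import numpy as np

def optimal_threshold(proba, y_true, cost_fn=5.0, cost_fp=1.0):
    """Return the classification threshold with minimal weighted cost."""
    best_t, best_cost = 0.5, float("inf")
    for t in np.linspace(0.01, 0.99, 99):
        pred = (proba >= t).astype(int)
        fn = np.sum((pred == 0) & (y_true == 1))  # missed recidivists
        fp = np.sum((pred == 1) & (y_true == 0))  # over-flagged non-recidivists
        cost = cost_fn * fn + cost_fp * fp        # asymmetric total cost
        if cost < best_cost:
            best_t, best_cost = t, cost
    return best_t, best_cost

# Usage sketch with any probabilistic classifier, e.g. an XGBoost model:
#   t, c = optimal_threshold(model.predict_proba(X)[:, 1], y)
```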

Effect of Organizational Support Perception on Intrinsic Job Motivation : Verification of the Causal Effects of Work-Family Conflict and Work-Family Balance (조직지원인식이 내재적 직무동기에 미치는 영향 : 일-가정 갈등 및 일-가정 균형의 인과관계 효과 검증)

  • Yoo, Joon-soo;Kang, Chang-wan
    • Journal of Venture Innovation / v.6 no.1 / pp.181-198 / 2023
  • This study analyzes the influence of organizational support perception among workers in medical institutions on intrinsic job motivation, and examines whether the mediating effects of work-family conflict and work-family balance are significant in this process. The results of the empirical analysis through the questionnaire are as follows. First, organizational support perception had a significant positive effect on both work-family balance and intrinsic job motivation, and work-family balance had a significant positive effect on intrinsic job motivation. Second, organizational support perception had a significant negative effect on work-family conflict, but work-family conflict had no significant influence on intrinsic job motivation. Third, to reduce job stress for medical institution workers, it is necessary to reduce job intensity and assign workloads appropriate to ability; to improve manpower operation and job efficiency, job training and placement of staff in the right positions are needed. Fourth, to improve positive organizational support perception and intrinsic job motivation, it is necessary to encourage long-term service by providing support and institutional mechanisms that increase attachment to the current job, together with various incentive systems, so that employees come to treat organizational problems as their own. The limitations of this study and future research directions are as follows. First, an expanded nationwide analysis of medical institution workers by region, gender, type of medical institution, academic background, and income would not only provide more valuable results but also allow evaluation of the quality of medical services. Second, it is necessary to reflect the impact of the work-life balance support system on each employee, depending on the environmental uncertainty or degree of competition facing the hospital to which the workers belong. Third, organizational support will be perceived differently depending on organizational culture and type, organizational size, work characteristics, years of service, and work type, so this should be reflected. Fourth, it is necessary to analyze various new personnel management techniques, such as the hospital's organizational structure, job design, organizational support methods, motivational approaches, and personnel evaluation methods, in line with recent changes in the government's medical institution policy and the global business environment; analyses reflecting current and near-future medical trends are also considered important.

Improved Social Network Analysis Method in SNS (SNS에서의 개선된 소셜 네트워크 분석 방법)

  • Sohn, Jong-Soo;Cho, Soo-Whan;Kwon, Kyung-Lag;Chung, In-Jeong
    • Journal of Intelligence and Information Systems / v.18 no.4 / pp.117-127 / 2012
  • Due to the recent expansion of Web 2.0-based services, along with the widespread adoption of smartphones, online social network services are becoming popular among users. Online social network services are online community services that enable users to communicate with each other, share information, and expand human relationships. In social network services, the relations between users are represented by a graph consisting of nodes and links. As the users of online social network services increase rapidly, SNS are actively utilized in enterprise marketing, analysis of social phenomena, and so on. Social Network Analysis (SNA) is the systematic way to analyze social relationships among the members of a social network using network theory. A social network in general consists of nodes and arcs and is often depicted in a social network diagram, in which nodes represent individual actors within the network and arcs represent relationships between the nodes. With SNA, we can measure relationships among people such as degree of intimacy, intensity of connection, and classification into groups. Ever since Social Networking Services (SNS) drew the attention of millions of users, numerous studies have been made to analyze their user relationships and messages. The typical representative SNA methods are degree centrality, betweenness centrality, and closeness centrality. Degree centrality analysis does not consider the shortest path between nodes, but the shortest path is a crucial factor in betweenness centrality, closeness centrality, and other SNA methods. In previous SNA research, computation time was not a serious issue since the social networks studied were small. Unfortunately, most SNA methods require significant time to process the relevant data, which makes it difficult to apply them to ever-growing SNS data. For instance, if the number of nodes in an online social network is n, the maximum number of links is n(n-1)/2; with 10,000 nodes this is 49,995,000 links, making the network very expensive to analyze. Therefore, we propose a heuristic-based method for finding the shortest path among users in the SNS user graph. Through this shortest path finding method, we show how efficient our proposed approach can be by conducting betweenness centrality analysis and closeness centrality analysis, both widely used in social network studies. Moreover, we devised an enhanced method that adds a best-first search and a preprocessing step to reduce computation time and rapidly search shortest paths in a huge online social network. Best-first search finds the shortest path heuristically, generalizing human experience. Since a large number of links is shared by only a few nodes in online social networks, most nodes have relatively few connections; as a result, a node with many connections functions as a hub. When searching for a particular node, looking first at users with numerous links, instead of searching all users indiscriminately, has a better chance of finding the desired node quickly. In this paper, we employ the degree of a user node vn as the heuristic evaluation function in a graph G = (N, E), where N is a set of vertices and E is a set of links between two different nodes. With this heuristic evaluation function, the worst case occurs when the target node is situated at the bottom of a skewed tree; a preprocessing step is conducted to handle such cases. We then find the shortest path between two nodes in the social network efficiently and analyze the network. For verification of the proposed method, we crawled data on 160,000 people online and constructed a social network, then compared search and analysis times with previous methods based on best-first search and breadth-first search. The suggested method takes 240 seconds to search nodes where the breadth-first-search based method takes 1,781 seconds (7.4 times faster). Moreover, for social network analysis, the suggested method is 6.8 times and 1.8 times faster for betweenness centrality analysis and closeness centrality analysis, respectively. The proposed method shows the possibility of analyzing a large social network with better time performance. As a result, our method would improve the efficiency of social network analysis, making it particularly useful in studying social trends or phenomena.
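A minimal sketch of degree-guided best-first search as the abstract describes it: when expanding the frontier, prefer high-degree "hub" nodes, since most paths in an SNS graph pass through hubs. Note this is a heuristic; it tends to find a path quickly but, unlike breadth-first search, does not guarantee the shortest one. The graph representation is an illustrative assumption.

```python
import heapq

def best_first_path(graph, start, goal):
    # graph: dict mapping node -> iterable of neighbour nodes.
    # Priority is the negative degree, so hub nodes are expanded first.
    frontier = [(-len(graph[start]), start, [start])]
    visited = {start}
    while frontier:
        _, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        for nb in graph[node]:
            if nb not in visited:
                visited.add(nb)
                heapq.heappush(frontier,
                               (-len(graph.get(nb, ())), nb, path + [nb]))
    return None  # no path exists

# Example: a tiny graph where "hub" connects all other users.
g = {"a": ["hub"], "hub": ["a", "b", "c"], "b": ["hub"], "c": ["hub"]}
print(best_first_path(g, "a", "c"))  # ['a', 'hub', 'c']
```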

Context Sharing Framework Based on Time Dependent Metadata for Social News Service (소셜 뉴스를 위한 시간 종속적인 메타데이터 기반의 컨텍스트 공유 프레임워크)

  • Ga, Myung-Hyun;Oh, Kyeong-Jin;Hong, Myung-Duk;Jo, Geun-Sik
    • Journal of Intelligence and Information Systems / v.19 no.4 / pp.39-53 / 2013
  • The emergence of internet technology and SNS has increased information flow and changed the way people communicate, from one-way to two-way. Users not only consume and share information; they can also create it and share it among their friends across a social network service. Social media has thus become one of the most important communication tools, a change that also includes Social TV, a form in which people watch a TV program and at the same time share information about its content with friends through social media. Social news is gaining popularity and is known as participatory social media: it reflects societal issues through user interest on the internet and builds news credibility based on user reputation. However, conventional news service platforms focus only on news recommendation. Recent developments in SNS have changed this landscape, allowing users to share and disseminate the news, but conventional platforms provide no special way to share news. Currently, social news services only let users access the entire news item; users cannot access just the part of the content related to their interests. For example, if a user is interested in only part of the news and wants to share that part, it is still hard to do so, and in the worst case recipients might understand the news in a different context. To solve this, a social news service must provide a way to supply additional information. For example, Yovisto, an academic video search service, provides time-dependent metadata for videos: users can search and watch part of a video's content according to the time-dependent metadata and share that content with friends in social media. Yovisto divides and synchronizes a video whenever the slide presentation changes to another page. However, this method cannot be employed on news video, since news video does not incorporate any slide presentation; a segmentation method is required to separate the news video and create time-dependent metadata. In this paper, a time-dependent metadata-based framework is proposed to segment news content and provide time-dependent metadata so that users can use context information to communicate with their friends. The transcript of the news is divided using the proposed story segmentation method. We provide a tag to represent the entire content of the news and sub tags to indicate the segmented news, including the starting time of each segment. The time-dependent metadata helps users track the news information and allows them to leave a comment on each segment of the news; users may also share the news, based on the time metadata, either as segmented news or as a whole, which helps recipients understand the shared news. To demonstrate the performance, we evaluate the story segmentation accuracy and the tag generation. For this purpose, we measured the accuracy of the story segmentation through semantic similarity and compared it to a benchmark algorithm. Experimental results show that the proposed method outperforms benchmark algorithms in terms of story segmentation accuracy. Note that sub tag accuracy is the most important part of the proposed framework for sharing a specific news context with others. To extract more accurate sub tags, we created a stop-word list of terms unrelated to the content of the news, such as the names of anchors or reporters, and applied it to the framework. We then analyzed the accuracy of the tags and sub tags that represent the context of the news. From the analysis, the proposed framework appears helpful for users sharing their opinions with context information in social media and social news.
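A minimal sketch of transcript segmentation by semantic similarity, the general technique the framework's story segmentation relies on: vectorize each sentence with TF-IDF and start a new segment wherever adjacent-sentence similarity drops below a threshold. The paper's exact segmentation algorithm and threshold are not specified here; both are illustrative assumptions.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def segment_transcript(sentences, threshold=0.1):
    # One TF-IDF vector per transcript sentence.
    vecs = TfidfVectorizer().fit_transform(sentences)
    segments, current = [], [sentences[0]]
    for i in range(1, len(sentences)):
        sim = cosine_similarity(vecs[i - 1], vecs[i])[0, 0]
        if sim < threshold:          # topic shift -> close the segment
            segments.append(current)
            current = []
        current.append(sentences[i])
    segments.append(current)
    return segments

news = ["The storm hit the coast overnight.",
        "Coastal storm damage is being assessed.",
        "In sports, the home team won the final."]
print(len(segment_transcript(news)))  # expected: 2 story segments
```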

Suggestion of Urban Regeneration Type Recommendation System Based on Local Characteristics Using Text Mining (텍스트 마이닝을 활용한 지역 특성 기반 도시재생 유형 추천 시스템 제안)

  • Kim, Ikjun;Lee, Junho;Kim, Hyomin;Kang, Juyoung
    • Journal of Intelligence and Information Systems / v.26 no.3 / pp.149-169 / 2020
  • "The Urban Renewal New Deal project", one of the government's major national projects, is about developing underdeveloped areas by investing 50 trillion won in 100 locations on the first year and 500 over the next four years. This project is drawing keen attention from the media and local governments. However, the project model which fails to reflect the original characteristics of the area as it divides project area into five categories: "Our Neighborhood Restoration, Housing Maintenance Support Type, General Neighborhood Type, Central Urban Type, and Economic Base Type," According to keywords for successful urban regeneration in Korea, "resident participation," "regional specialization," "ministerial cooperation" and "public-private cooperation", when local governments propose urban regeneration projects to the government, they can see that it is most important to accurately understand the characteristics of the city and push ahead with the projects in a way that suits the characteristics of the city with the help of local residents and private companies. In addition, considering the gentrification problem, which is one of the side effects of urban regeneration projects, it is important to select and implement urban regeneration types suitable for the characteristics of the area. In order to supplement the limitations of the 'Urban Regeneration New Deal Project' methodology, this study aims to propose a system that recommends urban regeneration types suitable for urban regeneration sites by utilizing various machine learning algorithms, referring to the urban regeneration types of the '2025 Seoul Metropolitan Government Urban Regeneration Strategy Plan' promoted based on regional characteristics. There are four types of urban regeneration in Seoul: "Low-use Low-Level Development, Abandonment, Deteriorated Housing, and Specialization of Historical and Cultural Resources" (Shon and Park, 2017). In order to identify regional characteristics, approximately 100,000 text data were collected for 22 regions where the project was carried out for a total of four types of urban regeneration. Using the collected data, we drew key keywords for each region according to the type of urban regeneration and conducted topic modeling to explore whether there were differences between types. As a result, it was confirmed that a number of topics related to real estate and economy appeared in old residential areas, and in the case of declining and underdeveloped areas, topics reflecting the characteristics of areas where industrial activities were active in the past appeared. In the case of the historical and cultural resource area, since it is an area that contains traces of the past, many keywords related to the government appeared. Therefore, it was possible to confirm political topics and cultural topics resulting from various events. Finally, in the case of low-use and under-developed areas, many topics on real estate and accessibility are emerging, so accessibility is good. It mainly had the characteristics of a region where development is planned or is likely to be developed. Furthermore, a model was implemented that proposes urban regeneration types tailored to regional characteristics for regions other than Seoul. Machine learning technology was used to implement the model, and training data and test data were randomly extracted at an 8:2 ratio and used. 
In order to compare the performance between various models, the input variables are set in two ways: Count Vector and TF-IDF Vector, and as Classifier, there are 5 types of SVM (Support Vector Machine), Decision Tree, Random Forest, Logistic Regression, and Gradient Boosting. By applying it, performance comparison for a total of 10 models was conducted. The model with the highest performance was the Gradient Boosting method using TF-IDF Vector input data, and the accuracy was 97%. Therefore, the recommendation system proposed in this study is expected to recommend urban regeneration types based on the regional characteristics of new business sites in the process of carrying out urban regeneration projects."
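A minimal sketch of the 2 x 5 comparison the abstract describes: two text representations (Count Vector, TF-IDF Vector) crossed with the five named classifiers. The toy corpus and labels below are illustrative stand-ins for the study's roughly 100,000 Korean documents over four urban-regeneration types.

```python
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Tiny illustrative corpus: 0 = deteriorated housing, 1 = historic/cultural.
docs = ["old housing repair aging homes", "deteriorated housing renovation plan",
        "historic palace cultural heritage", "traditional culture tourism site",
        "aging homes housing decline", "housing repair old district",
        "cultural heritage festival history", "history museum cultural site"]
labels = [0, 0, 1, 1, 0, 0, 1, 1]

for name, vec in [("Count", CountVectorizer()), ("TF-IDF", TfidfVectorizer())]:
    X = vec.fit_transform(docs)
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, labels, test_size=0.25, random_state=0, stratify=labels)
    for clf in [SVC(), DecisionTreeClassifier(), RandomForestClassifier(),
                LogisticRegression(max_iter=1000), GradientBoostingClassifier()]:
        acc = accuracy_score(y_te, clf.fit(X_tr, y_tr).predict(X_te))
        print(f"{name:7s} {type(clf).__name__:28s} acc={acc:.2f}")
```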

Self-optimizing feature selection algorithm for enhancing campaign effectiveness (캠페인 효과 제고를 위한 자기 최적화 변수 선택 알고리즘)

  • Seo, Jeoung-soo;Ahn, Hyunchul
    • Journal of Intelligence and Information Systems / v.26 no.4 / pp.173-198 / 2020
  • For a long time, many studies in academia have addressed predicting the success of customer campaigns, and prediction models applying various techniques are still being studied. Recently, as campaign channels have expanded in various ways due to the rapid growth of online media, companies carry out many types of campaigns at a level that cannot be compared to the past. However, customers tend to perceive campaigns as spam as fatigue from duplicate exposure increases. From a corporate standpoint, the effectiveness of the campaign itself is decreasing while investment costs rise, leading to low actual campaign success rates. Accordingly, various studies are ongoing to improve campaign effectiveness in practice. Campaign systems collect and analyze various customer-related data and use them for campaigns, with the ultimate purpose of increasing the success rate of campaigns; in particular, recent attempts have been made to predict campaign responses using machine learning. Selecting appropriate features is very important because campaign data have many features. If all input data are used when classifying a large amount of data, learning time grows as the classification task expands, so a minimal input data set must be extracted from the entire data. In addition, when a model is trained with too many features, prediction accuracy may be degraded due to overfitting or correlation between features. Therefore, to improve accuracy, a feature selection technique that removes features close to noise should be applied; feature selection is a necessary step in analyzing a high-dimensional data set. Among greedy algorithms, SFS (Sequential Forward Selection), SBS (Sequential Backward Selection), and SFFS (Sequential Floating Forward Selection) are widely used as traditional feature selection techniques, but when there are many features they suffer from poor classification performance and long learning times. Therefore, in this study, we propose an improved feature selection algorithm to enhance the effectiveness of existing campaigns. The purpose of this study is to improve the existing sequential SFFS method in the process of searching for the feature subsets that underpin machine learning model performance, using statistical characteristics of the data processed in the campaign system. Features that strongly influence performance are derived first and features with a negative effect are removed, after which the sequential method is applied, increasing search efficiency and enabling generalized prediction. The proposed model showed better search and prediction performance than the traditional greedy algorithm: compared with the original data set, the greedy algorithm, a genetic algorithm (GA), and recursive feature elimination (RFE), campaign success prediction accuracy was higher. In addition, the improved feature selection algorithm was found to help analyze and interpret prediction results by providing the importance of the derived features. These included features already known statistically to be important, such as age, customer rating, and sales. Unexpectedly, features that campaign planners had rarely used to select targets, such as the combined product name, the average three-month data consumption rate, and the last three months' wireless data usage, were also selected as important for campaign response. It was confirmed that base attributes can also be very important features depending on the type of campaign, making it possible to analyze and understand the important characteristics of each campaign type.
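A minimal sketch of classic Sequential Floating Forward Selection (SFFS), the baseline method the study improves on: greedily add the feature that most improves cross-validated accuracy, then try a conditional backward removal when it helps. The scoring model, data, and iteration cap below are illustrative assumptions, not the paper's improved algorithm.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def score(X, y, feats):
    # Cross-validated accuracy of a simple model on the candidate subset.
    return cross_val_score(LogisticRegression(max_iter=1000),
                           X[:, feats], y, cv=3).mean()

def sffs(X, y, k, max_iter=50):
    selected = []
    for _ in range(max_iter):          # cap guards against oscillation
        if len(selected) >= k:
            break
        # Forward step: add the single best remaining feature.
        rest = [f for f in range(X.shape[1]) if f not in selected]
        best = max(rest, key=lambda f: score(X, y, selected + [f]))
        selected.append(best)
        # Floating (conditional exclusion) step: drop one earlier feature,
        # never the one just added, if removal strictly improves the score.
        if len(selected) > 2:
            cur = score(X, y, selected)
            for f in selected[:-1]:
                trial = [g for g in selected if g != f]
                if score(X, y, trial) > cur:
                    selected = trial
                    break
    return selected

X, y = make_classification(n_samples=300, n_features=12, n_informative=4,
                           random_state=0)
print("selected features:", sffs(X, y, k=5))
```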

Influences of the devastated forest lands on flood damages (Observed at Chonbo and the neighbouring Mt. Jook-yop area) (황폐임야(荒廢林野)가 수해참상(水害慘狀)에 미치는 영향(影響) (천보산(天寶山)과 인접(隣接) 죽엽산(竹葉山)을 중심(中心)으로))

  • Chung, In Koo
    • Journal of Korean Society of Forest Science / v.5 no.1 / pp.4-9 / 1966
  • 1. On 13 September 1964 a storm raged for 3 hours and 20 minutes with pounding heavy rainfall, and a precipitation of 287.5 mm was recorded on that day. Numerous landslides occurred in the eroded forest land neighbouring Mt. Chunbo, while no landslides were recorded at all on Mt. Jookyup, within the premises of the Kwangnung Experiment Station of the Forest Experiment Station. 2. Small-scale landslides occurred in 43 different places of the previously surveyed watershed area (21.97 ha) in and around Mt. Chunbo (378 m a.s.l.). The soil accumulated from these landslides totaled $2,146.56m^3$, while the soil accumulated from riverside erosion reached $24,168.79m^3$, consisting of soil, stones, and pebbles; the ratio of the eroded soil accumulated from the riversides to that of the watershed area was 1 to 25. No landslides occurred in the Mt. Jookyup area because of its dense forest cover. The loss and damage in the research area of Mt. Chunbo were as follows: 28 houses completely destroyed or swept away, 7 houses partially destroyed, 51 people dead, 5 missing, and 57 wounded, a terrible human disaster. In the Mt. Jookyup area, by contrast, no human casualties were recorded at all; 1 house was completely destroyed or swept away and 2 houses were partially destroyed, a total of 3 houses destroyed or damaged. 3. In calculating the quantity of accumulated soil, the formula $V=\frac{h}{3}\left(a+\sqrt{ab}+b\right)$ was used, and it showed that $24,168.79m^3$ of soil, sand, stones, and pebbles had been carried away. 4. The average slope of the stream stood at about $15^{\circ}$ at the time of the disaster, and a regression and correlation study of the length and cross-sectional area of the eroded valley found a correlation for 87% of the cross-sectional area that suffered valley erosion. In other words, soil erosion became more severe toward the downstream, at places of average slope ($15^{\circ}1^{\prime}$) and below; the relation between the length and cross-sectional area of the eroded valley can be expressed as $Y=ax-b$. 5. Sites of charcoal pits were found in the upper part of the desert-like Mt. Chunbo, and professional opinion holds that the mountain was once covered by oak tree species. Furthermore, a study of soil cross-sections showed that the soils of both mountains belong to the same soil system; in other words, the forest type and soil type of Mt. Chunbo (378 m) and Mt. Jookyup (610 m) were originally the same. However, Mt. Chunbo has been much more devastated than Mt. Jookyup and has lost its soil nutrients, to the extent that the ratios of N, $P_2O_5$, $K_2O$, humus, and C.E.C. between the two mountains are 1:10 and 1:5, respectively. 6. Mt. Chunbo has been mostly eroded over the past 30 years: the upper part of the mountain consists of gravels of 2 mm or larger, while at the lower foot a sandy loam has formed from the fine soil carried down and accumulated. Mt. Jookyup, on the other hand, has consistently kept the same forest type and a brown sandy loam in both its upper and lower parts. 7. As for the capability of the surface soil to absorb and hold maximum moisture, the ratios of wet soil to dry soil are 42.8% on the hillside and lower part of the eroded Mt. Chunbo and 28.5% in its upper part. By contrast, Mt. Jookyup, whose forest type has not changed, shows ratios of 77.4% on the hillside and 68.2% in the upper part, approximately twice as much moisture as Mt. Chunbo. This proves that forest lands with dense forest cover are much more capable of retaining water through wood, vegetation, and organic material, and that their strength in preventing surface soil from being carried away is great, owing to the vigorous network of root systems. 8. As shown above, devastated forest land causes not only much greater physical devastation but also human loss and property damage. We must bear in mind that the eroded forest land has lost its valuable soil, the very origin of existence for both human beings and all other creatures. As a prescription for preventing erosion of forest land, fertilizing tree species have to be planted on the hillsides with at least a reasonable amount of fertilizer, in order to restore soil fertility, while in the lower parts thorough erosion control, reforestation, and riverside works have to be carried out, so as to restore the forest type.
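A worked sketch of the soil-volume formula used in point 3, $V=\frac{h}{3}(a+\sqrt{ab}+b)$, which estimates the volume between two parallel cross-sections of areas a and b a distance h apart (a prismatoid approximation). The a, b, h values below are purely hypothetical; the survey's actual measurements are not given in the abstract.

```python
import math

def prismatoid_volume(a: float, b: float, h: float) -> float:
    """Volume between two cross-sections of areas a and b, distance h apart."""
    return h / 3.0 * (a + math.sqrt(a * b) + b)

# Hypothetical valley segment: end areas 12.5 m^2 and 30.0 m^2, 40 m apart.
print(f"{prismatoid_volume(12.5, 30.0, 40.0):.2f} m^3")  # ~824.87 m^3
```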
