• Title/Summary/Keyword: sliding system


Environmental Interpretation on soil mass movement spot and disaster dangerous site for precautionary measures -in Peong Chang Area- (산사태발생지(山沙汰發生地)와 피해위험지(被害危險地)의 환경학적(環境學的) 해석(解析)과 예방대책(豫防對策) -평창지구(平昌地區)를 중심(中心)으로-)

  • Ma, Sang Kyu
    • Journal of Korean Society of Forest Science
    • /
    • v.45 no.1
    • /
    • pp.11-25
    • /
    • 1979
  • There was much mass movement on many mountain sides of the Peong Chang area in Kwangwon province under the influence of the heavy rainfall of August 4-5, 1979. This study is based on facts observed through a field survey and on information from former researchers. The results are as follows: 1. Heavy-rainfall areas, with more than 200 mm per day and a maximum intensity of more than 60 mm per hour during the past six years, are distributed on the western side of the line connecting Hoeng Seong, Weonju, Yeongdong, Muju, Namweon and Suncheon, and along the southern coast of KeongsangNam-do. The heavy rainfall in these areas appears to be influenced by the mountain ranges and the moving direction of depressions. 2. The peak of the heavy rainfall always occurred during the night and appears to directly cause mass movement and serious damage. 3. Soil mass movement in Peongchang broke out from the coarse sandy loam soils of the granite group and the clay soils of limestone and shale. The earth moved along the bedrock surface in both cases, and also along the hardpan in the limestone area. 4. Infiltration appears to be rapid in the soils over both bedrock types, in the former because of the soil texture and in the latter because of the crumb structure, high humus content and dense root system of the surface soil. 5. The topographic pattern of mass movement spots is mostly a concave slope at a valley head or at the upper part of a middle slope, where run-off easily converges from the surrounding slopes. Soil profiles of mass movement spots show wet soil in the limestone area and loose or deep soil in the granite area. 6. The dominant slope of soil mass movement sites is steep, mostly more than 25 degrees, and the slope position where mass movement starts lies mostly between the middle slope line and the ridge line. 7. The vegetation of soil mass movement areas is mostly fire-field agriculture land, abandoned grassland from such fields, young plantations made on fire fields, poor forest on erosion-control sites, and non-forest land composed mainly of grass and shrubs. Earth sliding is very rare in stands of big trees and, where found there, occurs mostly on thin soil over un-weathered bedrock. 8. The dangerous condition for soil mass movement and land sliding can be estimated from several environmental factors, namely vegetation cover, slope degree, slope shape and position, bedrock and soil profile characteristics. 9. House collapses mostly happen on the following sites: colluvial cones and fans, talus, the foot of concave slopes, and small terraces or colluvial soil between valleys and beside small rivers. Houses endangered by mass movement can be identified from aerial photos with reference to the surrounding site conditions of houses and villages in the mountain area. 10. As a counter-plan for the prevention of mass movement damage, techniques for risk diagnosis and field surveys should be carried out, and mass movement prevention and control work should be started with government support as soon as possible. Precautionary measures for protecting houses and villages from mass movement damage should be prepared and executed, including the creation of protective forests around houses and villages. 11. The danger or safety of houses and villages with respect to mass movement and flood damage should be identified and communicated to the village people of the mountain area through forest extension work. 12. Clear cutting on steep granite sites, fire-field making on steep slopes, construction of houses or villages on dangerous sites, and fuel collection in eroded or steep forest land should be strictly prohibited. When making a management plan, mass movement, soil erosion and flood problems should be considered and disaster prevention methods included.
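
Purely as an illustration of point 8, the following is a speculative sketch of how the listed environmental factors might be combined into a rough qualitative risk rating. The weights, thresholds, category names and cut-offs are assumptions for illustration only; the paper gives no such scoring scheme.

```python
# Hypothetical rule-based rating built from the factors named in the abstract.
# All numeric weights and thresholds below are illustrative assumptions.
def landslide_risk(slope_deg, slope_shape, slope_position, vegetation, bedrock, soil_depth_m):
    score = 0
    score += 2 if slope_deg > 25 else 0                          # steep slopes dominate (point 6)
    score += 2 if slope_shape == "concave" else 0                # valley-head concave slopes (point 5)
    score += 1 if slope_position in ("middle", "upper") else 0   # middle slope to ridge line (point 6)
    score += 2 if vegetation in ("fire_field", "grassland", "young_plantation", "shrub") else 0  # point 7
    score += 1 if bedrock in ("granite", "limestone") else 0     # source materials observed (point 3)
    score += 1 if soil_depth_m > 1.0 else 0                      # loose or deep soil profiles (point 5)
    return "high" if score >= 6 else "moderate" if score >= 3 else "low"

print(landslide_risk(30, "concave", "upper", "fire_field", "granite", 1.5))  # -> high
```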


The Adaptive Personalization Method According to Users Purchasing Index : Application to Beverage Purchasing Predictions (고객별 구매빈도에 동적으로 적응하는 개인화 시스템 : 음료수 구매 예측에의 적용)

  • Park, Yoon-Joo
    • Journal of Intelligence and Information Systems
    • /
    • v.17 no.4
    • /
    • pp.95-108
    • /
    • 2011
  • This is a study of a personalization method that intelligently adapts the level of clustering to the purchasing index of a customer. In the e-business era, many companies gather customers' demographic and transactional information such as age, gender, purchasing date and product category. They use this information to predict customers' preferences or purchasing patterns so that they can provide more customized services. The conventional Customer-Segmentation method provides customized services for each customer group: it clusters the whole customer set into groups based on similarity and builds a predictive model for each resulting group. It thus keeps the number of predictive models manageable and provides more data for customers who do not have enough of their own to build a good predictive model, by borrowing the data of similar customers. However, this method often fails to provide highly personalized services to each customer, which is especially important for VIP customers. Furthermore, it clusters the customers who already have a considerable amount of data together with those who have only a little, which increases computational cost unnecessarily without a significant performance improvement. The other conventional method, the 1-to-1 method, provides more customized services than the Customer-Segmentation method because the predictive model is built using only the data of the individual customer. It not only provides highly personalized services but also builds a relatively simple and less costly model for each customer. However, the 1-to-1 method does not produce a good predictive model when a customer has only a few transactions; if the amount of transactional data is insufficient, its performance deteriorates. To overcome the limitations of these two conventional methods, we suggest a new method, the Intelligent Customer Segmentation method, which provides adaptively personalized services according to the customer's purchasing index. The suggested method clusters customers according to their purchasing index, so that predictions for customers who purchase less are based on the data of more intensively clustered groups, while VIP customers, who already have a considerable amount of data, are clustered to a much lesser extent or not clustered at all. The main idea is to apply clustering when the number of transactional records of the target customer is smaller than a predefined criterion data size. To find this criterion, we suggest an algorithm called sliding window correlation analysis. The algorithm aims to find the transactional data size at which the performance of the 1-to-1 method drops sharply due to data sparsity. After finding this criterion data size, we apply the conventional 1-to-1 method to customers who have more data than the criterion, and apply the clustering technique to those who have less, until they can use at least the criterion amount of data in the model-building process. We apply the two conventional methods and the newly suggested method to Nielsen's beverage purchasing data to predict the customers' purchasing amounts and purchasing categories. We use two data mining techniques (Support Vector Machine and Linear Regression) and two performance measures (MAE and RMSE) to predict the two dependent variables. The results show that the suggested Intelligent Customer Segmentation method outperforms the conventional 1-to-1 method in many cases and produces the same level of performance as the Customer-Segmentation method at much lower computational cost.
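
A minimal sketch of the adaptive idea described above, assuming a criterion data size has already been determined (the paper obtains it with its sliding window correlation analysis; here it is simply hard-coded): customers with at least that many transactions get an individual 1-to-1 model, while data-poor customers are clustered and share a per-cluster model. The synthetic data, criterion value and all names are illustrative assumptions, not the paper's setup.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(0)

# Synthetic history per customer: a feature matrix X and purchase amounts y.
n_customers, criterion = 50, 20   # criterion: assumed output of the sliding window correlation analysis
data = {}
for c in range(n_customers):
    X = rng.normal(size=(int(rng.integers(5, 60)), 3))
    y = X @ np.array([1.5, -0.7, 2.0]) + rng.normal(scale=0.5, size=len(X))
    data[c] = (X, y)

models = {}
small = [c for c, (X, _) in data.items() if len(X) < criterion]   # to be clustered
large = [c for c in data if c not in small]                       # get 1-to-1 models

# 1-to-1 models for customers with enough history.
for c in large:
    X, y = data[c]
    models[c] = LinearRegression().fit(X, y)

# Cluster the data-poor customers on a simple profile (mean feature vector)
# and fit one shared model per cluster from the pooled transactions.
if small:
    profiles = np.array([data[c][0].mean(axis=0) for c in small])
    labels = KMeans(n_clusters=min(5, len(small)), n_init=10, random_state=0).fit_predict(profiles)
    for k in set(labels):
        members = [c for c, lab in zip(small, labels) if lab == k]
        X = np.vstack([data[c][0] for c in members])
        y = np.concatenate([data[c][1] for c in members])
        shared = LinearRegression().fit(X, y)
        for c in members:
            models[c] = shared

# Sanity check: training MAE for one data-rich and one data-poor customer.
for c in (large[0], small[0]):
    X, y = data[c]
    print(f"customer {c}: MAE = {mean_absolute_error(y, models[c].predict(X)):.3f}")
```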

Intensity Modulated Radiation Therapy Commissioning and Quality Assurance: Implementation of AAPM TG119 (세기조절방사선치료(IMRT)의 Commissioning 및 정도관리: AAPM TG119 적용)

  • Ahn, Woo-Sang;Cho, Byung-Chul
    • Progress in Medical Physics
    • /
    • v.22 no.2
    • /
    • pp.99-105
    • /
    • 2011
  • The purpose of this study is to evaluate the accuracy of IMRT in our clinic based on the TG119 procedure and to establish action levels. The five IMRT test cases described in TG119 (multi-target, head & neck, prostate, and two C-shapes, easy and hard) were used and delivered to a water-equivalent solid phantom. The absolute dose at points in the target and OAR was measured with an ion chamber (CC13, IBA). EBT2 film was used to compare the measured two-dimensional dose distribution with the one calculated by the treatment planning system. All collected data were analyzed using the TG119 specifications to determine confidence limits. The mean relative error (%) between measured and calculated values was 1.2±1.1% for targets and 1.2±0.7% for OARs, giving confidence limits of 3.4% and 2.6%, respectively. In the EBT2 film dosimetry, the average percentage of points passing the gamma criteria (3%/3 mm) was 97.7±0.8%, and the confidence limit determined from the film analysis was 3.9%. This study focused on IMRT commissioning and quality assurance based on the TG119 guideline. It is concluded that the action levels are ±4% for targets, ±3% for OARs, and 97% for film measurements. The TG119-based procedure is expected to serve as a reference for evaluating the accuracy of IMRT at each institution.
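
For reference, the TG119 confidence limit behind the numbers above is CL = |mean error| + 1.96 × SD, applied to (100 − passing rate) for gamma analysis. The short sketch below (the helper name is ours) plugs in the summary statistics quoted in the abstract and reproduces its reported limits.

```python
# TG119 confidence-limit formula: CL = |mean error| + 1.96 * standard deviation.
def confidence_limit(mean_error_pct, sd_pct):
    """Confidence limit (%) from the mean and standard deviation of per-case errors."""
    return abs(mean_error_pct) + 1.96 * sd_pct

print(f"target point dose : {confidence_limit(1.2, 1.1):.1f}%")          # ~3.4%
print(f"OAR point dose    : {confidence_limit(1.2, 0.7):.1f}%")          # ~2.6%
# For film gamma analysis the formula is applied to (100 - passing rate):
print(f"film gamma        : {confidence_limit(100 - 97.7, 0.8):.1f}%")   # ~3.9%
```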

A Deep Learning Based Approach to Recognizing Accompanying Status of Smartphone Users Using Multimodal Data (스마트폰 다종 데이터를 활용한 딥러닝 기반의 사용자 동행 상태 인식)

  • Kim, Kilho;Choi, Sangwoo;Chae, Moon-jung;Park, Heewoong;Lee, Jaehong;Park, Jonghun
    • Journal of Intelligence and Information Systems
    • /
    • v.25 no.1
    • /
    • pp.163-177
    • /
    • 2019
  • As smartphones have become widely used, human activity recognition (HAR) tasks for recognizing the personal activities of smartphone users from multimodal data have been actively studied in recent years. The research area is expanding from recognition of the simple body movements of an individual user to recognition of low-level and high-level behavior. However, HAR tasks for recognizing interaction behavior with other people, such as whether the user is accompanying or communicating with someone else, have received less attention so far. In addition, previous research on recognizing interaction behavior has usually depended on audio, Bluetooth, and Wi-Fi sensors, which are vulnerable to privacy issues and require much time to collect enough data. In contrast, physical sensors such as the accelerometer, magnetic field and gyroscope sensors are less vulnerable to privacy issues and can collect a large amount of data within a short time. In this paper, a method for detecting accompanying status with a deep learning model that uses only multimodal physical sensor data (accelerometer, magnetic field and gyroscope) was proposed. Accompanying status was defined as a subset of user interaction behavior, namely whether the user is accompanying an acquaintance at a close distance and whether the user is actively conversing with that acquaintance. A framework based on convolutional neural networks (CNN) and long short-term memory (LSTM) recurrent networks for classifying accompaniment and conversation was proposed. First, a data preprocessing method was introduced, consisting of time synchronization of the multimodal data from the different physical sensors, data normalization and sequence data generation. Nearest-neighbor interpolation was applied to synchronize the timestamps of data collected from different sensors. Normalization was performed for each x, y, z axis value of the sensor data, and sequence data were generated with the sliding window method. The sequence data then became the input to the CNN, which extracted feature maps representing local dependencies of the original sequence. The CNN consisted of 3 convolutional layers and had no pooling layer, so the temporal information of the sequence data was maintained. Next, the LSTM recurrent network received the feature maps, learned long-term dependencies from them and extracted features. The LSTM network consisted of two layers, each with 128 cells. Finally, the extracted features were used for classification by a softmax classifier. The loss function of the model was the cross-entropy function, and the weights of the model were randomly initialized from a normal distribution with mean 0 and standard deviation 0.1. The model was trained with the adaptive moment estimation (ADAM) optimization algorithm and a mini-batch size of 128. Dropout was applied to the input values of the LSTM recurrent network to prevent overfitting. The initial learning rate was set to 0.001 and decayed exponentially by a factor of 0.99 at the end of each training epoch. An Android smartphone application was developed and released to collect data, and smartphone data were collected for a total of 18 subjects. Using these data, the model classified accompaniment and conversation with 98.74% and 98.83% accuracy, respectively. Both the F1 score and the accuracy of the model were higher than those of a majority vote classifier, a support vector machine, and a deep recurrent neural network. In future research, we will focus on more rigorous multimodal sensor data synchronization methods that minimize timestamp differences. In addition, we will further study transfer learning methods that allow a model trained on the training data to transfer to evaluation data that follows a different distribution. We expect to obtain a model with robust recognition performance against changes in the data that were not considered in the model learning stage.
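
A minimal PyTorch sketch of the architecture described above, for orientation only: three convolutional layers without pooling, a two-layer LSTM with 128 cells, a softmax classifier trained with cross entropy and ADAM, dropout on the LSTM input, normal(0, 0.1) weight initialization, and an exponential learning-rate decay of 0.99 per epoch. The window length, channel count, kernel sizes and dropout rate are assumptions not fixed in the abstract.

```python
import torch
import torch.nn as nn

class AccompanyingNet(nn.Module):
    def __init__(self, n_channels=9, n_classes=2):
        super().__init__()
        # Convolutions over the time axis; no pooling, so the sequence length is preserved.
        self.cnn = nn.Sequential(
            nn.Conv1d(n_channels, 64, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv1d(64, 64, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv1d(64, 64, kernel_size=5, padding=2), nn.ReLU(),
        )
        self.dropout = nn.Dropout(0.5)  # dropout on the LSTM input (rate assumed)
        self.lstm = nn.LSTM(input_size=64, hidden_size=128, num_layers=2, batch_first=True)
        self.fc = nn.Linear(128, n_classes)  # softmax is applied inside CrossEntropyLoss
        for p in self.parameters():          # weights ~ N(0, 0.1) as in the abstract
            if p.dim() > 1:
                nn.init.normal_(p, mean=0.0, std=0.1)

    def forward(self, x):                    # x: (batch, window, channels)
        h = self.cnn(x.transpose(1, 2))      # -> (batch, 64, window)
        h = self.dropout(h.transpose(1, 2))  # -> (batch, window, 64)
        out, _ = self.lstm(h)
        return self.fc(out[:, -1, :])        # logits from the last time step

model = AccompanyingNet()
criterion = nn.CrossEntropyLoss()            # cross-entropy loss
optimizer = torch.optim.Adam(model.parameters(), lr=0.001)
scheduler = torch.optim.lr_scheduler.ExponentialLR(optimizer, gamma=0.99)  # step once per epoch

# One illustrative training step on random sliding-window batches (batch size 128 as in the paper).
x = torch.randn(128, 128, 9)                 # (batch, window of 128 samples, 9 sensor axes)
y = torch.randint(0, 2, (128,))
loss = criterion(model(x), y)
optimizer.zero_grad()
loss.backward()
optimizer.step()
scheduler.step()                             # in a full loop, call this at the end of each epoch
print(float(loss))
```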