• Title/Abstract/Keyword: Development Considerations (개발고려요소)


Directions of Implementing Documentation Strategies for Local Regions (지역 기록화를 위한 도큐멘테이션 전략의 적용)

  • Seol, Moon-Won
    • The Korean Journal of Archival Studies
    • /
    • no.26
    • /
    • pp.103-149
    • /
    • 2010
  • Documentation strategy has been experimented with in various subject areas and local regions since the late 1980s, when it was proposed by archival communities in the United States as a method of archival appraisal and selection. Though it was criticized as too idealistic, it is worth shedding new light on the strategy's potential for documenting local regions in the digital environment. The purpose of this study is to analyse the implementation issues of documentation strategy and to suggest directions for documenting local regions of Korea through its application. The documentation strategy, developed more than twenty years ago mostly in Western countries, offers several implications for documenting local regions even in current digital environments. Firstly, documentation strategy can enhance the value of archivists as well as archives in local regions, because under the strategy the archivist should be an active shaper of history rather than a passive receiver of archives. It can also be a solution for overcoming the poor conditions of local archives management in Korea. Secondly, the strategy can encourage cooperation between collecting institutions in each local region, including museums, libraries, archives, cultural centers, and history institutions. In a networked environment this cooperation can be achieved more effectively than in the traditional environment, where cooperating institutions bore a heavy workload. Thirdly, the strategy can facilitate solidarity among various groups in a local region. Analysis of strategy projects shows that collecting the knowledge, passion, and enthusiasm of related groups is essential to implementing the strategy effectively. It can also provide a methodology for minority groups in society to document their memories.
This study suggests directions for documenting local regions in consideration of the current archival infrastructure of Korea, as follows. Firstly, highly selective and intensive documentation should be pursued rather than comprehensive documentation. Though deciding which subjects have priority for documentation is a very political problem, the interests of local community members as well as professional groups should be considered seriously in the decision-making process. Secondly, it is effective to plan an integrated representation of local history over the distributed custody of local archives. It would be desirable to implement an archival gateway for integrated search and representation of local archives regardless of their location. Thirdly, it is necessary to attempt digital documentation using Web 2.0 technologies. Documentation strategy, as a methodology of selecting and acquiring archives, cannot completely avoid the subjectivity and prejudices of the appraiser. To mitigate this, an open documentation system should be prepared to reflect the different interests of different groups. Fourthly, it is desirable to apply the conspectus model used in cooperative collection management of libraries to documenting local regions digitally. A conspectus can show existing documentation strength and future documentation intensity for each participating institution. Using this, documentation levels for each subject area can be set cooperatively and effectively in local regions.

Development of a complex failure prediction system using Hierarchical Attention Network (Hierarchical Attention Network를 이용한 복합 장애 발생 예측 시스템 개발)

  • Park, Youngchan;An, Sangjun;Kim, Mintae;Kim, Wooju
    • Journal of Intelligence and Information Systems
    • /
    • v.26 no.4
    • /
    • pp.127-148
    • /
    • 2020
  • The data center is a physical facility for accommodating computer systems and related components, and is an essential foundation for next-generation core industries such as big data, smart factories, wearables, and smart homes. In particular, with the growth of cloud computing, proportional expansion of data center infrastructure is inevitable. Monitoring the health of data center facilities is a way to maintain and manage the system and prevent failure. If a failure occurs in some element of the facility, it may affect not only the relevant equipment but also other connected equipment, and may cause enormous damage. IT facility failures in particular are irregular because of interdependence, and their causes are difficult to determine. Previous studies on predicting data center failures treated each server as a single, isolated state, without assuming that devices interact. In this study, therefore, data center failures were classified into failures occurring inside the server (Outage A) and failures occurring outside the server (Outage B), and the analysis focused on complex failures occurring within the server. Server-external failures include power, cooling, and user errors; since such failures can be prevented in the early stages of data center construction, various solutions are already being developed. On the other hand, the cause of failures occurring inside the server is difficult to determine, and adequate prevention has not yet been achieved. In particular, server failures rarely occur singly: a failure may trigger failures in other servers, or be triggered by something received from another server. In other words, while existing studies analyzed failure on the assumption of a single server that does not affect other servers, this study assumes that failures have effects between servers.
To define the complex failure situation in the data center, failure history data for each piece of equipment in the data center was used. Four major failure types are considered in this study: Network Node Down, Server Down, Windows Activation Services Down, and Database Management System Service Down. The failures occurring on each device are sorted in chronological order, and when a failure occurs on one piece of equipment, any failure occurring on another piece of equipment within 5 minutes is defined as occurring simultaneously. After constructing sequences of the devices that failed at the same time, five devices that frequently failed together within the constructed sequences were selected, and the cases in which the selected devices failed simultaneously were confirmed through visualization. Since the server resource information collected for failure analysis is a time series with temporal flow, Long Short-Term Memory (LSTM), a deep learning algorithm that can predict the next state from previous states, was used. In addition, unlike the single-server case, the Hierarchical Attention Network model structure was used to reflect the fact that the severity of multiple failures differs from server to server. This approach increases prediction accuracy by giving greater weight to servers with greater impact on the failure. The study began by defining the failure types and selecting the analysis targets. In the first experiment, the same collected data was treated once as a single-server state and once as a multiple-server state, and the two were compared. The second experiment improved prediction accuracy for the complex-server case by optimizing the threshold for each server.
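The 5-minute co-occurrence rule described above can be sketched as a simple event-grouping pass. This is a hypothetical illustration, not the authors' code: the event timestamps, device names, and the chaining variant of the rule (each event is compared with the previous one in the group) are assumptions.

```python
from datetime import datetime, timedelta

def group_simultaneous(events, window_minutes=5):
    """Group (timestamp, device) failure events: events within
    window_minutes of the previous event are treated as simultaneous."""
    events = sorted(events, key=lambda e: e[0])
    groups, current = [], []
    for ts, device in events:
        # start a new group when the gap to the last event exceeds the window
        if current and ts - current[-1][0] > timedelta(minutes=window_minutes):
            groups.append(current)
            current = []
        current.append((ts, device))
    if current:
        groups.append(current)
    return groups

failures = [
    (datetime(2020, 1, 1, 10, 0), "server-A"),
    (datetime(2020, 1, 1, 10, 3), "server-B"),  # within 5 min of A: same group
    (datetime(2020, 1, 1, 11, 0), "server-C"),  # far from the others: new group
]
groups = group_simultaneous(failures)
print([len(g) for g in groups])  # [2, 1]
```

Sequences of co-failing devices, as in the paper, would then be read off from the resulting groups.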
In the first experiment, which assumed a single server and multiple servers in turn, the single-server model predicted that three of the five servers had no failure even though failures actually occurred, whereas under the multiple-server assumption all five servers were correctly predicted to have failed. This result supports the hypothesis that servers affect one another: prediction performance was superior when multiple servers were assumed rather than a single server. In particular, applying the Hierarchical Attention Network algorithm, on the assumption that each server's influence differs, improved the analysis, and applying a different threshold for each server further improved prediction accuracy. This study showed that failures whose causes are difficult to determine can be predicted from historical data, and presents a model that can predict failures occurring on servers in data centers. It is expected that failures can be prevented in advance using these results.
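The server-level attention idea, weighting servers by their impact on the failure, can be sketched in a few lines. This is a minimal illustration of attention pooling, not the paper's model: the shapes, the random per-server LSTM summaries `H`, and the context vector `w` are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_servers, hidden = 5, 8
H = rng.normal(size=(n_servers, hidden))  # per-server LSTM summary vectors
w = rng.normal(size=hidden)               # learned attention context vector

scores = H @ w                            # one relevance score per server
alpha = np.exp(scores - scores.max())     # numerically stable softmax
alpha /= alpha.sum()                      # attention weights, sum to 1
state = alpha @ H                         # attended data-center representation

print(alpha.round(3))
```

Servers whose summaries align with the context vector receive larger weights, so a server that contributes more to the complex failure dominates the pooled representation fed to the final classifier.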

Dynamical Study on the Blasting with One-Free-Face to Utilize AN-FO Explosives (초유폭약류(硝油爆藥類)를 활용(活用)한 단일자유면발파(單一自由面發破)의 역학적(力學的) 연구(硏究))

  • Huh, Ginn
    • Economic and Environmental Geology
    • /
    • v.5 no.4
    • /
    • pp.187-209
    • /
    • 1972
  • Drilling position is one of the most important factors affecting blasting effects. There have been many reports on blasting factors of the burn-cut by Brown and Cook, but in this study the author compared the drilling positions of the burn-cut with those of the pyramid-cut, and also correlated burn-cut effects of drilling patterns not dealt with in Prof. Ito's theory, which emphasized dynamical stress analysis between the explosion and the free face. According to earlier theories, in blasting with one free face an additional tensile stress, reflected at the free face, supplements the primary compressive stress. With the new burn-cut drilling patterns tested here, however, more free faces and shorter distances between drill holes produce greater blasting effects than any other method. To promote this explosive effect rationally, two considerations are important. First, the unloaded hole among the key holes should be drilled with as wide a diameter as possible so that it produces greater stress relief. Second, the key holes should be drilled as close to each other as possible to give a clean blast. These two factors derive from the experiments together with the theory that a larger unloaded-hole diameter allows wider secondary free faces, and that closer hole spacing develops more stress relief between the loaded and unloaded holes. In the U.S.A. the ideal distance between holes has been suggested to be about 4 clearances, but according to the author's experiments, the smaller the spacing, the more effective the blast, with increased broken rock volume and longer drifted length. The large-hole burn-cut method was developed to increase drifting length under the above considerations, and its progressive success achieved a maximum of 7 blasting cycles per day with 3.1 m of drifting length per cycle.
This achievement made high-speed drifting work possible, and it was also proven that applying Metallic AN-FO to the large-hole burn-cut method overcomes the resistance of a single free face. AN-FO, favored for its low price and safe handling, is a mixture of fertilizer-grade or industrial ammonium nitrate and fuel oil. Experience shows that it is insensitive before initiation, but once initiated by a booster it has explosive power equal to that of ammonium nitrate explosives (ANE). There have been many reports on AN-FO. As for the mixing ratio, these experiments indicate that 93.5 : 6.5 for powdered AN-FO and 94 : 6 for prilled AN-FO are the best ratios. Its detonation, shock, and friction sensitivities are all lower than those of other explosives, and the residual gas is non-toxic. In initiation and propagation-of-detonation tests, prilled AN-FO is more effective than powdered AN-FO. AN-FO has its best explosive power 7 days after mixing. While AN-FO had previously been used mainly at open pits, the author developed new improved explosives, Metallic AN-FO and an underwater explosive, based on experiments on these fundamental characteristics. Metallic AN-FO is a mixture of AN-FO with Al and Fe-Si powder, and the underwater explosive is made from a conventional explosive and AN-FO; both are described in another paper. This study confirms that the blasting effects obtained by utilizing AN-FO explosives are very good.


An Analytical Study on Stem Growth of Chamaecyparis obtusa (편백(扁栢)의 수간성장(樹幹成長)에 관(關)한 해석적(解析的) 연구(硏究))

  • An, Jong Man;Lee, Kwang Nam
    • Journal of Korean Society of Forest Science
    • /
    • v.77 no.4
    • /
    • pp.429-444
    • /
    • 1988
  • Considering the recent trend toward multiple-use development of forest trees, comprehensive information on young stands of Hinoki cypress is necessary for rational forest management. From this point of view, 83 sample trees were selected and felled from 23-year-old stands of Hinoki cypress at Changsung-gun, Chonnam-do. Various stem growth factors of the felled trees were measured, and canonical correlation analysis, principal component analysis, and factor analysis were applied to investigate stem growth characteristics and the relationships among stem growth factors, and to extract latent and comprehensive information. The results are as follows. The canonical correlation coefficient between stem volume and the quality growth factors was 0.9877. The coefficients of the canonical variates showed that DBH among the diameter growth factors and height among the height growth factors had important effects on stem volume. Analysis of the relationship between stem volume and the canonical variates, which linearly combine DBH and height as one set, showed that DBH had a greater influence on volume growth than height. The 1st-2nd principal components were adopted to meet the effectiveness criterion of 85% in the principal component analysis of the 12 stem growth factors; they had a cumulative contribution rate of 88.10%. The 1st and 2nd principal components were interpreted as a "size factor" and a "shape factor", respectively. From the summed proportions of the efficient principal components for each variate, the variates other than crown diameter, clear length, and form height were explained at more than 87%. Two common factors were set by the eigenvalues obtained from the SMC (squared multiple correlation) in the diagonal elements of the canonical matrix. There were two latent factors, $f_1$ and $f_2$; the former was interpreted as the nature of the diameter growth system.
In the inherent structure of the 12 growth factors, the communalities, except for clear length and crown diameter, had great explanatory power of 78.62-98.30%. The 83 sample trees could be classified into 5 stem types: a medium type within ${\pm}1$ standard deviation of the factor scores, a uniform type in diameter and height growth in the 1st quadrant, a slim type in the 2nd quadrant, a dwarfish type in the 3rd quadrant, and a full-boled type in the 4th quadrant.
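The principal-component step above, extracting components from the correlation matrix of the growth factors and reading off cumulative contribution rates (analogous to the 88.10% reported for the first two components), can be sketched as follows. The data here is synthetic and the four-variable setup is an assumption for illustration, not the paper's 12 measured factors.

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 83, 4                            # 83 sample trees, 4 illustrative factors
size = rng.normal(size=n)               # a common "size" driver
X = np.column_stack(
    [size + 0.1 * rng.normal(size=n) for _ in range(p)]
)                                       # correlated growth-factor measurements

R = np.corrcoef(X, rowvar=False)        # correlation matrix of the factors
eigvals = np.linalg.eigvalsh(R)[::-1]   # eigenvalues, largest first
contribution = eigvals / eigvals.sum()  # proportion of variance per component
cumulative = np.cumsum(contribution)    # cumulative contribution rate
print(cumulative.round(3))
```

Because the synthetic columns share one driver, the first component carries most of the variance, mirroring the "size factor" interpretation of the paper's first component.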


Ensemble Learning with Support Vector Machines for Bond Rating (회사채 신용등급 예측을 위한 SVM 앙상블학습)

  • Kim, Myoung-Jong
    • Journal of Intelligence and Information Systems
    • /
    • v.18 no.2
    • /
    • pp.29-45
    • /
    • 2012
  • Bond rating is regarded as an important event for measuring the financial risk of companies and for determining investors' returns. As a result, predicting companies' credit ratings by applying statistical and machine learning techniques has been a popular research topic. Statistical techniques, including multiple regression, multiple discriminant analysis (MDA), logistic models (LOGIT), and probit analysis, have traditionally been used in bond rating. However, one major drawback is that they rest on strict assumptions: linearity, normality, independence among predictor variables, and pre-existing functional forms relating the criterion variables to the predictor variables. These strict assumptions have limited their application in the real world. Machine learning techniques used in bond rating prediction models include decision trees (DT), neural networks (NN), and the Support Vector Machine (SVM). SVM in particular is recognized as a new and promising classification and regression method. SVM learns a separating hyperplane that maximizes the margin between two categories; it is simple enough to be analyzed mathematically and achieves high performance in practical applications. SVM implements the structural risk minimization principle and seeks to minimize an upper bound on the generalization error. In addition, the SVM solution may be a global optimum, so overfitting is unlikely to occur. SVM also does not require many training samples, since it builds prediction models using only representative samples near the boundaries, called support vectors. A number of experimental studies have shown that SVM has been successfully applied in a variety of pattern recognition fields. However, there are three major drawbacks that can degrade SVM's performance.
First, SVM was originally proposed for binary classification problems. Methods for combining SVMs for multi-class classification, such as One-Against-One and One-Against-All, have been proposed, but they do not perform as well in multi-class problems as SVM does in binary classification. Second, approximation algorithms (e.g., decomposition methods, the sequential minimal optimization algorithm) can be used to reduce computation time in multi-class settings, but they can deteriorate classification performance. Third, a difficulty in multi-class prediction is the data imbalance problem, which arises when the number of instances in one class greatly outnumbers that in another. Such data sets often produce a default classifier with a skewed boundary, reducing classification accuracy. SVM ensemble learning is one machine learning approach to coping with these drawbacks. Ensemble learning is a method for improving the performance of classification and prediction algorithms, and AdaBoost is one of the most widely used ensemble learning techniques. It constructs a composite classifier by sequentially training classifiers while increasing the weights on misclassified observations through iterations. Observations incorrectly predicted by previous classifiers are chosen more often than those correctly predicted, so boosting attempts to produce new classifiers that better predict the examples on which the current ensemble performs poorly. In this way, it can reinforce training on misclassified observations of the minority class. This paper proposes multiclass Geometric Mean-based Boosting (MGM-Boost) to resolve the multi-class prediction problem.
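The AdaBoost reweighting step described above, increasing the weight of misclassified observations each round, can be sketched with the multi-class (SAMME-style) update. This is an illustration of standard AdaBoost, not the paper's MGM-Boost; the toy labels and the 3-class setting are assumptions.

```python
import numpy as np

y_true = np.array([0, 1, 1, 2, 0, 2])
y_pred = np.array([0, 1, 2, 2, 0, 0])       # this round misclassifies indices 2 and 5
w = np.full(len(y_true), 1 / len(y_true))   # start from uniform observation weights

miss = (y_pred != y_true)
err = w[miss].sum()                         # weighted error of the current classifier
K = 3                                       # number of classes
alpha = np.log((1 - err) / err) + np.log(K - 1)  # classifier weight (SAMME)

w = w * np.exp(alpha * miss)                # boost weights on misclassified points
w /= w.sum()                                # renormalize to a distribution
print(w.round(3))
```

After the update the two misclassified observations carry the largest weights, so the next classifier in the sequence concentrates on them.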
Since MGM-Boost introduces the notion of the geometric mean into AdaBoost, its learning process can account for geometric mean-based accuracy and errors across classes. This study applies MGM-Boost to a real-world bond rating case for Korean companies to examine its feasibility. 10-fold cross-validation is performed three times with different random seeds to ensure that the comparison among the three classifiers does not happen by chance. In each 10-fold cross-validation, the entire data set is first partitioned into ten equal-sized sets, and each set is in turn used as the test set while the classifier trains on the other nine sets; the cross-validated folds are thus tested independently for each algorithm. Through these steps, results were obtained for the classifiers on each of the 30 experiments. In terms of arithmetic mean-based prediction accuracy, MGM-Boost (52.95%) outperforms both AdaBoost (51.69%) and SVM (49.47%). MGM-Boost (28.12%) also outperforms AdaBoost (24.65%) and SVM (15.42%) in terms of geometric mean-based prediction accuracy. A t-test is used to examine whether the performance of each classifier over the 30 folds differs significantly; the results indicate that the performance of MGM-Boost differs significantly from that of the AdaBoost and SVM classifiers at the 1% level. These results show that MGM-Boost can provide robust and stable solutions to multi-class problems such as bond rating.
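The geometric mean-based accuracy used for evaluation can be illustrated as the geometric mean of per-class recalls, a common definition which an imbalanced "default" classifier cannot inflate by predicting only the majority class. This is a hedged sketch of the metric's usual form, not necessarily the paper's exact formula; the toy labels are invented.

```python
import numpy as np

def geometric_mean_accuracy(y_true, y_pred):
    """Geometric mean of per-class recalls."""
    classes = np.unique(y_true)
    recalls = [
        np.mean(y_pred[y_true == c] == c)   # recall of class c
        for c in classes
    ]
    return float(np.prod(recalls) ** (1 / len(recalls)))

y_true = np.array([0, 0, 0, 0, 1, 1])
majority = np.zeros(6, dtype=int)           # always predicts the majority class
print(geometric_mean_accuracy(y_true, majority))  # 0.0: class 1 recall is zero
```

A majority-class classifier scores 67% in arithmetic accuracy on these labels but 0 in geometric-mean accuracy, which is why the metric separates MGM-Boost from plain AdaBoost and SVM so sharply in the reported results.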