Conclusion of Conventions on Compensation for Damage Caused by Aircraft in Flight to Third Parties (항공운항 시 제3자 피해 배상 관련 협약 채택 -그 혁신적 내용과 배경 고찰-)

  • Park, Won-Hwa
    • The Korean Journal of Air & Space Law and Policy
    • /
    • v.24 no.1
    • /
    • pp.35-58
    • /
    • 2009
  • A treaty governing compensation for damage caused by aircraft to third parties on the surface was first adopted in Rome in 1933, but for lack of support from the international aviation community it was replaced by another convention adopted, again in Rome, in 1952. Despite an increased compensation amount and other improvements over the old version, the Rome Convention 1952, with 49 State parties as of today, is not considered universally accepted. Neither is the Montreal Protocol 1978 amending the Rome Convention 1952, with only 12 State parties excluding major aviation powers such as the USA, Japan, the UK, and Germany. Consequently, it is mostly local laws that apply to compensation cases for surface damage caused by aircraft, contrary to the intention of the countries and people who involved themselves in drafting the early conventions on surface damage. The terrorist attacks of 9/11 proved that even the strongest power in the world, the USA, cannot easily bear all the damage done to third parties by terrorist acts involving aircraft. Accordingly, as a matter of urgency, the International Civil Aviation Organization (ICAO) took up the matter and had it considered among member States for several years through its Legal Committee before proposing it for adoption at the Diplomatic Conference held in Montreal, Canada, from 20 April to 2 May 2009. Two treaties based on the drafts of the Legal Committee were adopted in Montreal by consensus: one on compensation for general-risk damage caused by aircraft, the other on compensation for damage from acts of unlawful interference involving aircraft. Both Conventions improve on the old Convention/Protocol in many respects. Deleting 'surface' from the definition of third-party damage in the title and contents of the Conventions is the first improvement, because third-party damage is not necessarily limited to the surface of the soil and sea of the Earth; mid-air collisions now fall within the scope of application. Raising the compensation limit substantially is another improvement, as is the inclusion of mental injury accompanying bodily injury among the damage to be compensated. In fact, recent jurisprudence in cases involving passengers in aircraft accidents holds aircraft operators liable for such mental injuries. However, the "Terror Convention" addressing unlawful interference with aircraft contains some provisions that are innovative and others that are problematic. While establishing the International Civil Aviation Compensation Fund to supplement, when necessary, damages that exceed the limit to be covered by aircraft operators through insurance is an innovation, leaving the fate of the Convention to a single State Party, implying in fact the USA, harms its universality. Furthermore, given that damage incurred by terrorist acts, wherever they take place and whichever sector or industry they target, is a matter of State responsibility, imposing the burden of compensation for terrorist acts in the air industry on aircraft operators and passengers/shippers is a source of serious concern for the prospects of the Convention. This is all the more so when the risks of terrorist acts, normally aimed at a few countries because of the current international political situation, are spread out to many innocent countries without quid pro quo.

The Current Status of the Discussions on International Norms Related to Space Activities in the UN COPUOS Legal Subcommittee (우주활동 국제규범에 관한 유엔 우주평화적이용위원회 법률소위원회의 최근 논의 현황)

  • Jung, Yung-Jin
    • The Korean Journal of Air & Space Law and Policy
    • /
    • v.29 no.1
    • /
    • pp.127-160
    • /
    • 2014
  • The UN COPUOS was established in 1959 as a permanent committee of the UN General Assembly with the aims of promoting international cooperation in the peaceful uses of outer space, formulating space-related programmes within the UN, encouraging research and the dissemination of information on space, and studying legal problems arising from outer space activities. Its membership has grown from 24 States in 1959 to 76 in 2014. The Legal Subcommittee, established under COPUOS in 1962 to deal with legal problems associated with space activities, set up through its first three decades of work a framework of international space law: the five treaties and agreements - namely the Outer Space Treaty, the Rescue Agreement, the Liability Convention, the Registration Convention, and the Moon Agreement - and the five declarations and legal principles. However, some sceptical views on this legal framework have been expressed, concerning the applicability of existing international space law to practical issues and to newly emerging kinds of space activities. UNISPACE III, which took place in 1999, served as a momentum to revitalize discussion of the legal issues faced by the international community in outer space activities. The agenda of the Legal Subcommittee is currently structured into three categories: regular items, single issues/items, and items considered under a multi-year workplan. The regular items, which deal with basic legal issues, include the definition and delimitation of outer space, the status and application of the five UN treaties on outer space, and national legislation relevant to the peaceful exploration and use of outer space. The single issues/items, which are decided upon in the preceding year, are discussed in the plenary for one year only unless renewed; they include items related to the use of nuclear power sources in outer space and to space debris mitigation. The agenda items considered under a multi-year work plan are discussed in working groups and deal with non-legally binding UN instruments on outer space and international mechanisms for cooperation. In recent years, the Subcommittee has made progress on agenda items related to nuclear power sources, space debris, and international cooperation by establishing non-legally binding instruments, or soft law. The Republic of Korea became a member State of COPUOS in 2001, after rotating a seat every two years with Cuba and Peru since 1994. Korea joined COPUOS rather late, considering that some countries with hardly any space activity, such as Chad, Sierra Leone, Kenya, Lebanon, and Cameroon, joined COPUOS as early as the 1960s and 1970s and contributed to the drafting of the aforementioned treaties, declarations, and legal principles. Given the difficulty of concluding a treaty and the urgency of regulating newly emerging space activities, the Legal Subcommittee now focuses its effort on developing soft law, such as resolutions and guidelines, to be adopted by the UN General Assembly. In order to have its own practice reflected in international practice, one of the constituent elements of customary international law, Korea should analyse its technical capabilities, policies, and laws related to outer space activities and participate actively in the formation of such soft law.

A Study on Aviation Safety and Third Country Operator of EU Regulation in light of the Convention on International Civil Aviation (시카고협약체계에서의 EU의 항공법규체계 연구 - TCO 규정을 중심으로 -)

  • Lee, Koo-Hee
    • The Korean Journal of Air & Space Law and Policy
    • /
    • v.29 no.1
    • /
    • pp.67-95
    • /
    • 2014
  • Some Contracting States of the Chicago Convention issue FAOCs (Foreign Air Operator Certificates) and conduct various safety assessments of the foreign operators flying into their territory. These FAOCs and safety audits of foreign operators are being extended to other parts of the world. While this trend strengthens aviation safety and helps reduce aircraft accidents, FAOC requirements also burden the other Contracting States to the Chicago Convention with additional requirements and delayed permissions. EASA (European Aviation Safety Agency) is a body governed by the European Basic Regulation. EASA was set up in 2003 and conducts specific regulatory and executive tasks in the field of civil aviation safety and environmental protection. EASA's mission is to promote the highest common standards of safety and environmental protection in civil aviation. The tasks of EASA have been expanded from airworthiness to air operations and currently include the rulemaking and standardization of airworthiness, air crew, air operations, TCO, ATM/ANS safety oversight, aerodromes, etc. According to the Implementing Rule, Commission Regulation (EU) No 452/2014, EASA has the mandate to issue safety authorizations to commercial air carriers from outside the EU as of 26 May 2014. Third country operators (TCO) flying to any of the 28 EU Member States and/or the 4 EFTA States (Iceland, Norway, Liechtenstein, and Switzerland) must apply to EASA for a so-called TCO authorization. EASA takes over only the safety-related part of foreign operator assessment; operating permits continue to be issued by the national authorities. A 30-month transition period ensures smooth implementation without interrupting the international air operations of foreign carriers to the EU/EASA States. Operators currently flying to Europe can continue to do so, but must submit an application for a TCO authorization before 26 November 2014. After the transition period, which lasts until 26 November 2016, a valid TCO authorization will be a mandatory prerequisite, in the absence of which an operating permit cannot be issued by a Member State. The European TCO authorization regime does not, in principle, differentiate between scheduled and non-scheduled commercial air transport operations: all TCOs conducting commercial air transport must apply for a TCO authorization. Operators that may need to operate to the EU in the near future are advised to apply for a TCO authorization in due course, even when the date of operations is unknown. For all the issues mentioned above, I have studied the functions of EASA and EU regulations, including the newly introduced TCO Implementing Rule, and suggested some proposals. I hope that this paper will 1) help with the preparation of TCO authorization applications, 2) aid understanding of the international issues, 3) contribute to the improvement of Korean aviation regulations and government organizations, and 4) support compliance with international standards, thereby contributing to the promotion of aviation safety.

A Study on Developing a VKOSPI Forecasting Model via GARCH Class Models for Intelligent Volatility Trading Systems (지능형 변동성트레이딩시스템개발을 위한 GARCH 모형을 통한 VKOSPI 예측모형 개발에 관한 연구)

  • Kim, Sun-Woong
    • Journal of Intelligence and Information Systems
    • /
    • v.16 no.2
    • /
    • pp.19-32
    • /
    • 2010
  • Volatility plays a central role in both academic and practical applications, especially in pricing financial derivative products and trading volatility strategies. This study presents a novel mechanism based on generalized autoregressive conditional heteroskedasticity (GARCH) models that is able to enhance the performance of intelligent volatility trading systems by predicting Korean stock market volatility more accurately. In particular, we embedded the concept of volatility asymmetry, documented widely in the literature, into our model. The newly developed Korean stock market volatility index of the KOSPI 200, the VKOSPI, is used as a volatility proxy. It is the price of a linear portfolio of KOSPI 200 index options and measures the effect of the expectations of dealers and option traders on stock market volatility over 30 calendar days. The KOSPI 200 index options market started in 1997 and has become the most actively traded market in the world; its daily trading volume of more than 10 million contracts is the highest of all stock index option markets. Therefore, analyzing the VKOSPI has great importance for understanding the volatility inherent in option prices and can offer trading ideas to futures and option dealers. Using the VKOSPI as a volatility proxy avoids the statistical estimation problems associated with other measures of volatility, since the VKOSPI is the model-free expected volatility of market participants, calculated directly from transacted option prices. This study estimates symmetric and asymmetric GARCH models for the KOSPI 200 index from January 2003 to December 2006 by the maximum likelihood procedure. The asymmetric GARCH models include the GJR-GARCH model of Glosten, Jagannathan and Runkle, the exponential GARCH model of Nelson, and the power autoregressive conditional heteroskedasticity (ARCH) model of Ding, Granger and Engle; the symmetric model is the basic GARCH(1,1). Tomorrow's forecasted value and change direction of stock market volatility are obtained by recursive GARCH specifications from January 2007 to December 2009 and are compared with the VKOSPI. Empirical results indicate that negative unanticipated returns increase volatility more than positive return shocks of equal magnitude, indicating the existence of volatility asymmetry in the Korean stock market. The point value and change direction of tomorrow's VKOSPI are estimated and forecasted by the GARCH models. A volatility trading system is developed using the forecasted change direction of the VKOSPI: if tomorrow's VKOSPI is expected to rise, a long straddle or strangle position is established; a short straddle or strangle position is taken if the VKOSPI is expected to fall. Total profit is calculated as the cumulative sum of the VKOSPI percentage changes: if the forecasted direction is correct, the absolute value of the VKOSPI percentage change is added to the trading profit; otherwise it is subtracted. For the in-sample period, the power ARCH model fits best on a statistical metric, Mean Squared Prediction Error (MSPE), while the exponential GARCH model shows the highest Mean Correct Prediction (MCP) rate. The power ARCH model also fits best for the out-of-sample period and provides the highest probability of predicting the direction of tomorrow's VKOSPI change; overall, the power ARCH model shows the best fit for the VKOSPI. All the GARCH models provide trading profits for the volatility trading system, and the exponential GARCH model shows the best performance, an annual profit of 197.56%, during the in-sample period. The GARCH models also produce trading profits during the out-of-sample period, except for the exponential GARCH model; in that period the power ARCH model shows the largest annual trading profit, 38%. The volatility clustering and asymmetry found in this research reflect volatility non-linearity. This further suggests that combining the asymmetric GARCH models with artificial neural networks could significantly enhance the performance of the suggested volatility trading system, since artificial neural networks have been shown to model nonlinear relationships effectively.
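
As a hedged illustration of the recursive forecast-and-trade loop described above, the Python sketch below uses the open-source `arch` package: an expanding-window GARCH fit yields a one-day-ahead volatility forecast, and a rise relative to the last fitted conditional volatility serves as a simple proxy for "VKOSPI expected to rise tomorrow." This is not the authors' code; `returns` (daily KOSPI 200 returns in %) and `vkospi` (aligned VKOSPI closes) are assumed inputs, and setting `o=1` switches from plain GARCH(1,1) to the GJR specification.

```python
import numpy as np
import pandas as pd
from arch import arch_model  # pip install arch

def direction_profit(returns: pd.Series, vkospi: pd.Series,
                     vol: str = "GARCH", o: int = 0) -> float:
    """Recursive one-day-ahead volatility forecasts drive a straddle rule:
    go long volatility if a rise is forecast, short otherwise; profit is the
    cumulative signed absolute VKOSPI percentage change (as in the paper)."""
    profit = 0.0
    for t in range(250, len(returns) - 1):             # expanding window
        res = arch_model(returns.iloc[:t], vol=vol, p=1, o=o, q=1).fit(disp="off")
        sigma_next = np.sqrt(res.forecast(horizon=1).variance.values[-1, 0])
        sigma_last = np.asarray(res.conditional_volatility)[-1]
        expect_rise = sigma_next > sigma_last          # proxy for VKOSPI direction
        pct = (vkospi.iloc[t + 1] / vkospi.iloc[t] - 1.0) * 100.0
        profit += abs(pct) if expect_rise == (pct > 0) else -abs(pct)
    return profit

# Hypothetical usage, comparing symmetric GARCH(1,1) with asymmetric GJR-GARCH:
# print(direction_profit(returns, vkospi, "GARCH", o=0))
# print(direction_profit(returns, vkospi, "GARCH", o=1))
```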

A Study on Drainage Facilities in Mountainous Urban Neighborhood Parks - The Cases of Baebongsan Park and Ogeum Park in Seoul - (산지형 도시근린공원의 배수시설 특성 - 서울시 배봉산공원과 오금공원을 사례로 -)

  • Lee, Sang-Suk
    • Journal of the Korean Institute of Landscape Architecture
    • /
    • v.38 no.5
    • /
    • pp.80-92
    • /
    • 2010
  • The purpose of this study was to analyze the drainage facilities in two mountainous urban neighborhood parks in Seoul: Baebongsan Park and Ogeum Park. Based on an analysis of the existing drainage facilities, the volume of storm water runoff (VSW), the runoff rate of open channels (ROC), and the detention capacity of open channels (DCOC) for each drainage watershed, the coefficient of runoff rate (CROC), which evaluates ROC relative to VSW, and the coefficient of the detention capacity of open channels (CDCOC), which evaluates DCOC relative to VSW, were estimated and analyzed by park and by watershed. The results are as follows. 1. The total drainage area of Baebongsan Park was 34.13ha, comprising surface runoff area (15.05ha; 44.09%), open channel area (14.60ha; 42.78%), and natural waterway area (4.48ha; 13.13%). The total drainage area of Ogeum Park was 20.39ha, comprising open channel area (10.14ha; 49.73%), ridge-side gutter area (7.17ha; 35.16%), surface runoff area (2.52ha; 12.36%), and natural waterway area (0.56ha; 2.75%). The share of surface runoff was comparatively higher in Baebongsan Park, while the share of artificial drainage area was higher in Ogeum Park. 2. The drainage districts of Baebongsan Park were divided into comparatively large units: its VSW was $7.28\,m^3/s$ in total (average $0.23\,m^3/s$). By comparison, the VSW of Ogeum Park, which comprises smaller drainage districts, was $4.37\,m^3/s$ in total (average $0.12\,m^3/s$). 3. The ROC of Baebongsan Park was $11.58\,m^3/s$ in total (average $0.77\,m^3/s$) and its CROC was 5.26, while the ROC of Ogeum Park was $15.40\,m^3/s$ (average $0.34\,m^3/s$) and its CROC was 8.87, higher than that of Baebongsan Park. Because the open channels in Baebongsan Park were larger and steeper, its average ROC was larger; the CROC of Ogeum Park was higher because its VSW was comparatively lower. 4. The DCOC of Baebongsan Park was $554.54\,m^3$ with an average CDCOC of 179.83, while that of Ogeum Park was $717.74\,m^3$ with an average CDCOC of 339.69, meaning that the DCOC of Ogeum Park was much higher and its drainage facilities were built intensively. This study focused on the capacity of drainage facilities in mountainous urban neighborhood parks, using the CROC to evaluate ROC against VSW and the CDCOC to evaluate DCOC against VSW. The devised methodology and coefficients for evaluating drainage facilities in mountainous urban neighborhood parks may become universally applicable through additional study. Further study on sustainable urban drainage systems for retaining rainwater and enhancing ecological value is required in the near future.
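
As a heavily hedged sketch only (the abstract does not state which design formulas the authors used), the snippet below assumes the common rational method, $Q = CiA/360$ with $Q$ in $m^3/s$, rainfall intensity $i$ in mm/hr, and area $A$ in ha, for per-watershed storm-water runoff, and treats a CROC-like coefficient as a simple capacity-to-runoff ratio; every input value is invented for illustration.

```python
def rational_runoff(c: float, i_mm_per_hr: float, area_ha: float) -> float:
    """Peak storm-water runoff (m^3/s) of one watershed by the rational method."""
    return c * i_mm_per_hr * area_ha / 360.0

def capacity_ratio(roc: float, vsw: float) -> float:
    """Open-channel runoff capacity relative to storm-water runoff (CROC-like)."""
    return roc / vsw

# e.g., a 15 ha watershed with runoff coefficient 0.6 under an 80 mm/hr storm:
q = rational_runoff(0.6, 80.0, 15.0)    # 2.0 m^3/s
print(q, capacity_ratio(3.2, q))        # a channel sized at 3.2 m^3/s -> ratio 1.6
```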

Solution Phase Photolyses of Substituted Diphenyl Ether Herbicides under Simulated Environmental Conditions (모조(模造) 환경조건하(環境條件下)에서의 치환(置換) Diphenyl Ether 제초제(除草劑)의 광분해(光分解)에 관(關)한 연구(硏究))

  • Lee, Jae-Koo
    • Applied Biological Chemistry
    • /
    • v.17 no.3
    • /
    • pp.149-176
    • /
    • 1974
  • Eight substituted diphenyl ether herbicides and some of their photoproducts were studied in terms of solution phase photolysis under simulated environmental conditions using a Rayonet photochemical reactor. The test compounds absorbed sufficient light energy at a wavelength of 300 nm to undergo various photoreactions. All the photoproducts were confirmed by means of TLC, GLC, IR, MS, and/or NMR spectrometry. The results obtained are summarized as follows. Solution phase photolysis of C-6989: an exceedingly large amount of p-nitrophenol formed, strongly indicating that cleavage of the ether linkage is the main reaction of this compound in all solvents used. Photoreduction of nitro to amino group(s) and photooxidation of the trifluoromethyl to a carboxyl group were recognized as minor reactions. Aqueous photolysis of p-nitrophenol: quinone (0.28%), hydroquinone (0.66%), and p-aminophenol (0.42%) were confirmed as photoproducts, in addition to a relatively small amount of an unknown compound. The mechanisms of formation of these products were proposed to be the nitro-nitrite rearrangement via $n \rightarrow \pi^*$ excitation and photoreduction through hydrogen abstraction by radicals, respectively. Solution phase photolysis of Nitrofen: photochemical reduction leading to the p-amino derivative was the main reaction in n-hexane. In aqueous solution, photoreduction of the nitro to an amino group and hydroxylation predominated over cleavage of the ether linkage. Nucleophilic displacement of the nitro group by hydroxide ion and replacement of chlorine substituents by a hydroxyl group or, to a lesser extent, hydrogen were also observed as minor reactions. Solution phase photolysis of MO-338: photoreduction of the nitro to an amino group was marked in the n-hexane photolysis. In aqueous solution, photoreduction of the nitro substituent and hydroxylation were the main reactions, with replacement of chlorine substituents by a hydroxyl group or hydrogen, and cleavage of the ether linkage, as minor reactions. Photolyses of MC-4379, MC-3761, MC-5127, MC-6063, and MC-7181 in n-hexane and cyclohexane: photoreduction of the nitro group leading to the corresponding amino derivative and replacement of one of the halogen substituents by hydrogen from the solvent were the key reactions for each compound. Aqueous photolysis of MC-4379: cleavage of the ether linkage, replacement of the carboxymethyl by a hydroxyl group, hydroxylation, and replacement of the nitro by a hydroxyl group were prominent, with photoreduction and dechlorination as minor reactions. Aqueous photolysis of MC-3761: cleavage of the ether linkage, replacement of the carboxymethyl by a hydroxyl group, and photoreduction followed by hydroxylation were the main reactions. Aqueous photolysis of MC-5127: replacement of the carboxyethyl by hydrogen was predominant, with ether linkage cleavage, photoreduction, and dechlorination as minor reactions. It was obvious that decarboxyethylation proceeded more readily than the decarboxymethylation occurring in the other compounds. Aqueous photolysis of MC-6063: cleavage of the ether linkage and photodechlorination were the main reactions. Aqueous photolysis of MC-7181: replacement of the carboxymethyl group by hydrogen and monodechlorination were the notable reactions; cleavage of the ether linkage and hydroxylation were thought to be minor reactions. Aqueous photolysis of 3-carboxymethyl-4-nitrophenol: the photo-induced Fries rearrangement common to aromatic esters did not appear to occur in the carboxymethyl group of this type of compound; conversion of the nitro to a nitroso group was the main reaction.

Studies on $\beta$-Tyrosinase - Part 2. On the Synthesis of Halo-tyrosine by $\beta$-Tyrosinase - ($\beta$-Tyrosinase에 관한 연구 -제2보 $\beta$-Tyrosinase에 의한 Halogen화(化) Tyrosine의 합성(合成)-)

  • Kim, Chan-Jo;Nagasawa, Toru;Tani, Yoshiki;Yamada, Hideaki
    • Applied Biological Chemistry
    • /
    • v.22 no.4
    • /
    • pp.198-209
    • /
    • 1979
  • L-Tyrosine, 2-chloro-L-tyrosine, 2-bromo-L-tyrosine, and 2-iodo-L-tyrosine were synthesized by $\beta$-tyrosinase obtained from cells of Escherichia intermedia A-21, through the reversal of the $\alpha,\beta$-elimination reaction, and their molecular structures were determined by elemental analysis, NMR spectroscopy, mass spectrometry, and IR spectroscopy. The rates of synthesis and hydrolysis of halogenated tyrosines by $\beta$-tyrosinase, the inhibition of the enzyme activity by halogenated phenols, and the effects of the addition of m-bromophenol on the synthesis of 2-bromotyrosine were determined. The results were as follows: 1) In the synthesis of halogenated tyrosines, the yield of 2-chlorotyrosine from m-chlorophenol was approximately 15 per cent, that of 2-bromotyrosine from m-bromophenol 13.8 per cent, and that of 2-iodotyrosine from m-iodophenol 9.8 per cent. 2) The rate of synthesis of halogenated tyrosines by $\beta$-tyrosinase was slower than that of tyrosine, and the rates decreased in the order of chlorine, bromine, and iodine, that is, with increasing atomic radius. The relative rate of 2-chlorotyrosine synthesis was 28.2, that of 2-bromotyrosine 8.13, and that of 2-iodotyrosine 0.98, against 100 for tyrosine. However, 3-iodotyrosine was not synthesized by the enzyme. 3) The relative rate of 2-chlorotyrosine hydrolysis by $\beta$-tyrosinase was 70.7, that of 2-bromotyrosine 39.0, and that of 2-iodotyrosine 12.6, against 100 for tyrosine. The rate of hydrolysis likewise decreased in the order of chlorine, bromine, and iodine, that is, with increasing atomic radius or decreasing electronegativity. 3-Iodotyrosine was not hydrolyzed by the enzyme. 4) The activity of $\beta$-tyrosinase was markedly inhibited by phenol. Of the halogenated phenols, o- and m-chlorophenol and o-bromophenol inhibited the enzyme markedly, whereas inhibition by iodophenol was not strong. Plotting by the Lineweaver-Burk method showed mixed-type inhibition by m-chlorophenol, with a Ki value of $5.46 \times 10^{-4}\,M$. 5) During the enzymatic synthesis of 2-bromotyrosine, sequential addition of the substrate m-bromophenol in small amounts at intervals resulted in a better yield of the product. 6) The halogenated tyrosines produced by $\beta$-tyrosinase from pyruvate, ammonia, and m-halogenated phenols were analyzed by elemental analysis, NMR spectroscopy, mass spectrometry, and IR spectroscopy to determine their molecular structures; the results indicated that they were 2-chloro-L-tyrosine, 2-bromo-L-tyrosine, and 2-iodo-L-tyrosine, respectively.
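
To make the Lineweaver-Burk inhibition step concrete, here is a hypothetical `scipy` sketch that fits a standard mixed-type inhibition rate law, $v = V_{max}S / (K_m(1 + I/K_i) + S(1 + I/K_i'))$, to synthetic rate data; it only illustrates how a $K_i$ near the reported $5.46 \times 10^{-4}\,M$ could be estimated and is not the authors' procedure. All numbers below are made up.

```python
import numpy as np
from scipy.optimize import curve_fit

def mixed_inhibition(X, vmax, km, ki, ki_prime):
    """Mixed-type inhibition: v = Vmax*S / (Km*(1 + I/Ki) + S*(1 + I/Ki'))."""
    s, i = X                                             # substrate, inhibitor (M)
    return vmax * s / (km * (1 + i / ki) + s * (1 + i / ki_prime))

s = np.tile([1e-4, 2e-4, 5e-4, 1e-3, 2e-3], 2)           # substrate series
i = np.repeat([0.0, 5e-4], 5)                            # without / with inhibitor
v = mixed_inhibition((s, i), 1.0, 3e-4, 5.46e-4, 1e-3)   # synthetic "true" rates
v += np.random.default_rng(0).normal(0.0, 0.005, v.size) # measurement noise

popt, _ = curve_fit(mixed_inhibition, (s, i), v, p0=(1.0, 1e-4, 1e-4, 1e-3))
print(dict(zip(["Vmax", "Km", "Ki", "Ki'"], popt)))      # Ki recovered near 5.5e-4
```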

Ensemble Learning with Support Vector Machines for Bond Rating (회사채 신용등급 예측을 위한 SVM 앙상블학습)

  • Kim, Myoung-Jong
    • Journal of Intelligence and Information Systems
    • /
    • v.18 no.2
    • /
    • pp.29-45
    • /
    • 2012
  • Bond rating is regarded as an important event for measuring the financial risk of companies and for determining the investment returns of investors. As a result, predicting companies' credit ratings by applying statistical and machine learning techniques has been a popular research topic. The statistical techniques, including multiple regression, multiple discriminant analysis (MDA), logistic models (LOGIT), and probit analysis, have traditionally been used in bond rating. However, one major drawback is that they rest on strict assumptions: linearity, normality, independence among predictor variables, and pre-existing functional forms relating the criterion variables and the predictor variables. These strict assumptions have limited the application of traditional statistics to the real world. Machine learning techniques used in bond rating prediction models include decision trees (DT), neural networks (NN), and the Support Vector Machine (SVM). SVM in particular is recognized as a new and promising classification and regression method. SVM learns a separating hyperplane that maximizes the margin between two categories. SVM is simple enough to be analyzed mathematically and leads to high performance in practical applications. SVM implements the structural risk minimization principle and seeks to minimize an upper bound on the generalization error. In addition, the solution of SVM may be a global optimum, so overfitting is unlikely to occur. SVM also does not require many data samples for training, since it builds prediction models using only the representative samples near the boundaries, called support vectors. A number of experimental studies have indicated that SVM has been successfully applied in a variety of pattern recognition fields. However, three major drawbacks can degrade SVM's performance. First, SVM was originally proposed for binary-class classification problems; methods for combining SVMs for multi-class classification, such as One-Against-One and One-Against-All, have been proposed, but they do not improve performance in multi-class problems as much as SVM does in binary-class classification. Second, approximation algorithms (e.g., decomposition methods, the sequential minimal optimization algorithm) can reduce the computation time of multi-class problems, but may deteriorate classification performance. Third, multi-class prediction suffers from the data imbalance problem, which occurs when the number of instances in one class greatly outnumbers that in another class; such data sets skew the decision boundary, often yield a default classifier, and thus reduce classification accuracy. SVM ensemble learning is one way to cope with these drawbacks. Ensemble learning is a method for improving the performance of classification and prediction algorithms. AdaBoost is one of the most widely used ensemble learning techniques. It constructs a composite classifier by sequentially training classifiers while increasing the weights on misclassified observations through iterations; observations incorrectly predicted by previous classifiers are chosen more often than correctly predicted ones. Boosting thus attempts to produce new classifiers that better predict the examples on which the current ensemble performs poorly, and in this way reinforces the training of misclassified minority-class observations. This paper proposes a multiclass Geometric Mean-based Boosting algorithm (MGM-Boost) to resolve the multiclass prediction problem. Since MGM-Boost introduces the notion of the geometric mean into AdaBoost, its learning process considers geometric mean-based accuracy and errors across the classes. This study applies MGM-Boost to a real-world bond rating case for Korean companies to examine its feasibility. Ten-fold cross-validation is performed three times with different random seeds to ensure that the comparison among the three classifiers does not happen by chance. For each 10-fold cross-validation, the entire data set is first partitioned into ten equal-sized sets, and each set is in turn used as the test set while the classifier trains on the other nine; cross-validated folds are thus tested independently for each algorithm. Through these steps, results were obtained for the classifiers on each of the 30 experiments. In the comparison of arithmetic mean-based prediction accuracy, MGM-Boost (52.95%) shows higher accuracy than both AdaBoost (51.69%) and SVM (49.47%). MGM-Boost (28.12%) also shows higher accuracy than AdaBoost (24.65%) and SVM (15.42%) in terms of geometric mean-based prediction accuracy. A t-test is used to examine whether the performance of each classifier over the 30 folds differs significantly; the results indicate that the performance of MGM-Boost differs significantly from the AdaBoost and SVM classifiers at the 1% level. These results mean that MGM-Boost can provide robust and stable solutions to multi-class problems such as bond rating.
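
The evaluation protocol above (three repetitions of 10-fold cross-validation, scored by both arithmetic accuracy and the geometric mean of per-class recalls) can be sketched with scikit-learn. The snippet below is a hedged illustration of that protocol using off-the-shelf AdaBoost and SVM baselines on synthetic imbalanced data; it does not implement the authors' MGM-Boost, and the data set is a stand-in, not the Korean bond-rating data.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier
from sklearn.metrics import make_scorer, recall_score
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.svm import SVC

def geometric_mean_accuracy(y_true, y_pred):
    """Geometric mean of per-class recalls: drops to 0 if any class is ignored,
    so it penalizes classifiers that sacrifice minority rating classes."""
    recalls = recall_score(y_true, y_pred, average=None)
    return float(np.prod(recalls) ** (1.0 / len(recalls)))

# Synthetic, imbalanced stand-in for the (unavailable) bond-rating data set:
X, y = make_classification(n_samples=600, n_classes=5, n_informative=8,
                           weights=[0.4, 0.3, 0.15, 0.1, 0.05], random_state=0)

gmean = make_scorer(geometric_mean_accuracy)
for name, clf in [("AdaBoost", AdaBoostClassifier(n_estimators=100)),
                  ("SVM", SVC(kernel="rbf"))]:
    scores = []
    for seed in (0, 1, 2):                              # three repetitions
        cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=seed)
        scores.extend(cross_val_score(clf, X, y, cv=cv, scoring=gmean))
    print(name, round(float(np.mean(scores)), 4))       # mean over 30 folds
```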

Measuring the Public Service Quality Using Process Mining: Focusing on N City's Building Licensing Complaint Service (프로세스 마이닝을 이용한 공공서비스의 품질 측정: N시의 건축 인허가 민원 서비스를 중심으로)

  • Lee, Jung Seung
    • Journal of Intelligence and Information Systems
    • /
    • v.25 no.4
    • /
    • pp.35-52
    • /
    • 2019
  • As public services are provided in various forms, including e-government, public demand for service quality is increasing. Although continuous measurement and improvement of public service quality is needed, traditional surveys are costly, time-consuming, and limited. Therefore, an analytical technique is needed that can measure the quality of public services quickly and accurately at any time, based on the data the services generate. In this study, we analyzed the quality of public services using process mining techniques on the building licensing complaint service of N city, chosen because it provides the data necessary for the analysis and because the approach can be spread to other institutions for public service quality management. This study conducted process mining on a total of 3,678 building licensing complaints in N city over two years from January 2014 and identified the process maps and the departments with high frequency and long processing times. According to the analysis, some departments were overloaded while others handled relatively few complaints at certain points in time, and there was reasonable ground to suspect that an increase in the number of complaints increases the time required to complete them. The time required to complete a complaint varied from the same day to a year and 146 days. The cumulative frequency of the top four departments (the Sewage Treatment Division, the Waterworks Division, the Urban Design Division, and the Green Growth Division) exceeded 50%, and the cumulative frequency of the top nine departments exceeded 70%: the heavily used departments were few, and the load among departments was highly unbalanced. Most complaints follow a variety of different process patterns. The analysis shows that the number of 'supplement' decisions has the greatest impact on the length of a complaint: a 'supplement' decision requires a physical period in which the complainant supplements and resubmits the documents, so a lengthy period is needed until the entire complaint is completed. The overall processing time could therefore be drastically reduced if complainants prepared their documents thoroughly before filing, avoiding 'supplement' decisions. By clarifying and disclosing, in the system, the causes of and remedies for such decisions, the government can help complainants prepare in advance, give them confidence that documents prepared with the disclosed information will pass, and make complaint processing predictable and transparent. Documents prepared from pre-disclosed information are likely to be processed without problems, which not only shortens the processing period but also improves work efficiency from the processor's point of view by eliminating renegotiation and duplicated tasks. The results of this study can be used to find departments burdened with many complaints at certain points in time and to manage workforce allocation among departments flexibly. In addition, the patterns of the departments participating in consultation, analyzed by complaint characteristics, can be used for automation or recommendation when selecting a consultation department. Furthermore, by using the various data generated during the complaint process together with machine learning techniques, the patterns of the complaint process can be discovered; implementing such algorithms in the system could support the automation and intelligence of complaint processing. This study is expected to help suggest future public service quality improvements through process mining analysis of civil services.
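
As a hedged sketch of the department-level profiling described above, the snippet below computes frequency and mean handling time per consulting department directly from a flat event log with pandas, rather than with a dedicated process-mining tool; the file name and column names (`department`, `start`, `end`) are assumptions, not the actual N city schema.

```python
import pandas as pd

# Hypothetical export: one row per department activity within a complaint case.
log = pd.read_csv("building_license_log.csv", parse_dates=["start", "end"])

dept = (log.assign(hours=(log["end"] - log["start"]).dt.total_seconds() / 3600.0)
           .groupby("department")["hours"]
           .agg(frequency="count", mean_hours="mean")
           .sort_values("frequency", ascending=False))

# Cumulative share of activity: the paper reports the top 4 departments
# exceeding 50% and the top 9 exceeding 70% of cumulative frequency.
dept["cum_share"] = dept["frequency"].cumsum() / dept["frequency"].sum()
print(dept.head(9))
```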

Development of Intelligent Job Classification System based on Job Posting on Job Sites (구인구직사이트의 구인정보 기반 지능형 직무분류체계의 구축)

  • Lee, Jung Seung
    • Journal of Intelligence and Information Systems
    • /
    • v.25 no.4
    • /
    • pp.123-139
    • /
    • 2019
  • The job classification systems of major job sites differ from site to site and from the 'SQF (Sectoral Qualifications Framework)' job classification system proposed for the SW field. Therefore, a new job classification system is needed that SW companies, SW job seekers, and job sites can all understand. The purpose of this study is to establish a standard job classification system that reflects market demand by analyzing the SQF against the job posting information of major job sites and the NCS (National Competency Standards). For this purpose, association analysis between the occupations of major job sites is conducted, and association rules between the SQF and those occupations are derived. Using these rules, we propose an intelligent job classification system based on data that maps the job classification systems of the major job sites to the SQF. First, major job sites are selected to obtain information on the job classification systems of the SW market. We then identify ways to collect job information from each site and collect the data through open APIs. Focusing on the relationships among the data, we keep only the job postings published on multiple job sites at the same time and delete the rest. Next, we map the job classification systems between job sites using the association rules derived from the association analysis (see the sketch after this abstract). We complete the mapping between these market systems, discuss it with experts, further map it to the SQF, and finally propose a new job classification system. As a result, more than 30,000 job postings were collected in XML format using the open APIs of 'WORKNET', 'JOBKOREA', and 'saramin', the main job sites in Korea. After filtering to about 900 job postings simultaneously published on multiple sites, 800 association rules were derived by applying the Apriori algorithm, a frequent-pattern mining method. Based on these 800 rules, the job classification systems of WORKNET, JOBKOREA, and saramin and the SQF job classification system were mapped and organized into first through fourth classification levels. In the new taxonomy, the first primary class, covering IT consulting, computer systems, networks, and security-related jobs, consists of three secondary, five tertiary, and five quaternary classifications. The second primary class, covering database and system operation jobs, consists of three secondary, three tertiary, and four quaternary classifications. The third primary class, covering web planning, web programming, web design, and games, consists of four secondary, nine tertiary, and two quaternary classifications. The last primary class, covering ICT management and computer and communication engineering technology jobs, consists of three secondary and six tertiary classifications. Notably, the new job classification system has a relatively flexible depth of classification, unlike existing systems: WORKNET divides jobs into three levels, JOBKOREA into two levels with subdivided jobs as keywords, and saramin likewise into two levels with keyword-form subdivisions. The newly proposed standard classification accepts some keyword-based jobs and treats some product names as jobs. In the proposed system, some jobs stop at the second classification level while others are subdivided down to the fourth level, reflecting the idea that not all jobs can be broken down into the same number of steps. We also combined the rules derived from the collected market data through association analysis with experts' opinions. The newly proposed system can therefore be regarded as a data-based intelligent job classification system that reflects market demand, unlike existing systems. This study is meaningful in that it suggests a new job classification system reflecting market demand by mapping occupations based on data through association analysis, rather than on the intuition of a few experts. However, it has the limitation that it cannot fully reflect market demand as it changes over time, because the data were collected at a single point in time. As market demand changes, including seasonal factors and the timing of major corporate public recruitment, continuous data monitoring and repeated experiments are needed to achieve more accurate matching. The results of this study can be used to suggest directions for improving the SQF in the SW industry, and the successful experience in the SW industry is expected to transfer to other industries.
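
The Apriori step lends itself to a short sketch with the open-source `mlxtend` implementation. In this hypothetical example, each transaction holds the site-prefixed category labels of one job ad posted on several sites at the same time, so high-confidence rules such as {WORKNET:web_programming} → {JOBKOREA:web_dev} suggest which categories to align across sites (and, later, with the SQF); the labels and thresholds are illustrative, not the study's data.

```python
import pandas as pd
from mlxtend.preprocessing import TransactionEncoder
from mlxtend.frequent_patterns import apriori, association_rules

# One transaction per ad posted on multiple sites simultaneously (~900 in the study).
postings = [
    ["WORKNET:web_programming", "JOBKOREA:web_dev", "saramin:webdev"],
    ["WORKNET:web_programming", "JOBKOREA:web_dev"],
    ["WORKNET:db_admin", "JOBKOREA:dba", "saramin:database"],
]

te = TransactionEncoder()
onehot = pd.DataFrame(te.fit(postings).transform(postings), columns=te.columns_)

frequent = apriori(onehot, min_support=0.001, use_colnames=True)
rules = association_rules(frequent, metric="confidence", min_threshold=0.8)
print(rules[["antecedents", "consequents", "support", "confidence"]])
```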