• Title/Summary/Keyword: Local Information

Optimization of Multiclass Support Vector Machine using Genetic Algorithm: Application to the Prediction of Corporate Credit Rating (유전자 알고리즘을 이용한 다분류 SVM의 최적화: 기업신용등급 예측에의 응용)

  • Ahn, Hyunchul
    • Information Systems Review
    • /
    • v.16 no.3
    • /
    • pp.161-177
    • /
    • 2014
  • Corporate credit rating assessment consists of complicated processes in which various factors describing a company are taken into consideration. Such assessment is known to be very expensive since domain experts must be employed to assess the ratings. As a result, data-driven corporate credit rating prediction using statistical and artificial intelligence (AI) techniques has received considerable attention from researchers and practitioners. In particular, statistical methods such as multiple discriminant analysis (MDA) and multinomial logistic regression analysis (MLOGIT), and AI methods including case-based reasoning (CBR), artificial neural network (ANN), and multiclass support vector machine (MSVM) have been applied to corporate credit rating. Among them, MSVM has recently become popular because of its robustness and high prediction accuracy. In this study, we propose a novel optimized MSVM model and apply it to corporate credit rating prediction in order to enhance accuracy. Our model, named 'GAMSVM (Genetic Algorithm-optimized Multiclass Support Vector Machine),' is designed to simultaneously optimize the kernel parameters and the feature subset selection. Prior studies such as Lorena and de Carvalho (2008) and Chatterjee (2013) show that proper kernel parameters may improve the performance of MSVMs. Also, the results of studies such as Shieh and Yang (2008) and Chatterjee (2013) imply that appropriate feature selection may lead to higher prediction accuracy. Based on these prior studies, we propose to apply GAMSVM to corporate credit rating prediction. As a tool for optimizing the kernel parameters and the feature subset selection, we adopt the genetic algorithm (GA). GA is known as an efficient and effective search method that simulates the phenomenon of biological evolution. By applying genetic operations such as selection, crossover, and mutation, it is designed to gradually improve the search results. In particular, the mutation operator prevents GA from falling into local optima, so the globally optimal or a near-optimal solution can be found. GA has been widely applied to search for optimal parameters or feature subsets of AI techniques, including MSVM. For these reasons, we also adopt GA as an optimization tool. To empirically validate the usefulness of GAMSVM, we applied it to a real-world case of credit rating in Korea. Our application is in bond rating, which is the most frequently studied area of credit rating for specific debt issues or other financial obligations. The experimental dataset was collected from a large credit rating company in South Korea. It contained 39 financial ratios of 1,295 companies in the manufacturing industry and their credit ratings. Using various statistical methods, including one-way ANOVA and stepwise MDA, we selected 14 financial ratios as the candidate independent variables. The dependent variable, i.e. credit rating, was labeled as four classes: 1 (A1); 2 (A2); 3 (A3); 4 (B and C). Eighty percent of the total data for each class was used for training, and the remaining 20 percent was used for validation. In addition, to overcome the small sample size, we applied five-fold cross validation to our dataset. In order to examine the competitiveness of the proposed model, we also experimented with several comparative models, including MDA, MLOGIT, CBR, ANN, and MSVM.
In the case of MSVM, we adopted the One-Against-One (OAO) and DAGSVM (Directed Acyclic Graph SVM) approaches because they are known to be the most accurate among the various MSVM approaches. GAMSVM was implemented using LIBSVM, an open-source software library, and Evolver 5.5, a commercial software package that provides GA. The other comparative models were implemented using various statistical and AI packages such as SPSS for Windows, Neuroshell, and Microsoft Excel VBA (Visual Basic for Applications). Experimental results showed that the proposed model, GAMSVM, outperformed all the comparative models. In addition, the model was found to use fewer independent variables while showing higher accuracy. In our experiments, five variables, X7 (total debt), X9 (sales per employee), X13 (years since founding), X15 (accumulated earnings to total assets), and X39 (an index related to cash flows from operating activities), were found to be the most important factors in predicting corporate credit ratings. Meanwhile, the values of the finally selected kernel parameters were found to be almost the same across the data subsets. To examine whether the predictive performance of GAMSVM was significantly greater than those of the other models, we used the McNemar test. As a result, we found that GAMSVM was better than MDA, MLOGIT, CBR, and ANN at the 1% significance level, and better than OAO and DAGSVM at the 5% significance level.
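
To make the optimization scheme concrete, the sketch below shows, under stated assumptions, how a genetic algorithm can jointly encode an RBF kernel's C and gamma together with a feature-selection bit mask and evolve them against cross-validated accuracy. It is not the authors' LIBSVM/Evolver 5.5 implementation; the chromosome layout, GA settings, dataset shape, and the use of scikit-learn's SVC (which applies one-against-one internally) are illustrative choices only.

```python
# Illustrative sketch (not the paper's LIBSVM/Evolver setup): a genetic algorithm that
# jointly searches an RBF-SVM's kernel parameters (C, gamma) and a feature-subset mask,
# scoring each chromosome by cross-validated accuracy.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

def random_chromosome(n_features):
    # Chromosome layout: [feature mask bits ... | log2(C) | log2(gamma)]
    mask = rng.integers(0, 2, n_features).astype(float)
    return np.concatenate([mask, rng.uniform(-5, 15, 1), rng.uniform(-15, 3, 1)])

def fitness(chrom, X, y):
    mask = chrom[:-2] > 0.5
    if not mask.any():
        return 0.0
    clf = SVC(C=2.0 ** chrom[-2], gamma=2.0 ** chrom[-1])  # SVC handles multiclass via one-against-one
    return cross_val_score(clf, X[:, mask], y, cv=5).mean()

def evolve(X, y, pop_size=30, generations=20, p_mut=0.1):
    n = X.shape[1]
    pop = [random_chromosome(n) for _ in range(pop_size)]
    for _ in range(generations):
        scores = [fitness(c, X, y) for c in pop]
        # Tournament selection: the fittest of three random candidates becomes a parent.
        parents = [pop[max(rng.choice(pop_size, 3), key=lambda i: scores[i])]
                   for _ in range(pop_size)]
        pop = []
        for a, b in zip(parents[0::2], parents[1::2]):
            a, b = a.copy(), b.copy()
            cut = rng.integers(1, n)                       # one-point crossover on the mask
            a[:cut], b[:cut] = b[:cut].copy(), a[:cut].copy()
            for child in (a, b):
                flips = rng.random(n) < p_mut              # mutation keeps the search out of local optima
                child[:n][flips] = 1.0 - child[:n][flips]
                child[-2:] += rng.normal(0.0, 0.5, 2) * (rng.random(2) < p_mut)
            pop.extend([a, b])
    return max(pop, key=lambda c: fitness(c, X, y))
```

With a ratio matrix X and rating labels y, `evolve(X, y)` would return the best chromosome, from which the selected ratios and the tuned C and gamma can be read off.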

Corporate Default Prediction Model Using Deep Learning Time Series Algorithm, RNN and LSTM (딥러닝 시계열 알고리즘 적용한 기업부도예측모형 유용성 검증)

  • Cha, Sungjae;Kang, Jungseok
    • Journal of Intelligence and Information Systems
    • /
    • v.24 no.4
    • /
    • pp.1-32
    • /
    • 2018
  • Corporate defaults affect not only stakeholders such as the managers, employees, creditors, and investors of bankrupt companies, but also have a ripple effect on the local and national economy. Before the Asian financial crisis, the Korean government only analyzed SMEs and tried to improve the forecasting power of a single default prediction model rather than developing various corporate default models. As a result, even large corporations, the so-called 'chaebol enterprises,' went bankrupt. Even after that, analysis of past corporate defaults has focused on specific variables, and when the government restructured companies immediately after the global financial crisis, it concentrated only on a few main variables such as the debt ratio. A multifaceted study of corporate default prediction models is essential to protect diverse interests and to avoid situations like the 'Lehman Brothers case' of the global financial crisis, in which everything collapses at once. The key variables used in predicting corporate defaults vary over time. This is confirmed by Deakin's (1972) finding that the major factors affecting corporate failure had changed from those identified in the analyses of Beaver (1967, 1968) and Altman (1968). Grice (2001) likewise examined how the importance of predictive variables shifts, using Zmijewski's (1984) and Ohlson's (1980) models. However, past studies use static models, and most of them do not consider the changes that occur over time. Therefore, in order to construct consistent prediction models, it is necessary to compensate for time-dependent bias by means of a time series analysis algorithm that reflects dynamic change. Centered on the global financial crisis, which had a significant impact on Korea, this study uses 10 years of annual corporate data from 2000 to 2009. The data are divided into training, validation, and test sets of 7, 2, and 1 years, respectively. In order to construct a consistent bankruptcy model over time, we first train a deep learning time series model using the data before the financial crisis (2000~2006). Parameter tuning of the existing models and the deep learning time series algorithm is conducted with validation data that include the financial crisis period (2007~2008). As a result, we construct a model that shows a pattern similar to the results on the training data and exhibits excellent prediction power. After that, each bankruptcy prediction model is rebuilt by integrating the training and validation data (2000~2008) and applying the optimal parameters found in the previous validation. Finally, each corporate default prediction model is evaluated and compared using the test data (2009), based on the models trained over the nine years. In this way, the usefulness of the corporate default prediction model based on the deep learning time series algorithm is demonstrated. In addition, by adding Lasso regression analysis to the existing variable-selection methods (multiple discriminant analysis and the logit model), it is shown that the deep learning time series model based on the three bundles of variables is useful for robust corporate default prediction. The definition of bankruptcy used is the same as that of Lee (2015). Independent variables include financial information such as the financial ratios used in previous studies. Multivariate discriminant analysis, the logit model, and the Lasso regression model are used to select the optimal variable groups.
The performance of the multivariate discriminant analysis model proposed by Altman (1968), the logit model proposed by Ohlson (1980), non-time-series machine learning algorithms, and deep learning time series algorithms is compared. Corporate data suffer from the limitations of nonlinear variables, multicollinearity among variables, and lack of data. The logit model accommodates nonlinearity, the Lasso regression model addresses the multicollinearity problem, and the deep learning time series algorithm, combined with a data generation method for the variables, compensates for the lack of data. Big data technology, a leading technology of the future, is moving from simple human analysis to automated AI analysis, and ultimately toward intertwined AI applications. Although the study of corporate default prediction models using time series algorithms is still in its early stages, the deep learning algorithm builds corporate default prediction models much faster than regression analysis and is more effective in terms of predictive power. Amid the Fourth Industrial Revolution, the Korean government and governments overseas are working hard to integrate such systems into the everyday life of their nations and societies, yet deep learning time series research for the financial industry is still insufficient. This is an initial study on deep learning time series analysis of corporate defaults, and it is therefore hoped that it will serve as comparative reference material for non-specialists beginning to combine financial data with deep learning time series algorithms.
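
As a concrete illustration of the kind of model the study evaluates, the sketch below trains a small LSTM classifier on sequences of annual financial ratios. The data layout, window length, and hyperparameters are assumptions for demonstration, not the authors' configuration.

```python
# Minimal sketch of an LSTM-based default classifier over annual financial-ratio
# sequences. Dataset shape and labels are randomly generated placeholders.
import numpy as np
import tensorflow as tf

n_firms, n_years, n_ratios = 1000, 7, 20            # hypothetical: 7-year windows of 20 ratios
X = np.random.rand(n_firms, n_years, n_ratios).astype("float32")
y = np.random.randint(0, 2, n_firms)                 # 1 = default, 0 = non-default (dummy labels)

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(n_years, n_ratios)),
    tf.keras.layers.LSTM(32),                         # learns temporal dependence across the years
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["AUC"])

# Train on earlier years and validate on later ones, mirroring the 7 / 2 / 1-year
# split described in the abstract (here approximated with a simple validation split).
model.fit(X, y, validation_split=0.2, epochs=10, batch_size=64, verbose=0)
```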

A Deep Learning Based Approach to Recognizing Accompanying Status of Smartphone Users Using Multimodal Data (스마트폰 다종 데이터를 활용한 딥러닝 기반의 사용자 동행 상태 인식)

  • Kim, Kilho;Choi, Sangwoo;Chae, Moon-jung;Park, Heewoong;Lee, Jaehong;Park, Jonghun
    • Journal of Intelligence and Information Systems
    • /
    • v.25 no.1
    • /
    • pp.163-177
    • /
    • 2019
  • As smartphones have become widely used, human activity recognition (HAR) tasks for recognizing the personal activities of smartphone users with multimodal data have been actively studied recently. The research area is expanding from the recognition of simple individual body movements to the recognition of low-level and high-level behavior. However, HAR tasks for recognizing interaction behavior with other people, such as whether the user is accompanying or communicating with someone else, have received less attention so far. Moreover, previous research on recognizing interaction behavior has usually depended on audio, Bluetooth, and Wi-Fi sensors, which are vulnerable to privacy issues and require much time to collect enough data. In contrast, physical sensors, including the accelerometer, magnetic field sensor, and gyroscope, are less vulnerable to privacy issues and can collect a large amount of data within a short time. In this paper, a deep learning based method for detecting accompanying status using only multimodal physical sensor data, namely accelerometer, magnetic field, and gyroscope readings, was proposed. The accompanying status was defined as a subset of user interaction behavior: whether the user is accompanying an acquaintance at a close distance and whether the user is actively conversing with that acquaintance. A framework based on convolutional neural networks (CNN) and long short-term memory (LSTM) recurrent networks for classifying accompaniment and conversation was proposed. First, a data preprocessing method was introduced, consisting of time synchronization of multimodal data from different physical sensors, data normalization, and sequence data generation. We applied nearest-neighbor interpolation to synchronize the timestamps of the data collected from different sensors. Normalization was performed for each x, y, z axis value of the sensor data, and the sequence data were generated according to the sliding window method. The sequence data then became the input to the CNN, where feature maps representing local dependencies of the original sequence are extracted. The CNN consisted of three convolutional layers and had no pooling layer, in order to preserve the temporal information of the sequence data. Next, LSTM recurrent networks received the feature maps, learned long-term dependencies from them, and extracted features. The LSTM recurrent networks consisted of two layers, each with 128 cells. Finally, the extracted features were used for classification by a softmax classifier. The loss function of the model was the cross-entropy function, and the weights of the model were randomly initialized from a normal distribution with a mean of 0 and a standard deviation of 0.1. The model was trained using the adaptive moment estimation (ADAM) optimization algorithm with a mini-batch size of 128. We applied dropout to the input of the LSTM recurrent networks to prevent overfitting. The initial learning rate was set to 0.001 and was decayed exponentially by a factor of 0.99 at the end of each training epoch. An Android smartphone application was developed and released to collect data, and smartphone data were collected from a total of 18 subjects. Using these data, the model classified accompaniment and conversation with 98.74% and 98.83% accuracy, respectively. Both the F1 score and accuracy of the model were higher than those of the majority vote classifier, support vector machine, and deep recurrent neural network.
In future research, we will focus on more rigorous multimodal sensor data synchronization methods that minimize timestamp differences. In addition, we will further study transfer learning methods that enable trained models, tailored to the training data, to be transferred to evaluation data that follow a different distribution. This is expected to yield a model with robust recognition performance against changes in the data that were not considered at the model training stage.
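
The sketch below mirrors the architecture described above (three convolutional layers without pooling, two 128-cell LSTM layers, dropout on the LSTM input, a softmax output, cross-entropy loss, ADAM with an initial learning rate of 0.001 decayed by 0.99 per epoch, mini-batch size 128). The window length, filter sizes, and channel count are illustrative assumptions, not values reported in the paper.

```python
# Minimal sketch of the CNN + LSTM accompanying-status classifier described above.
import tensorflow as tf

window_len, n_channels, n_classes = 128, 9, 2        # hypothetical: 3 sensors x 3 axes per window

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(window_len, n_channels)),
    tf.keras.layers.Conv1D(64, 5, padding="same", activation="relu"),
    tf.keras.layers.Conv1D(64, 5, padding="same", activation="relu"),
    tf.keras.layers.Conv1D(64, 5, padding="same", activation="relu"),  # no pooling: keep temporal resolution
    tf.keras.layers.Dropout(0.5),                     # dropout applied to the LSTM input
    tf.keras.layers.LSTM(128, return_sequences=True),
    tf.keras.layers.LSTM(128),
    tf.keras.layers.Dense(n_classes, activation="softmax"),
])

model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.001),
              loss="sparse_categorical_crossentropy", metrics=["accuracy"])

# Exponential decay by a factor of 0.99 at the end of each epoch.
lr_decay = tf.keras.callbacks.LearningRateScheduler(lambda epoch: 0.001 * 0.99 ** epoch)
# model.fit(X_train, y_train, batch_size=128, epochs=50, callbacks=[lr_decay])
# where X_train has shape (num_windows, 128, 9) after sliding-window preprocessing.
```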

School Experiences and the Next Gate Path : An analysis of Univ. Student activity log (대학생의 학창경험이 사회 진출에 미치는 영향: 대학생활 활동 로그분석을 중심으로)

  • YI, EUNJU;Park, Do-Hyung
    • Journal of Intelligence and Information Systems
    • /
    • v.26 no.4
    • /
    • pp.149-171
    • /
    • 2020
  • The university years are when students make decisions about their future careers. As society develops rapidly, jobs are becoming diversified, subdivided, and specialized, and students' job preparation period is getting longer and longer. This study analyzed the log data of college students to see how the various activities they experience inside and outside of school influence employment. For this analysis, students' various activities were systematically classified, recorded as activity data, and grouped into six core competencies (job reinforcement competency, leadership & teamwork competency, globalization competency, organizational commitment competency, job exploration competency, and autonomous implementation competency). The effect of the six competency levels on employment status (employed group vs. unemployed group) was analyzed. As a result of the analysis, it was confirmed that the difference in level between the employed group and the unemployed group was significant for all six competencies, so it can be inferred that activities at school are meaningful for employment. Next, in order to analyze the impact of the six competencies on the qualitative performance of employment, we conducted an ANOVA after dividing each competency level into two groups (low and high) and forming six groups by the range of first annual salary. Students with high levels of globalization competency, job exploration competency, and autonomous implementation competency were found to belong to higher annual salary groups. The theoretical contributions of this study are as follows. First, it connects the competencies that can be extracted from the school experience with competencies in the human resource management field, and adds the job exploration and autonomous implementation competencies that university students need for their own successful career and life. Second, we conducted this analysis with competency data measured from actual activities and outcome data collected through interviews and surveys. Third, it analyzed not only quantitative performance (employment rate) but also qualitative performance (annual salary level). The practical uses of this study are as follows. First, it can serve as a guide when establishing career development plans for college students. It is necessary to prepare for a job that can express one's strengths, based on an analysis of the world of work and jobs, rather than engaging in an unfocused, unbalanced competition to accumulate excessive credentials. Second, those in charge of experience design for college students at organizations such as schools, businesses, local governments, and the central government can refer to the six competencies suggested in this study to design user-useful experiences that motivate more participation. By doing so, one event may bring mutual benefits to both event designers and students. Third, in the era of digital transformation, government policy managers who envision the balanced development of the country can make policies that direct the curiosity and energy of college students toward that balanced development. A lot of manpower is required to start up novel platform services that have not existed before, or to digitize existing analog products, services, and corporate culture.
The activities of today's digital-generation college students are not only catalysts in all industries, but also highly beneficial and necessary for the students themselves in their own successful career development.
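
As a small illustration of the group comparison described above, the sketch below runs a one-way ANOVA testing whether a competency score differs across first-annual-salary groups. The data frame layout, column names, and randomly generated scores are hypothetical and only demonstrate the statistical procedure, not the study's data.

```python
# Illustrative one-way ANOVA: does a competency level differ across salary groups?
import numpy as np
import pandas as pd
from scipy import stats

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "salary_group": np.repeat(["G1", "G2", "G3"], 20),   # the study used six salary bands; three shown here
    "globalization": rng.random(60),                     # one of the six competency scores
})

groups = [g["globalization"].values for _, g in df.groupby("salary_group")]
f_stat, p_value = stats.f_oneway(*groups)
# A small p-value would indicate the competency level differs across salary groups.
print(f"F = {f_stat:.2f}, p = {p_value:.3f}")
```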

A Study on the System of Aircraft Investigation (항공기(航空機) 사고조사제도(事故調査制度)에 관한 연구(硏究))

  • Kim, Doo-Hwan
    • The Korean Journal of Air & Space Law and Policy
    • /
    • v.9
    • /
    • pp.85-143
    • /
    • 1997
  • The main purpose of investigating aircraft accidents is to prevent sudden and accidental occurrences caused by the wilful misconduct or fault of pilots and air traffic controllers, hijacking, engine and machinery trouble, turbulence in bad weather, collisions between birds and aircraft, near-miss flights, and so on. It is not the purpose of this activity to apportion blame or liability to those responsible for aircraft accidents. Accidents to aircraft, especially those involving the general public and their property, are a matter of great concern to the aviation community. The system of international regulation exists to improve safety and minimize, as far as possible, the risk of accidents, but when they do occur there is a web of systems and procedures to investigate and respond to them. I would like to trace the general line of regulation from an international source, the Chicago Convention of 1944. Article 26 of the Convention lays down the basic principle for the investigation of aircraft accidents. Where there has been an accident to an aircraft of a contracting state which occurs in the territory of another contracting state and which involves death or serious injury or indicates serious technical defect in the aircraft or air navigation facilities, the state in which the accident occurs must institute an inquiry into the circumstances of the accident. That inquiry will be in accordance, in so far as its law permits, with the procedure which may be recommended from time to time by the International Civil Aviation Organization (ICAO). These are very general provisions, but they state two essential principles: first, in certain circumstances there must be an investigation, and second, who is to be responsible for undertaking that investigation. The latter is an important point to establish; otherwise there could be at least two states claiming jurisdiction over the inquiry. The Chicago Convention also provides that the state where the aircraft is registered is to be given the opportunity to appoint observers to be present at the inquiry, and the state holding the inquiry must communicate the report and findings in the matter to that other state. It is worth noting that the Chicago Convention (Article 25) also makes provision for assisting aircraft in distress. Each contracting state undertakes to provide such measures of assistance to aircraft in distress in its territory as it may find practicable and to permit (subject to control by its own authorities) the owner of the aircraft or the authorities of the state in which the aircraft is registered to provide such measures of assistance as may be necessitated by the circumstances. Significantly, the undertaking can only be given by a contracting state, but the duty to provide assistance is not limited to aircraft registered in another contracting state and presumably extends to any aircraft in distress in the territory of the contracting state. Finally, the Convention envisages further regulations (normally to be produced under the auspices of ICAO). In this regard the Convention provides that each contracting state, when undertaking a search for missing aircraft, will collaborate in co-ordinated measures which may be recommended from time to time pursuant to the Convention.
Since 1944, further international regulations relating to safety and the investigation of accidents have been made, both pursuant to the Chicago Convention and, in particular, through the vehicle of ICAO, which has, for example, set up an accident and incident reporting system. By requiring the reporting of certain accidents and incidents it is building up an information service for the benefit of member states. Furthermore, the Chicago Convention provides that each contracting state undertakes to collaborate in securing the highest practicable degree of uniformity in regulations, standards, procedures and organization in relation to aircraft, personnel, airways and auxiliary services in all matters in which such uniformity will facilitate and improve air navigation. To this end, ICAO is to adopt and amend from time to time, as may be necessary, international standards and recommended practices and procedures dealing with, among other things, aircraft in distress and the investigation of accidents. Standards and Recommended Practices for Aircraft Accident Inquiries were first adopted by the ICAO Council on 11 April 1951 pursuant to Article 37 of the Chicago Convention on International Civil Aviation and were designated as Annex 13 to the Convention. The Standards and Recommended Practices were based on recommendations of the Accident Investigation Division at its first session in February 1946, which were further developed at the second session of the Division in February 1947. Annex 13 (Aircraft Accident and Incident Investigation) has since been amended repeatedly by the ICAO Council, resulting in the 2nd Edition (1966), 3rd Edition (1973), 4th Edition (1976), 5th Edition (1979), 6th Edition (1981), 7th Edition (1988), and 8th Edition (1992). Annex 13 sets out in detail the international standards and recommended practices to be adopted by contracting states in dealing with a serious accident to an aircraft of a contracting state occurring in the territory of another contracting state, known as the state of occurrence. It provides, principally, that the state in which the aircraft is registered is to be given the opportunity to appoint an accredited representative to be present at the inquiry conducted by the state in which the serious aircraft accident occurs. Article 26 of the Chicago Convention does not indicate what the accredited representative is to do, but Annex 13 amplifies his rights and duties. In particular, the accredited representative participates in the inquiry by visiting the scene of the accident, examining the wreckage, questioning witnesses, having full access to all relevant evidence, receiving copies of all pertinent documents and making submissions in respect of the various elements of the inquiry. The main shortcomings of the present system of aircraft accident investigation are that some contracting states do not apply Annex 13 within its express terms, even though they are contracting states. Further, and much more important in practice, there are many countries which apply the letter of Annex 13 in such a way as to sterilise its spirit. This appears to be due to a number of causes, often found in combination. Firstly, the requirements of local law and local procedures are interpreted and applied so as to preclude a more efficient investigation under Annex 13 in favour of a legalistic and sterile interpretation of its terms.
Sometimes this results from distrust of the motives of persons and bodies wishing to participate; these motives may be political, commercial, or related to matters of liability and insurance. Secondly, there is said to be a conscious desire in some contracting states to conduct the investigation in such a way as to absolve from any possibility of blame the authorities or nationals, whether manufacturers, operators or air traffic controllers, of the country in which the inquiry is held. The EEC has also had an input into accidents and investigations. In particular, a directive was issued in December 1980 encouraging the uniformity of standards within the EEC by means of joint co-operation in accident investigation. The sharing of, and assistance with, technical facilities and information was considered an important means of achieving these goals. It has since been proposed that a European accident investigation committee should be set up by the EEC (Council Directive 80/1266 of 1 December 1980). Next, I would like to introduce a summary of the legislation and systems for aircraft accident investigation in the United States, the United Kingdom, Canada, Germany, the Netherlands, Sweden, Switzerland, New Zealand and Japan, and then describe the present system, regulations and aviation act for aircraft accident investigation in Korea. Furthermore, I would like to point out the shortcomings of the present system, regulations and aviation act for aircraft accident investigation, and then suggest my personal opinion on a new and dramatic innovation in the system for aircraft accident investigation in Korea. I propose that it is necessary and desirable for us to enact new legislation or to revise the existing aviation act in order to establish a standing and independent Committee of Aircraft Accident Investigation under the Korean Government.

Roles of the Insulin-like Growth Factor System in the Reproductive Function;Uterine Connection (Insulin-like Growth Factor Systems의 생식기능에서의 역할;자궁편)

  • Lee, Chul-Young
    • Clinical and Experimental Reproductive Medicine
    • /
    • v.23 no.3
    • /
    • pp.247-268
    • /
    • 1996
  • It has been known for a long time that gonadotropins and steroid hormones play a pivotal role in a series of reproductive biological phenomena including the maturation of ovarian follicles and oocytes, ovulation and implantation, maintenance of pregnancy and fetal growth & development, parturition, and mammary development and lactation. Recent investigations, however, have elucidated that in addition to these classic hormones, multiple growth factors also are involved in these phenomena. Most growth factors in reproductive organs mediate the actions of gonadotropins and steroid hormones or synergize with them in an autocrine/paracrine manner. The insulin-like growth factor (IGF) system, which is one of the most actively investigated areas in the reproductive organs lately, has been found to have important roles in a wide gamut of reproductive phenomena. In the present communication, published literature pertaining to the intrauterine IGF system will be reviewed, preceded by general information on the IGF system. The IGF family comprises the IGF-I and IGF-II ligands, two types of IGF receptors, and six classes of IGF-binding proteins (IGFBPs) that are known to date. The IGF-I and IGF-II peptides, which are structurally homologous to proinsulin, possess insulin-like activity, including the stimulation of glucose and amino acid transport. In addition, IGFs, as mitogens, stimulate cell division, and also play a role in cellular differentiation and function in a variety of cell lines. IGFs are expressed mainly in the liver and mesenchymal cells, and act on almost all types of tissues in an autocrine/paracrine as well as endocrine mode. There are two types of IGF receptors. Type I IGF receptors, which are tyrosine kinase receptors having high affinity for IGF-I and IGF-II, mediate almost all the IGF actions described above. Type II IGF receptors, or IGF-II/mannose-6-phosphate receptors, have two distinct binding sites; the IGF-II binding site exhibits a high affinity only for IGF-II. The principal role of the type II IGF receptor is to destroy IGF-II by targeting the ligand to the lysosome. IGFs in biological fluids are mostly bound to IGFBPs. IGFBPs, in general, are IGF storage/carrier proteins or modulators of IGF actions; however, as for the distinct roles of individual IGFBPs, only limited information is available. IGFBPs inhibit IGF actions under most in vitro situations, seemingly because the affinities of IGFBPs for IGFs are greater than those of IGF receptors. How IGF is released from IGFBP to reach IGF receptors is not known; however, various IGFBP protease activities present in blood and interstitial fluids are believed to play an important role in the process of IGF release from IGFBP. According to recent reports, there is evidence that under certain in vitro circumstances, IGFBP-1, -3, and -5 have their own biological activities independent of IGF. This may add another dimension of complexity to the already complicated IGF system. Messenger ribonucleic acids and proteins of the IGF family members are expressed in the uterine tissue and conceptus of primates, rodents, and farm animals, where they play important roles in the growth and development of the uterus and fetus. Expression of the uterine IGF system is regulated by gonadal hormones and local regulatory substances with temporal and spatial specificities.
Locally expressed IGFs and IGFBPs act on the uterine tissue in an autocrine/paracrine manner, or are secreted into the uterine lumen to participate in conceptus growth and development. The conceptus also expresses the IGF system beginning in the peri-implantation period. When an IGF family member is expressed in the conceptus, however, is determined by the presence or absence of maternally inherited mRNAs, the genetic programming of the conceptus itself, and its interaction with the maternal tissue. The site of IGF action also follows temporal (physiological status) and spatial specificities. The fact that expression of the IGF system is temporally and spatially regulated indirectly supports the hypothesis that IGFs play a role in conceptus growth and development. Uterine and conceptus-derived IGFs stimulate cell division and differentiation, glucose and amino acid transport, general protein synthesis, and the biosynthesis of mammotropic hormones including placental lactogen and prolactin, and also play a role in steroidogenesis. The suggested role for IGFs in conceptus growth and development has been proven by the results of IGF-I, IGF-II, or IGF receptor gene disruption (targeting) in murine embryos by the homologous recombination technique. Mice carrying a null mutation for IGF-I and/or IGF-II or the type I IGF receptor undergo delayed prenatal and postnatal growth and development, with 30-60% of normal weight at birth. Moreover, mice lacking the type I IGF receptor or both IGF-I and IGF-II die soon after birth. Intrauterine IGFBPs generally are believed to sequester IGF ligands within the uterus or to act as negative regulators of IGF actions by inhibiting IGF binding to cognate receptors. However, when it is taken into account that IGFBP-1 is expressed and secreted in primate uteri in amounts estimated to far exceed those of local IGFs, and that IGFBP-1 is one of the major secretory proteins of the primate decidua, the possibility that this IGFBP may have its own biological activity independent of IGF cannot be excluded. Evidently, elucidating the exact role of each IGFBP is an essential step toward understanding the whole IGF system. As such, further research in this area is awaited with great anticipation and attention.

Corporate Governance and Managerial Performance in Public Enterprises: Focusing on CEOs and Internal Auditors (공기업의 지배구조와 경영성과: CEO와 내부감사인을 중심으로)

  • Yu, Seung-Won
    • KDI Journal of Economic Policy
    • /
    • v.31 no.1
    • /
    • pp.71-103
    • /
    • 2009
  • Considering that the expenditures of public institutions, centered on public enterprises, amounted to about 28% of Korea's GDP in 2007, public institutions have a significant influence on the Korean economy. Nevertheless, even under the new government, there are voices of criticism about the need for constant reform of public enterprises because their irresponsible management impedes national competitiveness. In particular, political controversy over the appointment of executives such as the CEOs of public enterprises has caused public distrust. As one of various reform measures for public enterprises, this study analyzes the effect of the internal governance structure of public enterprises on their managerial performance, since, regardless of whether public enterprises are privatized, improving their governance structure is a matter of great importance. There are only a few prior studies focusing on the governance structure and managerial performance of public enterprises, compared with those of private enterprises. Most prior studies examined the relationship between the parachute appointment of CEOs and managerial performance, and concluded that parachute appointments have a negative effect on managerial performance. However, contrary to the results of such studies, recent research suggests that there is no relationship between the employment type of CEOs and managerial performance in public enterprises. This study is distinguished from prior research in the following respects. First, prior research focused on the relationship between the employment type of public enterprises' CEOs and managerial performance; in addition to this, this study analyzes the relationship between internal auditors and managerial performance. Second, unlike prior research studying the relationship between the employment type of public enterprises' CEOs and managerial performance with an emphasis on parachute appointments, this study examines the impact of the employment type as well as the expertise of CEOs and internal auditors on managerial performance. Third, prior researchers mainly used non-financial indicators from various samples, whereas this study eliminates researcher subjectivity by analyzing public enterprises designated by the government and their financial statements, which were externally audited and inspected. In this study, regression analysis is applied to the relationship between the independence and expertise of public enterprises' CEOs and internal auditors and managerial performance in the same year. Financial information from 2003 to 2007 for 24 public enterprises designated by the government, together with personnel information on their boards of directors, is used as the sample. The independence of CEOs is identified by dividing CEOs into persons from the same public enterprise and persons from other organizations, and the independence of internal auditors is determined by classifying them into two groups: people from academia, the business community, and civic groups, and people from the political community, government ministries, and the military. Also, the expertise of CEOs and internal auditors is divided into business expertise and financial expertise. As control variables, this study uses the foundation year, asset size, government subsidies as a proportion of corporate earnings, and dummy variables by year. The analysis showed that there is a significantly positive relationship between the independence and financial expertise of internal auditors and managerial performance.
In addition, although the business expertise and financial expertise of CEOs were not statistically significant, they showed a positive relationship with managerial performance. However, contrary to the common view, the independence of CEOs was not statistically significant and was negatively related to managerial performance. Contrary to general concerns, it seems that the impact of the independence of public enterprises' CEOs on managerial performance has somewhat diminished. Instead, this suggests that the expertise of public enterprises' CEOs and internal auditors plays a more important role in managerial performance than their independence. Meanwhile, this study has the following limitations. First, in contrast to private enterprises, public enterprises simultaneously pursue publicness and entrepreneurship; however, this study focuses on entrepreneurship, excluding considerations of the publicness of public enterprises. Second, the public enterprises in this study are limited to those under the central government. Accordingly, care should be taken when applying the results of this study to public enterprises under local governments. Finally, this study excludes factors related to the transparency and democracy issues raised in the appointment process of public enterprise executives, as including them could introduce researcher subjectivity.
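
A minimal sketch of the kind of regression described above appears below: managerial performance regressed on auditor and CEO independence and expertise indicators, with asset size, subsidy ratio, and year dummies as controls. The column names and the randomly generated data are hypothetical; they only illustrate the model specification, not the study's variables or results.

```python
# Illustrative OLS regression of managerial performance on governance variables and controls.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 120   # e.g. 24 enterprises x 5 years (2003-2007)
df = pd.DataFrame({
    "performance": rng.normal(size=n),             # managerial performance measure
    "auditor_indep": rng.integers(0, 2, n),        # 1 = internal auditor classified as independent
    "auditor_fin_exp": rng.integers(0, 2, n),      # 1 = auditor has financial expertise
    "ceo_indep": rng.integers(0, 2, n),
    "ceo_biz_exp": rng.integers(0, 2, n),
    "log_assets": rng.normal(10, 1, n),            # control: asset size
    "subsidy_ratio": rng.random(n),                # control: subsidies / earnings
    "year": rng.choice([2003, 2004, 2005, 2006, 2007], n),
})

model = smf.ols(
    "performance ~ auditor_indep + auditor_fin_exp + ceo_indep + ceo_biz_exp"
    " + log_assets + subsidy_ratio + C(year)", data=df).fit()
print(model.summary())
```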

If This Brand Were a Person, or Anthropomorphism of Brands Through Packaging Stories (가설품패시인(假设品牌是人), 혹통과고사포장장품패의인화(或通过故事包装将品牌拟人化))

  • Kniazeva, Maria;Belk, Russell W.
    • Journal of Global Scholars of Marketing Science
    • /
    • v.20 no.3
    • /
    • pp.231-238
    • /
    • 2010
  • The anthropomorphism of brands, defined as seeing human beings in brands (Puzakova, Kwak, and Rosereto, 2008), is the focus of this study. Specifically, the research objective is to understand the ways in which brands are rendered humanlike. By analyzing consumer readings of stories found on food product packages, we intend to show how marketers and consumers humanize a spectrum of brands and create meanings. Our research question considers the possibility that a single brand may host multiple or single meanings, associations, and personalities for different consumers. We start by highlighting the theoretical and practical significance of our research, explain why we turn our attention to packages as vehicles of brand meaning transfer, then describe our qualitative methodology, discuss findings, and conclude with a discussion of managerial implications and directions for future studies. The study was designed to directly expose consumers to potential vehicles of brand meaning transfer and then engage these consumers in free verbal reflections on their perceived meanings. Specifically, we asked participants to read non-nutritional stories on selected branded food packages, in order to elicit data about received meanings. Packaging has yet to receive due attention in consumer research (Hine, 1995). Until now, attention has focused solely on its utilitarian function and has generated a body of research that has explored the impact of nutritional information and claims on consumer perceptions of products (e.g., Loureiro, McCluskey and Mittelhammer, 2002; Mazis and Raymond, 1997; Nayga, Lipinski and Savur, 1998; Wansik, 2003). An exception is a recent study that turns its attention to non-nutritional packaging narratives and treats them as cultural productions and vehicles for mythologizing the brand (Kniazeva and Belk, 2007). The next step in this stream of research is to explore how such mythologizing activity affects brand personality perception and how these perceptions relate to consumers. These are the questions that our study aimed to address. We used in-depth interviews to help overcome the limitations of quantitative studies. Our convenience sample was formed with the objective of providing demographic and psychographic diversity in order to elicit variations in consumer reflections on food packaging stories. Our informants represent middle-class residents of the US and do not exhibit the extreme alternative lifestyles described by Thompson as "cultural creatives" (2004). Nine people were individually interviewed on their food consumption preferences and behavior. Participants were asked to look at the twelve displayed food product packages and read all the textual information on the package, after which we continued with questions that focused on the consumer interpretations of the reading material (Scott and Batra, 2003). On average, each participant reflected on 4-5 packages. Our in-depth interviews lasted one to one and a half hours each. The interviews were tape recorded and transcribed, providing 140 pages of text. The products came from local grocery stores on the West Coast of the US and represented a basic range of food product categories, including snacks, canned foods, cereals, baby foods, and tea. The data were analyzed using the procedures for developing grounded theory delineated by Strauss and Corbin (1998). As a result, our study does not support the notion of one brand/one personality assumed by prior work.
Thus, we reveal multiple brand personalities peacefully cohabiting in the same brand as seen by different consumers, despite marketer attempts to create more singular brand personalities. We extend Fournier's (1998) proposition that one's life projects shape the intensity and nature of brand relationships. We find that these life projects also affect perceived brand personifications and meanings. While Fournier provides a conceptual framework that links together consumers' life themes (Mick and Buhl, 1992) and the relational roles assigned to anthropomorphized brands, we find that consumer life projects mold both the ways in which brands are rendered humanlike and the ways in which brands connect to consumers' existential concerns. We find two modes through which brands are anthropomorphized by our participants. First, brand personalities are created by seeing them through perceived demographic, psychographic, and social characteristics that are to some degree shared by consumers. Second, brands in our study further relate to consumers' existential concerns either by being blended with consumer personalities in order to connect to them (the brand as a friend, a family member, a next door neighbor) or by being distanced and estranged (the brand as a used car salesman, a "bunch of executives"). By focusing on food product packages, we illuminate a very specific, widely used, but little-researched vehicle of marketing communication: brand storytelling. Recent work that approaches packages as mythmakers finds it increasingly challenging for marketers to produce textual stories that link the personalities of products to the personalities of those consuming them, and suggests that "a multiplicity of building material for creating desired consumer myths is what a postmodern consumer arguably needs" (Kniazeva and Belk, 2007). Used as vehicles for storytelling, food packages can exploit both rational and emotional approaches, offering consumers either a "lecture" or "drama" (Randazzo, 2006), myths (Kniazeva and Belk, 2007; Holt, 2004; Thompson, 2004), or meanings (McCracken, 2005) as necessary building blocks for anthropomorphizing their brands. The craft of giving birth to brand personalities is in the hands of writers/marketers and in the minds of readers/consumers who individually and sometimes idiosyncratically put a meaningful human face on a brand.

Rough Set Analysis for Stock Market Timing (러프집합분석을 이용한 매매시점 결정)

  • Huh, Jin-Nyung;Kim, Kyoung-Jae;Han, In-Goo
    • Journal of Intelligence and Information Systems
    • /
    • v.16 no.3
    • /
    • pp.77-97
    • /
    • 2010
  • Market timing is an investment strategy used to obtain excess returns from the financial market. In general, detecting market timing means determining when to buy and sell in order to earn excess returns from trading. In many market timing systems, trading rules have been used as an engine to generate trading signals. On the other hand, some researchers have proposed rough set analysis as a proper tool for market timing because, through its control function, it does not generate a trading signal when the pattern of the market is uncertain. The data for rough set analysis must be discretized from numeric values because rough sets accept only categorical data for analysis. Discretization searches for proper "cuts" in numeric data that determine intervals; all values that lie within an interval are transformed into the same value. In general, there are four methods of data discretization in rough set analysis: equal frequency scaling, expert knowledge-based discretization, minimum entropy scaling, and naïve and Boolean reasoning-based discretization. Equal frequency scaling fixes a number of intervals, examines the histogram of each variable, and then determines cuts so that approximately the same number of samples fall into each interval. Expert knowledge-based discretization determines cuts according to the knowledge of domain experts, obtained through literature review or interviews with experts. Minimum entropy scaling implements an algorithm based on recursively partitioning the value set of each variable so that a local measure of entropy is optimized. Naïve and Boolean reasoning-based discretization searches for categorical values by naïvely scaling the data and then finds the optimized discretization thresholds through Boolean reasoning. Although rough set analysis is promising for market timing, there is little research on the impact of the various data discretization methods on the performance of trading with rough set analysis. In this study, we compare stock market timing models using rough set analysis with various data discretization methods. The research data used in this study are the KOSPI 200 from May 1996 to October 1998. The KOSPI 200 is the underlying index of the KOSPI 200 futures, the first derivative instrument in the Korean stock market. The KOSPI 200 is a market value weighted index consisting of 200 stocks selected by criteria on liquidity and their status in the corresponding industries, including manufacturing, construction, communication, electricity and gas, distribution and services, and financing. The total number of samples is 660 trading days. In addition, this study uses popular technical indicators as independent variables. The experimental results show that the most profitable method for the training sample is naïve and Boolean reasoning-based discretization, but expert knowledge-based discretization is the most profitable method for the validation sample. In addition, expert knowledge-based discretization produced robust performance for both the training and validation samples. We also compared rough set analysis with a decision tree, experimenting with C4.5 for comparison purposes. The results show that rough set analysis with expert knowledge-based discretization produced more profitable rules than C4.5.
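
As a small illustration of the first of these discretization methods, the sketch below computes equal frequency cuts for a numeric technical indicator and maps every value within an interval to the same categorical label. The indicator, bin count, and data are assumptions for demonstration only, not the study's variables.

```python
# Equal frequency discretization: intervals chosen so that each holds roughly
# the same number of samples, then every value in an interval gets one label.
import numpy as np
import pandas as pd

def equal_frequency_cuts(values, n_bins=4):
    # Quantile boundaries yield intervals with approximately equal sample counts.
    return np.quantile(values, np.linspace(0, 1, n_bins + 1)[1:-1])

rng = np.random.default_rng(0)
indicator = rng.normal(50, 15, 660)            # e.g. a technical indicator over 660 trading days
cuts = equal_frequency_cuts(indicator, n_bins=4)
categories = np.digitize(indicator, cuts)      # categorical input suitable for rough set analysis
print(cuts, pd.Series(categories).value_counts().sort_index().tolist())
```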

Chinese Communist Party's Management of Records & Archives during the Chinese Revolution Period (혁명시기 중국공산당의 문서당안관리)

  • Lee, Won-Kyu
    • The Korean Journal of Archival Studies
    • /
    • no.22
    • /
    • pp.157-199
    • /
    • 2009
  • The organization for managing records and archives did not emerge together with the founding of the Chinese Communist Party. Such management became active with the establishment of the Department of Documents (文書科) and its affiliated offices overseeing the reading and safekeeping of official papers, after the formation of the Central Secretariat (中央秘書處) in 1926. Improving the work of the Secretariat's organization became the focus of critical discussions in the early 1930s. The main criticism was that the Secretariat had failed to be cognizant of its political role and had degenerated into a mere "functional organization." The solution to this was the "politicization of the Secretariat's work." Moreover, influenced by the "Rectification Movement" in the 1940s, the party emphasized the responsibility of the Resources Department (材料科), which extended beyond managing documents to collecting, organizing and providing various kinds of important information data. In the meantime, maintaining security in the composition of documents continued to be emphasized through such methods as using different names for figures and organizations or employing special inks for document production. In addition, communications between the central political organs and regional offices were emphasized through regular reports on work activities and the situations of the local areas. The General Secretary not only composed the drafts of the major official documents but also handled the reading and examination of all documents, and thus played a central role in record processing. The records, called archives after undergoing document processing, were placed in safekeeping. This function was handled by the "Document Safekeeping Office (文件保管處)" of the Central Secretariat's Department of Documents. Although the Document Safekeeping Office, also called the "Central Repository (中央文庫)", could no longer accept additional archive transfers beginning in the early 1930s, the Resources Department continued to strengthen throughout the 1940s its role of safekeeping and providing documents and publication materials. In particular, collections of materials for research and study were carried out, and with the recovery of regions which had been under Japanese rule, massive amounts of archive and document materials were collected. After being stipulated by rules in 1931, the archive classification and cataloguing methods became actively systematized, especially in the 1940s. Basically, "subject" classification methods and fundamental cataloguing techniques were adopted. The principle of assuming "importance" and "confidentiality" as the criteria of management emerged from a relatively early period, but the concept or process of appraisal that differentiated the preservation and discarding of documents was not clear. While implementing a system of secure management and restricted access for confidential information, the emphasis on making archive materials available for use was also very strong, as can be seen in the slogan, "the unification of preservation and use." Even during the revolutionary movement and wars, the Chinese Communist Party continued its efforts to strengthen the management and preservation of records & archives. The results were not always desirable, nor was there any guarantee that such experiences would lead to stable development. The historical conditions in which the Chinese Communist Party found itself probably made it inevitable.
The most pronounced characteristics of this process can be found in the fact that they not only pursued efficiency of records & archives management at the functional level but, while strengthening their self-awareness of the political significance impacting the Chinese Communist Party's revolution movement, they also paid attention to the value possessed by archive materials as actual evidence for revolutionary policy research and as historical evidence of the Chinese Communist Party.