• Title/Summary/Keyword: Trade Model


An Investigation on the Optimal Ship Size for Chemical Tankers by Main Shipping Routes (케미컬 탱커선 운항노선별 최적선형에 관한 연구)

  • Kim, Jae-Ho;Kim, Taek-Won;Woo, Su-Han
    • Journal of Navigation and Port Research / v.39 no.6 / pp.439-450 / 2015
  • This study aims to identify characteristics of the chemical tanker market and to determine the optimal chemical tanker size using a total shipping cost model for the main Asian chemical tanker trading routes. Precedent studies on optimal ship size determination and case studies of chemical tankers were reviewed in order to develop a cost model applicable to chemical tankers. The study relies on numerical analysis and includes scenario analysis to reduce the sensitivity of the results. The analysis shows the following. First, under average market conditions, a 12,000 DWT tanker is optimal on the 'Far East-Middle East' services, a 9,000 DWT tanker is most competitive on the 'Far East-South East Asia' services, and a 3,000 DWT tanker is the most economical size on the 'Inner Far East' services. Second, larger chemical tankers gain a greater competitive advantage when bunker fuel prices rise, while smaller ships become more competitive when bunker prices fall. Third, the market fluctuation of chemical tanker time charter rates is less than 20% of the average time charter hire, indicating low volatility, and the relative competitiveness of each ship size remains largely unchanged when time charter rates rise in the same proportion. Fourth, larger chemical tankers have cost advantages when the quantity of each part cargo increases, and smaller tankers are more competitive when part cargo sizes decrease. Finally, a ship's port stay strongly influences the determination of the optimal tanker size: with shorter port stays, larger tankers become more competitive and can remain competitive even on short voyages.
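For readers who want a feel for the kind of cost comparison described above, here is a minimal sketch of a total-shipping-cost comparison across candidate tanker sizes on a single route. The fuel scaling law, cost parameters, route distance, and port times are illustrative assumptions, not the study's actual figures.

```python
# Hedged sketch of a total-shipping-cost comparison across candidate chemical
# tanker sizes. All cost figures, routes, and functional forms are assumptions.

CANDIDATE_DWT = [3_000, 6_000, 9_000, 12_000, 19_000]

def cost_per_ton(dwt, route_nm, port_days_per_voyage, bunker_price,
                 charter_rate_per_day, speed_knots=13.0, utilization=0.9):
    """Very rough annual cost per ton carried for one ship size on one route."""
    sea_days = route_nm / (speed_knots * 24)           # one-way sailing days
    voyage_days = 2 * sea_days + port_days_per_voyage  # round voyage
    voyages_per_year = 350 / voyage_days               # allow some off-hire days
    cargo_per_year = dwt * utilization * voyages_per_year

    fuel_tons_per_day = 0.004 * dwt ** 0.75            # assumed consumption scaling
    fuel_cost = fuel_tons_per_day * 2 * sea_days * voyages_per_year * bunker_price
    charter_cost = charter_rate_per_day * 365
    port_cost = 15_000 * voyages_per_year              # assumed dues per port call

    return (fuel_cost + charter_cost + port_cost) / cargo_per_year

# Example: a hypothetical 'Far East-South East Asia' route of 2,500 nm
costs = {dwt: cost_per_ton(dwt, route_nm=2_500, port_days_per_voyage=6,
                           bunker_price=400, charter_rate_per_day=2.0 * dwt)
         for dwt in CANDIDATE_DWT}
optimal = min(costs, key=costs.get)
print(f"lowest cost per ton at {optimal} DWT")
```

Scenario analysis of the kind the study describes would simply rerun such a comparison while varying bunker price, charter rate, part cargo size, and port stay.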

Relationship of Carbohydrate and Fat Intake with Metabolic Syndrome in Korean Women: The Korea National Health and Nutrition Examination Survey (2007-2016) (한국 여성의 탄수화물/지질 섭취가 대사증후군에 미치는 영향: 국민건강영양조사(2007-2016)를 중심으로)

  • Lee, Jaesang;Kim, Yookyung;Shin, Woo-Kyoung
    • Journal of Korean Home Economics Education Association / v.35 no.1 / pp.1-14 / 2023
  • The objective of the study was to examine the associations of dietary carbohydrate and fat intake with the prevalence of metabolic syndrome in Korean women. A cross-sectional study was employed based on data from the Korea National Health and Nutrition Examination Survey (2007-2016). A total of 22,850 women aged 19 to 69 years were studied after excluding responses from pregnant or lactating women and those with missing metabolic values. Dietary intake data were collected with a 24-hour recall method, and dietary carbohydrate and fat intakes were divided into quintiles. After controlling for confounding variables, multivariable logistic regression and general linear models were used. The findings indicated that HDL cholesterol levels were lower (p for trend<0.01), while triglyceride levels (p for trend=0.04), waist circumference (p for trend<0.01), and systolic blood pressure (p for trend<0.01) were higher among participants in the highest quintile of carbohydrate intake compared to those in the lowest quintile. Participants in the highest quintile of fat intake had lower waist circumference (p for trend=0.02), triglyceride levels (p for trend<0.01), and systolic blood pressure (p for trend<0.01), and higher HDL cholesterol levels (p for trend<0.01), compared to those in the lowest fat intake quintile. Metabolic syndrome was more likely to be present in the highest quintile of carbohydrate intake than in the lowest quintile (5th quintile vs. 1st quintile, OR: 1.32; 95% CI: 1.11 to 1.57). However, metabolic syndrome was less likely to be present in the highest quintile of fat intake than in the lowest quintile (5th quintile vs. 1st quintile, OR: 0.73; 95% CI: 0.61 to 0.86). This study revealed that high dietary carbohydrate intake and low dietary fat intake were associated with metabolic syndrome in Korean women.
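The quintile-based analysis described above can be illustrated with a short sketch: intake is split into quintiles and a logistic regression compares the highest against the lowest quintile. The data below are synthetic and the covariates (age, bmi) are illustrative placeholders; the survey weights and full confounder set used in the study are omitted.

```python
# Hedged sketch of a quintile-based logistic regression on synthetic data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 5_000
df = pd.DataFrame({
    "carb_pct": rng.normal(65, 10, n),      # % of energy from carbohydrate
    "age": rng.integers(19, 70, n),
    "bmi": rng.normal(23, 3, n),
})
# Synthetic outcome loosely tied to carbohydrate share, for illustration only
lin = -4 + 0.03 * df["carb_pct"] + 0.02 * df["age"] + 0.05 * df["bmi"]
df["met_syn"] = rng.binomial(1, 1 / (1 + np.exp(-lin)))

# Divide intake into quintiles (1 = lowest, 5 = highest) and fit the model
df["carb_q"] = pd.qcut(df["carb_pct"], 5, labels=False) + 1
model = smf.logit("met_syn ~ C(carb_q, Treatment(1)) + age + bmi",
                  data=df).fit(disp=0)
print(np.exp(model.params).filter(like="carb_q"))  # ORs for Q2-Q5 vs. Q1
```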

Implications of Shared Growth of Public Enterprises: Korea Hydro & Nuclear Power Case (공공기관의 동반성장 현황과 시사점: 한국수력원자력(주) 사례를 중심으로)

  • Jeon, Young-tae;Hwang, Seung-ho;Kim, Young-woo
    • Journal of Venture Innovation / v.4 no.2 / pp.57-75 / 2021
  • KHNP's shared growth activities are based on the public good. Reflecting its character as a comprehensive energy company, a high-technology plant operator, and a leading company for shared growth, KHNP links its performance indicators with those of its partners and implements a range of measures. Key tasks include maintaining the nuclear power plant ecosystem, improving management conditions for partner companies, strengthening the future capabilities of the nuclear power plant industry, and supporting a virtuous cycle of regional development. These tasks reflect the specific characteristics of nuclear power generation and are designed to embody the spirit of shared growth through win-win cooperation in addressing the challenges of the times. KHNP's shared growth activities can thus be seen as the practice of the spirit of the times (Zeitgeist): the expectation that companies, as public instruments of society, should strive for sustainable growth. KHNP has been striving to establish a creative and leading shared growth ecosystem. In particular, considering the positions of its partners, it has been promoting continuous system improvement to establish a fair trade culture and reduce regulation. It has also continuously identified and implemented new customized support projects that are effective for partner companies and local communities, pursuing shared growth through organic collaboration with partners and stakeholders. As detailed tasks for maintaining the nuclear power plant ecosystem, it presents fostering new markets and new industries, maintaining supply chains, and emergency support related to COVID-19, reflecting the social public good in the wake of the COVID-19 crisis. To improve the management conditions of partner companies, productivity improvement, human resources development, and customized funding are being implemented as detailed tasks. This is a plan to practice win-win growth with partner companies, as emphasized by corporate social responsibility (CSR) and ISO 26000, while remaining faithful to the core business. Until now, ESG management has focused on the environmental field to cope with the catastrophe of climate change, and KHNP presents a public enterprise-type model in this field. To strengthen the future capabilities of the nuclear power plant industry as a state-of-the-art energy company, it has set tasks of attracting investment from partner companies, localization and new technology R&D, and commercialization of innovative technologies, as a concrete effort to develop advanced nuclear power plant technology for eco-friendly development. Meanwhile, the EU is preparing a social taxonomy to address the social sector, another important axis of ESG management, following the Green Taxonomy, its classification system for the environmental sector. KHNP's shared growth activities include enhancing local vitality, increasing income for the underprivileged, and overcoming the COVID-19 crisis, which are representative social taxonomy fields. The draft social taxonomy promoted by the EU was announced in July, and the activities promoted by KHNP are consistent with it, leading the practice of social taxonomy.

The Development of an Aggregate Power Resource Configuration Model Based on the Renewable Energy Generation Forecasting System (재생에너지 발전량 예측제도 기반 집합전력자원 구성모델 개발)

  • Eunkyung Kang;Ha-Ryeom Jang;Seonuk Yang;Sung-Byung Yang
    • Journal of Intelligence and Information Systems / v.29 no.4 / pp.229-256 / 2023
  • The increase in telecommuting and household electricity demand due to the pandemic has led to significant changes in electricity demand patterns. This has made it difficult to identify generation from KEPCO's PPAs (power purchase agreements) and from residential solar installations, and has added to the challenges of electricity demand forecasting and grid operation for the power exchange. Unlike other energy resources, electricity is difficult to store, so it is essential to maintain a balance between energy production and consumption; a shortage or overproduction of electricity can cause significant instability in the energy system, so the supply and demand of electricity must be managed effectively. Especially in the Fourth Industrial Revolution, the importance of data has increased, and problems such as large-scale fires and power outages can have a severe impact. Therefore, in the field of electricity, it is crucial to accurately predict the amount of generation, such as renewable energy, along with the exact demand for electricity, for proper generation management; this helps to reduce unnecessary power production and to utilize energy resources efficiently. In this study, we reviewed the renewable energy generation forecasting system, its objectives, and its practical applications in order to construct optimal aggregated power resources using data from 169 power plants provided by the Ministry of Trade, Industry, and Energy; we developed an aggregation algorithm that takes the settlement rules of the forecasting system into account and applied it in the analysis to synthesize and interpret the results. The study developed an optimal aggregation algorithm and derived an aggregation configuration (Result_Number 546) that reached 80.66% of the maximum settlement amount, and it identified plants that increase the settlement amount (B1783, B1729, N6002, S5044, B1782, N6006) and plants that decrease it (S5034, S5023, S5031) when plants are aggregated. This study is significant as the first to develop an optimal aggregation algorithm using aggregated power resources as the research unit, and we expect the results to be used to improve the stability of the power system and to utilize energy resources efficiently.
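As a rough illustration of the kind of aggregation search the study describes, the sketch below greedily adds plants to an aggregation while a settlement function keeps improving. The settlement rule (an incentive per kWh paid when the aggregated forecast error stays under a threshold) and all plant data are simplified assumptions, not the actual market rule or the study's 169-plant dataset.

```python
# Hedged sketch of a greedy aggregation search for maximizing settlement.
import numpy as np

rng = np.random.default_rng(42)
n_plants, n_hours = 30, 24 * 30
actual = rng.uniform(0.5, 5.0, (n_plants, n_hours))          # MWh per hour
forecast = actual * (1 + rng.normal(0, 0.12, actual.shape))   # noisy forecasts

def settlement(members):
    """Incentive (arbitrary units) for one candidate aggregation."""
    a = actual[members].sum(axis=0)
    f = forecast[members].sum(axis=0)
    err = np.abs(f - a) / np.maximum(a, 1e-9)
    rate = np.where(err <= 0.06, 4.0, np.where(err <= 0.08, 3.0, 0.0))
    return float((rate * a).sum())

# Greedy search: add the plant that raises settlement most, stop when none helps
members, remaining = [], list(range(n_plants))
while remaining:
    best_value, best_plant = max((settlement(members + [p]), p) for p in remaining)
    if members and best_value <= settlement(members):
        break
    members.append(best_plant)
    remaining.remove(best_plant)

print(f"{len(members)} plants aggregated, settlement = {settlement(members):.0f}")
```

A greedy pass like this is only one possible search strategy; the study's own algorithm may differ, but the objective (picking the member set whose combined forecast earns the largest settlement) is the same kind of problem.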

The Effect of Common Features on Consumer Preference for a No-Choice Option: The Moderating Role of Regulatory Focus (재몰유선택적정황하공동특성대우고객희호적영향(在没有选择的情况下共同特性对于顾客喜好的影响): 조절초점적조절작용(调节焦点的调节作用))

  • Park, Jong-Chul;Kim, Kyung-Jin
    • Journal of Global Scholars of Marketing Science / v.20 no.1 / pp.89-97 / 2010
  • This study researches the effects of common features on a no-choice option with respect to regulatory focus theory. The primary interest is in three factors and their interrelationship: common features, the no-choice option, and regulatory focus. Prior studies have compiled a vast body of research in these areas. First, the "common features effect" has been observed by many noted marketing researchers. Tversky (1972) proposed the seminal theory, the EBA (elimination by aspects) model. According to this theory, consumers are prone to focus only on unique features during comparison processing, dismissing any common features as redundant information. More recently, however, provocative ideas have challenged the EBA model by asserting that common features really do affect consumer judgment. Chernev (1997) first reported that adding common features mitigates the choice gap because of the increasing perception of similarity among alternatives. Later, however, Chernev (2001) argued against his earlier perspective, proposing that common features may impose a cognitive load on consumers, who may therefore prefer heuristic processing to systematic processing. This brings one question to the forefront: do common features affect consumer choice, and if so, what are the concrete effects? This study tries to answer the question with respect to the no-choice option and regulatory focus. Second, some researchers hold that the no-choice option is itself an attractive alternative for consumers, who are likely to avoid choosing in the context of difficult trade-offs or mental conflicts. Hope for the future may also increase selection of the no-choice option, in the context of optimism or the expectation that a more satisfactory alternative will appear later. Other issues reported in this domain are time pressure, consumer confidence, and the number of alternatives (Dhar and Nowlis 1999; Lin and Wu 2005; Zakay and Tsal 1993). This study casts the no-choice option in yet another perspective: the interactive effects between common features and regulatory focus. Third, regulatory focus theory is a very popular theme in recent marketing research. It suggests that consumers have two opposing focal goals: promotion versus prevention. A promotion focus deals with the concepts of hope, inspiration, achievement, or gain, whereas a prevention focus involves duty, responsibility, safety, or loss aversion. Thus, while consumers with a promotion focus tend to take risks for gain, the same does not hold true for a prevention focus. Regulatory focus theory predicts consumers' emotions, creativity, attitudes, memory, performance, and judgment, as documented in a vast field of marketing and psychology articles. Exploring consumer choice and common features from this perspective is a somewhat novel viewpoint in the regulatory focus literature. These reviews inspire this study of the possible interaction between regulatory focus and common features on the no-choice option. Specifically, adding common features rather than omitting them may increase the no-choice ratio for prevention-focused consumers, but decrease it for promotion-focused consumers. The reasoning is that when prevention-focused consumers encounter common features, they may perceive higher similarity among the alternatives; this conflict among similar options would increase the no-choice ratio. Promotion-focused consumers, however, may perceive common features as a cue for confirmation bias; their confirmatory processing would make their prior preference more robust, so the no-choice ratio may shrink. This logic is verified in two experiments. The first is a 2x2 between-subjects design (presence of common features x regulatory focus) using digital cameras as the stimulus, a product very familiar to young subjects. The regulatory focus variable was median-split based on an eleven-item measure. Common features included zoom, weight, memory, and battery, whereas the other two attributes (pixels and price) were unique features. The results supported our hypothesis that adding common features enhanced the no-choice ratio only for prevention-focused consumers, not for those with a promotion focus, confirming the hypothesized interaction between regulatory focus and common features. Prior research had suggested that including common features has an effect on consumer choice, but this study shows that common features affect choice differently across consumer segments. The second experiment replicated the first and differed in only two respects: a priming manipulation of regulatory focus and a different stimulus. For the promotion focus condition, subjects wrote an essay using words such as profit, inspiration, pleasure, achievement, development, hedonic, change, and pursuit; for the prevention condition, they used words such as persistence, safety, protection, aversion, loss, responsibility, and stability. The stimulus, a room for rent, had common features (sunshine, facilities, ventilation) and unique features (travel distance and building condition), with attribute levels and valence varied to replicate the first experiment. The hypothesis was supported again, with a significant interaction between regulatory focus and common features. Thus, these studies show the dual effects of common features on consumer choice of the no-choice option: adding common features may enhance or mitigate no-choice, contradictory as it may sound. Under a prevention focus, adding common features is likely to raise the no-choice ratio because of increased mental conflict; under a promotion focus, it tends to shrink the ratio, perhaps because of confirmation bias. The research has practical and theoretical implications for marketers, who may need to consider common features carefully in display contexts according to consumer segmentation (i.e., promotion vs. prevention focus). Theoretically, the results suggest a meaningful moderator between common features and no-choice, in that the effect on the no-choice option partly depends on regulatory focus, whether chronic or situationally induced. Finally, in light of some shortcomings of the research, such as overlooked attribute importance, the low overall no-choice ratio, and external validity issues, we hope it stimulates future studies to explore the little-known world of the no-choice option.
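The reported interaction could, for example, be examined with a logistic model of the no-choice outcome on the two factors and their product. The sketch below simulates data that merely mimic the described pattern; the cell sizes and effect sizes are assumptions, not the experiments' results.

```python
# Hedged sketch: logistic test of a common-features x regulatory-focus
# interaction on choosing the no-choice option, using simulated data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n_per_cell = 60
rows = []
for common in (0, 1):              # common features absent / present
    for prevention in (0, 1):      # promotion (0) vs. prevention (1) focus
        # Assumed pattern: common features raise no-choice under prevention
        # focus and lower it slightly under promotion focus
        p = 0.20 + 0.15 * common * prevention - 0.05 * common * (1 - prevention)
        for _ in range(n_per_cell):
            rows.append({"common": common, "prevention": prevention,
                         "no_choice": rng.binomial(1, p)})
df = pd.DataFrame(rows)

fit = smf.logit("no_choice ~ common * prevention", data=df).fit(disp=0)
print(fit.summary2().tables[1][["Coef.", "P>|z|"]])  # interaction term of interest
```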

Estimation of GARCH Models and Performance Analysis of Volatility Trading System using Support Vector Regression (Support Vector Regression을 이용한 GARCH 모형의 추정과 투자전략의 성과분석)

  • Kim, Sun Woong;Choi, Heung Sik
    • Journal of Intelligence and Information Systems / v.23 no.2 / pp.107-122 / 2017
  • Volatility in stock market returns is a measure of investment risk. It plays a central role in portfolio optimization, asset pricing, and risk management, as well as in most theoretical financial models. Engle (1982) presented a pioneering paper on stock market volatility that explains the time-variant characteristics embedded in stock market return volatility. His model, Autoregressive Conditional Heteroscedasticity (ARCH), was generalized by Bollerslev (1986) as the GARCH model. Empirical studies have shown that GARCH models describe well the fat-tailed return distributions and the volatility clustering phenomenon appearing in stock prices. The parameters of GARCH models are generally estimated by maximum likelihood estimation (MLE) based on the standard normal density. However, since the 1987 Black Monday crash, stock market prices have become very complex and have shown a great deal of noise. Recent studies have started to apply artificial intelligence approaches to estimating the GARCH parameters as a substitute for MLE. This paper presents an SVR-based GARCH process and compares it with the MLE-based GARCH process for estimating the parameters of GARCH models, which are known to forecast stock market volatility well. The kernel functions used in the SVR estimation process are linear, polynomial, and radial. We analyzed the suggested models with the KOSPI 200 Index, which comprises 200 blue-chip stocks listed on the Korea Exchange. We sampled KOSPI 200 daily closing values from 2010 to 2015, giving 1,487 daily observations; 1,187 days were used to train the suggested GARCH models and the remaining 300 days were used as testing data. First, symmetric and asymmetric GARCH models were estimated by MLE. We forecasted KOSPI 200 Index return volatility, and the MSE metric shows better results for asymmetric GARCH models such as E-GARCH or GJR-GARCH, consistent with the documented non-normal return distribution characteristics of fat tails and leptokurtosis. Compared with the MLE estimation process, SVR-based GARCH models outperform the MLE methodology in KOSPI 200 Index return volatility forecasting; the polynomial kernel function shows exceptionally low forecasting accuracy. We suggest an Intelligent Volatility Trading System (IVTS) that utilizes the forecasted volatility. The IVTS entry rules are as follows: if tomorrow's forecasted volatility increases, buy volatility today; if it decreases, sell volatility today; and if the forecasted volatility direction does not change, hold the existing buy or sell position. IVTS is assumed to buy and sell historical volatility values. This is somewhat unrealistic because historical volatility values cannot themselves be traded, but our simulation results are meaningful since the Korea Exchange introduced a volatility futures contract that traders have been able to trade since November 2014. The trading systems with SVR-based GARCH models show higher returns than MLE-based GARCH in the testing period. The profitable trade percentages of MLE-based GARCH IVTS models range from 47.5% to 50.0%, while those of SVR-based GARCH IVTS models range from 51.8% to 59.7%. MLE-based symmetric S-GARCH shows a +150.2% return and SVR-based symmetric S-GARCH shows a +526.4% return. MLE-based asymmetric E-GARCH shows a -72% return and SVR-based asymmetric E-GARCH shows a +245.6% return. MLE-based asymmetric GJR-GARCH shows a -98.7% return and SVR-based asymmetric GJR-GARCH shows a +126.3% return. The linear kernel function shows higher trading returns than the radial kernel function. The best performance of SVR-based IVTS is +526.4%, while that of MLE-based IVTS is +150.2%; SVR-based GARCH IVTS also shows higher trading frequency. This study has some limitations. Our models are based solely on SVR; other artificial intelligence models should be examined in the search for better performance. We do not consider costs incurred in the trading process, including brokerage commissions and slippage. IVTS trading performance is unrealistic since we use historical volatility values as trading objects. Accurate forecasting of stock market volatility is essential in real trading as well as in asset pricing models. Further studies on other machine learning-based GARCH models can give better information to stock market investors.
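A minimal sketch of the two estimation routes compared in the paper is shown below: MLE-based GARCH(1,1) fitting via the arch package versus an SVR regression on lagged squared returns as a volatility proxy. The simulated return series and the SVR feature construction are assumptions; the authors' exact SVR-GARCH specification is not reproduced here.

```python
# Hedged sketch contrasting MLE-based GARCH(1,1) with an SVR volatility proxy.
import numpy as np
from arch import arch_model
from sklearn.svm import SVR

rng = np.random.default_rng(7)
returns = rng.standard_t(df=5, size=1487)          # pseudo daily returns (percent)

# 1) Classical MLE estimation of a symmetric GARCH(1,1)
res = arch_model(returns, vol="GARCH", p=1, q=1).fit(disp="off")
mle_forecast = res.forecast(horizon=1).variance.iloc[-1, 0]

# 2) SVR proxy: predict the next squared return from lagged squared returns
sq = returns ** 2
window = 5
X = np.array([sq[i - window:i] for i in range(window, len(sq))])
y = sq[window:]
split = len(X) - 300                               # hold out the last 300 days, as in the paper
svr = SVR(kernel="rbf").fit(X[:split], y[:split])
svr_forecast = svr.predict(X[split:split + 1])[0]

print(f"MLE GARCH one-step variance forecast: {mle_forecast:.4f}")
print(f"SVR one-step squared-return forecast: {svr_forecast:.4f}")
```

A trading rule in the spirit of the IVTS would then compare today's forecast with yesterday's and go long (short) volatility when the forecast rises (falls).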

Consumer's Negative Brand Rumor Acceptance and Rumor Diffusion (소비자의 부정적 브랜드 루머의 수용과 확산)

  • Lee, Won-jun;Lee, Han-Suk
    • Asia Marketing Journal / v.14 no.2 / pp.65-96 / 2012
  • Brands have received much attention in marketing research. When consumers consume products or services, they are exposed to many brand-related stimuli, including brand personality, brand experience, brand identity, brand communications, and so on. A special kind of crisis occasionally confronting companies' brand management today is the brand-related rumor. An important influence on consumers' purchase decision making is the word of mouth spread by other consumers, and most decisions are influenced by others' recommendations. In light of this influence, firms have good reason to study and understand consumer-to-consumer communication such as brand rumors. The importance of brand rumors to marketers is increasing as the number of internet users and SNS (social network service) sites grows. Due to the development of internet technology, people can spread rumors without limitations of time, space, and place. However, relatively few studies have been published in marketing journals, and little is known about brand rumors in the marketplace. The study of rumor has a long history in all of the major social sciences, but very few studies have dealt with the antecedents and consequences of any kind of brand rumor. A rumor has generally been described as a story or statement in general circulation without proper confirmation or certainty as to fact; it can also be defined as an unconfirmed proposition passed along from person to person. Rosnow (1991) claimed that rumors are transmitted because people need to explain ambiguous and uncertain events, and that talking about them reduces the associated anxiety. Negative rumors in particular are believed to have the potential to devastate a company's reputation and its relations with customers. From the marketer's perspective, negative rumors are considered harmful and extremely difficult to control; they threaten a company's sustainability and sometimes lead to a negative brand image and loss of customers. Thus there is a growing concern that these negative rumors can damage brands' reputations and lead to financial disaster as well. In this study we aimed to distinguish the antecedents of brand rumor transmission and to investigate the effects of brand rumor characteristics on rumor spread intention. We also identified key components of personal acceptance of brand rumors. From a contextualist perspective, we tried to unify the traditional psychological and sociological views. In this unified research approach, we defined brand rumor characteristics based on five major variables that have been found to influence rumor spread intention: usefulness, source credibility, message credibility, worry, and vividness, which encompass the multi-level elements of a brand rumor. We also selected product involvement as a control variable. For the empirical research, an imaginary Korean kimchi brand and a related contamination rumor were created and presented. Questionnaires were collected from 178 Korean respondents. Data were collected from college students who had experience with the focal product; college students were regarded as good subjects because they tend to express their opinions in detail. The PLS (partial least squares) method was adopted to analyze the relations between the variables in the model. The most widely adopted causal modeling method is LISREL; however, it is poorly suited to relatively small samples and can yield improper solutions in some cases. PLS has been developed to avoid some of these limitations and to provide more reliable results. To test reliability, Cronbach's alpha was examined using SPSS 16, and all values were appropriate, ranging between .802 and .953. Subsequently, confirmatory factor analysis was conducted successfully, and structural equation modeling was used to analyze the research model using smartPLS (ver. 2.0). Overall, the R2 of rumor adoption is .476 and the R2 of rumor transmission intention is .218, and the overall model showed a satisfactory fit. The empirical results can be summarized as follows. The brand rumor characteristics of source credibility, message credibility, worry, and vividness affect the argument strength of the rumor, and argument strength in turn affects rumor transmission intention. On the other hand, the relationship between perceived usefulness and argument strength is not significant, and the moderating effect of product involvement on the relation between argument strength and rumor word-of-mouth intention is not supported either. Consequently, this study suggests managerial and academic implications for corporate crisis management planning, PR, and brand management. The results show marketers that rumor is a critical factor in managing strong brand assets. For researchers, brand rumor should become an important topic of interest for understanding the relationship between consumer and brand. Recently, many brand managers and marketers have focused on the short-term view, concentrating only on strengthening the positive brand image. This study suggests that effective brand management requires managing negative brand rumors with a long-term view of marketing decisions.
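As a small illustration of the reliability check mentioned above, the sketch below computes Cronbach's alpha for one multi-item construct on synthetic item scores; the study itself ran this check in SPSS 16 and estimated the structural model in smartPLS 2.0 on its survey data.

```python
# Hedged sketch: Cronbach's alpha for a multi-item scale, on synthetic data.
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: respondents x items matrix of scale scores."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

rng = np.random.default_rng(3)
latent = rng.normal(size=(178, 1))                     # 178 respondents, as in the study
items = latent + rng.normal(scale=0.5, size=(178, 4))  # four correlated items
print(f"alpha = {cronbach_alpha(items):.3f}")
```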


A Study on Modernization of International Conventions Relating to Aviation Security and Implementation of National Legislation (항공보안 관련 국제협약의 현대화와 국내입법의 이행 연구)

  • Lee, Kang-Bin
    • The Korean Journal of Air & Space Law and Policy / v.30 no.2 / pp.201-248 / 2015
  • In Korea, the number of acts of unlawful interference on board aircraft has increased continuously with the growth of aviation demand: there were 55 incidents in 2000 and 354 incidents in 2014, with an average of 211 incidents a year over the past five years. In 1963, a number of states adopted the Convention on Offences and Certain Other Acts Committed on Board Aircraft (the Tokyo Convention 1963) as the first worldwide international legal instrument on aviation security. The Tokyo Convention took effect in 1969 and, shortly afterward, the Convention for the Suppression of Unlawful Seizure of Aircraft (the Hague Convention 1970) was adopted in 1970, followed by the Convention for the Suppression of Unlawful Acts Against the Safety of Civil Aviation (the Montreal Convention 1971) in 1971. After the 9/11 attacks in 2001, the Convention on the Suppression of Unlawful Acts Relating to International Civil Aviation (the Beijing Convention 2010) was adopted in 2010 to amend and supplement the Montreal Convention 1971, and the Protocol Supplementary to the Convention for the Suppression of Unlawful Seizure of Aircraft (the Beijing Protocol 2010) was adopted in 2010 to supplement the Hague Convention 1970. Since then, in response to the increased number of cases of unruly behavior on board aircraft, which have escalated in both severity and frequency, the Montreal Protocol, which is seen as an amendment to the Tokyo Convention 1963, was adopted in 2014. Korea has ratified the Tokyo Convention 1963, the Hague Convention 1970, the Montreal Convention 1971, the Montreal Supplementary Protocol 1988, and the Convention on the Marking of Plastic Explosives 1991, which have proven to be effective. Under the Tokyo Convention, ratified in 1970, Korea enacted the Aircraft Navigation Safety Act in 1974, as well as the Aviation Safety and Security Act that replaced it in August 2002; the title of the Aviation Safety and Security Act was changed to the Aviation Security Act in April 2014. The Aviation Security Act is essentially implementing legislation for the Tokyo Convention and the Hague Convention, and its language is generally broader than the unruly and disruptive behavior described in Sections 1-3 of the model legislation in ICAO Circular 288. The Aviation Security Act already reflects considerable parts of the national legislation required to implement the Beijing Convention and Beijing Protocol 2010 and the Montreal Protocol 2014, the modernized international conventions relating to aviation security. However, when these international conventions come into effect and Korea ratifies them, the national legislation that should be amended or newly provided in the Aviation Security Act includes the following: jurisdiction, the definition of 'in flight', immunity from actions against the aircraft commander, the compulsory delivery of the offender by the aircraft commander, the strengthening of penalties on offenders, the extension of application to accomplices, and the observance of the international conventions. Among these, the Korean legislation is notably silent on the scope of jurisdiction. Therefore, in order for jurisdiction to be extended to extra-territorial cases of unruly and disruptive offences, it is desirable that either the Aviation Security Act or the general criminal code be revised. In conclusion, in order to meet intelligent and diverse aviation threats, the Korean government should closely review the contents of international conventions relating to aviation security and the current status of their ratification by each state, and should make efforts to improve the legislation relating to aviation security and the aviation security system for the ratification of international conventions and the implementation of national legislation under them.

The Framework of Research Network and Performance Evaluation on Personal Information Security: Social Network Analysis Perspective (개인정보보호 분야의 연구자 네트워크와 성과 평가 프레임워크: 소셜 네트워크 분석을 중심으로)

  • Kim, Minsu;Choi, Jaewon;Kim, Hyun Jin
    • Journal of Intelligence and Information Systems / v.20 no.1 / pp.177-193 / 2014
  • Over the past decade, there has been a rapid diffusion of electronic commerce and a rising number of interconnected networks, resulting in an escalation of security threats and privacy concerns. Electronic commerce has a built-in trade-off between the necessity of providing at least some personal information to consummate an online transaction and the risk of negative consequences from providing such information. More recently, the frequent disclosure of private information has raised concerns about privacy and its impacts, motivating researchers in various fields to explore information privacy issues. Accordingly, the need for information privacy policies and technologies for collecting and storing data has grown, and information privacy research has increased across fields such as medicine, computer science, business, and statistics. Various information security incidents have made finding experts in the information security field an important issue, and objective measures for finding such experts are required, as the process is currently rather subjective. Based on social network analysis, this paper proposes a framework to evaluate the process of finding experts in the information security field. We collected data from the National Discovery for Science Leaders (NDSL) database, initially gathering about 2,000 papers covering the period between 2005 and 2013. After dropping outliers and irrelevant papers, 784 papers remained to test the suggested hypotheses. The co-authorship network data on co-author relationships, publishers, affiliations, and so on were analyzed using social network measures, including centrality and structural holes. The results of our model estimation are as follows. With the exception of Hypothesis 3, which deals with the relationship between eigenvector centrality and performance, all of our hypotheses were supported. In line with our hypothesis, degree centrality (H1) had a positive influence on researchers' publishing performance (p<0.001), indicating that as the degree of cooperation increased, publishing performance increased. In addition, closeness centrality (H2) was positively associated with publishing performance (p<0.001), suggesting that as the efficiency of information acquisition increased, publishing performance increased. This paper identified differences in publishing performance among researchers. The analysis can be used to identify core experts and evaluate their performance in the information privacy research field, and the co-authorship network for information privacy can aid in understanding the deep relationships among researchers. In addition, by extracting characteristics of publishers and affiliations, this paper shows how social network measures can be used to find experts in the information privacy field. Social concerns about securing the objectivity of experts have increased because experts in the information privacy field frequently participate in political consultation and in business education support and evaluation. In terms of practical implications, this research suggests an objective framework for identifying experts in the information privacy field and is useful for people in charge of managing research human resources. This study has some limitations, which provide opportunities and suggestions for future research. The small sample size makes it difficult to generalize the findings on differences in information diffusion according to media and proximity; further studies could therefore consider a larger sample and greater media diversity, and explore in more detail differences in information diffusion according to media type and information proximity. Moreover, previous network research has commonly assumed a causal relationship between the independent and dependent variables (Kadushin, 2012). In this study, degree centrality as an independent variable might have a causal relationship with performance as a dependent variable; however, in network analysis research, network indices can only be computed after the network relationships have been formed. An annual analysis could help mitigate this limitation.
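The centrality and structural hole measures used in the study can be computed with standard tools; the sketch below does so with networkx on a toy co-authorship graph. The edge list is invented, whereas the study built its network from 784 NDSL papers.

```python
# Hedged sketch of the social network measures on a toy co-authorship graph.
import networkx as nx

edges = [("A", "B"), ("A", "C"), ("B", "C"), ("C", "D"),
         ("D", "E"), ("E", "F"), ("C", "F"), ("F", "G")]
G = nx.Graph(edges)   # nodes = researchers, edges = co-authorship ties

degree = nx.degree_centrality(G)            # H1: breadth of cooperation
closeness = nx.closeness_centrality(G)      # H2: efficiency of information acquisition
eigenvector = nx.eigenvector_centrality(G)  # H3: ties to well-connected researchers
constraint = nx.constraint(G)               # structural holes (lower = more brokerage)

for node in sorted(G):
    print(node, round(degree[node], 2), round(closeness[node], 2),
          round(eigenvector[node], 2), round(constraint[node], 2))
```

In the study's setting, measures like these would then serve as independent variables in a regression on each researcher's publishing performance.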