• Title/Summary/Keyword: requirements analysis


An Empirical Study on the Determinants of Supply Chain Management Systems Success from Vendor's Perspective (참여자관점에서 공급사슬관리 시스템의 성공에 영향을 미치는 요인에 관한 실증연구)

  • Kang, Sung-Bae;Moon, Tae-Soo;Chung, Yoon
    • Asia Pacific Journal of Information Systems / v.20 no.3 / pp.139-166 / 2010
  • Supply chain management (SCM) systems have emerged as strong managerial tools for manufacturing firms seeking to enhance competitive strength. Despite large investments in SCM systems, many companies are not fully realizing the promised benefits. A review of the literature on the adoption, implementation, and success factors of inter-organizational systems (IOS) and electronic data interchange (EDI) systems shows that this issue has been examined from multiple theoretical perspectives, and many researchers have attempted to identify the factors that influence the success of system implementation. However, the existing studies have two drawbacks in revealing the determinants of implementation success. First, previous research raises questions about the appropriateness of the research subjects selected. Most SCM systems operate in the form of private industrial networks, where the participants consist of two distinct groups: focus companies and vendors. The focus companies are the primary actors in developing and operating the systems, while vendors are passive participants connected to the system in order to supply raw materials and parts to the focus companies. Under these circumstances, there are three ways to select research subjects: focus companies only, vendors only, or the two parties grouped together. It is hard to find research that uses focus companies exclusively as subjects, probably because the sample size would be insufficient for statistical analysis. Most research has been conducted using data collected from both groups. We argue that SCM success factors cannot be correctly identified in this case. The focus companies and the vendors differ in many respects relevant to system implementation: firm size, managerial resources, bargaining power, organizational maturity, and so on. There is no obvious reason to believe that the success factors of the two groups are identical. Grouping the two also raises questions about measuring system success. The benefits of the systems may not be distributed evenly between the two groups; one group's benefits might be realized at the expense of the other, considering that vendors participating in SCM systems are under continuous pressure from the focus companies with respect to prices, quality, and delivery time. Therefore, by combining the system outcomes of both groups we cannot correctly measure the benefits obtained by each group. Second, the measures of system success adopted in previous research fall short in measuring SCM success. User satisfaction, system utilization, and user attitudes toward the systems are the most commonly used success measures in existing studies. These measures were developed as proxy variables in studies of decision support systems (DSS), where the contribution of the systems to organizational performance is very difficult to measure. Unlike DSS, SCM systems have more specific goals, such as cost savings, inventory reduction, quality improvement, shorter lead times, and higher customer service levels. We maintain that more specific measures can be developed, instead of proxy variables, to measure the system benefits correctly. The purpose of this study is to find the determinants of SCM systems success from the perspective of vendor companies.
In developing the research model, we focused on selecting success factors appropriate for the vendors through a review of past research, and on developing more accurate success measures. The variables are classified into technological, organizational, and environmental factors on the basis of the TOE (Technology-Organization-Environment) framework. The model consists of three independent variables (competition intensity, top management support, and information system maturity), one mediating variable (collaboration), one moderating variable (government support), and a dependent variable (system success). The system success measures were developed to reflect the operational benefits of the SCM systems: improvement in planning and analysis capabilities, faster throughput, cost reduction, task integration, and improved products and customer service. The model was validated using survey data collected from 122 vendors participating in SCM systems in Korea. Mediation was tested with hierarchical regression analysis on collaboration, and the moderating effect of government support was examined with moderated multiple regression (a sketch follows this abstract). The results show that information system maturity and top management support are the most important determinants of SCM system success. Supply chain technologies that standardize data formats and enhance information sharing may be adopted under the influence of the focal company in a private industrial network in order to streamline transactions and improve inter-organizational communication. In particular, the need to develop and sustain information system maturity provides the focus and purpose to successfully overcome information system obstacles and resistance to innovation diffusion within the supply chain network. The support of top management helps focus efforts toward the realization of inter-organizational benefits and lends credibility to the functional managers responsible for implementation. The active involvement, vision, and direction of high-level executives provide the impetus needed to sustain SCM implementation. The quality of collaboration relationships is also positively related to the outcome variable, and collaboration is found to mediate between the influencing factors and implementation success. Higher levels of inter-organizational collaboration behaviors, such as shared planning and flexibility in coordinating activities, were strongly linked to vendors' trust in the supply chain network. Government support moderates the effect of IS maturity, competition intensity, and top management support on collaboration and on SCM implementation success. In general, vendor companies face substantially greater risks in SCM implementation than larger companies do because of severe constraints on financial and human resources and limited education on SCM systems. Beyond resources, vendors generally lack computer experience and sufficient internal SCM expertise. For these reasons, government support may establish requirements for firms doing business with the government or provide incentives to adopt and implement SCM systems or practices. Government support yields significant improvements in SCM implementation success when IS maturity, competition intensity, top management support, and collaboration are low.
The environmental characteristic of competition intensity has no direct effect on SCM system success from the vendor's perspective. However, vendors facing above-average competition intensity will have a greater need for changing technology. This suggests that companies trying to implement SCM systems should set up compatible supply chain networks and high-quality collaboration relationships for implementation and performance.
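A minimal sketch of the mediation and moderation tests described above, using OLS in statsmodels. The file and variable names (scm_survey.csv, is_maturity, top_mgmt, competition, collaboration, gov_support, success) are illustrative assumptions, not the study's actual instrument.

```python
# Hierarchical-regression mediation test and moderated multiple regression,
# sketched under assumed variable names.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("scm_survey.csv")  # hypothetical file, one row per vendor

# Step-wise regressions for mediation via collaboration:
m1 = smf.ols("collaboration ~ is_maturity + top_mgmt + competition", data=df).fit()
m2 = smf.ols("success ~ is_maturity + top_mgmt + competition", data=df).fit()
m3 = smf.ols("success ~ is_maturity + top_mgmt + competition + collaboration",
             data=df).fit()
# Mediation is suggested if the predictors' coefficients weaken from m2 to m3
# while collaboration remains significant in m3.

# Moderated multiple regression: interaction terms test whether government
# support moderates each predictor's effect on SCM success.
m4 = smf.ols("success ~ (is_maturity + top_mgmt + competition) * gov_support",
             data=df).fit()
print(m4.summary())
```

In the patsy formula of m4, `(a + b + c) * d` expands to all main effects plus each predictor-by-moderator interaction, which is the standard moderated-regression setup.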

Analysis of Metadata Standards of Record Management for Metadata Interoperability From the viewpoint of the Task model and 5W1H (메타데이터 상호운용성을 위한 기록관리 메타데이터 표준 분석 5W1H와 태스크 모델의 관점에서)

  • Baek, Jae-Eun;Sugimoto, Shigeo
    • The Korean Journal of Archival Studies / no.32 / pp.127-176 / 2012
  • Metadata is well recognized as one of the foundational factors in archiving and long-term preservation of digital resources. There are several metadata standards for records management, archives, and preservation, e.g., ISAD(G), EAD, AGRkMS, PREMIS, and OAIS. Careful consideration is important in selecting appropriate metadata standards in order to design metadata schemas that meet the requirements of a particular archival system, and interoperability of metadata with other systems should be considered in schema design. In our previous research, we presented a feature analysis of metadata standards that identified the primary resource lifecycle stages where each standard is applied, and clarified that no single metadata standard can cover the whole records lifecycle for archiving and preservation. Through this feature analysis, we analyzed the features of metadata across the whole records lifecycle and clarified the relationships between the metadata standards and the stages of the lifecycle; more detailed analysis was left for future study. This paper proposes to analyze the metadata schemas from the viewpoint of the tasks performed in the lifecycle. Metadata schemas are primarily defined to describe properties of a resource in accordance with the purposes of description, e.g., finding aids, records management, preservation, and so forth. In other words, the metadata standards are resource- and purpose-centric, and the resource lifecycle is not explicitly reflected in the standards. There are no systematic methods for mapping between different metadata standards in accordance with the lifecycle. This paper proposes a method for mapping between metadata standards based on the tasks contained in the resource lifecycle. We first propose a Task Model to clarify the tasks applied to resources in each stage of the lifecycle. This model is created as a task-centric model to identify features of metadata standards and to create mappings among elements of those standards. It is important to categorize the elements in order to limit the semantic scope of mapping among elements and to decrease the number of element combinations for mapping. This paper proposes to use the 5W1H (Who, What, Why, When, Where, How) model to categorize the elements. 5W1H categories are generally used for describing events, e.g., in news articles. As performing a task on a resource causes an event, and metadata elements are used in the event, we consider the 5W1H categories adequate for categorizing the elements. Using these categories, we determine the features of every element of the metadata standards AGLS, AGRkMS, PREMIS, EAD, and OAIS, plus an attribute set extracted from the DPC decision flow. Then we perform element mapping between the standards and find the relationships between them. In this study, we defined a set of terms for each 5W1H category that typically appear in element definitions, and used those terms to categorize the elements. For example, if the definition of an element includes terms such as "person" or "organization", which denote a subject contributing to the creation or modification of a resource, the element is categorized into the Who category. A single element can be categorized into one or more 5W1H categories. Thus, we categorized every element of the metadata standards using the 5W1H model and then carried out mapping among the elements in each category.
We conclude that the Task Model provides a new viewpoint on metadata schemas and helps us understand the features of metadata standards for records management and archives. The 5W1H model, defined on the basis of the Task Model, provides a core set of categories for semantically classifying metadata elements from the viewpoint of an event caused by a task. A sketch of the term-based categorization follows.
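A minimal sketch of the term-based 5W1H categorization described above. The term lists and the sample element definition are illustrative assumptions, not the authors' actual term sets.

```python
# Categorize metadata elements into 5W1H categories by matching terms
# that appear in each element's definition.
CATEGORY_TERMS = {
    "Who":   ["person", "organization", "agent", "creator"],
    "What":  ["content", "record", "title", "format"],
    "Why":   ["purpose", "function", "mandate"],
    "When":  ["date", "time", "period"],
    "Where": ["place", "location", "repository"],
    "How":   ["method", "process", "procedure", "software"],
}

def categorize(element_definition: str) -> list[str]:
    """Return every 5W1H category whose terms appear in the definition.
    A single element may fall into more than one category."""
    text = element_definition.lower()
    return [cat for cat, terms in CATEGORY_TERMS.items()
            if any(term in text for term in terms)]

# Example: a PREMIS-like agent element lands in the Who category.
print(categorize("A person or organization associated with an event"))
# -> ['Who']
```

Restricting mapping to elements within the same category, as the paper proposes, shrinks the number of candidate element pairs that must be compared across standards.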

Major Class Recommendation System based on Deep learning using Network Analysis (네트워크 분석을 활용한 딥러닝 기반 전공과목 추천 시스템)

  • Lee, Jae Kyu;Park, Heesung;Kim, Wooju
    • Journal of Intelligence and Information Systems / v.27 no.3 / pp.95-112 / 2021
  • In university education, the choice of major classes plays an important role in students' careers. However, in line with changes in industry, the major subjects offered by each department are diversifying and increasing in number, so students have difficulty choosing and taking classes that fit their career paths. In general, students choose classes based on experience, such as the choices of peers or advice from seniors. This has the advantage of taking the general situation into account, but it does not reflect individual tendencies or careful consideration of the available courses, and it leads to information inequality, since such information is shared only among specific students. In addition, as classes have recently been conducted remotely and exchanges between students have decreased, even experience-based decisions have become harder to make. Therefore, this study proposes a recommendation system model that can recommend college major classes suited to individual characteristics based on data rather than experience. A recommendation system recommends information and content (music, movies, books, images, etc.) that a specific user may be interested in. Such systems are already widely used in services where individual tendencies matter, such as YouTube and Facebook, and are familiar from the personalized recommendations of over-the-top (OTT) media services. Taking classes is also a kind of content consumption, in that classes suitable for an individual are selected from a fixed content list. Unlike other content consumption, however, it is characterized by the large influence of the selection results. Music and movies, for example, are usually consumed once, and the time required to consume them is short; the importance of each item is therefore relatively low, and selection requires no deep deliberation. Major classes, by contrast, have a long consumption time, since they must be taken for a full semester, and each item has high importance and requires great caution in choice, because the composition of the selected classes affects many things, such as career and graduation requirements. Given these unique characteristics of major classes, a recommendation system in the education field supports decision-making that reflects individual characteristics which experience-based decision-making cannot capture, even though the range of items is relatively small. This study aims to realize personalized education and enhance students' educational satisfaction by presenting a recommendation model for university major classes. The model study used the class history data of undergraduate students at a university from 2015 to 2017, with student identifiers and major names as metadata. The class history data is implicit feedback data that only indicates whether content was consumed, without reflecting preferences for classes, so embedding vectors that characterize students and classes derived from such data have low expressive power. With these issues in mind, this study proposes a Net-NeuMF model that generates vectors for students and classes through network analysis and uses them as input values of the model. The model is based on the structure of NeuMF, a representative model for implicit-feedback data that uses one-hot vectors.
The input vectors of the model are generated to represent the characteristics of students and classes through network analysis. To generate a vector representing a student, each student is set as a node, and an edge with a weight connects two students if they have taken the same class. Similarly, to generate a vector representing a class, each class is set as a node, and an edge connects two classes if any student has taken both. We then use Node2Vec, a representation learning methodology that quantifies the characteristics of each node; a sketch of this step follows below. For the evaluation of the model, we used four metrics commonly employed for recommendation systems, and experiments were conducted with three different embedding dimensions to analyze their impact on the model. The results show better performance on the evaluation metrics, regardless of dimension, than when using one-hot vectors in the existing NeuMF structure. This work thus contributes networks of students (users) and classes (items) that increase expressiveness over existing one-hot embeddings, matches the characteristics of each structure that constitutes the model, and shows better performance on various evaluation metrics compared to existing methodologies.
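A minimal sketch, under stated assumptions, of the student-graph construction and Node2Vec embedding step: the enrollment records, edge weights, and Node2Vec parameters are illustrative, and the networkx and node2vec packages are assumed to be installed. The class graph is built symmetrically and is omitted for brevity.

```python
# Build a weighted co-enrollment graph of students, then embed nodes
# with Node2Vec to replace one-hot inputs in a NeuMF-style model.
from collections import Counter
from itertools import combinations
import networkx as nx
from node2vec import Node2Vec

# Hypothetical enrollment records: (student_id, class_id)
enrollments = [("s1", "c1"), ("s1", "c2"), ("s2", "c1"),
               ("s2", "c3"), ("s3", "c2"), ("s3", "c3")]

# Group students by class, then weight each student pair by the number
# of classes they have taken in common.
by_class = {}
for s, c in enrollments:
    by_class.setdefault(c, []).append(s)

shared = Counter()
for students in by_class.values():
    for a, b in combinations(sorted(students), 2):
        shared[(a, b)] += 1

G = nx.Graph()
for (a, b), w in shared.items():
    G.add_edge(a, b, weight=w)

# Random-walk-based embeddings; dimensions and walk settings are assumptions.
n2v = Node2Vec(G, dimensions=32, walk_length=10, num_walks=50, quiet=True)
model = n2v.fit(window=5, min_count=1)
print(model.wv["s1"][:5])  # first components of student s1's embedding
```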

The Safety and Immunogenicity of a Trivalent, Live, Attenuated MMR Vaccine, Priorix™ (MMR(Measles-Mumps-Rubella) 약독화 생백신인 프리오릭스주를 접종한 후 안전성과 유효성의 평가에 관한 연구)

  • Ahn, Seung-In;Chung, Min-Kook;Yoo, Jung-Suk;Chung, Hye-Jeon;Hur, Jae-Kyun;Shin, Young-Kyu;Chang, Jin-Keun;Cha, Sung-Ho
    • Clinical and Experimental Pediatrics / v.48 no.9 / pp.960-968 / 2005
  • Purpose: This multi-center, open-label clinical study was designed to evaluate the safety and immunogenicity of a trivalent, live, attenuated measles-mumps-rubella (MMR) vaccine, Priorix™, in Korean children. Methods: From July 2002 to February 2003, a total of 252 children, aged 12-15 months or 4-6 years, received Priorix™ at four centers: Han-il General Hospital, Kyunghee University Hospital, St. Paul's Hospital at the Catholic Medical College in Seoul, and Korea University Hospital in Ansan, Korea. Only subjects who fully met protocol requirements were included in the final analysis. The occurrence of local and systemic adverse events after vaccination was evaluated from diary cards and physical examinations for 42 days after vaccination. Serum antibody levels were measured prior to and 42 days after vaccination using IgG ELISA assays at GlaxoSmithKline Biologicals (GSK) in Belgium. Results: Of the 252 enrolled subjects, a total of 199 were included in the safety analysis: 103 from the 12-15 month age group and 96 from the 4-6 year age group. The incidence of local reactions related to the study drug was 10.1 percent, and the incidence of systemic reactions was 6.5 percent. There were no episodes of aseptic meningitis or febrile convulsions, nor any other serious adverse reaction. In the immunogenicity analysis, the seroconversion rate among previously seronegative subjects was 99 percent for measles, 93 percent for mumps, and 100 percent for rubella, with similar rates in both age groups. The geometric mean titers achieved 42 days post-vaccination were: for measles, 3,838.6 mIU/mL [3,304.47, 4,458.91] in the 12-15 month group and 1,886.2 mIU/mL [825.83, 4,308.26] in the 4-6 year group; for mumps, 956.3 U/mL [821.81, 1,112.71] in the 12-15 month group and 2,473.8 U/mL [1,518.94, 4,028.92] in the 4-6 year group; for rubella, 94.5 IU/mL [79.56, 112.28] in the 12-15 month group and 168.9 IU/mL [108.96, 261.90] in the 4-6 year group. Conclusion: When Korean children aged 12-15 months or 4-6 years were vaccinated with GlaxoSmithKline Biologicals' live attenuated MMR vaccine (Priorix™), adverse events were limited to those generally expected with any live vaccine, and Priorix™ demonstrated excellent immunogenicity in this population.
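For readers following the immunogenicity arithmetic, a minimal sketch of how a geometric mean titer (GMT) and its 95% confidence interval are conventionally computed from log-transformed titers. The titer values below are illustrative assumptions, not the study's data.

```python
# Geometric mean titer with a log-scale 95% confidence interval.
import numpy as np
from scipy import stats

titers = np.array([2800.0, 4100.0, 3500.0, 5200.0, 3900.0])  # hypothetical mIU/mL

log_t = np.log(titers)
gmt = np.exp(log_t.mean())                      # geometric mean = exp(mean of logs)
ci = np.exp(stats.t.interval(0.95, len(titers) - 1,
                             loc=log_t.mean(),
                             scale=stats.sem(log_t)))
print(f"GMT = {gmt:.1f} mIU/mL, 95% CI [{ci[0]:.1f}, {ci[1]:.1f}]")
```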

Analysis of the Time-dependent Relation between TV Ratings and the Content of Microblogs (TV 시청률과 마이크로블로그 내용어와의 시간대별 관계 분석)

  • Choeh, Joon Yeon;Baek, Haedeuk;Choi, Jinho
    • Journal of Intelligence and Information Systems / v.20 no.1 / pp.163-176 / 2014
  • Social media has become a platform for users to communicate their activities, status, emotions, and experiences to other people. In recent years, microblogs such as Twitter have gained popularity because of their ease of use, speed, and reach. Compared to a conventional web blog, a microblog lowers users' effort and investment in content generation by encouraging shorter posts. There has been a great deal of research into capturing social phenomena by analyzing the chatter of microblogs, but measuring television ratings has received little attention so far. Currently, the most common method of measuring TV ratings uses an electronic metering device installed in a small number of sampled households. Microblogs allow users to post short messages, share daily updates, and conveniently keep in touch; in the same way, microblog users interact with each other while watching television or movies, or visiting a new place. When measuring TV ratings, some features are significant during certain hours of the day or days of the week, while the same features are meaningless during other time periods. The importance of features can thus change over the course of the day, and a model capturing this time-sensitive relevance is required to estimate TV ratings. Modeling the time-related characteristics of features is therefore key to measuring TV ratings through microblogs, and we show that capturing the time-dependency of features is vital to improving accuracy. To explore the relationship between the content of microblogs and TV ratings, we collected Twitter data using the Get Search component of the Twitter REST API from January 2013 to October 2013. Our data set contains about 300 thousand posts; after excluding data such as advertising or promoted tweets, we selected 149 thousand tweets for analysis. The number of tweets reaches its maximum level on the broadcasting day and increases rapidly around the broadcasting time. This result stems from the characteristics of a public channel, which broadcasts the program at a predetermined time. From our analysis, we find that count-based features, such as the number of tweets or retweets, have a low correlation with TV ratings; a simple tweet rate does not reflect satisfaction with or response to the TV programs. Content-based features extracted from the text of tweets have a relatively high correlation with TV ratings, and some emoticons and newly coined words that are not tagged in the morpheme extraction process have a strong relationship with TV ratings. We also find a time-dependency in the correlation of features between the periods before and after broadcasting time. Since a TV program is broadcast regularly at a predetermined time, users post tweets expressing their expectations for the program or disappointment at not being able to watch it, and the features highly correlated before the broadcast differ from those after it. This shows that the relevance of words to TV programs can change with the time of the tweets. Among the 336 words that fulfill the minimum requirements for candidate features, 145 words reach their highest correlation before the broadcasting time, whereas 68 words reach their highest correlation after broadcasting. Interestingly, some words that express the impossibility of watching the program show high relevance despite carrying a negative meaning.
Understanding the time-dependency of features can help improve the accuracy of TV ratings measurement. This research contributes a basis for estimating the response to, or satisfaction with, broadcast programs using the time dependency of words in Twitter chatter. More research is needed to refine the methodology for predicting or measuring TV ratings. A sketch of the before/after correlation computation follows.
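A minimal sketch of the time-split correlation analysis described above, assuming a hypothetical tweet-level layout with an episode key, a before/after phase flag, and one word-count feature. The numbers are illustrative, not the study's data.

```python
# Correlate per-episode word frequencies with ratings, separately for
# tweets posted before and after broadcast time.
import pandas as pd

tweets = pd.DataFrame({            # hypothetical aggregated tweet counts
    "episode": [1, 1, 2, 2, 3, 3],
    "phase":   ["before", "after"] * 3,
    "count_word_x": [12, 3, 30, 8, 55, 9],
})
ratings = pd.Series({1: 5.2, 2: 7.9, 3: 11.4}, name="rating")  # hypothetical %

for phase in ("before", "after"):
    sub = tweets[tweets["phase"] == phase].set_index("episode")
    corr = sub["count_word_x"].corr(ratings)   # Pearson correlation, index-aligned
    print(f"{phase:>6}: corr(word_x, rating) = {corr:.2f}")
```

A word whose "before" correlation far exceeds its "after" correlation (or vice versa) is exactly the kind of time-dependent feature the paper argues a ratings model must capture.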

Machine learning-based corporate default risk prediction model verification and policy recommendation: Focusing on improvement through stacking ensemble model (머신러닝 기반 기업부도위험 예측모델 검증 및 정책적 제언: 스태킹 앙상블 모델을 통한 개선을 중심으로)

  • Eom, Haneul;Kim, Jaeseong;Choi, Sangok
    • Journal of Intelligence and Information Systems / v.26 no.2 / pp.105-129 / 2020
  • This study uses corporate data from 2012 to 2018, the period in which K-IFRS was applied in earnest, to predict default risk. The data used in the analysis totaled 10,545 rows and 160 columns: 38 from the statement of financial position, 26 from the statement of comprehensive income, 11 from the statement of cash flows, and 76 financial-ratio indices. Unlike most prior studies, which used the default event itself as the basis for learning about default risk, this study calculated default risk from each company's market capitalization and stock price volatility based on the Merton model. This solves the data imbalance problem caused by the scarcity of default events, which has been pointed out as a limitation of the existing methodology, and also reflects the differences in default risk that exist among ordinary companies. Because learning was conducted using only corporate information available for unlisted companies, the default risk of unlisted companies without stock price information can be derived appropriately. The model can therefore provide stable default risk assessment services to companies, such as small and medium-sized companies and startups, whose default risk is difficult to determine with traditional credit rating models. Although there have been many recent studies predicting corporate default risk with machine learning, model bias issues exist because most studies make predictions based on a single model. A stable and reliable valuation methodology is required for calculating default risk, given that an entity's default risk information is very widely used in the market and sensitivity to differences in default risk is high; strict standards are likewise required for the calculation methods. The credit rating method stipulated by the Financial Services Commission in the Financial Investment Regulations calls for the preparation of evaluation methods, including verification of their adequacy, in consideration of past statistical data and experience with credit ratings as well as changes in future market conditions. This study reduced the bias of individual models by using stacking ensemble techniques that synthesize various machine learning models. This makes it possible to capture complex nonlinear relationships between default risk and various corporate information while retaining the advantage of machine learning-based default risk prediction models, which take little time to compute. To calculate the forecasts used as input data for the stacking ensemble model, the training data were divided into seven pieces, and the sub-models were trained on the divided sets to produce forecasts (see the sketch following this abstract). To compare predictive power, Random Forest, MLP, and CNN models were trained with the full training data, and the predictive power of each model was verified on the test set. The analysis showed that the stacking ensemble model exceeded the predictive power of the Random Forest model, which performed best among the single models. Next, to check for statistically significant differences between the stacking ensemble model and the individual models, pairs were constructed between the stacking ensemble model's forecasts and each individual model's forecasts.
Because the Shapiro-Wilk normality test showed that none of the pairs followed a normal distribution, we used the nonparametric Wilcoxon rank-sum test to check whether the two sets of forecasts making up each pair differed significantly. The analysis showed that the forecasts of the stacking ensemble model differed significantly from those of the MLP model and the CNN model. In addition, this study provides a methodology that allows existing credit rating agencies to apply machine learning-based bankruptcy risk prediction, given that traditional credit rating models can also serve as sub-models in calculating the final default probability. The stacking ensemble techniques proposed in this study can also help designs meet the requirements of the Financial Investment Business Regulations through the combination of various sub-models. We hope that this research will be used as a resource to increase practical use by overcoming the limitations of existing machine learning-based models.
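A minimal sketch of the out-of-fold stacking setup described above, using scikit-learn's StackingRegressor on synthetic data. The sub-models (random forest and MLP), their hyperparameters, and the ridge meta-learner are illustrative assumptions; the study's CNN sub-model and Merton-based risk targets are not reproduced here.

```python
# Stacking ensemble: sub-models produce out-of-fold forecasts (cv=7 mirrors
# the seven-way split described in the abstract), and a meta-learner combines
# them into the final continuous risk estimate.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor, StackingRegressor
from sklearn.linear_model import Ridge
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

X, y = make_regression(n_samples=1000, n_features=20, noise=0.1, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

stack = StackingRegressor(
    estimators=[("rf", RandomForestRegressor(n_estimators=200, random_state=0)),
                ("mlp", MLPRegressor(hidden_layer_sizes=(64, 32), max_iter=500))],
    final_estimator=Ridge(),
    cv=7,  # out-of-fold predictions for the meta-learner
)
stack.fit(X_tr, y_tr)
print("R^2 on held-out set:", stack.score(X_te, y_te))
```

Because the sub-model forecasts fed to the meta-learner are produced out-of-fold, the meta-learner never sees predictions a sub-model made on its own training data, which is what keeps the ensemble from simply inheriting each sub-model's bias.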

Establishment of an Analytical Method for Prometryn Residues in Clam Using GC-MS (GC-MS를 이용한 바지락 중 prometryn 잔류분석법 확립)

  • Chae, Young-Sik;Cho, Yoon-Jae;Jang, Kyung-Joo;Kim, Jae-Young;Lee, Sang-Mok;Chang, Moon-Ik
    • Korean Journal of Food Science and Technology / v.45 no.5 / pp.531-536 / 2013
  • We developed a simple, sensitive, and specific analytical method for prometryn using gas chromatography-mass spectrometry (GC-MS). Prometryn is a selective herbicide used to control annual grasses and broadleaf weeds in cotton and celery crops. On the basis of its high specificity, sensitivity, and reproducibility, combined with simple analytical operation, we propose that our newly developed method is suitable for use as a Ministry of Food and Drug Safety (MFDS, Korea) official method in the routine analysis of individual pesticide residues, and that it is applicable to clams. The GC-MS separation conditions were optimized using a DB-5MS capillary column (30 m × 0.25 mm, 0.25 μm) with helium as the carrier gas at a flow rate of 0.9 mL/min. We achieved high linearity over the concentration range 0.02-0.5 mg/L (correlation coefficient r² > 0.998). The method is specific and sensitive, with a limit of quantitation of 0.04 mg/kg. The average recovery in clams ranged from 84.0% to 98.0%, and the reproducibility of measurements, expressed as the coefficient of variation (CV%), ranged from 3.0% to 7.1%. Our analytical procedure showed high accuracy and acceptable sensitivity with respect to the analytical requirements for prometryn in fishery products. Finally, we successfully applied the method to the determination of residue levels in fishery products and found that none of the analyzed samples contained detectable residues.
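A minimal sketch of the validation arithmetic behind the figures reported above: a linear calibration over 0.02-0.5 mg/L, followed by recovery (%) and CV (%) from replicate spiked-sample measurements. All numeric values are illustrative assumptions, not the study's data.

```python
# Calibration linearity, recovery, and coefficient of variation.
import numpy as np

conc = np.array([0.02, 0.05, 0.1, 0.2, 0.5])         # standards, mg/L
peak = np.array([410, 1030, 2050, 4110, 10240])       # hypothetical peak areas

slope, intercept = np.polyfit(conc, peak, 1)          # least-squares line
r2 = np.corrcoef(conc, peak)[0, 1] ** 2               # should exceed 0.998 per study
print(f"calibration: y = {slope:.1f}x + {intercept:.1f}, r^2 = {r2:.4f}")

spiked = 0.20                                          # spiked level, mg/kg
measured = np.array([0.181, 0.175, 0.190, 0.168, 0.186])  # hypothetical replicates
recovery = measured.mean() / spiked * 100              # mean recovery, %
cv = measured.std(ddof=1) / measured.mean() * 100      # coefficient of variation, %
print(f"mean recovery = {recovery:.1f}%, CV = {cv:.1f}%")
```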

Change of photosynthetic efficiency and yield by low light intensity on ripening stage in japonica rice (등숙기의 차광 처리에 의한 광합성능 및 쌀 수량 변화)

  • Lee, Min Hee;Kang, Shin-Gu;Sang, Wan-Gyu;Ku, Bon-Il;Kim, Young-Doo;Park, Hong-Kyu;Lee, Jeom-Ho
    • Korean Journal of Agricultural Science / v.41 no.4 / pp.327-334 / 2014
  • Light intensity is one of the most important requirements for plant growth, affecting growth, development, survival, and crop productivity. Sunlight is the main energy source on Earth, and photosynthesis converts this light energy to chemical energy. In this study, the light use efficiency and photosynthetic characteristics of high-quality rice cultivars were evaluated under shading at the ripening stage. We applied three levels of shade (0, 50, and 70%) at the ripening stage and two levels of nitrogen (9 and 18 kg/10a) to three high-yielding rice cultivars: Boramchan, Hopum, and Honong. Shade was applied to the respective plots from heading until harvest. Growth surveys, SPAD readings, and chlorophyll fluorescence measurements were performed at 10-day intervals after shading began, and grain yield and yield components were determined at harvest. Fv/Fm, which represents the maximum photosynthetic efficiency of PSII, and SPAD values decreased over time under full sunlight, whereas they did not change under the shade treatments, and no significant differences appeared among cultivars. Compared with full sunlight, the shade treatments significantly delayed the ripening rate and decreased rice quality. Rice yield decreased in proportion to shading density, falling to less than 50% under 70% shading, with no difference in the rate of decrease among varieties. The adverse effects of low light intensity on yield and yield components could not be significantly mitigated by the nitrogen level.
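For reference, the Fv/Fm value reported above is derived from dark-adapted minimal (F0) and maximal (Fm) fluorescence readings. A minimal sketch with illustrative readings (not the study's data):

```python
# Fv/Fm = (Fm - F0) / Fm, the maximum quantum efficiency of PSII
# from dark-adapted chlorophyll fluorescence readings.
def fv_fm(f0: float, fm: float) -> float:
    """Maximum photochemical efficiency of PSII."""
    return (fm - f0) / fm

print(fv_fm(f0=350.0, fm=1900.0))  # ~0.82, typical of unstressed leaves
```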

A study on the exemption of liability of air carriers (항공운송인의 손해배상책임 면제에 관한 법적 고찰)

  • So, Jae-Seon;Lee, Chang-Kyu
    • The Korean Journal of Air & Space Law and Policy / v.30 no.1 / pp.95-116 / 2015
  • An air transport agreement can be divided into contracts for the carriage of passengers and contracts for the carriage of cargo, and an air carrier's liability for damages involves (1) grounds giving rise to liability, (2) grounds limiting liability, and (3) grounds for exemption. The grounds for exemption from liability for damages are scattered across our Commercial Code, the Convention, and other domestic law. Under the Commercial Code and the Convention on air transport, a carrier's liability for damage caused by passenger delay is waived if the carrier proves that it took all measures needed to prevent the damage. What satisfies the requirement of taking all reasonably necessary measures in a given case is therefore a problem. The amendment is characterized by reflecting the international air transport treaty in accordance with the domestic situation, while bringing the system into line with international standards encompassing land, sea, and air transport. However, while the Commercial Code mainly reflects the Montreal Convention in governing the air carrier's liability under the contract of carriage, the problems the Convention itself carries have begun to appear as well, so difficulties arise from adapting the treaty to fit the domestic situation. An analysis of the phrase "all measures that are reasonably necessary," as defined in the Commercial Code, is needed. If no claim is brought before a court against the air carrier within two years for damage caused by air transport, the carrier's liability is extinguished; how this two-year period is to be interpreted under domestic law, including whether it can be extended or suspended, should be determined, and carriers should take aggressive measures for reasonable care and accident prevention.

Comparison of the Legislation Applicable to Compare the use of Diagnostic Radiation Devices (진단용 방사선발생장치 이용에 적용되는 법제의 비교)

  • Ko, Jong-Kyung;Jeon, Yeo-Ryeong;Han, Eun-Ok;Cho, Pyong-Kon;Kim, Yong-Min
    • Journal of Radiological Science and Technology / v.38 no.3 / pp.277-286 / 2015
  • The number of diagnostic radiation devices in use in the country has reached 78,000 units. When a device is used for diagnostic purposes on human subjects, it is subject to the Medical Service Act; when used for diagnostic purposes on animal subjects, it is subject to the Veterinarians Act; and when used for other purposes, it is subject to the Nuclear Safety Act. Even for the same radiation device, the applicable legislation varies with the intended use and subject. Because the applicable legislation varies, a comparative analysis of the legal content is necessary to prevent confusion over which legislation applies. This is a qualitative study comparing the Nuclear Safety Act, the Medical Service Act, and the Veterinarians Act with respect to the administrative procedures for introducing diagnostic radiation devices, safety inspection, human resources management, area management, and the related administrative punishments. Under the Nuclear Safety Act and its sub-provisions, the introduction of a diagnostic radiation generating device involves many complex administrative procedures built on the concept of a permit. Safety inspections associated with use take the form of periodic audit-style inspections covering the entire field of radiation safety management, and safety administrators and workers must receive regular statutory education. Unlike the other acts, which specify the radiation controlled area by reference to the radiation dose rate, there is also an obligation to measure the radiation dose rate. The administrative punishments imposed for violations differ by up to a factor of ten, and across the entire field the burden of radiation safety management is greatest when the Nuclear Safety Act and its sub-provisions apply. The rules are applied differently depending on the purpose and the imaging target, even for the same diagnostic radiation device. Under the current legal system, the content of the safety management legislation can therefore lack fairness depending on the use, and there is a risk of confusion. Alternatives such as centralizing and standardizing the legislation governing the use of diagnostic radiation devices appear to be necessary.