• Title/Summary/Keyword: operations concept


Analysis and Implication on the International Regulations related to Unmanned Aircraft -with emphasis on ICAO, U.S.A., Germany, Australia- (세계 무인항공기 운용 관련 규제 분석과 시사점 - ICAO, 미국, 독일, 호주를 중심으로 -)

  • Kim, Dong-Uk;Kim, Ji-Hoon;Kim, Sung-Mi;Kwon, Ky-Beom
    • The Korean Journal of Air & Space Law and Policy / v.32 no.1 / pp.225-285 / 2017
  • With regard to regulations on the RPA (Remotely Piloted Aircraft), which some countries call the UA (Unmanned Aircraft), ICAO stipulates detailed rules in the 'RPAS Manual (2015)' based on the 1944 Chicago Convention and enacts provisions governing UAS or RPAS. Other countries stipulate them in instruments such as the Federal Aviation Regulations (14 CFR) and Public Law 112-95 in the United States; the Air Transport Act, Air Transport Order, and Air Transport Authorization Order (revised through the "Regulations to Operating Rules on Unmanned Aerial System"), based on EASA Regulation (EC) No. 216/2008, for unmanned aircraft under 150 kg in Germany; and the Civil Aviation Act (CAA 1998) and Civil Aviation Safety Regulations Part 101 (CASR Part 101) in Australia. Commonly, these laws exclude model aircraft used for leisure and require pilots on the ground, not onboard the aircraft, who are capable of controlling the RPA. The laws also require that everything necessary to operate the RPA and its pilots safely and efficiently be managed within the structure of the unmanned aircraft system and within the scope of the regulations. Each country classifies the RPA as an aircraft of less than 25 kg, and Australia and Germany further break it down at lower weights. ICAO stipulates all general aviation operations, including commercial operation, in accordance with Annex 6 of the Chicago Convention, and this also applies to RPA operations; however, passenger transportation using RPAs is excluded. If the operational scope of an RPA includes the airspace of another country, the special permission of that country is required, with a detailed flight plan submitted seven days before the flight date. In accordance with Federal Aviation Regulation Part 107 in the United States, a small non-leisure RPA may be operated within line of sight of a responsible pilot or observer during the day, at speeds up to 161 km/h (87 knots) and heights up to 122 m (400 ft) above the surface or water. The RPA must yield the flight path to other aircraft, and it is prohibited to carry dangerous materials or to operate more than two RPAs at the same time. In Germany, the regulations on UAS other than those for leisure and sports impose a duty to avoid airborne collisions and contain further provisions on ground safety and individual privacy. Although commercial UAS of 5 kg or less can be operated freely without approval under the relaxed regulatory requirements, all UAS regardless of weight must be operated below an altitude of 100 m with continuous monitoring and pilot control. Australia was the first country to regulate unmanned aircraft, in 2001, and its regulations have influenced the unmanned aircraft rules of ICAO, the FAA, and EASA. In order to improve the utility of unmanned aircraft considered to be low risk, the regulatory conditions were relaxed through a 2016 revision that added the concept of the "Excluded RPA"; an excluded RPA can be operated without special permission even for commercial purposes. Furthermore, discussions on a new standards manual are being conducted to make the current regulations more flexible.
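
The numeric operating limits cited above for US Part 107 lend themselves to a simple rule check. The following is a minimal, illustrative Python sketch, not an implementation of any regulation or real API; all field names are hypothetical and only the limits quoted in the abstract are encoded.

```python
# Minimal sketch: checking a small RPA flight request against the Part 107-style
# limits cited in the abstract (daytime, visual line of sight, <= 161 km/h,
# <= 122 m above the surface, no dangerous materials). Field names are
# illustrative, not taken from any regulation text or real API.
from dataclasses import dataclass

MAX_SPEED_KMH = 161   # 87 knots
MAX_HEIGHT_M = 122    # 400 ft above surface or water

@dataclass
class FlightRequest:
    daytime: bool
    within_line_of_sight: bool
    speed_kmh: float
    height_m: float
    carries_dangerous_materials: bool

def violations(req: FlightRequest) -> list[str]:
    """Return the list of cited operating limits the request would break."""
    issues = []
    if not req.daytime:
        issues.append("operation must take place during the day")
    if not req.within_line_of_sight:
        issues.append("RPA must stay within line of sight of the pilot or observer")
    if req.speed_kmh > MAX_SPEED_KMH:
        issues.append(f"speed exceeds {MAX_SPEED_KMH} km/h")
    if req.height_m > MAX_HEIGHT_M:
        issues.append(f"height exceeds {MAX_HEIGHT_M} m above the surface")
    if req.carries_dangerous_materials:
        issues.append("carrying dangerous materials is prohibited")
    return issues

if __name__ == "__main__":
    req = FlightRequest(daytime=True, within_line_of_sight=True,
                        speed_kmh=150, height_m=130,
                        carries_dangerous_materials=False)
    print(violations(req))  # ['height exceeds 122 m above the surface']
```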


Performance analysis of Frequent Itemset Mining Technique based on Transaction Weight Constraints (트랜잭션 가중치 기반의 빈발 아이템셋 마이닝 기법의 성능분석)

  • Yun, Unil;Pyun, Gwangbum
    • Journal of Internet Computing and Services / v.16 no.1 / pp.67-74 / 2015
  • In recent years, frequent itemset mining that considers the importance of each item has been intensively studied as one of the important issues in the data mining field. According to the strategies used to exploit item importance, itemset mining approaches that discover itemsets based on item importance are classified as follows: weighted frequent itemset mining, frequent itemset mining using transactional weights, and utility itemset mining. In this paper, we perform an empirical analysis of frequent itemset mining algorithms based on transactional weights. These mining algorithms compute transactional weights from the weight of each item in large databases and discover weighted frequent itemsets on the basis of item frequency and the weight of each transaction. Consequently, we can see the importance of a certain transaction through database analysis, because the weight of a transaction is higher if it contains many items with high weights. We not only analyze the advantages and disadvantages but also compare the performance of the best-known algorithms in the frequent itemset mining field that are based on transactional weights. As a representative of frequent itemset mining using transactional weights, WIS introduced the concept and strategies of transactional weights. In addition, there are various other state-of-the-art algorithms, WIT-FWIs, WIT-FWIs-MODIFY, and WIT-FWIs-DIFF, for extracting itemsets with weight information. To efficiently mine weighted frequent itemsets, these three algorithms use a special lattice-like data structure called the WIT-tree. The algorithms do not need an additional database scan after the WIT-tree has been constructed, since each node of the WIT-tree stores item information such as the item and its transaction IDs. In particular, whereas traditional algorithms perform many database scans to mine weighted itemsets, the WIT-tree-based algorithms avoid this overhead by reading the database only once. Additionally, the algorithms generate each new itemset of length N+1 from two different itemsets of length N. To discover new weighted itemsets, WIT-FWIs combines itemsets using the information of the transactions that contain both of them. WIT-FWIs-MODIFY adds a feature that reduces the operations needed to calculate the frequency of the new itemset, and WIT-FWIs-DIFF uses a technique based on the difference of two itemsets. To compare and analyze the performance of the algorithms in various environments, we use real datasets of two types (dense and sparse) and measure runtime and maximum memory usage. Moreover, a scalability test is conducted to evaluate the stability of each algorithm when the size of the database changes. As a result, WIT-FWIs and WIT-FWIs-MODIFY show the best performance on the dense dataset, while on the sparse dataset WIT-FWIs-DIFF has better mining efficiency than the other algorithms. Compared to the algorithms using the WIT-tree, WIS, which is based on the Apriori technique, has the worst efficiency because it requires far more computations than the others on average.
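
The sketch below is not the WIT-tree algorithms themselves but a small, self-contained illustration of the underlying idea the abstract describes: transaction weights derived from item weights, the weighted support of an itemset computed from its supporting transactions, and length-(N+1) itemsets generated by joining two length-N itemsets whose transaction-ID sets are intersected. All data and the weighting formula (mean item weight per transaction) are invented for illustration.

```python
# Illustrative sketch (not the WIT-tree algorithms): transaction weights come
# from item weights, and weighted frequent itemsets are grown by joining two
# length-N itemsets into one of length N+1, with the supporting transactions
# obtained by intersecting their transaction-ID sets.
from itertools import combinations

transactions = {1: {"a", "b", "c"}, 2: {"a", "c"}, 3: {"b", "c", "d"}, 4: {"a", "b", "c", "d"}}
item_weights = {"a": 0.9, "b": 0.6, "c": 0.4, "d": 0.8}

# Transaction weight = mean weight of the items it contains (one common choice).
tw = {tid: sum(item_weights[i] for i in items) / len(items)
      for tid, items in transactions.items()}

def weighted_support(tids):
    """Sum of the weights of the supporting transactions."""
    return sum(tw[t] for t in tids)

def mine(min_wsup):
    # Level 1: single items with their transaction-ID sets (tidsets).
    tidsets = {}
    for tid, items in transactions.items():
        for i in items:
            tidsets.setdefault(frozenset([i]), set()).add(tid)
    level = {iset: tids for iset, tids in tidsets.items()
             if weighted_support(tids) >= min_wsup}
    result = dict(level)
    # Grow length-(N+1) itemsets from pairs of length-N itemsets.
    while level:
        nxt = {}
        for (i1, t1), (i2, t2) in combinations(level.items(), 2):
            union = i1 | i2
            if len(union) == len(i1) + 1:
                tids = t1 & t2                      # transactions containing both itemsets
                if weighted_support(tids) >= min_wsup:
                    nxt[union] = tids
        result.update(nxt)
        level = nxt
    return {tuple(sorted(k)): round(weighted_support(v), 2) for k, v in result.items()}

print(mine(min_wsup=1.5))
```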

Retail Product Development and Brand Management Collaboration between Industry and University Student Teams (산업여대학학생단대지간적령수산품개발화품패관리협작(产业与大学学生团队之间的零售产品开发和品牌管理协作))

  • Carroll, Katherine Emma
    • Journal of Global Scholars of Marketing Science / v.20 no.3 / pp.239-248 / 2010
  • This paper describes a collaborative project between academia and industry which focused on improving the marketing and product development strategies for two private label apparel brands of a large regional department store chain in the southeastern United States. The goal of the project was to revitalize the product lines of the two brands by incorporating student ideas for new solutions, thereby giving the students practical experience with a real-life industry situation. There were a number of key players involved in the project. A privately-owned department store chain based in the southeastern United States, which was seeking an academic partner, had recognized a need to update two existing private label brands. The brands targeted middle-aged consumers looking for casual, moderately priced merchandise. The company was seeking to change direction with both packaging and presentation, and possibly product design. The branding and product development divisions of the company contacted professors in an academic department of a large southeastern state university. Two of the professors agreed that the task would be a good fit for their classes - one was a junior-level Intermediate Brand Management class; the other was a senior-level Fashion Product Development class. The professors felt that by working collaboratively on the project, students would be exposed to a real-world scenario within the security of an academic learning environment. Collaboration within an interdisciplinary team has the advantage of providing experiences and resources beyond the capabilities of a single student and adds "brainpower" to problem-solving processes (Lowman 2000). This goal of improving the capabilities of students directed the instructors in each class to form interdisciplinary teams between the Branding and Product Development classes. In addition, many universities are employing industry partnerships in research and teaching, where collaboration within temporal (semester) and physical (classroom/lab) constraints helps to increase students' knowledge and experience of a real-world situation. At the University of Tennessee, the Center of Industrial Services and UT-Knoxville's College of Engineering worked with a company to develop design improvements in its U.S. operations. In this study, because the teams were working with a private label retail brand, Wickett, Gaskill and Damhorst's (1999) revised Retail Apparel Product Development Model was used by the product development and brand management teams. This framework was chosen because it addresses apparel product development from the concept to the retail stage. Two classes were involved in this project: a junior-level Brand Management class and a senior-level Fashion Product Development class. Seven teams were formed, each including four students from Brand Management and two students from Product Development. The classes were taught the same semester, but not at the same time. At the beginning of the semester, each class was introduced to the industry partner and given the problem. Half the teams were assigned to the men's brand and half to the women's brand. The teams were responsible for devising approaches to the problem, formulating a timeline for their work, staying in touch with industry representatives, and making sure that each member of the team contributed in a positive way.
The objective for the teams was to plan, develop, and present a product line using merchandising processes (following the Wickett, Gaskill and Damhorst model) and to develop new branding strategies for the proposed lines. The teams performed trend, color, fabrication, and target market research; developed sketches for a line; edited the sketches and presented their line plans; wrote specifications; fitted prototypes on fit models; and developed final production samples for presentation to industry. The branding students developed a SWOT analysis, a Brand Measurement report, a mind-map for the brands, and a fully integrated Marketing Report which was presented alongside the ideas for the new lines. In the future, if the opportunity arises to work in this collaborative way with an existing company that wishes to look at both branding and product development strategies, classes will be scheduled at the same time so that students have more time to meet and discuss timelines and assigned tasks. As it was, student groups had to meet outside of class time, and this proved to be a challenging though not uncommon part of teamwork (Pfaff and Huddleston, 2003). Although the logistics of this exercise were time-consuming to set up and administer, the professors felt that the benefits to students were multiple. The most important benefit, according to student feedback from both classes, was the opportunity to work with industry professionals, follow their process, and see the results of their work evaluated by the people who made the decisions at the company level. Faculty members were grateful to have a "real-world" case to work with in the classroom to provide focus. Creative ideas and strategies were traded as plans were made, extending and strengthening the departmental links between the branding and product development areas. By working not only with students coming from a different knowledge base, but also having to keep in contact with the industry partner and follow the framework and timeline of industry practice, student teams were challenged to produce excellent and innovative work under new circumstances. Working on the product development and branding for "real-life" brands that are struggling gave students an opportunity to see how closely their coursework ties in with the real world and how creativity, collaboration, and flexibility are necessary components of both the design and business aspects of company operations. Industry personnel were impressed by (a) the level and depth of knowledge and execution in the student projects, and (b) the creativity of new ideas for the brands.

Using the METHONTOLOGY Approach to a Graduation Screen Ontology Development: An Experiential Investigation of the METHONTOLOGY Framework

  • Park, Jin-Soo;Sung, Ki-Moon;Moon, Se-Won
    • Asia pacific journal of information systems / v.20 no.2 / pp.125-155 / 2010
  • Ontologies have been adopted in various business and scientific communities as a key component of the Semantic Web. Despite the increasing importance of ontologies, ontology developers still perceive construction tasks as a challenge. A clearly defined and well-structured methodology can reduce the time required to develop an ontology and increase the probability of success of a project. However, no reliable knowledge-engineering methodology for ontology development currently exists; every methodology has been tailored toward the development of a particular ontology. In this study, we developed a Graduation Screen Ontology (GSO). The graduation screen domain was chosen for several reasons. First, the graduation screen process is a complicated task requiring a complex reasoning process. Second, GSO may be reused by other universities because the graduation screen process is similar for most universities. Finally, GSO can be built within a given period because the size of the selected domain is reasonable. No standard ontology development methodology exists; thus, one of the existing ontology development methodologies had to be chosen. The most important considerations for selecting the ontology development methodology for GSO included whether it can be applied to a new domain, whether it covers a broad set of development tasks, and whether it gives a sufficient explanation of each development task. We evaluated various ontology development methodologies based on the evaluation framework proposed by Gómez-Pérez et al. and concluded that METHONTOLOGY was the most applicable to the building of GSO for this study. METHONTOLOGY was derived from the experience of developing the Chemical Ontology at the Polytechnic University of Madrid by Fernández-López et al. and is regarded as the most mature ontology development methodology. METHONTOLOGY describes a very detailed approach for building an ontology under a centralized development environment at the conceptual level. This methodology consists of three broad processes, each containing specific sub-processes: management (scheduling, control, and quality assurance); development (specification, conceptualization, formalization, implementation, and maintenance); and support (knowledge acquisition, evaluation, documentation, configuration management, and integration). An ontology development language and an ontology development tool for GSO construction also had to be selected. We adopted OWL-DL as the ontology development language. OWL was selected because of its computational support for consistency checking and classification, which is crucial in developing coherent and useful ontological models for very complex domains. In addition, Protégé-OWL was chosen as the ontology development tool because it is supported by METHONTOLOGY and is widely used thanks to its platform-independent characteristics. Based on the researchers' experience of developing GSO, some issues relating to METHONTOLOGY, OWL-DL, and Protégé-OWL were identified. We focus on presenting the drawbacks of METHONTOLOGY and discussing how each weakness could be addressed. First, METHONTOLOGY insists that domain experts who do not have ontology construction experience can easily build ontologies. However, it is still difficult for these domain experts to develop a sophisticated ontology, especially if they have insufficient background knowledge related to the ontology.
Second, METHONTOLOGY does not include a development stage called the "feasibility study." This pre-development stage helps developers ensure not only that a planned ontology is necessary and sufficiently valuable to begin an ontology building project, but also that the project is likely to be successful. Third, METHONTOLOGY excludes an explanation of the use and integration of existing ontologies. If an additional stage for considering reuse were introduced, developers could share the benefits of reuse. Fourth, METHONTOLOGY fails to address the importance of collaboration. The methodology needs to explain how to allocate specific tasks to different developer groups and how to combine these tasks once the assigned jobs are completed. Fifth, METHONTOLOGY does not sufficiently describe the methods and techniques to be applied in the conceptualization stage. Introducing methods for extracting concepts from multiple informal sources or for identifying relations may enhance the quality of ontologies. Sixth, METHONTOLOGY does not provide an evaluation process to confirm whether WebODE correctly transforms a conceptual ontology into a formal ontology; it also does not guarantee that the outcomes of the conceptualization stage are completely reflected in the implementation stage. Seventh, METHONTOLOGY needs to add criteria for user evaluation of the actual use of the constructed ontology in user environments. Eighth, although METHONTOLOGY allows continual knowledge acquisition while working on the ontology development process, keeping the ontology consistently updated can be difficult for developers. Ninth, METHONTOLOGY demands that developers complete various documents during the conceptualization stage; thus, it can be considered a heavy methodology. Adopting an agile methodology would reinforce active communication among developers and reduce the burden of documentation. Finally, this study concludes with contributions and practical implications. No previous research has addressed issues related to METHONTOLOGY from empirical experience; this study is an initial attempt. In addition, several lessons learned from the development experience are discussed. This study also offers some insights for ontology methodology researchers who want to design a more advanced ontology development methodology.
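
The abstract does not reproduce the GSO itself, so the following is only a hypothetical sketch of what a few OWL-DL style statements for a graduation-screening domain might look like, built with rdflib in Python. The class and property names (Student, Course, hasCompleted, GraduationRequirement, earnedCredits) are invented for illustration and are not the classes of the paper's ontology; in the study the ontology was authored in Protégé-OWL.

```python
# Hypothetical sketch of a few OWL-DL style statements for a graduation-screening
# domain, serialized with rdflib. Class and property names are invented for
# illustration; they are not the classes of the paper's GSO.
from rdflib import Graph, Namespace, RDF, RDFS, OWL, Literal
from rdflib.namespace import XSD

GSO = Namespace("http://example.org/gso#")
g = Graph()
g.bind("gso", GSO)

# Classes
for cls in (GSO.Student, GSO.Course, GSO.GraduationRequirement):
    g.add((cls, RDF.type, OWL.Class))

# Object property: a student has completed a course.
g.add((GSO.hasCompleted, RDF.type, OWL.ObjectProperty))
g.add((GSO.hasCompleted, RDFS.domain, GSO.Student))
g.add((GSO.hasCompleted, RDFS.range, GSO.Course))

# Datatype property: total earned credits.
g.add((GSO.earnedCredits, RDF.type, OWL.DatatypeProperty))
g.add((GSO.earnedCredits, RDFS.domain, GSO.Student))
g.add((GSO.earnedCredits, RDFS.range, XSD.integer))

# An individual with a credit total that a screening rule could check.
g.add((GSO.alice, RDF.type, GSO.Student))
g.add((GSO.alice, GSO.earnedCredits, Literal(130, datatype=XSD.integer)))

print(g.serialize(format="turtle"))
```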

A Study on the Relationship between Business Plan Components and Corporate Performance (사업계획서의 구성요소와 기업성과와의 관계에 관한 연구)

  • Koh, In-Kon;Lee, Sang-Seok;Kim, Dae-Ho
    • 한국벤처창업학회:학술대회논문집 / 2006.04a / pp.45-75 / 2006
  • How much influence does a business plan have on corporate performance? Whilst previous studies and the literature all assert a strong correlation between the two, very few have actually conducted practical analyses to support it. This study takes an empirical approach in its analysis of Korea's small and medium-sized enterprises (SMEs) with the view to finding an answer to the question. The components of a business plan, which have to date been suggested only in theory and in concept, were selected through a literature review and a preliminary examination. The selected components were then narrowed down, by applying factor analysis, into five factors: productivity, implementation, operational direction, product/service, and customer accessibility. Which items to use to measure corporate performance is also an important question, as results differ depending on the measurement items used. For the purpose of this study, corporate performance was classified into effectiveness, adaptability, and efficiency to measure how greatly each is influenced by the components of a business plan. Results show positive (+) influences of the business plan components on effectiveness and adaptability, and the regression model seems to explain effectiveness particularly well. However, the directions of influence differed across factors, and the explanatory power of the research model was not high. This can be interpreted to mean that implementing the plan is as important as establishing it; good corporate performance is achieved only with an excellent plan followed by excellent implementation. In most of the companies surveyed, business plans were established regularly, led by the intense involvement of the CEO. Such plans were then used in internal operations, such as guiding operational direction and measuring corporate performance. Unlike general expectations, relatively few companies used them for financing from external sources such as banks or venture capital firms. These findings differ from previous studies conducted in this field. Also, as market uncertainty was pointed out as the biggest obstacle to business planning, managers must pay more attention to acquiring external information and knowledge so as to minimize it.
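
The analysis described above follows a common two-step pattern: reduce the survey items on business plan components to a small number of factors, then regress each performance measure on the factor scores. The sketch below illustrates that pattern only; the column names, the number of items, and the random placeholder data are invented and are not the paper's dataset or exact procedure.

```python
# Illustrative sketch of the type of analysis described: survey items on business
# plan components are reduced to five factors, and each performance measure is
# regressed on the factor scores. Data and column names are placeholders.
import numpy as np
import pandas as pd
from sklearn.decomposition import FactorAnalysis
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 200
items = pd.DataFrame(rng.normal(size=(n, 15)),
                     columns=[f"plan_item_{i}" for i in range(1, 16)])
performance = pd.DataFrame(rng.normal(size=(n, 3)),
                           columns=["effectiveness", "adaptability", "efficiency"])

# Step 1: extract five factors (productivity, implementation, operational
# direction, product/service, and customer accessibility in the paper).
fa = FactorAnalysis(n_components=5, random_state=0)
factor_scores = fa.fit_transform(items)

# Step 2: regress each performance measure on the factor scores.
for col in performance.columns:
    model = LinearRegression().fit(factor_scores, performance[col])
    r2 = model.score(factor_scores, performance[col])
    print(col, "coefficients:", np.round(model.coef_, 3), "R^2:", round(r2, 3))
```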


Analysis and Performance Evaluation of Pattern Condensing Techniques used in Representative Pattern Mining (대표 패턴 마이닝에 활용되는 패턴 압축 기법들에 대한 분석 및 성능 평가)

  • Lee, Gang-In;Yun, Un-Il
    • Journal of Internet Computing and Services / v.16 no.2 / pp.77-83 / 2015
  • Frequent pattern mining, one of the major areas actively studied in data mining, is a method for extracting useful pattern information hidden in large data sets or databases. Frequent pattern mining approaches have been actively employed in a variety of application fields because their results allow us to analyze various important characteristics of databases more easily and automatically. However, traditional frequent pattern mining methods, which simply extract all possible frequent patterns whose support values are not smaller than a user-given minimum support threshold, have the following problems. First, traditional approaches can generate an enormous number of patterns depending on the features of a given database and the threshold settings, and this number can grow in geometric progression. Such approaches therefore waste runtime and memory resources, and the excessive number of generated patterns makes analysis of the mining results difficult. In order to solve these issues of traditional frequent pattern mining approaches, the concept of representative pattern mining and various related works have been proposed. In contrast to traditional methods that find all possible frequent patterns in a database, representative pattern mining approaches selectively extract a smaller number of patterns that represent the general frequent patterns. In this paper, we describe the details and characteristics of pattern condensing techniques that rely on the maximality or closure property of generated frequent patterns, and we compare and analyze these techniques. Given a frequent pattern, satisfying maximality means that every possible superset of the pattern has a support value smaller than the user-specified minimum support threshold; satisfying the closure property means that no superset has support equal to that of the pattern. By mining maximal frequent patterns or closed frequent patterns, we can achieve effective pattern compression and perform mining operations with much smaller time and space resources. In addition, compressed patterns can be converted back into the original frequent pattern forms if necessary; in particular, the closed frequent pattern notation can restore the original patterns without any information loss. That is, we can obtain a complete set of original frequent patterns from the closed frequent ones. Although the maximal frequent pattern notation does not guarantee complete recovery in the process of pattern conversion, it has the advantage that it can extract a smaller number of representative patterns more quickly than the closed frequent pattern notation. In this paper, we show the performance and characteristics of the aforementioned techniques in terms of pattern generation, runtime, and memory usage by conducting a performance evaluation on various real data sets collected from the real world. For a more exact comparison, we implement the algorithms for these techniques on the same platform and at the same implementation level.
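
The two condensing notions compared above can be stated directly in code: given frequent patterns with their supports, a pattern is closed if no proper superset has the same support, and maximal if no proper superset is frequent at all. The sketch below uses a tiny hypothetical mining result (not from the paper's datasets) purely to illustrate the definitions.

```python
# Illustrative sketch of closed vs. maximal pattern filtering over a hypothetical
# mining result (pattern -> support). Not the evaluated algorithms themselves.
frequent = {
    frozenset("a"): 3, frozenset("b"): 3, frozenset("c"): 5,
    frozenset("ab"): 2, frozenset("ac"): 3, frozenset("bc"): 3,
    frozenset("abc"): 2,
}

def closed_patterns(freq):
    # Closed: no proper superset with the same support.
    return {p: s for p, s in freq.items()
            if not any(p < q and s == sq for q, sq in freq.items())}

def maximal_patterns(freq):
    # Maximal: no proper superset is frequent at all.
    return {p: s for p, s in freq.items()
            if not any(p < q for q in freq)}

print("closed: ", {tuple(sorted(p)): s for p, s in closed_patterns(frequent).items()})
print("maximal:", {tuple(sorted(p)): s for p, s in maximal_patterns(frequent).items()})
# Every maximal pattern is also closed; the closed set is larger but lossless,
# since the support of any frequent pattern equals the largest support among
# the closed patterns that contain it.
```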

A study on Categorized type and range for the Aircraft and the LSA (우리나라 항공기 및 경량항공기의 종류 및 범위에 대한 법적 고찰)

  • Kim, Woong-Yi;Shin, Dai-Won
    • The Korean Journal of Air & Space Law and Policy / v.28 no.1 / pp.55-71 / 2013
  • Aviation regulations and the institutional regulatory framework for aircraft exist to ensure safety. As aircraft develop, diversify, and modernize, new types of aircraft come into operation. In particular, light aircraft and ultralight flying devices such as gyroplanes and unmanned flying devices have been newly introduced, and situations often arise in which these devices are operated outside the standards set by the Aviation Act. A legal basis was therefore established so that the assembly of various light and ultralight aircraft could be accommodated and so that persons engaged in the aviation business could carry out safety management. Among the newly introduced aircraft classifications, the biggest change is the introduction of the concept of the LSA (Light Sport Aircraft). In Korea, various light aircraft are operating, but because their range is not clearly defined in the aviation regulations, it has been difficult to ensure safety. This study examined the differences between international rules and the regulations of Korea regarding the classification of aircraft. Internationally, LSA are included in the aircraft category, but under the Korean Aviation Act, LSA are not included in the aircraft category and instead fall within the range of powered flight devices. The limit on maximum continuous power speed for LSA restricts people who want to use high-performance aircraft; it neither fits the international trend nor reflects the intent of LSA manufacturers. Deleting the maximum continuous power speed limit from the light aircraft provisions in a future revision of the aviation law is considered suitable for the purpose of fostering the light aircraft industry. The existing ultralight aircraft category was established by law to ensure safety, but LSA exceed that category, and the regulations should reflect the purpose of introducing LSA as well as technology development at home and abroad. Unless these standards for aircraft operation are supplemented to suit the domestic situation, it will be difficult to ensure operational safety, and many problems can arise when aircraft developed in other countries are introduced and operated domestically; therefore, an early revision is required.


A study of SCM strategic plan: Focusing on the case of LG electronics (공급사슬 관리 구축전략에 관한 연구: LG전자 사례 중심으로)

  • Lee, Gi-Wan;Lee, Sang-Youn
    • Journal of Distribution Science / v.9 no.3 / pp.83-94 / 2011
  • Most domestic companies, with the exception of major firms, are reluctant to implement a supply chain management (SCM) network in their operations, and most small- and medium-sized enterprises are not even aware of SCM. Because of its inherent total-systems efficiency, SCM coordinates domestic manufacturers, subcontractors, distributors, and physical distributors and cuts down on the costs of inventory control and demand management. A lack of SCM therefore causes a decrease in competitiveness for domestic companies. The reason lies in the fundamentals of SCM: information sharing, process innovation throughout the supply chain, and the vast range of problems the SCM management tool is able to address. This study suggests the contemplation and reformation of the current SCM situation by analyzing the SCM strategic plan, the discourses and logical discussions on the topic, and a successful case of adopting SCM; hence, the study aims to show how SCM can be implemented productively. First, it is necessary to review the theoretical background of SCM before discussing how to implement it successfully. Chapter 2 describes the concept and background of SCM, with a definition of SCM, the types of SCM promotional activities, the fields of SCM, the necessity of applying SCM, and the effects of SCM. Chapter 3 introduces the current difficulties in implementing SCM. Discussion items include the following: the Bullwhip Effect; the breakdown of supply chains and sales networks due to e-business; and the issue that, even though the key to successful SCM is cooperation between production and distribution companies, the companies often put their own profits first during implementation, which can distort demand estimation. Furthermore, problems in implementing SCM at domestic distribution and production companies concern information technology; for example, a newly introduced system may not be compatible with the pre-existing document architecture. Second, for effective management, distribution and production companies should cooperate and enhance their partnership at the corporate level; in reality, however, this seldom occurs. Third, with respect to the work process, introducing SCM can cause friction among corporations during the integration of the distribution-production process. Fourth, to increase the achievement of the SCM strategy process, companies need to set up a cross-functional team; however, business partners often lack the cooperation and business-information sharing tools necessary to effect the transition to SCM. Chapter 4 addresses an SCM strategic plan and a case study of LG Electronics: the purpose of the strategic plan, strategic plans by type of business, adopting SCM in a distribution company, and the global supply chain process of LG Electronics. The conclusion of the study, in Chapter 5, addresses the fierce competition that companies currently face in the global market environment and their increased investment in SCM in order to better cope with short product life cycles and high customer expectations. The SCM management system has evolved through the adoption of improved information, communication, and transportation technologies; it now demands the utilization of various strategic resources.
The introduction of SCM benefits the management of a network of interconnected businesses by securing customer loyalty through the cost and time savings derived from consolidating many distribution systems; additionally, SCM helps enterprises form a wide range of marketing strategies. Thus, we can conclude that not only distributors but all types of businesses should adopt the systems approach to supply chain strategies. SCM deals with the basic stream of distribution and increases the value of a company by replacing physical distribution with information. By obtaining and sharing timely information, a company is able to create customer satisfaction at the end point of delivery to the consumer.
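
The Bullwhip Effect discussed in Chapter 3 is a quantitative phenomenon: demand variability is amplified at each upstream stage of the chain when every stage forecasts locally and orders up to a target. The sketch below is a small, self-contained simulation with invented parameters (not data from the LG Electronics case) showing how the standard deviation of orders grows from retailer to manufacturer under a naive moving-average, order-up-to policy.

```python
# Small simulation of the bullwhip effect: each upstream stage forecasts with a
# short moving average and orders up to a multiple of the forecast, which
# amplifies demand variance stage by stage. All parameters are invented.
import random
import statistics

random.seed(1)
WEEKS, WINDOW, COVER = 200, 4, 2.0   # horizon, forecast window, weeks of cover

consumer_demand = [random.gauss(100, 10) for _ in range(WEEKS)]

def upstream_orders(incoming):
    """Orders placed by a stage that sees `incoming` demand from downstream."""
    orders = []
    prev_target = COVER * statistics.mean(incoming[:WINDOW])
    for t in range(WINDOW, len(incoming)):
        forecast = statistics.mean(incoming[t - WINDOW:t])  # moving-average forecast
        target = COVER * forecast                           # order-up-to level
        # Ship what was demanded and adjust the position to the new target.
        order = max(0.0, incoming[t] + target - prev_target)
        prev_target = target
        orders.append(order)
    return orders

retailer = upstream_orders(consumer_demand)
wholesaler = upstream_orders(retailer)
manufacturer = upstream_orders(wholesaler)

for name, series in [("consumer", consumer_demand), ("retailer", retailer),
                     ("wholesaler", wholesaler), ("manufacturer", manufacturer)]:
    print(f"{name:12s} order std dev: {statistics.pstdev(series):6.1f}")
```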


Neurotechnologies and civil law issues (뇌신경과학 연구 및 기술에 대한 민사법적 대응)

  • SooJeong Kim
    • The Korean Society of Law and Medicine / v.24 no.2 / pp.147-196 / 2023
  • Advances in brain science have made it possible to stimulate the brain to treat brain disorders or to connect neuronal activity directly to external devices. Non-invasive neurotechnologies already exist, but invasive neurotechnologies can provide more precise stimulation and measure brainwaves more precisely. Deep brain stimulation (DBS) is now recognized as an accepted treatment for Parkinson's disease and essential tremor, and it has also shown certain positive effects in patients with Alzheimer's disease and depression. Brain-computer interfaces (BCI) are still in the clinical stage but can help patients in a vegetative state communicate and support rehabilitation for people with nerve damage. The issue is that the people who need these invasive neurotechnologies are often those whose capacity to consent is impaired or who are unable to communicate due to disease or nerve damage, while DBS and BCI operations are highly invasive and require the informed consent of patients. Especially in areas where neurotechnology is still in clinical trials, the risks are greater and the benefits are uncertain, so more explanation should be provided to let patients make an informed decision. If the patient is under guardianship, the guardian is able to substitute for the patient's consent, if necessary with the authorization of a court. If the patient is not under guardianship and the patient's capacity to consent is impaired or the patient is unable to express consent, Korean healthcare institutions tend to rely on the patient's near relatives (de facto guardians) to give consent. However, the concept of a de facto guardian is not provided for in our civil law system. In the long run, it would be more appropriate to provide that a patient's spouse or next of kin may be authorized to give consent for the patient if he or she is neither under guardianship nor has appointed an enduring power of attorney. If the patient was not properly informed of the risks involved in the neurosurgery, he or she may be entitled to compensation for intangible damages. If there is a causal relation between the malpractice and the side effects, the patient may also be able to recover damages for those side effects. In addition, both BCI and DBS involve the implantation of electrodes or microchips in the brain, which are controlled by external devices. Since implantable medical devices are subject to product liability law, the patient may be able to sue the manufacturer for damages if a defect caused the adverse effects. Recently, Korea's medical device regulation mandated a liability insurance system for implantable medical devices to strengthen consumer protection.

A Study on the Effect of the Introduction Characteristics of Cloud Computing Services on the Performance Expectancy and the Intention to Use: From the Perspective of the Innovation Diffusion Theory (클라우드 컴퓨팅 서비스의 도입특성이 조직의 성과기대 및 사용의도에 미치는 영향에 관한 연구: 혁신확산 이론 관점)

  • Lim, Jae Su;Oh, Jay In
    • Asia pacific journal of information systems / v.22 no.3 / pp.99-124 / 2012
  • Our society has long been talking about the necessity of innovation. Since companies in particular need to carry out business innovation in their overall processes, they have attempted to apply many innovation factors in the field and have come to pay more attention to innovation. In order to achieve this goal, companies have applied various information technologies (IT) as a means of innovation, and consequently IT has developed greatly. It is natural for the field of IT to have faced another revolution, called cloud computing, which is expected to bring innovative changes to software applications delivered via the Internet, data storage, the use of devices, and their operation. As a vehicle of innovation, cloud computing is expected to lead the changes and advancement of our society and the business world. Although many scholars have researched a variety of topics regarding innovation via IT, few studies have dealt with the issue of cloud computing as IT. Thus, the purpose of this paper is to set the variables of innovation attributes, based on previous articles, as the characteristic variables and to clarify how these variables affect the "Performance Expectancy" of companies and their intention to use cloud computing. The results from the analysis of the data collected in this study are as follows. First, the study utilized a research model developed from the innovation diffusion theory to identify influences on the adoption and spread of IT for cloud computing services. Second, this study summarized the characteristics of cloud computing services as a new concept that introduces innovation at its early stage of adoption in companies. Third, a theoretical model is provided that relates to future innovation by suggesting variables for the innovation characteristics involved in adopting cloud computing services. Finally, this study identified the factors affecting performance expectancy and the intention to use cloud computing services for companies that are considering adopting them. Based on the innovation diffusion model, the study deploys independent variables aligned with the characteristics of cloud computing services and, based on the UTAUT theory, uses Performance Expectancy and Intention to Use as the mediating and dependent variables, respectively. The independent variables in the research model are Relative Advantage, Complexity, Compatibility, Cost Saving, Trialability, and Observability. In addition, 'Acceptance for Adaptation' is applied as a moderating variable to verify its influence on the performance expected from cloud computing services. The validity of the research model was secured by performing factor analysis and reliability analysis. After confirmatory factor analysis was conducted using AMOS 7.0, the 20 hypotheses were tested through analysis of the structural equation model, and 12 of the 20 were accepted. For example, Relative Advantage turned out to have a positive effect on both Individual Performance and Strategic Performance, and it also showed a meaningful direct effect on Intention to Use. This is consistent with the many diffusion studies that regard Relative Advantage as the most important factor in predicting the rate of acceptance of an innovation.
From the viewpoint of the influence on Performance Expectancy, Compatibility has a positive effect on both Individual Performance and Strategic Performance, and it also shows a meaningful correlation with Intention to Use. Although cloud computing has become a strategic adoption issue for companies, Cost Saving turns out to affect Individual Performance without a significant influence on Intention to Use. This indicates that companies expect practical gains, such as time and cost savings and financial improvements, from adopting cloud computing services in the budget-squeezed environment following the global economic crisis of 2008; likewise, this positively affects strategic performance in companies. Trialability is found to have no effect on Performance Expectancy. This suggests that the survey participants are willing to bear the risk of the high uncertainty caused by innovation because, as innovators and early adopters, they actively pursue information about new ideas; they also believe it is unnecessary to test a cloud computing service before adoption, because various types of cloud computing services are available. Observability, however, positively affects both Individual Performance and Strategic Performance, and it also shows a meaningful correlation with Intention to Use. From the analysis of the direct effects of the innovation characteristics on Intention to Use, excluding the mediating variables, Relative Advantage, Compatibility, and Observability show a positive influence on Intention to Use, while Complexity, Cost Saving, and Trialability do not. Among the characteristics believed to be most important for Performance Expectancy, Relative Advantage, Compatibility, and Observability show significant correlations across the cause-and-effect analyses. Cost Saving shows a significant relation with Strategic Performance, which indicates that the cost to build and operate IT is a burden on management; thus, cloud computing services reflect the expectation of an alternative that reduces the investment and operational cost of IT infrastructure in the wake of the recent economic crisis. Cloud computing services are not yet pervasive in the business world, but they are spreading rapidly all over the world because of their inherent merits and benefits. Moreover, the results of this research on the diffusion of innovation differ somewhat from those of existing articles. This seems to be because cloud computing has a strong innovative character that results in a new paradigm shift, whereas most IT studied under the theory of innovation diffusion has been limited to companies and organizations. In addition, the participants in this study are believed to play an important role as innovators and early adopters in introducing cloud computing services and to have the competency to bear the higher uncertainty of innovation. In conclusion, the introduction of cloud computing services is a critical issue in the business world.
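
The paper estimates its model as a structural equation model in AMOS 7.0. As a rough, simplified stand-in (ordinary least-squares regressions rather than a full SEM, with invented column names and random placeholder data rather than the study's survey responses), the sketch below shows how the hypothesized paths from the six innovation characteristics to Performance Expectancy and Intention to Use could be examined in Python.

```python
# Simplified stand-in for the paper's SEM analysis (done there with AMOS 7.0):
# OLS regressions of Performance Expectancy and Intention to Use on the six
# innovation characteristics. Column names and the random data are placeholders.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
n = 300
cols = ["relative_advantage", "complexity", "compatibility",
        "cost_saving", "trialability", "observability"]
df = pd.DataFrame(rng.normal(size=(n, len(cols))), columns=cols)

# Placeholder outcome constructs (in the study these come from survey scales).
df["performance_expectancy"] = (0.5 * df.relative_advantage
                                + 0.3 * df.compatibility
                                + rng.normal(scale=0.5, size=n))
df["intention_to_use"] = (0.6 * df.performance_expectancy
                          + 0.2 * df.observability
                          + rng.normal(scale=0.5, size=n))

# Path 1: innovation characteristics -> Performance Expectancy
pe_model = smf.ols("performance_expectancy ~ " + " + ".join(cols), data=df).fit()
# Path 2: Performance Expectancy and characteristics -> Intention to Use
iu_model = smf.ols("intention_to_use ~ performance_expectancy + " + " + ".join(cols),
                   data=df).fit()

print(pe_model.summary().tables[1])
print(iu_model.summary().tables[1])
```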
