• Title/Summary/Keyword: process rate

Search Results: 12,519

A Study on Hospital Nurses' Preferred Duty Shift and Duty Hours (병원 간호사의 선호근무시간대에 관한 연구)

  • Lee, Gyeong-Sik;Jeong, Geum-Hui
    • The Korean Nurse
    • /
    • v.36 no.1
    • /
    • pp.77-96
    • /
    • 1997
  • The duty shifts of hospital nurses not only affect nurses' physical and mental health but also present various personnel management problems which often result in high turnover rates. In this context, a study was carried out over a period of two months, from October to November 1995, to find out the status of hospital nurses' duty shift patterns and their preferred duty hours and fixed duty shifts. The study population was 867 RNs working in five general hospitals located in Seoul and its vicinity. A questionnaire developed by the author was used for data collection. The response rate was 85.9 percent, or 745 returns. The SAS program was used for data analysis, with computation of frequencies, percentages and chi-square tests. The findings of the study are as follows: 1. General characteristics of the study population: 56 percent of the respondents were in the "< 25 years" age group and 76.5 percent were single; the predominant proportion of respondents were junior nursing college graduates (92.2%) with less than 5 years of nursing experience in hospitals (65.5%). Regarding their future working plans in the nursing profession, nearly 50% responded as uncertain. The reason given for their career plans was predominantly "personal growth and development" rather than financial reasons. 2. The interval for rotation of duty stations was found to be mostly irregular (56.4%), while others reported weekly (16.1%), monthly (12.9%), and fixed-term (4.6%) rotations. 3. The main problems related to duty shifts, reported particularly by evening- and night-duty nurses, were "not enough time for the family," "fear for personal security when returning home late at night," "lack of leisure time," "problems in physical and physiological adjustment," "problems in family life," and "lack of time for interaction with fellow nurses." 4. Forty percent of the respondents reported having had duty shift rotations "1-2 times," while all others reported "0 times," "2-3 times," "more than 3 times," etc., which suggests irregularity in duty shift rotations. 5. The majority (62.8%) of the study population was found to favor the rotating system of duty stations. The reasons given for favoring the rotation system were the opportunity for "learning new things and personal development," "better human relations," "better understanding of various duty stations," and "a change from the monotonous routine job." The proportion of those who disfavored the rotating system was 34.7 percent, giving reasons such as "it impedes the development of specialization," "poor job performance," and "stress factors." Furthermore, respondents made the following comments in relation to the rotation of duty stations: nurses should be given the opportunity to participate in the decision-making process; personal interests and aptitudes should be considered; and rotations should occur at regular intervals or be planned in advance. 6. Regarding future career plans, the older, married group with longer nursing experience appeared more likely to consider nursing as their lifetime career than the younger, single group with shorter nursing experience ($\chi^2=61.19$, $p=.000$; $\chi^2=41.55$, $p=.000$). The reason given for their future career plans, regardless of the length of future service, was predominantly "personal growth and development" rather than financial reasons.
Further analysis showed that the group with shorter career plans claimed "financial reasons" for their future career more readily than the group who considered nursing as their lifetime career ($\chi^2=11.73$, $p=.003$). This finding suggests the need for careful consideration in the personnel management of nursing administration, particularly when dealing with nurses' career development. The majority of respondents preferred the fixed day shift. However, further analysis of those who preferred the evening shift by age and marital status showed that the "< 25 years" group (15.1%) and the single group (13.2%) were more likely to favor the fixed evening shift than the "> 25 years" (6.4%) and married (4.8%) groups. These differences were statistically significant ($\chi^2=14.54$, $p=.000$; $\chi^2=8.75$, $p=.003$). 7. A great majority of the respondents (86.9%, or n=647) were found to prefer day shifts. When four different types of duty shifts (Types A, B, C, D) were presented, 55.0 percent of all respondents preferred Type A, the existing one, followed by Type D (22.7%), Type B (12.4%) and Type C (8.2%). 8. When monetary incentives for the evening shift (20% of salary) and night shift (40% of salary) of the existing duty type were offered, the day shift again appeared to be the most preferred, although at a slightly lower rate (66.4% against 86.9%). With the same incentives, the preference rates for evening and night shifts increased from 11.0 to 22.4 percent and from 0.5 to 3.0 percent, respectively. When the age variable was controlled, the "< 25 years" group showed higher preference rates for the evening and night shifts (31.6%, 4.8%) than the "> 25 years" group (15.5%, 1.3%), respectively (p=.000). Marital status also appeared to affect shift preferences: the single group showed a lower rate for day duty (69.0%) against 83.6% for the married group, and higher rates for evening and night duties (27.2%, 15.1%) against those of the married group (3.8%, 1.8%). These differences were all statistically significant (p=.001). 9. The findings on preferences for the three types of fixed duty hours, namely B, C and D (with additional monetary incentives), are as follows, in order of preference: Type B (12 hrs a day, 3 days a week): day shift (64.1%), evening shift (26.1%), night shift (6.5%); Type C (12 hrs a day, 4 days a week): evening shift (49.2%), day shift (32.8%), night shift (11.5%); Type D (10 hrs a day, 4 days a week): a similar trend to Type B. The higher preferences for evening and night duties when incentives are given, as shown above, suggest the need to introduce different patterns of duty hours and incentive measures in order to overcome the difficulties in rostering nursing duties. However, the interpretation of the above data, particularly for Type C, requires caution because the total number of respondents is very small (n=61); further in-depth study is required. In conclusion, the patterns of nurses' duty hours and shifts in most hospitals in the country appear to have been neither varied nor flexible. The stereotyped rostering system of three shifts and insensitivity to the personal lives of nurses seem to prevail.
This study appears to support the view that irregular and frequent rotation of duty shifts may contribute to most nurses' maladjustment problems in physical and mental health and in personal and family life, which may eventually result in high turnover rates. In order to overcome the increasing problems in the personnel management of hospital nurses, particularly in rostering evening and night duty shifts, which may be related to eventual high turnover rates, the findings of this study strongly suggest the need to introduce new rostering systems, including fixed duties and appropriate incentive measures for the evening and night shifts that most nurses want to avoid. Considering that the nursing care of inpatients is a round-the-clock business, the practice of the nursing duty shift system is inevitable. In this context, based on the findings of this study, the following are recommended: 1. Further in-depth studies on duty shifts and hours should be undertaken to develop appropriate and effective rostering systems for hospital nurses. 2. Appropriate incentive measures for evening and night duty shifts, along with organizational measures such as trials of preferred duty time bands, duty hours, and fixed duty shifts, should be introduced if good quality of care for patients is to be maintained around the clock. This may require the initiation of systematic research and development activities in the field of hospital nursing administration as a permanent part of the hospital system. 3. Planned and regular intervals, orientation and training, and professional and personal growth should be considered for the rotation of different duty stations or units. 4. Considering the high degree of preference for the "10 hours a day, 4 days a week" duty type shown in this study, it would be worthwhile to undertake R&D-type studies in large hospital settings.
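The group comparisons above (career plans by age and marital status, shift preferences by age group) rest on chi-square tests of independence computed in SAS. As a minimal, hypothetical illustration of that kind of test, not the authors' analysis or data, the following Python sketch runs scipy's chi-square test of independence on a made-up 2x2 table.

```python
from scipy.stats import chi2_contingency

# Hypothetical contingency table: rows = age group (<25, >=25 years),
# columns = preferred fixed shift (day, evening). Counts are illustrative only.
table = [[215, 38],   # < 25 years
         [280, 19]]   # >= 25 years

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi-square = {chi2:.2f}, dof = {dof}, p = {p:.4f}")
# A small p-value (e.g., p < .05) would indicate that shift preference
# differs between the two age groups, as reported in the abstract.
```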


Factors Affecting International Transfer Pricing of Multinational Enterprises in Korea (외국인투자기업의 국제이전가격 결정에 영향을 미치는 환경 및 기업요인)

  • Jun, Tae-Young;Byun, Yong-Hwan
    • Korean small business review
    • /
    • v.31 no.2
    • /
    • pp.85-102
    • /
    • 2009
  • With the continued globalization of world markets, transfer pricing has become one of the dominant sources of controversy in international taxation. Transfer pricing is the process by which a multinational corporation calculates a price for goods and services that are transferred to affiliated entities. Consider a Korean electronics enterprise that buys supplies from its own subsidiary located in China. How much the Korean parent company pays its subsidiary will determine how much profit the Chinese unit reports in local taxes. If the parent company pays above normal market prices, it may appear to have a poor profit, even if the group as a whole shows a respectable profit margin. In this way, transfer prices affect the taxable income reported in each country in which the multinational enterprise operates. Their importance lies in the fact that, according to the OECD, around 60% of international trade involves transactions between two related parts of multinationals. Multinational enterprises (hereafter MEs) put much effort into using their organizational advantages to make global investments. MEs wish to minimize their tax burden, so they spend a fortune on economists and accountants to justify transfer prices that suit their tax needs. Local governments, on the contrary, are not prepared to cope with MEs' powerful financial instruments. Tax authorities in each country wish to ensure that the tax base of any ME is divided fairly. Thus, both tax authorities and MEs have a vested interest in the way in which a transfer price is determined, and this is why MEs' international transfer prices are at the center of tax disputes. Transfer pricing issues and practices are sometimes difficult for regulators to control because tax administrations do not have enough staff with the knowledge and resources necessary to understand them. The authors examine transfer pricing practices to provide resources useful in designing tax incentives and regulation schemes for policy makers. This study focuses on identifying the relevant business and environmental factors that could influence the international transfer pricing of MEs. From this perspective, we empirically investigate how management's perception of the related variables influences the choice of international transfer pricing method. We believe that this research is particularly useful for the design of tax policy, because with its assistance the tax administration can concentrate on a few selected factors in consideration of its limited budget. The data consist of questionnaire responses from foreign firms in Korea with investment balances exceeding one million dollars at the end of 2004. We mailed questionnaires to 861 managers in charge of the accounting departments of each company, resulting in 121 valid responses. Seventy-six percent of the sample firms are classified as small and medium-sized enterprises with assets below 100 billion Korean won. Reviewing the transfer pricing methods, cost-based transfer pricing is the most popular, adopted by 60 firms; the market-based method is used by 31 firms, and 13 firms reported using the resale-price method. Regarding the nationalities of the foreign investors, the Japanese and the Americans constitute most of the sample. Logistic regressions were performed for the statistical analysis. The dependent variable is binary, indicating whether the method of international transfer pricing is market-based or cost-based.
This binary classification is founded on the belief that the market-based method is a relatively objective way of pricing compared with the cost-based methods. Cost-based pricing is assumed to give managers flexibility in transfer pricing decisions; therefore, local regulatory agencies are thought to prefer market-based pricing over cost-based pricing. The independent variables comprise eight factors: corporate tax rate, tariffs, relations with the local tax authorities, tax audits, equity ratios of local investors, volume of internal trade, sales volume, and product life cycle. The first four variables are included in the model because taxation lies at the center of transfer pricing disputes, so identifying their impact in the Korean business environment is much needed. The equity ratio is included to represent the interests of local partners. The volume of internal trade has sometimes been employed in previous research to check the pricing behavior of managers, so we follow those footsteps in this paper. The product life cycle is used as a surrogate for competition in local markets. The control variables are firm size and the nationality of the foreign investors. Firm size is controlled with a dummy variable indicating whether or not the firm is small and medium-sized, because some researchers report that big firms behave differently from small and medium-sized firms in transfer pricing. The other control variable is also expressed as a dummy variable indicating whether or not the investor is American, because some prior studies conclude that the American management style is different in that it limits branch managers' freedom of decision. Reviewing the statistical results, we found that managers prefer the cost-based method over the market-based method as the importance of corporate taxes and tariffs increases. This result means that managers need flexibility to lessen the tax burden when they feel taxes are important. They also prefer the cost-based method as the product life cycle matures, which means that they support subsidiaries in local market competition using cost-based transfer pricing. On the contrary, as the relationship with the local tax authorities becomes more important, managers prefer the market-based method, because market-based pricing is a better way to maintain good relations with tax officials. The other variables, such as tax audits, volume of internal transactions, sales volume, and local equity ratio, showed only insignificant influences. Additionally, we replaced the two tax variables (corporate taxes and tariffs) with data on the top marginal tax rate and the mean tariff rate of each country and performed another regression to see whether we would obtain different results. As a consequence, we found something different for the mean tariffs, which show only an insignificant influence on the dependent variable. We conjecture that each company in the sample pays tariffs at a specific rate applied only to its own products, which could lie far from the mean tariff rate. Therefore, we conclude that more detailed data showing the tariffs of each company are needed if we want to check the role of this variable. Considering that the present paper has relied heavily on questionnaires, an effort to build a reliable database is needed to enhance the research reliability.
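The study's central model is a logistic regression of a binary transfer-pricing choice (market-based vs. cost-based) on the perceived importance of the eight factors plus two dummy controls. The sketch below shows the general shape of such a model with statsmodels on simulated data; the variable names echo the abstract, but the data and any resulting coefficients are purely illustrative, not the paper's estimates.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(42)
n = 121  # same order of magnitude as the 121 valid responses

# Simulated survey-style predictors (1-5 importance scales, ratios, dummies);
# all values are made up for demonstration.
X = pd.DataFrame({
    "corporate_tax":    rng.integers(1, 6, n),
    "tariffs":          rng.integers(1, 6, n),
    "tax_authority":    rng.integers(1, 6, n),
    "tax_audit":        rng.integers(1, 6, n),
    "local_equity":     rng.uniform(0, 1, n),
    "internal_trade":   rng.uniform(0, 1, n),
    "sales_volume":     rng.uniform(0, 1, n),
    "product_maturity": rng.integers(1, 6, n),
    "sme_dummy":        rng.integers(0, 2, n),
    "us_dummy":         rng.integers(0, 2, n),
})
# 1 = market-based pricing, 0 = cost-based pricing (simulated outcome).
y = rng.integers(0, 2, n)

model = sm.Logit(y, sm.add_constant(X))
result = model.fit(disp=False)
print(result.summary())
# In the paper's terms, negative coefficients on corporate_tax / tariffs would
# correspond to tax importance pushing managers toward cost-based pricing.
```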

Information Privacy Concern in Context-Aware Personalized Services: Results of a Delphi Study

  • Lee, Yon-Nim;Kwon, Oh-Byung
    • Asia pacific journal of information systems
    • /
    • v.20 no.2
    • /
    • pp.63-86
    • /
    • 2010
  • Personalized services directly and indirectly acquire personal data, in part, to provide customers with higher-value services that are specifically context-relevant (such as place and time). Information technologies continue to mature and develop, providing greatly improved performance. Sensor networks and intelligent software can now obtain context data, and that is the cornerstone for providing personalized, context-specific services. Yet the danger of personal information overflow is increasing, because the data retrieved by the sensors usually contain private information. Various technical characteristics of context-aware applications have troubling implications for information privacy. In parallel with the increasing use of context for service personalization, information privacy concerns have also increased, such as concerns over the unrestricted availability of context information. Those privacy concerns are consistently regarded as a critical issue facing the success of context-aware personalized services. The entire field of information privacy is growing as an important area of research, with many new definitions and terminologies, because of the need for a better understanding of information privacy concepts. In particular, the factors of information privacy need to be revised according to the characteristics of new technologies. However, previous work on the information privacy factors of context-aware applications has at least two shortcomings. First, there has been little overview of the technical characteristics of context-aware computing; existing studies have focused on only a small subset of those characteristics. Therefore, there has not been a mutually exclusive set of factors that uniquely and completely describes information privacy in context-aware applications. Second, user surveys have been widely used to identify the factors of information privacy in most studies, despite the limits of users' knowledge and experience of context-aware computing technology. To date, since context-aware services have not yet been widely deployed on a commercial scale, only very few people have prior experience with context-aware personalized services. It is difficult to build users' knowledge of context-aware technology even by increasing their understanding in various ways: scenarios, pictures, flash animations, etc. Therefore, conducting a survey on the assumption that the participants have sufficient experience or understanding of the technologies shown in the survey may not be valid. Moreover, some surveys are based on simplifying and hence unrealistic assumptions (e.g., they consider only location information as context data). A better understanding of information privacy concern in context-aware personalized services is thus highly needed. Hence, the purpose of this paper is to identify a generic set of factors for elemental information privacy concern in context-aware personalized services and to develop a rank-ordered list of information privacy concern factors. We consider the overall technology characteristics to establish a mutually exclusive set of factors. A Delphi survey, a rigorous data collection method, was deployed to obtain reliable opinions from experts and to produce a rank-ordered list; it therefore lends itself well to obtaining a set of universal factors of information privacy concern and their priorities.
An international panel of researchers and practitioners with expertise in privacy and context-aware systems was involved in our research. The Delphi rounds faithfully followed the procedure proposed by Okoli and Pawlowski, involving three general rounds: (1) brainstorming for important factors; (2) narrowing down the original list to the most important ones; and (3) ranking the list of important factors. For this round only, experts were treated as individuals, not as a panel. Adapting Okoli and Pawlowski, we outlined the process of administering the study and performed three rounds. In the first and second rounds of the Delphi questionnaire, we gathered a set of exclusive factors for information privacy concern in context-aware personalized services. In the first round, the respondents were asked to provide at least five main factors for the most appropriate understanding of information privacy concern; to assist them, some of the main factors found in the literature were presented to the participants. The second round of the questionnaire discussed the main factors provided in the first round, fleshed out with relevant sub-factors drawn from the literature survey. Respondents were then requested to evaluate each sub-factor's suitability for the corresponding main factor in order to determine the final sub-factors from the candidates; the final factors were those selected by over 50% of the experts. In the third round, a list of factors with corresponding questions was provided, and the respondents were requested to assess the importance of each main factor and its corresponding sub-factors. Finally, we calculated the mean rank of each item to produce the final result. While analyzing the data, we focused on group consensus rather than individual insistence. To do so, a concordance analysis, which measures the consistency of the experts' responses over successive rounds of the Delphi, was adopted during the survey process. As a result, the experts reported that context data collection and the highly identifiable level of identical data are the most important main factor and sub-factor, respectively. Additional important sub-factors included the diverse types of context data collected, tracking and recording functionalities, and embedded and disappearing sensor devices. The average score of each factor is very useful for future context-aware personalized service development from the viewpoint of information privacy. The final factors differ from those proposed in other studies in the following ways. First, the concern factors differ from those in existing studies, which are based on privacy issues that may occur during the lifecycle of acquired user information; our study helped to clarify these sometimes vague issues by determining which privacy concern issues are viable based on the specific technical characteristics of context-aware personalized services. Since a context-aware service differs in its technical characteristics from other services, we selected the specific characteristics with a higher potential to increase users' privacy concerns. Second, this study considered privacy issues in terms of service delivery and display, which were almost overlooked in existing studies, by introducing IPOS as the factor division. Lastly, for each factor, the study related the level of importance to professionals' opinions on the extent to which users have privacy concerns.
A traditional questionnaire survey was not selected because, for context-aware personalized services, users almost completely lack understanding of and experience with the new technology. Regarding users' privacy concerns, the professionals in the Delphi process selected context data collection, tracking and recording, and the sensor network as the most important technological characteristics of context-aware personalized services. For the creation of context-aware personalized services, this study demonstrates the importance of determining an optimal methodology: which technologies, in what sequence, are needed to acquire which types of users' context information. Along with the development of context-aware technology, most studies focus on which services and systems should be provided and developed by utilizing context information. However, the results of this study show that, in terms of users' privacy, it is necessary to pay greater attention to the activities that acquire context information. Building on the evaluation of the sub-factors, additional studies would be necessary on approaches to reducing users' privacy concerns toward technological characteristics such as the highly identifiable level of identical data, the diverse types of context data collected, tracking and recording functionality, and embedded and disappearing sensor devices. The factor ranked next in importance after input is context-aware service delivery, which is related to output. The results show that the delivery and display presenting services to users in context-aware personalized services, moving toward the anywhere-anytime-any-device concept, are regarded as even more important than in previous computing environments. Considering these concern factors when developing context-aware personalized services will help to increase the service success rate and, hopefully, user acceptance of those services. Our future work will be to adopt these factors for qualifying context-aware service development projects, such as u-city development projects, in terms of service quality and hence user acceptance.
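The concordance analysis used to track expert consensus across Delphi rounds is commonly operationalized as Kendall's coefficient of concordance W; the abstract does not name the exact statistic, so treating it as Kendall's W is an assumption. The sketch below computes W (without tie correction) for a hypothetical raters-by-items rank matrix, purely to illustrate the calculation.

```python
import numpy as np

def kendalls_w(ranks: np.ndarray) -> float:
    """Kendall's coefficient of concordance W for a (raters x items) rank matrix.
    Assumes complete rankings with no ties."""
    m, n = ranks.shape                     # m raters, n ranked items
    rank_sums = ranks.sum(axis=0)          # column sums R_j
    s = ((rank_sums - rank_sums.mean()) ** 2).sum()
    return 12.0 * s / (m ** 2 * (n ** 3 - n))

# Hypothetical rankings of 5 privacy-concern factors by 4 experts (1 = most important).
ranks = np.array([
    [1, 2, 3, 4, 5],
    [1, 3, 2, 4, 5],
    [2, 1, 3, 5, 4],
    [1, 2, 4, 3, 5],
])
print(f"Kendall's W = {kendalls_w(ranks):.2f}")  # 1.0 = perfect agreement, 0 = none
```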

Enhanced Transport and Risk of a Highly Nonpolar Pollutant in the Presence of LNAPL in Soil-groundwater System: In Case of p-xylene and benz[a]anthracene (LNAPL에 의한 소수성 유기오염물질의 지하환경 내 이동성 변화가 위해성 증가에 미치는 영향: p-xylene과 benz[a]anthracene의 경우)

  • Ryu, Hye-Rim;Han, Joon-Kyoung;Kim, Young-Jin;Nam, Kyoung-Phile
    • Journal of Soil and Groundwater Environment
    • /
    • v.12 no.4
    • /
    • pp.25-31
    • /
    • 2007
  • Characterizing the risk posed by a mixture of chemicals is a challenging task because chemical interactions among the individual components may affect their physical behavior and hence alter receptors' exposure to them. In this study, cell tests representing the subsurface environment were carried out using benz[a]anthracene (BaA) and p-xylene, focusing on their phase-transforming interaction, to verify the increased mobility and risk of a highly sorbed pollutant in the presence of a less sorbed, mobile liquid pollutant. A transport model was also developed to interpret the results and to simulate the same process at field scale. The experimental results showed that BaA had far greater mobility in the presence of p-xylene than in its absence. The main transport mechanism in the vadose zone was dissolution into p-xylene or water. The transport model, utilizing Defined Time Steps (DTS), was developed and tested against the experimental results. The predicted and observed values showed a similar tendency, but more work is needed for more precise modeling. The field-scale simulation results showed that the transport of BaA to the groundwater table was significantly faster in the presence of NAPL, and the oral carcinogenic risk of BaA calculated from the groundwater concentration was 15~87 times larger when mixed with NAPL than when BaA was the sole contaminant. Since the transport rate of PAHs in the subsurface is very slow without NAPL, and no degradation of PAHs during transport was considered in this simulation, the increase in risk in the presence of NAPL is expected to be even greater at actual contaminated sites.
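The oral carcinogenic risk reported above is, in general form, a chronic daily intake from ingesting groundwater multiplied by an oral slope factor. The sketch below shows that standard EPA-style calculation; every parameter value, including the slope factor derived from benzo[a]pyrene via a toxic equivalency factor, is an assumed default for illustration and is not taken from the paper.

```python
# Illustrative EPA-style oral carcinogenic risk calculation for a groundwater pathway.
# All parameter values are typical defaults / assumptions, not values from the paper.

C   = 1.0e-4    # groundwater concentration of BaA (mg/L), hypothetical simulation output
IR  = 2.0       # drinking water ingestion rate (L/day)
EF  = 350       # exposure frequency (days/year)
ED  = 30        # exposure duration (years)
BW  = 70        # body weight (kg)
AT  = 70 * 365  # averaging time for carcinogens (days)
SF  = 0.73      # assumed oral slope factor for benz[a]anthracene ((mg/kg-day)^-1),
                # taken as 0.1 x benzo[a]pyrene's 7.3 via a toxic equivalency factor

cdi  = C * IR * EF * ED / (BW * AT)   # chronic daily intake (mg/kg-day)
risk = cdi * SF                       # excess lifetime cancer risk
print(f"CDI = {cdi:.3e} mg/kg-day, risk = {risk:.3e}")

# Because risk is linear in concentration here, scaling C by the 15~87x mobility
# enhancement reported in the abstract scales the risk by the same factor.
risk_with_napl = risk * 87
print(f"Upper-bound risk with NAPL-enhanced transport: {risk_with_napl:.3e}")
```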

A Study on Chloride Threshold Level of Blended Cement Mortar Using Polarization Resistance Method (분극저항 측정기법을 이용한 혼합 시멘트 모르타르의 임계 염화물 농도에 대한 연구)

  • Song, Ha-Won;Lee, Chang-Hong;Lee, Kewn-Chu;Ann, Ki-Yong
    • Journal of the Korea Concrete Institute
    • /
    • v.21 no.3
    • /
    • pp.245-253
    • /
    • 2009
  • The importance of chloride ions in the corrosion of steel in concrete has led to the concept of the chloride threshold level (CTL). The CTL can be defined as the chloride content at the steel depth that is necessary to sustain local passive film breakdown and hence initiate the corrosion process. Despite the importance of the CTL, due to the uncertainty in determining the actual limits for chloride-induced corrosion in various environments, conservative values such as 0.4% by weight of cement or 1.2 kg per 1 $m^3$ of concrete have been used in predicting the corrosion-free service life of reinforced concrete structures. This paper studies the CTL for blended cement concrete by comparing the resistance of cementitious binders to the onset of chloride-induced corrosion of steel. Mortar specimens were cast with a centrally located steel rebar of 10 mm diameter, using ordinary Portland cement (OPC) mortar and mortars in which the cement was replaced with 30% pulverized fuel ash (PFA), 60% ground granulated blast furnace slag (GGBS) and 10% silica fume (SF), respectively, at a free W/B ratio of 0.4. Chlorides were admixed into the mixing water at 0.0, 0.2, 0.4, 0.6, 0.8, 1.0, 1.5, 2.0, 2.5 and 3.0% by weight of binder (based on $Cl^-$). Specimens were cured for 28 days at room temperature, wrapped in polyethylene film to avoid leaching of chloride and hydroxyl ions. Then the corrosion rate was measured using the polarization resistance method, and the CTL ranking of the binders was determined. The CTLs of OPC, 60% GGBS, 30% PFA and 10% SF were thus determined to be 1.6%, 0.45%, 0.8% and 2.15%, respectively.
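Polarization resistance measurements of the kind used here are normally converted to a corrosion rate through the Stern-Geary relation $i_{corr} = B / R_p$. The sketch below shows that conversion on hypothetical numbers; the Stern-Geary constant, specimen geometry, and active-corrosion threshold are common textbook assumptions rather than values reported in the paper.

```python
# Stern-Geary conversion from polarization resistance to corrosion current density.
# B = 26 mV is a value commonly assumed for actively corroding steel in concrete;
# the R_p value and specimen geometry below are hypothetical, not from the paper.

B_mV = 26.0                        # Stern-Geary constant (mV)
area_cm2 = 3.14159 * 1.0 * 5.0     # exposed steel area: 10 mm dia., 50 mm length (illustrative)

R_p_ohm = 5.0e3                    # measured polarization resistance (ohm), hypothetical
R_p_ohm_cm2 = R_p_ohm * area_cm2   # normalize to the exposed area

i_corr = (B_mV / 1000.0) / R_p_ohm_cm2 * 1e6   # corrosion current density (uA/cm^2)
print(f"i_corr = {i_corr:.3f} uA/cm^2")

# A common rule of thumb treats i_corr above roughly 0.1-0.2 uA/cm^2 as the onset of
# active corrosion, which is how a chloride threshold can be read off a series of
# specimens with increasing admixed chloride content.
```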

The Effects of Sentiment and Readability on Useful Votes for Customer Reviews with Count Type Review Usefulness Index (온라인 리뷰의 감성과 독해 용이성이 리뷰 유용성에 미치는 영향: 가산형 리뷰 유용성 정보 활용)

  • Cruz, Ruth Angelie;Lee, Hong Joo
    • Journal of Intelligence and Information Systems
    • /
    • v.22 no.1
    • /
    • pp.43-61
    • /
    • 2016
  • Customer reviews help potential customers make purchasing decisions. However, the prevalence of reviews on websites forces customers to sift through them and shifts the focus from a mere search to identifying which of the available reviews are valuable and useful for the purchasing decision at hand. To identify useful reviews, websites have developed different mechanisms to give customers options when evaluating existing reviews. Websites allow users to rate the usefulness of a customer review as helpful or not: Amazon.com uses a ratio-type helpfulness index, while Yelp.com uses a count-type usefulness index. These indices surface helpful reviews for future potential purchasers. This study investigated the effects of sentiment and readability on useful votes for customer reviews. Previous studies on the relationship between sentiment/readability and usefulness have focused on the ratio-type index used by websites such as Amazon.com. In this study, Yelp.com's count-type usefulness index for restaurant reviews was used to investigate the relationship between sentiment/readability and usefulness votes. Yelp.com's online customer reviews for stores in the beverage and food categories were used for the analysis: in total, 170,294 reviews containing information on each store's reputation and popularity. The control variables were review length, store reputation, and popularity; the independent variables were sentiment and readability; and the dependent variable was the number of helpful votes. The review rating is the moderating variable for review sentiment and readability. Length is the number of characters in a review. Popularity is the number of reviews for a store, and reputation is the average rating of all reviews for a store. The readability of a review was calculated with the Coleman-Liau index, and its sentiment is a positivity score calculated by SentiWordNet. The review rating is a preference score from 1 to 5 (stars) selected by the review author. The dependent variable (i.e., usefulness votes) is a count variable; therefore, the Poisson regression model, which is commonly used to account for the discrete and nonnegative nature of count data, was applied in the analyses, with the increase in helpful votes assumed to follow a Poisson distribution. Because the Poisson model assumes an equal mean and variance and the data were over-dispersed, a negative binomial model that allows for over-dispersion of the count variable was used for estimation. Zero-inflated negative binomial regression was used to model count variables with excessive zeros and over-dispersed count outcomes; with this model, the excess zeros are assumed to be generated through a process separate from the count values and are therefore modeled as independently as possible. The results showed that positive sentiment had a negative effect on gaining useful votes for positive reviews but no significant effect on negative reviews. Poor readability had a negative effect on gaining useful votes and was not moderated by the review star ratings. These findings yield considerable managerial implications. The results help online websites analyze their review guidelines and identify useful reviews for their business. Based on this study, positive reviews are not necessarily helpful; therefore, restaurants should consider which type of positive review is helpful for their business.
Second, this study benefits businesses and website designers in creating review mechanisms, indicating which types of reviews to highlight on their websites and which types can be beneficial to the business. Moreover, this study highlights the review systems employed by websites that allow their customers to post rating reviews.
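Two computational steps in this abstract can be made concrete: the Coleman-Liau readability index, $CLI = 0.0588L - 0.296S - 15.8$ (L = letters per 100 words, S = sentences per 100 words), and a zero-inflated negative binomial regression of helpful-vote counts. The sketch below runs both on made-up data; the choice of statsmodels' ZeroInflatedNegativeBinomialP class and all toy variables are assumptions for illustration, not the authors' actual pipeline.

```python
import re
import numpy as np
import statsmodels.api as sm
from statsmodels.discrete.count_model import ZeroInflatedNegativeBinomialP

def coleman_liau(text: str) -> float:
    """Coleman-Liau index: 0.0588*L - 0.296*S - 15.8,
    where L = letters per 100 words and S = sentences per 100 words."""
    letters   = len(re.findall(r"[A-Za-z]", text))
    words     = max(len(re.findall(r"\b\w+\b", text)), 1)
    sentences = max(len(re.findall(r"[.!?]+", text)), 1)
    return 0.0588 * (letters / words * 100) - 0.296 * (sentences / words * 100) - 15.8

print(coleman_liau("The soup was great. Service was slow but friendly."))

# Toy zero-inflated count regression of helpful votes on sentiment and readability.
rng = np.random.default_rng(1)
n = 1000
sentiment   = rng.uniform(0, 1, n)
readability = rng.normal(10, 3, n)
votes = rng.binomial(1, 0.4, n) * rng.poisson(2, n)   # many zeros, over-dispersed-ish

X = sm.add_constant(np.column_stack([sentiment, readability]))
zinb = ZeroInflatedNegativeBinomialP(votes, X, exog_infl=np.ones((n, 1)), p=2)
res = zinb.fit(method="bfgs", maxiter=500, disp=False)
print(res.params)   # inflation intercept, count intercept, sentiment, readability, alpha
```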

Current Status and Success Strategies of Crowdfunding for Start-up in Korea (국내 창업분야 크라우드펀딩(Crowdfunding) 현황과 성공전략)

  • Yoo, Younggeul;Jang, Ikhoon;Choe, Youngchan
    • Asia-Pacific Journal of Business Venturing and Entrepreneurship
    • /
    • v.9 no.4
    • /
    • pp.1-12
    • /
    • 2014
  • Raising funds effectively is an essential factor in business operation. However, in Korea, many start-ups and small businesses have difficulty with fund-raising. In recent years, crowdfunding, a new method of funding the projects of individuals or organizations by raising monetary contributions from a large number of people, has been growing alongside the diffusion of social media. Crowdfunding is at an early stage in Korea, and the majority of projects are in cultural or art categories; a high proportion of projects in the start-up sector have social value. Crowdfunding in Korea has great potential because its success rate is much higher than that of advanced countries, although its market size is much smaller. The purpose of this paper is to propose success strategies of crowdfunding for start-ups through case studies. Five Korean crowdfunding platforms and Kickstarter, a platform in the United States, were investigated. We then examined figures related to the operation of all Korean start-up projects. Finally, we compared cases of success and failure by analyzing eight project characteristics. The study shows that differences in the trustworthiness and activeness of the project creator, the value of rewards, and efforts toward interactivity had great effects on the success of a project, whereas societal contribution and sponsor engagement had no significant influence. The thesis provides the following success strategies of crowdfunding for start-ups. First, the project creator should build a support base through enthusiastic activities before launching the funding project. Second, the project information should include content that clearly shows the process of business development. Third, rewards must be appropriately designed for each amount of support money. Finally, efforts toward interactivity, such as frequent updates, responses to comments, and SNS postings, should follow the launch of the project.


An Analysis of Big Video Data with Cloud Computing in Ubiquitous City (클라우드 컴퓨팅을 이용한 유시티 비디오 빅데이터 분석)

  • Lee, Hak Geon;Yun, Chang Ho;Park, Jong Won;Lee, Yong Woo
    • Journal of Internet Computing and Services
    • /
    • v.15 no.3
    • /
    • pp.45-52
    • /
    • 2014
  • The Ubiquitous City (U-City) is a smart or intelligent city that satisfies human beings' desire to enjoy IT services with any device, anytime, anywhere. It is a future city model based on the Internet of Everything or Things (IoE or IoT), and it includes a large number of video cameras that are networked together. The networked video cameras support many U-City services as one of the main input sources together with sensors, and they generate a huge amount of video information, truly big data for the U-City, all the time. The U-City is usually required to manipulate this big data in real time, which is not easy at all. In addition, the accumulated video data often must be analyzed to detect an event or find a figure among them, which requires a lot of computational power and usually takes a lot of time. Current research tries to reduce the processing time of such big video data, and cloud computing can be a good solution to this problem. There are many cloud computing methodologies that can be applied; MapReduce is an interesting and attractive one, with many advantages, and it is gaining popularity in many areas. Video cameras evolve day by day and their resolution improves sharply, which leads to exponential growth of the data produced by the networked video cameras. We are coping with real big data when we have to deal with video image data produced by good-quality video cameras. Video surveillance systems were of limited use before cloud computing, but they are now spreading widely in U-Cities as useful methodologies have been found. Video data are unstructured, so it is not easy to find good research results on analyzing such data with MapReduce. This paper presents an analysis system for video surveillance, which is a cloud-computing-based video data management system. It is easy to deploy, flexible, and reliable. It consists of the video manager, the video monitors, the storage for the video images, the storage client, and the streaming IN component. The "video monitor" for the video images consists of a "video translator" and a "protocol manager", and the "storage" contains the MapReduce analyzer. All components were designed according to the functional requirements of a video surveillance system. The "streaming IN" component receives the video data from the networked video cameras and delivers them to the "storage client"; it also manages the bottleneck of the network to smooth the data stream. The "storage client" receives the video data from the "streaming IN" component and stores them in the storage; it also helps other components access the storage. The "video monitor" component transfers the video data by smooth streaming and manages the protocol. The "video translator" sub-component enables users to manage the resolution, codec, and frame rate of the video images. The "protocol" sub-component manages the Real Time Streaming Protocol (RTSP) and the Real Time Messaging Protocol (RTMP). We use the Hadoop Distributed File System (HDFS) as the cloud computing storage; Hadoop stores the data in HDFS and provides a platform that can process the data with the simple MapReduce programming model. We suggest our own methodology to analyze the video images using MapReduce: the workflow of the video analysis is presented and explained in detail in this paper. The performance was evaluated experimentally, and we found that our proposed system worked well.
The performance evaluation results are presented in this paper with analysis. On our cluster system, we used compressed $1920{\times}1080$ (FHD) resolution video data, the H.264 codec, and HDFS as the video storage. We measured the processing time according to the number of frames per mapper. Tracing the optimal split size of the input data and the processing time according to the number of nodes, we found that the system performance scales linearly.
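The paper's own MapReduce workflow is only summarized in the abstract, so the sketch below shows the generic Hadoop Streaming pattern such an analysis builds on: a mapper that emits a key per detected event in a frame record and a reducer that aggregates counts per key. The detect_event placeholder and the frame-record format are assumptions for illustration, not the system's actual analyzer.

```python
#!/usr/bin/env python3
"""Hadoop Streaming style mapper/reducer sketch for counting detected events in
video frame records stored in HDFS. detect_event() is a placeholder for whatever
per-frame analysis (e.g., figure or event detection) the pipeline performs."""
import sys

def detect_event(frame_record):
    # Placeholder: return an event label if the frame record matches, else None.
    return "motion" if "motion" in frame_record else None

def mapper(lines):
    # Each input line is assumed to describe one decoded frame (camera id, timestamp, features).
    for line in lines:
        event = detect_event(line)
        if event is not None:
            print(f"{event}\t1")

def reducer(lines):
    # Input arrives sorted by key (event label); sum the counts per key.
    current_key, count = None, 0
    for line in lines:
        key, value = line.rstrip("\n").split("\t")
        if key != current_key:
            if current_key is not None:
                print(f"{current_key}\t{count}")
            current_key, count = key, 0
        count += int(value)
    if current_key is not None:
        print(f"{current_key}\t{count}")

if __name__ == "__main__":
    # Run as: hadoop jar hadoop-streaming.jar -mapper "python3 this.py map" \
    #         -reducer "python3 this.py reduce" -input <frames> -output <counts>
    (mapper if sys.argv[1:] == ["map"] else reducer)(sys.stdin)
```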

A Study on the Effect of Outsourcing on Management Performance (아웃소싱이 기업성과에 미치는 영향)

  • Bae, Ha Jin;Kwak, Soon Jin;Kim, Kwang Soo
    • Asia-Pacific Journal of Business Venturing and Entrepreneurship
    • /
    • v.9 no.5
    • /
    • pp.83-94
    • /
    • 2014
  • Before the economic crisis of 1997, domestic companies focused on increasing their outward size, including the size of their organizations. The result of this outward expansion without substance was the economic crisis, so since then many companies have been shifting their focus away from insourcing toward strengthening their core competence in order to secure global markets, and this has become one reason why companies are reducing their outward size. In particular, to process tasks more effectively and to cope with rapid changes in the business environment, such as raw materials sourced from overseas, sharply rising salaries, and rising property prices, many companies have chosen outsourcing. For most hypotheses, the result was that outsourcing can positively affect the business. First, introducing outsourcing while focusing on core competence can have a positive effect on company performance in areas such as business management, productivity, procurement, administration, product competitiveness, and technology. Second, the results analyzed from a demographic point of view after outsourcing showed a positive effect in most of the research. Third, the effectiveness of each outsourcing type classified by 4M was also positive in most of the research. Fourth, demographic characteristics had a positive effect in most categories when selecting outsourcing companies. Research results on outsourcing vary depending on the goal of outsourcing, and an investigation of domestic and overseas studies reveals two opposing opinions. In this research, there is no consistent result that outsourcing affects business performance, but most hypotheses indicate that outsourcing can have a positive effect on business performance. In this research, the mutual relation was analyzed based on outsourcing intensity, with the reasons for outsourcing assumed to be economic and organizational. This study has the following limitations. First, the sample was too small to obtain significant business performance results (sample: 150; replies: 106; reply rate: 71%). Second, we tried to compare significant differences among outsourcing methods divided by 4M, but there were gaps between the cell counts and it was difficult to make the respondents understand. Third, we tried to find the degree to which demographic factors affect business performance and the selection of outsourced companies.


A Case Study to Estimate the Greenhouse-Gas Mitigation Potential on Conventional Rice Production System

  • Ryu, Jong-Hee;Lee, Jong-Sik;Kim, Kye-Hoon;Kim, Gun-Yeob;Choi, Eun-Jung
    • Korean Journal of Soil Science and Fertilizer
    • /
    • v.46 no.6
    • /
    • pp.502-509
    • /
    • 2013
  • To estimate greenhouse gas (GHG) emissions, we established an inventory of conventional rice cultivation from farmers in Gunsan and Iksan, Jeonbuk province, in 2011~2012. This study calculated the carbon footprint and analyzed the major factors contributing to GHG emissions. We carried out a sensitivity analysis using these main factors and estimated the GHG mitigation potential. We also tried to suggest agricultural methods to reduce GHGs that the farmers in this case study can apply. The carbon footprint per 1 kg of rice produced was 2.21 kg $CO_2$-eq. $kg^{-1}$. Although the amount of $CO_2$ emitted is the largest among the GHGs, methane had the highest contribution to the carbon footprint of the rice production system once it was converted to carbon dioxide equivalents ($CO_2$-eq.) by multiplying by its global warming potential (GWP). The source of $CO_2$ in rice cultivation is the incomplete combustion of fossil fuels used by agricultural machinery. Most of the $CH_4$ is emitted during rice cultivation, and the major factor in $CH_4$ emission is the flooded paddy field under anaerobic conditions. Most of the $N_2O$ is also emitted during the rice cultivation process, and the major source of $N_2O$ emission is the application of fertilizers such as compound fertilizer, urea, and organic fertilizer. The sensitivity analysis of variations in energy consumption showed that diesel had the highest sensitivity among the energy inputs: if diesel consumption is reduced by 10%, the potential $CO_2$ reduction is estimated at about 2.5%. When the application rate of compound fertilizer is reduced by 10%, the potential reduction is calculated to be approximately 1% for $CO_2$ and approximately 1.8% for $N_2O$. When the drainage duration is decreased to 10 days, methane emissions are reduced by approximately 4.5%. In other words, drainage days, tillage, and reduced diesel consumption were the factors with the largest GHG-reduction effect when the amounts of inputs were changed. Accordingly, the proposed methods to decrease GHG emissions were no-tillage, midsummer drainage, etc.
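The carbon footprint figure of 2.21 kg $CO_2$-eq. per kg of rice comes from converting each gas to $CO_2$ equivalents with its global warming potential and summing over inputs. The sketch below shows that conversion plus the 10%-diesel-cut sensitivity logic on hypothetical per-kg emissions; the GWP values (IPCC AR4 100-year: 25 for $CH_4$, 298 for $N_2O$) and all emission amounts are assumptions, not the paper's inventory data.

```python
# CO2-equivalent conversion and a simple sensitivity check, on hypothetical numbers.
# GWP values are IPCC AR4 100-year figures (CH4 = 25, N2O = 298); the paper may use
# different factors, and the emission amounts below are illustrative only.

GWP = {"CO2": 1, "CH4": 25, "N2O": 298}

# Hypothetical emissions per kg of rice produced (kg of each gas per kg of rice).
emissions = {"CO2": 0.60, "CH4": 0.055, "N2O": 0.0008}

def footprint(em):
    """Carbon footprint in kg CO2-eq per kg of product."""
    return sum(GWP[gas] * amount for gas, amount in em.items())

base = footprint(emissions)
print(f"baseline footprint: {base:.2f} kg CO2-eq/kg")

# Sensitivity: a 10% cut in diesel use lowers only the CO2 (fuel combustion) term here.
reduced = dict(emissions, CO2=emissions["CO2"] * 0.90)
print(f"with 10% less diesel: {footprint(reduced):.2f} kg CO2-eq/kg "
      f"({(base - footprint(reduced)) / base:.1%} reduction)")
```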