• Title/Summary/Keyword: Procedure


The Effects of Evaluation Attributes of Cultural Tourism Festivals on Satisfaction and Behavioral Intention (문화관광축제 방문객의 평가속성 만족과 행동의도에 관한 연구 - 2006 광주김치대축제를 중심으로 -)

  • Kim, Jung-Hoon
    • Journal of Global Scholars of Marketing Science
    • /
    • v.17 no.2
    • /
    • pp.55-73
    • /
    • 2007
  • Festivals are an indispensable feature of cultural tourism (Formica & Uysal, 1998). Cultural tourism festivals are increasingly being used as instruments for promoting tourism and boosting the regional economy, so much research related to festivals has been undertaken from a variety of perspectives. Plans to revisit a particular festival have been viewed as an important research topic both in academia and in the tourism industry, and festivals have frequently been labeled as cultural events. Cultural tourism festivals have become a crucial component in constituting the attractiveness of tourism destinations (Prentice, 2001). As a result, a considerable number of tourist studies have been carried out on diverse cultural tourism festivals (Backman et al., 1995; Crompton & Mckay, 1997; Park, 1998; Clawson & Knetch, 1996). Much of the previous literature empirically shows a close linkage between tourist satisfaction and behavioral intention at festivals. The main objective of this study is to investigate the effects of evaluation attributes of cultural tourism festivals on satisfaction and behavioral intention. To accomplish this objective, evaluation items of cultural tourism festivals were identified through a literature review and an empirical study. Using a varimax rotation with Kaiser normalization, the research obtained four factors from the 18 evaluation attributes of cultural tourism festivals. Some empirical studies have examined the relationship between behavioral intention and actual behavior. To understand the link between tourist satisfaction and behavioral intention, this study suggests five hypotheses and a hypothesized model. The analysis is based on primary data collected from visitors who participated in the '2006 Gwangju Kimchi Festival'. In total, 700 self-administered questionnaires were distributed and 561 usable questionnaires were obtained. Respondents were presented with the 18 satisfaction items on a scale from 1 (strongly disagree) to 7 (strongly agree). Dimensionality and stability of the scale were evaluated by a factor analysis with varimax rotation. Four factors emerged with eigenvalues greater than 1, which explained 66.40% of the total variance, with Cronbach's alpha ranging from 0.774 to 0.876. The four factors were named: advertisement and guides, programs, food and souvenirs, and convenient facilities. To test and estimate the hypothesized model, a two-step approach with an initial measurement model and a subsequent structural model for structural equation modeling was used. The AMOS 4.0 analysis package was used to conduct the analysis, with parameters estimated by the maximum likelihood procedure. The chi-square test, the most common model goodness-of-fit test, was used; in addition, following the literature on structural equation modeling, further fit indexes were used to assess the adequacy of the suggested model: the goodness-of-fit index (GFI) and root mean square error of approximation (RMSEA) as absolute fit indexes, and the normed fit index (NFI) and non-normed fit index (NNFI) as incremental fit indexes. The results of t-tests and ANOVAs revealed significant differences (0.05 level); therefore H1 (tourist satisfaction levels differ across demographic traits) is supported.
According to the multiple regression analysis and AMOS results, H2 (tourist satisfaction positively influences revisit intention), H3 (tourist satisfaction positively influences word of mouth), H4 (evaluation attributes of cultural tourism festivals influence tourist satisfaction), and H5 (tourist satisfaction positively influences behavioral intention) are also supported. The conclusions of this study are as follows. First, there were differences in satisfaction levels in accordance with the demographic characteristics of visitors; not all visitors had the same degree of satisfaction with their cultural tourism festival experience. It is therefore necessary to understand the satisfaction of tourists if the experiences that are provided are to meet their expectations, and in making festival plans the organizer should consider demographic variables when explaining and segmenting visitors to cultural tourism festivals. Second, satisfaction with the evaluation attributes of cultural tourism festivals had a significant direct impact on visitors' intention to revisit such festivals and on the word-of-mouth publicity they shared. The results indicate that visitor satisfaction is a significant antecedent of the intention to revisit such festivals, so festival organizers should strive to forge long-term relationships with visitors. It is also necessary to understand how the intention to revisit a festival changes over time and to identify the critical satisfaction factors. Third, it is confirmed that behavioral intention was enhanced by satisfaction. The strong link between satisfaction and the behavioral intentions of visitors is ensured by high-quality advertisement and guides, programs, food and souvenirs, and convenient facilities. Thus, examining revisit intention from a temporal viewpoint may be of great significance for both practical and theoretical reasons. Additionally, festival organizers should give special attention to visitor satisfaction, as satisfied visitors are more likely to return sooner. The findings of this research have several practical implications for festival managers. The promotion of cultural festivals should be based on an understanding of tourist satisfaction for the long-term success of tourism, and this study can help managers carry out this task in a more informed and strategic manner by examining the effects of demographic traits on the level of tourist satisfaction and on behavioral intention; in other words, differentiated marketing strategies should be stressed and executed by the relevant parties. The limitations of this study are as follows: the results cannot be generalized to other cultural tourism festivals because we have not explored many different kinds of festivals. A future study should include a comparative analysis of other festivals and different visitor segments, and further efforts should be directed toward developing more comprehensive temporal models that can explain the behavioral intentions of tourists.
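A minimal sketch of the scale analysis named in this abstract (varimax-rotated factor extraction on the 18 items plus Cronbach's alpha per factor), assuming the survey responses sit in a CSV with 18 Likert-scale columns; the file name, item groupings, and use of the third-party factor_analyzer package are illustrative assumptions, not details from the paper.

```python
# Sketch: varimax-rotated factor extraction on 18 evaluation items and
# per-factor Cronbach's alpha. "festival_survey.csv" and the item
# groupings below are hypothetical.
import pandas as pd
from factor_analyzer import FactorAnalyzer  # third-party package

def cronbach_alpha(df: pd.DataFrame) -> float:
    """Cronbach's alpha over the columns (items) of df."""
    k = df.shape[1]
    item_vars = df.var(axis=0, ddof=1)
    total_var = df.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

items = pd.read_csv("festival_survey.csv")   # assumed layout: 561 respondents x 18 Likert items

fa = FactorAnalyzer(n_factors=4, rotation="varimax")
fa.fit(items)
loadings = pd.DataFrame(fa.loadings_, index=items.columns)
eigenvalues, _ = fa.get_eigenvalues()        # retain factors with eigenvalue > 1

# Illustrative grouping of items into the four named factors
factor_items = {
    "advertisement_and_guides": items.columns[0:5],
    "programs":                 items.columns[5:10],
    "food_and_souvenirs":       items.columns[10:14],
    "convenient_facilities":    items.columns[14:18],
}
for name, cols in factor_items.items():
    print(name, round(cronbach_alpha(items[cols]), 3))
```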


A study on the second edition of Koryo Dae-Jang-Mock-Lock (고려재조대장목록고)

  • Jeong Pil-mo
    • Journal of the Korean Society for Library and Information Science
    • /
    • v.17
    • /
    • pp.11-47
    • /
    • 1989
  • This study intends to examine the background and the procedure of the carving of the tablets of the second edition of Dae-Jang-Mock-Lock (재조대장목록), the time and the route of the moving of the tablets into Haein-sa, and its contents and system. This study is mainly based on the second edition of Dae-Jang-Mock-Lock, but other closely related materials such as the restored first edition of the Dae-Jang-Mock-Lock, Koryo Sin-Jo-Dae-Jang-Byeol-Lock (고려신조대장교정별록), Kae-Won-Seok-Kyo-Lock (개원석교록), Sok-Kae-Won-Seok-Kyo-Lock (속개원석교록), Jeong-Won-Sin-Jeong-Seok-Kyo-Lock (정원신정석교록), Sok-Jeong-Won-Seok-Kyo-Lock (속정원석교록), Dae-Jung-Sang-Bu-Beob-Bo-Lock (대중상부법보록), and Kyeong-Woo-Sin-Su-Beob-Bo-Lock (경우신수법보록), are also analysed and closely examined. The results of this study can be summarized as follows: 1. The second edition of Tripitaka Koreana (고려대장경) was carved for the purpose of defending the country from Mongolia with the power of Buddhism, after the tablets of the first edition in Buin-sa (부이사) were destroyed by fire. 2. In 1236, Dae-Jang-Do-Gam (대장도감) was established, and the preparation for the recarving of the tablets, such as comparison between the contents of the first edition of Tripitaka Koreana, Gae-Bo-Chik-Pan-Dae-Jang-Kyeong and Kitan Dae-Jang-Kyeong, transcription of the original copy and the preparation of the wood, was started. 3. In 1237, after the announcement of Dae-Jang-Gyeong-Gak-Pan-Gun-Sin-Gi-Go-Mun (대장경핵판군신석고문), the carving was started on a full scale. Seven years later (1243), Bun-Sa-Dae-Jang-Do-Gam (분사대장도감) was established in the southern area to expand and hasten the work, and a large number of the tablets were carved there. 4. It took 16 years to carve the main text and the supplements of the second edition of Tripitaka Koreana, the main text being carved from 1237 to 1248 and the supplement from 1244 to 1251. 5. It can be supposed that the tablets of the second edition of Tripitaka Koreana, stored in Seon-Won-Sa (선원사), Kang-Wha (강화), for about 140 years, were moved to Ji-Cheon-Sa (지천사), Yong-San (용산), and then to Hae-In-Sa (해인사), through the west and south seas and Jang-Gyeong-Po (장경포), Go-Ryeong (고령), in the autumn of the same year. 6. The second edition of Tripitaka Koreana was carved mainly based on the first edition, compared with Gae-Bo-Chik-Pan-Dae-Jang-Kyeong (개보판대장경) and Kitan Dae-Jang-Kyeong (계단대장경). The second edition of Dae-Jang-Mock-Lock was also compiled mainly based on the first edition, with reference to Kae-Won-Seok-Kyo-Lock and Sok-Jeong-Won-Seok-Kyo-Lock. 7. Compared with the first edition of Dae-Jang-Mock-Lock, in the second edition 7 items of 9 volumes of Kitan text such as Weol-Deung-Sam-Mae-Gyeong-Ron (월증삼매경론) are added and 3 items of 60 volumes such as Dae-Jong-Ji-Hyeon-Mun-Ron (대종지현문논) are substituted by others from Cheon chest (천함) to Kaeng chest (경함), and 92 items of 601 volumes such as Beob-Won-Ju-Rim-Jeon (법원주임전) are added after Kaeng chest; 4 items of 50 volumes such as Yuk-Ja-Sin-Ju-Wang-Kyeong (육자신주왕경) are omitted in the second edition. 8. Compared with Kae-Won-Seok-Kyo-Lock, Cheon chest to Young chest (영함) of the second edition is compiled according to Ib-Jang-Lock (입장록) of Kae-Won-Seok-Kyo-Lock, but 15 items of 43 volumes such as Bul-Seol-Ban-Ju-Sam-Mae-Kyeong (불설반주삼매경) are added and 7 items of 35 volumes such as Dae-Bang-Deung-Dae-Jib-Il-Jang-Kyeong (대방등대집일장경) are omitted. 9. Compared with Sok-Jeong-Won-Seok-Kyo-Lock, 3 items of 47 volumes (or 49 volumes) are omitted and 4 items of 96 volumes are added in Caek chest (책함) to Mil chest (밀함) of the second edition, but the items are arranged in the same order. 10. Compared with Dae-Jung-Sang-Bo-Beob-Bo-Lock, the arrangement of the second edition is entirely different, but 170 items of 329 volumes are also included in Doo chest (두함) to Kyeong chest (경함) of the second edition, and 53 items of 125 volumes in Jun chest (존함) to Jeong chest (정함). In addition, 10 items of 108 volumes in the last part of Dae-Jung-Sang-Bo-Beob-Bo-Lock are omitted and 3 items of 131 volumes such as Beob-Won-Ju-Rim-Jeon (법원주임전) are added in the second edition. 11. Compared with Kyeong-Woo-Sin-Su-Beob-Bo-Lock, all of the items (21 items of 161 volumes) are included in the second edition without any classificatory system, and 22 items of 172 volumes in the Seong-Hyeon-Jib-Jeon (성현집전) part such as Myo-Gak-Bi-Cheon (묘각비전) are omitted. 12. The last part of the second edition, Joo chest (주함) to Dong chest (동함), includes 14 items of 237 volumes. These items cannot be found in any other former Buddhist catalog, so they may be supposed to be Kitan texts. 13. Besides including almost all items in Kae-Won-Seok-Kyo-Lock and all items in Sok-Jeong-Won-Seok-Kyo-Lock, Dae-Jung-Sang-Bo-Beob-Bo-Lock, and Kyeong-Woo-Sin-Su-Beob-Bo-Lock, the second edition of Dae-Jang-Mock-Lock includes more items: at least 20 items of about 300 volumes of the Kitan Tripitaka and 15 items of 43 volumes of the traditional Korean Tripitaka that cannot be found in any others. Therefore, Tripitaka Koreana can be regarded as a comprehensive Tripitaka covering all items of the Tripitakas translated into Chinese characters.


Effects of firm strategies on customer acquisition of Software as a Service (SaaS) providers: A mediating and moderating role of SaaS technology maturity (SaaS 기업의 차별화 및 가격전략이 고객획득성과에 미치는 영향: SaaS 기술성숙도 수준의 매개효과 및 조절효과를 중심으로)

  • Chae, SeongWook;Park, Sungbum
    • Journal of Intelligence and Information Systems
    • /
    • v.20 no.3
    • /
    • pp.151-171
    • /
    • 2014
  • Firms today have sought management effectiveness and efficiency by utilizing information technologies (IT). Numerous firms are outsourcing specific information systems functions to cope with their shortage of information resources or IT experts, or to reduce their capital costs. Recently, Software-as-a-Service (SaaS), a new type of information system, has become one of the most powerful outsourcing alternatives. SaaS is software deployed as a hosted service and accessed over the Internet. It embodies the ideas of on-demand, pay-per-use, and utility computing and is now being applied to support the core competencies of clients in areas ranging from individual productivity to vertical industry and e-commerce. In this study, therefore, we seek to quantify the value that SaaS has for business performance by examining the relationships among firm strategies, SaaS technology maturity, and the business performance of SaaS providers. We begin by drawing from prior literature on SaaS, technology maturity and firm strategy. SaaS technology maturity is classified into three phases: application service providing (ASP), Web-native application, and Web-service application. Firm strategies are operationalized as the low-cost strategy and the differentiation strategy. Finally, we considered customer acquisition as the measure of business performance. The specific objectives of this study are as follows. First, we examine the relationships between customer acquisition performance and both the low-cost strategy and the differentiation strategy of SaaS providers. Second, we investigate the mediating and moderating effects of SaaS technology maturity on those relationships. For this purpose, the study collected data from SaaS providers and their lines of applications registered in the database of CNK (Commerce net Korea) in Korea, using a questionnaire administered by a professional research institution. The unit of analysis in this study is the SBU (strategic business unit) within the software provider. A total of 199 SBUs were used for analyzing and testing our hypotheses. With regard to the measurement of firm strategy, we take three measurement items for the differentiation strategy, namely application uniqueness (whether an application aims to differentiate within just one or a small number of target industries), supply channel diversification (whether the SaaS vendor has diversified its supply chain), and the number of specialized experts, and two items for the low-cost strategy, namely subscription fee and initial set-up fee. We employ a hierarchical regression analysis technique for testing the moderation effects of SaaS technology maturity and follow Baron and Kenny's procedure for determining whether firm strategies affect customer acquisition through technology maturity. Empirical results revealed, firstly, that when the differentiation strategy is applied to attain business performance such as customer acquisition, the effect of the strategy is moderated by the technology maturity level of SaaS providers. In other words, securing a higher level of SaaS technology maturity is essential for higher business performance. For instance, given that firms implement application uniqueness or distribution channel diversification as a differentiation strategy, they can acquire more customers when their level of SaaS technology maturity is higher rather than lower.
Secondly, the results indicate that pursuing a differentiation strategy or a low-cost strategy works effectively for SaaS providers in obtaining customers: continuously differentiating their service from others or keeping their service fees (subscription fee or initial set-up fee) low is helpful for their business success in terms of acquiring customers. Lastly, the results show that the level of SaaS technology maturity mediates the relationship between the low-cost strategy and customer acquisition. That is, based on our research design, customers usually perceive the real value of a low subscription fee or initial set-up fee only through the SaaS service provided by the vendor, and this, in turn, affects their decision on whether or not to subscribe.
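As a rough illustration of the two analyses named in this abstract, the sketch below runs a hierarchical regression with interaction terms (moderation by SaaS technology maturity) and the Baron and Kenny mediation steps using statsmodels; all variable names and the data file are hypothetical, not taken from the paper.

```python
# Sketch: moderation via hierarchical regression and mediation via the
# Baron & Kenny (1986) steps. `df` is assumed to hold one row per SBU
# with hypothetical column names.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("saas_sbu.csv")   # hypothetical file: 199 SBUs

# --- Moderation: hierarchical regression ----------------------------------
step1 = smf.ols("customer_acquisition ~ differentiation + low_cost", data=df).fit()
step2 = smf.ols("customer_acquisition ~ differentiation + low_cost + maturity", data=df).fit()
step3 = smf.ols(
    "customer_acquisition ~ differentiation * maturity + low_cost * maturity", data=df
).fit()
print(step3.rsquared - step2.rsquared)   # R^2 change attributable to the interactions

# --- Mediation: Baron & Kenny procedure ------------------------------------
a  = smf.ols("maturity ~ low_cost", data=df).fit()                          # IV -> mediator
c  = smf.ols("customer_acquisition ~ low_cost", data=df).fit()              # IV -> DV
cp = smf.ols("customer_acquisition ~ low_cost + maturity", data=df).fit()   # IV + mediator -> DV
# Mediation is suggested when the paths in `a` and `c` are significant and
# the low_cost coefficient shrinks (partial) or loses significance (full) in `cp`.
print(a.pvalues["low_cost"], c.pvalues["low_cost"], cp.params["low_cost"])
```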

A Study on Developing a VKOSPI Forecasting Model via GARCH Class Models for Intelligent Volatility Trading Systems (지능형 변동성트레이딩시스템개발을 위한 GARCH 모형을 통한 VKOSPI 예측모형 개발에 관한 연구)

  • Kim, Sun-Woong
    • Journal of Intelligence and Information Systems
    • /
    • v.16 no.2
    • /
    • pp.19-32
    • /
    • 2010
  • Volatility plays a central role in both academic and practical applications, especially in pricing financial derivative products and trading volatility strategies. This study presents a novel mechanism based on generalized autoregressive conditional heteroskedasticity (GARCH) models that is able to enhance the performance of intelligent volatility trading systems by predicting Korean stock market volatility more accurately. In particular, we embedded the concept of volatility asymmetry, documented widely in the literature, into our model. The newly developed Korean stock market volatility index of the KOSPI 200, the VKOSPI, is used as a volatility proxy. It is the price of a linear portfolio of KOSPI 200 index options and measures the effect of the expectations of dealers and option traders on stock market volatility over 30 calendar days. The KOSPI 200 index options market started in 1997 and has become the most actively traded market in the world. Its trading volume is more than 10 million contracts a day, the highest of all stock index option markets. Therefore, analyzing the VKOSPI has great importance in understanding the volatility inherent in option prices and can afford some trading ideas for futures and option dealers. Use of the VKOSPI as a volatility proxy avoids the statistical estimation problems associated with other measures of volatility, since the VKOSPI is the model-free expected volatility of market participants calculated directly from transacted option prices. This study estimates the symmetric and asymmetric GARCH models for the KOSPI 200 index from January 2003 to December 2006 by the maximum likelihood procedure. The asymmetric GARCH models include the GJR-GARCH model of Glosten, Jagannathan and Runkle, the exponential GARCH model of Nelson, and the power autoregressive conditional heteroskedasticity (ARCH) model of Ding, Granger and Engle. The symmetric GARCH model is the basic GARCH(1,1). Tomorrow's forecasted value and change direction of stock market volatility are obtained by recursive GARCH specifications from January 2007 to December 2009 and are compared with the VKOSPI. Empirical results indicate that negative unanticipated returns increase volatility more than positive return shocks of equal magnitude decrease it, indicating the existence of volatility asymmetry in the Korean stock market. The point value and change direction of tomorrow's VKOSPI are estimated and forecasted by the GARCH models. A volatility trading system is developed using the forecasted change direction of the VKOSPI: if tomorrow's VKOSPI is expected to rise, a long straddle or strangle position is established; a short straddle or strangle position is taken if the VKOSPI is expected to fall tomorrow. Total profit is calculated as the cumulative sum of the VKOSPI percentage changes. If the forecasted direction is correct, the absolute value of the VKOSPI percentage change is added to the trading profit; it is subtracted from the trading profit if the forecasted direction is not correct. For the in-sample period, the power ARCH model fits best on a statistical metric, Mean Squared Prediction Error (MSPE), and the exponential GARCH model shows the highest Mean Correct Prediction (MCP). The power ARCH model also fits best for the out-of-sample period and provides the highest probability of predicting the VKOSPI change direction for tomorrow. Generally, the power ARCH model shows the best fit for the VKOSPI.
All the GARCH models provide trading profits for the volatility trading system, and the exponential GARCH model shows the best performance, an annual profit of 197.56%, during the in-sample period. The GARCH models present trading profits during the out-of-sample period except for the exponential GARCH model. During the out-of-sample period, the power ARCH model shows the largest annual trading profit of 38%. The volatility clustering and asymmetry found in this research are a reflection of volatility non-linearity. This further suggests that combining the asymmetric GARCH models and artificial neural networks can significantly enhance the performance of the suggested volatility trading system, since artificial neural networks have been shown to effectively model nonlinear relationships.
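The GARCH estimation and one-step-ahead forecasting step described in this abstract could look roughly like the sketch below, using the third-party arch package; the data file, return construction, and model set are assumptions for illustration and do not reproduce the paper's exact specifications (for instance, the power ARCH variant is omitted).

```python
# Sketch: fit symmetric and asymmetric GARCH models by maximum likelihood
# and produce one-step-ahead volatility forecasts. "kospi200.csv" and the
# column name are hypothetical.
import numpy as np
import pandas as pd
from arch import arch_model

prices = pd.read_csv("kospi200.csv", index_col=0, parse_dates=True)["close"]
returns = 100 * np.log(prices).diff().dropna()

specs = {
    "GARCH(1,1)": dict(vol="GARCH", p=1, q=1),
    "GJR-GARCH":  dict(vol="GARCH", p=1, o=1, q=1),   # asymmetric (leverage) term
    "EGARCH":     dict(vol="EGARCH", p=1, o=1, q=1),
}

forecasts = {}
for name, kw in specs.items():
    res = arch_model(returns, mean="Constant", **kw).fit(disp="off")  # MLE
    f = res.forecast(horizon=1)
    forecasts[name] = np.sqrt(f.variance.values[-1, 0])  # tomorrow's volatility forecast
print(forecasts)

# A direction signal could then be derived by comparing the forecast with
# today's volatility proxy (e.g., the VKOSPI): rise -> long straddle/strangle,
# fall -> short straddle/strangle.
```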

Calculation of Unit Hydrograph from Discharge Curve, Determination of Sluice Dimension and Tidal Computation for Determination of the Closure curve (단위유량도와 비수갑문 단면 및 방조제 축조곡선 결정을 위한 조속계산)

  • 최귀열
    • Magazine of the Korean Society of Agricultural Engineers
    • /
    • v.7 no.1
    • /
    • pp.861-876
    • /
    • 1965
  • During my stay in the Netherlands I studied the following, primarily in relation to the Mokpo Yong-san project which had been studied by NEDECO for a feasibility report. 1. Unit hydrograph at Naju. There are many ways to make a unit hydrograph, but I want to explain here how to make a unit hydrograph from the actual runoff curve at Naju. A discharge curve made from one rain storm depends on the rainfall intensity per hour. After finding the hydrograph every two hours, we obtain the two-hour unit hydrograph by dividing each ordinate of the two-hour hydrograph by the rainfall intensity. I used one storm from June 24 to June 26, 1963, recording a rainfall intensity of 9.4 mm per hour on average for 12 hours. If several rain gage stations had already been established in the catchment area above Naju prior to this storm, I could have gathered accurate data on rainfall intensity throughout the catchment area. As it was, I used the automatic rain gage record of the Mokpo meteorological station to determine the rainfall intensity. In order to develop the unit hydrograph at Naju, I subtracted the base flow from the total runoff flow. I also tried to keep the difference between the calculated discharge amount and the measured discharge less than 10%. The discharge period of a unit graph depends on the length of the catchment area. 2. Determination of sluice dimension. According to the principles of design presently used in our country, a one-day storm with a frequency of 20 years must be discharged in 8 hours. These design criteria are not adequate, and several dams have washed out in past years. The design of the spillway and sluice dimensions must be based on the maximum peak discharge flowing into the reservoir, to avoid crop and structure damage. The total flow into the reservoir is the summation of the flow described by the Mokpo hydrograph, the base flow from all the catchment areas and the rainfall on the reservoir area. To calculate the amount of water discharged through the sluice (per half hour), the average head during that interval must be known. This can be calculated from the known water level outside the sluice (determined by the tide) and from an estimated water level inside the reservoir at the end of each time interval. The total amount of water discharged through the sluice can be calculated from this average head, the time interval and the cross-sectional area of the sluice. From the inflow into the reservoir and the outflow through the sluice gates I calculated the change in the volume of water stored in the reservoir at half-hour intervals. From the stored volume of water and the known storage capacity of the reservoir, I was able to calculate the water level in the reservoir. The calculated water level in the reservoir must be the same as the estimated water level. The mean tide is adequate to use for determining the sluice dimension, because the spring tide is the worst case and the neap tide is the best condition for the result of the calculation. 3. Tidal computation for determination of the closure curve. During the construction of a dam, whether by building up a succession of horizontal layers or by building in from both sides, the velocity of the water flowing through the closing gap will increase because of the gradual decrease in the cross-sectional area of the gap. I calculated the
velocities in the closing gap during flood and ebb for the first-mentioned method of construction until the cross-sectional area had been reduced to about 25% of the original area, the change in tidal movement within the reservoir being negligible. Up to that point, the increase of the velocity is more or less hyperbolic. During the closing of the last 25% of the gap, less water can flow out of the reservoir. This causes a rise of the mean water level of the reservoir. The difference in hydraulic head is then no longer negligible and must be taken into account. When, during the course of construction, the submerged weir becomes a free weir, critical flow occurs. The critical flow is the point, during either ebb or flood, at which the velocity reaches a maximum. When the dam is raised further, the velocity decreases because of the decrease in the height of the water above the weir. The calculation of the currents and velocities for a stage in the closure of the final gap is done in the following manner: using an average tide with a negligible daily inequality, I estimated the water level on the upstream side of the dam (inner water level). I determined the current through the gap for each hour by multiplying the storage area by the increment of the rise in water level. The velocity at a given moment can be determined from the calculated current in m³/sec and the cross-sectional area at that moment. At the same time, from the difference between the inner water level and the tidal level (outer water level), the velocity can be calculated with the formula $h= \frac{V^2}{2g}$, and it must be equal to the velocity determined from the current. If there is a difference in velocity, a new estimate of the inner water level must be made and the entire procedure repeated. When the higher water level is equal to or more than 2/3 times the difference between the lower water level and the crest of the dam, we speak of a "free weir": the flow over the weir is then dependent upon the higher water level and not on the difference between the high and low water levels. When the weir is "submerged", that is, when the higher water level is less than 2/3 times the difference between the lower water level and the crest of the dam, the difference between the high and low levels is decisive. The free weir normally occurs first during ebb, due to the fact that the mean level in the estuary is higher than the mean level of the tide. In building dams with barges the maximum velocity in the closing gap may not be more than 3 m/sec. As the maximum velocities are higher than this limit, we must use other construction methods in closing the gap. This can be done by dump-cars from each side or by using a cable way.
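A compact sketch of the two calculations outlined above: deriving a unit hydrograph by subtracting base flow and dividing by the effective rainfall, and the iterative check that the gap velocity obtained from the reservoir storage balance matches the velocity implied by $h = \frac{V^2}{2g}$. All numeric values are placeholders, not figures from the report.

```python
# Sketch of the hand calculations described above. Values are illustrative.
import numpy as np

G = 9.81  # gravitational acceleration, m/s^2

def unit_hydrograph(total_flow_m3s, base_flow_m3s, effective_rain_mm):
    """Ordinates of the unit hydrograph (discharge per mm of effective rain)."""
    direct_runoff = np.asarray(total_flow_m3s) - np.asarray(base_flow_m3s)
    return direct_runoff / effective_rain_mm

def gap_velocity(storage_area_m2, rise_m, dt_s, gap_area_m2):
    """Velocity through the closing gap from the reservoir storage balance."""
    discharge = storage_area_m2 * rise_m / dt_s      # volume stored per interval, m^3/s
    return discharge / gap_area_m2                   # m/s through the gap

def head_velocity(outer_level_m, inner_level_m):
    """Velocity implied by the head difference, h = V^2 / (2g)."""
    return np.sqrt(2 * G * abs(outer_level_m - inner_level_m))

# Bisection on the estimated inner water level until both velocities agree.
start_level = 1.0                 # inner level at the start of the interval, m (assumed)
outer = 2.0                       # tidal (outer) level at the end of the interval, m (assumed)
low, high = start_level, outer    # bounds for the end-of-interval inner level
for _ in range(60):
    inner = (low + high) / 2
    v_storage = gap_velocity(5e6, inner - start_level, 3600, 800)
    v_head = head_velocity(outer, inner)
    if abs(v_storage - v_head) < 0.01:
        break
    if v_storage < v_head:
        low = inner               # raise the estimated inner level
    else:
        high = inner
print(round(inner, 3), round(v_head, 2))
```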


A Mobile Landmarks Guide : Outdoor Augmented Reality based on LOD and Contextual Device (모바일 랜드마크 가이드 : LOD와 문맥적 장치 기반의 실외 증강현실)

  • Zhao, Bi-Cheng;Rosli, Ahmad Nurzid;Jang, Chol-Hee;Lee, Kee-Sung;Jo, Geun-Sik
    • Journal of Intelligence and Information Systems
    • /
    • v.18 no.1
    • /
    • pp.1-21
    • /
    • 2012
  • In recent years, the mobile phone has experienced an extremely fast evolution. It is equipped with high-quality color displays, high-resolution cameras, and real-time accelerated 3D graphics; other features include a GPS sensor and a digital compass. This evolution significantly helps application developers use the power of smart-phones to create a rich environment that offers a wide range of services and exciting possibilities. In outdoor mobile AR research to date there are many popular location-based AR services, such as Layar and Wikitude. These systems have a big limitation: the AR contents are hardly overlaid accurately on the real target. Another line of research is context-based AR services using image recognition and tracking, where the AR contents are precisely overlaid on the real target, but the real-time performance is restricted by the retrieval time and is hard to achieve in a large-scale area. In our work, we combine the advantages of location-based AR with those of context-based AR. The system can easily find surrounding landmarks first and then perform recognition and tracking with them. The proposed system mainly consists of two major parts: a landmark browsing module and an annotation module. In the landmark browsing module, users can view augmented virtual information (information media), such as text, pictures and video, on their smart-phone viewfinder when they point their smart-phone at a certain building or landmark. For this, a landmark recognition technique is applied in this work. SURF point-based features are used in the matching process due to their robustness. To ensure the image retrieval and matching processes are fast enough for real-time tracking, we exploit the contextual device information (GPS and digital compass). This is used to select the nearest landmarks in the pointed orientation from the database; the queried image is only matched with this selected data, so the matching speed is significantly increased. The second part is the annotation module. Instead of viewing only the augmented information media, users can create virtual annotations based on linked data. Full knowledge about the landmark is not required; users can simply look for the appropriate topic by searching with a keyword in linked data. This helps the system find the target URI in order to generate correct AR contents. On the other hand, in order to recognize target landmarks, images of the selected building or landmark are captured from different angles and distances. This procedure resembles building a connection between the real building and the virtual information existing in the Linked Open Data. In our experiments, the search range in the database is reduced by clustering images into groups according to their coordinates. A grid-based clustering method and user location information are used to restrict the retrieval range. Compared with existing research using clustering and GPS information, where the retrieval time is around 70~80 ms, experimental results show that with our approach the retrieval time is reduced to around 18~20 ms on average. Therefore the total processing time is reduced from 490~540 ms to 438~480 ms. The performance improvement will be more obvious as the database grows. This demonstrates that the proposed system is efficient and robust in many cases.
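The contextual pre-filtering step described in this abstract (selecting only nearby landmarks in the pointed direction before any SURF matching) could be sketched as follows; the distance and angle thresholds and the landmark data structure are illustrative assumptions, not values from the paper.

```python
# Sketch: narrow candidate landmarks by GPS distance and compass heading
# before image matching. Thresholds and the landmark dicts are hypothetical.
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two GPS fixes."""
    r = 6371000.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def bearing_deg(lat1, lon1, lat2, lon2):
    """Initial bearing from the user to a landmark, 0-360 degrees."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dl = math.radians(lon2 - lon1)
    y = math.sin(dl) * math.cos(p2)
    x = math.cos(p1) * math.sin(p2) - math.sin(p1) * math.cos(p2) * math.cos(dl)
    return (math.degrees(math.atan2(y, x)) + 360) % 360

def candidate_landmarks(user, heading, landmarks, max_dist=300, max_angle=30):
    """Landmarks within max_dist meters and max_angle degrees of the heading."""
    out = []
    for lm in landmarks:  # lm = {"name": ..., "lat": ..., "lon": ...}
        d = haversine_m(user[0], user[1], lm["lat"], lm["lon"])
        b = bearing_deg(user[0], user[1], lm["lat"], lm["lon"])
        angle_off = abs((b - heading + 180) % 360 - 180)
        if d <= max_dist and angle_off <= max_angle:
            out.append(lm)
    return out
# Only the stored images of these candidates would then go through SURF matching.
```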

NFC-based Smartwork Service Model Design (NFC 기반의 스마트워크 서비스 모델 설계)

  • Park, Arum;Kang, Min Su;Jun, Jungho;Lee, Kyoung Jun
    • Journal of Intelligence and Information Systems
    • /
    • v.19 no.2
    • /
    • pp.157-175
    • /
    • 2013
  • Since the Korean government announced its 'Smartwork promotion strategy' in 2010, Korean firms and government organizations have started to adopt smartwork. However, smartwork has been implemented only in a few large enterprises and government organizations rather than in SMEs (small and medium enterprises). In the USA, both Yahoo! and Best Buy have stopped their flexible work programs because of reported low productivity and job loafing problems. In addition, from the literature on smartwork, we could identify obstacles to smartwork adoption and categorize them into three types: institutional, organizational, and technological. The first category, institutional, includes the difficulty of smartwork performance evaluation metrics, the lack of readiness of organizational processes, limitations of smartwork types and models, lack of employee participation in the smartwork adoption procedure, the high cost of building a smartwork system, and insufficient government support. The second category, organizational, includes limitations of the organizational hierarchy, wrong perceptions of employees and employers, difficulty in close collaboration, low productivity with remote coworkers, insufficient understanding of remote working, and lack of training about smartwork. The third category, technological, includes security concerns of mobile work, lack of specialized solutions, and lack of adoption and operation know-how. To overcome the current problems of smartwork in practice and the obstacles reported in the literature, we suggest a novel smartwork service model based on NFC (Near Field Communication). This paper suggests an NFC-based Smartwork Service Model composed of an NFC-based smartworker networking service and an NFC-based smartwork space management service. The NFC-based smartworker networking service comprises an NFC-based communication/SNS service and an NFC-based recruiting/job-seeking service. The NFC-based communication/SNS service model supplements the key shortcomings of existing smartwork service models. By connecting to the existing legacy system of a company through NFC tags and systems, the low productivity and the difficulties of collaboration and attendance management can be overcome, since managers can get work processing information, work time information and work space information of employees, and employees can communicate in real time with coworkers and get location information of coworkers. In short, this service model offers affordable system cost, provision of location-based information, and the possibility of knowledge accumulation. The NFC-based recruiting/job-seeking service provides new value by linking the NFC tag service with sharing economy sites. This service model offers ease of service attachment and removal, efficient space-based work provision, easy search of location-based recruiting/job-seeking information, and system flexibility. It combines the advantages of sharing economy sites with the advantages of NFC. By cooperating with sharing economy sites, the model can provide recruiters with human resources seeking not only long-term work but also short-term work. Additionally, SMEs (small and medium-sized enterprises) can easily find job seekers by attaching NFC tags to any space at which qualified human resources may be located. In short, this service model supports efficient human resource distribution by providing the locations of job hunters and job applicants.
The NFC-based smartwork space management service can promote smartwork by linking NFC tags attached to the work space with the existing smartwork system. This service offers low cost, provision of indoor and outdoor location information, and customized service. In particular, this model can help small companies adopt a smartwork system because it is a light-weight and cost-effective system compared to existing smartwork systems. This paper proposes the scenarios of the service models, the roles and incentives of the participants, and a comparative analysis. The superiority of the NFC-based smartwork service model is shown by comparing and analyzing the new service models and the existing service models. The service model can expand the scope of enterprises and organizations that adopt smartwork and expand the scope of employees who take advantage of smartwork.

The Recovery of Left Ventricular Function after Coronary Artery Bypass Grafting in Patients with Severe Ischemic Left Ventricular Dysfunction: Off-pump Versus On-pump (심한 허혈성 좌심실 기능부전 환자에서 관상동맥우회술시 체외순환 여부에 따른 좌심실 기능 회복력 비교)

  • Kim Jae Hyun;Kim Gun Gyk;Baek Man Jong;Oh Sam Sae;Kim Chong Whan;Na Chan-Young
    • Journal of Chest Surgery
    • /
    • v.38 no.2 s.247
    • /
    • pp.116-122
    • /
    • 2005
  • Background: Adverse effects of cardiopulmonary bypass can be avoided by 'off-pump' coronary artery bypass (OPCAB) surgery. Recent studies have reported that OPCAB had the most beneficial impact on patients at highest risk by reducing bypass-related complications. The purpose of this study is to compare the outcomes of OPCAB and conventional coronary artery bypass grafting (CCAB) in patients with poor left ventricular (LV) function. Material and Method: From March 1997 to February 2004, seventy-five patients with a left ventricular ejection fraction (LVEF) of 35% or less underwent isolated coronary artery bypass grafting at our institute. Of these patients, 33 underwent OPCAB and 42 underwent CCAB. Preoperative risk factors and operative and postoperative outcomes, including LV functional change, were compared and analysed. Result: Patients undergoing CCAB were more likely to have unstable angina, three-vessel disease and acute myocardial infarction among the preoperative factors. The OPCAB group had a significantly lower mean operation time, fewer total distal anastomoses per patient and fewer distal anastomoses per patient in the circumflex territory than the CCAB group. There was no difference between the groups in regard to in-hospital mortality (OPCAB 9.1% (n=3) vs. CCAB 9.5% (n=4)), intubation time, length of stay in the intensive care unit or length of postoperative hospital stay. Postoperative complications occurred more often in the CCAB group, but the difference was not statistically significant. On follow-up echocardiography, the OPCAB group showed a 9.1% improvement in mean LVEF, a 4.3 mm decrease in mean left ventricular end-diastolic dimension (LVEDD) and a 4.2 mm decrease in mean left ventricular end-systolic dimension (LVESD). The CCAB group showed an 11.0% improvement in mean LVEF, a 5.1 mm decrease in mean LVEDD and a 5.5 mm decrease in mean LVESD, but there was no statistically significant difference between the two groups. Conclusion: This study showed that LV function improves postoperatively in patients with severe ischemic LV dysfunction, but failed to show any difference in the degree of improvement between OPCAB and CCAB. In terms of operative mortality rate and LV functional recovery, the results of OPCAB were as good as those of CCAB in patients with poor LV function. However, the OPCAB procedure was advantageous in shortening the operative time and in decreasing complications. We recommend OPCAB as the first surgical option for patients with severe LV dysfunction.

The Recent Outcomes after Repair of Tetralogy of Fallot Associated with Pulmonary Atresia and Major Aortopulmonary Collateral Arteries (폐동맥폐쇄와 주대동맥폐동맥부행혈관을 동반한 활로씨사징증 교정의 최근 결과)

  • Kim Jin-Hyun;Kim Woong-Han;Kim Dong-Jung;Jung Eui-Suk;Jeon Jae-Hyun;Min Sun-Kyung;Hong Jang-Mee;Lee Jeong-Ryul;Rho Joon-Ryuang;Kim Yong-Jin
    • Journal of Chest Surgery
    • /
    • v.39 no.4 s.261
    • /
    • pp.269-274
    • /
    • 2006
  • Background: Tetralogy of Fallot (TOF) with pulmonary atresia and major aortopulmonary collateral arteries (MAPCAs) is a complex lesion with marked heterogeneity of pulmonary blood supply and arborization anomalies. Patients with TOF with PA and MAPCAs have traditionally required multiple staged unifocalizations of the pulmonary blood supply before undergoing complete repair. In this report, we describe the recent change of strategy and the results at our institution. Material and Method: We established the following surgical strategies: early correction, central mediastinal approach, initial RV-PA conduit interposition, and aggressive intervention. Between July 1998 and August 2004, 23 patients were surgically treated at our institution. We divided them into 3 groups by initial operation method; group I: one-stage total correction, group II: RV-PA conduit and unifocalization, group III: RV-PA conduit interposition only. Result: Mean ages at initial operation in each group were 13.9±16.0 months (group I), 10.4±15.6 months (group II), and 7.9±7.7 months (group III). True pulmonary arteries were not present in 1 patient and the pulmonary arteries were confluent in 22 patients. Balloon angioplasty was done 1.3 times on average (range: 1~6). There were 4 early deaths related to the initial operation, and 1 late death due to intracranial hemorrhage after definitive repair. The operative mortalities of the initial procedures in each group were 25.0% (1/4: group I), 20.0% (2/10: group II), and 12.2% (1/9: group III). The causes of operative mortality were hypoxia (2), low cardiac output (1) and sudden cardiac arrest (1). Definitive repair rates in each group were 75% (3/4) in group I, 20% (2/10, fenestration: 2) in group II, and 55.0% (5/9, fenestration: 1) in group III. Conclusion: In patients with TOF with PA and MAPCAs, RV-PA connection as an initial procedure could be performed with relatively low risk, and a high rate of definitive repair can be obtained with the help of balloon pulmonary angioplasty. One-stage RV-PA connection and unifocalization appeared to be successful in selected patients.

Long Term Results of Bronchial Sleeve Resection for Primary Lung Cancer (원발성 폐암 환자에서의 기관지 소매 절제술의 장기 성적)

  • Cho, Suk-Ki;Sung, Ki-Ick;Lee, Cheul;Lee, Jae-Ik;Kim, Joo-Hyun;Kim, Young-Tae;Sung, Sook-Whan
    • Journal of Chest Surgery
    • /
    • v.34 no.12
    • /
    • pp.917-923
    • /
    • 2001
  • Background: Bronchial sleeve resection for centrally located primary lung cancer is a lung-parenchyma-sparing operation for patients whose predicted postoperative lung function is expected to be markedly diminished. Because of its potential bronchial anastomotic complications, it has been considered an alternative to pneumonectomy. However, since sleeve lobectomy has yielded survival results at least equal to those of pneumonectomy, as well as better functional results, it became an accepted standard procedure for patients with lung cancer who have anatomically suitable tumors, regardless of lung function. In this study, by analyzing the occurrence of postoperative complications and the survival rate, we investigate the validity of sleeve resection for primary lung cancer. Material and Method: From January 1989 to December 1998, 45 bronchial sleeve resections were carried out in the Department of Thoracic Surgery of Seoul National University Hospital. We included 40 men and 5 women, whose ages ranged from 23 to 72 years with a mean age of 57 years. The histologic type was squamous cell carcinoma in 35 patients, adenocarcinoma in 7, and adenosquamous cell carcinoma in 1 patient. Right upper lobectomy was performed in 24 patients, left upper lobectomy in 11, left lower lobectomy in 3, right lower lobectomy in 1, right middle and right lower lobectomy in 3, right upper and right middle lobectomy in 2, and left pneumonectomy in 1 patient. Postoperative stage was Ib in 11, IIa in 3, IIb in 16, IIIa in 13, and IIIb in 2 patients. Result: Postoperative complications were as follows: atelectasis in 9 patients, persistent air leakage for more than 7 days in 7, prolonged pleural effusion for more than 2 weeks in 7, pneumonia in 2, chylothorax in 1, and disruption of the anastomosis in 1. Hospital mortality occurred in 3 patients. During the follow-up period, bronchial stricture at the anastomotic site was found in 7 patients under bronchoscopy. The average follow-up duration of survivors (n=42) was 35.5±29 months. All stage I patients survived, and the 3-year survival rates of stage II and III patients were 63% and 21%, respectively. According to N stage, all N0 patients survived and the 3-year survival rates of N1 and N2 were 63% and 28%, respectively. Conclusion: We suggest that sleeve resection, although technically demanding, should be considered in patients with centrally located lung cancer, because this lung-saving operation is safer than pneumonectomy and is equally curative.
