• Title/Abstract/Keyword: Partial Key

407 search results

DEVELOPMENT OF STATEWIDE TRUCK TRAFFIC FORECASTING METHOD BY USING LIMITED O-D SURVEY DATA (한정된 O-D조사자료를 이용한 주 전체의 트럭교통예측방법 개발)

  • 박만배
    • Proceedings of the KOR-KST Conference
    • /
    • 27th Conference of the Korean Society of Transportation, 1995
    • /
    • pp.101-113
    • /
    • 1995
  • The objective of this research is to test the feasibility of developing a statewide truck traffic forecasting methodology for Wisconsin by using Origin-Destination surveys, traffic counts, classification counts, and other data that are routinely collected by the Wisconsin Department of Transportation (WisDOT). Development of a feasible model will permit estimation of future truck traffic for every major link in the network. This will provide the basis for improved estimation of future pavement deterioration. Pavement damage rises exponentially as axle weight increases, and trucks are responsible for most of the traffic-induced damage to pavement. Consequently, forecasts of truck traffic are critical to pavement management systems. The Pavement Management Decision Supporting System (PMDSS) prepared by WisDOT in May 1990 combines pavement inventory and performance data with a knowledge base consisting of rules for evaluation, problem identification, and rehabilitation recommendation. Without a reasonable truck traffic forecasting methodology, PMDSS is not able to project pavement performance trends in order to make assessments and recommendations for future years. However, none of WisDOT's existing forecasting methodologies has been designed specifically for predicting truck movements on a statewide highway network. For this research, the Origin-Destination survey data available from WisDOT, including two stateline areas, one county, and five cities, are analyzed and the zone-to-zone truck trip tables are developed. The resulting Origin-Destination Trip Length Frequency (OD TLF) distributions by trip type are applied to the Gravity Model (GM) for comparison with comparable TLFs from the GM. The gravity model is calibrated to obtain friction factor curves for the three trip types: Internal-Internal (I-I), Internal-External (I-E), and External-External (E-E). Both "macro-scale" calibration and "micro-scale" calibration are performed. 
The comparison of the statewide GM TLF with the OD TLF for the macro-scale calibration does not provide suitable results because the available OD survey data do not represent an unbiased sample of statewide truck trips. For the "micro-scale" calibration, "partial" GM trip tables that correspond to the OD survey trip tables are extracted from the full statewide GM trip table. These "partial" GM trip tables are then merged and a partial GM TLF is created. The GM friction factor curves are adjusted until the partial GM TLF matches the OD TLF. Three friction factor curves, one for each trip type, resulting from the micro-scale calibration produce a reasonable GM truck trip model. A key methodological issue for GM calibration involves the use of multiple friction factor curves versus a single friction factor curve for each trip type in order to estimate truck trips with reasonable accuracy. A single friction factor curve for each of the three trip types was found to reproduce the OD TLFs from the calibration database. Given the very limited trip generation data available for this research, additional refinement of the gravity model using multiple friction factor curves for each trip type was not warranted. In traditional urban transportation planning studies, the zonal trip productions and attractions and region-wide OD TLFs are available. However, for this research, the information available for the development of the GM model is limited to Ground Counts (GC) and a limited set of OD TLFs. The GM is calibrated using the limited OD data, but the OD data are not adequate to obtain good estimates of truck trip productions and attractions. Consequently, zonal productions and attractions are estimated using zonal population as a first approximation. Then, Selected Link based (SELINK) analyses are used to adjust the productions and attractions and possibly recalibrate the GM. 
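The production-constrained gravity model being calibrated here can be sketched in a few lines. The zone sizes, the exponential friction-factor curve, and every number below are illustrative assumptions for exposition, not WisDOT's actual zones or calibrated curves:

```python
import numpy as np

def gravity_trips(P, A, F):
    """Production-constrained gravity model: T_ij = P_i * A_j * F_ij / sum_k(A_k * F_ik).

    P: productions per zone, A: attractions per zone,
    F: friction factors per zone pair (from a calibrated friction-factor curve).
    """
    denom = (A * F).sum(axis=1, keepdims=True)  # per-origin normalizer
    return P[:, None] * (A * F) / denom

# Hypothetical 3-zone example with an exponential friction-factor curve.
dist = np.array([[1.0, 2.0, 3.0],
                 [2.0, 1.0, 2.0],
                 [3.0, 2.0, 1.0]])
beta = 0.5
F = np.exp(-beta * dist)            # friction falls with travel distance
P = np.array([100.0, 200.0, 150.0])  # zonal productions (assumed)
A = np.array([120.0, 180.0, 150.0])  # zonal attractions (assumed)

T = gravity_trips(P, A, F)
print(T.sum(axis=1))  # rows sum back to the productions: [100. 200. 150.]
```

Calibration in the study amounts to adjusting the friction-factor curve (here the single parameter `beta`) until the trip length frequency distribution of `T` matches the observed OD TLF for each trip type.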
The SELINK adjustment process involves identifying the origins and destinations of all truck trips that are assigned to a specified "selected link" as the result of a standard traffic assignment. A link adjustment factor is computed as the ratio of the actual volume for the link (ground count) to the total assigned volume. This link adjustment factor is then applied to all of the origin and destination zones of the trips using that "selected link". Selected link based analyses are conducted using both 16 selected links and 32 selected links. The SELINK analysis using 32 selected links provides the least %RMSE in the screenline volume analysis. In addition, the stability of the GM truck estimating model is preserved by using 32 selected links with three SELINK adjustments; that is, the GM remains calibrated despite substantial changes in the input productions and attractions. The coverage of zones provided by 32 selected links is satisfactory. Increasing the number of repetitions beyond four is not reasonable because the stability of the GM model in reproducing the OD TLF reaches its limits. The total volume of truck traffic captured by 32 selected links is 107% of total trip productions. But more importantly, SELINK adjustment factors for all of the zones can be computed. Evaluation of the travel demand model resulting from the SELINK adjustments is conducted using screenline volume analysis, functional class and route specific volume analysis, area specific volume analysis, production and attraction analysis, and Vehicle Miles of Travel (VMT) analysis. Screenline volume analysis using four screenlines with 28 check points is used to evaluate the adequacy of the overall model. The total trucks crossing the screenlines are compared to the ground count totals. LV/GC ratios of 0.958 using 32 selected links and 1.001 using 16 selected links are obtained. 
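The link adjustment factor described above is a simple ratio applied to the zones feeding the link. A minimal sketch, with hypothetical counts and zone labels:

```python
def link_adjustment_factor(ground_count, assigned_volume):
    """Ratio of the actual (counted) link volume to the total assigned volume."""
    return ground_count / assigned_volume

# Hypothetical selected link: the assignment loads 1,000 trucks but the
# ground count is 1,150, so productions for every zone whose trips use this
# link are scaled up by the same factor. Zone names are made up.
factor = link_adjustment_factor(1150.0, 1000.0)
zone_productions = {"zone_12": 400.0, "zone_47": 250.0}
adjusted = {z: p * factor for z, p in zone_productions.items()}
print(round(factor, 2))  # -> 1.15
```

Repeating this over many selected links, then re-running the assignment, is what drives the iterative SELINK adjustment of productions and attractions.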
The %RMSE for the four screenlines is inversely proportional to the average ground count totals by screenline. The magnitude of %RMSE for the four screenlines resulting from the fourth and last GM run using 32 and 16 selected links is 22% and 31% respectively. These results are similar to the overall %RMSE achieved for the 32 and 16 selected links themselves of 19% and 33% respectively. This implies that the SELINK analysis results are reasonable for all sections of the state. Functional class and route specific volume analysis is possible using the available 154 classification count check points. The truck traffic crossing the Interstate highways (ISH) with 37 check points, the US highways (USH) with 50 check points, and the State highways (STH) with 67 check points is compared to the actual ground count totals. The magnitude of the overall link volume to ground count ratio by route does not show any specific pattern of over- or underestimation. However, the %RMSE for the ISH shows the least value while that for the STH shows the largest value. This pattern is consistent with the screenline analysis and the overall relationship between %RMSE and ground count volume groups. Area specific volume analysis provides another broad statewide measure of the performance of the overall model. The truck traffic in the North area with 26 check points, the West area with 36 check points, the East area with 29 check points, and the South area with 64 check points is compared to the actual ground count totals. The four areas show similar results. No specific patterns in the LV/GC ratio by area are found. In addition, the %RMSE is computed for each of the four areas. The %RMSEs for the North, West, East, and South areas are 92%, 49%, 27%, and 35% respectively, whereas the average ground counts are 481, 1383, 1532, and 3154 respectively. As for the screenline and volume range analyses, the %RMSE is inversely related to average link volume. 
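The inverse relationship between %RMSE and average volume follows directly from the definition. The study does not restate its exact normalization, so this sketch assumes the common form: RMSE divided by the mean ground count, expressed as a percentage, with illustrative volumes:

```python
import math

def pct_rmse(assigned, counts):
    """%RMSE: root-mean-square error as a percentage of the mean ground count
    (assumed normalization; the study's exact formula is not restated here)."""
    n = len(counts)
    rmse = math.sqrt(sum((a - c) ** 2 for a, c in zip(assigned, counts)) / n)
    return 100.0 * rmse / (sum(counts) / n)

# The same absolute error of 10 trucks yields a smaller %RMSE on the
# higher-volume links, matching the inverse relationship reported above.
low_volume = pct_rmse([90, 110], [100, 100])        # mean count 100
high_volume = pct_rmse([990, 1010], [1000, 1000])   # mean count 1000
print(low_volume, high_volume)  # -> 10.0 1.0
```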
The SELINK adjustments of productions and attractions resulted in a very substantial reduction in the total in-state zonal productions and attractions. The initial in-state zonal trip generation model can now be revised with a new trip production rate (total adjusted productions/total population) and a new trip attraction rate. Revised zonal production and attraction adjustment factors can then be developed that reflect only the impact of the SELINK adjustments that cause increases or decreases from the revised zonal estimate of productions and attractions. Analysis of the revised production adjustment factors is conducted by plotting the factors on the state map. The east area of the state, including the counties of Brown, Outagamie, Shawano, Winnebago, Fond du Lac, and Marathon, shows comparatively large values of the revised adjustment factors. Overall, both small and large values of the revised adjustment factors are scattered around Wisconsin. This suggests that more independent variables beyond just population are needed for the development of the heavy truck trip generation model. More independent variables, including zonal employment data (office employees and manufacturing employees) by industry type, zonal private trucks owned, and zonal income data, which are not currently available, should be considered. A plot of the frequency distribution of the in-state zones as a function of the revised production and attraction adjustment factors shows the overall adjustment resulting from the SELINK analysis process. Overall, the revised SELINK adjustments show that the productions for many zones are reduced by a factor of 0.5 to 0.8 while the productions for a relatively few zones are increased by factors from 1.1 to 4, with most of the factors in the 3.0 range. No obvious explanation for the frequency distribution could be found. The revised SELINK adjustments overall appear to be reasonable. 
The heavy truck VMT analysis is conducted by comparing the 1990 heavy truck VMT forecasted by the GM truck forecasting model, 2.975 billion, with the WisDOT computed data. This gives an estimate that is 18.3% less than the WisDOT computation of 3.642 billion VMT. The WisDOT estimates are based on sampling the link volumes for USH, STH, and CTH. This implies potential error in sampling the average link volume. The WisDOT estimate of heavy truck VMT cannot be tabulated by the three trip types, I-I, I-E (E-I), and E-E. In contrast, the GM forecasting model shows that the proportion of E-E VMT out of total VMT is 21.24%. In addition, tabulation of heavy truck VMT by route functional class shows that the proportion of truck traffic traversing the freeways and expressways is 76.5%. Only 14.1% of total freeway truck traffic is I-I trips, while 80% of total collector truck traffic is I-I trips. This implies that freeways are traversed mainly by I-E and E-E truck traffic while collectors are used mainly by I-I truck traffic. Other tabulations, such as average heavy truck speed by trip type, average travel distance by trip type, and the VMT distribution by trip type, route functional class, and travel speed, are useful information for highway planners to understand the characteristics of statewide heavy truck trip patterns. Heavy truck volumes for the target year 2010 are forecasted using the GM truck forecasting model. Four scenarios are used. For better forecasting, ground-count-based segment adjustment factors are developed and applied. ISH 90 & 94 and USH 41 are used as example routes. The forecasting results using the ground-count-based segment adjustment factors are satisfactory for long range planning purposes, but additional ground counts would be useful for USH 41. Sensitivity analysis provides estimates of the impacts of the alternative growth rates, including information about changes in the trip types using key routes. 
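The 18.3% shortfall follows directly from the two VMT totals quoted above:

```python
# GM forecast vs. WisDOT estimate of 1990 heavy truck VMT, in billions.
gm_vmt = 2.975
wisdot_vmt = 3.642
shortfall_pct = 100 * (wisdot_vmt - gm_vmt) / wisdot_vmt
print(round(shortfall_pct, 1))  # -> 18.3
```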
The network-based GM can easily model scenarios with different rates of growth in rural versus urban areas, small versus large cities, and in-state zones versus external stations.


A PLS Path Modeling Approach on the Cause-and-Effect Relationships among BSC Critical Success Factors for IT Organizations (PLS 경로모형을 이용한 IT 조직의 BSC 성공요인간의 인과관계 분석)

  • Lee, Jung-Hoon;Shin, Taek-Soo;Lim, Jong-Ho
    • Asia pacific journal of information systems
    • /
    • Vol. 17, No. 4
    • /
    • pp.207-228
    • /
    • 2007
  • For a long time, the measurement of Information Technology (IT) organizations' activities was limited mainly to financial indicators. However, as the functions of information systems have multiplied, a number of studies have explored new measurement methodologies that complement financial measures. In particular, research on the IT Balanced Scorecard (BSC), a concept adapted from the BSC to measure IT activities, has appeared in recent years. The BSC provides more advantages than merely the integration of non-financial measures into a performance measurement system. The core of the BSC rests on the cause-and-effect relationships between measures, which allow prediction of value chain performance, communication and realization of the corporate strategy, and incentive-controlled actions. More recently, BSC proponents have focused on the need to tie measures together into a causal chain of performance, and to test the validity of these hypothesized effects to guide the development of strategy. Kaplan and Norton [2001] argue that one of the primary benefits of the balanced scorecard is its use in gauging the success of strategy. Norreklit [2000] insists that the cause-and-effect chain is central to the balanced scorecard; it is also central to the IT BSC. However, the relationship between information systems and enterprise strategy, as well as the connections among various IT performance measurement indicators, has received little study. Ittner et al. [2003] report that 77% of all surveyed companies with an implemented BSC place no or only little interest on soundly modeled cause-and-effect relationships, despite the importance of cause-and-effect chains as an integral part of the BSC. This shortcoming can be explained with one theoretical and one practical reason [Blumenberg and Hinz, 2006]. 
From a theoretical point of view, causalities within the BSC method and their application are only vaguely described by Kaplan and Norton. From a practical standpoint, modeling corporate causalities is a complex task due to tedious data acquisition and the subsequent maintenance of reliability. However, cause-and-effect relationships are an essential part of BSCs because they differentiate performance measurement systems like BSCs from simple key performance indicator (KPI) lists. KPI lists present an ad hoc collection of measures to managers but do not allow for a comprehensive view of corporate performance. Instead, performance measurement systems like BSCs try to model the relationships of the underlying value chain as cause-and-effect relationships. Therefore, to overcome the deficiencies of causal modeling in the IT BSC, sound and robust causal modeling approaches are required in theory as well as in practice. The purpose of this study is to suggest critical success factors (CSFs) and KPIs for measuring the performance of IT organizations and to empirically validate the causal relationships among those CSFs. For this purpose, we define four perspectives of the BSC for IT organizations according to Van Grembergen's study [2000] as follows. The Future Orientation perspective represents the human and technology resources needed by IT to deliver its services. The Operational Excellence perspective represents the IT processes employed to develop and deliver the applications. The User Orientation perspective represents the user evaluation of IT. The Business Contribution perspective captures the business value of the IT investments. Each of these perspectives has to be translated into corresponding metrics and measures that assess the current situation. This study suggests 12 CSFs for the IT BSC based on previous IT BSC studies and COBIT 4.1. These CSFs comprise 51 KPIs. 
We define the cause-and-effect relationships among the BSC CSFs for IT organizations as follows. The Future Orientation perspective will have positive effects on the Operational Excellence perspective. The Operational Excellence perspective will, in turn, have positive effects on the User Orientation perspective. Finally, the User Orientation perspective will have positive effects on the Business Contribution perspective. This research tests the validity of these hypothesized causal effects and the sub-hypothesized causal relationships. For this purpose, we used the Partial Least Squares approach to Structural Equation Modeling (PLS Path Modeling) for analyzing the multiple IT BSC CSFs. PLS path modeling has special abilities that make it more appropriate than other techniques, such as multiple regression and LISREL, when analyzing small sample sizes. The use of PLS path modeling has been gaining interest among IS researchers because of its ability to model latent constructs under conditions of non-normality and with small to medium sample sizes (Chin et al., 2003). The empirical results of our study using PLS path modeling show that the hypothesized causal effects in the IT BSC are partially supported.
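As a rough illustration of the hypothesized chain, and not the study's actual PLS estimation (which iteratively reweights each construct's indicators), one can simulate standardized composite scores wired Future → Operational → User → Business and recover positive path coefficients. All data here are simulated; only the perspective names come from the text:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 98  # a small sample, the regime where PLS path modeling is favored

# Simulated composite scores for the four IT BSC perspectives, generated
# along the hypothesized causal chain with true path strength 0.6.
future = rng.standard_normal(n)
operational = 0.6 * future + 0.8 * rng.standard_normal(n)
user = 0.6 * operational + 0.8 * rng.standard_normal(n)
business = 0.6 * user + 0.8 * rng.standard_normal(n)

def std_path_coef(x, y):
    """Standardized simple-regression coefficient (the correlation) for one path."""
    x = (x - x.mean()) / x.std()
    y = (y - y.mean()) / y.std()
    return float((x * y).mean())

paths = {
    "Future -> Operational": (future, operational),
    "Operational -> User": (operational, user),
    "User -> Business": (user, business),
}
for name, (x, y) in paths.items():
    print(name, round(std_path_coef(x, y), 2))  # all positive, near 0.6
```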

Problems of administrative area system in Korea and reforming direction (한국 행정구역체계의 문제점과 개편의 방향)

  • Yim, Seok-Hoi
    • Journal of the Korean Geographical Society
    • /
    • Vol. 29, No. 1
    • /
    • pp.65-83
    • /
    • 1994
  • Several problems of the administrative area system in Korea have been raised for a long time, because its framework has remained in place since the Chosun and Japanese colonial periods despite the changing local administrative environment brought about by rapid industrialization and urbanization. The recent reform integrating cities (Shi) and counties (Gun) is derived from this argument. But problems that permeate the overall system cannot be solved by a partial reorganization of Shi and Gun; they can be resolved only through reform of the whole system. The aims of this study are to analyze the problems of the administrative area system as a whole and to discuss the direction of its reform from that point of view. The major problems of the administrative area system can be summarized as follows. First, the administrative hierarchy has too many levels. The contemporary administrative hierarchical structure has four levels: regional autonomous government (Tukpyolshi, Jik'halshi, Do), local autonomous government (Shi, Gun), and two levels of auxiliary administrative areas (Up/Myun and Ri). These hierarchies were established in the late Chosun period, when transportation was undeveloped and residential activity space was confined. But today, developing transportation and the expanding sphere of life no longer require an administrative hierarchical structure with so many levels. Moreover, developing administrative technology gradually reduces administrative space. The many levels of the contemporary administrative hierarchical structure are a main factor in administrative inefficiency and in discord with the settlement system. The second problem is that Tukpyolshi and Jik'halshi, cities under direct control of the central government as metropolitan areas, are underbounded cities. An underbounded city inconveniences residential life and increases the external effects of local public services. This problem is especially acute in Seoul, Pusan, and Daegu. The third problem is that Do areas are mostly too large to be integrated into a single sphere of life. 
In fact, each of them consists of two or three spheres of life. The fourth problem is the metropolitan government system, in which the central city is separated from its complementary area, i.e., the Do; this weakens the economic force of the Do. The fifth problem is that several cities divide a single sphere of life, which is a main factor in financial inefficiency and makes regional administration difficult. Finally, the necessity of the rural parish (Myun) is gradually diminishing as rural residents orient their activities toward higher-order centers. First of all, the administrative area system should correspond with the substantial sphere of life in order to solve these problems. The following are some key directions this study proposes for the reform of the administrative area system from that standpoint. 1. Principles of reorganization -- integration of the central city with its complementary area; -- correspondence of the administrative hierarchical structure with the settlement system; -- correspondence of the boundaries of administrative areas with the sphere of life. 2. Reform strategy -- Jik'halshi is integrated with Do and placed under the control of Do. -- A small Seoul shi (city) with special functions as the capital is demarcated within Seoul Tukpyolshi, and the 22 autonomous districts of Seoul Tukpyolshi are integrated into 3-4 cities. -- Neighboring cities (Shi) in a single sphere of life are integrated into a single city (Shi). -- Myun and Ri are abolished in rural regions and replaced by a new unit of local administrative area, based on the lowest-order sphere of life, into which 3-4 Ris are integrated.


First Report of Soybean Dwarf Virus on Soybean(Glycine max) in Korea (콩(Glycine max)에서 콩위축바이러스(Soybean dwarf virus)의 최초 발생보고)

  • Kim, Sang-Mok;Lee, Jae-Bong;Lee, Yeong-Hoon;Choi, Se-Hoon;Choi, Hong-Soo;Park, Jin-Woo;Lee, Jun-Seong;Lee, Gwan-Seok;Moon, Jung-Kyung;Moon, Jae-Sun;Lee, Key-Woon;Lee, Su-Heon
    • Research in Plant Disease
    • /
    • Vol. 12, No. 3
    • /
    • pp.213-220
    • /
    • 2006
  • In 2003, a soybean (Glycine max) sample showing severe dwarfing symptoms was collected from a farmer's field in Cheongsong, Korea. Diagnosis of the sample by RT-PCR revealed that it was infected by Soybean dwarf virus (SbDV), designated SbDV-L81. This study is the first report of the occurrence of the virus in Korea. To further characterize the virus, the partial nucleotide sequence of the genomic RNA of SbDV-L81 was determined by RT-PCR using species-specific primers. The sequences were analyzed and subsequently compared to previously characterized strains of SbDV based on the pattern of symptom expression and vector specificities. The intergenic region between ORFs 2 and 3 and the coding regions of ORFs 2, 3, and 4 were relatively similar to those of the dwarfing strains (SbDV-DS and DP) rather than those of the yellowing strains (SbDV-YS and YP). Likewise, analysis of the 5'-half of the coding region of ORF5 indicated that SbDV-L81 was closely related to the strains (SbDV-YP and DP) transmitted by Acyrthosiphon pisum. These data from the natural symptoms and the comparisons of five regions of nucleotide sequences of SbDV suggest that SbDV-L81 may be most closely related to SbDV-DP.

Factors Influencing the Adoption of Location-Based Smartphone Applications: An Application of the Privacy Calculus Model (스마트폰 위치기반 어플리케이션의 이용의도에 영향을 미치는 요인: 프라이버시 계산 모형의 적용)

  • Cha, Hoon S.
    • Asia pacific journal of information systems
    • /
    • Vol. 22, No. 4
    • /
    • pp.7-29
    • /
    • 2012
  • Smartphones and their applications (i.e., apps) are increasingly penetrating consumer markets. According to a recent report from the Korea Communications Commission, nearly 50% of mobile subscribers in South Korea are smartphone users, accounting for over 25 million people. In particular, the importance of the smartphone has risen as a geospatially aware device that provides various location-based services (LBS) through its GPS capability. Popular LBS include maps and navigation, traffic and transportation updates, shopping and coupon services, and location-sensitive social network services. Overall, the emerging location-based smartphone apps (LBA) offer significant value by providing greater connectivity, personalization, and information and entertainment in a location-specific context. Conversely, the rapid growth of LBA and their benefits have been accompanied by concerns over the collection and dissemination of individual users' personal information through ongoing tracking of their location, identity, preferences, and social behaviors. The majority of LBA users tend to agree and consent to the LBA provider's terms and privacy policy on the use of location data in order to get immediate services. This tendency further increases the potential risks of unprotected exposure of personal information and serious invasions and breaches of individual privacy. To address the complex issues surrounding LBA, particularly from the user's behavioral perspective, this study applied the privacy calculus model (PCM) to explore the factors that influence the adoption of LBA. According to PCM, consumers are engaged in a dynamic adjustment process in which privacy risks are weighed against the benefits of information disclosure. 
Consistent with the principal notion of PCM, we investigated how individual users make a risk-benefit assessment in which personalized service and locatability act as benefit-side factors and information privacy risks act as a risk-side factor accompanying LBA adoption. In addition, we consider the moderating role of trust in the service providers on the inhibiting effect of privacy risks on user intention to adopt LBA. Further, we include perceived ease of use and usefulness as additional constructs to examine whether the technology acceptance model (TAM) can be applied in the context of LBA adoption. The research model with ten (10) hypotheses was tested using data gathered from 98 respondents through a quasi-experimental survey method. During the survey, each participant was asked to navigate a website where an experimental simulation of an LBA allowed the participant to purchase time- and location-sensitive discounted tickets for nearby stores. Structural equation modeling using partial least squares validated the instrument and the proposed model. The results showed that six (6) out of the ten (10) hypotheses were supported. Regarding the core PCM, H2 (locatability → intention to use LBA) and H3 (privacy risks → intention to use LBA) were supported, while H1 (personalization → intention to use LBA) was not supported. Further, we could not find any interaction effects (personalization × privacy risks, H4; locatability × privacy risks, H5) on the intention to use LBA. In terms of privacy risks and trust, as mentioned above we found a significant negative influence of privacy risks on intention to use (H3), but a positive influence of trust, which supported H6 (trust → intention to use LBA). The moderating effect of trust on the negative relationship between privacy risks and intention to use LBA was tested and confirmed by supporting H7 (privacy risks × trust → intention to use LBA). 
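The moderation in H7 is a standard interaction term: trust weakens the negative effect of privacy risk on intention. A minimal sketch with simulated standardized data (the coefficient values and noise level are assumptions, not the study's estimates):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 98  # matches the study's sample size; data themselves are simulated

privacy_risk = rng.standard_normal(n)
trust = rng.standard_normal(n)

# Generate intention so that the interaction coefficient is positive, i.e.
# higher trust dampens the negative privacy-risk effect, as in H7.
intention = (-0.5 * privacy_risk + 0.4 * trust
             + 0.3 * privacy_risk * trust
             + 0.5 * rng.standard_normal(n))

# OLS with an intercept, main effects, and the interaction term.
X = np.column_stack([np.ones(n), privacy_risk, trust, privacy_risk * trust])
beta, *_ = np.linalg.lstsq(X, intention, rcond=None)
print([round(float(b), 2) for b in beta[1:]])  # signs: negative, positive, positive
```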
The two hypotheses regarding the TAM, H8 (perceived ease of use → perceived usefulness) and H9 (perceived ease of use → intention to use LBA), were supported; however, H10 (perceived usefulness → intention to use LBA) was not supported. The results of this study offer the following key findings and implications. First, the application of PCM was found to be a good analysis framework in the context of LBA adoption. Many of the hypotheses in the model were confirmed, and the high R² value (51%) indicated a good fit of the model. In particular, locatability and privacy risks are found to be appropriate PCM-based antecedent variables. Second, the existence of a moderating effect of trust in the service provider suggests that the same marginal change in the level of privacy risks may differentially influence the intention to use LBA. That is, while privacy risks increasingly become an important social issue and will negatively influence the intention to use LBA, it is critical for LBA providers to build consumer trust and confidence to successfully mitigate this negative impact. Lastly, we could not find sufficient evidence that the intention to use LBA is influenced by perceived usefulness, which has been very well supported in most previous TAM research. This may suggest that future research should examine the validity of applying TAM and further extend or modify it in the context of LBA or other similar smartphone apps.


An Examination of Knowledge Sourcing Strategies Effects on Corporate Performance in Small Enterprises (소규모 기업에 있어서 지식소싱 전략이 기업성과에 미치는 영향 고찰)

  • Choi, Byoung-Gu
    • Asia pacific journal of information systems
    • /
    • Vol. 18, No. 4
    • /
    • pp.57-81
    • /
    • 2008
  • Knowledge is an essential strategic weapon for sustaining competitive advantage and a key determinant of organizational growth. When knowledge is shared and disseminated throughout the organization, it increases an organization's value by providing the ability to respond to new and unusual situations. The growing importance of knowledge as a critical resource has forced executives to pay attention to their organizational knowledge. Organizations are increasingly undertaking knowledge management initiatives and making significant investments. Knowledge sourcing is considered the first important step in effective knowledge management. Most firms continue to make an effort to realize the benefits of knowledge management by using various knowledge sources effectively. Appropriate knowledge sourcing strategies enable organizations to create, acquire, and access knowledge in a timely manner by reducing search and transfer costs, which results in better firm performance. In response, the knowledge management literature has devoted substantial attention to the analysis of knowledge sourcing strategies. Many studies have categorized knowledge sourcing strategies into internal- and external-oriented. An internal-oriented sourcing strategy attempts to increase firm performance by integrating knowledge within the boundary of the firm. In contrast, an external-oriented strategy attempts to bring knowledge in from outside sources, via either acquisition or imitation, and then to transfer that knowledge across the organization. However, the extant literature on knowledge sourcing strategies focuses primarily on large organizations. 
Although many studies have clearly highlighted major differences between large and small firms and the need to adopt different strategies for different firm sizes, scant attention has been given to analyzing how knowledge sourcing strategies affect firm performance in small firms and what the differences are between small and large firms in the patterns of knowledge sourcing strategy adoption. This study attempts to advance the current literature by examining the impact of knowledge sourcing strategies on small firm performance from a holistic perspective. Drawing on knowledge-based theory from organization science and complementarity theory from the economics literature, this paper is motivated by the following questions: (1) What are the adoption patterns of different knowledge sourcing strategies in small firms (i.e., which sourcing strategies should be adopted, and which work well together, in small firms)? and (2) What are the performance implications of these adoption patterns? In order to answer these questions, this study developed three hypotheses. The first hypothesis, based on knowledge-based theory, is that internal-oriented knowledge sourcing is positively associated with small firm performance. The second hypothesis, also developed on the basis of knowledge-based theory, is that external-oriented knowledge sourcing is positively associated with small firm performance. The third, based on complementarity theory, is that pursuing both internal- and external-oriented knowledge sourcing simultaneously is negatively, or less positively, associated with small firm performance. As a sampling frame, 700 firms were identified from the Annual Corporation Report in Korea. Survey questionnaires were mailed to owners or executives who were most knowledgeable about the firm's knowledge sourcing strategies and performance. A total of 188 companies replied, yielding a response rate of 26.8%. 
Due to incomplete data, 12 responses were eliminated, leaving 176 responses for the final analysis. Since all independent variables were measured as continuous variables, a supermodularity function was used to test the hypotheses based on the cross partial derivative of the payoff function. The results indicated no significant impact of the internal-oriented sourcing strategy but a positive impact of the external-oriented sourcing strategy on small firm performance. This intriguing result could be explained by the various resource and capital constraints of small firms. Small firms typically have restricted financial and human resources. They do not have enough assets to always develop knowledge internally. Another possible explanation is competency traps or core rigidities. Building up a knowledge base from internal knowledge creates core competences, but at the same time, excessive internally focused knowledge exploration leads to behaviors blind to other knowledge. Interestingly, this study found that internal- and external-oriented knowledge sourcing strategies had a substitutive relationship, which was inconsistent with previous studies that suggested a complementary relationship between them. This result might be explained using organizational identification theory. Internal organizational members may perceive external knowledge as a threat and tend to ignore knowledge from external sources because they prefer to maintain their own knowledge, legitimacy, and homogeneous attitudes. Therefore, integrating knowledge from internal and external sources might not be effective, resulting in failure to improve firm performance. Another possible explanation is small firms' resource and capital constraints, together with their lack of management expertise and absorptive capacity. Although the integration of different knowledge sources is critical, high levels of knowledge sourcing in many areas are quite expensive and so are often unrealistic for small enterprises.
This study provides several implications for research as well as practice. First, this study extends the existing knowledge by examining the substitutability (and complementarity) of knowledge sourcing strategies. Most prior studies have tended to investigate the independent effects of these strategies on performance without considering their combined impacts. Furthermore, this study tests complementarity based on the productivity approach, which has been considered a definitive test method for complementarity. Second, this study sheds new light on knowledge management research by identifying the relationship between knowledge sourcing strategies and small firm performance. Most current literature has insisted on a complementary relationship between knowledge sourcing strategies on the basis of data from large firms. Contrary to this conventional wisdom, this study identifies a substitutive relationship between knowledge sourcing strategies using data from small firms. Third, the implications for practice highlight that managers of small firms should focus on external-oriented knowledge sourcing strategies. Moreover, adopting both sourcing strategies simultaneously impedes small firm performance.
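The complementarity test described above — examining the sign of the cross partial derivative of the payoff function — can be approximated, for continuous sourcing variables, by the sign of an interaction term in a regression. The sketch below is a minimal illustration on synthetic data (the variable names, coefficients, and data are all hypothetical), not the paper's actual estimation:

```python
import numpy as np

def interaction_sign(x_int, x_ext, y):
    """Estimate the cross-partial (interaction) coefficient of a payoff
    function y = f(x_int, x_ext) via OLS with an interaction term.
    A positive coefficient suggests complementarity (supermodularity);
    a negative one suggests substitutability (submodularity)."""
    X = np.column_stack([np.ones_like(x_int), x_int, x_ext, x_int * x_ext])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta[3]  # coefficient on the interaction term

# Synthetic illustration: performance rises with external sourcing, but the
# two strategies substitute for each other (true cross-partial = -0.2).
rng = np.random.default_rng(0)
xi = rng.uniform(1, 7, 200)   # internal-oriented sourcing intensity
xe = rng.uniform(1, 7, 200)   # external-oriented sourcing intensity
y = 0.1 * xi + 0.6 * xe - 0.2 * xi * xe + rng.normal(0, 0.1, 200)

print(interaction_sign(xi, xe, y))  # negative -> substitutive relationship
```

A negative estimate on data like this mirrors the substitutive pattern the study reports for small firms.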

Evaluation of Endothelium-dependent Myocardial Perfusion Reserve in Healthy Smokers; Cold Pressor Test using $H_2^{15}O\;PET$ (흡연자에서 관상동맥 내피세포 의존성 심근 혈류 예비능: $H_2^{15}O\;PET$ 찬물자극 검사에 의한 평가)

  • Hwang, Kyung-Hoon;Lee, Dong-Soo;Lee, Byeong-Il;Lee, Jae-Sung;Lee, Ho-Young;Chung, June-Key;Lee, Myung-Chul
    • The Korean Journal of Nuclear Medicine
    • /
    • 제38권1호
    • /
    • pp.21-29
    • /
    • 2004
  • Purpose: Much evidence suggests that long-term cigarette smoking alters the coronary vascular endothelial response. In this study, we applied nonnegative matrix factorization (NMF), an unsupervised learning algorithm, to CO-less $H_2^{15}O-PET$ to noninvasively investigate coronary endothelial dysfunction caused by smoking. Materials and Methods: This study enrolled eighteen young male volunteers consisting of 9 smokers $(23.8{\pm}1.1\;yr;\;6.5{\pm}2.5$ pack-years) and 9 non-smokers $(23.8{\pm}2.9\;yr)$. None had any cardiovascular risk factors or disease history. Myocardial $H_2^{15}O-PET$ was performed at rest, during cold ($5^{\circ}C$) pressor stimulation, and during adenosine infusion. The left ventricular blood pool and myocardium were segmented on dynamic PET data by the NMF method. Myocardial blood flow (MBF) was calculated from the input and tissue functions by a single compartmental model with correction of partial volume and spillover effects. Results: There was no significant difference in resting MBF between the two groups (smokers: $1.43{\pm}0.41$ ml/g/min vs. non-smokers: $1.37{\pm}0.41$ ml/g/min; p=NS). During cold pressor stimulation, MBF in smokers was significantly lower than that in non-smokers ($1.25{\pm}0.34$ ml/g/min vs. $1.59{\pm}0.29$ ml/g/min; p=0.019). The difference in the ratio of cold pressor MBF to resting MBF between the two groups was also significant ($90{\pm}24%$ in smokers vs. $122{\pm}28%$ in non-smokers; p=0.024). During adenosine infusion, however, hyperemic MBF did not differ significantly between smokers and non-smokers ($5.81{\pm}1.99$ ml/g/min vs. $5.11{\pm}1.31$ ml/g/min; p=NS). Conclusion: In smokers, MBF during cold pressor stimulation was significantly lower compared with non-smokers, reflecting smoking-induced endothelial dysfunction. However, there was no significant difference in MBF during adenosine-induced hyperemia between the two groups.
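The single compartmental model mentioned above relates the tissue activity Ct(t) to the arterial input Ca(t) by dCt/dt = F·Ca(t) − (F/p)·Ct(t), where F is myocardial blood flow and p the blood-tissue partition coefficient of water. The sketch below simulates this model on a hypothetical input function and recovers F by grid search; it is an illustrative simplification that omits the partial volume and spillover corrections applied in the study:

```python
import numpy as np

def tissue_curve(flow, ca, dt, p=0.91):
    """Forward model: one-tissue compartment for freely diffusible water,
    dCt/dt = F*Ca(t) - (F/p)*Ct(t), integrated with a simple Euler step.
    flow in ml/g/min; p is the partition coefficient (ml/g)."""
    ct = np.zeros_like(ca)
    for i in range(1, len(ca)):
        ct[i] = ct[i - 1] + dt * (flow * ca[i - 1] - (flow / p) * ct[i - 1])
    return ct

# Hypothetical arterial input function sampled every second for 3 minutes
dt = 1.0 / 60.0                       # minutes per sample
t = np.arange(0, 3, dt)
ca = 100 * t * np.exp(-t / 0.3)       # gamma-variate-like bolus shape

measured = tissue_curve(1.4, ca, dt)  # "true" MBF = 1.4 ml/g/min (noiseless)

# Recover MBF by grid search over candidate flows
flows = np.arange(0.2, 6.0, 0.01)
sse = [np.sum((tissue_curve(f, ca, dt) - measured) ** 2) for f in flows]
mbf = flows[int(np.argmin(sse))]
print(round(mbf, 2))  # recovers approximately 1.40
```

In practice the fit is performed per segment on measured time-activity curves, with the input function taken from the NMF-segmented blood pool.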

Viability Assessment with T1-201 Rest-24 hour Delay Redistribution SPECT before Coronary Artery Bypass Graft in Coronary Artery Diseases (관상동맥 질환에서 우회로 수술 전 T1-201 휴식-24시간 지연 심근 관류 SPECT를 이용한 심근생존능의 평가)

  • Yoon, Seok-Nam;Kim, Ki-Bong;Lee, Won-Woo;Chung, June-Key;Lee, Myung-Chul;Seo, Jung-Don;Koh, Chang-Soon;Lee, Dong-Soo
    • The Korean Journal of Nuclear Medicine
    • /
    • 제30권4호
    • /
    • pp.493-501
    • /
    • 1996
  • To assess the contribution of T1-201 rest-24 hour delay redistribution to the detection of viable myocardium, we studied the predictive value of this redistribution in 17 patients who underwent rest-24 hour delay perfusion SPECT before bypass surgery. Regional wall motion was compared with gated SPECT in 10 patients and echocardiography in 7 patients before and after bypass surgery. Rest and 24 hour delayed uptakes were scored from 0 (normal perfusion) to 3 (defect). In rest SPECT, 56 segments showed decreased perfusion. Thirty-four segments (61%) improved after surgery and were defined as viable. Nineteen segments (34%) had more uptake of T1-201 at 24 hour delay, and the other 37 segments did not. Wall motion after bypass surgery improved in 81% (25/31) of segments with mildly decreased perfusion, in 57% (8/14) of segments with severely decreased perfusion, and in 9% (1/11) of segments with defects. In 14 of the 19 segments which had more T1-201 uptake at 24 hour delay, wall motion improved (positive predictive value of redistribution: 74%). Of the 37 segments which had persistent decreases in rest-24 hour redistribution, 20 improved and 17 did not (negative predictive value: 46%). Segments with severely decreased perfusion or defects showed improved wall motion after surgery in 64% (7/11) if they had redistribution at delay. Segments with either mildly decreased uptake at rest or rest-delayed redistribution showed improved wall motion in 76% (32/42). Among the 14 segments which showed improvement in wall motion, 10 had partial reversibility on stress-rest images and the other 4 had persistent perfusion defects on stress-rest images. These 4 segments were found viable only with rest-24 hour delayed perfusion SPECT.
We concluded that rest T1-201 uptake or redistribution at 24 hour delay can be regarded as evidence predicting postoperative improvement of abnormal wall motion, and that myocardial viability can be predicted with preoperative rest-24 hour delay perfusion SPECT in segments with decreased rest perfusion.
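The predictive values quoted above follow directly from the segment counts reported in the abstract; a minimal recomputation:

```python
# Segment counts taken from the abstract
redistribution_improved = 14   # redistribution at 24 h delay, wall motion improved
redistribution_total = 19      # all segments with redistribution at 24 h delay
persistent_not_improved = 17   # no redistribution, wall motion did not improve
persistent_total = 37          # all segments without redistribution

ppv = redistribution_improved / redistribution_total  # P(improved | redistribution)
npv = persistent_not_improved / persistent_total      # P(not improved | no redistribution)
print(f"PPV = {ppv:.0%}, NPV = {npv:.0%}")  # PPV = 74%, NPV = 46%
```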

The Mediating Role of Perceived Risk in the Relationships Between Enduring Product Involvement and Trust Expectation (지속적 제품관여도와 소비자 요구신뢰수준 간의 영향관계: 인지된 위험의 매개 역할에 대한 실증분석을 중심으로)

  • Hong, Ilyoo B.;Kim, Taeha;Cha, Hoon S.
    • Asia pacific journal of information systems
    • /
    • 제23권4호
    • /
    • pp.103-128
    • /
    • 2013
  • When a consumer needs a product or service and multiple sellers are available online, the process of selecting a seller to buy from is complex, since it involves many behavioral dimensions that have to be taken into account. As part of this selection process, consumers may set a minimum trust expectation that can be used to screen out less trustworthy sellers. In previous research, the level of consumers' trust expectation has been anchored on two important factors: product involvement and perceived risk. Product involvement refers to the extent to which a consumer perceives a specific product as important. Thus, higher product involvement may result in higher trust expectation of sellers. On the other hand, other related studies found that when consumers perceived a higher level of risk (e.g., credit card fraud risk), they set higher trust expectation as well. While abundant research exists addressing the relationship between product involvement and perceived risk, little attention has been paid to an integrative view of the link between the two constructs and their impacts on trust expectation. The present paper is a step toward filling this research gap. The purpose of this paper is to understand the process by which a consumer chooses an online merchant by examining the relationships among product involvement, perceived risk, trust expectation, and intention to buy from an e-tailer. We specifically focus on the mediating role of perceived risk in the relationship between enduring product involvement and trust expectation. That is, we question whether product involvement affects trust expectation directly without mediation or indirectly, mediated by perceived risk. The research model with four hypotheses was initially tested using data gathered from 635 respondents through an online survey method.
The structural equation modeling technique with partial least squares was used to validate the instrument and the proposed model. The results showed that three of the four hypotheses formulated were supported. First, we found that the intention to buy from a digital storefront is positively and significantly influenced by the trust expectation, providing support for H4 (trust expectation ${\rightarrow}$ purchase intention). Second, perceived risk was found to be a strong predictor of trust expectation, supporting H2 as well (perceived risk ${\rightarrow}$ trust expectation). Third, we did not find any evidence of a direct influence of product involvement, which caused H3 to be rejected (product involvement ${\rightarrow}$ trust expectation). Finally, we found a significant positive relationship between product involvement and perceived risk (H1: product involvement ${\rightarrow}$ perceived risk), which suggests the possibility of complete mediation by perceived risk in the relationship between enduring product involvement and trust expectation. As a result, we conducted an additional test for the mediation effect by comparing the original model with a revised model without the mediator variable of perceived risk. Indeed, when the variable of perceived risk was intentionally eliminated, we found a strong influence of product involvement on trust expectation that had been suppressed (i.e., mediated) by perceived risk in the original model. The Sobel test statistically confirmed the complete mediation effect. Results of this study offer the following key findings. First, enduring product involvement is positively related to perceived risk, implying that the more a consumer is enduringly involved with a given product, the greater the risk he or she is likely to perceive with regard to the online purchase of the product. Second, perceived risk is positively related to trust expectation.
A consumer with strong risk perceptions concerning an online purchase is likely to buy from a highly trustworthy online merchant, thereby mitigating potential risks. Finally, product involvement was found to have no direct influence on trust expectation; rather, the relationship between the two constructs was indirect and mediated by perceived risk. This is perhaps an important theoretical integration of two separate streams of literature on product involvement and perceived risk. The present research also provides useful implications for practitioners as well as academicians. First, one implication for practicing managers of online retail stores is that they should invest in reducing consumers' perceived risk in order to lower the trust expectation and thus increase consumers' intention to purchase products or services. Second, an academic implication is that perceived risk mediates the relationship between enduring product involvement and trust expectation. Further research is needed to elaborate the theoretical relationships among the constructs under consideration.
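The Sobel test used above to confirm mediation computes z = ab / sqrt(b²·SEa² + a²·SEb²), where a is the path from the predictor to the mediator and b the path from the mediator to the outcome. The abstract does not report the actual path coefficients, so the numbers in this sketch are hypothetical:

```python
import math

def sobel_z(a, se_a, b, se_b):
    """Sobel test statistic for an indirect (mediated) effect a*b,
    where a is the X->M path and b the M->Y path, with their
    standard errors se_a and se_b."""
    return (a * b) / math.sqrt(b**2 * se_a**2 + a**2 * se_b**2)

# Hypothetical estimates: involvement -> risk (a), risk -> expectation (b)
z = sobel_z(a=0.42, se_a=0.05, b=0.51, se_b=0.06)
print(round(z, 2), abs(z) > 1.96)  # mediation significant at 5% if |z| > 1.96
```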

Changes in the Incantations of the Daesoon Faith: Focusing on Historical Facts (대순 신앙의 주문 변화 -고증을 중심으로-)

  • Park Sang-kyu
    • Journal of the Daesoon Academy of Sciences
    • /
    • 제44집
    • /
    • pp.1-52
    • /
    • 2023
  • Incantations are reflected in the fundamentals of the Daesoon faith system and are, thus, key to its understanding. Jeungsan, the yeonwon (fountainhead, 淵源) of the Daesoon faith, created new incantations or transformed existing ones that had been used in old religious traditions such as Buddhism and Daoism. However, there has been no in-depth academic research on Jeungsan's incantations until now. This study aims to academically clarify the incantatory archetypes of Jeungsan's incantations based on documents published up to the 1970s. Jeungsan's incantations are then compared to those of Mugeuk-do (Limitless Dao) in the 1920s and Taegeuk-do (Great-Ultimate Dao) in the 1950s. Jeongsan's transformed incantations are analyzed through this process. Jeongsan reflected the faith system in Jeungsan's incantations during the period of Mugeuk-do. He transformed the incantations to achieve his goal and realize his wishes by arranging terms that referred to himself before the optative words of the incantations. Jeongsan made several changes to the incantations in the 1950s. First, the majority of incantations used in Mugeuk-do were discarded. This entailed partial changes to the faith system, reflecting the awareness that the corresponding incantations were no longer necessary because the Degree Number calibrated by Jeungsan had been realized. Second, Jeongsan organized the incantations in use and institutionalized their instructions. This reflected the essential doctrinal system of the Daesoon faith, namely the completion of the true dharma by Jeongsan. Considering this doctrine, that is, the Fifty Year Holy Work (五十年工夫), the true dharma can be presumed to have been realized before the death of Jeongsan. Accordingly, the institutionalizing and organizing of the incantations were indispensable until the mid-to-late 1950s.
Jeongsan, the founder of the Daesoon order, posited himself as the successor of religious orthodox lineage and as the figure who would complete the true dharma by realizing the Degree Number calibrated by Jeungsan. Therefore, Jeongsan interpreted Jeungsan's incantations to be a rough sketch of the Daesoon faith system that had been drawn for him in advance by Jeungsan. Accordingly, Jeongsan transformed Jeungsan's incantations and used them to realize the Degree Number, which Jeungsan had planned. Simultaneously, Jeongsan declared that he would fulfill the Degree Number and establish the true dharma by changing those incantations.