
Derivation of the Synthetic Unit Hydrograph Based on the Watershed Characteristics (유역특성에 의한 합성단위도의 유도에 관한 연구)

  • 서승덕
    • Magazine of the Korean Society of Agricultural Engineers
    • /
    • v.17 no.1
    • /
    • pp.3642-3654
    • /
    • 1975
  • The purpose of this thesis is to derive a unit hydrograph which may be applied to ungaged watershed areas from the relations between directly measurable unitgraph properties, such as peak discharge ($q_p$), time to peak discharge ($T_p$), and lag time ($L_g$), and watershed characteristics, such as river length ($L$) from the given station to the upstream limits of the watershed in km, river length from the station to the centroid of the watershed in km ($L_{ca}$), and main stream slope in meters per km ($S$). Another procedure, based on routing a time-area diagram through catchment storage, is the Instantaneous Unit Hydrograph (IUH). A dimensionless unitgraph is also analysed in brief. The basic data (1969 to 1973) used in these studies are 9 recording level gages and rating curves, 41 rain gages and pluviographs, and 40 observed unitgraphs from the 9 sub-watersheds in the Nak Dong River basin. The results of these studies are summarized as follows: 1. Time in hours from start of rise to peak rate ($T_p$) generally occurred at the position of $0.3T_b$ (time base of hydrograph), with some indication of higher values for larger watersheds. The base flow is comparatively higher than in the other small watershed areas. 2. The losses from rainfall were divided into initial loss and continuing loss. Initial loss may be defined as that portion of storm rainfall which is intercepted by vegetation, held in depression storage, or infiltrated at a high rate early in the storm; continuing loss is defined as the loss which continues at a constant rate throughout the duration of the storm after the initial loss has been satisfied. This continuing loss approximates the nearly constant rate of infiltration ($\Phi$-index method). The loss rate from this analysis was estimated at approximately 50 per cent of the rainfall excess during the period of surface runoff. 3. For stream slope it is usual, as an approximation, to consider the main stream only, without giving any specific consideration to tributaries. It is desirable to develop a single measure of slope that is representative of the whole stream. Mean channel slopes of 1 meter per 200 meters and 1 meter per 1,400 meters were determined at Gazang and Jindong respectively. These slopes are considered slightly low in the light of other river studies, so the flood concentration rate might be slightly low in the Nak Dong River basin. 4. It was found that the watershed lag ($L_g$, hrs) could be expressed by $L_g = 0.253(L \cdot L_{ca})^{0.4171}$. The product $L \cdot L_{ca}$ is a measure of the size and shape of the watershed. For the logarithms, the correlation coefficient for $L_g$ was 0.97, which indicates that $L_g$ is closely related to the watershed characteristics $L$ and $L_{ca}$. 5. An expression for the basin might be expected to take a form containing the slope, as $L_g = 0.545\left(\frac{L \cdot L_{ca}}{\sqrt{S}}\right)^{0.346}$. For the logarithms, the correlation coefficient for $L_g$ was 0.97, which indicates that $L_g$ is closely related to the basin characteristics as well. Care is needed in analyses relating to the mean slopes. 6. Peak discharge per unit area of the unitgraph for standard duration $t_r$, in $\mathrm{m^3/sec/km^2}$, was given by $q_p = 10^{-0.52 - 0.0184 L_g}$, with an indication of lower values for watersheds with higher lag times. For the logarithms, the correlation coefficient for $q_p$ was 0.998, indicating high significance. The peak discharge of the unitgraph for an area could therefore be expected to take the form $Q_p = q_p \cdot A$ ($\mathrm{m^3/sec}$). 7. Using the unitgraph parameter $L_g$, the base length of the unitgraph, in days, was adopted as $T_b = 0.73 + 2.073\left(\frac{L_g}{24}\right)$, with a highly significant correlation coefficient of 0.92. The constants of the above equation are fixed by the procedure used to separate base flow from direct runoff. 8. The width $W_{75}$ of the unitgraph at discharge equal to 75 per cent of the peak discharge, in hours, and the width $W_{50}$ at discharge equal to 50 per cent of the peak discharge, in hours, can be estimated from $W_{75} = \frac{1.61}{q_p^{1.05}}$ and $W_{50} = \frac{2.5}{q_p^{1.05}}$ respectively. These provide a supplementary guide for sketching the unitgraph. 9. The above equations define the three factors necessary to construct the unitgraph for duration $t_r$. For a duration $t_R$, the lag is $L_{gR} = L_g + 0.2(t_R - t_r)$, and this modified lag $L_{gR}$ is used in $q_p$ and $T_b$. If $t_r$ happens to be equal to or close to $t_R$, further assume $q_{pR} = q_p$. 10. The triangular hydrograph is a dimensionless unitgraph prepared from the 40 unitgraphs. The equation is $q_p = \frac{K \cdot A \cdot Q}{T_p}$, or $q_p = \frac{0.21 \, A \cdot Q}{T_p}$; the constant 0.21 is specific to the Nak Dong River basin. 11. The base length of the time-area diagram for the IUH routing is $C = 0.9\left(\frac{L \cdot L_{ca}}{\sqrt{S}}\right)^{1/3}$. The correlation coefficient for $C$ was 0.983, indicating high significance. The base length of the T-AD was set equal to the time from the midpoint of rainfall excess to the point of contraflexure. The constant $K$ derived in these studies is $K = 8.32 + 0.0213\frac{L}{\sqrt{S}}$, with a correlation coefficient of 0.964. 12. In the light of the results analysed in these studies, the average errors in the peak discharge of the synthetic unitgraph, triangular unitgraph, and IUH were estimated as 2.2, 7.7 and 6.4 per cent respectively, relative to the peak of the observed average unitgraph. Each ordinate of the synthetic unitgraph closely approached the observed one.
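As a rough illustration, the fitted relations quoted in items 5-8 of the abstract can be chained into a small calculator. This is a sketch only: it uses the coefficients exactly as quoted above, and the input values for $L$, $L_{ca}$ and $S$ are made-up examples, not data from the Nak Dong basin.

```python
import math

def synthetic_unitgraph(L_km, Lca_km, S_m_per_km):
    """Sketch of the synthetic-unitgraph relations quoted in the abstract.

    L_km   : main stream length from station to upstream limit (km)
    Lca_km : stream length from station to watershed centroid (km)
    S_m_per_km : main stream slope (m per km)
    """
    # Item 5: watershed lag in hours, including the slope term.
    Lg = 0.545 * (L_km * Lca_km / math.sqrt(S_m_per_km)) ** 0.346
    # Item 6: peak discharge per unit area (m^3/sec/km^2).
    qp = 10 ** (-0.52 - 0.0184 * Lg)
    # Item 7: base length of the unitgraph, in days.
    Tb = 0.73 + 2.073 * (Lg / 24.0)
    # Item 8: widths at 75% and 50% of peak discharge, in hours.
    W75 = 1.61 / qp ** 1.05
    W50 = 2.5 / qp ** 1.05
    return {"Lg_hr": Lg, "qp": qp, "Tb_days": Tb, "W75_hr": W75, "W50_hr": W50}

# Hypothetical watershed: L = 50 km, Lca = 25 km, S = 5 m/km.
params = synthetic_unitgraph(50.0, 25.0, 5.0)
```

The five returned values are the quantities the abstract says are needed to sketch the unitgraph for the standard duration.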


Business Relationships and Structural Bonding: A Study of American Metal Industry (산업재 거래관계와 구조적 결합: 미국 금속산업의 분석 연구)

  • Han, Sang-Lin;Kim, Yun-Tae;Oh, Chang-Yeob;Chung, Jae-Moon
    • Journal of Global Scholars of Marketing Science
    • /
    • v.18 no.3
    • /
    • pp.115-132
    • /
    • 2008
  • The metal industry is one of the most representative heavy industries, and the median sales volume of steel and nonferrous metal companies is over one billion dollars in the case of America [Forbes 2006]. As seen in the recent business market situation, an increasing number of industrial manufacturers and suppliers are moving from adversarial to cooperative exchange attitudes that support long-term relationships with their customers. This article presents the results of an empirical study of the antecedent factors of business relationships in the metal industry of the United States. Commitment has been reviewed as a significant and critical variable in research on inter-organizational relationships (Hong et al. 2007, Kim et al. 2007). The future stability of any buyer-seller relationship depends upon the commitment made by the interactants to their relationship. Commitment, according to Dwyer et al. [1987], refers to "an implicit or explicit pledge of relational continuity between exchange partners," and they consider commitment to be the most advanced phase of the buyer-seller exchange relationship. Bonds are made because the members need their partners in order to do something, and this integration on a task basis can be either symbiotic or cooperative (Svensson 2008). To the extent that members seek the same or mutually supporting ends, there will be strong bonds among them. In other words, the principle that affects the strength of bonds is the 'economy of decision making' [Turner 1970]. These bonds provide an important basis for studying the causes of long-term business relationships, in the sense that organizations can be mutually bonded by a common interest in economic matters. Recently, the framework of structural bonding has been used to study buyer-seller relationships in industrial marketing [Han and Sung 2008, Williams et al. 1998, Wilson 1995], in that this structural bonding is a crucial part of the theoretical justification for distinguishing discrete transactions from ongoing long-term relationships. The major antecedent factors of buyer commitment, such as technology, CLalt (comparison level of alternatives), transaction-specific assets, and importance, were identified and explored from the perspective of structural bonding. Research hypotheses were developed and tested using survey data from middle managers in the metal industry. H1: The level of technology of the relationship partner is positively related to the level of structural bonding between the buyer and the seller. H2: The comparison level of alternatives is negatively related to the level of structural bonding between the buyer and the seller. H3: The amount of transaction-specific assets is positively related to the level of structural bonding between the buyer and the seller. H4: The importance of the relationship partner is positively related to the level of structural bonding between the buyer and the seller. H5: The level of structural bonding is positively related to the level of commitment to the relationship. To examine the major antecedent factors of industrial buyers' structural bonding and long-term relationships, a questionnaire was prepared and mailed out to a sample of 400 purchasing managers in the US metal industry (SIC codes 33 and 34). After a follow-up request, 139 informants returned the questionnaires, resulting in a response rate of 35 percent. 134 responses were used in the final analysis after dropping 5 incomplete questionnaires. All measures were analyzed for reliability and validity following the guidelines offered by Churchill [1979] and Anderson and Gerbing [1988]. The results of fitting the model to the data indicated that the hypothesized model provides a good fit to the data.
Goodness-of-fit index (GFI = 0.94) and other indices (chi-square = 78.02 with p-value = 0.13, Adjusted GFI = 0.90, Normed Fit Index = 0.92) indicated that a major proportion of the variances and covariances in the data was accounted for by the model as a whole, and all the parameter estimates showed statistical significance, as evidenced by large t-values. All the factor loadings were significantly different from zero. On these grounds we judged the hypothesized model to be a reasonable representation of the data. The results of the present study suggest several implications for buyer-seller relationships. Theoretically, we attempted to conceptualize the antecedent factors of buyer-seller long-term relationships from the perspective of structural bonding in the metal industry. The four underlying determinants (i.e., technology, CLalt, transaction-specific assets, and importance) of structural bonding are very critical variables of buyer-seller long-term business relationships. Our model of structural bonding attempts to systematically examine the relationship between the antecedent factors of structural bonding and long-term commitment. Managerially, this research provides industrial purchasing managers with a good framework to assess the interaction processes with their partners and the ability to position their business relationships from the perspective of structural bonding. In other words, based on those underlying variables, industrial purchasing managers can determine the strength of the company's relationships with key suppliers and its state of preparation to be a successful partner with those suppliers. Both the supplying and customer companies can also benefit by using the concept of 'structural bonding' and evaluating their relationships with key business partners from the structural point of view. In general, the results indicate that structural bonding has a critical impact on the level of relationship commitment.
Managerial implications and limitations of the study are also discussed.
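The two-stage structure hypothesized in H1-H5 can be mimicked on synthetic data with ordinary least squares. This is a toy illustration only, not the authors' LISREL estimation: all coefficients and data below are made up, and the point is merely the shape of the model (four antecedents → bonding → commitment).

```python
import numpy as np

# Toy illustration of the hypothesized H1-H5 structure on synthetic data.
rng = np.random.default_rng(42)
n = 500
tech = rng.normal(size=n)        # H1: positive effect on bonding
clalt = rng.normal(size=n)       # H2: negative effect on bonding
tsa = rng.normal(size=n)         # H3: positive effect on bonding
importance = rng.normal(size=n)  # H4: positive effect on bonding

# Made-up "true" structural coefficients plus noise.
bonding = (0.5 * tech - 0.4 * clalt + 0.3 * tsa
           + 0.6 * importance + rng.normal(scale=0.5, size=n))
commitment = 0.7 * bonding + rng.normal(scale=0.5, size=n)  # H5: positive

# Stage 1: regress structural bonding on its four antecedents.
X1 = np.column_stack([np.ones(n), tech, clalt, tsa, importance])
beta1, *_ = np.linalg.lstsq(X1, bonding, rcond=None)

# Stage 2: regress commitment on structural bonding.
X2 = np.column_stack([np.ones(n), bonding])
beta2, *_ = np.linalg.lstsq(X2, commitment, rcond=None)
```

With enough observations, the estimated signs recover the hypothesized directions (positive for H1, H3, H4, H5 and negative for H2); a full SEM additionally models measurement error, which this sketch omits.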


A Store Recommendation Procedure in Ubiquitous Market for User Privacy (U-마켓에서의 사용자 정보보호를 위한 매장 추천방법)

  • Kim, Jae-Kyeong;Chae, Kyung-Hee;Gu, Ja-Chul
    • Asia pacific journal of information systems
    • /
    • v.18 no.3
    • /
    • pp.123-145
    • /
    • 2008
  • Recently, as information and communication technology develops, discussion regarding the ubiquitous environment is occurring from diverse perspectives. A ubiquitous environment is an environment that can transfer data through networks regardless of physical space, virtual space, time, or location. In order to realize the ubiquitous environment, Pervasive Sensing technology, which enables the recognition of users' data without a border between physical and virtual space, is required. In addition, the latest and most diversified technologies are necessary: Context-Awareness technology, to construct the context around the user by sharing the data accessed through Pervasive Sensing technology, and linkage technology, to prevent information loss through wired and wireless networking and databases. In particular, Pervasive Sensing technology is taken as an essential technology that enables user-oriented services by recognizing the needs of users even before they inquire. The technologies mentioned above give the ubiquitous environment many characteristics, such as ubiquity, abundance of data, mutuality, high information density, individualization, and customization. Among them, information density indicates the accessible amount and quality of information, which is stored in bulk with ensured quality through Pervasive Sensing technology. Using this, companies became able to provide personalized contents (or information) for a target customer. Most of all, there is an increasing number of researches on recommender systems that provide what customers need even when the customers do not explicitly express their needs. Recommender systems are well renowned for their affirmative effect of enlarging selling opportunities and reducing the searching cost of customers, since they find and provide information according to customers' traits and preferences in advance, in a commerce environment.
Recommender systems have proved their usability through several methodologies and experiments conducted in many different fields since the mid-1990s. Most of the research related to recommender systems so far takes the products or information of internet or mobile contexts as its object, but there is not enough research concerned with recommending an adequate store to customers in a ubiquitous environment. It is possible to track customers' behaviors in a ubiquitous environment, the same way it is implemented in an online market space, even when customers are purchasing in an offline marketplace. Unlike the existing internet space, in a ubiquitous environment there is increasing interest in stores that provide information according to the traffic line of customers. In other words, the same product can be purchased in several different stores, and the preferred store can differ between customers by personal preference regarding traffic line between stores, location, atmosphere, quality, and price. Krulwich(1997) developed Lifestyle Finder, which recommends a product and a store by using the demographic information and purchasing information generated in internet commerce. Also, Fano(1998) created Shopper's Eye, an information-providing system in which information regarding the store closest to the customer's present location is shown when the customer has sent a to-buy list. Sadeh(2003) developed MyCampus, which recommends appropriate information and a store in accordance with the schedule saved in a customer's mobile. Moreover, Keegan and O'Hare(2004) came up with EasiShop, which provides suitable store information, including price, after service, and accessibility, after analyzing the to-buy list and the current location of customers.
However, Krulwich(1997) does not indicate the characteristics of physical space, being based on the online commerce context, and Keegan and O'Hare(2004) only provide information about stores related to a product, while Fano(1998) does not fully consider the relationship between the preference toward stores and the store itself. The most recent research, by Sadeh(2003), experimented on a campus by suggesting recommender systems that reflect situation and preference information besides the characteristics of the physical space. Yet there is a potential problem, since these researches are based on the location and preference information of customers, which is connected to the invasion of privacy. The primary point of controversy is the invasion of privacy and individual information in a ubiquitous environment, according to research conducted by Al-Muhtadi(2002), Beresford and Stajano(2003), and Ren(2006). Additionally, individuals want to be left anonymous to protect their own personal information, as mentioned in Srivastava(2000). Therefore, in this paper, we suggest a methodology to recommend stores in a U-market on the basis of a ubiquitous environment without using personal information, in order to protect individual information and privacy. The main idea behind our suggested methodology is based on the Feature Matrices model (FM model, Shahabi and Banaei-Kashani, 2003), which uses clusters of customers' similar transaction data, similar to Collaborative Filtering. However, unlike Collaborative Filtering, this methodology overcomes the problems of personal information and privacy, since it is not aware of exactly who the customer is. The methodology is compared with a single-trait model (vector model) such as visitor logs, while looking at the actual improvement of the recommendation when context information is used. It is not easy to find real U-market data, so we experimented with factual data from a real department store, with context information.
The recommendation procedure for the U-market proposed in this paper is divided into four major phases. The first phase is collecting and preprocessing data for analysis of the shopping patterns of customers. The traits of shopping patterns are expressed as feature matrices of N dimensions. In the second phase, similar shopping patterns are grouped into clusters and the representative pattern of each cluster is derived. The distance between shopping patterns is calculated by the Projected Pure Euclidean Distance (Shahabi and Banaei-Kashani, 2003). The third phase finds a representative pattern that is similar to a target customer; at the same time, the shopping information of the customer is traced and saved dynamically. Fourth, the next store is recommended based on the physical distance between the stores of the representative patterns and the present location of the target customer. In this research, we evaluated the accuracy of the recommendation method based on factual data derived from a department store. There are technological difficulties in tracking on a real-time basis, so we extracted purchasing-related information and added context information to each transaction. As a result, recommendation based on the FM model, which applies purchasing and context information, is more stable and accurate than that of the vector model. Additionally, we obtained more precise recommendation results as more shopping information was accumulated. Realistically, because of the limitations of realizing a ubiquitous environment, we were not able to reflect all the different kinds of context, but more explicit analysis is expected to be attainable once a practical system is embodied.
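The four phases above can be sketched in heavily simplified form: shopping patterns are reduced to plain store-visit count vectors (rather than full feature matrices), cluster representatives are simple centroids under Euclidean distance, and the next store is the physically nearest store emphasized by the matched representative. All names and data below are hypothetical; this is not the paper's FM-model implementation.

```python
import numpy as np

def build_patterns(visit_logs, n_stores):
    """Phase 1: turn anonymous visit logs into pattern (count) vectors."""
    patterns = np.zeros((len(visit_logs), n_stores))
    for i, visits in enumerate(visit_logs):
        for store in visits:
            patterns[i, store] += 1
    return patterns

def cluster_representatives(patterns, n_clusters, n_iter=20, seed=0):
    """Phase 2: group similar patterns and derive representative patterns."""
    rng = np.random.default_rng(seed)
    reps = patterns[rng.choice(len(patterns), n_clusters, replace=False)]
    for _ in range(n_iter):
        # Assign each pattern to the nearest representative (Euclidean).
        d = np.linalg.norm(patterns[:, None, :] - reps[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        for k in range(n_clusters):
            if (labels == k).any():
                reps[k] = patterns[labels == k].mean(axis=0)
    return reps

def recommend_next_store(reps, partial_visits, n_stores, store_xy, current_xy):
    """Phases 3-4: match the target's partial pattern to the closest
    representative, then recommend the physically nearest unvisited store
    that the representative pattern emphasizes."""
    target = build_patterns([partial_visits], n_stores)[0]
    rep = reps[np.linalg.norm(reps - target, axis=1).argmin()]
    candidates = [s for s in range(n_stores)
                  if rep[s] > 0 and s not in partial_visits]
    return min(candidates,
               key=lambda s: np.linalg.norm(store_xy[s] - current_xy))
```

Because the target is matched only against anonymous cluster representatives, no identifying customer profile is needed at recommendation time, which mirrors the privacy argument made above.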

A Study on the Critical Success Factors of Social Commerce through the Analysis of the Perception Gap between the Service Providers and the Users: Focused on Ticket Monster in Korea (서비스제공자와 사용자의 인식차이 분석을 통한 소셜커머스 핵심성공요인에 대한 연구: 한국의 티켓몬스터 중심으로)

  • Kim, Il Jung;Lee, Dae Chul;Lim, Gyoo Gun
    • Asia pacific journal of information systems
    • /
    • v.24 no.2
    • /
    • pp.211-232
    • /
    • 2014
  • Recently, there is growing interest in social commerce using SNS (Social Networking Service), and the size of its market is also expanding due to the popularization of smart phones, tablet PCs, and other smart devices. Accordingly, various studies have been attempted, but most of the previous studies have been conducted from the perspective of the users. The purpose of this study is to derive user-centered CSF (Critical Success Factors) of social commerce from the previous studies and to analyze the CSF perception gap between social commerce service providers and users. The CSF perception gap between the two groups shows that there is a difference between the ideal image the service providers hope for and the actual image the users have of social commerce companies. This study provides effective improvement directions for social commerce companies by presenting current business problems and solution plans. For this, this study selected Korea's representative social commerce business Ticket Monster, which is dominant in sales and staff size, together with its excellent funding power through an M&A by stock exchange with the US social commerce business LivingSocial, with Amazon.com as a shareholder, in August 2011, as the target social commerce service provider. We gathered questionnaires from both service providers and users from October 22, 2012 until October 31, 2012 to conduct an empirical analysis. We surveyed 160 service providers of Ticket Monster. We also surveyed 160 social commerce users who had experience using the Ticket Monster service. Out of 320 surveys, 20 questionnaires which were unfit or undependable were discarded. Consequently, the remaining 300 (150 service providers, 150 users) were used for this empirical study. The statistics were analyzed using SPSS 12.0.
The implications of the empirical analysis results of this study are as follows: First of all, there are differences in the rank order of importance of social commerce CSF between the two groups. While service providers regard 'Price Economic' as the most important CSF influencing purchasing intention, the users regard 'Trust' as the most important CSF influencing purchasing intention. This means that service providers have to utilize the unique strong point of social commerce, which makes customers feel trusted, rather than just focusing on selling products at a discounted price. Service providers need to enhance effective communication skills by using SNS and play a vital role as trusted advisers who provide curation services and explain the value of products through information filtering. Also, they need to pay attention to preventing consumer damage from deceptive and false advertising, and they have to create a detailed reward system for consumer damage caused by the above problems. This can create strong ties with customers. Second, both service providers and users tend to consider that the social commerce CSF influencing purchasing intention are Price Economic, Utility, Trust, and Word of Mouth Effect. Accordingly, it can be learned that users expect benefits in terms of price and economy when using social commerce, and service providers should be able to suggest individualized discount benefits through diverse methods using social network services. Looking at it from the aspect of usefulness, service providers are required to get users to be cognizant of time-saving, efficiency, and convenience when using social commerce. Therefore, it is necessary to increase the usefulness of social commerce through the introduction of new management strategies, such as intensification of the Website's search engine, facilitation of payment through the shopping basket, and package distribution.
Trust, as mentioned before, is the most important variable in consumers' minds, so it should definitely be managed for sustainable management. If trust in social commerce were to fall due to consumer damage from false and puffery advertising, it could have a negative influence on the image of the social commerce industry in general. Instead of advertising with famous celebrities and spending a bombastic amount of money on marketing expenses, the social commerce industry should be able to use the word of mouth effect between users by making use of the social network service, the major marketing method of early social commerce. The word of mouth effect arising from consumers' spontaneous performance of a self-marketer's duty can bring not only an advertising cost reduction effect to a service provider but can also prepare the basis for suggesting discounted prices to consumers; in this context, the word of mouth effect should be managed as a CSF of social commerce. Third, trade safety was not derived as one of the CSF. Recently, with e-commerce like social commerce and Internet shopping increasing in a variety of forms, the importance of trade safety on the Internet also increases, but in this study trade safety wasn't evaluated as a CSF of social commerce by either group. This study judges that this is because both the service provider group and the user group perceive that there is a reliable PG (Payment Gateway) which handles the e-payment of Internet transactions. Accordingly, it is understood that both groups feel that social commerce can have a corporate identity through its website and differentiation in the products and services on sale, but do not feel a big difference between businesses in the case of the e-payment system. In other words, trade safety is perceived as a natural, basic universal service.
Fourth, it is necessary that service providers intensify communication with users by making use of the social network service, which is the major marketing method of social commerce, and be able to use the word of mouth effect between users. The word of mouth effect arising from consumers' spontaneous performance of a self-marketer's duty can bring not only an advertising cost reduction effect to a service provider but can also prepare the basis for suggesting discounted prices to consumers; in this context, it is judged that the word of mouth effect should be managed as a CSF of social commerce. In this paper, the characteristics of social commerce are limited to five independent variables; however, if an additional study proceeds with more various independent variables, more in-depth study results will be derived. In addition, this research targets social commerce service providers and users; however, considering the fact that social commerce is a two-sided market, drawing CSF through an analysis of the perception gap between social commerce service providers and their advertisement clients would be worth dealing with in a follow-up study.
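The perception-gap comparison described above amounts to comparing importance ratings of each CSF between two independent groups. A minimal sketch, on made-up rating data for one factor ('Trust') and not the study's actual responses or method, is a Welch two-sample t statistic:

```python
import math
from statistics import mean, variance

# Welch's t statistic for two independent samples with unequal variances.
def welch_t(a, b):
    va, vb = variance(a) / len(a), variance(b) / len(b)
    t = (mean(a) - mean(b)) / math.sqrt(va + vb)
    # Welch-Satterthwaite approximation to the degrees of freedom.
    df = (va + vb) ** 2 / (va**2 / (len(a) - 1) + vb**2 / (len(b) - 1))
    return t, df

# Hypothetical 5-point importance ratings of 'Trust': the user group rates
# it higher than the provider group, as the study's gap analysis found.
providers = [3.8, 4.0, 3.5, 3.9, 3.6, 3.7, 4.1, 3.4]
users = [4.6, 4.8, 4.5, 4.9, 4.7, 4.4, 4.8, 4.6]
t, df = welch_t(providers, users)
```

A large negative t here would indicate that providers rate 'Trust' significantly lower than users, i.e. a perception gap on that factor.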

The Research on Online Game Hedonic Experience - Focusing on Moderate Effect of Perceived Complexity - (온라인 게임에서의 쾌락적 경험에 관한 연구 - 지각된 복잡성의 조절효과를 중심으로 -)

  • Lee, Jong-Ho;Jung, Yun-Hee
    • Journal of Global Scholars of Marketing Science
    • /
    • v.18 no.2
    • /
    • pp.147-187
    • /
    • 2008
  • Online game researchers focus on flow and the factors influencing flow. Flow is conceptualized as an optimal experience state and is useful for explaining the online game experience. Many game studies have focused on customer loyalty and flow in playing online games; in capturing the specific game experience, however, they do not examine the multidimensional experience process. Flow is not a construct which shows the absorbing process, but a construct which shows the absorbing result. Hence, flow is not adequate to examine the multidimensional experience of games. Online gaming is a form of hedonic consumption. Hedonic consumption is a relatively new field of study in consumer research, and it explores the consumption experience from an experiential view (Hirschman and Holbrook 1982). Hedonic consumption explores the consumption experience not as an information processing event but from a phenomenological or experiential view, which is a primarily subjective state. It includes various playful leisure activities, sensory pleasures, daydreams, esthetic enjoyment, and emotional responses. For the online game experience, therefore, it is right to take the experiential view of hedonic consumption. The objective of this paper was to make up for the lack in our understanding of the online game experience by developing a framework for better insight into the hedonic experience of online games. We developed this framework by integrating and extending existing research in marketing, online games, and hedonic responses. We then discussed several expectations for this framework. We concluded by discussing the results of this study, providing general recommendations and directions for future research. In hedonic response research, Lacher's research (1994) and Jongho Lee and Yunhee Jung's research (2005; 2006) served as the fundamental starting point of our research.
A common element in this extended research is the repeated identification of four hedonic responses: sensory response, imaginal response, emotional response, and analytical response. The validity of these four constructs has been found in research on music (Lacher 1994) and movies (Jongho Lee and Yunhee Jung 2005; 2006). But previous research on hedonic response didn't show that the constructs of hedonic response have a cause-effect relation. Also, although hedonic responses can differ by stimulus properties, the effects of stimulus properties were not shown. To fill this gap, while largely based on Lacher's (1994) research and Jongho Lee and Yunhee Jung's (2005, 2006) research, we made several important adaptations with the primary goal of bringing the model into online games and compensating for the lacks of previous research. We maintained the same constructs proposed by Lacher et al. (1994), with four constructs of hedonic response: sensory response, imaginal response, emotional response, and analytical response. In this study, the sensory response is typified by some physical movement (Yingling 1962); the imaginal response is typified by images, memories, or situations that the game evokes (Myers 1914); the emotional response represents the feelings one experiences when playing the game, such as pleasure, arousal, and dominance; finally, the analytical response means that the game player engages in cognition seeking while playing the game (Myers 1912). However, this paper has several important differences. We attempted to suggest a multi-dimensional experience process in online games and a cause-effect relation among hedonic responses. Also, we investigated the moderating effects of perceived complexity. Previous studies about hedonic responses didn't show the influences of stimulus properties. According to Berlyne's theory (1960, 1974) of aesthetic response, perceived complexity is an important construct because it affects pleasure.
Pleasure in response to an object will increase with increased complexity, up to an optimal level; beyond that optimal level, further increases in complexity decrease pleasure, giving an inverted-U relationship. Therefore, we expected this perceived complexity to influence hedonic response in the game experience. We discussed the rationale for these suggested changes and the assumptions of the resulting framework, and developed some expectations based on its application in the online game context. In the first stage of the methodology, questions were developed to measure the constructs. We constructed a survey measuring our theoretical constructs based on a combination of sources, including Yingling (1962), Hargreaves (1962), Lacher (1994), Jongho Lee and Yunhee Jung (2005, 2006), Mehrabian and Russell (1974), and Pucely et al. (1987). Based on comments received in the pretest, we made several revisions to arrive at our final survey. We investigated the proposed framework through a convenience sample, where participation in a self-report survey was solicited from various respondents having different knowledge. All respondents participated, to different degrees, in these habitually practiced activities and received no compensation for their participation. Questionnaires were distributed to graduates, and we used 381 completed questionnaires for analysis. The sample consisted of more men (n=225) than women (n=156). For measurement, the study used multi-item scales based on previous studies. We analyzed the data using structural equation modeling (LISREL-VIII; Joreskog and Sorbom 1993). First, we used the entire sample (n=381) to refine the measures and test their convergent and discriminant validity. The evidence from both the factor analysis and the analysis of reliability provides support that the scales exhibit internal consistency and construct validity. Second, we tested the hypothesized structural model.
We then divided the sample into two complexity groups and analyzed the hypothesized structural model for each group. The analysis suggests that the hedonic responses play roles different from those hypothesized in our study. The results indicate that the hedonic responses (sensory, imaginal, emotional, and analytical) are positively related to respondents' level of game satisfaction, and that game satisfaction is related to higher levels of game loyalty. Additionally, we found that perceived complexity matters to the online game experience: the importance of each hedonic response differs with perceived game complexity. Understanding the role of perceived complexity in hedonic responses allows a better understanding of the mechanisms underlying the game experience. If a game is highly complex, the analytical response becomes the important one, so game producers and marketers should provide more cognitive stimuli. Conversely, if a game has low complexity, the sensory response becomes important. Finally, we discuss several limitations of our study, suggest directions for future research, and conclude with managerial implications. Our study provides managers with a basis for game strategies.
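The internal-consistency check mentioned in the abstract is commonly operationalized as Cronbach's alpha. A minimal numpy sketch follows; the item scores here are hypothetical illustrations, not the study's survey responses:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, k_items) score matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)      # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)  # variance of the scale total
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical 4-item scale answered by five respondents (1-7 Likert).
scores = np.array([
    [5, 6, 5, 6],
    [3, 3, 4, 3],
    [6, 7, 6, 7],
    [2, 2, 3, 2],
    [4, 5, 4, 5],
], dtype=float)
print(round(cronbach_alpha(scores), 3))
```

Values above roughly 0.7 are conventionally read as acceptable internal consistency for a multi-item scale.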

Subject-Balanced Intelligent Text Summarization Scheme (주제 균형 지능형 텍스트 요약 기법)

  • Yun, Yeoil;Ko, Eunjung;Kim, Namgyu
    • Journal of Intelligence and Information Systems, v.25 no.2, pp.141-166, 2019
  • Recently, channels such as social media and SNS have been creating enormous amounts of data, and the portion of unstructured data represented as text has grown geometrically. Because it is difficult to read all of this text, it is important to access the data rapidly and grasp its key points. To meet this need for efficient understanding, many text summarization methods for handling and using tremendous amounts of text data have been proposed. In particular, many summarization methods using machine learning and artificial intelligence algorithms, so-called "automatic summarization," have recently been proposed to generate summaries objectively and effectively. However, almost all text summarization methods proposed to date construct the summary around the most frequent content of the original documents. Such summaries have a limitation: subjects with small weight, mentioned less often in the original text, tend to be omitted. If a summary includes only the major subjects, bias occurs and information is lost, so it is hard to ascertain every subject the documents cover. To avoid this bias, one can summarize with balance across the topics of the documents so that every subject can be ascertained, but an unbalanced distribution across subjects still remains. To retain balance in the summary, it is necessary to consider the proportion of each subject in the original documents and also to allocate the portions of subjects evenly, so that even sentences on minor subjects are sufficiently included. In this study, we propose a "subject-balanced" text summarization method that secures balance among all subjects and minimizes the omission of low-frequency subjects. For subject-balanced summaries, we use two summary-evaluation concepts, "completeness" and "succinctness."
Completeness means that the summary should fully cover the contents of the original documents; succinctness means that the summary contains minimal internal duplication. The proposed method has three phases. The first phase constructs subject-term dictionaries. Topic modeling is used to calculate topic-term weights, which indicate the degree to which each term is related to each topic. From the derived weights, the terms highly related to each topic can be identified, and the subjects of the documents can be found from topics composed of terms with similar meanings. A few terms that represent each subject well, called "seed terms" here, are then selected. However, these terms are too few to explain each subject sufficiently, so additional terms similar to the seed terms are needed for a well-constructed subject dictionary. Word2Vec is used for this word expansion: after Word2Vec modeling, word vectors are obtained, and the similarity between any two terms can be derived by cosine similarity, with a higher cosine similarity indicating a stronger relationship between the terms. Terms with high similarity to the seed terms of each subject are selected, and after filtering these expanded terms, the subject dictionary is finally constructed. The next phase allocates a subject to every sentence in the original documents. To grasp the content of each sentence, frequency analysis is first conducted with the terms composing the subject dictionaries. The TF-IDF weight of each subject is calculated from this frequency analysis, which shows how much each sentence says about each subject. However, TF-IDF weights can grow without bound, so the TF-IDF weights of each subject over the sentences are normalized to values between 0 and 1.
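The seed-term expansion step can be sketched with cosine similarity over word vectors. The toy 3-dimensional vectors below are made-up illustrations; in the actual method they would come from a Word2Vec model trained on the corpus:

```python
import numpy as np

def cosine(u: np.ndarray, v: np.ndarray) -> float:
    """Cosine similarity between two word vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def expand_seed(seed: str, vectors: dict, threshold: float = 0.8) -> list:
    """Return terms whose embedding is cosine-similar to the seed term."""
    sv = vectors[seed]
    return sorted(t for t, v in vectors.items()
                  if t != seed and cosine(sv, v) >= threshold)

# Toy "word vectors"; a real dictionary would be built from a trained model.
vectors = {
    "clean":    np.array([0.9, 0.1, 0.0]),
    "spotless": np.array([0.8, 0.2, 0.1]),
    "dirty":    np.array([-0.7, 0.1, 0.2]),
}
print(expand_seed("clean", vectors))  # only "spotless" clears the threshold
```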
Each sentence is then allocated to the subject with the maximum TF-IDF weight among all subjects, so that sentence groups are finally constructed for each subject. The last phase is summary generation. Sen2Vec is used to measure the similarity between the sentences of each subject, forming a similarity matrix, and by repeatedly selecting sentences it is possible to generate a summary that fully covers the contents of the original documents while minimizing internal duplication. To evaluate the proposed method, 50,000 TripAdvisor reviews were used to construct the subject dictionaries and 23,087 reviews were used to generate summaries. A comparison between summaries from the proposed method and frequency-based summaries verified that the proposed method better retains the balance of the subjects that the documents originally contain.
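The allocation step described above (normalize each subject's TF-IDF weights to [0, 1], then assign each sentence to its maximum-weight subject) can be sketched as follows; the weights are fabricated for illustration:

```python
import numpy as np

# Hypothetical TF-IDF weights: rows = sentences, columns = subjects.
tfidf = np.array([
    [2.0, 0.5],
    [0.1, 3.0],
    [1.0, 1.0],
])

# Min-max normalize each subject column to [0, 1] so that subjects with
# large raw TF-IDF values do not dominate the assignment.
col_min = tfidf.min(axis=0)
col_max = tfidf.max(axis=0)
norm = (tfidf - col_min) / (col_max - col_min)

# Each sentence goes to the subject with its maximum normalized weight.
assignment = norm.argmax(axis=1)
print(assignment.tolist())
```

Note how sentence 2, whose raw weights tie at 1.0, lands on subject 0 after normalization, because subject 1's weights span a wider raw range.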

How to improve the accuracy of recommendation systems: Combining ratings and review texts sentiment scores (평점과 리뷰 텍스트 감성분석을 결합한 추천시스템 향상 방안 연구)

  • Hyun, Jiyeon;Ryu, Sangyi;Lee, Sang-Yong Tom
    • Journal of Intelligence and Information Systems, v.25 no.1, pp.219-239, 2019
  • As providing customized services to individuals becomes more important, research on personalized recommendation systems is constantly being carried out. Collaborative filtering is one of the most popular approaches in academia and industry. However, it has the limitation that recommendations are based mostly on quantitative information such as users' ratings, which lowers accuracy. To solve this problem, many studies have attempted to improve the performance of recommendation systems by using information beyond the quantitative ratings; good examples are the uses of sentiment analysis on customer review texts. Nevertheless, existing research has not directly combined the results of sentiment analysis with the quantitative rating scores in the recommendation system. This study therefore aims to reflect the sentiments shown in reviews in the rating scores. In other words, we propose a new algorithm that converts a user's own review into quantitative information and reflects it directly in the recommendation system. To do this, we needed to quantify users' reviews, which are originally qualitative information. In this study, sentiment scores were calculated through the sentiment analysis techniques of text mining. The data were movie reviews, and based on them a domain-specific sentiment dictionary was constructed. Regression analysis was used to construct the sentiment dictionary: positive/negative dictionaries were constructed using the Lasso, Ridge, and ElasticNet regression methods. The accuracy of each constructed dictionary was verified through a confusion matrix: the Lasso-based dictionary achieved 70% accuracy, the Ridge-based dictionary 79%, and the ElasticNet-based dictionary (α = 0.3) 83%.
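Dictionary-based scoring and the confusion-matrix accuracy check can be sketched as below. The term weights and labeled reviews are hypothetical stand-ins for the paper's regression-derived dictionary, not its actual coefficients:

```python
# Hypothetical term weights, standing in for regression coefficients.
weights = {"great": 1.2, "fun": 0.8, "boring": -1.0, "bad": -1.5}

def sentiment_score(review: str) -> float:
    """Sum the dictionary weights of the terms appearing in the review."""
    return sum(weights.get(tok, 0.0) for tok in review.lower().split())

def accuracy(reviews, labels) -> float:
    """Fraction of reviews whose score sign matches the true polarity."""
    preds = [1 if sentiment_score(r) >= 0 else 0 for r in reviews]
    return sum(p == y for p, y in zip(preds, labels)) / len(labels)

reviews = ["great fun movie", "boring and bad", "bad acting", "fun plot"]
labels = [1, 0, 0, 1]  # 1 = positive, 0 = negative
print(accuracy(reviews, labels))
```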
Therefore, in this study, the sentiment score of each review was calculated with the ElasticNet-based dictionary and combined with the rating to create a new rating. We show that collaborative filtering that reflects the sentiment scores of user reviews is superior to the traditional method that considers only the existing ratings. To show this, the proposed algorithm was compared against memory-based user-based collaborative filtering (UBCF), item-based collaborative filtering (IBCF), and the model-based matrix factorization methods SVD and SVD++. The mean absolute error (MAE) and root mean square error (RMSE) were calculated to compare the recommender using the combined scores with one that considers ratings alone. By MAE, the improvement was 0.059 for UBCF, 0.0862 for IBCF, 0.1012 for SVD, and 0.188 for SVD++; by RMSE, it was 0.0431 for UBCF, 0.0882 for IBCF, 0.1103 for SVD, and 0.1756 for SVD++. As a result, the prediction performance of the ratings reflecting the sentiment scores proposed in this paper is superior to that of the conventional ratings. In other words, collaborative filtering that reflects the sentiment scores of user reviews shows superior accuracy compared with conventional collaborative filtering that considers only the quantitative scores. We then performed paired t-test validation and concluded that the proposed model is the better approach. To overcome the limitation of previous research that judges a user's sentiment only by the quantitative rating, this study quantified the review numerically so that the user's opinion is considered in a more refined way in the recommendation system, improving its accuracy.
The findings of this study have managerial implications for recommendation-system developers, who are expected to need to consider both quantitative and qualitative information. The way the combined system is constructed in this paper could be used directly by such developers.
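The rating-sentiment combination and the MAE/RMSE comparison described above can be sketched as follows. The blending weight and the numbers are illustrative assumptions, not the paper's exact formula:

```python
import numpy as np

def combine(rating, sentiment, w=0.3):
    """Blend a 1-5 star rating with a sentiment score rescaled to 1-5."""
    return (1 - w) * rating + w * sentiment

def mae(pred, actual):
    return float(np.mean(np.abs(np.asarray(pred) - np.asarray(actual))))

def rmse(pred, actual):
    return float(np.sqrt(np.mean((np.asarray(pred) - np.asarray(actual)) ** 2)))

ratings   = np.array([4.0, 2.0, 5.0])
sentiment = np.array([4.5, 1.0, 5.0])  # dictionary scores rescaled to 1-5
new_ratings = combine(ratings, sentiment)
print(new_ratings.tolist())
```

In the paper's evaluation, such MAE/RMSE values are computed against held-out true ratings for both the plain and the sentiment-adjusted recommender, and the differences are tested with a paired t-test.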

Analyzing the User Intention of Booth Recommender System in Smart Exhibition Environment (스마트 전시환경에서 부스 추천시스템의 사용자 의도에 관한 조사연구)

  • Choi, Jae Ho;Xiang, Jun-Yong;Moon, Hyun Sil;Choi, Il Young;Kim, Jae Kyeong
    • Journal of Intelligence and Information Systems, v.18 no.3, pp.153-169, 2012
  • Exhibitions have played a key role as an effective marketing activity that directly presents services and products to current and potential customers. By participating in exhibitions, exhibitors get the opportunity for face-to-face contact, through which they can secure market share and improve their corporate image. Given this economic importance of exhibitions, show organizers try to adopt new IT technologies to improve their performance, and researchers have studied services that can improve visitor satisfaction by analyzing visitors' visit patterns. In particular, since smart technologies allow the activities of visitors to be monitored in real time, booth recommender systems have been considered that infer visitors' preferences and recommend appropriate services to them, as in the online environment. However, while there are many studies on improving performance through new technological development, the factors driving visitors' choice of booth recommender systems have not been considered; that is, studies of the factors that can influence the development direction and effective diffusion of these systems are insufficient. Most prior studies of the acceptance of new technologies and of continuous intention to use have adopted the Technology Acceptance Model (TAM) and the Extended Technology Acceptance Model (ETAM). Booth recommender systems may not seem to be a new technology, because they resemble commercial recommender systems such as book recommenders; in the smart exhibition environment, however, they can be considered new. To reflect the smart exhibition environment beyond TAM, measurements of the intention to reuse should focus on how booth recommender systems can provide correct information to visitors.
In this study, through a literature review, we identify factors that can influence visitors' satisfaction with and intention to reuse booth recommender systems, and design a model to forecast visitors' adoption of booth recommendation in the exhibition environment. For these purposes, we surveyed visitors who attended DMC Culture Open in November 2011 and experienced the booth recommender system on their own smartphones, and we examined the hypotheses by regression analysis. As a result, the factors that influence visitors' satisfaction with booth recommender systems include effectiveness, perceived ease of use, argument quality, and serendipity. Moreover, satisfaction with booth recommender systems has a positive relationship with the development of reuse intention. These results yield several insights for booth recommender systems in the smart exhibition environment. First, this study gives shape to the important factors to consider when establishing strategies that induce visitors to use booth recommender systems consistently. Recently, although show organizers have tried to improve their performance using new IT technologies, visitors have not felt satisfied by these efforts; this study can help organizers provide services that improve visitor satisfaction and sustain lasting relationships with visitors. Second, this study suggests that organizers should manage the systems differently over the period of use. For example, in the early stage of adoption, they should focus on argument quality, perceived ease of use, and serendipity to improve the acceptance of booth recommender systems; after that stage, they should bridge the differences between expectations and perceptions of the systems and lead visitors toward continuous use. However, this study has limitations: we used only four factors that can influence visitor satisfaction.
Therefore, our model should be developed to consider additional important factors. Also, the exhibition in our experiment had a small number of booths, so visitors may not have needed a booth recommender system. In future work, we will conduct experiments in a larger-scale exhibition environment.
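The hypothesis testing above rests on ordinary least-squares regression of satisfaction on the candidate factors. A minimal numpy sketch with fabricated illustrative data (not the DMC Culture Open survey responses):

```python
import numpy as np

# Hypothetical predictor scores (e.g. effectiveness, ease of use) and a
# satisfaction response that follows y = 1 + 2*x1 + 0.5*x2 exactly.
X = np.array([[1.0, 2.0], [2.0, 1.0], [3.0, 3.0], [4.0, 2.0]])
y = 1.0 + 2.0 * X[:, 0] + 0.5 * X[:, 1]

# Add an intercept column and solve the least-squares problem.
design = np.column_stack([np.ones(len(X)), X])
beta, *_ = np.linalg.lstsq(design, y, rcond=None)
print(beta.round(3).tolist())
```

In the actual study, the fitted coefficients and their significance tests are what establish which factors (effectiveness, ease of use, argument quality, serendipity) influence satisfaction.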

Characteristics of Everyday Movement Represented in Steve Paxton's Works: Focused on Satisfyin' Lover, Bound, and Contact at 10th & 2nd (스티브 팩스톤(Steve Paxton)의 작품에서 나타난 일상적 움직임의 특성에 관한 연구: Satisfyin' Lover, Bound, Contact at 10th & 2nd를 중심으로)

  • KIM, Hyunhee
    • Trans-, v.3, pp.109-135, 2017
  • The purpose of this thesis is to analyze the characteristics of everyday movement shown in the performances of Steve Paxton. For a long time, the work of art was treated as a special object enjoyed by the upper classes as high culture. A great gap therefore existed between everyday life and art, and the emergence of everyday elements in works of art indicates a change in public awareness accompanying social change. Postmodernism, the period in which the boundary between art and everyday life became uncertain, arose in the postwar society after the Second World War, in a situation rapidly changing into a capitalist society. The changes of this period led scholars to approach concepts of everyday life academically, and affected artists through the pluralistic postmodern spirit of the age, which refused totality. In the same period, modern dance also faced a turning point, toward post-modern dance. After the Second World War, modern dance began to be judged as having reached its limit, and at this juncture a new movement was led by dancers including those of the Judson Dance Theatre. While dancing in Merce Cunningham's company, Steve Paxton, one of the founders of the Judson Dance Theatre, grew critical of the conditions of the dance company, its social structure, and the process by which movement is made. This thinking appeared in his early performances as an attempt to realize everyday motion itself on stage. His early activity, represented by the walking motion, attracted attention as a simple motion that excludes all the artful elements of existing dance performances and that can be performed by a person who is not a dancer. Although the adoption of everyday movement is regarded as a defining open characteristic of post-modern dance, prior research on it is rare, which motivated this study. In addition, studies of Steve Paxton are skewed toward Contact Improvisation, of which he was an active practitioner.
Focusing on his use of ordinary movement before he concentrated on Contact Improvisation, this study examines his other attempts, including Contact Improvisation, as developments following the beginning of his performance work. The study therefore analyzes Paxton's performances Satisfyin' Lover, Contact at 10th & 2nd, and Bound, and from this analysis draws out their everyday characteristics. Related books, academic essays, dance articles, and reviews were also consulted to consider the concept of everyday life and to understand the dance-historical movement of post-modern dance. Paxton attracted attention for activity that began with a critical approach to the movement of existing modern dance. Through the walking of performers who are not dancers, the walking motion in Satisfyin' Lover gave aesthetic meaning to everyday movement. Afterward, influenced by Eastern ideas, he developed Contact Improvisation, which generates motion through the energy of natural laws. He also brought everyday objects into his performances and used a method of delivering various images through mundane movement and impromptu gestures originating from a relaxed body. The everyday movement of his performances represents a change in awareness of traditionally maintained dance performance, including a change in the dance genre of the period. His unprecedented, experimental activity should be highly evaluated as an effort to overcome the limits of modern dance.

The Evaluation of Difference according to Image Scan Duration in PET Scan using Short Half-Lived Radionuclide (단 반감기 핵종을 이용한 PET 검사 시 영상 획득 시간에 따른 정량성 평가)

  • Hong, Gun-Chul;Cha, Eun-Sun;Kwak, In-Suk;Lee, Hyuk;Park, Hoon;Choi, Choon-Ki;Seok, Jae-Dong
    • The Korean Journal of Nuclear Medicine Technology, v.16 no.1, pp.102-107, 2012
  • Purpose: Because of the rapid physical decay of short half-lived radionuclides, the event counts available for imaging are very limited; for this reason, a long scan duration is applied for more accurate quantitative analysis in examinations with relatively low sensitivity. The aim of this study was to evaluate the differences according to scan duration and to investigate a reasonable scan duration for PET scans using the radionuclides 11C and 18F. Materials and Methods: A 1994-NEMA phantom was filled with 30.08 ± 4.22 MBq of 11C and 40.08 ± 8.29 MBq of 18F diluted with distilled water. With 11C, dynamic images were acquired as 20 frames × 1 minute and a static image was acquired for 20 minutes; with 18F, dynamic images were acquired as 20 frames × 2.5 minutes and a static image was acquired for 50 minutes. All data were reconstructed with the same method, including time decay correction. A region of interest (ROI) was set on the images, and the maximum radioactivity concentration (maxRC, kBq/mL) was compared. We also compared the maxRC of dynamic images summed frame by frame to increase the total scan duration. Results: The maxRC over time for 11C was 3.85 ± 0.45 to 5.15 ± 0.50 kBq/mL in the dynamic images and 2.15 ± 0.26 kBq/mL in the static image. For 18F, the maxRC was 9.09 ± 0.42 to 9.48 ± 0.31 kBq/mL in the dynamic images and 7.24 ± 0.14 kBq/mL in the static image. In the summed 11C images, as the total scan duration increased to 5, 10, 15, and 20 minutes, the maxRC was 2.47 ± 0.4, 2.22 ± 0.37, 2.08 ± 0.42, and 1.95 ± 0.55 kBq/mL, respectively. For 18F, as the total scan duration increased to 12.5, 25, 37.5, and 50 minutes, the maxRC was 7.89 ± 0.27, 7.61 ± 0.23, 7.36 ± 0.21, and 7.31 ± 0.23 kBq/mL. Conclusion: As the elapsed time after injection increased, the maxRC increased by 33% and 4% in the dynamic studies of 11C and 18F, respectively.
Also, as the total scan duration increased, the maxRC was reduced by 50% and 20% in the summed images of 11C and 18F, respectively. The percentage differences are larger in studies using the relatively shorter half-lived radionuclide. It appears that the accuracy of the decay correction declines not only with increasing scan duration but also with increasing elapsed time from the start of acquisition. In studies using 18F there was no large difference, so it is not necessary to consider the error of quantitative evaluation according to elapsed time. For studies using a shorter half-lived radionuclide such as 11C, it is recommended either to apply an additional decay-correction method that accounts for the error related to elapsed time, or to set the scan duration of the static image to less than 5 minutes, corresponding to 25% of the half-life.
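The decay behavior behind these results follows A(t) = A0 · exp(−ln 2 · t / T½). A short sketch comparing the fraction of activity remaining for 11C (T½ ≈ 20.4 min) and 18F (T½ ≈ 109.8 min) over a 20-minute acquisition:

```python
import math

HALF_LIFE_MIN = {"C-11": 20.4, "F-18": 109.8}  # approximate half-lives

def remaining_fraction(nuclide: str, elapsed_min: float) -> float:
    """A(t)/A0 = exp(-ln2 * t / T_half) for the given nuclide."""
    t_half = HALF_LIFE_MIN[nuclide]
    return math.exp(-math.log(2) * elapsed_min / t_half)

# After 20 minutes, roughly half of the 11C has decayed away,
# while 18F has lost comparatively little activity.
for nuclide in ("C-11", "F-18"):
    print(nuclide, round(remaining_fraction(nuclide, 20.0), 3))
```

This steep loss for 11C is why errors in the decay correction accumulate much faster for the shorter-lived nuclide as scan duration and elapsed time grow.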
