• Title/Summary/Keyword: importance-performance

Real data-based active sonar signal synthesis method (실데이터 기반 능동 소나 신호 합성 방법론)

  • Yunsu Kim;Juho Kim;Jongwon Seok;Jungpyo Hong
    • The Journal of the Acoustical Society of Korea
    • /
    • v.43 no.1
    • /
    • pp.9-18
    • /
    • 2024
  • The importance of active sonar systems is growing because of the increasing quietness of underwater targets and the rising ambient noise caused by heavier maritime traffic. However, the low signal-to-noise ratio of echo signals, caused by multipath propagation, various clutter, ambient noise, and reverberation, makes it difficult to identify underwater targets with active sonar. Data-based methods such as machine learning and deep learning have been applied to improve the performance of underwater target recognition systems, but the nature of sonar datasets makes it difficult to collect enough data for training. Methods based on mathematical modeling have mainly been used to compensate for the shortage of active sonar data, yet such methods are limited in accurately simulating complex underwater phenomena. Therefore, in this paper, we propose a sonar signal synthesis method based on a deep neural network. To apply a neural network model to sonar signal synthesis, the proposed method adapts the attention-based encoder and decoder, the main modules of the Tacotron model widely used in speech synthesis, to sonar signals. Training the proposed model on a dataset collected by deploying a simulated target in a real marine environment makes it possible to synthesize signals that more closely resemble actual signals. To verify the performance of the proposed method, a Perceptual Evaluation of Audio Quality (PEAQ) test was conducted; the synthesized signals showed a score difference within -2.3 relative to the actual signals across a total of four different environments. These results demonstrate that the active sonar signals generated by the proposed method closely approximate actual signals.
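
To make the adapted architecture concrete, the following is a minimal sketch of a Tacotron-style attention-based encoder-decoder applied to sonar spectrogram frames. All module names, dimensions, and the conditioning setup are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of a Tacotron-style attention encoder-decoder adapted to
# sonar spectrogram frames. Dimensions and names are illustrative only.
import torch
import torch.nn as nn

class SonarSynthesizer(nn.Module):
    def __init__(self, n_mels=80, enc_dim=256, dec_dim=256):
        super().__init__()
        # Encoder: bidirectional RNN over conditioning frames
        self.encoder = nn.GRU(n_mels, enc_dim // 2, batch_first=True,
                              bidirectional=True)
        # Content-based (Bahdanau-style) attention
        self.query_proj = nn.Linear(dec_dim, enc_dim)
        self.energy = nn.Linear(enc_dim, 1)
        # Decoder: autoregressive cell that emits one frame per step
        self.decoder = nn.GRUCell(n_mels + enc_dim, dec_dim)
        self.frame_out = nn.Linear(dec_dim, n_mels)

    def forward(self, cond, n_steps):
        memory, _ = self.encoder(cond)               # (B, T_in, enc_dim)
        B = cond.size(0)
        h = cond.new_zeros(B, self.decoder.hidden_size)
        frame = cond.new_zeros(B, self.frame_out.out_features)
        outputs = []
        for _ in range(n_steps):
            # attention weights over the encoder memory
            q = self.query_proj(h).unsqueeze(1)       # (B, 1, enc_dim)
            e = self.energy(torch.tanh(memory + q))   # (B, T_in, 1)
            a = torch.softmax(e, dim=1)
            context = (a * memory).sum(dim=1)         # (B, enc_dim)
            h = self.decoder(torch.cat([frame, context], dim=-1), h)
            frame = self.frame_out(h)
            outputs.append(frame)
        return torch.stack(outputs, dim=1)            # (B, n_steps, n_mels)

model = SonarSynthesizer()
cond = torch.randn(2, 50, 80)       # fake conditioning spectrogram
synth = model(cond, n_steps=120)    # synthesized frames
```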

The Impact of Utilizing Online Outsourcing in Startups on Member Organizational Commitment and Job Satisfaction (스타트업의 온라인 아웃소싱 활용이 구성원 조직몰입과 직무만족에 미치는 영향에 관한 연구)

  • Kim, Joonhak;Park, Jae-Whan
    • Asia-Pacific Journal of Business Venturing and Entrepreneurship
    • /
    • v.19 no.3
    • /
    • pp.139-153
    • /
    • 2024
  • The importance of sustainable growth and cost reduction has increased globally, leading companies to expand their use of outsourcing. In addition, the spread of the platform economy has changed the way we work, and the online outsourcing market, in which tasks are mediated through platforms, is growing. Academically, while research on general outsourcing is actively conducted, studies on online outsourcing remain scarce relative to its actual utilization. This study aims to analyze the motivating factors and performance outcomes of startups' use of online outsourcing, to identify the benefits and concerns of online outsourcing from multiple perspectives, and to suggest the roles of various stakeholders for effective utilization and industry development. A survey was conducted with 281 startup employees who had experience using online outsourcing, and the main findings are as follows. First, the enhancement of efficiency, profitability, and innovation through online outsourcing positively affects the organizational commitment and job satisfaction of startup members. In particular, improved efficiency from online outsourcing has a significant effect on job satisfaction. Second, concerns about the burden of online outsourcing fees or uncertain outcomes negatively affect organizational commitment and job satisfaction. Third, motivations and perceived performance of online outsourcing differ by job position: practitioners perceive that online outsourcing increases organizational commitment, whereas managers have relatively higher concerns about the uncertainty of outsourced task outcomes and about information security. This study confirms that human resource shortages and employee management issues in startups can be alleviated through online outsourcing. By verifying the influence of various factors of online outsourcing utilization, it also provides meaningful implications for the business strategies of online outsourcing intermediary platform companies and for the startup support policies of government and other support organizations.

Stock Price Prediction by Utilizing Category Neutral Terms: Text Mining Approach (카테고리 중립 단어 활용을 통한 주가 예측 방안: 텍스트 마이닝 활용)

  • Lee, Minsik;Lee, Hong Joo
    • Journal of Intelligence and Information Systems
    • /
    • v.23 no.2
    • /
    • pp.123-138
    • /
    • 2017
  • Since the stock market is driven by traders' expectations, studies have been conducted to predict stock price movements by analyzing various sources of text data. Research has examined not only the relationship between text data and stock price fluctuations, but also trading strategies based on news articles and social media responses. Studies predicting stock price movements have applied classification algorithms to a term-document matrix constructed in the same way as in other text mining approaches. Because documents contain many words, it is better to select the words that contribute most when building a term-document matrix. Words with too little frequency or importance are removed, and words are also selected according to how much they contribute to correctly classifying a document. The conventional approach to constructing a term-document matrix is to collect all documents to be analyzed and select the words that influence classification. In this study, we instead analyze the documents for each individual stock and select the words that are irrelevant to all categories as neutral words. We then extract the words around each selected neutral word and use them to generate the term-document matrix. The approach starts from the idea that stock movements are less related to the presence of the neutral words themselves, and that the words surrounding a neutral word are more likely to affect stock price movements. The generated term-document matrix is then fed to an algorithm that classifies stock price fluctuations. We first removed stop words and selected neutral words for each stock, and we excluded from the selected words those that also appeared in news articles about other stocks. Through an online news portal, we collected four months of news articles on the top 10 stocks by market capitalization. We used three months of news data for training and applied the remaining month of news articles to the model to predict the next day's stock price movements. We used SVM, boosting, and random forest to build models and predict the movements of stock prices. The stock market was open for a total of 80 days during the four-month period (2016/02/01 ~ 2016/05/31); the first 60 days served as the training set and the remaining 20 days as the test set. The proposed word-based algorithm showed better classification performance than the sparsity-based word selection method. This study predicted stock price volatility by collecting and analyzing news articles on the top 10 stocks by market capitalization. We used a term-document-matrix-based classification model to estimate stock price fluctuations and compared the performance of the existing sparsity-based word extraction method with the suggested word-removal method. The suggested method differs from the conventional word extraction method in that it determines the words to extract using not only the news articles on the corresponding stock but also news on other stocks. In other words, it removed not only the words that appeared across both rises and falls, but also the words common to news about other stocks. When prediction accuracy was compared, the suggested method showed higher accuracy. The limitation of this study is that stock price prediction was framed as classifying rises and falls, and the experiment was conducted only on the top ten stocks, which do not represent the entire stock market. In addition, it is difficult to demonstrate investment performance because stock price fluctuations and profit rates may differ. Therefore, further research using more stocks and predicting returns through trading simulation is necessary.
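
The neutral-word construction described above can be sketched as follows; the toy documents, whitespace tokenization, and two-token context window are placeholder assumptions, and the cross-stock filtering step is omitted for brevity.

```python
# Sketch: words occurring in both "up" and "down" documents are neutral;
# only their surrounding words feed the term-document matrix.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.svm import SVC

def neutral_words(docs_up, docs_down):
    """Words appearing in documents of both price directions."""
    up = {w for d in docs_up for w in d.split()}
    down = {w for d in docs_down for w in d.split()}
    return up & down

def context_features(doc, neutrals, window=2):
    """Keep only words within `window` tokens of a neutral word."""
    toks = doc.split()
    keep = set()
    for i, t in enumerate(toks):
        if t in neutrals:
            keep.update(toks[max(0, i - window):i + window + 1])
    keep -= neutrals  # the neutral word itself carries little signal
    return " ".join(w for w in toks if w in keep)

docs_up = ["the stock gained after earnings beat",
           "analysts raised the stock target price"]
docs_down = ["the stock fell on weak guidance",
             "regulators fined the firm and the stock dropped"]
neutrals = neutral_words(docs_up, docs_down)     # here: {"the", "stock"}
corpus = [context_features(d, neutrals) for d in docs_up + docs_down]
X = CountVectorizer().fit_transform(corpus)      # term-document matrix
y = [1, 1, 0, 0]                                 # 1 = rise, 0 = fall
clf = SVC(kernel="linear").fit(X, y)             # one of the three models used
```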

Development and application of prediction model of hyperlipidemia using SVM and meta-learning algorithm (SVM과 meta-learning algorithm을 이용한 고지혈증 유병 예측모형 개발과 활용)

  • Lee, Seulki;Shin, Taeksoo
    • Journal of Intelligence and Information Systems
    • /
    • v.24 no.2
    • /
    • pp.111-124
    • /
    • 2018
  • This study aims to develop a classification model for predicting the occurrence of hyperlipidemia, one of the chronic diseases. Prior studies applying data mining techniques to disease prediction can be classified into studies designing models to predict cardiovascular disease and studies comparing disease prediction results. In the foreign literature, data mining studies predicting cardiovascular disease were predominant. Domestic studies were not much different, but focused mainly on hypertension and diabetes. Since hyperlipidemia, like hypertension and diabetes, is a chronic disease of high importance, this study selected it as the disease to be analyzed. We developed a model for predicting hyperlipidemia using SVM and meta-learning algorithms, which are already known to have excellent predictive power. To achieve the purpose of this study, we used the 2012 Korea Health Panel dataset. The Korea Health Panel produces basic data on health expenditure, health levels, and health behavior, and has been surveyed annually since 2008. In this study, 1,088 patients with hyperlipidemia were randomly selected from the hospitalization, outpatient, emergency, and chronic disease data of the 2012 Korea Health Panel, and 1,088 non-patients were also randomly extracted, for a total of 2,176 subjects. Three methods were used to select input variables for predicting hyperlipidemia. First, a stepwise method was performed using logistic regression. Among the 17 variables, the categorical variables (except for length of smoking) were expressed as dummy variables relative to a reference group, and these variables were analyzed. Six variables (age, BMI, education level, marital status, smoking status, and gender), excluding income level and smoking period, were selected at the 0.1 significance level. Second, C4.5, a decision tree algorithm, was used; the significant input variables were age, smoking status, and education level. Finally, a genetic algorithm was used. For SVM, the input variables selected by the genetic algorithm were six: age, marital status, education level, economic activity, smoking period, and physical activity status; for the artificial neural network, three: age, marital status, and education level. Based on the selected variables, we compared SVM, the meta-learning algorithm, and other prediction models for hyperlipidemia, comparing classification performance using TP rate and precision. The main results are as follows. First, the accuracy of the SVM was 88.4% and that of the artificial neural network was 86.7%. Second, classification models using the input variables selected by the stepwise method were slightly more accurate than those using all variables. Third, the precision of the artificial neural network was higher than that of the SVM when only the three variables selected by the decision tree were used as inputs. With the input variables selected by the genetic algorithm, the classification accuracy of the SVM was 88.5% and that of the artificial neural network was 87.9%. Finally, this study showed that stacking, the meta-learning algorithm proposed here, performs best when it uses the predicted outputs of the SVM and MLP as input variables of an SVM meta-classifier. The purpose of this study was to predict hyperlipidemia, one of the representative chronic diseases, using SVM and meta-learning algorithms, which are known to have high accuracy. As a result, the classification accuracy of stacking as a meta-learner was higher than that of the other meta-learning algorithms. However, the predictive performance of the proposed meta-learning algorithm equals that of the best-performing single model, the SVM (88.6%). The limitations of this study are as follows. First, although various variable selection methods were tried, most variables used were categorical dummy variables. With many categorical variables, the results might differ if continuous variables were used, because models such as decision trees suit categorical variables better than general models such as neural networks. Despite these limitations, this study is significant in predicting hyperlipidemia with hybrid models such as meta-learning algorithms, which had not been studied previously. Improving model accuracy by applying various variable selection techniques is also meaningful, and the proposed model is expected to be effective for the prevention and management of hyperlipidemia.
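
A minimal sketch of this stacking setup, with SVM and MLP as base learners and an SVM as the meta-classifier, might look as follows; the synthetic data and hyperparameters are placeholders for the Korea Health Panel variables, not the study's code.

```python
# Stacking: base-learner outputs become the inputs of an SVM meta-classifier.
from sklearn.datasets import make_classification
from sklearn.ensemble import StackingClassifier
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# placeholder for the 2,176-subject, 6-variable dataset described above
X, y = make_classification(n_samples=2176, n_features=6, random_state=0)

stack = StackingClassifier(
    estimators=[
        ("svm", make_pipeline(StandardScaler(), SVC(probability=True))),
        ("mlp", make_pipeline(StandardScaler(), MLPClassifier(max_iter=2000))),
    ],
    final_estimator=SVC(),  # meta-classifier, as in the study's best setup
)
print("CV accuracy:", cross_val_score(stack, X, y, cv=5).mean())
```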

A Study on Developing a VKOSPI Forecasting Model via GARCH Class Models for Intelligent Volatility Trading Systems (지능형 변동성트레이딩시스템개발을 위한 GARCH 모형을 통한 VKOSPI 예측모형 개발에 관한 연구)

  • Kim, Sun-Woong
    • Journal of Intelligence and Information Systems
    • /
    • v.16 no.2
    • /
    • pp.19-32
    • /
    • 2010
  • Volatility plays a central role in both academic and practical applications, especially in pricing financial derivatives and trading volatility strategies. This study presents a novel mechanism based on generalized autoregressive conditional heteroskedasticity (GARCH) models that can enhance the performance of intelligent volatility trading systems by predicting Korean stock market volatility more accurately. In particular, we embed the concept of volatility asymmetry, documented widely in the literature, into our model. The newly developed Korean stock market volatility index of the KOSPI 200, the VKOSPI, is used as a volatility proxy. It is the price of a linear portfolio of KOSPI 200 index options and measures the effect of the expectations of dealers and option traders on stock market volatility over 30 calendar days. The KOSPI 200 index options market started in 1997 and has become the most actively traded market in the world; its trading volume exceeds 10 million contracts a day, the highest of all stock index options markets. Therefore, analyzing the VKOSPI is of great importance for understanding the volatility inherent in option prices, and it can offer trading ideas for futures and options dealers. Using the VKOSPI as a volatility proxy avoids the statistical estimation problems associated with other measures of volatility, since the VKOSPI is the model-free expected volatility of market participants, calculated directly from transacted option prices. This study estimates symmetric and asymmetric GARCH models for the KOSPI 200 index from January 2003 to December 2006 by the maximum likelihood procedure. The asymmetric GARCH models include the GJR-GARCH model of Glosten, Jagannathan and Runkle, the exponential GARCH model of Nelson, and the power autoregressive conditional heteroskedasticity (ARCH) model of Ding, Granger and Engle; the symmetric model is the basic GARCH(1,1). Tomorrow's forecasted value and change direction of stock market volatility are obtained by recursive GARCH specifications from January 2007 to December 2009 and are compared with the VKOSPI. Empirical results indicate that negative unanticipated returns increase volatility more than positive return shocks of equal magnitude decrease it, indicating the existence of volatility asymmetry in the Korean stock market. The point value and change direction of tomorrow's VKOSPI are estimated and forecasted by the GARCH models. A volatility trading system is developed using the forecasted change direction of the VKOSPI: if the VKOSPI is expected to rise tomorrow, a long straddle or strangle position is established; a short straddle or strangle position is taken if the VKOSPI is expected to fall. Total profit is calculated as the cumulative sum of the VKOSPI percentage changes: if the forecasted direction is correct, the absolute value of the VKOSPI percentage change is added to the trading profit; it is subtracted if the forecasted direction is incorrect. For the in-sample period, the power ARCH model fits best on a statistical metric, the Mean Squared Prediction Error (MSPE), and the exponential GARCH model shows the highest Mean Correct Prediction (MCP). The power ARCH model also fits best for the out-of-sample period and provides the highest probability of predicting the direction of tomorrow's VKOSPI change. Generally, the power ARCH model shows the best fit for the VKOSPI. All the GARCH models yield trading profits for the volatility trading system, and the exponential GARCH model shows the best in-sample performance, an annual profit of 197.56%. The GARCH models also generate trading profits during the out-of-sample period, except for the exponential GARCH model. During the out-of-sample period, the power ARCH model shows the largest annual trading profit, 38%. The volatility clustering and asymmetry found in this research reflect volatility non-linearity. This further suggests that combining the asymmetric GARCH models with artificial neural networks could significantly enhance the performance of the suggested volatility trading system, since artificial neural networks have been shown to model nonlinear relationships effectively.
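
The model family above can be fitted with the `arch` package along these lines; the simulated returns, parameter choices, and the mapping of the power ARCH model onto the `power` argument are assumptions for illustration, not the paper's estimation code.

```python
# Fit symmetric and asymmetric GARCH specifications and forecast
# tomorrow's variance, the quantity the volatility trading rule acts on.
import numpy as np
from arch import arch_model

rng = np.random.default_rng(0)
returns = rng.standard_normal(1000)   # placeholder for KOSPI 200 returns (%)

specs = {
    "GARCH(1,1)": dict(vol="GARCH", p=1, q=1),
    "GJR-GARCH":  dict(vol="GARCH", p=1, o=1, q=1),  # o=1 adds asymmetry
    "EGARCH":     dict(vol="EGARCH", p=1, o=1, q=1),
    "PowerARCH":  dict(vol="GARCH", p=1, o=1, q=1, power=1.0),
}
for name, kw in specs.items():
    res = arch_model(returns, **kw).fit(disp="off")
    var_next = res.forecast(horizon=1).variance.iloc[-1, 0]
    # trading rule from the paper: expected rise -> long straddle/strangle,
    # expected fall -> short straddle/strangle
    print(f"{name:10s} next-day variance forecast: {var_next:.4f}")
```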

A Study on Perception and Attitudes of Health Workers Towards the Organization and Activities of Urban Health Centers (도시보건소 직원의 보건소 업무에 대한 인식 및 견해)

  • Lee, Jae-Mu;Kang, Pock-Soo;Lee, Kyeong-Soo;Kim, Cheon-Tae
    • Journal of Yeungnam Medical Science
    • /
    • v.12 no.2
    • /
    • pp.347-365
    • /
    • 1995
  • A survey was conducted to study the perceptions and attitudes of health workers towards health centers' activities and the organization of health services, from August 15 to September 30, 1994. The study population was 310 health workers engaged in seven urban health centers in the Taegu City area. A questionnaire was used to collect data, and the response rate was 81.3 percent, or 252 respondents. The findings are summarized as follows. Profiles of the study population: health workers were predominantly female (62.3%); had a college education (60.3%); and held medical and nursing positions (39.6%), technician positions (30.6%), and public health/administrative positions (29.8%). Perceptions of health centers' resources: slightly more than half (51.1%) of respondents stated that the physical facilities of the centers are inadequate; that needed equipment is in short supply (39.0%); that human resources are inadequate (44.8%); and that the allocated health budget is insufficient (38.5%) to support the performance of health center activities. Decentralization and health services: the majority stated that the decentralization of the government system would affect the future activities of health centers (51.9%), which may have to change. However, only a quarter of respondents (25.4%) viewed decentralization positively, expecting that it would help perform health activities more effectively. The majority of respondents (78.6%) insisted that the function and organization of urban health centers should be changed. Target workload and job satisfaction: a large proportion (43.3%) of respondents felt that the present target-setting systems for various health activities are unrealistic in terms of community needs and the health centers' situation, while only 11.1 percent responded positively; the majority (57.5%) stated that they need further training in their professional fields to perform their jobs more effectively; more than one third (35.7%) said that they enjoy professional autonomy in their job performance; and a considerable proportion (39.3%) said they are satisfied with their present work. Regarding personnel management, more workers (47.3%) perceived it negatively than positively (11.5%), as most workers seemed to think that the personnel management practiced at the health centers is not fair or just. Health services rendered: among the health services rendered, health workers perceived the following as most successfully delivered, in order of importance: TB control, curative services, and maternal and child health care. Areas such as health education, oral health, environmental sanitation, and integrated health services need to be strengthened. Regarding community attitudes towards health workers, 41.3 percent of respondents think they are trusted by the community they serve. New areas of concern identified that must be included in future health center activities are, in order of priority, health care for the elderly population, home health care, rehabilitation services, control programs for chronic diseases such as diabetes and hypertension, school health, and mental health care. In conclusion, the study revealed that health workers seemed to have more negative than positive perceptions and attitudes towards the organization and management of health services and the activities performed by the urban health centers where they are engaged. More specifically, the majority of health workers studied found the following areas of health center organization and management inadequate or insufficient to support the effective performance of their health activities: the required physical facilities and equipment are inadequate; human and financial resources are insufficient; personnel management is unsatisfactory; and the service target-setting system is unrealistic in terms of community needs. However, respondents displayed a number of positive perceptions, particularly regarding further training needs and the implementation of the decentralization of the government system, which will bring more autonomy to local government, as they perceived these changes would bring the necessary changes to the future activities of the health center. They also expressed positive perceptions of their job autonomy and job satisfaction.

A Folksonomy Ranking Framework: A Semantic Graph-based Approach (폭소노미 사이트를 위한 랭킹 프레임워크 설계: 시맨틱 그래프기반 접근)

  • Park, Hyun-Jung;Rho, Sang-Kyu
    • Asia pacific journal of information systems
    • /
    • v.21 no.2
    • /
    • pp.89-116
    • /
    • 2011
  • In collaborative tagging systems such as Delicious.com and Flickr.com, users assign keywords or tags to their uploaded resources, such as bookmarks and pictures, for future use or sharing purposes. The collection of resources and tags generated by a user is called a personomy, and the collection of all personomies constitutes the folksonomy. The most significant need of folksonomy users is to efficiently find useful resources or experts on specific topics. An excellent ranking algorithm would assign higher rank to more useful resources or experts. What resources are considered useful in a folksonomic system? Does a standard superior to frequency or freshness exist? A resource recommended by more users with more expertise should be worthy of attention. This ranking paradigm can be implemented through a graph-based ranking algorithm. Two well-known representatives of such a paradigm are PageRank by Google and HITS (Hypertext Induced Topic Selection) by Kleinberg. Both PageRank and HITS assign a higher evaluation score to pages linked to by more higher-scored pages. HITS differs from PageRank in that it uses two kinds of scores: authority and hub scores. The ranking objects of these algorithms are limited to Web pages, whereas the ranking objects of a folksonomic system are heterogeneous (i.e., users, resources, and tags). Therefore, uniformly applying the voting notion of PageRank and HITS to the links of a folksonomy would be unreasonable. In a folksonomic system, each link corresponding to a property can take the opposite direction, depending on whether the property is in an active or a passive voice. The current research stems from the idea that a graph-based ranking algorithm could be applied to the folksonomic system using the concept of mutual interactions between entities, rather than the voting notion of PageRank or HITS. The concept of mutual interactions, proposed for ranking Semantic Web resources, enables the calculation of importance scores of various resources unaffected by link direction. The weights of a property representing the mutual interaction between classes are assigned depending on the relative significance of the property to the resource importance of each class. This class-oriented approach is based on the fact that the Semantic Web contains many heterogeneous classes, so applying a different appraisal standard for each class is more reasonable. This is similar to how humans evaluate: different items are assigned specific weights, which are then summed to determine the weighted average. We can also check for missing properties more easily with this approach than with other predicate-oriented approaches. A user of a tagging system usually assigns more than one tag to the same resource, and there can be more than one tag with the same subjectivity and objectivity. When many users assign similar tags to the same resource, it becomes necessary to grade the users differently depending on the assignment order. This idea comes from studies in psychology showing that expertise involves the ability to select the most relevant information for achieving a goal. An expert should be someone who not only has a large collection of documents annotated with a particular tag, but also tends to add documents of high quality to his or her collection. Such documents are identified by the number, as well as the expertise, of users who have the same documents in their collections. In other words, there is a relationship of mutual reinforcement between the expertise of a user and the quality of a document. In addition, there is a need to rank entities related more closely to a certain entity. Considering that in social media the popularity of a topic is temporary, recent data should carry more weight than old data. We propose a comprehensive folksonomy ranking framework in which all these considerations are addressed and which can be easily customized to each folksonomy site for ranking purposes. To examine the validity of our ranking algorithm and show the mechanism of adjusting property, time, and expertise weights, we first use a dataset designed to analyze the effect of each ranking factor independently. We then show the ranking results of a real folksonomy site, with the ranking factors combined. Because the ground truth of a given dataset is not known when it comes to ranking, we inject simulated data whose ranking results can be predicted into the real dataset and compare the ranking results of our algorithm with those of a previous HITS-based algorithm. Our semantic ranking algorithm based on the concept of mutual interaction seems preferable to the HITS-based algorithm as a flexible folksonomy ranking framework. Some concrete points of difference are as follows. First, with the time concept applied to the property weights, our algorithm shows superior performance in lowering the scores of older data and raising the scores of newer data. Second, by applying the time concept to the expertise weights as well as to the property weights, our algorithm controls the conflicting influence of expertise weights and enhances the overall consistency of time-valued ranking. The expertise weights of the previous study can hinder time-valued ranking because the number of followers increases over time. Third, many new properties and classes can be included in our framework. The previous HITS-based algorithm, based on the voting notion, loses ground when the domain consists of more than two classes, or when other important properties, such as "sent through twitter" or "registered as a friend," are added to the domain. Fourth, there is a big difference in calculation time and memory use between the two kinds of algorithms: while the multiplication of two matrices has to be executed twice for the previous HITS-based algorithm, this is unnecessary with ours. In our ranking framework, various folksonomy ranking policies can be expressed by combining the ranking factors, and our approach works even if the folksonomy site is not implemented with Semantic Web languages. Above all, the time weight proposed in this paper will be applicable to various domains, including social media, where time value is considered important.
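
A toy sketch of the mutual-interaction idea follows: importance scores propagate over a heterogeneous graph whose edge weights combine a property weight with a time decay, with no regard to link direction. The tiny user-resource graph and all weights are invented for illustration; the paper's class-oriented weighting scheme is richer.

```python
# Power iteration over a symmetric, weighted user-resource graph:
# scores of users and resources mutually reinforce each other.
import numpy as np

# nodes 0-2: users, nodes 3-5: resources; small self-weights keep the
# power iteration from oscillating on an otherwise bipartite graph
W = np.eye(6) * 0.1
links = [(0, 3, 1.0), (1, 3, 0.8), (1, 4, 0.6), (2, 5, 0.9)]
for u, r, w in links:                  # w = property weight * time decay
    W[u, r] = W[r, u] = w              # mutual interaction: no link direction

scores = np.full(6, 1 / 6)             # start from uniform importance
for _ in range(50):                     # iterate until (approximate) convergence
    scores = W @ scores                 # neighbors reinforce each other
    scores /= scores.sum()              # renormalize each round
print(scores.round(3))                  # resource 3, backed by both users, ranks top
```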

Analysis of Prognostic Factors Related to Survival Time for Patients with Small Cell Lung Cancer (소세포폐암 환자의 생존기간에 관련된 인자 분석)

  • Kim, Hee-Kyoo;Yook, Dong-Seung;Shin, Ho-Sik;Kim, Eun-Seok;Lim, Hyun-Jeung;Lim, Tae-Kwan;Ok, Chul-Ho;Cho, Hyun-Myung;Jung, Maan-Hong;Jang, Tae-Won
    • Tuberculosis and Respiratory Diseases
    • /
    • v.54 no.1
    • /
    • pp.57-70
    • /
    • 2003
  • Background: Small cell lung cancer represents approximately 20% of all carcinomas of the lung, and is recognized as having a poor long-term outcome compared to non-small cell lung cancer. Therefore, this study investigated the prognostic factors in small cell lung cancer patients in order to improve the survival rate through proper therapeutic methods. Materials and methods: The clinical data from 394 patients who were diagnosed with small cell lung cancer and treated from 1993 to 2001 at Kosin University Gospel Hospital were analyzed. Results: There were 314 male patients (79.7%) and 80 female patients (20.3%). The number of those with limited disease was 177 (44.9%), and the number with extensive disease was 217 (55.1%). Overall, 366 of the 394 enrolled patients had died. The median survival time was 215 days (95% CI: 192-237 days). The disease stage, Karnofsky performance status, 5% body weight loss during the preceding 3 months, chemotherapy regimens, and additive chest radiotherapy were identified as statistically significant factors for survival time. The median survival times of the supportive care group, the single anticancer therapy group, and the group given two or more treatments were 17 days, 211 days, and 419 days, respectively (p<0.001). These data emphasize the importance of anticancer treatment in improving patients' survival time. The concurrent chemoradiotherapy group (30 patients) showed a significantly longer survival time than the sequential chemoradiotherapy group (55 patients) (528 days versus 373 days, p=0.0237). The favorable laboratory prognostic factors were leukocytes ≤8,000/mm³, ALP ≤200 U/L, LDH ≤450 IU/L, NSE ≤15 ng/mL, and s-GOT ≤40 IU/L. In extensive disease, there was no difference according to the number of metastatic sites; however, the median survival time of patients with ipsilateral pleural effusion was longer than that of patients with other metastatic sites. By survey period, the patients were divided into three groups: 1993-1995, 1996-1998, and 1999-2001. The median survival time was significantly prolonged after 1999 compared with the earlier groups (177 days, 194 days, and 289 days; p=0.001 and 0.002, respectively). Conclusion: Disease stage and 5% body weight loss during the preceding 3 months at diagnosis were significant prognostic factors. In addition, performance status and serum ALP, LDH, NSE, and CEA levels also appear to be prognostic factors. The survival time of patients with small cell lung cancer has been prolonged in recent years, suggesting that the use of EP (etoposide and cisplatin) chemotherapy and concurrent chemoradiotherapy for patients with limited-stage disease contributed to the improved survival time.
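
For readers unfamiliar with how such median survival times and p-values are obtained, a small sketch with the lifelines package is given below; the toy records are invented and only mimic the concurrent-versus-sequential chemoradiotherapy comparison, not the study's data.

```python
# Kaplan-Meier median survival estimates and a log-rank test on toy records.
import pandas as pd
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

df = pd.DataFrame({
    "days":  [210, 520, 190, 410, 330, 600, 150, 380],   # survival time
    "dead":  [1, 1, 1, 0, 1, 0, 1, 1],                   # 1 = death observed
    "group": ["seq", "con", "seq", "con", "seq", "con", "seq", "con"],
})

km = KaplanMeierFitter()
for name, sub in df.groupby("group"):
    km.fit(sub["days"], sub["dead"], label=name)
    print(name, "median survival:", km.median_survival_time_)

seq, con = df[df.group == "seq"], df[df.group == "con"]
result = logrank_test(seq.days, con.days, seq.dead, con.dead)
print("log-rank p-value:", result.p_value)
```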

Analyzing the User Intention of Booth Recommender System in Smart Exhibition Environment (스마트 전시환경에서 부스 추천시스템의 사용자 의도에 관한 조사연구)

  • Choi, Jae Ho;Xiang, Jun-Yong;Moon, Hyun Sil;Choi, Il Young;Kim, Jae Kyeong
    • Journal of Intelligence and Information Systems
    • /
    • v.18 no.3
    • /
    • pp.153-169
    • /
    • 2012
  • Exhibitions have played a key role in effective marketing, directly introducing services and products to current and potential customers. By participating in exhibitions, exhibitors gain opportunities for face-to-face contact, allowing them to secure market share and improve their corporate image. Given this economic importance of exhibitions, show organizers try to adopt new IT technologies to improve their performance, and researchers have studied services that can improve visitor satisfaction by analyzing visitors' visit patterns. In particular, as smart technologies make it possible to monitor visitors' activities in real time, booth recommender systems have been considered, which infer visitors' preferences and recommend proper services to them, as in the online environment. However, while many studies address performance improvements on the side of new technological development, they have not considered the factors that lead visitors to choose booth recommender systems. That is, studies on the factors that can influence the development direction and effective diffusion of these systems are insufficient. Most prior studies on the acceptance of new technologies and continuous intention of use have adopted the Technology Acceptance Model (TAM) and the Extended Technology Acceptance Model (ETAM). Booth recommender systems may not seem to be a new technology, because they resemble commercial recommender systems such as book recommenders; in the smart exhibition environment, however, they can be considered one. To account for the smart exhibition environment beyond TAM, measurements of reuse intention should focus on how booth recommender systems can provide correct information to visitors. In this study, through a literature review, we identify factors that can influence visitors' satisfaction with and reuse intention toward booth recommender systems, and design a model to forecast visitors' adoption of booth recommendation in the exhibition environment. For these purposes, we conducted a survey of visitors who attended DMC Culture Open in November 2011 and experienced a booth recommender system on their own smartphones, and examined the hypotheses by regression analysis. As a result, the factors that influence visitors' satisfaction with booth recommender systems include effectiveness, perceived ease of use, argument quality, and serendipity. Moreover, satisfaction with booth recommender systems has a positive relationship with the development of reuse intention. These results yield several insights for booth recommender systems in the smart exhibition environment. First, this study identifies the important factors to consider when establishing strategies that induce visitors to use booth recommender systems consistently. Recently, although show organizers have tried to improve their performance using new IT technologies, their visitors have not felt satisfied by these efforts. This study can help organizers provide services that improve visitor satisfaction and build lasting relationships with visitors. Second, this study suggests that managers should adjust their focus over the stages of booth recommender system use. For example, in the early stage of adoption, they should focus on argument quality, perceived ease of use, and serendipity, so as to improve the acceptance of booth recommender systems. After these stages, they should bridge the gap between expectation and perception of booth recommender systems and encourage visitors' continued use. However, this study has some limitations. We use only four factors that can influence visitor satisfaction; future work should extend the model to consider additional important factors. Moreover, the exhibition in our experiment had a small number of booths, so visitors may not have needed a booth recommender system. In a future study, we will conduct experiments in a larger-scale exhibition environment.
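
The hypothesis test reported above is a standard multiple regression of satisfaction on the candidate factors; a sketch of that analysis is below, with simulated Likert-scale responses standing in for the survey data (the factor names follow the abstract, the coefficients are invented).

```python
# OLS regression of satisfaction on the four candidate factors.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 120                                          # simulated respondents
df = pd.DataFrame({
    "effectiveness":    rng.integers(1, 8, n),   # 7-point Likert items
    "ease_of_use":      rng.integers(1, 8, n),
    "argument_quality": rng.integers(1, 8, n),
    "serendipity":      rng.integers(1, 8, n),
})
# invented relationship, for illustration only
df["satisfaction"] = (0.4 * df.effectiveness + 0.2 * df.ease_of_use
                      + 0.2 * df.argument_quality + 0.1 * df.serendipity
                      + rng.normal(0, 1, n))

model = smf.ols(
    "satisfaction ~ effectiveness + ease_of_use + argument_quality + serendipity",
    data=df,
).fit()
print(model.summary())                           # coefficients and p-values
```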

Twitter Issue Tracking System by Topic Modeling Techniques (토픽 모델링을 이용한 트위터 이슈 트래킹 시스템)

  • Bae, Jung-Hwan;Han, Nam-Gi;Song, Min
    • Journal of Intelligence and Information Systems
    • /
    • v.20 no.2
    • /
    • pp.109-122
    • /
    • 2014
  • People nowadays create a tremendous amount of data on Social Network Services (SNS). In particular, the incorporation of SNS into mobile devices has resulted in massive amounts of data generation, thereby greatly influencing society. This is an unmatched phenomenon in history, and now we live in the Age of Big Data. SNS data satisfies the defining conditions of Big Data: the amount of data (volume), data input and output speeds (velocity), and the variety of data types (variety). If one can discover the trend of an issue in SNS Big Data, this information can serve as an important new source for the creation of value, because it covers the whole of society. In this study, a Twitter Issue Tracking System (TITS) is designed and built to meet the needs of analyzing SNS Big Data. TITS extracts issues from Twitter texts and visualizes them on the web. The proposed system provides the following four functions: (1) provide the topic keyword set that corresponds to the daily ranking; (2) visualize the daily time series graph of a topic over a month; (3) convey the importance of a topic through a treemap based on a scoring system and frequency; (4) visualize the daily time series graph of keywords via keyword search. The present study analyzes Big Data generated by SNS in real time. SNS Big Data analysis requires various natural language processing techniques, including stop word removal and noun extraction, to process various unrefined forms of unstructured data. In addition, such analysis requires the latest big data technology to process a large amount of real-time data rapidly, such as the Hadoop distributed system or NoSQL, an alternative to relational databases. We built TITS on Hadoop to optimize big data processing, because Hadoop is designed to scale from single-node computing up to thousands of machines. Furthermore, we use MongoDB, which is classified as a NoSQL database. MongoDB is an open source, document-oriented database that provides high performance, high availability, and automatic scaling. Unlike existing relational databases, MongoDB has no schemas or tables, and its most important goals are data accessibility and data processing performance. In the Age of Big Data, visualization is attractive to the Big Data community because it helps analysts examine such data easily and clearly. Therefore, TITS uses the d3.js library as a visualization tool. This library is designed for creating Data-Driven Documents that bind the document object model (DOM) to data; the interaction between data is easy and useful for managing a real-time data stream with smooth animation. In addition, TITS uses Bootstrap, a set of pre-configured style sheets and JavaScript plug-ins, to build the web system. The TITS Graphical User Interface (GUI) is designed using these libraries and is capable of detecting issues on Twitter in an easy and intuitive manner. The proposed work demonstrates the quality of our issue detection techniques by matching detected issues with corresponding online news articles. The contributions of the present study are threefold. First, we suggest an alternative approach to real-time big data analysis, which has become an extremely important issue. Second, we apply a topic modeling technique used in various research areas, including Library and Information Science (LIS). Based on this, we confirm the utility of storytelling and time series analysis. Third, we develop a web-based system and make it available for the real-time discovery of topics. The present study conducted experiments with nearly 150 million tweets in Korea during March 2013.
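
As a sketch of the topic-extraction step at the core of TITS, the following applies LDA topic modeling to a handful of toy tweets; the real system's Korean-language processing, Hadoop/MongoDB pipeline, and scoring scheme are not reproduced here.

```python
# Extract topics from tweets with LDA and list the top keywords per topic.
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

tweets = [
    "subway fare hike announced today",
    "commuters angry about the fare increase",
    "new smartphone release draws long lines",
    "smartphone preorders sold out in minutes",
]
vec = CountVectorizer(stop_words="english")
X = vec.fit_transform(tweets)                    # tweet-term matrix
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)

terms = vec.get_feature_names_out()
for k, weights in enumerate(lda.components_):
    top = weights.argsort()[-3:][::-1]           # 3 strongest keywords
    print(f"topic {k}:", [terms[i] for i in top])
```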