• Title/Summary/Keyword: HAS3


Postoperative Radiation Therapy in the Soft-tissue Sarcoma (연부 조직 육종의 수술 후 방사선 치료 결과)

  • Kim Yeon Shil;Jang Hong Seok;Yoon Sei Chul;Ryu Mi Ryeong;Kay Chul Seung;Chung Su Mi;Kim Hoon Kyo;Kang Yong Koo
    • Radiation Oncology Journal / v.16 no.4 / pp.485-495 / 1998
  • Purpose : The major goal of therapy in soft tissue sarcoma is to control both local and distant tumor. However, the technique of obtaining local control has changed significantly over the past few decades, from more aggressive surgery to combined therapy including conservative surgery and radiation and/or chemotherapy. We retrospectively analyzed the treatment results of postoperative radiation therapy of soft tissue sarcoma and its prognostic factors. Materials and Methods : Between March 1983 and June 1994, 50 patients with soft tissue sarcoma were treated with surgery and postoperative radiation therapy at Kang-Nam St. Mary's Hospital. Complete follow-up was possible for all patients, with a median follow-up duration of 50 months (range 6-162 months). There were 28 male and 32 female patients. Their ages ranged from 6 to 83 years with a median of 44 years. The extremity (58%) was the most frequent site of occurrence, followed by the trunk (20%) and head and neck (12%). Histologically, malignant fibrous histiocytoma (23%), liposarcoma (17%), and malignant schwannoma (12%) constituted 52% of the patients. Daily radiation therapy was designed to treat all areas at risk for tumor spread up to a dose of 4500-5000 cGy. A shrinking-field technique was then used, and a total of 55-65 Gy was delivered to the tumor bed. Twenty-five patients (42%) received chemotherapy with various regimens in the postoperative period. Results : A total of 41 patients failed either with local recurrence or with distant metastasis. Twenty-nine patients (48%) had local recurrence. Four patients (7%) developed simultaneous local recurrence and distant metastasis, and 8 patients (13%) developed only distant metastasis. The local recurrence rate was rather higher than that of other reported series. This study included patients with gross residual disease, recurrent cases after previous operation, and trunk and head and neck primaries; this is the likely explanation for the decreased local control rate. Five of the 29 patients who failed only locally were salvaged by re-excision and/or re-irradiation and remained free of disease. Factors affecting local control included histologic type, grade, stage, extent of operation and surgical margin involvement, and lymph node metastasis (p<0.05). All 21 patients who failed distantly were dead with progressive disease at the time of this report. Our overall survival results are similar to those of larger series. Actuarial 5-year overall survival and disease-free survival were 60.4% and 30.6%, respectively. Grade, stage (in close association with grade), and residual disease (negative margin, microscopic, gross) were significant predictors of survival in our series (p<0.05). Conclusion : Combined surgery and postoperative radiation therapy obtained a 5-year survival rate comparable to that of radical surgery.


The Correlation between Acholic Stool and the Result of $Tc^{99m}$ DISIDA Hepatobiliary Scintigraphy and Biochemical Test in Neonatal Cholestasis (신생아 담즙 정체증에서 무담즙변의 유무와 $Tc^{99m}$ DISIDA 간담도 주사 결과간의 상관성과 생화학적 검사의 차이에 관한 연구)

  • Joo, Eun-Young;Ahn, Yeon-Mo;Kim, Yong-Joo;Moon, Soo-Ji;Choi, Yun-Young
    • Pediatric Gastroenterology, Hepatology & Nutrition / v.5 no.1 / pp.51-61 / 2002
  • Purpose: The most common causes of neonatal cholestasis are neonatal hepatitis (NH) and extrahepatic biliary atresia (EHBA). Since neonatal cholestasis presents as variable expressions of the same pathologic process, and EHBA and idiopathic neonatal hepatitis (NH) share similar clinical, biochemical, and histologic features, the differential diagnosis is often difficult. We reviewed the differences in clinical characteristics and laboratory data to find out whether there is any correlation between the results of the $Tc^{99m}$ DISIDA scan and the presence of acholic stool. Methods: Between June 1993 and January 2001, a total of 29 infants younger than 4 months old underwent $Tc^{99m}$ DISIDA scans. Their biochemical tests and clinical courses were reviewed retrospectively. Results: Patients who had negative intestinal activity on the $Tc^{99m}$ DISIDA scan showed acholic stool and revealed higher serum direct bilirubin and urine bilirubin levels. Of the patients with acholic stool, 18.2% showed intestinal activity on the $Tc^{99m}$ DISIDA scan and 81.8% did not. All the patients without acholic stool showed positive intestinal activity on the $Tc^{99m}$ DISIDA scan. The result of the $Tc^{99m}$ DISIDA scan and the presence of acholic stool showed a high negative correlation (r = -0.858). Patients with acholic stool and negative intestinal activity on the $Tc^{99m}$ DISIDA scan showed higher serum total bilirubin levels. Patients without acholic stool and positive intestinal activity on the $Tc^{99m}$ DISIDA scan showed higher serum levels of ALT. Conclusion: The presence of acholic stool and negative intestinal activity were highly correlated, but 18.2% of patients with acholic stool showed positive intestinal activity. Therefore, operative cholangiography or percutaneous liver biopsy should be performed for confirmation.


Quantitative Differences between X-Ray CT-Based and $^{137}Cs$-Based Attenuation Correction in Philips Gemini PET/CT (GEMINI PET/CT의 X-ray CT, $^{137}Cs$ 기반 511 keV 광자 감쇠계수의 정량적 차이)

  • Kim, Jin-Su;Lee, Jae-Sung;Lee, Dong-Soo;Park, Eun-Kyung;Kim, Jong-Hyo;Kim, Jae-Il;Lee, Hong-Jae;Chung, June-Key;Lee, Myung-Chul
    • The Korean Journal of Nuclear Medicine / v.39 no.3 / pp.182-190 / 2005
  • Purpose: There are differences between the Standard Uptake Values (SUV) of CT-attenuation-corrected PET and those of $^{137}Cs$-corrected PET. Since various causes can lead to differences in SUV, it is important to know the cause of these differences. Since only the X-ray CT and $^{137}Cs$ transmission data are used for attenuation correction in the Philips GEMINI PET/CT scanner, the proper transformation of these data into usable attenuation coefficients for 511 keV photons has to be ascertained. The aim of this study was to evaluate the accuracy of the CT measurement and to compare CT- and $^{137}Cs$-based attenuation correction in this scanner. Methods: For all experiments, the CT was set to an effective energy of 40 keV (120 kVp) and 50 mAs. To evaluate the accuracy of the CT measurement, a CT performance phantom was scanned and the Hounsfield units (HU) for its regions were compared to the true values. For the comparison of CT- and $^{137}Cs$-based attenuation correction, transmission scans of an elliptical lung-spine-body phantom and an electron density CT phantom composed of various components, such as water, bone, brain, and adipose, were performed using CT and $^{137}Cs$. The attenuation coefficients transformed from these data were compared to each other and to the true 511 keV attenuation coefficients acquired using $^{68}Ge$ and an ECAT EXACT 47 scanner. In addition, the CT- and $^{137}Cs$-derived attenuation coefficients and the SUV values for $^{18}F$-FDG measured in regions with normal and pathological uptake in patient data were also compared. Results: The HU of all regions in the CT performance phantom measured using the GEMINI PET/CT were equivalent to the known true values. CT-based attenuation coefficients were about 10% lower than those of $^{68}Ge$ in the bony region of the NEMA ECT phantom. Attenuation coefficients derived from the $^{137}Cs$ data were slightly higher than those from the CT data in the images of the electron density CT phantom and of the patients' bodies as well. However, the SUV values in images attenuation-corrected using $^{137}Cs$ were lower than in images corrected using CT; the percent difference between the SUV values was about 15%. Conclusion: Although the HU measured using this scanner were accurate, the accuracy of the conversion from CT data into 511 keV attenuation coefficients was limited in the bony region. The discrepancy in the transformed attenuation coefficients and SUV values between the CT- and $^{137}Cs$-based data shown in this study suggests that further optimization of various parameters in data acquisition and processing would be necessary for this scanner.
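
The abstract hinges on converting CT numbers into 511 keV attenuation coefficients and on the percent difference between SUVs from the two correction methods. The sketch below illustrates a commonly cited bilinear HU-to-mu conversion and the percent-difference calculation; the reference attenuation values, the exact transformation used by the GEMINI scanner, and the sample SUVs are assumptions for illustration, not values from the paper.

```python
import numpy as np

# Approximate linear attenuation coefficients (cm^-1); illustrative assumptions only.
MU_WATER_511 = 0.096   # water at 511 keV
MU_BONE_511 = 0.172    # cortical bone at 511 keV
MU_WATER_CT = 0.184    # water at the CT effective energy
MU_BONE_CT = 0.428     # bone at the CT effective energy

def hu_to_mu511(hu):
    """Bilinear conversion from Hounsfield units to 511 keV attenuation coefficients."""
    hu = np.asarray(hu, dtype=float)
    # Soft-tissue branch: scale linearly with water.
    mu_soft = MU_WATER_511 * (1.0 + hu / 1000.0)
    # Bone branch (HU > 0): shallower slope, since bone attenuates relatively
    # less at 511 keV than at CT energies.
    bone_slope = MU_WATER_CT * (MU_BONE_511 - MU_WATER_511) / (1000.0 * (MU_BONE_CT - MU_WATER_CT))
    mu_bone = MU_WATER_511 + hu * bone_slope
    return np.where(hu > 0, mu_bone, mu_soft)

def percent_difference(suv_cs137, suv_ct):
    """Percent difference between SUVs from the two attenuation-correction methods."""
    return 100.0 * (suv_ct - suv_cs137) / suv_ct

print(hu_to_mu511([-1000, 0, 1000]))                  # air, water, dense bone
print(percent_difference(suv_cs137=4.0, suv_ct=4.7))  # hypothetical SUVs, ~15% apart
```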

Spatial effect on the diffusion of discount stores (대형할인점 확산에 대한 공간적 영향)

  • Joo, Young-Jin;Kim, Mi-Ae
    • Journal of Distribution Research / v.15 no.4 / pp.61-85 / 2010
  • Introduction: Diffusion is the process by which an innovation is communicated through certain channels over time among the members of a social system (Rogers 1983). Bass (1969) suggested the Bass model describing the diffusion process. The Bass model assumes potential adopters of an innovation are influenced by mass media and by word-of-mouth from communication with previous adopters. Various extensions of the Bass model have been proposed. Some of them added a third factor affecting diffusion. Others proposed multinational diffusion models and stressed the interactive effect on diffusion among several countries. We add a spatial factor to the Bass model as a third communication factor. Because we cannot control the interaction between markets, we need to consider that diffusion within a certain market can be influenced by diffusion in a contiguous market. The expansion of a particular retail format in a particular market can be described by the retail life cycle. Diffusion of retail follows the three phases of spatial diffusion: adoption of the innovation happens first near the diffusion center, spreads to the vicinity of the diffusing center, and is completed in peripheral areas in the saturation stage. So we expect the spatial effect to be important in describing the diffusion of domestic discount stores. We define a spatial diffusion model based on the multinational diffusion model and apply it to the diffusion of discount stores. Modeling: To define the spatial diffusion model, we expand the learning model (Kumar and Krishnan 2002) and separate the diffusion process in the diffusion center (market A) from the diffusion process in the vicinity of the diffusing center (market B). The proposed spatial diffusion model is shown in equations (1a) and (1b), where (1a) describes the diffusion process in the diffusion center and (1b) the process in the vicinity of the diffusing center.
$$S_{i,t}=\left(p_i+q_i\frac{Y_{i,t-1}}{m_i}\right)(m_i-Y_{i,t-1}),\quad i\in\{1,\cdots,I\} \qquad (1a)$$
$$S_{j,t}=\left(p_j+q_j\frac{Y_{j,t-1}}{m_j}+\sum_{i=1}^{I}\gamma_{ij}\frac{Y_{i,t-1}}{m_i}\right)(m_j-Y_{j,t-1}),\quad i\in\{1,\cdots,I\},\; j\in\{I+1,\cdots,I+J\} \qquad (1b)$$
We raise two research questions. (1) The proposed spatial diffusion model is more effective than the Bass model in describing the diffusion of discount stores. (2) The more similar the retail environment of the diffusing center is to that of the vicinity of the contiguous market, the larger the spatial effect of the diffusing center on diffusion in the vicinity of the contiguous market. To examine these two questions, we first adopt the Bass model to estimate the diffusion of discount stores. Next, the spatial diffusion model, in which a spatial factor is added to the Bass model, is used to estimate it. Finally, by comparing the Bass model with the spatial diffusion model, we try to find out which model describes the diffusion of discount stores better. In addition, we investigate the relationship between similarity of retail environment (conceptual distance) and the impact of the spatial factor with correlation analysis. Result and Implication: We suggest a spatial diffusion model to describe the diffusion of discount stores. To examine the proposed model, 347 domestic discount stores are used and we divide the nation into 5 districts: Seoul-Gyeongin (SG), Busan-Gyeongnam (BG), Daegu-Gyeongbuk (DG), Gwangju-Jeonla (GJ), and Daejeon-Chungcheong (DC). The results are summarized below.

    In the result of the Bass model (I), the estimates of the innovation coefficient (p) and imitation coefficient (q) are 0.017 and 0.323, respectively, while the estimate of market potential is 384. The result of the Bass model (II) for each district shows that the estimate of the innovation coefficient (p) in SG is 0.019, the lowest among the 5 areas; this is because SG is the diffusion center. The estimate of the imitation coefficient (q) in BG is 0.353, the highest. The imitation coefficient in the vicinity of the diffusing center, such as BG, is higher than that in the diffusing center because more information flows through various paths as diffusion progresses. In the result of the spatial diffusion model (IV), we can notice the changes between the coefficients of the Bass model and those of the spatial diffusion model. Except for GJ, the estimates of the innovation and imitation coefficients in Model IV are lower than those in Model II. The changes of the innovation and imitation coefficients are reflected in the spatial coefficient (${\gamma}$). From the spatial coefficient (${\gamma}$) we can infer that when diffusion occurs in the vicinity of the diffusing center, it is influenced by diffusion in the diffusing center. The difference between the Bass model (II) and the spatial diffusion model (IV) is statistically significant, with a ${\chi}^2$-distributed likelihood ratio statistic of 16.598 (p=0.0023), which implies that the spatial diffusion model is more effective than the Bass model in describing the diffusion of discount stores. So research question (1) is supported. In addition, using correlation analysis we found a statistically significant relationship between similarity of retail environment and the spatial effect, so research question (2) is also supported.
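
Equations (1a) and (1b) lend themselves to a direct discrete-time simulation. The sketch below rolls the model forward for one diffusion center and one contiguous market; all parameter values (p, q, m, ${\gamma}$) are illustrative assumptions, not the estimates reported in the paper.

```python
import numpy as np

T = 20                        # number of periods
p = np.array([0.019, 0.010])  # innovation coefficients (assumed)
q = np.array([0.30, 0.35])    # imitation coefficients (assumed)
m = np.array([200.0, 120.0])  # market potentials (assumed)
gamma_01 = 0.05               # spatial effect of market 0 on market 1 (assumed)

Y = np.zeros((T + 1, 2))      # cumulative adopters
S = np.zeros((T, 2))          # adoptions per period

for t in range(T):
    # (1a): the diffusion center depends only on its own history
    S[t, 0] = (p[0] + q[0] * Y[t, 0] / m[0]) * (m[0] - Y[t, 0])
    # (1b): the vicinity adds a spatial term driven by the center's penetration
    S[t, 1] = (p[1] + q[1] * Y[t, 1] / m[1]
               + gamma_01 * Y[t, 0] / m[0]) * (m[1] - Y[t, 1])
    Y[t + 1] = Y[t] + S[t]

print("Cumulative adopters after", T, "periods:", Y[-1].round(1))
```

In an estimation setting, p, q, m, and ${\gamma}$ would be fitted to the observed store-opening counts per district, for example by nonlinear least squares, and the Bass and spatial models compared with a likelihood ratio test as in the abstract.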

  • Effects on School Lunch Service Programme of Elementary School in Rural Area (농촌지역(農村地域) 국민학교(國民學校) 급식아동(給食兒童)과 성장발달(成長發達)과 식생활(食生活) 습관(習慣))

    • Park, Jin Wook;Lee, Sung Kook
      • Journal of the Korean Society of School Health / v.5 no.2 / pp.74-90 / 1992
    • The purpose of this study is to examine the effects of the school lunch service programme in elementary schools in a rural area, using a group of sixth-year students in schools that had provided lunch for six years or longer (312 male and 324 female students) and a comparison group of sixth-year students in schools that had not provided school lunch under similar living conditions (306 male and 322 female students). The study examined all of the continuous height and weight information in the developmental records for the six years from the 1st to the 6th year, and checked the children's eating habits with questionnaires. The results are summarized as follows. Comparing the height gained over the 6 years, the male group provided with school lunch grew 27.8 cm while the male group without lunch grew 27.1 cm, and the female group provided with school lunch grew 29.9 cm while the group without lunch grew 28.4 cm; thus the male and female groups provided with school lunch showed gains 0.7 cm and 1.5 cm greater, respectively, than the groups without lunch. As for weight, the male group without lunch gained 14.8 kg, and the female group provided with school lunch gained 16.9 kg while the group without lunch gained 17.2 kg; the male group provided with school lunch showed a gain 0.9 kg heavier than the group without lunch, while the female group without lunch showed a gain 0.3 kg heavier than the group provided with school lunch. The percentile distribution in the 1st year was similar to the standard normal curve, but by the 6th year it was positioned in the upper group (more than 70%) when divided around the 50th percentile, and the distribution of children provided with school lunch was higher. Comparing physical fitness in the 6th year, male children with school lunch were better than children without lunch in jumping, throwing, chinning, and lifting, while female children were better only in jumping, a significant difference. In addition, the group provided with lunch showed a distribution of higher physical grades. The analysis of breakfast habits indicated that children who ate breakfast every morning accounted for 67.6% of the group provided with school lunch and 57.8% of the group without lunch. Regarding the reason for skipping breakfast, the group with school lunch answered "Because of habit" (50.7%) while the group without lunch answered "Because they have no appetite" (58.9%). When comparing the preference for hot or salty food, children with school lunch generally tended to prefer less hot or salty food. With respect to the frequency and place of eating between meals, about 70.0% of both groups ate between meals more than once a day; the group with school lunch ate between meals at home (45.2%) while the group without lunch did so on the way home (48.4%). Regarding food preferences, more children in the group with school lunch did not show a preference for particular foods.
Their eating attitudes indicated that children who chat only after completely swallowing their food were more common, and children who eat while watching TV less common, in the group with school lunch, a remarkable difference from the group without lunch. With respect to sanitary habits such as hand washing and tooth brushing, children who always washed their hands before eating accounted for 84.4% of the group provided with school lunch and 63.6% of the group without lunch, with the female group with school lunch showing a remarkable difference. Regarding nutrition education, children who answered "Received this education" accounted for 78.0% of the group with school lunch and 57.5% of the group without lunch.


    A Study on ChoSonT'ongPaeJiIn (조선통폐지인(朝鮮通幣之印) 연구)

    • Moon, Sangleun
      • Korean Journal of Heritage: History & Science / v.52 no.2 / pp.220-239 / 2019
    • According to the National Currency (國幣) article in GyeongGukDaeJeon (經國大典), the ChoSonT'ongPaeJiIn (朝鮮通幣之印) was a seal that was imprinted on both ends of a piece of hemp fabric (布). It was used for the circulation of hemp fabric as a fabric currency (布幣). The issued fabric currency was used as a currency for trade or as pecuniary means to have one's crime exempted or replace one's labor duty. The ChoSonT'ongPaeJiIn would be imprinted on a piece of hemp fabric (布) to collect one-twentieth of tax. The ChoSonT'ongPaeJiIn (朝鮮通幣之印) was one of the historical currencies and seal materials used during the early Chosun dynasty. Its imprint was a means of collecting taxes; hence, it was one of the taxation research materials. Despite its value, however, there has been no active research undertaken on it. Thus, the investigator conducted comprehensive research on it based on related content found in JeonRokTongGo (典錄通考), Dae'JeonHu-Sok'Rok (大典後續錄), JeongHeonSwaeRok (貞軒?錄) and other geography books (地理志) as well as the materials mentioned by researchers in previous studies. The investigator demonstrated that the ChoSonT'ongPaeJiIn was established based on the concept of circulating Choson fabric notes (朝鮮布貨) with a seal on ChongOseungp'o (正五升布) in entreaty documents submitted in 1401 and that the fabric currency (布幣) with the imprint of the ChoSonT'ongPaeJiIn was used as a currency for trade, pecuniary or taxation means of having one's crime exempted, or replacing one's labor, and as a tool of revenue from ships. The use of ChoSonT'ongPaeJiIn continued even after a ban on fabric currencies (布幣) in March 1516 due to a policy on the "use of Joehwa (paper notes)" in 1515. It was still used as an official seal on local official documents in 1598. During the reign of King Yeongjo (英祖), it was used to make a military service (軍布) hemp fabric. Some records of 1779 indicate that it was used as a means of taxation for international trade. It is estimated that approximately 330 ChoSonT'ongPaeJiIn were in circulation based on records in JeongHeonSwaeRok (貞軒?錄). Although there was the imprint of ChoSonT'ongPaeJiIn in An Inquiry on Choson Currency (朝鮮貨幣考) published in 1940, there had been no fabric currencies (布幣) with its imprint on them or genuine cases of the seal. It was recently found among the artifacts of Wongaksa Temple. The seal imprint was also found on historical manuscripts produced at the Jikjisa Temple in 1775. The investigator compared the seal imprints found on the historical manuscripts of the Jikjisa Temple, attached to TapJwaJongJeonGji (塔左從政志), and published in An Inquiry on Choson Currency with the ChoSonT'ongPaeJiIn housed at the Wongaksa Temple. It was found that these seal imprints were the same shape as the one at Wongaksa Temple. In addition, their overall form was the same as the one depicted in Daerokji (大麓誌) and LiJaeNanGo (?齋亂藁). These findings demonstrate that the ChoSonT'ongPaeJiIn at Wongaksa Temple was a seal made in the 15th century and is, therefore, an important artifact in the study of Choson's currency history, taxation, and seals. There is a need for future research examining its various aspects.

    Robo-Advisor Algorithm with Intelligent View Model (지능형 전망모형을 결합한 로보어드바이저 알고리즘)

    • Kim, Sunwoong
      • Journal of Intelligence and Information Systems / v.25 no.2 / pp.39-55 / 2019
    • Recently, banks and large financial institutions have introduced many Robo-Advisor products. A Robo-Advisor is a robot that produces the optimal asset allocation portfolio for investors using financial engineering algorithms without any human intervention. Since the first introduction on Wall Street in 2008, the market size has grown to 60 billion dollars and is expected to expand to 2,000 billion dollars by 2020. Since Robo-Advisor algorithms suggest asset allocation output to investors, mathematical or statistical asset allocation strategies are applied. The mean-variance optimization model developed by Markowitz is the typical asset allocation model. The model is a simple but quite intuitive portfolio strategy: assets are allocated so as to minimize the risk of the portfolio while maximizing its expected return using optimization techniques. Despite its theoretical background, both academics and practitioners find that the standard mean-variance optimization portfolio is very sensitive to the expected returns calculated from past price data, and corner solutions, in which only a few assets receive an allocation, are often found. The Black-Litterman optimization model overcomes these problems by choosing a neutral Capital Asset Pricing Model equilibrium point. Implied equilibrium returns of each asset are derived from the equilibrium market portfolio through reverse optimization. The Black-Litterman model uses a Bayesian approach to combine the subjective views on the price forecasts of one or more assets with the implied equilibrium returns, resulting in new estimates of risk and expected returns. These new estimates can produce an optimal portfolio through the well-known Markowitz mean-variance optimization algorithm. If the investor does not have any views on his asset classes, the Black-Litterman optimization model produces the same portfolio as the market portfolio. What if the subjective views are incorrect? Surveys of the performance of stocks recommended by securities analysts show very poor results. Therefore, incorrect views combined with implied equilibrium returns may produce very poor portfolio output for users of the Black-Litterman model. This paper suggests an objective investor view model based on Support Vector Machines (SVM), which have shown good performance in stock price forecasting. An SVM is a discriminative classifier defined by a separating hyperplane. The linear, radial basis, and polynomial kernel functions are used to learn the hyperplanes. Input variables for the SVM are the returns, standard deviations, Stochastic %K, and price parity degree for each asset class. The SVM outputs expected stock price movements and their probabilities, which are used as input variables in the intelligent view model. The stock price movements are categorized into three phases: down, neutral, and up. The expected stock returns form the P matrix and their probabilities are used in the Q matrix. The implied equilibrium return vector is combined with the intelligent view matrix, resulting in the Black-Litterman optimal portfolio. For comparison, the Markowitz mean-variance optimization model and the risk parity model are used. The value-weighted market portfolio and the equal-weighted market portfolio are used as benchmark indexes. We collect the 8 KOSPI 200 sector indexes from January 2008 to December 2018, comprising 132 monthly index values. The training period is from 2008 to 2015 and the testing period is from 2016 to 2018.
Our suggested intelligent view model combined with the implied equilibrium returns produced the optimal Black-Litterman portfolio. In the out-of-sample period, this portfolio showed better performance than the well-known Markowitz mean-variance optimization portfolio, the risk parity portfolio, and the market portfolio. The total return of the Black-Litterman portfolio over the 3-year period is 6.4%, the highest value. The maximum drawdown is -20.8%, which is also the lowest value. The Sharpe ratio, which measures the return-to-risk ratio, shows the highest value, 0.17. Overall, our suggested view model shows the possibility of replacing subjective analysts' views with an objective view model for practitioners applying Robo-Advisor asset allocation algorithms in real trading.
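
The abstract outlines two mechanical steps: reverse optimization to obtain implied equilibrium returns, and the Bayesian combination of those returns with views. The sketch below shows both steps in Python with a hard-coded view standing in for the SVM-generated P and Q matrices; the asset universe, covariance matrix, market weights, and all parameter values are illustrative assumptions, not the paper's data.

```python
import numpy as np

n = 4                                          # number of asset classes (assumed)
Sigma = np.diag([0.04, 0.09, 0.02, 0.06])      # covariance of asset returns (assumed)
w_mkt = np.array([0.4, 0.2, 0.3, 0.1])         # market-cap weights (assumed)
delta = 2.5                                    # risk-aversion coefficient (assumed)
tau = 0.05                                     # prior uncertainty scaling (assumed)

# Step 1: reverse optimization -> implied equilibrium excess returns
pi = delta * Sigma @ w_mkt

# Step 2: a single view, e.g. "asset 1 outperforms asset 2 by 2%" (hypothetical)
P = np.array([[1.0, -1.0, 0.0, 0.0]])          # pick matrix
Q = np.array([0.02])                           # view return
Omega = np.array([[0.001]])                    # view uncertainty (assumed)

inv = np.linalg.inv
# Black-Litterman posterior expected returns
M = inv(inv(tau * Sigma) + P.T @ inv(Omega) @ P)
mu_bl = M @ (inv(tau * Sigma) @ pi + P.T @ inv(Omega) @ Q)

# Unconstrained mean-variance weights implied by the posterior returns
w_bl = inv(delta * Sigma) @ mu_bl
print("posterior returns:", mu_bl.round(4))
print("BL weights (unnormalized):", w_bl.round(3))
```

In the paper's setting, the view matrices would instead be filled from the SVM's predicted price-movement classes and their probabilities for the 8 KOSPI 200 sector indexes.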

    Development of New Variables Affecting Movie Success and Prediction of Weekly Box Office Using Them Based on Machine Learning (영화 흥행에 영향을 미치는 새로운 변수 개발과 이를 이용한 머신러닝 기반의 주간 박스오피스 예측)

    • Song, Junga;Choi, Keunho;Kim, Gunwoo
      • Journal of Intelligence and Information Systems / v.24 no.4 / pp.67-83 / 2018
    • The Korean film industry, which had been growing significantly every year, finally exceeded a cumulative audience of 200 million people in 2013. However, starting in 2015 the industry entered a period of low growth and experienced negative growth in 2016. To overcome this difficulty, stakeholders such as production companies, distribution companies, and multiplexes have attempted to maximize market returns by predicting market changes and responding to them immediately. Since a film is classified as an experiential product, it is not easy to predict its box office record and initial audience before it is released, and the number of audiences also fluctuates with a variety of factors after release. So production companies and distribution companies try to have the number of screens guaranteed by multiplex chains at the opening of a newly released film. However, the multiplex chains tend to set the screening schedule only a week at a time and then determine the number of screenings for the forthcoming week based on the box office record and the evaluations of audiences. Much previous research has been conducted on predicting the box office records of films. In the early stage, the research attempted to identify factors affecting the box office record. Nowadays, many studies try to apply various analytic techniques to the previously identified factors in order to improve the accuracy of prediction and to explain the effect of each factor, instead of identifying new factors. However, most previous studies have the limitation that they used the total number of audiences from the opening to the end of the run as the target variable, which makes it difficult to predict and respond to market demand that changes dynamically. Therefore, the purpose of this study is to predict the weekly number of audiences of a newly released film so that stakeholders can flexibly and elastically respond to changes in the film's audience. To that end, we considered the factors affecting box office used in previous studies and developed new factors not used before, such as the order of opening of movies and the dynamics of sales. Along with these comprehensive factors, we used machine learning methods such as Random Forest, Multi-Layer Perceptron, Support Vector Machine, and Naive Bayes to predict the number of cumulative visitors from the first week after a film's release to the third week. At the first and second weeks, we predicted the cumulative number of visitors of the forthcoming week for a released film, and at the third week, we predicted the total number of visitors of the film. In addition, we also predicted the total number of cumulative visitors at both the first and second weeks using the same factors. As a result, we found that the accuracy of predicting the number of visitors in the forthcoming week was higher than that of predicting the total number of visitors in all three weeks, and that the accuracy of the Random Forest was the highest among the machine learning methods we used.
This study has implications in that it 1) comprehensively considered various factors which affect the box office record but were rarely addressed in previous research, such as the weekly rating of audiences after release, the weekly rank of the film after release, and the weekly sales share after release, and 2) tried to predict and respond to dynamically changing market demand by suggesting models which predict the weekly number of audiences of newly released films, so that stakeholders can flexibly and elastically respond to changes in the film's audience.
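
As an illustration of the weekly prediction setup, the sketch below trains a Random Forest regressor (the best-performing method in the abstract) on synthetic film features to predict the next week's cumulative audience; the feature set and the data are assumptions for demonstration only and do not reproduce the paper's variables or results.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n_films = 300
X = np.column_stack([
    rng.integers(100, 2000, n_films),   # opening-week screens (hypothetical feature)
    rng.uniform(0, 10, n_films),        # weekly audience rating (hypothetical feature)
    rng.uniform(0, 1, n_films),         # weekly sales share (hypothetical feature)
    rng.integers(1, 53, n_films),       # order of opening within the year (hypothetical feature)
])
# Synthetic target: cumulative visitors in the following week
y = 500 * X[:, 0] + 20000 * X[:, 1] + 1e6 * X[:, 2] + rng.normal(0, 5e4, n_films)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X_tr, y_tr)
print("MAE on held-out films:", mean_absolute_error(y_te, model.predict(X_te)))
```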

    How to improve the accuracy of recommendation systems: Combining ratings and review texts sentiment scores (평점과 리뷰 텍스트 감성분석을 결합한 추천시스템 향상 방안 연구)

    • Hyun, Jiyeon;Ryu, Sangyi;Lee, Sang-Yong Tom
      • Journal of Intelligence and Information Systems / v.25 no.1 / pp.219-239 / 2019
    • As providing customized services to individuals becomes more important, research on personalized recommendation systems is constantly being carried out. Collaborative filtering is one of the most popular approaches in academia and industry. However, it has the limitation that recommendations are mostly based on quantitative information such as users' ratings, which lowers accuracy. To solve this problem, many studies have actively attempted to improve the performance of recommendation systems by using other information besides quantitative information. Good examples are the uses of sentiment analysis on customer review text data. Nevertheless, existing research has not directly combined the results of sentiment analysis with quantitative rating scores in the recommendation system. Therefore, this study aims to reflect the sentiments shown in reviews in the rating scores. In other words, we propose a new algorithm that can directly convert the user's own review into empirically quantitative information and reflect it directly in the recommendation system. To do this, we needed to quantify users' reviews, which were originally qualitative information. In this study, a sentiment score was calculated through the sentiment analysis technique of text mining. The data consisted of movie reviews, and a domain-specific sentiment dictionary was constructed for them. Regression analysis was used to construct the sentiment dictionary: positive/negative dictionaries were built using Lasso regression, Ridge regression, and ElasticNet. Based on the constructed dictionaries, the accuracy was verified through a confusion matrix. The accuracy of the Lasso-based dictionary was 70%, that of the Ridge-based dictionary was 79%, and that of the ElasticNet (${\alpha}=0.3$) dictionary was 83%. Therefore, in this study, the sentiment score of each review is calculated based on the ElasticNet dictionary and combined with the rating to create a new rating. In this paper, we show that collaborative filtering that reflects the sentiment scores of user reviews is superior to the traditional method that only considers the existing ratings. To show this, the proposed approach is applied to memory-based user-based collaborative filtering (UBCF), item-based collaborative filtering (IBCF), and the model-based matrix factorization methods SVD and SVD++. The mean absolute error (MAE) and the root mean square error (RMSE) are calculated to compare the recommendation system using the combined score with a system that only considers ratings. When the evaluation index was MAE, the improvement was 0.059 for UBCF, 0.0862 for IBCF, 0.1012 for SVD, and 0.188 for SVD++. When the evaluation index was RMSE, the improvement was 0.0431 for UBCF, 0.0882 for IBCF, 0.1103 for SVD, and 0.1756 for SVD++. As a result, it can be seen that the prediction performance of the rating that reflects the sentiment score proposed in this paper is superior to that of the conventional rating. In other words, this paper confirms that collaborative filtering that reflects the sentiment score of the user review shows superior accuracy compared with conventional collaborative filtering that only considers the quantitative score.
We then performed a paired t-test to validate that the proposed model is a better approach and concluded that it is. In this study, to overcome the limitation of previous research that judges a user's sentiment only by the quantitative rating score, the review was numerically quantified so that the user's opinion could be considered in a more refined way in the recommendation system and the accuracy improved. The findings of this study have managerial implications for recommendation system developers who need to consider both quantitative and qualitative information. The way of constructing the combined system in this paper might be directly used by such developers.
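
To make the rating-plus-sentiment idea concrete, the sketch below blends a per-review sentiment score into the rating matrix before a simple user-based collaborative filtering prediction; the blending weight, the toy rating matrix, and the sentiment scores are assumptions, and the paper's ElasticNet sentiment dictionary is not reproduced here.

```python
import numpy as np

ratings = np.array([             # users x items, 0 = unrated (toy data)
    [5, 3, 0, 1],
    [4, 0, 0, 1],
    [1, 1, 0, 5],
    [0, 1, 5, 4],
], dtype=float)
sentiment = np.array([           # review sentiment rescaled to the 1-5 range (toy data)
    [4.5, 3.5, 0, 1.5],
    [4.0, 0, 0, 2.0],
    [1.5, 1.0, 0, 4.5],
    [0, 1.5, 4.5, 4.0],
])
alpha = 0.7                      # weight on the explicit rating (assumed)
mask = ratings > 0
combined = np.where(mask, alpha * ratings + (1 - alpha) * sentiment, 0.0)

def predict_ubcf(R, user, item):
    """Predict a rating with cosine-similarity user-based collaborative filtering."""
    rated = R[:, item] > 0
    rated[user] = False
    if not rated.any():
        return R[R > 0].mean()
    norms = np.linalg.norm(R[user]) * np.linalg.norm(R[rated], axis=1)
    sims = (R[rated] @ R[user]) / np.where(norms == 0, 1, norms)
    return float(sims @ R[rated, item] / (np.abs(sims).sum() + 1e-9))

print("plain rating prediction:     ", round(predict_ubcf(ratings, 0, 2), 2))
print("sentiment-blended prediction:", round(predict_ubcf(combined, 0, 2), 2))
```

MAE and RMSE would then be computed over held-out ratings with and without the blended scores, mirroring the comparison reported in the abstract.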

    A Deep Learning Based Approach to Recognizing Accompanying Status of Smartphone Users Using Multimodal Data (스마트폰 다종 데이터를 활용한 딥러닝 기반의 사용자 동행 상태 인식)

    • Kim, Kilho;Choi, Sangwoo;Chae, Moon-jung;Park, Heewoong;Lee, Jaehong;Park, Jonghun
      • Journal of Intelligence and Information Systems / v.25 no.1 / pp.163-177 / 2019
    • As smartphones become widely used, human activity recognition (HAR) tasks for recognizing the personal activities of smartphone users with multimodal data have been actively studied recently. The research area is expanding from the recognition of simple body movements of an individual user to the recognition of low-level and high-level behavior. However, HAR tasks for recognizing interaction behavior with other people, such as whether the user is accompanying or communicating with someone else, have received less attention so far. Previous research on recognizing interaction behavior has usually depended on audio, Bluetooth, and Wi-Fi sensors, which are vulnerable to privacy issues and require much time to collect enough data. In contrast, physical sensors, including accelerometer, magnetic field, and gyroscope sensors, are less vulnerable to privacy issues and can collect a large amount of data within a short time. In this paper, a method for detecting accompanying status with a deep learning model using only multimodal physical sensor data (accelerometer, magnetic field, and gyroscope) is proposed. The accompanying status was defined by redefining a part of the user interaction behavior: whether the user is accompanied by an acquaintance at a close distance and whether the user is actively communicating with that acquaintance. A framework based on convolutional neural networks (CNN) and long short-term memory (LSTM) recurrent networks for classifying accompanying and conversation was proposed. First, a data preprocessing method was introduced that consists of time synchronization of the multimodal data from different physical sensors, data normalization, and sequence data generation. We applied nearest-neighbor interpolation to synchronize the timestamps of data collected from different sensors. Normalization was performed for each x, y, and z axis value of the sensor data, and the sequence data were generated according to the sliding window method. The sequence data then became the input for the CNN, where feature maps representing local dependencies of the original sequence are extracted. The CNN consisted of 3 convolutional layers and did not have a pooling layer, in order to maintain the temporal information of the sequence data. Next, LSTM recurrent networks received the feature maps, learned long-term dependencies from them, and extracted features. The LSTM recurrent networks consisted of two layers, each with 128 cells. Finally, the extracted features were used for classification by a softmax classifier. The loss function of the model was the cross-entropy function, and the weights of the model were randomly initialized from a normal distribution with a mean of 0 and a standard deviation of 0.1. The model was trained using the adaptive moment estimation (ADAM) optimization algorithm, and the mini-batch size was set to 128. We applied dropout to the input values of the LSTM recurrent networks to prevent overfitting. The initial learning rate was set to 0.001, and it decreased exponentially by a factor of 0.99 at the end of each training epoch. An Android smartphone application was developed and released to collect data, and we collected smartphone data from a total of 18 subjects. Using the data, the model classified accompanying and conversation with 98.74% and 98.83% accuracy, respectively. Both the F1 score and accuracy of the model were higher than those of the majority vote classifier, the support vector machine, and a deep recurrent neural network.
In future research, we will focus on more rigorous multimodal sensor data synchronization methods that minimize timestamp differences. In addition, we will further study transfer learning methods that enable trained models, tailored to the training data, to be transferred to evaluation data that follows a different distribution. We expect to obtain a model capable of exhibiting robust recognition performance against changes in data that were not considered in the model learning stage.
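
The architecture described above (three convolutional layers without pooling, two 128-cell LSTM layers, dropout on the LSTM input, a softmax classifier, ADAM with a 0.001 learning rate and mini-batches of 128) can be sketched roughly as follows; the window length, channel count, kernel sizes, and filter counts are assumptions not stated in the abstract.

```python
import tensorflow as tf

# Assumed input: sliding windows of 128 time steps over 9 channels
# (x/y/z of accelerometer, magnetometer, gyroscope); 2 output classes.
WINDOW, CHANNELS, NUM_CLASSES = 128, 9, 2

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(WINDOW, CHANNELS)),
    # Three convolutional layers, no pooling, to keep temporal resolution
    tf.keras.layers.Conv1D(64, 5, padding="same", activation="relu"),
    tf.keras.layers.Conv1D(64, 5, padding="same", activation="relu"),
    tf.keras.layers.Conv1D(64, 5, padding="same", activation="relu"),
    # Dropout applied to the LSTM input, as described in the abstract
    tf.keras.layers.Dropout(0.5),
    # Two LSTM layers with 128 cells each
    tf.keras.layers.LSTM(128, return_sequences=True),
    tf.keras.layers.LSTM(128),
    # Softmax classifier
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])

model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=0.001),
    loss="sparse_categorical_crossentropy",
    metrics=["accuracy"],
)
model.summary()
# model.fit(x_train, y_train, batch_size=128, epochs=20)  # with real sensor windows
```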

