
A Folksonomy Ranking Framework: A Semantic Graph-based Approach (폭소노미 사이트를 위한 랭킹 프레임워크 설계: 시맨틱 그래프기반 접근)

  • Park, Hyun-Jung;Rho, Sang-Kyu
    • Asia pacific journal of information systems
    • /
    • v.21 no.2
    • /
    • pp.89-116
    • /
    • 2011
  • In collaborative tagging systems such as Delicious.com and Flickr.com, users assign keywords or tags to their uploaded resources, such as bookmarks and pictures, for their future use or sharing purposes. The collection of resources and tags generated by a user is called a personomy, and the collection of all personomies constitutes the folksonomy. The most significant need of folksonomy users is to efficiently find useful resources or experts on specific topics. An excellent ranking algorithm would assign higher ranking to more useful resources or experts. What resources are considered useful in a folksonomic system? Does a standard superior to frequency or freshness exist? A resource recommended by more users with more expertise should be worthy of attention. This ranking paradigm can be implemented through a graph-based ranking algorithm. Two well-known representatives of such a paradigm are PageRank by Google and HITS (Hypertext Induced Topic Selection) by Kleinberg. Both PageRank and HITS assign a higher evaluation score to pages linked to by higher-scored pages. HITS differs from PageRank in that it utilizes two kinds of scores: authority and hub scores. The ranking objects of these algorithms are limited to Web pages, whereas the ranking objects of a folksonomic system are heterogeneous (i.e., users, resources, and tags). Therefore, uniform application of the voting notion of PageRank and HITS to the links of a folksonomy would be unreasonable. In a folksonomic system, each link corresponding to a property can have an opposite direction, depending on whether the property is in an active or a passive voice. The current research stems from the idea that a graph-based ranking algorithm could be applied to the folksonomic system using the concept of mutual interactions between entities, rather than the voting notion of PageRank or HITS. 
The concept of mutual interactions, proposed for ranking Semantic Web resources, enables the calculation of importance scores of various resources unaffected by link directions. The weights of a property representing the mutual interaction between classes are assigned depending on the relative significance of the property to the resource importance of each class. This class-oriented approach is based on the fact that, in the Semantic Web, there are many heterogeneous classes; thus, applying a different appraisal standard to each class is more reasonable. This is similar to human evaluation methods, where different items are assigned specific weights, which are then summed up to determine the weighted average. We can check for missing properties more easily with this approach than with other predicate-oriented approaches. A user of a tagging system usually assigns more than one tag to the same resource, and there can be more than one tag with the same subjectivity and objectivity. When many users assign similar tags to the same resource, grading the users differently depending on the assignment order becomes necessary. This idea comes from studies in psychology wherein expertise involves the ability to select the most relevant information for achieving a goal. An expert should be someone who not only has a large collection of documents annotated with a particular tag, but also tends to add documents of high quality to his/her collections. Such documents are identified by the number, as well as the expertise, of the users who have the same documents in their collections. In other words, there is a relationship of mutual reinforcement between the expertise of a user and the quality of a document. In addition, there is a need to rank entities related more closely to a certain entity. Considering the property of social media that the popularity of a topic is temporary, recent data should have more weight than old data. 
We propose a comprehensive folksonomy ranking framework in which all these considerations are dealt with and which can be easily customized to each folksonomy site for ranking purposes. To examine the validity of our ranking algorithm and show the mechanism of adjusting property, time, and expertise weights, we first use a dataset designed for analyzing the effect of each ranking factor independently. We then show the ranking results of a real folksonomy site, with the ranking factors combined. Because the ground truth of a given dataset is not known when it comes to ranking, we inject simulated data whose ranking results can be predicted into the real dataset and compare the ranking results of our algorithm with those of a previous HITS-based algorithm. Our semantic ranking algorithm based on the concept of mutual interaction seems preferable to the HITS-based algorithm as a flexible folksonomy ranking framework. Some concrete points of difference are as follows. First, with the time concept applied to the property weights, our algorithm shows superior performance in lowering the scores of older data and raising the scores of newer data. Second, applying the time concept to the expertise weights, as well as to the property weights, our algorithm controls the conflicting influence of expertise weights and enhances the overall consistency of time-valued ranking. The expertise weights of the previous study can act as an obstacle to time-valued ranking because the number of followers increases as time goes on. Third, many new properties and classes can be included in our framework. The previous HITS-based algorithm, based on the voting notion, loses ground when the domain consists of more than two classes, or when other important properties, such as "sent through twitter" or "registered as a friend," are added to the domain. Fourth, there is a big difference in calculation time and memory use between the two kinds of algorithms. 
While the multiplication of two matrices has to be executed twice for the previous HITS-based algorithm, this is unnecessary with our algorithm. In our ranking framework, various folksonomy ranking policies can be expressed by combining the ranking factors, and our approach works even if the folksonomy site is not implemented with Semantic Web languages. Above all, the time weight proposed in this paper will be applicable to various domains, including social media, where time value is considered important.
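
A rough sketch of the graph-based ranking idea described in this abstract (not the authors' actual implementation): importance scores are propagated over a small folksonomy graph of users, resources, and tags, with property weights on the edges so that influence flows in both directions, in the spirit of the mutual-interaction concept. All node names and edge weights below are invented for illustration.

```python
def rank(nodes, weighted_edges, damping=0.85, iters=50):
    """Power iteration over an undirected, property-weighted graph."""
    score = {n: 1.0 / len(nodes) for n in nodes}
    # Symmetric adjacency: influence flows both ways along a property.
    nbrs = {n: [] for n in nodes}
    for a, b, w in weighted_edges:
        nbrs[a].append((b, w))
        nbrs[b].append((a, w))
    for _ in range(iters):
        new = {}
        for n in nodes:
            # Each neighbor m distributes its score in proportion
            # to the weight of the property linking it to n.
            total = sum(w * score[m] / sum(w2 for _, w2 in nbrs[m])
                        for m, w in nbrs[n])
            new[n] = (1 - damping) / len(nodes) + damping * total
        score = new
    return score

nodes = ["user:ann", "user:bob", "res:r1", "res:r2", "tag:python"]
edges = [("user:ann", "res:r1", 1.0), ("user:bob", "res:r1", 1.0),
         ("user:bob", "res:r2", 0.5), ("res:r1", "tag:python", 0.8),
         ("res:r2", "tag:python", 0.8)]
scores = rank(nodes, edges)
# r1, bookmarked by two users, should outrank r2
assert scores["res:r1"] > scores["res:r2"]
```

Unlike plain PageRank, the edges here are treated symmetrically, so a resource gains score both from the users who bookmark it and from the tags attached to it, regardless of link direction.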

A Monitoring of Aflatoxins in Commercial Herbs for Food and Medicine (식·약공용 농산물의 아플라톡신 오염 실태 조사)

  • Kim, Sung-dan;Kim, Ae-kyung;Lee, Hyun-kyung;Lee, Sae-ram;Lee, Hee-jin;Ryu, Hoe-jin;Lee, Jung-mi;Yu, In-sil;Jung, Kweon
    • Journal of Food Hygiene and Safety
    • /
    • v.32 no.4
    • /
    • pp.267-274
    • /
    • 2017
  • This paper deals with the natural occurrence of total aflatoxins (B1, B2, G1, and G2) in commercial herbs for food and medicine. To monitor aflatoxins in commercial herbs for food and medicine not included in the specifications of the Food Code, a total of 62 samples of 6 different herbs (Bombycis Corpus, Glycyrrhizae Radix et Rhizoma, Menthae Herba, Nelumbinis Semen, Polygalae Radix, Zizyphi Semen) were collected from Yangnyeong market in Seoul, Korea. The samples were treated by the immunoaffinity column clean-up method and quantified by high performance liquid chromatography (HPLC) with on-line post-column photochemical derivatization (PHRED) and fluorescence detection (FLD). The analytical method for aflatoxins was validated for accuracy, precision, and detection limits. The method showed recovery values in the 86.9~114.0% range and coefficients of variation (CV%) in the 0.9~9.8% range. The limits of detection (LOD) and quantitation (LOQ) in herbs ranged from 0.020 to 0.363 µg/kg and from 0.059 to 1.101 µg/kg, respectively. Of the 62 samples analyzed, 6 semens (the original form of 2 Nelumbinis Semen and 2 Zizyphi Semen, and the powder of 1 Nelumbinis Semen and 1 Zizyphi Semen) were aflatoxin positive. Aflatoxin B1 or B2 was detected in all positive samples, and aflatoxins G1 and G2 were not detected. The amounts of total aflatoxins (B1, B2, G1, and G2) in the powder and original forms of Nelumbinis Semen and Zizyphi Semen were around ND~21.8 µg/kg, which is not regulated presently in Korea. The remaining 56 samples presented levels below the limits of detection and quantitation.
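
The abstract reports detection and quantitation limits but not how they were derived. A common (ICH-style) convention, shown here purely as an illustration and not necessarily the method used in the paper, estimates LOD = 3.3·σ/S and LOQ = 10·σ/S from the residual standard deviation σ and slope S of a calibration curve. The spiked concentrations and peak areas below are made-up numbers.

```python
def lod_loq(conc, signal):
    """LOD/LOQ from a least-squares calibration line (ICH-style)."""
    n = len(conc)
    mx = sum(conc) / n
    my = sum(signal) / n
    sxx = sum((x - mx) ** 2 for x in conc)
    sxy = sum((x - mx) * (y - my) for x, y in zip(conc, signal))
    slope = sxy / sxx
    intercept = my - slope * mx
    resid = [y - (intercept + slope * x) for x, y in zip(conc, signal)]
    sigma = (sum(r * r for r in resid) / (n - 2)) ** 0.5  # residual SD
    return 3.3 * sigma / slope, 10 * sigma / slope

conc = [0.5, 1.0, 2.0, 4.0, 8.0]          # spiked levels, µg/kg (hypothetical)
area = [10.2, 19.8, 41.0, 79.5, 160.3]    # detector response (hypothetical)
lod, loq = lod_loq(conc, area)
assert lod < loq   # LOQ is, by construction, about 3x the LOD
```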

Scalable Collaborative Filtering Technique based on Adaptive Clustering (적응형 군집화 기반 확장 용이한 협업 필터링 기법)

  • Lee, O-Joun;Hong, Min-Sung;Lee, Won-Jin;Lee, Jae-Dong
    • Journal of Intelligence and Information Systems
    • /
    • v.20 no.2
    • /
    • pp.73-92
    • /
    • 2014
  • An Adaptive Clustering-based Collaborative Filtering Technique was proposed to solve the fundamental problems of collaborative filtering, such as the cold-start problem, the scalability problem, and the data sparsity problem. Previous collaborative filtering techniques made recommendations based on the predicted preference of a user for a particular item, using a similar-item subset and a similar-user subset composed from the preferences of users for items. For this reason, if the density of the user preference matrix is low, the reliability of the recommendation system decreases rapidly, and creating a similar-item subset and similar-user subset becomes more difficult. In addition, as the scale of the service increases, the time needed to create these subsets increases geometrically, and the response time of the recommendation system increases as well. To solve these problems, this paper suggests a collaborative filtering technique that actively adapts conditions to the model and adopts concepts from context-based filtering. This technique consists of four major methodologies. First, the items and users are clustered according to their feature vectors, and an inter-cluster preference between each item cluster and user cluster is then estimated. With this method, the run-time for creating a similar-item subset or similar-user subset can be reduced, the reliability of the recommendation system can be made higher than when using only user preference information to create these subsets, and the cold-start problem can be partially solved. Second, recommendations are made using the previously composed item and user clusters and the inter-cluster preferences between them. 
In this phase, a list of items is made for each user by examining the item clusters in order of the size of the inter-cluster preference of the user cluster to which the user belongs, and selecting and ranking the items according to the predicted or recorded user preference information. With this method, the model-creation phase bears the highest load of the recommendation system, which minimizes the load at run-time. Therefore, the scalability problem is addressed, and a large-scale recommendation system can be served by collaborative filtering with high reliability. Third, missing user preference information is predicted using the item and user clusters. With this method, the problem caused by the low density of the user preference matrix can be mitigated. Existing studies on this used either item-based prediction or user-based prediction. In this paper, Hao Ji's idea, which uses both item-based prediction and user-based prediction, was improved. The reliability of the recommendation service can be improved by combining the predictive values of both techniques according to the condition of the recommendation model. By predicting user preferences based on the item or user clusters, the time required to predict a user preference can be reduced, and missing user preferences can be predicted at run-time. Fourth, the item and user feature vectors are made to learn from subsequent user feedback. This phase applies normalized user feedback to the item and user feature vectors. This method can mitigate the problems caused by using concepts from context-based filtering, such as item and user feature vectors based on the user profile and item properties. The problems with using item and user feature vectors stem from the limitation of quantifying the qualitative features of items and users. 
Therefore, the elements of the user and item feature vectors are made to match one to one, and when user feedback on a particular item is obtained, it is applied to the opposite feature vector. Verification of this method was accomplished by comparing its performance with existing hybrid filtering techniques. Two measures were used for verification: MAE (Mean Absolute Error) and response time. Using MAE, this technique was confirmed to improve the reliability of the recommendation system. Using response time, this technique was found to be suitable for a large-scale recommendation system. This paper suggested an Adaptive Clustering-based Collaborative Filtering Technique with high reliability and low time complexity, but it has some limitations. This technique focused on reducing time complexity; hence, an improvement in reliability was not expected. The next step will be to improve this technique with rule-based filtering.
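
The third methodology above, predicting missing preferences from cluster structure, can be sketched as follows. This is a toy illustration rather than the paper's implementation: a missing user-item entry is filled with the mean preference between the user's cluster and the item's cluster. The ratings and cluster assignments are invented.

```python
def intercluster_means(ratings, user_cluster, item_cluster):
    """Mean observed rating for each (user-cluster, item-cluster) pair."""
    sums, counts = {}, {}
    for (u, i), r in ratings.items():
        key = (user_cluster[u], item_cluster[i])
        sums[key] = sums.get(key, 0.0) + r
        counts[key] = counts.get(key, 0) + 1
    return {k: sums[k] / counts[k] for k in sums}

def predict(u, i, ratings, user_cluster, item_cluster, means):
    if (u, i) in ratings:                 # known preference: use it directly
        return ratings[(u, i)]
    # missing preference: fall back to the inter-cluster mean
    return means[(user_cluster[u], item_cluster[i])]

ratings = {("u1", "a"): 5, ("u1", "b"): 4, ("u2", "a"): 4,
           ("u3", "c"): 1, ("u3", "d"): 2}
user_cluster = {"u1": 0, "u2": 0, "u3": 1}
item_cluster = {"a": 0, "b": 0, "c": 1, "d": 1}
means = intercluster_means(ratings, user_cluster, item_cluster)
# u2 never rated "b", but cluster-0 users tend to like cluster-0 items
assert predict("u2", "b", ratings, user_cluster, item_cluster, means) > 4
```

Because predictions only look up a per-cluster-pair mean, the expensive work (clustering and averaging) happens at model-creation time, which mirrors the paper's goal of minimizing run-time load.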

Effectiveness of Smoking Prevention Program based on Social Influence Model in the Middle School Students (흡연예방교육에 의한 청소년들의 흡연에 대한 지식 및 태도변화와 흡연량의 감소 효과)

  • Roh, Won-Hwan;Kang, Pock-Soo;Kim, Sok-Beom;Lee, Kyeong-Soo
    • Journal of agricultural medicine and community health
    • /
    • v.26 no.1
    • /
    • pp.37-56
    • /
    • 2001
  • This study was conducted to analyze the degree of change in knowledge of and attitude toward smoking, and to examine the factors affecting them, after providing a smoking prevention program based on the social influence model for a year to middle school students. The study population consisted of 665 middle school students (aged 14 years) in Gumi city, Kyeongsangbuk-do Province. Among them, three hundred sixty-seven students (the intervention group) received the smoking prevention program for 1 year, from April 1999 to April 2000. A school-based four-class smoking prevention program was developed. The program provides instruction about the short- and long-term negative physiologic and social consequences of smoking and also discusses the health hazards of smoking, social pressure to smoke, peer norms regarding tobacco use, and refusal skills. A 45-item self-administered structured questionnaire was designed to evaluate changes in knowledge, attitude, smoking rate, and the amount of smoking. The instrument comprised 11 knowledge items, 13 attitude items, and demographic items. Each scale was created by summing responses to the items within it, and high scores on the knowledge, attitude, and smoking behavioral intention scales indicated positive responses. Based on the changes before and after the implementation of the smoking prevention program, the change in knowledge scores was significantly different between the control group and the intervention group (p<0.05), and the change in scores on attitude toward smoking was also significantly different between the two groups. The change in smoking rate did not show a significant difference between the two groups, but the amount of smoking was reduced significantly more in the intervention group than in the control group. 
In the multiple regression analysis of changes in knowledge about smoking, the smoking prevention program education, previous knowledge of smoking, and students' school performance were selected as significant variables. In the multiple regression analysis of the factors influencing changes in attitude toward smoking, the smoking prevention program education and previous knowledge of smoking were shown to be significant. The smoking prevention program was effective in changing the knowledge and attitude of middle school students. Considering these results, a policy is needed to extend the implementation of school-based health education curricula based on the social influence model, which would contribute to reducing smoking among students.


Content-based Recommendation Based on Social Network for Personalized News Services (개인화된 뉴스 서비스를 위한 소셜 네트워크 기반의 콘텐츠 추천기법)

  • Hong, Myung-Duk;Oh, Kyeong-Jin;Ga, Myung-Hyun;Jo, Geun-Sik
    • Journal of Intelligence and Information Systems
    • /
    • v.19 no.3
    • /
    • pp.57-71
    • /
    • 2013
  • Over a billion people in the world generate news minute by minute. People forecast some news, but most news comes from unexpected events such as natural disasters, accidents, and crimes. People spend much time watching the huge amount of news delivered by many media outlets because they want to understand what is happening now, predict what might happen in the near future, and share and discuss the news. People make better daily decisions through the useful information they obtain from the news they watch. However, it is difficult for people to choose news suitable for them and obtain useful information from it, because there are so many news media, such as portal sites and broadcasters, and most news articles consist of gossipy news and breaking news. User interest changes over time, and many people have no interest in outdated news. Consequently, applying users' recent interests is also required in a personalized news service; that is, a personalized news service should dynamically manage user profiles. In this paper, a content-based news recommendation system is proposed to provide a personalized news service. For a personalized service, the user's personal information is necessarily required. A social network service is used to extract user information for personalization. The proposed system constructs a dynamic user profile based on recent user information from Facebook, one of the major social network services. The user information contains personal information, recent articles, and Facebook Page information. Facebook Pages are used by businesses, organizations, and brands to share their content and connect with people. Facebook users can add a Facebook Page to specify their interest in it. The proposed system uses this Page information to create the user profile and to match user preferences to news topics. 
However, some Pages are not directly matched to a news topic because a Page deals with individual objects and does not provide topic information suitable for news. Freebase, a large collaborative database of well-known people, places, and things, is used to match Pages to news topics by using the hierarchy information of its objects. By using recent Page information and articles of Facebook users, the proposed system maintains a dynamic user profile. The generated user profile is used to measure user preferences on news. To generate news profiles, the news categories predefined by news media are used, and keywords of news articles are extracted after analysis of the news contents, including title, category, and scripts. The TF-IDF technique, which reflects how important a word is to a document in a corpus, is used to identify the keywords of each news article. The same format is used for the user profile and news profiles to efficiently measure the similarity between user preferences and news. The proposed system calculates all similarity values between user profiles and news profiles. Existing methods of similarity calculation in the vector space model do not cover synonyms, hypernyms, and hyponyms because they only handle the given words. The proposed system applies WordNet to the similarity calculation to overcome this limitation. The top-N news articles with the highest similarity values for a target user are recommended to the user. To evaluate the proposed news recommendation system, user profiles were generated using Facebook accounts with the participants' consent, and we implemented a Web crawler to extract news information from PBS, a non-profit public broadcasting television network in the United States, and constructed news profiles. We compare the performance of the proposed method with that of benchmark algorithms. One is a traditional method based on TF-IDF. The other is the 6Sub-Vectors method, which divides the points used to get keywords into six parts. 
Experimental results demonstrate that the proposed system provides useful news to users, in terms of the prediction error of recommended news, by applying the user's social network information and WordNet functions.
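
The TF-IDF profile matching described in this abstract can be sketched as follows (the WordNet-based synonym expansion that the paper adds on top is omitted here). The tiny user profile and news documents are invented.

```python
import math

def tfidf(docs):
    """TF-IDF vectors (as dicts) for a list of tokenized documents."""
    vocab = {t for d in docs for t in d}
    df = {t: sum(t in d for d in docs) for t in vocab}  # document frequency
    n = len(docs)
    out = []
    for d in docs:
        vec = {}
        for t in set(d):
            tf = d.count(t) / len(d)
            vec[t] = tf * math.log(n / df[t])
        out.append(vec)
    return out

def cosine(a, b):
    dot = sum(a[t] * b.get(t, 0.0) for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

user = "python machine learning tutorial".split()        # user profile terms
news = ["machine learning beats benchmark".split(),
        "city council budget vote".split(),
        "new python release announced".split()]
vecs = tfidf([user] + news)
sims = [cosine(vecs[0], v) for v in vecs[1:]]
# the technology articles should score higher than the budget story
assert max(sims[0], sims[2]) > sims[1]
```

In the paper's system, top-N articles by this similarity would then be recommended; WordNet would additionally let "film" in a profile match "movie" in an article, which plain vector-space cosine cannot.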

NFC-based Smartwork Service Model Design (NFC 기반의 스마트워크 서비스 모델 설계)

  • Park, Arum;Kang, Min Su;Jun, Jungho;Lee, Kyoung Jun
    • Journal of Intelligence and Information Systems
    • /
    • v.19 no.2
    • /
    • pp.157-175
    • /
    • 2013
  • Since the Korean government announced its 'Smartwork promotion strategy' in 2010, Korean firms and government organizations have started to adopt smartwork. However, smartwork has been implemented only in a few large enterprises and government organizations rather than in SMEs (small and medium enterprises). In the USA, both Yahoo! and Best Buy have stopped their flexible work programs because of reported low productivity and job loafing problems. In addition, from the literature on smartwork, we could draw obstacles to smartwork adoption and categorize them into three types: institutional, organizational, and technological. The first category of smartwork adoption obstacles, institutional, includes the difficulty of smartwork performance evaluation metrics, the lack of readiness of organizational processes, limitation of smartwork types and models, lack of employee participation in the smartwork adoption procedure, the high cost of building a smartwork system, and insufficient government support. The second category, organizational, includes limitations of the organizational hierarchy, wrong perceptions of employees and employers, difficulty in close collaboration, low productivity with remote coworkers, insufficient understanding of remote working, and lack of training about smartwork. The third category, technological, includes security concerns about mobile work, lack of specialized solutions, and lack of adoption and operation know-how. To overcome the current problems of smartwork in practice and the obstacles reported in the literature, we suggest a novel smartwork service model based on NFC (Near Field Communication). This paper suggests an NFC-based Smartwork Service Model composed of an NFC-based smartworker networking service and an NFC-based smartwork space management service. The NFC-based smartworker networking service comprises an NFC-based communication/SNS service and an NFC-based recruiting/job-seeking service. 
NFC-based communication/SNS Service Model supplements the key shortcomings that existing smartwork service model has. By connecting to existing legacy system of a company through NFC tags and systems, the low productivity and the difficulty of collaboration and attendance management can be overcome since managers can get work processing information, work time information and work space information of employees and employees can do real-time communication with coworkers and get location information of coworkers. Shortly, this service model has features such as affordable system cost, provision of location-based information, and possibility of knowledge accumulation. NFC-based recruiting/job-seeking service provides new value by linking NFC tag service and sharing economy sites. This service model has features such as easiness of service attachment and removal, efficient space-based work provision, easy search of location-based recruiting/job-seeking information, and system flexibility. This service model combines advantages of sharing economy sites with the advantages of NFC. By cooperation with sharing economy sites, the model can provide recruiters with human resource who finds not only long-term works but also short-term works. Additionally, SMEs (Small Medium-sized Enterprises) can easily find job seeker by attaching NFC tags to any spaces at which human resource with qualification may be located. In short, this service model helps efficient human resource distribution by providing location of job hunters and job applicants. NFC-based smartwork space management service can promote smartwork by linking NFC tags attached to the work space and existing smartwork system. This service has features such as low cost, provision of indoor and outdoor location information, and customized service. In particular, this model can help small company adopt smartwork system because it is light-weight system and cost-effective compared to existing smartwork system. 
This paper proposes the scenarios of the service models, the roles and incentives of the participants, and the comparative analysis. The superiority of NFC-based smartwork service model is shown by comparing and analyzing the new service models and the existing service models. The service model can expand scope of enterprises and organizations that adopt smartwork and expand the scope of employees that take advantages of smartwork.

Machine learning-based corporate default risk prediction model verification and policy recommendation: Focusing on improvement through stacking ensemble model (머신러닝 기반 기업부도위험 예측모델 검증 및 정책적 제언: 스태킹 앙상블 모델을 통한 개선을 중심으로)

  • Eom, Haneul;Kim, Jaeseong;Choi, Sangok
    • Journal of Intelligence and Information Systems
    • /
    • v.26 no.2
    • /
    • pp.105-129
    • /
    • 2020
  • This study uses corporate data from 2012 to 2018, when K-IFRS was applied in earnest, to predict default risks. The data used in the analysis totaled 10,545 rows, consisting of 160 columns: 38 from the statement of financial position, 26 from the statement of comprehensive income, 11 from the statement of cash flows, and 76 financial ratio indices. Unlike most prior studies, which used the default event as the basis for learning about default risk, this study calculated default risk using the market capitalization and stock price volatility of each company based on the Merton model. Through this, it was able to solve the problem of data imbalance due to the scarcity of default events, which had been pointed out as a limitation of the existing methodology, and to reflect the differences in default risk that exist among ordinary companies. Because learning was conducted using only corporate information available for unlisted companies, the default risks of unlisted companies without stock price information can be appropriately derived. This makes it possible to provide stable default risk assessment services to companies for which proper default risk is difficult to determine with traditional credit rating models, such as small and medium-sized companies and startups. Although corporate default risk prediction using machine learning has been actively studied recently, model bias issues exist because most studies make predictions based on a single model. A stable and reliable valuation methodology is required for the calculation of default risk, given that an entity's default risk information is very widely used in the market and sensitivity to differences in default risk is high. Strict standards are also required for the calculation methods. 
The credit rating method stipulated by the Financial Services Commission in the Financial Investment Regulations calls for the preparation of evaluation methods, including verification of their adequacy, in consideration of past statistical data and experience with credit ratings and changes in future market conditions. This study reduced individual models' bias by utilizing stacking ensemble techniques that synthesize various machine learning models. This allows us to capture complex nonlinear relationships between default risk and various corporate information and to maximize the advantages of machine learning-based default risk prediction models, which take less time to calculate. To calculate the forecasts of each sub-model used as input data for the Stacking Ensemble model, the training data were divided into seven pieces, and the sub-models were trained on the divided sets to produce forecasts. To compare predictive power with the Stacking Ensemble model, Random Forest, MLP, and CNN models were trained with the full training data, and then the predictive power of each model was verified on the test set. The analysis showed that the Stacking Ensemble model exceeded the predictive power of the Random Forest model, which had the best performance among the single models. Next, to check for statistically significant differences between the Stacking Ensemble model and each individual model, pairs of the Stacking Ensemble model's forecasts and each individual model's forecasts were constructed. Because the Shapiro-Wilk normality test showed that none of the pairs followed normality, the nonparametric Wilcoxon rank sum test was used to check whether the two forecasts making up each pair showed statistically significant differences. The analysis showed that the forecasts of the Stacking Ensemble model differed statistically significantly from those of the MLP model and the CNN model. 
In addition, this study can provide a methodology that allows existing credit rating agencies to apply machine learning-based bankruptcy risk prediction methodologies, given that traditional credit rating models can also be incorporated as sub-models in calculating the final default probability. The Stacking Ensemble techniques proposed in this study can also help designs meet the requirements of the Financial Investment Business Regulations through the combination of various sub-models. We hope that this research will be used as a resource to increase practical use by overcoming and improving the limitations of existing machine learning-based models.
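
The out-of-fold scheme described in this abstract, where the training data are split into seven pieces and each sub-model predicts only the rows it was not trained on, can be sketched as follows. The "sub-models" here are deliberately trivial mean and median predictors, standing in for the Random Forest, MLP, and CNN models of the paper; the target values are invented.

```python
def kfold(n, k):
    """Deterministic k-fold index split (no shuffling, for illustration)."""
    return [list(range(i, n, k)) for i in range(k)]

def mean_model(ys):   return sum(ys) / len(ys)
def median_model(ys): return sorted(ys)[len(ys) // 2]

def oof_predictions(y, k, models):
    """One out-of-fold prediction column per sub-model."""
    n = len(y)
    oof = [[0.0] * n for _ in models]
    for fold in kfold(n, k):
        train = [y[i] for i in range(n) if i not in fold]
        for m_idx, fit in enumerate(models):
            pred = fit(train)            # "train" the trivial sub-model
            for i in fold:
                oof[m_idx][i] = pred     # predict only the held-out rows
    return oof

y = [0.1, 0.9, 0.2, 0.8, 0.15, 0.85, 0.3]   # hypothetical default risks
oof = oof_predictions(y, k=7, models=[mean_model, median_model])
# meta-model input: one out-of-fold column per sub-model, one row per sample
meta_features = list(zip(*oof))
assert len(meta_features) == len(y)
```

A meta-model (e.g. a linear regression) would then be fit on `meta_features` against `y`; because every prediction comes from a model that never saw that row, the meta-model's training data are not contaminated by sub-model overfitting.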

The Results and Prognostic Factors of Chemo-radiation Therapy in the Management of Small Cell Lung Cancer (항암화학요법과 방사선 치료를 시행한 소세포폐암 환자의 치료 성적 -생존율과 예후인자, 실패양상-)

  • Kim Eun-Seog;Choi Doo-Ho;Won Jong-Ho;Uh Soo-Taek;Hong Dae-Sik;Park Choon-Sik;Park Hee-Sook;Youm Wook
    • Radiation Oncology Journal
    • /
    • v.16 no.4
    • /
    • pp.433-440
    • /
    • 1998
  • Purpose : Although small cell lung cancer (SCLC) has a high response rate to chemotherapy and radiotherapy (RT), the prognosis is dismal. The authors evaluated survival and failure patterns according to the prognostic factors in SCLC patients who had thoracic radiation therapy with chemotherapy. Materials and Methods : One hundred and twenty-nine patients with SCLC received thoracic radiation therapy from August 1985 to December 1990. Seventy-seven accessible patients were evaluated retrospectively among the 87 patients who completed RT. The median follow-up period was 14 months (2-87 months). Results : The two-year survival rate was 13% with a median survival time of 14 months. The two-year survival rates of limited disease and extensive disease were 20% and 8%, respectively, with median survival times of 14 months and 9 months, respectively. Twenty-two patients (88%) with limited disease showed complete response (CR) and 3 patients (12%) showed partial response (PR). The two-year survival rates of the CR and PR groups were 24% and 0%, with median survival times of 14 months and 5 months, respectively (p=0.005). No patient with serum sodium lower than 135 mmol/L survived 2 years, and their median survival time was 7 months (p=0.002). Patients with alkaline phosphatase lower than 130 IU/L showed a 26% two-year survival rate and a median survival time of 14 months, while those with alkaline phosphatase higher than 130 IU/L showed no two-year survival and a median survival time of 5 months (p=0.019). No statistical differences were found according to age, sex, or performance status. Among the patients with extensive disease, two-year survival rates according to the metastatic sites were 14%, 0%, and 7% for brain, liver, and other metastatic sites, respectively, with median survival times of 9 months, 9 months, and 8 months, respectively (p>0.05). 
Two year survivals on CR group and PR group were 15$\%$ and 4$\%$, respectively, with a median survival time of 11 months and 7 months, respectively (p=0.01). Conclusion : For SCLC, complete response after chemoradiotherapy was the most significant prognostic tactor. To achieve this goal. there should be further investigation about hyperfractionation, dose escalation, and compatible chemo-radiation schedule such as concurrent chemo-radiation and early radiation therapy with chemotherapy.
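The median survival times and two-year survival rates quoted above are the kind of figures produced by a Kaplan-Meier analysis. As an illustration only (the follow-up data below are hypothetical, not the study's), a minimal estimator can be sketched as:

```python
def km_survival(times, events):
    """Kaplan-Meier steps: (event time, survival probability).
    times: follow-up in months; events: 1 = death observed, 0 = censored."""
    steps, s = [], 1.0
    for t in sorted({tt for tt, e in zip(times, events) if e == 1}):
        at_risk = sum(1 for tt in times if tt >= t)  # still under follow-up at t
        deaths = sum(1 for tt, e in zip(times, events) if tt == t and e == 1)
        s *= 1 - deaths / at_risk
        steps.append((t, s))
    return steps

def km_median(times, events):
    """Median survival: first time the estimate drops to 0.5 or below."""
    for t, s in km_survival(times, events):
        if s <= 0.5:
            return t
    return None  # median not reached within follow-up

# Hypothetical follow-up data (months), all deaths observed
print(km_median([3, 5, 7, 9, 11, 14, 20, 30], [1] * 8))  # → 9
```

The p-values in the abstract would come from a log-rank test comparing such curves between groups, which a statistics package handles in practice.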


Effects of an Aspirated Radiation Shield on Temperature Measurement in a Greenhouse (강제 흡출식 복사선 차폐장치가 온실의 기온측정에 미치는 영향)

  • Jeong, Young Kyun;Lee, Jong Goo;Yun, Sung Wook;Kim, Hyeon Tae;Ahn, Enu Ki;Seo, Jae Seok;Yoon, Yong Cheol
    • Journal of Bio-Environment Control
    • /
    • v.28 no.1
    • /
    • pp.78-85
    • /
    • 2019
  • This study examined the performance of an aspirated radiation shield (ARS) built in the investigators' lab, characterized by relatively simple construction and low cost, based on survey data and reports on errors in its measurements of temperature and relative humidity. The findings were summarized as follows: the ARS and the Jinju weather station recorded ranges of maximum, average, and minimum temperature of 2.0~34.1°C, -6.1~22.2°C, -14.0~15.1°C and 0.4~31.5°C, -5.8~22.0°C, -14.1~16.3°C, respectively. There were no large differences in temperature measurements between the two sites, except that the lowest and highest points of maximum temperature were higher on the campus by 1.6°C and 2.6°C, respectively. The measurements of the ARS were tested against those of a standard thermometer: the temperature measured by the ARS differed from the standard thermometer by -2.0°C to +1.8°C. The correlation analysis with the standard thermometer showed a coefficient of determination of 0.99. Temperature was also compared with and without fans, and the maximum, average, and minimum temperatures were higher overall without fans by 0.5~7.6°C, 0.3~4.6°C, and 0.5~3.9°C, respectively. Daily average relative humidity measurements were compared between the ARS and the Jinju weather station, and the ARS measurements were slightly higher. The measurements on June 27, July 26 and 29, and August 20 were higher by 5.7%, 5.2%, 9.1%, and 5.8%, respectively, but differences in the monthly averages were trivial at 2.0~3.0%.
Relative humidity differences were in the range of -3.98~+7.78% overall between the ARS and an Assmann psychrometer. The study analyzed correlations in relative humidity against the Jinju weather station and the Assmann psychrometer and found high correlations, with coefficients of determination of 0.94 and 0.97, respectively.
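The coefficients of determination (R²) reported above come from comparing paired readings against a reference instrument. A minimal sketch of that computation, using hypothetical paired temperature readings rather than the study's data:

```python
def r_squared(observed, predicted):
    """Coefficient of determination between a reference series and a test series."""
    mean_obs = sum(observed) / len(observed)
    ss_res = sum((o - p) ** 2 for o, p in zip(observed, predicted))
    ss_tot = sum((o - mean_obs) ** 2 for o in observed)
    return 1 - ss_res / ss_tot

# Hypothetical paired readings: standard thermometer vs. ARS sensor (°C)
standard = [2.0, 5.5, 10.1, 15.3, 20.0, 25.4, 30.2]
ars = [2.3, 5.2, 10.0, 15.6, 19.8, 25.5, 30.0]
print(round(r_squared(standard, ars), 3))  # → 0.999
```

A value near 1.0, as in the study's 0.99, indicates the test instrument tracks the reference almost perfectly across the measured range.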

Animal Infectious Diseases Prevention through Big Data and Deep Learning (빅데이터와 딥러닝을 활용한 동물 감염병 확산 차단)

  • Kim, Sung Hyun;Choi, Joon Ki;Kim, Jae Seok;Jang, Ah Reum;Lee, Jae Ho;Cha, Kyung Jin;Lee, Sang Won
    • Journal of Intelligence and Information Systems
    • /
    • v.24 no.4
    • /
    • pp.137-154
    • /
    • 2018
  • Animal infectious diseases, such as avian influenza and foot-and-mouth disease, occur almost every year and cause huge economic and social damage to the country. To prevent this, the quarantine authorities have made various human and material efforts, but the infectious diseases have continued to occur. Avian influenza was first identified in 1878 and rose to a national issue due to its high lethality. Foot-and-mouth disease is considered the most critical animal infectious disease internationally. In a nation where this disease has not spread, foot-and-mouth disease is recognized as an economic or political disease because it restricts international trade by complicating the import of processed and non-processed livestock, and because quarantine is costly. In a society where the whole nation is connected as a single zone of daily life, there is no way to fully prevent the spread of infectious disease. Hence, there is a need to detect the occurrence of the disease and take action before it spreads. For both human and animal infectious diseases, an epidemiological investigation of confirmed cases is conducted as soon as a diagnosis is confirmed, and measures are taken to prevent the spread of disease according to the investigation results. The foundation of an epidemiological investigation is figuring out where a subject has been and whom he or she has met. From a data perspective, this can be defined as predicting the cause of a disease outbreak, the outbreak location, and future infections by collecting and analyzing geographic data and relational data. Recently, attempts have been made to develop infectious disease prediction models using Big Data and deep learning technology, but there is little active research in the form of model-building studies or case reports.
KT and the Ministry of Science and ICT have been carrying out Big Data projects since 2014, as part of national R&D projects, to analyze and predict the routes of livestock-related vehicles. To prevent animal infectious diseases, the researchers first developed a prediction model based on regression analysis using vehicle movement data. After that, more accurate prediction models were constructed using machine learning algorithms such as Logistic Regression, Lasso, Support Vector Machine, and Random Forest. In particular, the prediction model for 2017 added the risk of diffusion to facilities, and the performance of the model was improved by tuning the model's hyper-parameters in various ways. The Confusion Matrix and ROC Curve show that the model constructed in 2017 is superior to the earlier machine learning models. The difference between the 2016 model and the 2017 model is that the later model also used visiting information on facilities such as feed factories and slaughterhouses, and its poultry data, previously limited to chickens and ducks, was expanded to geese and quail. In addition, an explanation of the results was added in 2017 to help the authorities make decisions and to establish a basis for persuading stakeholders. This study reports an animal infectious disease prevention system constructed on the basis of Big Data on hazardous vehicle movement, farms, and the environment. The significance of this study is that it describes the evolution of a prediction model using Big Data deployed in the field; the model is expected to be more complete if virus characteristics are taken into consideration. This will contribute to data utilization and analysis model development in related fields. In addition, we expect that the system constructed in this study will provide more proactive and effective prevention.
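The Confusion Matrix and ROC-curve comparison described above can be sketched with two short helpers. This is an illustration only: the outbreak labels and model risk scores below are hypothetical placeholders, not the project's data, and real evaluations would typically use a library such as scikit-learn.

```python
def confusion_matrix(y_true, y_pred):
    """Return (TP, FP, FN, TN) for binary labels (1 = outbreak)."""
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    tn = sum(t == 0 and p == 0 for t, p in zip(y_true, y_pred))
    return tp, fp, fn, tn

def roc_auc(y_true, scores):
    """Rank-based AUC: probability a random positive outscores a random negative."""
    pos = [s for t, s in zip(y_true, scores) if t == 1]
    neg = [s for t, s in zip(y_true, scores) if t == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical outbreak labels and model risk scores for six farms
y_true = [1, 1, 1, 0, 0, 0]
scores = [0.9, 0.8, 0.4, 0.5, 0.3, 0.1]
y_pred = [int(s >= 0.5) for s in scores]
print(confusion_matrix(y_true, y_pred))   # → (2, 1, 1, 2)
print(round(roc_auc(y_true, scores), 3))  # → 0.889
```

Comparing candidate models (regression baseline vs. Logistic Regression, Lasso, SVM, Random Forest) then amounts to computing these metrics for each model on held-out farms and preferring the one with the better ROC curve, as the 2017 model selection did.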