• Title/Summary/Keyword: time combination


A Study on the Meaning and Strategy of Keyword Advertising Marketing

  • Park, Nam Goo
    • Journal of Distribution Science / v.8 no.3 / pp.49-56 / 2010
  • At the initial stage of Internet advertising, banner advertising came into fashion. As the Internet developed into a central part of daily life and competition in the online advertising market grew fierce, there was not enough space for banner advertising, which rushed to portal sites only. All these factors were responsible for an upsurge in advertising prices. Consequently, the high-cost, low-efficiency problems of banner advertising were raised, which led to the emergence of keyword advertising as a new type of Internet advertising to replace its predecessor. In the early 2000s, when Internet advertising took off, display advertising, including banner advertising, dominated the Net. However, display advertising showed signs of gradual decline and registered negative growth in 2009, whereas keyword advertising grew rapidly and began to outdo display advertising as of 2005. Keyword advertising refers to the advertising technique that exposes relevant advertisements at the top of search sites when one searches for a keyword. Instead of exposing advertisements to unspecified individuals as banner advertising does, keyword advertising, a targeted advertising technique, shows advertisements only when customers search for a desired keyword, so that only highly prospective customers are given a chance to see them. In this context, it is also referred to as search advertising. It is regarded as more aggressive advertising with a higher hit rate than previous advertising in that, instead of the seller discovering customers and running advertisements for them as TV, radio, or banner advertising does, it exposes advertisements to visiting customers. Keyword advertising makes it possible for a company to seek publicity online simply by making use of a single word, and to achieve maximum efficiency at minimum cost. 
The strong point of keyword advertising is that customers are allowed to contact the products in question directly, making it more efficient than advertising in mass media such as TV and radio. The weak point of keyword advertising is that a company must register its advertisement on each and every portal site and finds it hard to exercise substantial supervision over its advertisements, with the possibility of its advertising expenses exceeding its profits. Keyword advertising serves as the most appropriate method of advertising for the sales and publicity of small and medium enterprises, which need maximum advertising effect at a low advertising cost. At present, keyword advertising is divided into CPC advertising and CPM advertising. The former is known as the most efficient technique and is also referred to as advertising based on the meter-rate system: a company pays according to the number of clicks on the keyword that users have searched. This model is representatively adopted by Overture, Google's AdWords, Naver's Clickchoice, and Daum's Clicks. CPM advertising depends on a flat-rate payment system, making a company pay for its advertisement on the basis of the number of exposures, not the number of clicks. This method fixes the price of an advertisement per 1,000 exposures, and is mainly adopted by Naver's Timechoice, Daum's Speciallink, and Nate's Speedup. At present, the CPC method is most frequently adopted. The weak point of the CPC method is that advertising costs can rise through constant clicks from the same IP. If a company makes good use of strategies for maximizing the strong points of keyword advertising and complementing its weak points, it is highly likely to turn its visitors into prospective customers. 
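The CPC and CPM billing models described above can be contrasted with a toy calculation; all figures (cost per click, cost per 1,000 exposures, click-through rate) are hypothetical illustrations, not values from the study.

```python
# Toy comparison of the two keyword-advertising billing models.
def cpc_cost(clicks, cost_per_click):
    """CPC (meter-rate system): pay for each click on the searched keyword."""
    return clicks * cost_per_click

def cpm_cost(impressions, cost_per_thousand):
    """CPM (flat-rate system): pay per 1,000 exposures, regardless of clicks."""
    return impressions / 1000 * cost_per_thousand

impressions = 50_000               # times the keyword ad is exposed (hypothetical)
clicks = int(impressions * 0.02)   # assume a 2% click-through rate -> 1,000 clicks
print(cpc_cost(clicks, 500))       # 1,000 clicks at 500 per click
print(cpm_cost(impressions, 5_000))  # 50 blocks of 1,000 exposures at 5,000 each
```

Under these made-up numbers CPM is cheaper, but constant clicks from the same IP (the CPC weakness noted above) only raise costs under the CPC model.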
Accordingly, an advertiser should analyze customers' behavior and approach them in a variety of ways, trying hard to find out what they want. With this in mind, he or she has to put multiple keywords into use when running ads. When first running an ad, the advertiser should decide which keyword to select. The advertiser should consider how many individuals using a search engine will click the keyword in question and how much the advertisement will cost. As the popular keywords that search engine users frequently use carry a high unit cost per click, advertisers without much money for advertising at the initial phase should pay attention to detailed keywords suitable to their budget. Detailed keywords are also referred to as peripheral keywords or extension keywords, and can be seen as combinations of major keywords. Most keywords take the form of text. The biggest strong point of text-based advertising is that it looks like search results, arousing little antipathy. But it fails to attract much attention precisely because most keyword advertising is in the form of text. Image-embedded advertising is easy to notice thanks to its images, but it is exposed on the lower part of a web page and is clearly recognizable as an advertisement, which leads to a low click-through rate. However, its strong point is that its prices are lower than those of text-based advertising. If a company owns a logo or a product that people recognize easily, it is well advised to make good use of image-embedded advertising so as to attract Internet users' attention. Advertisers should analyze their logs and examine customers' responses based on site events and product composition as a vehicle for monitoring customer behavior in detail. 
Besides, keyword advertising allows advertisers to analyze the advertising effects of exposed keywords through log analysis. Log analysis refers to a close analysis of the current situation of a site based on information about visitors, drawn from the number of visitors, page views, and cookie values. A user's IP, the pages used, the time of use, and cookie values are stored in the log files generated by each Web server. The log files contain a huge amount of data. As it is almost impossible to analyze these log files directly, one is supposed to analyze them using log-analysis solutions. The generic information that can be extracted with log-analysis tools includes the total number of page views, average page views per day, basic page views, page views per visit, the total number of hits, average hits per day, hits per visit, the number of visits, average visits per day, the net number of (unique) visitors, average visitors per day, one-time visitors, visitors who have come more than twice, and average usage hours. Such data are useful for analyzing the situation and current status of rival companies as well as for benchmarking. As keyword advertising exposes advertisements exclusively on search-result pages, competition among advertisers attempting to secure popular keywords is very fierce. Some portal sites keep giving priority to existing advertisers, whereas others give all advertisers a chance to purchase the keywords in question once the advertising contract is over. 
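The kind of log-file aggregation described above can be sketched in a few lines; the log format and entries here are hypothetical (real Web-server logs also carry cookie values, user agents, referrers, and so on).

```python
from collections import Counter

# Hypothetical access-log lines in a simplified "IP timestamp page" format.
log_lines = [
    "1.2.3.4 2010-03-01T10:00 /index",
    "1.2.3.4 2010-03-01T10:01 /product",
    "5.6.7.8 2010-03-01T11:00 /index",
    "5.6.7.8 2010-03-02T09:00 /index",
]

page_views = len(log_lines)                          # total page views
visitors = {line.split()[0] for line in log_lines}   # net (unique) visitors by IP
views_per_page = Counter(line.split()[2] for line in log_lines)

print(page_views, len(visitors), views_per_page["/index"])  # 4 2 3
```

A real log-analysis solution derives the other metrics listed above (visits, hits per visit, returning visitors) from the same raw fields, grouping by IP or cookie and by session window.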
If an advertiser relies on keywords sensitive to seasons and timeliness on sites that give priority to established advertisers, he or she may as well purchase a vacant advertising slot lest the appropriate timing for advertising be missed. However, Naver does not give priority to existing advertisers for any keyword advertisements. In this case, one can secure keywords in advance by entering into a contract after confirming the contract period for advertising. This study is designed to take a look at marketing for keyword advertising and to present effective strategies for keyword advertising marketing. At present, the Korean CPC advertising market is virtually monopolized by Overture. Its strong points are that Overture is based on the CPC charging model and that advertisements are registered at the top of the most representative portal sites in Korea. These advantages make it the most appropriate medium for small and medium enterprises. However, the CPC method of Overture has its weak points, too: the CPC method is not a perfect advertising model among the search advertisements in the online market. So it is absolutely necessary for small and medium enterprises, including independent shopping malls, to complement the weaknesses of the CPC method and make good use of strategies for maximizing its strengths, so as to increase their sales and create a point of contact with customers.


Late Rectal Complication in Patients treated with High Dose Rate Brachytherapy for Stage IIB Carcinoma of the Cervix (FIGO병기 IIB 자궁경부암에서 고선량 강내 방사선치료후의 후기 직장 합병증)

  • Chung, Eun-Ji;Kim, Gwi-Eon;Suh, Chang-Ok;Keum, Ki-Chang;Kim, Woo-Cheol
    • Radiation Oncology Journal / v.14 no.1 / pp.41-52 / 1996
  • Purpose : This paper reports a dosimetric study of 88 patients treated with a combination of external radiotherapy and high dose rate ICR for FIGO stage IIB carcinoma of the cervix. The purpose is to investigate, retrospectively, the correlation between the radiation doses to the rectum, the external radiation dose to the whole pelvis, the ICR reference volume, TDF, BED, and the incidence of late rectal complications. Materials and Methods : From November 1989 through December 1992, 88 patients with stage IIB cervical carcinoma received radical radiotherapy at the Department of Radiation Oncology in Yonsei University Hospital. Radiotherapy consisted of 44-54 Gy (median 49 Gy) external beam irradiation plus high dose rate intracavitary brachytherapy with 5 Gy per fraction twice a week to a total dose of 30 Gy at point A. The maximum dose to the rectum by contrast (r, R) and the reference rectal dose by ICRU 38 (dr, DR) were calculated. The ICR reference volume was calculated retrospectively with the Gamma Dot 3.11 HDR planning system. The time-dose factor (TDF) and the biologically effective dose (BED) were calculated. Results : Twenty-seven ($30.7\%$) of the 88 patients developed late rectal complications: 12 patients ($13.6\%$) with grade 1, 12 patients ($13.6\%$) with grade 2, and 3 patients ($3.4\%$) with grade 3. We found a significant correlation between the external whole pelvis irradiation dose and grade 2, 3 rectal complications. The mean dose to the whole pelvis for the group of patients with grade 2, 3 complications was higher, $4093.3\pm453.1$ cGy, than that for the patients without complications, $3873.8\pm415.6$ (0.05 […] $7163.0\pm838.5$ cGy, than that for the patients without rectal complication, $0772.7\pm884.0$ (p<0.05). There was no correlation of the rate of grade 2, 3 rectal complications with the ICR rectal doses (r, dr), ICR reference volume, TDF, or BED. 
Conclusion : This investigation has revealed a significant correlation between the rectal reference dose by ICRU 38 (DR) or the maximum rectal dose by contrast (R), together with the dose to the whole pelvis, and the incidence of grade 2, 3 late rectal complications in patients with stage IIB cervical cancer undergoing external beam radiotherapy and HDR ICR. Thus these rectal reference point doses and the whole pelvis dose appear to be useful prognostic indicators of late rectal complications in high dose rate ICR treatment of cervical carcinoma.
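The abstract reports TDF and BED calculations for each patient. As a rough illustration, BED under the standard linear-quadratic model is BED = n·d·(1 + d/(α/β)); the sketch below applies it to the ICR schedule quoted above (6 fractions of 5 Gy to a total of 30 Gy), with the α/β value an assumed illustrative input rather than a figure taken from the paper.

```python
# BED under the linear-quadratic model: BED = n * d * (1 + d / (alpha/beta)).
# The alpha/beta ratio used here (3 Gy, a value often quoted for
# late-responding tissue) is an illustrative assumption, not from the paper.
def bed(n_fractions, dose_per_fraction, alpha_beta):
    """Biologically effective dose for n fractions of d Gy each."""
    return n_fractions * dose_per_fraction * (1 + dose_per_fraction / alpha_beta)

# The ICR schedule quoted above: 5 Gy per fraction, 30 Gy total (6 fractions).
print(bed(6, 5, 3.0))  # approximately 80 Gy_3
```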


Results of Radiation Therapy for Carcinoma of the Uterine Cervix (자궁경부암의 방사선치료 성적)

  • Lee Kyung-Ja
    • Radiation Oncology Journal / v.13 no.4 / pp.359-368 / 1995
  • Purpose : This is a retrospective analysis of the pattern of failure, survival rate, and prognostic factors of 114 patients with histologically proven invasive cancer of the uterine cervix treated with definitive irradiation. Materials and Methods : One hundred fourteen patients with invasive carcinoma of the cervix were treated with a combination of intracavitary irradiation using a Fletcher-Suit applicator and external beam irradiation with 6 MV X-rays at Ewha Womans University Hospital between March 1982 and March 1990. The median age was 53 years (range: 30-77 years). The FIGO stage distribution was 19 for IB, 23 for IIA, 42 for IIB, 12 for IIIA, and 18 for IIIB. The summation dose of external beam and intracavitary irradiation at point A was 80-90 Gy (median: 8580 cGy) in early stage (IB-IIA) and 85-100 Gy (median: 8850 cGy) in advanced stage (IIB-IIIB). The Kaplan-Meier method was used to estimate survival rates, and multivariate analysis of prognostic factors was performed using the log-likelihood for the Weibull model. Results : The pelvic failure rates by stage were $10.5{\%}$ for IB, $8.7{\%}$ for IIA, $23.8{\%}$ for IIB, $50.0{\%}$ for IIIA, and $38.9{\%}$ for IIIB. The rates of distant metastasis by stage were $0{\%}$ for IB, $8.7{\%}$ for IIA, $4.8{\%}$ for IIB, $0{\%}$ for IIIA, and $11.1{\%}$ for IIIB. The time of failure ranged from 3 to 50 months, with a median of 15 months after completion of radiation therapy. There was no significant correlation between dose at point A ($\leq$90 Gy vs >90 Gy) and pelvic tumor control (p>0.05). Incidence rates of grade 2 rectal and bladder complications were $3.5{\%}$ (4/114) and $7{\%}$ (8/114), respectively; 1 patient had sigmoid colon obstruction and 1 patient had severe cystitis. The overall 5-year survival rate was $70.5{\%}$ and the disease-free survival rate was $53.6{\%}$. The overall 5-year survival rate by stage was $100{\%}$ for IB, $76.9{\%}$ for IIA, $77.6{\%}$ for IIB, $87.5{\%}$ for IIIA, and $69.1{\%}$ for IIIB. 
Five-year disease-free survival rates by stage were $81.3{\%}$ for IB, $67.9{\%}$ for IIA, $46.8{\%}$ for IIB, $45.4{\%}$ for IIIA, and $34.4{\%}$ for IIIB. The prognostic factors for disease-free survival by multivariate analysis were performance status (p=0.0063) and response rate after completion of radiation therapy (p=0.0026), but stage, age, and radiation dose at point A were not significant. Conclusion : The result of radiation therapy for early-stage cancer of the uterine cervix was relatively good, but local control and survival rates in advanced stages were poor in spite of high-dose irradiation at point A above 90 Gy. Prospective randomized studies are recommended to establish optimal tumor doses for the various stages and volumes of carcinoma of the uterine cervix, and adjuvant chemotherapy or radiation-sensitizing agents must be considered to increase pelvic control and survival rates in advanced cancer of the uterine cervix.
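The survival figures above come from the Kaplan-Meier method; a minimal sketch of the product-limit estimator follows, with made-up follow-up data rather than the study's.

```python
# Product-limit (Kaplan-Meier) estimator; the follow-up data below are
# made up for illustration and are not the study's.
def kaplan_meier(times, events):
    """times: follow-up (months); events: 1 = failure observed, 0 = censored.
    Returns (time, estimated survival probability) at each failure time."""
    order = sorted(range(len(times)), key=lambda i: times[i])
    at_risk = len(times)
    s = 1.0
    curve = []
    for i in order:
        if events[i] == 1:      # a failure: the survival curve steps down
            s *= (at_risk - 1) / at_risk
            curve.append((times[i], s))
        at_risk -= 1            # failed or censored: leaves the risk set
    return curve

times = [6, 13, 25, 8, 50]   # months; the patient at 25 months is censored
events = [1, 1, 0, 1, 1]
curve = kaplan_meier(times, events)
print(curve)
```

Censored patients (lost to follow-up or still alive) shrink the risk set without dropping the curve, which is why the estimator is preferred over a naive fraction-surviving calculation.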


Comparative Analysis of Patterns of Care Study of Radiotherapy for Esophageal Cancer among Three Countries: South Korea, Japan and the United States (한국, 미국, 일본의 식도암 방사선 치료에 대한 PCS($1998{\sim}1999$) 결과의 비교 분석)

  • Hur, Won-Joo;Choi, Young-Min;Kim, Jeung-Kee;Lee, Hyung-Sik;Choi, Seok-Reyol;Kim, Il-Han
    • Radiation Oncology Journal / v.26 no.2 / pp.83-90 / 2008
  • Purpose: For the first time, a nationwide survey of the Patterns of Care Study (PCS) for the various radiotherapy treatments of esophageal cancer was carried out in South Korea. In order to observe the different parameters, as well as offer a solid cooperative system, we compared the Korean results with those observed in the United States (US) and Japan. Materials and Methods: Two hundred forty-six esophageal cancer patients from 21 institutions were enrolled in the South Korean study. The patients received radiation therapy (RT) from 1998 to 1999. In order to compare these results with those from the United States, a published study by Suntharalingam, which included 414 patients (treated by RT) from 59 institutions between 1996 and 1999, was chosen. In order to compare the South Korean with the Japanese data, we chose two different studies. The results published by Gomi were selected as the surgery group, in which 220 esophageal cancer patients were analyzed from 76 facilities. The patients underwent surgery and received RT with or without chemotherapy between 1998 and 2001. The non-surgery group originated from a study by Murakami, in which 385 patients were treated either by RT alone or RT with chemotherapy, but no surgery, between 1999 and 2001. Results: The median age of enrolled patients was highest in the Japanese non-surgery group (71 years old). The gender ratio was approximately 9:1 (male:female) in both the Korean and Japanese studies, whereas females made up 23.1% of the study population in the US study. Adenocarcinoma outnumbered squamous cell carcinoma in the US study, whereas squamous cell carcinoma was more prevalent in both the Korean and Japanese studies (Korea 96.3%, Japan 98%). An esophagogram, endoscopy, and chest CT scan were the main modalities of diagnostic evaluation used in all three countries. The US and Japan used the abdominal CT scan more frequently than abdominal ultrasonography. 
Treatment with radiotherapy alone was used least often in the US study (9.5%), compared to the Korean (23.2%) and Japanese (39%) studies. The combination of the three modalities (surgery + RT + chemotherapy) was performed least often in Korea (11.8%) compared to the Japanese (49.5%) and US (32.8%) studies. Chemotherapy (89%) and chemotherapy with concurrent chemoradiotherapy (97%) were most frequently used in the US study. Fluorouracil (5-FU) and cisplatin were the preferred drug treatments in all three countries. The median radiation dose was 50.4 Gy in the US study, as compared to 55.8 Gy in the Korean study, regardless of whether an operation was performed. However, in Japan, different median doses were delivered to the surgery (48 Gy) and non-surgery (60 Gy) groups. Conclusion: Although some aspects of the evaluation of esophageal cancer and its various treatment modalities were heterogeneous among the three countries surveyed, we found no remarkable differences in the RT dose or technique, including the number of portals and energy beams.

Performance analysis of Frequent Itemset Mining Technique based on Transaction Weight Constraints (트랜잭션 가중치 기반의 빈발 아이템셋 마이닝 기법의 성능분석)

  • Yun, Unil;Pyun, Gwangbum
    • Journal of Internet Computing and Services / v.16 no.1 / pp.67-74 / 2015
  • In recent years, frequent itemset mining that considers the importance of each item has been intensively studied as one of the important issues in the data mining field. According to the strategies utilizing item importance, itemset mining approaches for discovering itemsets based on item importance are classified as follows: weighted frequent itemset mining, frequent itemset mining using transactional weights, and utility itemset mining. In this paper, we perform an empirical analysis of frequent itemset mining algorithms based on transactional weights. The mining algorithms compute transactional weights by utilizing the weight of each item in large databases. In addition, these algorithms discover weighted frequent itemsets on the basis of item frequency and the weight of each transaction. Consequently, we can see the importance of a certain transaction through database analysis, because the weight of a transaction has a higher value if it contains many items with high weights. We not only analyze the advantages and disadvantages but also compare the performance of the most famous algorithms in the frequent itemset mining field based on transactional weights. As a representative of frequent itemset mining using transactional weights, WIS introduced the concept and strategies of transactional weights. In addition, there are various other state-of-the-art algorithms, WIT-FWIs, WIT-FWIs-MODIFY, and WIT-FWIs-DIFF, for extracting itemsets with weight information. To efficiently conduct the process of mining weighted frequent itemsets, the three algorithms use a special lattice-like data structure called the WIT-tree. The algorithms do not need an additional database scan after the construction of the WIT-tree is finished, since each node of the WIT-tree holds item information such as the item and transaction IDs. 
In particular, traditional algorithms conduct a number of database scans to mine weighted itemsets, whereas the algorithms based on the WIT-tree avoid this overhead by reading the database only once. Additionally, the algorithms use a technique for generating each new itemset of length N+1 on the basis of two different itemsets of length N. To discover new weighted itemsets, WIT-FWIs performs itemset combination using the information of the transactions that contain all the itemsets. WIT-FWIs-MODIFY has a unique feature that decreases the operations for calculating the frequency of the new itemset. WIT-FWIs-DIFF utilizes a technique based on the difference of two itemsets. To compare and analyze the performance of the algorithms in various environments, we use real datasets of two types (i.e., dense and sparse) and measure runtime and maximum memory usage. Moreover, a scalability test is conducted to evaluate the stability of each algorithm when the size of the database is changed. As a result, WIT-FWIs and WIT-FWIs-MODIFY show the best performance on the dense dataset, while on the sparse dataset WIT-FWIs-DIFF has better mining efficiency than the other algorithms. Compared to the algorithms using the WIT-tree, WIS, based on the Apriori technique, has the worst efficiency because it requires far more computations than the others on average.
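The transaction-weight idea the paper analyzes can be illustrated with a brute-force sketch: a transaction's weight is the mean weight of its items, and an itemset's weighted support is the sum of the weights of the transactions containing it. This is not the WIT-tree algorithm itself (which avoids the exhaustive enumeration below), and the weights, transactions, and threshold are hypothetical.

```python
from itertools import combinations

# Hypothetical item weights and transactions (not the paper's datasets).
item_weight = {"a": 0.9, "b": 0.6, "c": 0.3}
transactions = [{"a", "b"}, {"a", "c"}, {"a", "b", "c"}, {"b"}]

def transaction_weight(t):
    """Weight of a transaction: mean weight of the items it contains."""
    return sum(item_weight[i] for i in t) / len(t)

def weighted_support(itemset):
    """Sum of the weights of the transactions containing the itemset."""
    return sum(transaction_weight(t) for t in transactions if itemset <= t)

# Brute-force enumeration of all itemsets meeting the threshold
# (a WIT-tree-based miner reaches the same result without this full scan).
threshold = 1.0
items = sorted({i for t in transactions for i in t})
frequent = []
for k in range(1, len(items) + 1):
    for combo in combinations(items, k):
        ws = weighted_support(set(combo))
        if ws >= threshold:
            frequent.append((combo, round(ws, 2)))
print(frequent)
```

Note how a transaction of many high-weight items (here {a, b}) contributes more support than a single low-weight purchase, which is exactly the "importance of a transaction" the abstract describes.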

The Clinical Features of Endobronchial Tuberculosis - A Retrospective Study on 201 Patients for 6 years (기관지결핵의 임상상-201예에 대한 후향적 고찰)

  • Lee, Jae Young;Kim, Chung Mi;Moon, Doo Seop;Lee, Chang Wha;Lee, Kyung Sang;Yang, Suck Chul;Yoon, Ho Joo;Shin, Dong Ho;Park, Sung Soo;Lee, Jung Hee
    • Tuberculosis and Respiratory Diseases / v.43 no.5 / pp.671-682 / 1996
  • Background : Endobronchial tuberculosis is defined as tuberculous infection of the tracheobronchial tree with microbiological and histopathological evidence. Endobronchial tuberculosis has clinical significance due to its sequela of cicatricial stenosis, which causes atelectasis, dyspnea, and secondary pneumonia and may mimic bronchial asthma and pulmonary malignancy. Method : The authors carried out, retrospectively, a clinical study of 201 patients confirmed with endobronchial tuberculosis who visited the Department of Pulmonary Medicine at Hanyang University Hospital from January 1990 to April 1996. The following results were obtained. Results : 1) A total of 201 patients (19.5%) were confirmed as having endobronchial tuberculosis among the 1031 patients who had undergone flexible bronchofiberscopic examination. The number of male patients was 55 and that of female patients was 146, a male-to-female ratio of 1:2.7. 2) The age distribution was as follows: 61 cases (30.3%) in the third decade, 40 cases (19.9%) in the fourth decade, 27 cases (13.4%) in the sixth decade, 21 cases (10.4%) in the fifth decade, 19 cases (9.5%) in the age group between 15 and 19 years, 19 cases (9.5%) in the seventh decade, and 14 cases (7.0%) over 70 years, in decreasing order. 3) The most common symptom, among 192 cases, was cough (74.5%), followed by sputum (55.2%), dyspnea (28.6%), chest discomfort (19.8%), fever (17.2%), and hemoptysis (11.5%), in decreasing order; localized wheezing was heard in 15.6%. 4) On chest X-ray of 189 cases, consolidation was the most frequent finding (67.7%), followed by collapse (43.9%), cavitary lesion (11.6%), and pleural effusion (7.4%), in decreasing order; there were no abnormal findings in 3.2%. 5) In the 76 pulmonary function tests, a normal pattern was found in 44.7%, a restrictive pattern in 39.5%, an obstructive pattern in 11.8%, and a combined pattern in 3.9%. 
6) Among the total of 201 patients, bronchoscopy showed caseous pseudomembrane in 70 cases (34.8%), mucosal erythema and edema in 54 cases (26.9%), hyperplastic lesion in 52 cases (25.9%), fibrous stenosis in 22 cases (10.9%), and erosion or ulcer in 3 cases (1.5%). 7) In all 201 cases, bronchial washing AFB stain was positive in 103 cases (51.2%) and bronchial washing culture for tuberculous bacilli in 55 cases (27.4%). In the 99 bronchoscopic biopsies, AFB stain was positive in 36.4%, granuloma without positive AFB stain in 13.1%, chronic inflammation only in 36.4%, and non-diagnostic biopsy findings in 14.1%. Conclusions : Young female patients whose cough is resistant to general antitussive agents should be evaluated for endobronchial tuberculosis, even with a clear chest roentgenogram and negative sputum AFB stain. Furthermore, we would like to emphasize that the bronchoscopic approach is a substantially useful means of making a differential diagnosis of atelectasis in older patients of cancer age. A standard endoscopic classification of endobronchial tuberculosis is needed, and well-designed prospective studies are required to elucidate the effect of combination therapy using antituberculous chemotherapy with steroids on bronchial stenosis in patients with endobronchial tuberculosis.


Machine learning-based corporate default risk prediction model verification and policy recommendation: Focusing on improvement through stacking ensemble model (머신러닝 기반 기업부도위험 예측모델 검증 및 정책적 제언: 스태킹 앙상블 모델을 통한 개선을 중심으로)

  • Eom, Haneul;Kim, Jaeseong;Choi, Sangok
    • Journal of Intelligence and Information Systems / v.26 no.2 / pp.105-129 / 2020
  • This study uses corporate data from 2012 to 2018, when K-IFRS was applied in earnest, to predict default risks. The data used in the analysis totaled 10,545 rows, consisting of 160 columns including 38 from the statement of financial position, 26 from the statement of comprehensive income, 11 from the statement of cash flows, and 76 from the index of financial ratios. Unlike most prior studies, which used the default event as the basis for learning about default risk, this study calculated default risk using the market capitalization and stock price volatility of each company based on the Merton model. Through this, it was able to solve the problem of data imbalance due to the scarcity of default events, which has been pointed out as a limitation of the existing methodology, as well as the problem of reflecting the differences in default risk that exist among ordinary companies. Because learning was conducted using only corporate information available for unlisted companies, the default risks of unlisted companies without stock price information can be appropriately derived. This makes it possible to provide stable default risk assessment services to unlisted companies whose default risk is difficult to determine with traditional credit rating models, such as small and medium-sized companies and startups. Although corporate default risk prediction using machine learning has been actively studied recently, model bias issues exist because most studies make predictions based on a single model. A stable and reliable valuation methodology is required for the calculation of default risk, given that an entity's default risk information is very widely utilized in the market and sensitivity to differences in default risk is high. Strict standards are also required for the methods of calculation. 
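The Merton-model step described above can be sketched as follows. The firm defaults, in this framework, when its asset value falls below its debt at the horizon; the asset value, debt, drift, and volatility figures below are illustrative stand-ins, not the study's estimation procedure (which infers these quantities from market capitalization and stock price volatility).

```python
import math

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

def merton_pd(asset_value, debt, drift, sigma, horizon=1.0):
    """Merton-model default probability: the chance that the asset value
    ends below the debt face value at the horizon.  The distance to
    default dd shrinks as leverage or volatility rises."""
    dd = (math.log(asset_value / debt)
          + (drift - 0.5 * sigma ** 2) * horizon) / (sigma * math.sqrt(horizon))
    return norm_cdf(-dd)

# Hypothetical firm: assets 1.5x debt, 30% asset volatility, 5% drift.
print(merton_pd(150, 100, 0.05, 0.3))
```

Because the output is a continuous probability rather than a binary default event, every firm receives a graded risk label, which is how the study sidesteps the class-imbalance problem it describes.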
The credit rating method stipulated by the Financial Services Commission in the Financial Investment Regulations calls for the preparation of evaluation methods, including verification of their adequacy, in consideration of past statistical data and experience with credit ratings and changes in future market conditions. This study reduced individual models' bias by utilizing stacking ensemble techniques that synthesize various machine learning models. This allows us to capture complex nonlinear relationships between default risk and various corporate information, and to maximize the advantages of machine learning-based default risk prediction models, which take less time to calculate. To produce the forecasts of each sub-model used as input data for the stacking ensemble model, the training data were divided into seven pieces, and the sub-models were trained on the divided sets to produce forecasts. To compare predictive power, Random Forest, MLP, and CNN models were trained with the full training data, and then the predictive power of each model was verified on the test set. The analysis showed that the stacking ensemble model exceeded the predictive power of the Random Forest model, which had the best performance among the single models. Next, to check for statistically significant differences between the stacking ensemble model and the forecasts of each individual model, pairs between the stacking ensemble model and each individual model were constructed. Because the Shapiro-Wilk normality test showed that none of the pairs followed normality, we used the nonparametric Wilcoxon rank-sum test to check whether the two sets of model forecasts making up each pair showed statistically significant differences. The analysis showed that the forecasts of the stacking ensemble model differed statistically significantly from those of the MLP model and the CNN model. 
In addition, this study provides a methodology that allows existing credit rating agencies to apply machine learning-based bankruptcy risk prediction methodologies, given that traditional credit rating models can also be incorporated as sub-models in calculating the final default probability. The stacking ensemble techniques proposed in this study can also help designs meet the requirements of the Financial Investment Business Regulations through the combination of various sub-models. We hope that this research will be used as a resource to increase practical use by overcoming and improving the limitations of existing machine learning-based models.
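The out-of-fold forecasting scheme behind a stacking ensemble (training data split into folds; each sub-model predicts only the fold it never saw, and those forecasts feed the meta-model) can be sketched with toy sub-models. The study's actual sub-models were Random Forest, MLP, and CNN with a trained meta-learner; the toy models and the plain-average meta-learner below are simplifications for illustration only.

```python
class MeanModel:
    """Toy sub-model: predicts the mean label of its training set."""
    def fit(self, X, y):
        self.mean = sum(y) / len(y)
        return self
    def predict(self, X):
        return [self.mean for _ in X]

class NearestModel:
    """Toy sub-model: predicts the label of the nearest training point."""
    def fit(self, X, y):
        self.X, self.y = X, y
        return self
    def predict(self, X):
        return [self.y[min(range(len(self.X)), key=lambda i: abs(self.X[i] - x))]
                for x in X]

def out_of_fold(model_cls, X, y, n_folds=7):
    """Out-of-fold forecasts: each point is predicted by a model that never saw it."""
    preds = [None] * len(X)
    for f in range(n_folds):
        test_idx = [i for i in range(len(X)) if i % n_folds == f]
        train_idx = [i for i in range(len(X)) if i % n_folds != f]
        if not test_idx or not train_idx:
            continue
        m = model_cls().fit([X[i] for i in train_idx], [y[i] for i in train_idx])
        for i, p in zip(test_idx, m.predict([X[i] for i in test_idx])):
            preds[i] = p
    return preds

# Toy 1-D "financial feature" and default labels (hypothetical).
X = [0.1, 0.4, 0.35, 0.8, 0.9, 0.05, 0.7, 0.6, 0.2, 0.75, 0.15, 0.85, 0.3, 0.65]
y = [0, 0, 0, 1, 1, 0, 1, 1, 0, 1, 0, 1, 0, 1]

# Stack the sub-models' out-of-fold forecasts; here the meta-learner is a
# plain average (a real implementation would fit, e.g., a logistic regression).
stacked = [sum(p) / 2 for p in zip(out_of_fold(MeanModel, X, y),
                                   out_of_fold(NearestModel, X, y))]
accuracy = sum((s > 0.5) == bool(t) for s, t in zip(stacked, y)) / len(y)
print(round(accuracy, 2))
```

Using out-of-fold rather than in-sample forecasts is the key design point: it keeps the meta-model from learning sub-model overfitting, and it is also what lets a traditional credit-rating score slot in as just another sub-model column.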

Outcomes after Radiotherapy in Inoperable Patients with Squamous Cell Lung Cancer (수술이 불가능한 편평상피성 폐암의 방사선치료 성적)

  • Ahn Sung-Ja;Chung Woong-Ki;Nah Byung-Sik;Nam Tack-Keun;Kim Young-Chul;Park Kyung-Ok
    • Radiation Oncology Journal / v.19 no.3 / pp.216-223 / 2001
  • Purpose : We retrospectively evaluated the outcomes of inoperable squamous cell lung cancer patients treated with radiotherapy to find prognostic factors affecting survival. Materials and Methods : Four hundred and eleven patients diagnosed with squamous cell lung cancer between November 1988 and December 1997 were the basis of this analysis. The planned dose to the gross tumor volume ranged from 30 to 70.2 Gy. Chemotherapy was combined in 72 patients $(17.5\%)$ with variable schedules and drug combination regimens. The follow-up period ranged from 1 to 113 months, with a median of 8 months, and survival status was identified in 381 patients $(92.7\%)$. The overall survival rate was calculated using the Kaplan-Meier method. Results : Age ranged from 23 to 83 years, with a median of 63 years. The male-to-female ratio was about 16:1. For all 411 patients, the median overall survival was 8 months, and the 1-year survival rate (YSR), 2-YSR, and 5-YSR were $35.6\%,\;12.6\%,\;and\;3.7\%$, respectively. The median survival and 5-YSR were 29 months and $33.3\%$ for Stage IA, 13 months and $6.3\%$ for Stage IIIA, and 9 months and $3.4\%$ for Stage IIIB, respectively (p=0.00). The median survival by treatment aim was 11 months in the radical-intent group and 5 months in the palliative group (p=0.00). Of the 344 patients treated with radical intent, the median survival of the patients (n=247) who completed the planned radiotherapy was 12 months, while that of the patients (n=97) who did not was 5 months (p=0.0006). In the analysis of the various prognostic factors affecting survival outcomes in the 247 patients who completed the planned radiotherapy, tumor location, supraclavicular LAP, SVC syndrome, pleural effusion, total lung atelectasis, and hoarseness were statistically significant prognostic factors in both the univariate and multivariate analyses, while the addition of chemotherapy was statistically significant only in the multivariate analysis. 
Acute radiation esophagitis requiring analgesics appeared in 49 patients (11.9%), and severe radiation esophagitis requiring hospitalization was seen in 2 patients (0.5%). Radiation pneumonitis requiring steroid medication was seen in 62 patients (15.1%), and severe pneumonitis requiring hospitalization occurred in 2 patients (0.5%). During follow-up, 114 patients (27.7%) had progression of local disease, with a median time to recurrence of 10 months (range: 1 to 87 months), and 49 patients (11.9%) had distant failure, with a median of 7 months (range: 1 to 52 months). A second malignancy before or after the diagnosis of lung cancer appeared in 11 patients. Conclusion : Conventional radiotherapy in patients with locally advanced squamous cell lung cancer has given only a small survival advantage over supportive care, so it is very important to select the patient group that can obtain the maximal benefit and to select a radiotherapy technique that does not compromise quality of life in these patients.
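The survival rates in this abstract come from the Kaplan-Meier method. A minimal sketch of that estimator, in pure Python on hypothetical follow-up data (the function name and toy values are illustrative, not from the paper), might look like this:

```python
# Minimal Kaplan-Meier estimator (product-limit method).
# times: follow-up durations in months; events: 1 = death observed, 0 = censored.

def kaplan_meier(times, events):
    """Return a list of (time, survival_probability) steps."""
    data = sorted(zip(times, events))   # order subjects by follow-up time
    n_at_risk = len(data)
    survival = 1.0
    curve = []
    i = 0
    while i < len(data):
        t = data[i][0]
        deaths = removed = 0
        # Group all subjects tied at time t.
        while i < len(data) and data[i][0] == t:
            deaths += data[i][1]
            removed += 1
            i += 1
        if deaths:
            # Multiply in the conditional survival at this event time.
            survival *= 1.0 - deaths / n_at_risk
            curve.append((t, survival))
        n_at_risk -= removed            # censored subjects leave the risk set
    return curve

# Hypothetical cohort: deaths at months 1, 2, 4; censoring at 3 and 5.
curve = kaplan_meier([1, 2, 3, 4, 5], [1, 1, 0, 1, 0])
```

A real analysis of 411 patients would of course use an established implementation (e.g. R's survival package or Python's lifelines), which also provides confidence intervals and log-rank tests for the p-values quoted above.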

  • PDF

Virtuous Concordance of Yin and Yang and Tai-Ji in Joseon art: Focusing on Daesoon Thought (조선 미술에 내재한 음양합덕과 태극 - 대순사상을 중심으로 -)

  • Hwang, Eui-pil
    • Journal of the Daesoon Academy of Sciences
    • /
    • v.35
    • /
    • pp.217-253
    • /
    • 2020
  • This study analyzes the principles of the 'Earthly Paradise' (仙境, the realm of immortals), 'Virtuous Concordance of Yin and Yang' (陰陽合德), and the 'Reordering Works of Heaven and Earth' (天地公事) while combining them with Joseon art. In doing so, this study aims to discover the context wherein the concept of Tai-Ji in 'Daesoon Truth' deeply penetrates Joseon art, revealing how 'Daesoon Thought' is embedded in the lives and customs of the Korean people. In addition, this study reviews the sentiments and intellectual traditions of the Korean people on the basis of 'Daesoon Thought' and creative works. Moreover, 'Daesoon Thought' brings all of this to the forefront in academics and art at the cosmological level. The purpose of this research is to vividly reveal the core of 'Daesoon Thought' as a visual image. Through this, the combination of 'Daesoon Thought' and Joseon art secures both data and reality at the same time. As part of this, the study treats the world of 'Daesoon Thought' as a cosmological Tai-Ji principle, which is revealed in Joseon art and analyzed from the viewpoint of art philosophy. First, as a way to make use of 'Daesoon Thought,' 'Daesoon Truth' was developed and applied directly to Joseon art. In this way, reflections on Korean life within 'Daesoon Thought' can be revealed. In this regard, the Joseon artworks selected for this study are creative works that have been deeply ingrained in people's lives. For example, as 'Daesoon Thought' appears to focus on the genre painting, folk painting, and landscape painting of the Joseon Dynasty, attention is given to verifying these cases. This study analyzes 'Daesoon Thought,' as borrowed from Joseon art, from the perspective of art philosophy. Accordingly, attempts are made to find examples of the 'Virtuous Concordance of Yin and Yang' and Tai-Ji in the Joseon art through which 'Daesoon Thought' was communicated to people. 
In addition, appreciating 'Daesoon Thought' in Joseon art is an opportunity to vividly examine not only the Joseon art style but also the life, consciousness, and mental world of the Korean people. As part of this, Chapter 2 presents several findings related to the formation of 'Daesoon Thought.' In Chapter 3, the structures of the ideas of 'Earthly Paradise' and 'Virtuous Concordance of Yin and Yang' were likewise found to have support, and 'The Reordering Works of Heaven and Earth' and Tai-Ji were found in depictions of metaphysical laws. To this end, the laws of 'The Reordering Works of Heaven and Earth' and the structure of Tai-Ji were combined. In Chapter 4, 'Daesoon Thought' in the life and work of the Korean people is analyzed at the level of the convergence of 'Daesoon Thought' and Joseon art. The analysis of the works provides a glimpse into the precise identity of 'Daesoon Thought' as observable in Joseon art, which is useful for generating empirical data. For example, works such as Tai-Jido, Ssanggeum Daemu, Jusachaebujeokdo, Hwajogi Myeonghwabundo, and Gyeongdodo are objects that inspired descriptions of 'Earthly Paradise,' 'Virtuous Concordance of Yin and Yang,' and 'The Reordering Works of Heaven and Earth.' As a result, the Tai-Ji that appears in 'Daesoon Thought' proved the status of the people in Joseon art. Given all of this, the Tai-Ji idea pursued by Daesoon Thought is a providence that follows change as all things are mutually created. In other words, it can be concluded that Tai-Ji ideology sits profoundly in the lives of the Korean people and responds mutually to the providence that converges with 'Mutual Beneficence.'

Transfer Learning using Multiple ConvNet Layers Activation Features with Principal Component Analysis for Image Classification (전이학습 기반 다중 컨볼류션 신경망 레이어의 활성화 특징과 주성분 분석을 이용한 이미지 분류 방법)

  • Byambajav, Batkhuu;Alikhanov, Jumabek;Fang, Yang;Ko, Seunghyun;Jo, Geun Sik
    • Journal of Intelligence and Information Systems
    • /
    • v.24 no.1
    • /
    • pp.205-225
    • /
    • 2018
  • The Convolutional Neural Network (ConvNet) is a class of powerful Deep Neural Network that can analyze and learn hierarchies of visual features. The first such network (the Neocognitron) was introduced in the 1980s. At that time, neural networks were not broadly used in either industry or academia because of the shortage of large-scale datasets and low computational power. A few decades later, however, in 2012, Krizhevsky made a breakthrough in the ILSVRC-12 visual recognition competition using a Convolutional Neural Network, which revived people's interest in neural networks. The success of the Convolutional Neural Network rests on two main factors. The first is the emergence of advanced hardware (GPUs) for sufficient parallel computation. The second is the availability of large-scale datasets, such as the ImageNet (ILSVRC) dataset, for training. Unfortunately, many new domains are bottlenecked by these factors. For most domains, it is difficult and takes a great deal of effort to gather a large-scale dataset to train a ConvNet. Moreover, even with a large-scale dataset, training a ConvNet from scratch requires expensive resources and is time-consuming. These two obstacles can be overcome by transfer learning, a method for transferring knowledge from a source domain to a new domain. There are two major transfer learning cases: using the ConvNet as a fixed feature extractor, and fine-tuning the ConvNet on a new dataset. In the first case, a pre-trained ConvNet (e.g., trained on ImageNet) computes feed-forward activations of the image, and activation features are extracted from specific layers. In the second case, the ConvNet classifier is replaced and retrained on the new dataset, and the weights of the pre-trained network are then fine-tuned with backpropagation. In this paper, we focus only on using multiple ConvNet layers as a fixed feature extractor. 
However, applying high-dimensional features extracted directly from multiple ConvNet layers is still a challenging problem. We observe that features extracted from different ConvNet layers capture different characteristics of the image, which means a better representation could be obtained by finding the optimal combination of multiple ConvNet layers. Based on that observation, we propose to employ multiple ConvNet layer representations for transfer learning instead of a single ConvNet layer representation. Overall, our primary pipeline has three steps. First, each image from the target task is fed forward through a pre-trained AlexNet, and the activation features of its three fully connected layers are extracted. Second, the activation features of the three layers are concatenated to obtain a multiple-ConvNet-layer representation, since this carries more information about the image. When the three fully connected layer features are concatenated, the resulting image representation has 9192 (4096 + 4096 + 1000) feature dimensions. However, features extracted from multiple ConvNet layers are redundant and noisy, since they come from the same ConvNet. Thus, in a third step, we use Principal Component Analysis (PCA) to select salient features before the training phase. With salient features, the classifier can classify images more accurately, and the performance of transfer learning is improved. To evaluate the proposed method, experiments were conducted on three standard datasets (Caltech-256, VOC07, and SUN397) to compare multiple-ConvNet-layer representations against single-ConvNet-layer representations, using PCA for feature selection and dimension reduction. Our experiments demonstrate the importance of feature selection for the multiple-ConvNet-layer representation. 
Moreover, our proposed approach achieved 75.6% accuracy compared to the 73.9% achieved by the FC7 layer on the Caltech-256 dataset, 73.1% compared to the 69.2% achieved by the FC8 layer on the VOC07 dataset, and 52.2% compared to the 48.7% achieved by the FC7 layer on the SUN397 dataset. We also showed that our proposed approach achieved superior performance, with accuracy improvements of 2.8%, 2.1%, and 3.1% on the Caltech-256, VOC07, and SUN397 datasets, respectively, compared to existing work.
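The concatenate-then-reduce part of the pipeline described in this abstract can be sketched in numpy. This is not the authors' code: the activations here are random stand-ins for the three fully connected AlexNet layer outputs, and only the dimensions (4096 + 4096 + 1000 = 9192) follow the abstract; the PCA is a plain SVD projection.

```python
import numpy as np

rng = np.random.default_rng(0)
n_images = 50

# Stand-ins for AlexNet FC6/FC7/FC8 activations of 50 images.
fc6 = rng.standard_normal((n_images, 4096))
fc7 = rng.standard_normal((n_images, 4096))
fc8 = rng.standard_normal((n_images, 1000))

# Step 2: concatenate the three layer representations per image.
features = np.concatenate([fc6, fc7, fc8], axis=1)  # shape (50, 9192)

def pca_reduce(X, k):
    """Project X onto its top-k principal components."""
    Xc = X - X.mean(axis=0)                          # center each feature
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T                             # shape (n_images, k)

# Step 3: reduce the redundant 9192-D representation before training
# a classifier on it (the target dimension 32 is an arbitrary choice here).
reduced = pca_reduce(features, k=32)
```

In practice the reduced features would then be fed to a linear classifier (e.g. an SVM) trained on the target dataset, which is where the accuracy comparisons above come from.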