• Title/Summary/Keyword: proximity method


Clinical Studies on Locally Invasive Thyroid Cancer (국소침범한 갑상선암의 임상적 고찰)

  • Kim Young-Min;Lee Chang-Yun;Yang Kyung-Hun;Rho Young-Soo;Park Young-Min;Lim Hyun-Jun
    • Korean Journal of Head & Neck Oncology
    • /
    • v.14 no.2
    • /
    • pp.236-243
    • /
    • 1998
  • Objectives: Local invasion by thyroid cancer, that is, invasion of the upper aerodigestive tract, the neurovascular structures of the neck, and the superior mediastinum, is infrequent, comprising 1-16% of well-differentiated thyroid cancers. However, the proximity of the thyroid gland to these structures gives an invasive cancer ready access to them, and when invasion occurs it is a source of significant morbidity and mortality. Locally invasive thyroid cancer should therefore be removed as completely as possible, but much debate remains over whether the surgical method should be radical or conservative. This study was designed to evaluate the clinical characteristics and surgical treatment of locally invasive thyroid cancer. Material and Methods: At the Department of Otorhinolaryngology of Hallym University, 10 patients diagnosed with locally invasive thyroid cancer among the 81 patients treated for thyroid cancer between 1991 and 1997 were retrospectively evaluated. Results: Of the 10 patients, 3 had histories of previous surgical treatment with or without radiation or radioactive iodine therapy. The sites of invasion were the trachea (7 cases), recurrent laryngeal nerve (5 cases), mediastinal nodes (5 cases), esophagus (3 cases), larynx (3 cases), carotid artery (3 cases), pharynx (1 case), and other sites (4 cases). The operative techniques included 1 partial laryngectomy and 1 partial cricoid resection; 2 shavings, 3 window resections, 1 sleeve resection with end-to-end anastomosis, and 1 cricotracheoplasty for tracheal invasion; 2 shavings and 1 partial esophagectomy for esophageal invasion; and 1 wall shaving and 2 partial resections with Gortex® tube reconstruction for carotid artery invasion, among others.
Conclusions: These data and a review of the literature suggest that the surgical method should be chosen on the basis of each patient's individual condition, and that complete removal of all gross tumor with preservation of vital structures whenever possible will offer a good result.


Ensemble of Nested Dichotomies for Activity Recognition Using Accelerometer Data on Smartphone (Ensemble of Nested Dichotomies 기법을 이용한 스마트폰 가속도 센서 데이터 기반의 동작 인지)

  • Ha, Eu Tteum;Kim, Jeongmin;Ryu, Kwang Ryel
    • Journal of Intelligence and Information Systems
    • /
    • v.19 no.4
    • /
    • pp.123-132
    • /
    • 2013
  • As smartphones are equipped with various sensors such as the accelerometer, GPS, gravity sensor, gyros, ambient light sensor, proximity sensor, and so on, there has been much research on making use of these sensors to create valuable applications. Human activity recognition is one such application, motivated by various welfare applications such as support for the elderly, measurement of calorie consumption, analysis of lifestyles, analysis of exercise patterns, and so on. One of the challenges in using smartphone sensors for activity recognition is that the number of sensors used should be minimized to save battery power. When the number of sensors is restricted, it is difficult to build a highly accurate activity recognizer, or classifier, because it is hard to distinguish between subtly different activities relying on only limited information. The difficulty becomes especially severe when the number of activity classes to be distinguished is large. In this paper, we show that a fairly accurate classifier distinguishing ten different activities can be built using data from only a single sensor, the smartphone accelerometer. The approach we take to this ten-class problem is the ensemble of nested dichotomies (END) method, which transforms a multi-class problem into multiple two-class problems. END builds a committee of binary classifiers in a nested fashion using a binary tree. At the root of the binary tree, the set of all classes is split into two subsets of classes by a binary classifier. At a child node of the tree, a subset of classes is again split into two smaller subsets by another binary classifier. Continuing in this way, we obtain a binary tree in which each leaf node contains a single class. This binary tree can be viewed as a nested dichotomy that can make multi-class predictions.
Depending on how the set of classes is split into two subsets at each node, the final tree we obtain can differ. Since some classes may be correlated, a particular tree may perform better than the others; however, we can hardly identify the best tree without deep domain knowledge. The END method copes with this problem by building multiple dichotomy trees randomly during learning and then combining the predictions made by each tree during classification. The END method is generally known to perform well even when the base learner is unable to model complex decision boundaries. As the base classifier at each node of a dichotomy, we have used another ensemble classifier, the random forest. A random forest is built by repeatedly generating decision trees, each time with a different random subset of features, using a bootstrap sample. By combining bagging with random feature-subset selection, a random forest enjoys more diverse ensemble members than simple bagging. As an overall result, our ensemble of nested dichotomies can be seen as a committee of committees of decision trees that can handle a multi-class problem with high accuracy. The ten activity classes distinguished in this paper are 'Sitting', 'Standing', 'Walking', 'Running', 'Walking Uphill', 'Walking Downhill', 'Running Uphill', 'Running Downhill', 'Falling', and 'Hobbling'. The features used for classifying these activities include not only the magnitude of the acceleration vector at each time point but also the maximum, minimum, and standard deviation of the vector magnitude within a time window covering the last 2 seconds, etc. For experiments comparing the performance of END with other methods, accelerometer data were collected every 0.1 seconds for 2 minutes per activity from 5 volunteers.
Among the 5,900 ($=5{\times}(60{\times}2-2)/0.1$) data points collected for each activity (the data for the first 2 seconds are discarded because they lack a full time window), 4,700 were used for training and the rest for testing. Although 'Walking Uphill' is often confused with other similar activities, END was found to classify all ten activities with a fairly high accuracy of 98.4%. In comparison, the accuracies achieved by a decision tree, a k-nearest neighbors classifier, and a one-versus-rest support vector machine were 97.6%, 96.5%, and 97.6%, respectively.
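The nested-dichotomy construction described in this abstract can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: the binary base classifier (a random forest in the paper) is left abstract and supplied by the caller.

```python
import random

def build_dichotomy(classes, rng):
    """Recursively split a set of class labels into a random nested
    dichotomy, represented as a binary tree of (left, right) tuples."""
    classes = list(classes)
    if len(classes) == 1:
        return classes[0]          # leaf: a single class
    rng.shuffle(classes)
    cut = rng.randint(1, len(classes) - 1)
    return (build_dichotomy(classes[:cut], rng),
            build_dichotomy(classes[cut:], rng))

def leaves(node):
    """Set of class labels under a node of the dichotomy tree."""
    if not isinstance(node, tuple):
        return {node}
    return leaves(node[0]) | leaves(node[1])

def predict(tree, x, base_classifier):
    """Descend the dichotomy: at each internal node the binary base
    classifier decides whether x belongs to the left or right subset."""
    node = tree
    while isinstance(node, tuple):
        left, right = node
        node = left if base_classifier(x, leaves(left), leaves(right)) else right
    return node
```

A full END would build many such random trees and combine their (probability) predictions; this sketch shows a single dichotomy only.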

The Signaling Effect of Stock Repurchase on Equity Offerings in Korea (자기주식매입의 유상증자에 대한 신호효과)

  • Park, Young-Kyu
    • The Korean Journal of Financial Management
    • /
    • v.25 no.1
    • /
    • pp.51-84
    • /
    • 2008
  • We investigate the signaling effect of stock repurchases preceding new equity issues using Korean data. Firms announce stock repurchases and equity offerings within a short time span, and the proximity of the two events in Korean firms indicates that they are not independent of each other. In this paper, we test the signaling effect of repurchases on equity offerings with two measures. One is the announcement effect, measured as CAR(0, +2). The other is the effectiveness, measured as CAR(0, +30), because the price movement during this window influences the price of the new issues. Previous studies finding that stock repurchases convey a positive signal to equity offerings (Billet and Xue, 2004; Jung, 2004) construct their samples without limiting the time interval between the two events; the resulting long intervals obscure the relation between them. In this study we consider only event pairs occurring within one year of each other, to reduce this problem and clarify the signal of repurchases on equity offerings. Korean firms are allowed to repurchase their own shares by two different methods. One is direct repurchase, equivalent to an open-market repurchase. The other comprises stock stabilization funds and stock trust funds, through which a trust company or bank buys and sells shares on behalf of the firm. The striking difference between the two is that direct repurchases are subject to stricter regulation; the direct repurchase is therefore a more informative signal for the equity offering than the indirect repurchase. We construct two sample groups (firms with direct-repurchase-preceding equity offerings and firms with indirect-repurchase-preceding equity offerings) and one control group (equity-offerings-only firms) to investigate the announcement effect and the effectiveness of repurchases. Our findings are as follows. Direct repurchases affect the price of new issues favorably.
CAR(0, +2) of firms with direct repurchases is not different from that of equity-offerings-only firms, but CAR(0, +30) is higher. For firms with indirect repurchases and equity offerings, neither the announcement effect nor the effectiveness exists. Jung (2004) suggests that indirect stock repurchases may be regarded as an unfair trading practice, based on survey results in which financial managers of some KSE-listed firms were asked their opinion on the likelihood of stock repurchases being used in unfair trading; this is not objective empirical evidence but the opinion of financial managers. To investigate whether firms announce false signals before equity offerings to boost the price of new issues, we calculate the long-run performance following the equity offerings. If firms had announced repurchases intentionally to boost the price of new issues, they would undergo severe underperformance. The empirical results do not show that either sample group underperforms more severely than the equity-offerings-only firms. The suggestion of false signaling by repurchases preceding equity offerings is thus not supported by our evidence.
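The two event-study measures above are cumulative abnormal returns over fixed windows. As a hedged sketch (the abstract does not state the abnormal-return model, so simple market adjustment is assumed here for illustration):

```python
def car(stock_returns, market_returns, start, end):
    """Cumulative abnormal return over the event window [start, end],
    with market-adjusted abnormal returns AR_t = R_t - R_m,t.
    Day 0 is the announcement day, so CAR(0, +2) uses start=0, end=2."""
    window = zip(stock_returns[start:end + 1], market_returns[start:end + 1])
    return sum(r - m for r, m in window)
```

With daily return series aligned so that index 0 is the announcement day, `car(r, rm, 0, 2)` and `car(r, rm, 0, 30)` correspond to the paper's two measures.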


Characteristics of Vertical Ozone Distributions in the Pohang Area, Korea (포항지역 오존의 수직분포 특성)

  • Kim, Ji-Young;Youn, Yong-Hoon;Song, Ki-Bum;Kim, Ki-Hyun
    • Journal of the Korean earth science society
    • /
    • v.21 no.3
    • /
    • pp.287-301
    • /
    • 2000
  • In order to investigate the factors and processes affecting the vertical distribution of ozone, we analyzed ozone profile data measured with ozonesondes from 1995 to 1997 at Pohang, Korea. We analyzed the temporal and spatial distribution characteristics of ozone at four heights: the surface (100 m), the troposphere (10 km), the lower stratosphere (20 km), and the middle stratosphere (30 km). Despite its proximity to a local but major industrial complex, Pohang Iron and Steel Co. (POSCO), the surface ozone concentrations in the study area were comparable to those typically observed in rural and/or unpolluted areas. In addition, the relative enhancement of ozone at this height, especially between spring and summer, may be accounted for by the prevalence of photochemical reactions during that period of the year. The temporal distribution patterns at the 10 and 20 km heights were quite compatible despite the large difference in altitude, showing such consistent features as spring maxima and summer minima. Explanations for these phenomena may be sought in the mixed effects of various processes, including ozone transport between the two heights, photochemical reactions, and the formation of inversion layers. However, the temporal distribution pattern in the middle stratosphere (30 km) was more comparable to that at the surface. We also evaluated the total ozone concentration of the study area using a Brewer spectrophotometer. The total ozone data were compared with those derived by combining the data for the stratospheric layers via the Umkehr method. Correlation analysis showed that total ozone is negatively correlated with cloud cover but not with such parameters as UV-B. Based on our study, we conclude that the areal characteristics of Pohang, which represents a typical coastal area, may be quite important in explaining the distribution patterns of ozone not only at the surface but also in the upper atmosphere.


The Evaluation of Quantitative Accuracy According to Detection Distance in SPECT/CT Applied to Collimator Detector Response(CDR) Recovery (Collimator Detector Response(CDR) 회복이 적용된 SPECT/CT에서 검출거리에 따른 정량적 정확성 평가)

  • Kim, Ji-Hyeon;Son, Hyeon-Soo;Lee, Juyoung;Park, Hoon-Hee
    • The Korean Journal of Nuclear Medicine Technology
    • /
    • v.21 no.2
    • /
    • pp.55-64
    • /
    • 2017
  • Purpose Recently, with the spread of SPECT/CT, various image correction methods can be applied quickly and accurately, so that quantitative accuracy as well as improved image quality can be expected. Among these methods, Collimator Detector Response (CDR) recovery is a correction aiming at resolution recovery by compensating for the blurring caused by the distance between the detector and the object. The purpose of this study is to determine the quantitative change with detection distance in SPECT/CT images with CDR recovery applied. Materials and Methods To determine the error in acquisition counts as the detection distance changes, we set the detection distance according to orbit type: X and Y axis radius 30 cm for circular, X and Y axis radii 21 cm and 10 cm for non-circular, and non-circular auto (= auto body contouring, ABC, spacing limit 1 cm). We applied two reconstruction methods, Astonish (3D-OSEM with CDR recovery) and OSEM (without CDR recovery), to determine the difference in activity recovery depending on the use of CDR recovery. Attenuation correction, scatter correction, and decay correction were applied to all images. For quantitative evaluation, a calibration scan (cylindrical phantom, $^{99m}TcO_4$ 123.3 MBq, water 9293 ml) was obtained to calculate the calibration factor (CF). For the phantom scan, a 50 cc syringe was filled with 31 ml of water containing $^{99m}TcO_4$ 123.3 MBq, and a phantom image was obtained. We set a VOI (volume of interest) over the entire volume of the syringe in the phantom image to measure the total counts under each condition, and computed the error of the measured value against the true value derived using the CF to check the quantitative accuracy of the corrections.
Results The calculated CF was 154.28 (Bq/ml/cps/ml), and the measured values against the true values under each condition were circular 87.5%, non-circular 90.1%, and ABC 91.3% for OSEM, and circular 93.6%, non-circular 93.6%, and ABC 93.9% for Astonish. For OSEM, the closer the detection distance, the higher the accuracy, while Astonish showed almost the same values regardless of distance. The error was largest for OSEM circular (-13.5%) and smallest for Astonish ABC (-6.1%). Conclusion The SPECT/CT images showed that when distance compensation is applied through CDR recovery, a distant detection condition yields almost the same quantitative accuracy as proximity detection, and accurate correction is possible without being affected by changes in detection distance.
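The quantitative evaluation above reduces to two simple relations. The following is an illustrative sketch assuming the standard definition of a calibration factor (the unit Bq/ml/cps/ml quoted above suggests CF = true concentration / measured count-rate concentration); it is not taken verbatim from the paper.

```python
def calibration_factor(true_bq_per_ml, measured_cps_per_ml):
    """CF converts a measured count-rate concentration (cps/ml)
    into an activity concentration (Bq/ml)."""
    return true_bq_per_ml / measured_cps_per_ml

def recovery_error_pct(measured_cps, cf, true_bq):
    """Percent error of the CF-corrected measurement against the true
    activity; negative values mean under-recovery of counts."""
    return (measured_cps * cf - true_bq) / true_bq * 100.0
```

For example, a measurement recovering 90% of the true activity gives an error of -10%, in the same sense as the negative errors reported above.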


Geomagnetic Paleosecular Variation in the Korean Peninsula during the First Six Centuries (기원후 600년간 한반도 지구 자기장 고영년변화)

  • Park, Jong kyu;Park, Yong-Hee
    • The Journal of Engineering Geology
    • /
    • v.32 no.4
    • /
    • pp.611-625
    • /
    • 2022
  • One of the applications of geomagnetic paleosecular variation (PSV) is the age dating of archeological remains (i.e., the archeomagnetic dating technique). This application requires a local PSV model that reflects the regional differences of the non-dipole field. Until now, the tentative Korean paleosecular variation (t-KPSV), calculated on the basis of JPSV (SW Japanese PSV), has been applied as the reference curve for individual archeomagnetic directions in Korea; however, it is less reliable because of regional differences in the non-dipole magnetic field. Here, we present PSV curves for AD 1 to 600, corresponding to the Korean Three Kingdoms (including the Proto Three Kingdoms) Period, using the results of archeomagnetic studies on the Korean Peninsula and published research data, and we compare our PSV with global geomagnetic prediction models and the t-KPSV. A total of 49 reliable archeomagnetic directional data from 16 regions were compiled for our PSV. In detail, each dataset showed statistical consistency (N > 6, 𝛼95 < 7.8°, and k > 57.8) and had radiocarbon or archeological ages in the range of AD 1 to 600 with an error range of less than ±200 years. The compiled PSV for the first six centuries (KPSV0.6k) showed declinations of 341.7° to 20.1° and inclinations of 43.5° to 60.3°. Compared to the t-KPSV, our curve revealed different variation patterns in both declination and inclination. On the other hand, KPSV0.6k and the global geomagnetic prediction models (ARCH3K.1, CALS3K.4, and SED3K.1) revealed consistent variation trends during the first six centuries. In particular, ARCH3K.1 fit our KPSV0.6k best. These results indicate that the contributions of the non-dipole field to Korea and Japan are quite different despite their geographical proximity. Moreover, the compilation of archeomagnetic data from Korean territory is essential for building a PSV curve reliable enough to serve as an age-dating tool.
Lastly, we double-check the reliability of our KPSV0.6k by showing a good fit of newly acquired, age-controlled archeomagnetic data to our curve.
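The selection cutoff quoted in this abstract (N > 6, 𝛼95 < 7.8°, k > 57.8) comes from Fisher statistics for directional data. As a hedged illustration (the widely used approximation, not the paper's own computation), the three quantities are related roughly as 𝛼95 ≈ 140°/√(kN):

```python
import math

def alpha95_approx(k, n):
    """Approximate 95% confidence cone (degrees) for n Fisher-distributed
    directions with precision parameter k (valid roughly for k > 10)."""
    return 140.0 / math.sqrt(k * n)
```

At the threshold values k = 57.8 and N = 7, this gives about 7.0°, consistent with the 𝛼95 < 7.8° cutoff the authors applied.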

A Study on the Effect of Network Centralities on Recommendation Performance (네트워크 중심성 척도가 추천 성능에 미치는 영향에 대한 연구)

  • Lee, Dongwon
    • Journal of Intelligence and Information Systems
    • /
    • v.27 no.1
    • /
    • pp.23-46
    • /
    • 2021
  • Collaborative filtering, often used in personalized recommendation, is recognized as a very useful technique for finding similar customers and recommending products to them based on their purchase histories. However, the traditional collaborative filtering technique has difficulty calculating similarity for new customers or products, because it calculates similarities based on direct connections and common features among customers. For this reason, hybrid techniques were designed that also use content-based filtering. Another line of work has tried to solve these problems by applying the structural characteristics of social networks: the similarity between two customers is calculated indirectly through the similar customers placed between them. This means creating a customer network based on purchasing data and calculating the similarity between two customers from the features of the network that indirectly connects them. Such similarity can be used as a measure to predict whether a target customer will accept a recommendation. The centrality metrics of networks can be utilized to calculate these similarities. Different centrality metrics are important in that they may have different effects on recommendation performance, and, as this study shows, the effect of a centrality metric on recommendation performance may vary depending on the recommender algorithm. In addition, recommendation techniques using network analysis can be expected to increase recommendation performance not only for new customers or products but for all customers and products. By considering a customer's purchase of an item as a link created between the customer and the item in the network, predicting user acceptance of a recommendation reduces to predicting whether a new link will be created between them.
As classification models fit the purpose of solving the binary problem of whether a link is created or not, decision tree, k-nearest neighbors (KNN), logistic regression, artificial neural network, and support vector machine (SVM) models were selected for the research. The data for performance evaluation were order data collected from an online shopping mall over four years and two months. The first three years and eight months of records were organized into the social network used in the experiment; the remaining four months of records were used to train and evaluate the recommender models. Experiments applying the centrality metrics to each model show that the recommendation acceptance rates of the centrality metrics differ for each algorithm at a meaningful level. In this work, we analyzed four commonly used centrality metrics: degree centrality, betweenness centrality, closeness centrality, and eigenvector centrality. Eigenvector centrality records the lowest performance in all models except the support vector machine. Closeness centrality and betweenness centrality show similar performance across all models. Degree centrality ranks moderately across the models, while betweenness centrality always ranks higher than degree centrality. Finally, closeness centrality shows distinct differences in performance according to the model: it ranks first in logistic regression, artificial neural network, and decision tree with numerically high performance, but records very low rankings with low performance in the support vector machine and k-nearest neighbors models. As the experimental results reveal, in a classification model, network centrality metrics over the subnetwork connecting two nodes can effectively predict the connectivity between the two nodes in a social network. Furthermore, each metric performs differently depending on the classification model type.
This result implies that choosing appropriate metrics for each algorithm can lead to higher recommendation performance. In general, betweenness centrality can guarantee a high level of performance in any model, and closeness centrality could be considered to obtain higher performance in certain models.
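Two of the centrality metrics discussed above can be sketched in pure Python on an undirected graph. This is an illustrative sketch, not the study's implementation, which would run over the much larger customer-item purchase network:

```python
from collections import defaultdict, deque

def build_adjacency(edges):
    """Undirected adjacency sets from an edge list."""
    adj = defaultdict(set)
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    return adj

def degree_centrality(adj):
    """Fraction of the other nodes each node is directly connected to."""
    n = len(adj)
    return {node: len(nbrs) / (n - 1) for node, nbrs in adj.items()}

def closeness_centrality(adj, node):
    """(Number of reachable nodes) / (sum of BFS distances) from node."""
    dist, queue = {node: 0}, deque([node])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    total = sum(dist.values())
    return (len(dist) - 1) / total if total else 0.0
```

On the toy graph 1-2, 1-3, 2-3, 3-4, node 3 touches every other node, so both its degree and closeness centrality reach the maximum value of 1.0.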

Analysis of the Causes of Subfrontal Recurrence in Medulloblastoma and Its Salvage Treatment (수모세포종의 방사선치료 후 전두엽하방 재발된 환자에서 원인 분석 및 구제 치료)

  • Cho Jae Ho;Koom Woong Sub;Lee Chang Geol;Kim Kyoung Ju;Shim Su Jung;Bak Jino;Jeong Kyoungkeun;Kim Tae-Gon;Kim Dong Seok;Choi oong-Uhn;Suh Chang Ok
    • Radiation Oncology Journal
    • /
    • v.22 no.3
    • /
    • pp.165-176
    • /
    • 2004
  • Purpose: Firstly, to analyze factors in the radiation treatment that might have caused subfrontal relapse in two patients who had been treated with craniospinal irradiation (CSI) for medulloblastoma; secondly, to explore an effective salvage treatment for these relapses. Materials and Methods: Two patients with high-risk disease (T3bM1, T3bM3) were treated with combined chemoradiotherapy. CT-simulation-based radiation treatment planning (RTP) was performed. One patient, who relapsed 16 months after CSI, was treated with salvage surgery followed by 30.6 Gy of IMRT (intensity-modulated radiotherapy). The other patient, whose tumor relapsed 12 months after CSI, was treated by surgery alone for the recurrence. To investigate factors that might have caused the subfrontal relapses, we thoroughly evaluated the charts and the treatment planning process, including portal films, and sought a method to help place blocks appropriately between the subfrontal-cribriform plate region and both eyes. To salvage the subfrontal relapse in one patient, re-irradiation was planned after subtotal tumor removal. We decided to treat this patient with IMRT because of the proximity of critical normal tissues and the large burden of re-irradiation. With seven beam directions, the prescribed mean dose to the PTV was 30.6 Gy (1.8 Gy per fraction), and the doses to the optic nerves and eyes were limited to 25 Gy and 10 Gy, respectively. Results: Review of the radiotherapy portals clearly indicated that the subfrontal-cribriform plate region was excluded from the therapy beam by the eye blocks in both cases, resulting in a cold spot within the target volume. When the whole brain was rendered in 3-D after organ drawing on each slice, it was easier to judge the appropriateness of the blocks in the port film. The IMRT plan showed excellent dose distributions (mean doses to the PTV, right and left optic nerves, and right and left eyes: 31.1 Gy, 14.7 Gy, 13.9 Gy, 6.9 Gy, and 5.5 Gy, respectively.
Maximum dose to the PTV: 36 Gy). The patient who received IMRT is still alive with no evidence of recurrence or neurologic complications at 1 year. Conclusion: To prevent recurrence of medulloblastoma in the subfrontal-cribriform plate region, close attention must be paid to the placement of the eye blocks during treatment. Once subfrontal recurrence has occurred, IMRT may be a good choice for re-irradiation as a salvage treatment, maximizing the difference in dose distribution between the normal tissues and the target volume.