• Title/Summary/Keyword: data algorithm system


Development and Performance Evaluation of an Animal SPECT System Using Philips ARGUS Gamma Camera and Pinhole Collimator (Philips ARGUS 감마카메라와 바늘구멍조준기를 이용한 소동물 SPECT 시스템의 개발 및 성능 평가)

  • Kim, Joong-Hyun;Lee, Jae-Sung;Kim, Jin-Su;Lee, Byeong-Il;Kim, Soo-Mee;Choung, In-Soon;Kim, Yu-Kyeong;Lee, Won-Woo;Kim, Sang-Eun;Chung, June-Key;Lee, Myung-Chul;Lee, Dong-Soo
    • The Korean Journal of Nuclear Medicine / v.39 no.6 / pp.445-455 / 2005
  • Purpose: We developed an animal SPECT system using a clinical Philips ARGUS scintillation camera and a pinhole collimator with specially manufactured small apertures. In this study, we evaluated the physical characteristics of this system and its biological feasibility for animal experiments. Materials and Methods: A rotating station for small animals driven by a step motor, together with operating software, was developed. Pinhole inserts with small apertures (diameters of 0.5, 1.0, and 2.0 mm) were manufactured, and physical parameters including planar spatial resolution, sensitivity, and reconstructed resolution were measured for some apertures. To measure the size of the usable field of view as a function of distance from the focal point, multiple evenly spaced line sources were scanned and the number of lines within the field of view was counted. Using a Tc-99m line source 0.5 mm in diameter and 12 mm in length placed at the exact center of the field of view, planar spatial resolution was measured as a function of distance. A calibration factor to obtain FWHM values in mm was calculated from the planar image of two separated line sources. A Tc-99m point source 1 mm in diameter was used to measure system sensitivity. In addition, SPECT data of a micro phantom with cold and hot line inserts and of a rat brain after intravenous injection of [I-123]FP-CIT were acquired and reconstructed using a filtered back projection algorithm for the pinhole collimator. Results: The size of the usable field of view was proportional to the distance from the focal point, and the relationship could be fitted to a linear equation (y=1.4x+0.5, x: distance). System sensitivity and planar spatial resolution at 3 cm measured with the 1.0 mm aperture were 71 cps/MBq and 1.24 mm, respectively. In the SPECT image of the rat brain with [I-123]FP-CIT acquired using the 1.0 mm aperture, the distribution of dopamine transporter in the striatum was well identified in each hemisphere. Conclusion: We verified that this new animal SPECT system with the Philips ARGUS scanner and small apertures had sufficient performance for small animal imaging.
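From the fitted relation reported in this abstract (y = 1.4x + 0.5), a minimal sketch of computing the usable field-of-view size at a given distance from the focal point; the length unit of the distance is an assumption, since the abstract does not state it explicitly:

```python
# Minimal sketch: usable field-of-view size from the fitted linear relation
# reported in the abstract (y = 1.4x + 0.5, x = distance from the focal point).
# The unit of distance is an assumption not stated in the abstract.

def usable_fov(distance, slope=1.4, intercept=0.5):
    """Return the usable field-of-view size for a pinhole aperture
    at a given distance from the focal point."""
    return slope * distance + intercept

if __name__ == "__main__":
    for d in (1.0, 3.0, 5.0):
        print(f"distance {d}: usable FOV ~ {usable_fov(d):.2f}")
```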

Clinical Application of in Vivo Dosimetry System in Radiotherapy of Pelvis (골반부 방사선 치료 환자에서 in vivo 선량측정시스템의 임상적용)

  • Kim, Bo-Kyung;Chie, Eui-Kyu;Huh, Soon-Nyung;Lee, Hyoung-Koo;Ha, Sung-Whan
    • Journal of Radiation Protection and Research / v.27 no.1 / pp.37-49 / 2002
  • The accuracy of radiation dose delivery to the target volume is one of the most important factors for good local control and fewer treatment complications. In vivo dosimetry is an essential QA procedure to confirm the radiation dose delivered to the patient. Transmission dose measurement is a useful method of in vivo dosimetry, and its advantages are non-invasiveness, simplicity, and the fact that no additional effort is needed for dosimetry. In our department, an in vivo dosimetry system based on measurement of the transmission dose was manufactured, and algorithms for estimating the transmission dose were developed and successfully tested with phantoms under various conditions. This system was applied in the clinic to test its stability, reproducibility, and applicability to daily treatment, as well as the accuracy of the algorithm. Transmission dose measurement was performed over three weeks. To test the reproducibility of the system, X-ray output was measured before daily treatment and then every hour during treatment time in the reference condition (field size 10 cm × 10 cm, 100 MU). Data from 11 patients whose pelvises were treated more than three times were analyzed. The reproducibility of the dosimetry system was acceptable, with variations of measurements within each day and over the three-week period staying within ±2.0%. On anterior-posterior and posterior fields, mean errors were between -5.20% and +2.20% without bone correction and between -0.62% and +3.32% with bone correction. On right and left lateral fields, mean errors were between -10.80% and +3.46% without bone correction and between -0.55% and +3.50% with bone correction. From these results, we could confirm the reproducibility and stability of our dosimetry system and its applicability in daily radiation treatment. We also found that inhomogeneity correction for bone is essential and that the estimated transmission doses are relatively accurate.
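The abstract does not spell out the estimation algorithm itself. As a rough illustration of why a bone (inhomogeneity) correction matters most for the lateral pelvic fields, here is a hedged sketch based on simple exponential attenuation over a water-equivalent thickness; this is not the authors' algorithm, and the attenuation coefficient, bone density, and numbers are illustrative assumptions only:

```python
import math

# Hedged sketch of the general idea behind transmission-dose estimation: the
# dose behind the patient falls off roughly exponentially with the radiological
# thickness along the beam path. Coefficients are illustrative placeholders.

MU_WATER = 0.049  # approximate effective linear attenuation coefficient, 1/cm

def estimated_transmission_dose(dose_without_patient, thickness_cm,
                                bone_cm=0.0, bone_density=1.85):
    """Estimate transmitted dose through a patient of given thickness.
    Bone is handled by scaling its thickness to a water-equivalent thickness
    via its relative density (a simple inhomogeneity correction)."""
    water_equivalent = thickness_cm + bone_cm * (bone_density - 1.0)
    return dose_without_patient * math.exp(-MU_WATER * water_equivalent)

# Comparing the estimate with and without the bone term illustrates why the
# lateral fields (more bone in the path) need the correction.
print(estimated_transmission_dose(100.0, 30.0))              # no bone correction
print(estimated_transmission_dose(100.0, 30.0, bone_cm=8.0)) # with bone correction
```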

Design of a Crowd-Sourced Fingerprint Mapping and Localization System (군중-제공 신호지도 작성 및 위치 추적 시스템의 설계)

  • Choi, Eun-Mi;Kim, In-Cheol
    • KIPS Transactions on Software and Data Engineering / v.2 no.9 / pp.595-602 / 2013
  • WiFi fingerprinting is well known as an effective localization technique used for indoor environments. However, this technique requires a large amount of pre-built fingerprint maps over the entire space. Moreover, due to environmental changes, these maps have to be newly built or updated periodically by experts. As a way to avoid this problem, crowd-sourced fingerprint mapping attracts many interests from researchers. This approach supports many volunteer users to share their WiFi fingerprints collected at a specific environment. Therefore, crowd-sourced fingerprinting can automatically update fingerprint maps up-to-date. In most previous systems, however, individual users were asked to enter their positions manually to build their local fingerprint maps. Moreover, the systems do not have any principled mechanism to keep fingerprint maps clean by detecting and filtering out erroneous fingerprints collected from multiple users. In this paper, we present the design of a crowd-sourced fingerprint mapping and localization(CMAL) system. The proposed system can not only automatically build and/or update WiFi fingerprint maps from fingerprint collections provided by multiple smartphone users, but also simultaneously track their positions using the up-to-date maps. The CMAL system consists of multiple clients to work on individual smartphones to collect fingerprints and a central server to maintain a database of fingerprint maps. Each client contains a particle filter-based WiFi SLAM engine, tracking the smartphone user's position and building each local fingerprint map. The server of our system adopts a Gaussian interpolation-based error filtering algorithm to maintain the integrity of fingerprint maps. Through various experiments, we show the high performance of our system.
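The abstract names a Gaussian interpolation-based error filter on the server but gives no formulation. A hedged sketch of one plausible reading, where a newly reported fingerprint is checked against a Gaussian-distance-weighted interpolation of the existing map; the kernel width and rejection threshold are assumptions, not values from the CMAL system:

```python
import numpy as np

# Hedged sketch of a Gaussian interpolation-based error filter in the spirit of
# the server-side filtering described above. The exact formulation used by the
# CMAL system is not given in the abstract; sigma and the threshold are
# illustrative assumptions.

def gaussian_interpolate(pos, map_positions, map_rssi, sigma=2.0):
    """Interpolate the expected RSSI at `pos` from existing map fingerprints
    using Gaussian distance weights."""
    d2 = np.sum((map_positions - pos) ** 2, axis=1)
    w = np.exp(-d2 / (2.0 * sigma ** 2))
    return np.sum(w * map_rssi) / (np.sum(w) + 1e-12)

def is_erroneous(pos, rssi, map_positions, map_rssi, threshold_db=10.0):
    """Flag a crowd-sourced fingerprint as erroneous if it deviates too far
    from the Gaussian-interpolated expectation."""
    expected = gaussian_interpolate(np.asarray(pos), map_positions, map_rssi)
    return abs(rssi - expected) > threshold_db
```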

A Study of a Non-commercial 3D Planning System, Plunc for Clinical Applicability (비 상업용 3차원 치료계획시스템인 Plunc의 임상적용 가능성에 대한 연구)

  • Cho, Byung-Chul;Oh, Do-Hoon;Bae, Hoon-Sik
    • Radiation Oncology Journal / v.16 no.1 / pp.71-79 / 1998
  • Purpose: The objective of this study is to introduce our installation of a non-commercial 3D planning system, Plunc, and to confirm its clinical applicability in various treatment situations. Materials and Methods: We obtained the source code of Plunc, offered by the University of North Carolina, and installed it on a Pentium Pro 200 MHz PC (128 MB RAM, Millennium VGA) running the Linux operating system. To examine the accuracy of the dose distributions calculated by Plunc, we entered beam data for the 6 MV photon beam of our linear accelerator (Siemens MXE 6740), including tissue-maximum ratio, scatter-maximum ratio, attenuation coefficients, and the shapes of the wedge filters. We then compared the dose distributions calculated by Plunc (percent depth dose, PDD; dose profiles with and without wedge filters; oblique incident beam; and dose distributions under an air gap) with measured values. Results: Plunc operated almost in real time, except for spending about 10 seconds on the full-volume dose distribution and dose-volume histogram (DVH) on the PC described above. Compared with measurements for irradiations at 90 cm SSD and a 10 cm-depth isocenter, the PDD curves calculated by Plunc did not exceed 1% inaccuracy except in the build-up region. For dose profiles with and without wedge filters, the calculated values were accurate within 2%, except in the low-dose region outside the irradiated field, where Plunc showed a 5% dose reduction. For the oblique incident beam, there was good agreement except in the low-dose region below 30% of the isocenter dose. For the dose distribution under an air gap, there was a 5% error in the central-axis dose. Conclusion: By comparing photon dose calculations from Plunc with measurements, we confirmed that Plunc showed acceptable accuracy of about 2-5% in typical treatment situations, which is comparable to commercial planning systems using correction-based algorithms. Plunc does not currently have a function for electron beam planning. However, it is possible to implement electron dose calculation modules or more accurate photon dose calculations in the Plunc system. Plunc is shown to be useful for overcoming many limitations of 2D planning systems in clinics where a commercial 3D planning system is not available.
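A minimal sketch of the kind of PDD comparison described here: the percent difference between calculated and measured depth doses at matched depths. The 1-2% tolerances follow the abstract, but the numerical values in the example are hypothetical placeholders:

```python
import numpy as np

# Minimal sketch: percent depth dose (PDD) calculated by a planning system vs.
# measured values at matched depths. Arrays are placeholders; the ~1% criterion
# outside the build-up region follows the abstract.

def pdd_error(calculated, measured):
    """Percent difference between calculated and measured PDD, normalized
    to the measured value at each depth."""
    calculated = np.asarray(calculated, dtype=float)
    measured = np.asarray(measured, dtype=float)
    return 100.0 * (calculated - measured) / measured

# Hypothetical values at depths beyond dmax:
calc = [100.0, 86.5, 67.2, 52.3]
meas = [100.0, 86.0, 67.5, 52.0]
print(pdd_error(calc, meas))  # should stay within about ±1% per the abstract
```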


A Basic Study for the Retrieval of Surface Temperature from Single Channel Middle-infrared Images (단일 밴드 중적외선 영상으로부터 표면온도 추정을 위한 기초연구)

  • Park, Wook;Lee, Yoon-Kyung;Won, Joong-Sun;Lee, Seung-Geun;Kim, Jong-Min
    • Korean Journal of Remote Sensing / v.24 no.2 / pp.189-194 / 2008
  • The middle-infrared (MIR) spectral region, between 3.0 and 5.0 μm in wavelength, is useful for observing high-temperature events such as volcanic activity and forest fires. However, atmospheric effects and solar irradiance in the daytime have not been well studied for this MIR spectral band. The objectives of this basic study are to evaluate atmospheric effects and eventually to estimate surface temperature from a single-channel MIR image, although the typical approach utilizes a split-window method using more than two channels. Several parameters are involved in the correction, including various atmospheric data and the solar irradiance over the area of interest. To evaluate the effect of solar irradiance, MODIS MIR images acquired in the daytime and at night were used for comparison. Atmospheric parameters were modeled by MODTRAN and applied to a radiative transfer model for estimating the sea surface temperature. The MODIS Sea Surface Temperature algorithm, which is based on multi-channel observation, was run for comparison with the results of the single-channel radiative transfer model. The temperature difference between the two methods was 0.89±0.54°C and 1.25±0.41°C for the daytime and night-time images, respectively. It is also shown that the emissivity effect has a larger influence on the estimated temperature than the atmospheric effects. Although the test results encourage the use of a single-channel MIR observation, it must be noted that the results were obtained from a water body, not from a land surface. Because emissivity varies greatly over land, it is very difficult to retrieve land surface temperature from single-channel MIR data.
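A hedged sketch of the final step of a single-channel retrieval: after the MODTRAN-based atmospheric (and daytime solar-irradiance) corrections, the surface-leaving radiance is converted to temperature by inverting the Planck function at the band wavelength. The wavelength and emissivity values below are illustrative assumptions, not those used in the paper:

```python
import math

H = 6.626e-34   # Planck constant, J*s
C = 2.998e8     # speed of light, m/s
KB = 1.381e-23  # Boltzmann constant, J/K

def brightness_temperature(radiance, wavelength_um=4.0, emissivity=0.98):
    """Invert Planck's law to get a surface temperature (K) from spectral
    radiance (W m^-2 sr^-1 m^-1) at a single MIR wavelength, after a simple
    emissivity correction."""
    lam = wavelength_um * 1e-6
    L = radiance / emissivity          # surface-leaving radiance -> blackbody radiance
    c1 = 2.0 * H * C ** 2
    c2 = H * C / KB
    return c2 / (lam * math.log(1.0 + c1 / (lam ** 5 * L)))
```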

Development of Estimation Equation for Minimum and Maximum DBH Using National Forest Inventory (국가산림자원조사 자료를 이용한 최저·최고 흉고직경 추정식 개발)

  • Kang, Jin-Taek;Yim, Jong-Su;Lee, Sun-Jeoung;Moon, Ga-Hyun;Ko, Chi-Ung
    • Journal of agriculture & life science / v.53 no.6 / pp.23-33 / 2019
  • Following an amendment of the relevant law (the national forest management planning and methods, Korea Forest Service) that changed the management information system containing the management records and plans for the entire national forest in South Korea, the average, maximum, and minimum values of DBH must now be reported, whereas only average values were required before the amendment. Accordingly, there is a need for an estimation algorithm by which all the existing DBH values established before the revision can be converted into the maximum and minimum values. The purpose of this study is to develop an estimation equation that automatically produces the minimum and maximum DBH values for 12 main tree species from the data in the national forest management information system. To develop the estimation equation for the minimum and maximum DBH values, 6,858 fixed sample plots from the fifth and sixth national forest inventories (2006-2015) were used. Two estimation models, DBH-tree age and DBH-tree height, were applied using growth variables such as DBH, tree age, and height to derive the estimation equation for the maximum and minimum DBH values. The findings showed that the most suitable models for estimating the minimum and maximum DBH were Dmin = a + bD + cH and Dmax = a + bD + cH, with DBH and height as variables. Based on these optimal models, the estimation equation was devised for the minimum and maximum DBH values of the 12 main tree species.
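A minimal sketch of fitting the selected model form, Dmin = a + bD + cH (and likewise Dmax), by ordinary least squares, where D is the plot DBH and H the height. The arrays below are hypothetical stand-ins for the national forest inventory plot data:

```python
import numpy as np

def fit_dbh_model(D, H, target):
    """Fit target = a + b*D + c*H by least squares and return (a, b, c)."""
    X = np.column_stack([np.ones_like(D), D, H])
    coef, *_ = np.linalg.lstsq(X, target, rcond=None)
    return coef

# Hypothetical example: mean DBH (cm), height (m), observed minimum DBH (cm).
D = np.array([18.0, 22.0, 25.0, 30.0])
H = np.array([12.0, 14.0, 15.0, 18.0])
Dmin = np.array([10.0, 13.0, 15.0, 19.0])
a, b, c = fit_dbh_model(D, H, Dmin)
print(f"Dmin ~ {a:.2f} + {b:.2f}*D + {c:.2f}*H")
```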

Stock Price Prediction by Utilizing Category Neutral Terms: Text Mining Approach (카테고리 중립 단어 활용을 통한 주가 예측 방안: 텍스트 마이닝 활용)

  • Lee, Minsik;Lee, Hong Joo
    • Journal of Intelligence and Information Systems / v.23 no.2 / pp.123-138 / 2017
  • Since the stock market is driven by the expectations of traders, studies have been conducted to predict stock price movements through analysis of various sources of text data. To predict stock price movements, research has been conducted not only on the relationship between text data and fluctuations in stock prices, but also on trading stocks based on news articles and social media responses. Studies that predict stock price movements have also applied classification algorithms by constructing a term-document matrix, in the same way as other text mining approaches. Because a document contains many words, it is better to select the words that contribute more when building the term-document matrix. Based on word frequency, words that show too little frequency or importance are removed; words can also be selected according to their contribution, measured by the degree to which a word helps classify a document correctly. The basic idea of constructing a term-document matrix is to collect all the documents to be analyzed and to select and use the words that have an influence on classification. In this study, we analyze the documents for each individual stock and select the words that are irrelevant to all categories as neutral words. We extract the words around each selected neutral word and use them to generate the term-document matrix. The neutral-word approach starts from the idea that stock movement is less related to the presence of the neutral words themselves, while the words surrounding a neutral word are more likely to affect stock price movements. The generated term-document matrix is then fed into an algorithm that classifies stock price fluctuations. In this study, we first removed stop words and selected neutral words for each stock, and we excluded, from the selected words, those that also appeared in news articles about other stocks. Through an online news portal, we collected four months of news articles on the top 10 stocks by market capitalization. We used three months of news data as training data and applied the remaining one month of news articles to the model to predict the stock price movements of the next day. We used SVM, boosting, and random forest to build models and predict the movements of stock prices. The stock market was open for a total of 80 days during the four months (2016/02/01 ~ 2016/05/31); the initial 60 days were used as the training set and the remaining 20 days as the test set. The proposed neutral-word-based algorithm showed better classification performance than the word selection method based on sparsity. This study predicted stock price volatility by collecting and analyzing news articles of the top 10 stocks by market capitalization. We used a term-document matrix-based classification model to estimate stock price fluctuations, and compared the performance of the existing sparsity-based word extraction method with the suggested method of removing words from the term-document matrix. The suggested method differs from the existing word extraction method in that it uses not only the news articles for the corresponding stock but also other news items to determine which words to extract. In other words, it removed not only the words that appeared commonly in both rising and falling cases, but also the words that appeared commonly in the news for other stocks. When the prediction accuracy was compared, the suggested method showed higher accuracy. The limitation of this study is that the stock price prediction was set up to classify only rises and falls, and the experiment was conducted only for the top ten stocks, which do not represent the entire stock market. In addition, it is difficult to demonstrate investment performance, because stock price fluctuation and profit rate may differ. Therefore, further research using more stocks and predicting yields through trading simulation is needed.
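A hedged sketch of the neutral-word idea described above: keep only words appearing within a small window around predefined category-neutral terms, build a term-document matrix from those context words, and train a classifier on up/down labels. The window size, the neutral-word list, the toy documents, and the labels are all illustrative assumptions:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.svm import SVC

NEUTRAL_WORDS = {"company", "market"}   # hypothetical neutral terms
WINDOW = 3

def context_words(tokens, neutral=NEUTRAL_WORDS, window=WINDOW):
    """Return the words within `window` positions of any neutral word."""
    keep = []
    for i, tok in enumerate(tokens):
        if tok in neutral:
            keep.extend(tokens[max(0, i - window): i + window + 1])
    return " ".join(w for w in keep if w not in neutral)

docs = ["the company posted strong quarterly profit growth",
        "weak demand hurt the company as market sentiment fell"]
labels = [1, 0]  # toy labels: 1 = price up next day, 0 = down

contexts = [context_words(d.split()) for d in docs]
X = CountVectorizer().fit_transform(contexts)   # term-document matrix from context words
model = SVC(kernel="linear").fit(X, labels)
```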

The Audience Behavior-based Emotion Prediction Model for Personalized Service (고객 맞춤형 서비스를 위한 관객 행동 기반 감정예측모형)

  • Ryoo, Eun Chung;Ahn, Hyunchul;Kim, Jae Kyeong
    • Journal of Intelligence and Information Systems / v.19 no.2 / pp.73-85 / 2013
  • In today's information society, the importance of knowledge services that use information to create value is growing day by day. In addition, with the development of IT technology, it has become easy to collect and use information, and many companies actively use customer information for marketing in a variety of industries. Into the 21st century, companies have actively used culture and the arts to manage their corporate image and for marketing closely linked to their commercial interests. However, it is difficult for companies to attract or maintain consumers' interest through technology alone, so performing cultural activities has become a common tool of differentiation across many firms. Many firms have used the customer experience as a new marketing strategy in order to respond effectively to a competitive market. Accordingly, the need for personalized services that provide new experiences based on personal profile information containing individual characteristics is emerging rapidly. Personalized service using a customer's individual profile information, such as language, symbols, behavior, and emotions, is thus very important today; through it, we can judge the interaction between people and content and maximize the customer's experience and satisfaction. Various related works provide customer-centered services, and emotion recognition research in particular has been emerging recently. Existing research has mostly performed emotion recognition using bio-signals, and most studies target voice and facial expressions, which show large emotional changes. However, there are several difficulties in predicting people's emotions caused by the limitations of equipment and service environments. In this paper, we therefore develop an emotion prediction model based on a vision-based interface to overcome these limitations. Emotion recognition based on people's gestures and postures has been studied by several researchers. This paper develops a model that recognizes people's emotional states through body gesture and posture using the difference image method, and finds an optimal, validated model for predicting four kinds of emotions. The proposed model aims to automatically determine and predict four human emotions (sadness, surprise, joy, and disgust). To build the model, an event booth was installed in KOCCA's lobby, and appropriate stimulating movies were shown to collect participants' body gestures and postures as their emotions changed. We then extracted body movements using the difference image method and refined the data to build the proposed model with a neural network. The proposed model for emotion prediction used three types of time-frame sets (20 frames, 30 frames, and 40 frames), and we adopted the model with the best performance compared with the other models. Before building the three models, the entire set of 97 data samples was divided into learning, test, and validation sets. The proposed model for emotion prediction was constructed using an artificial neural network. We used the back-propagation algorithm as the learning method, set the learning rate to 10% and the momentum rate to 10%, and used the sigmoid function as the transfer function. We designed a three-layer perceptron neural network with one hidden layer and four output nodes. Based on the test data set, learning was stopped when it reached 50,000 iterations after reaching the minimum error, in order to explore the stopping point of learning. We finally computed each model's accuracy and found the best model for predicting each emotion. The results showed prediction accuracies of 100% for sadness and 96% for joy with the 20-frame set model, and 88% for surprise and 98% for disgust with the 30-frame set model. The findings of this research are expected to be useful in providing effective algorithms for personalized services in various industries such as advertisement, exhibitions, and performances.
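A hedged sketch of the difference-image step used to extract body movement: consecutive grayscale frames are subtracted and thresholded, and the amount of changed pixels is summarized over a fixed time window (the abstract uses 20/30/40-frame sets). The threshold value and the exact feature definition are assumptions, not the paper's specification:

```python
import numpy as np

def motion_features(frames, window=30, threshold=25):
    """frames: iterable of 2-D uint8 grayscale arrays.
    Returns one motion-energy value per `window`-frame segment."""
    frames = [f.astype(np.int16) for f in frames]
    energies = []
    for prev, curr in zip(frames, frames[1:]):
        diff = np.abs(curr - prev) > threshold   # difference image
        energies.append(diff.mean())             # fraction of moving pixels
    # aggregate per time-frame set (e.g. 20/30/40 frames)
    return [float(np.mean(energies[i:i + window]))
            for i in range(0, len(energies), window)]
```

Features like these could then be fed to the three-layer perceptron described in the abstract; the network itself is not sketched here.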

A Study on the Effect of Network Centralities on Recommendation Performance (네트워크 중심성 척도가 추천 성능에 미치는 영향에 대한 연구)

  • Lee, Dongwon
    • Journal of Intelligence and Information Systems / v.27 no.1 / pp.23-46 / 2021
  • Collaborative filtering, which is often used in personalized recommendation, is recognized as a very useful technique for finding similar customers and recommending products to them based on their purchase history. However, the traditional collaborative filtering technique has difficulty calculating similarities for new customers or products, because it computes similarities based on direct connections and common features among customers. For this reason, hybrid techniques that also use content-based filtering have been designed. Separately, efforts have been made to solve these problems by applying the structural characteristics of social networks. This approach calculates similarities indirectly through similar customers placed between them: a customer network is created from purchasing data, and the similarity between two customers is calculated from the features of the network that indirectly connects them. Such similarity can be used as a measure to predict whether a target customer will accept a recommendation, and the centrality metrics of the network can be used to calculate it. Different centrality metrics matter in that they may have different effects on recommendation performance; furthermore, the effect of these centrality metrics may vary depending on the recommender algorithm. In addition, recommendation techniques using network analysis can be expected to improve recommendation performance not only for new customers or products but also for the entire set of customers or products. By considering a customer's purchase of an item as a link created between the customer and the item on the network, predicting user acceptance of a recommendation becomes a problem of predicting whether a new link will be created between them. Because classification models fit this binary problem of whether a link forms or not, decision tree, k-nearest neighbors (KNN), logistic regression, artificial neural network, and support vector machine (SVM) models were selected for the research. The data for performance evaluation were order data collected from an online shopping mall over four years and two months. The first three years and eight months of data were used to construct the social network, and the records of the following four months were used to train and evaluate the recommender models. Experiments applying the centrality metrics to each model show that the recommendation acceptance rates of the centrality metrics differ for each algorithm at a meaningful level. In this work, we analyzed only four commonly used centrality metrics: degree centrality, betweenness centrality, closeness centrality, and eigenvector centrality. Eigenvector centrality records the lowest performance in all models except the support vector machine. Closeness centrality and betweenness centrality show similar performance across all models. Degree centrality ranks in the middle across models, while betweenness centrality always ranks higher than degree centrality. Finally, closeness centrality is characterized by distinct differences in performance depending on the model: it ranks first in logistic regression, artificial neural network, and decision tree with numerically high performance, but records very low rankings with low performance in the support vector machine and k-nearest neighbors models. As the experimental results reveal, in a classification model, network centrality metrics over the subnetwork that connects two nodes can effectively predict the connectivity between the two nodes in a social network. Furthermore, each metric performs differently depending on the classification model. This result implies that choosing appropriate metrics for each algorithm can lead to higher recommendation performance. In general, betweenness centrality can guarantee a high level of performance in any model, and introducing closeness centrality could be considered to obtain higher performance for certain models.
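A hedged sketch of the general setup described above: treat purchases as links in a customer-item network, derive a centrality-based feature for each candidate (customer, item) pair, and train a classifier to predict whether the link will form. The toy graph, labels, and the choice of a single betweenness-centrality feature are illustrative assumptions; the study compares degree, betweenness, closeness, and eigenvector centrality across five model types:

```python
import networkx as nx
from sklearn.linear_model import LogisticRegression

# Toy customer-item purchase network (u* = customers, i* = items).
G = nx.Graph()
G.add_edges_from([("u1", "i1"), ("u1", "i2"), ("u2", "i2"),
                  ("u2", "i3"), ("u3", "i1"), ("u3", "i3")])

betweenness = nx.betweenness_centrality(G)

def pair_feature(u, i):
    """A simple pairwise feature: mean betweenness centrality of the two nodes."""
    return [(betweenness[u] + betweenness[i]) / 2.0]

# Candidate (customer, item) pairs with toy labels: 1 = link formed later, 0 = not.
pairs = [("u1", "i3"), ("u2", "i1"), ("u3", "i2"), ("u1", "i1")]
y = [1, 0, 1, 1]
X = [pair_feature(u, i) for u, i in pairs]
clf = LogisticRegression().fit(X, y)   # any of the five classifiers could be swapped in
```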

A Study on the Improvement of Recommendation Accuracy by Using Category Association Rule Mining (카테고리 연관 규칙 마이닝을 활용한 추천 정확도 향상 기법)

  • Lee, Dongwon
    • Journal of Intelligence and Information Systems / v.26 no.2 / pp.27-42 / 2020
  • Traditional companies with offline stores have been unable to secure large display space because of cost. This limitation inevitably meant that only a limited range of products could be displayed on the shelves, which deprived consumers of the opportunity to experience various items. Taking advantage of the virtual space of the Internet, online shopping goes beyond the limitations of the physical space of offline shopping and can now display numerous products on web pages, satisfying consumers with a variety of needs. Paradoxically, however, this can also make it difficult for consumers to compare and evaluate too many alternatives in their purchase decision-making process. As an effort to address this side effect, various kinds of purchase decision support systems have been studied, such as keyword-based item search services and recommender systems. These systems can reduce the search time for items, prevent consumers from leaving while browsing, and contribute to the seller's increased sales. Among these systems, recommender systems based on association rule mining can effectively detect interrelated products from transaction data such as orders. The associations between products obtained by statistical analysis provide clues for predicting how interested consumers will be in another product. However, since the algorithm is based on the number of transactions, products that have not yet sold enough in the early days after launch may not be included in the list of recommendations even though they are highly likely to sell. Such missing items may not get sufficient opportunities to be exposed to consumers and record sufficient sales, and then fall into a vicious cycle of declining sales and omission from the recommendation list. This situation is an inevitable outcome when recommendations are made based on past transaction histories rather than on potential future sales. This study started from the idea that indirectly reflecting this potential would help in selecting products to recommend. In light of the fact that the attributes of a product affect consumers' purchasing decisions, this study reflects them in the recommender system. In other words, consumers who visit a product page have shown interest in the attributes of the product and are likely to be interested in other products with the same attributes. On this assumption, the recommender system can use these attributes to select recommended products with a higher acceptance rate. Given that the category is one of the main attributes of a product, it can be a good indicator not only of direct associations between two items but also of potential associations that have yet to be revealed. Based on this idea, the study devised a recommender system that reflects associations not only between products but also between categories. Through regression analysis, the two kinds of associations were combined to form a model that predicts the hit rate of a recommendation. To evaluate the performance of the proposed model, another regression model was developed based only on associations between products. The comparative experiments were designed to resemble the environment in which products are actually recommended in online shopping malls. First, association rules for all possible combinations of antecedent and consequent items were generated from the order data. Then, the hit rate of each association rule was predicted from the support and confidence calculated by each model. The comparative experiments using order data collected from an online shopping mall show that recommendation accuracy can be improved by further reflecting not only the associations between products but also those between categories when recommending related products. The proposed model showed a 2 to 3 percent improvement in hit rates compared with the existing model. From a practical point of view, this is expected to have a positive effect on improving consumers' purchasing satisfaction and increasing sellers' sales.
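A minimal sketch of the two association signals the model combines: support and confidence computed on item pairs and on the corresponding category pairs. The toy orders, category map, and the specific rule are illustrative assumptions; in the paper the two signals are combined via a regression model that predicts the hit rate of a recommendation:

```python
def support_confidence(antecedent, consequent, baskets):
    """Support and confidence of the rule antecedent -> consequent."""
    both = sum(1 for b in baskets if antecedent in b and consequent in b)
    ante = sum(1 for b in baskets if antecedent in b)
    return both / len(baskets), (both / ante if ante else 0.0)

# Toy order data and a hypothetical item-to-category map.
orders = [{"A", "B"}, {"A", "C"}, {"B", "C"}, {"A", "B", "C"}]
category = {"A": "shoes", "B": "shoes", "C": "bags"}

# Item-level rule A -> C.
item_sup, item_conf = support_confidence("A", "C", orders)

# Category-level rule shoes -> bags, computed on orders mapped to categories.
cat_orders = [{category[i] for i in b} for b in orders]
cat_sup, cat_conf = support_confidence(category["A"], category["C"], cat_orders)

# The four values (item_sup, item_conf, cat_sup, cat_conf) would serve as
# regressors for the hit-rate prediction model described in the abstract.
print(item_sup, item_conf, cat_sup, cat_conf)
```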