• Title/Summary/Keyword: Calculate


A Study on the Application of Outlier Analysis for Fraud Detection: Focused on Transactions of Auction Exception Agricultural Products (부정 탐지를 위한 이상치 분석 활용방안 연구 : 농수산 상장예외품목 거래를 대상으로)

  • Kim, Dongsung;Kim, Kitae;Kim, Jongwoo;Park, Steve
    • Journal of Intelligence and Information Systems
    • /
    • v.20 no.3
    • /
    • pp.93-108
    • /
    • 2014
  • To support business decision making, interest in and efforts to analyze and use transaction data from different perspectives are increasing. Such efforts are not limited to customer management or marketing; they are also used to monitor and detect fraudulent transactions. Fraudulent transactions evolve into various patterns by taking advantage of information technology. To keep up with this evolution, much work has gone into fraud detection methods and advanced application systems that improve the accuracy and ease of fraud detection. As a case of fraud detection, this study aims to provide effective fraud detection methods for auction exception agricultural products in the largest Korean agricultural wholesale market. The auction exception products policy exists to complement auction-based trading in the agricultural wholesale market. That is, most trades of agricultural products are performed by auction; however, specific products are assigned as auction exception products when their total volumes are relatively small, the number of wholesalers is small, or wholesalers have difficulty purchasing the products. However, the auction exception products policy raises several problems regarding the fairness and transparency of transactions, which calls for fraud detection. In this study, to generate fraud detection rules, real large-scale trade transaction data from the market for 2008 to 2010 are analyzed, amounting to more than 1 million transactions and 1 billion US dollars in transaction volume. Agricultural transaction data have unique characteristics such as frequent changes in supply volumes and turbulent time-dependent changes in price. Since this was the first attempt to identify fraudulent transactions in this domain, there was no training data set for supervised learning, so fraud detection rules are generated using an outlier detection approach. We assume that outlier transactions are more likely to be fraudulent than normal transactions. Outlier transactions are identified by comparing the daily, weekly, and quarterly average unit prices of product items. Quarterly average unit prices of product items for specific wholesalers are also used to identify outlier transactions. The reliability of the generated fraud detection rules is confirmed by domain experts. To determine whether a transaction is fraudulent, the normal distribution and the normalized Z-value concept are applied. That is, the unit price of a transaction is transformed into a Z-value to calculate its occurrence probability, approximating the distribution of unit prices by a normal distribution. A modified Z-value of the unit price is used rather than the original Z-value, because in the case of auction exception agricultural products the number of wholesalers is small, and Z-values are therefore influenced by the outlier fraud transactions themselves. The modified Z-values are called Self-Eliminated Z-scores because they are calculated excluding the unit price of the specific transaction being checked for fraud. To show the usefulness of the proposed approach, a prototype fraud transaction detection system is developed using Delphi. The system consists of five main menus and related submenus. The first functionality of the system is importing transaction databases. The next important functions set up fraud detection parameters.
By changing the fraud detection parameters, system users can control the number of potential fraud transactions. Execution functions provide the fraud detection results found under the current parameters. The potential fraud transactions can be viewed on screen or exported as files. The study is an initial attempt to identify fraudulent transactions in auction exception agricultural products, and many research topics remain. First, the scope of the analyzed data was limited by data availability; it is necessary to include more data on transactions, wholesalers, and producers to detect fraudulent transactions more accurately. Next, the scope of fraud detection needs to be extended to fishery products. There are also many possibilities for applying other data mining techniques to fraud detection; for example, a time series approach is a potential technique for this problem. Finally, although outlier transactions are detected here based on unit prices, it is also possible to derive fraud detection rules based on transaction volumes.
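
As a concrete illustration of the Self-Eliminated Z-score idea described above, the following is a minimal sketch in Python: for each unit price in a group (for example, the same product item and period), the mean and standard deviation are computed over the other prices only, so a fraudulent outlier cannot mask itself. The price values and the threshold of 3.0 are illustrative assumptions, not figures from the paper.

```python
import numpy as np

def self_eliminated_z_scores(prices):
    """Z-score of each unit price computed against the mean/std of the
    *other* prices in the same group, so an outlier cannot mask itself."""
    prices = np.asarray(prices, dtype=float)
    z = np.empty_like(prices)
    for i in range(len(prices)):
        others = np.delete(prices, i)
        mu, sigma = others.mean(), others.std(ddof=1)
        z[i] = (prices[i] - mu) / sigma if sigma > 0 else 0.0
    return z

# Hypothetical usage: flag transactions whose unit price deviates strongly
# from the other unit prices of the same item in the same period.
unit_prices = [1200, 1150, 1230, 1180, 4800]            # illustrative values only
z = self_eliminated_z_scores(unit_prices)
suspicious = [i for i, v in enumerate(z) if abs(v) > 3.0]  # threshold is an assumption
```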

The Ontology Based, the Movie Contents Recommendation Scheme, Using Relations of Movie Metadata (온톨로지 기반 영화 메타데이터간 연관성을 활용한 영화 추천 기법)

  • Kim, Jaeyoung;Lee, Seok-Won
    • Journal of Intelligence and Information Systems
    • /
    • v.19 no.3
    • /
    • pp.25-44
    • /
    • 2013
  • Accessing movie content has become easier and more frequent with the advent of smart TV, IPTV, and web services that can be used to search for and watch movies. In this situation, users increasingly search for movie content that matches their preferences. However, since the amount of available movie content is so large, users need considerable effort and time to find what they want. Hence, there has been much research on personalized item recommendation through analysis and clustering of user preferences and profiles. In this study, we propose a recommendation system that uses an ontology-based knowledge base. Our ontology can represent not only relations between movie metadata but also relations between metadata and user profiles. The relations among metadata can indicate similarity between movies. To build the knowledge base, our ontology model considers two aspects: the movie metadata model and the user model. For the ontology-based movie metadata model, we select genre, actor/actress, keywords, and synopsis as the main metadata, since these affect which movies users choose. The user model contains demographic information about the user and relations between the user and movie metadata. In our model, the movie ontology consists of seven concepts (Movie, Genre, Keywords, Synopsis Keywords, Character, and Person), eight attributes (title, rating, limit, description, character name, character description, person job, person name), and ten relations between concepts. For our knowledge base, we input individual data for 14,374 movies into each concept of the content ontology model. This movie metadata knowledge base is used to search for movies related to metadata of interest to the user, and it can find similar movies through relations between concepts. We also propose an architecture for movie recommendation consisting of four components. The first component searches for candidate movies based on the demographic information of the user. In this component, we group users according to demographic information so that movies can be recommended for each group, define the rules for assigning users to groups, and generate the query used to search for candidate movies. The second component searches for candidate movies based on user preference. When choosing a movie, users consider metadata such as genre, actor/actress, synopsis, and keywords; users input their preferences, and the system searches for movies accordingly. Unlike existing movie recommendation systems, the proposed system can find similar movies through relations between concepts. Each metadata item of the recommended candidate movies has a weight that is used to decide the recommendation order. The third component merges the results of the first and second components; in this step, we calculate a weight for each movie from the weight values of its metadata and sort the movies by this weight. The fourth component analyzes the result of the third component, determines the level of contribution of each metadata item, and applies the contribution weights to the metadata; the result of this step is used as the recommendation for users. We test the usability of the proposed scheme with a web application implemented for the experiment using JSP, JavaScript, and the Protégé API.
In our experiment, we collected results from 20 men and women ranging in age from 20 to 29, and we used 7,418 movies with ratings of 7.0 or higher. We provided Top-5, Top-10, and Top-20 recommended movies to each user, who then chose the movies of interest. On average, users chose 2.1 interesting movies in the Top-5, 3.35 in the Top-10, and 6.35 in the Top-20, which is better than the results yielded by each single metadata item.
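
The merging and ranking step of the third component can be illustrated with a short sketch: demographic-based and preference-based candidates are combined, each movie is scored by summing the weights of its matched metadata, and the list is sorted by that score. The movie names, metadata keys, and weights below are hypothetical placeholders, not data from the study.

```python
from collections import defaultdict

def merge_and_rank(demographic_candidates, preference_candidates):
    """Each candidate list holds (movie_id, {metadata: weight}) pairs."""
    scores = defaultdict(float)
    for movie_id, metadata_weights in demographic_candidates + preference_candidates:
        scores[movie_id] += sum(metadata_weights.values())   # accumulate metadata weights
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

demo = [("movie_a", {"genre": 0.4}), ("movie_b", {"genre": 0.4, "actor": 0.3})]
pref = [("movie_a", {"keyword": 0.5}), ("movie_c", {"synopsis": 0.2})]
ranking = merge_and_rank(demo, pref)
# -> [('movie_a', 0.9), ('movie_b', 0.7), ('movie_c', 0.2)]
```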

The Effects of Pergola Wisteria floribunda's LAI on Thermal Environment (그늘시렁 Wisteria floribunda의 엽면적지수가 온열환경에 미치는 영향)

  • Ryu, Nam-Hyong;Lee, Chun-Seok
    • Journal of the Korean Institute of Landscape Architecture
    • /
    • v.45 no.6
    • /
    • pp.115-125
    • /
    • 2017
  • This study investigated users' thermal environments under a pergola (L 7,200 × W 4,200 × H 2,700 mm) covered with Wisteria floribunda (Willd.) DC. according to the variation of leaf area index (LAI). We carried out detailed measurements with two human-biometeorological stations on a popular square in Jinju, Korea (N 35°10′59.8″, E 128°05′32.0″, elevation: 38 m). One of the stations stood under the pergola, while the other stood in the sun. The measurement spots were instrumented with microclimate monitoring stations to continuously measure air temperature, relative humidity, wind speed, and shortwave and longwave radiation from the six cardinal directions at a height of 0.6 m, so as to calculate the Universal Thermal Climate Index (UTCI) from 9th April to 27th September 2017. The LAI was measured using the LAI-2200C Plant Canopy Analyzer. The analysis of 18 days of 1-minute-interval human-biometeorological data for a man in a sitting position from 10 a.m. to 4 p.m. showed the following. During the whole observation period, daily average air temperatures under the pergola were 0.7~2.3°C lower than those in the sun, while daily average wind speed and relative humidity under the pergola were 0.17~0.38 m/s and 0.4~3.1% higher, respectively, than those in the sun. There was a significant relationship between LAI and Julian day number, expressed by the equation y = -0.0004x² + 0.1719x - 11.765 (R² = 0.9897). The average mean radiant temperature (Tmrt) under the pergola was 11.9~25.4°C lower than in the sun, and the maximum ΔTmrt was 24.1~30.2°C. There was a significant relationship between LAI and the reduction ratio (%) of daily average Tmrt relative to the sun, expressed by the equation y = 0.0678·ln(x) + 0.3036 (R² = 0.9454). The average UTCI under the pergola was 4.1~8.3°C lower than in the sun, and the maximum ΔUTCI was 7.8~10.2°C. There was a significant relationship between LAI and the reduction ratio (%) of daily average UTCI relative to the sun, expressed by the equation y = 0.0322·ln(x) + 0.1538 (R² = 0.8946). The shading by the vine-covered pergola was very effective in reducing the daytime UTCI absorbed by a man in a sitting position in summer, largely through the reduction of mean radiant temperature by sun protection, lowering thermal stress from very strong (UTCI > 38°C) and strong (UTCI > 32°C) down to strong (UTCI > 32°C) and moderate (UTCI > 26°C). Therefore, vine-covered pergolas used for shading outdoor spaces are essential to mitigate heat stress and can create better human thermal comfort, especially in cities during summer. However, the thermal environment under the vine-covered pergola during the heat wave still exposed users to "very strong heat stress" (UTCI > 38°C), so users should refrain from outdoor activities during heat waves.
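
The fitted relationships reported above can be chained to estimate the expected shading benefit for a given date, as in the sketch below. Treating x as the Julian day in the LAI equation and as LAI in the two reduction-ratio equations is an interpretation of the abstract, and the example input is purely illustrative.

```python
import math

# Regression coefficients as reported in the abstract.
def lai_from_julian_day(day):
    return -0.0004 * day**2 + 0.1719 * day - 11.765   # R^2 = 0.9897

def tmrt_reduction_ratio(lai):
    return 0.0678 * math.log(lai) + 0.3036            # R^2 = 0.9454

def utci_reduction_ratio(lai):
    return 0.0322 * math.log(lai) + 0.1538            # R^2 = 0.8946

lai = lai_from_julian_day(200)                        # roughly mid-July, illustrative
print(lai, tmrt_reduction_ratio(lai), utci_reduction_ratio(lai))
```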

Product Community Analysis Using Opinion Mining and Network Analysis: Movie Performance Prediction Case (오피니언 마이닝과 네트워크 분석을 활용한 상품 커뮤니티 분석: 영화 흥행성과 예측 사례)

  • Jin, Yu;Kim, Jungsoo;Kim, Jongwoo
    • Journal of Intelligence and Information Systems
    • /
    • v.20 no.1
    • /
    • pp.49-65
    • /
    • 2014
  • Word of mouth (WOM) is a behavior consumers use to transfer or communicate their product or service experience to other consumers. Due to the popularity of social media such as Facebook, Twitter, blogs, and online communities, electronic WOM (e-WOM) has become important to the success of products or services. As a result, most enterprises pay close attention to e-WOM about their products or services. This is especially important for movies, as these are experiential products. This paper aims to identify the network factors of an online movie community that impact box office revenue, using social network analysis. In addition to traditional WOM factors (volume and valence of WOM), network centrality measures of the online community are included as potential influences on box office revenue. Based on previous research, we develop five hypotheses on the relationships between the potential influential factors (WOM volume, WOM valence, degree centrality, betweenness centrality, closeness centrality) and box office revenue. The first hypothesis is that the accumulated volume of WOM in online product communities is positively related to the total revenue of movies. The second hypothesis is that the accumulated valence of WOM in online product communities is positively related to the total revenue of movies. The third hypothesis is that the average degree centrality of reviewers in online product communities is positively related to the total revenue of movies. The fourth hypothesis is that the average betweenness centrality of reviewers in online product communities is positively related to the total revenue of movies. The fifth hypothesis is that the average closeness centrality of reviewers in online product communities is positively related to the total revenue of movies. To verify our research model, we collect movie review data from the Internet Movie Database (IMDb), a representative online movie community, and movie revenue data from the Box-Office-Mojo website. The movies in this analysis are the weekly top-10 movies from September 1, 2012, to September 1, 2013. We collect movie metadata such as screening periods and user ratings, as well as community data in IMDb including reviewer identification, review content, review times, responder identification, reply content, reply times, and reply relationships. For the same period, the revenue data from Box-Office-Mojo are collected on a weekly basis. Movie community networks are constructed based on reply relationships between reviewers. Using a social network analysis tool, NodeXL, we calculate the averages of three centralities, degree, betweenness, and closeness, for each movie. Correlation analysis of the focal variables and the dependent variable (final revenue) shows that the three centrality measures are highly correlated, prompting us to perform multiple regressions separately with each centrality measure. Consistent with previous research, our regression results show that the volume and valence of WOM are positively related to the final box office revenue of movies. Moreover, the average betweenness centralities from the initial community networks impact the final movie revenues, whereas neither the average degree centralities nor the average closeness centralities influence final movie performance. Based on the regression results, hypotheses 1, 2, and 4 are accepted, and hypotheses 3 and 5 are rejected.
This study tries to link the network structure of e-WOM in online product communities with product performance. Based on the analysis of a real online movie community, the results show that online community network structures can work as a predictor of movie performance: the betweenness centralities of the reviewer community are critical for predicting movie performance, while degree centralities and closeness centralities have no influence. As future research topics, similar analyses of other product categories, such as electronic goods and online content, are required to generalize the study results.
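
The network-construction step can be sketched briefly. The study itself used NodeXL; the example below uses the networkx library instead, and the reviewer IDs and reply pairs are illustrative placeholders rather than data from the paper.

```python
import networkx as nx

# Edges follow reply relationships between reviewers of one movie (hypothetical data).
replies = [("reviewer_a", "reviewer_b"), ("reviewer_c", "reviewer_a"),
           ("reviewer_b", "reviewer_c"), ("reviewer_d", "reviewer_a")]

G = nx.Graph()
G.add_edges_from(replies)

# Per-movie averages of the three centralities used as regression predictors.
avg_degree      = sum(nx.degree_centrality(G).values()) / G.number_of_nodes()
avg_betweenness = sum(nx.betweenness_centrality(G).values()) / G.number_of_nodes()
avg_closeness   = sum(nx.closeness_centrality(G).values()) / G.number_of_nodes()
```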

Application of OECD Agricultural Water Use Indicator in Korea (우리나라에 적합한 OECD 농업용수 사용지표의 설정)

  • Hur, Seung-Oh;Jung, Kang-Ho;Ha, Sang-Keun;Song, Kwan-Cheol;Eom, Ki-Cheol
    • Korean Journal of Soil Science and Fertilizer
    • /
    • v.39 no.5
    • /
    • pp.321-327
    • /
    • 2006
  • In Korea, as in many other OECD countries, there is growing competition for water resources among industrial, domestic, and agricultural consumers and the environment. The demand for water is also affecting aquatic ecosystems, particularly where withdrawals exceed the minimum environmental needs of rivers, lakes, and wetland habitats. In this context, the OECD developed three indicators related to agricultural water use: the first is a water use intensity indicator, expressed as the quantity or share of agricultural water use in total national water utilization; the second is a water stress indicator, expressed as the proportion of rivers (in length) subject to diversion or regulation for irrigation without reserving a minimum limiting reference flow; and the third is a water use efficiency indicator, defined in terms of technical and economic efficiency. These indicators have different meanings with respect to water resource conservation and sustainable water use, so it is important that they reflect those intrinsic meanings. The problem is that the overall water flow in the agro-ecosystem and the recycling of water are not considered in the assessment of agricultural water use needed to calculate these indicators; that is, regional or meteorological characteristics and site-specific farming practices were not considered. In this paper, we tried to calculate the water use indicators suggested by the OECD and to modify some of them for our situation, because water use patterns and water cycling in Korea, where paddy rice farming is dominant in the monsoon region, are quite different from those of semi-arid regions. In calculating water use intensity, we excluded the amount of water returned through the ground from total agricultural water use, because a large amount of the water supplied to farms is discharged into streams or groundwater; the resulting water use intensity was 22.9% in 2001. As for the water stress indicator, Korea has neither defined nor monitored reference levels of minimum flow for rivers subject to diversion of water for irrigation, so we calculated the indicator differently from the OECD method, using data on the degree of water storage in agricultural reservoirs, because 87% of irrigation water is taken from such reservoirs. Water use technical efficiency was calculated as the inverse of the ratio of irrigation water to the standard water requirement of paddy rice; the efficiency in 2001 was better than in 1990 and 1998. As for the economic efficiency of water use, many things must be taken into consideration to make a useful indicator that reflects the socio-economic value of agricultural products resulting from water use. In conclusion, site-specific, regional, or meteorological characteristics such as Korea's are not considered in the calculation of water use indicators by the methods suggested by the OECD (Volume 3, 2001), so new indicators need to be developed to be more widely applicable around the world.
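
The two modified calculations described above can be written out as a minimal sketch. The formulas are an interpretation of the abstract's wording, and the numbers are purely illustrative, not the paper's data.

```python
def water_use_intensity(agri_use, returned_to_ground, total_national_use):
    """Share (%) of agricultural water use in total national use,
    excluding the portion returned to streams or groundwater."""
    return (agri_use - returned_to_ground) / total_national_use * 100.0

def technical_efficiency(standard_requirement, irrigation_supplied):
    """Inverse of the ratio of irrigation water supplied to the
    standard water requirement of paddy rice."""
    return standard_requirement / irrigation_supplied

# Illustrative values only (billion m^3 for the intensity example).
print(water_use_intensity(agri_use=160, returned_to_ground=45, total_national_use=500))
print(technical_efficiency(standard_requirement=1.0, irrigation_supplied=1.2))
```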

A Study on Knowledge Entity Extraction Method for Individual Stocks Based on Neural Tensor Network (뉴럴 텐서 네트워크 기반 주식 개별종목 지식개체명 추출 방법에 관한 연구)

  • Yang, Yunseok;Lee, Hyun Jun;Oh, Kyong Joo
    • Journal of Intelligence and Information Systems
    • /
    • v.25 no.2
    • /
    • pp.25-38
    • /
    • 2019
  • Selecting high-quality information that meets users' interests and needs from the overflowing mass of content is becoming more important as information generation continues. In this flood of information, efforts are being made to better reflect the user's intention in search results rather than treating the information request as a simple string. Large IT companies such as Google and Microsoft also focus on developing knowledge-based technologies, including search engines, that provide users with satisfaction and convenience. In particular, finance is one of the fields in which text data analysis is expected to be useful, because it constantly generates new information and the earlier the information is obtained, the more valuable it is. Automatic knowledge extraction can be effective in areas where the information flow is vast and new information continues to emerge, such as the financial sector. However, automatic knowledge extraction faces several practical difficulties. First, it is difficult to build corpora from different fields with the same algorithm and to extract good-quality triples. Second, it becomes harder for people to produce labeled text data as the extent and scope of knowledge increase and patterns are constantly updated. Third, performance evaluation is difficult due to the characteristics of unsupervised learning. Finally, defining the problem of automatic knowledge extraction is not easy because of the ambiguous conceptual characteristics of knowledge. To overcome these limitations and improve the semantic performance of stock-related information searching, this study attempts to extract knowledge entities using a neural tensor network and to evaluate the results. Unlike previous work, the purpose of this study is to extract knowledge entities related to individual stock items. Various but relatively simple data processing methods are applied in the presented model to solve the problems of previous research and to enhance the model's effectiveness. This study offers three contributions: first, a practical and simple automatic knowledge extraction method that can be applied in practice; second, the possibility of performance evaluation through a simple problem definition; and finally, increased expressiveness of the knowledge by generating input data on a sentence basis without complex morphological analysis. The results of the empirical analysis and an objective performance evaluation method are also presented. For the empirical study confirming the usefulness of the presented model, experts' reports on 30 individual stocks, the top 30 items by frequency of publication from May 30, 2017 to May 21, 2018, are used. The total number of reports is 5,600; 3,074 reports, about 55% of the total, are designated as the training set, and the remaining 45% as the testing set. Before constructing the model, all reports in the training set are classified by stock, and their entities are extracted using the KKMA named entity recognition tool. For each stock, the top 100 entities by appearance frequency are selected and vectorized using one-hot encoding. Then, using a neural tensor network, as many score functions as there are stocks are trained.
Thus, when a new entity from the testing set appears, its score can be calculated with every score function, and the stock whose function yields the highest score is predicted as the item related to the entity. To evaluate the presented model, we confirm its predictive power, and whether the score functions are well constructed, by calculating the hit ratio over all reports in the testing set. In the empirical study, the presented model shows 69.3% hit accuracy on the testing set of 2,526 reports; this hit ratio is meaningfully high despite some constraints on the research. Looking at the model's prediction performance for each stock, only three stocks, LG ELECTRONICS, KiaMtr, and Mando, show far lower performance than average; this may be due to interference effects with other similar items and the generation of new knowledge. In this paper, we propose a methodology to find key entities, or combinations of them, that are necessary to search for related information in accordance with the user's investment intention. Graph data are generated using only the named entity recognition tool and applied to the neural tensor network without a domain-specific training corpus or word vectors. The empirical test confirms the effectiveness of the presented model as described above. However, some limitations remain; most notably, the model's especially poor performance on a few stocks indicates the need for further research. Finally, through the empirical study, we confirmed that the proposed learning method can be used to semantically match new text information with the related stocks.
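
The abstract does not spell out the exact form of the per-stock score functions, so the sketch below uses the standard neural tensor network scoring form (a bilinear tensor term plus a linear term passed through tanh) over one-hot entity vectors as an illustration. The dimensions, initialization, and the pairing of inputs are assumptions, not details from the paper.

```python
import numpy as np

class StockNTNScorer:
    """One score function per stock, in the usual NTN form
    u^T tanh(e1^T W[1:k] e2 + V [e1; e2] + b)."""
    def __init__(self, dim=100, slices=4, seed=0):
        rng = np.random.default_rng(seed)
        self.W = rng.normal(scale=0.1, size=(slices, dim, dim))  # bilinear tensor
        self.V = rng.normal(scale=0.1, size=(slices, 2 * dim))   # linear term
        self.b = np.zeros(slices)
        self.u = rng.normal(scale=0.1, size=slices)

    def score(self, e1, e2):
        bilinear = np.array([e1 @ self.W[k] @ e2 for k in range(len(self.b))])
        return self.u @ np.tanh(bilinear + self.V @ np.concatenate([e1, e2]) + self.b)

# Hypothetical usage: one-hot vectors for a stock's reference entity (e1) and a
# newly observed entity (e2); the stock whose scorer gives the highest score is
# predicted as the related item.
scorers = {"stock_a": StockNTNScorer(seed=1), "stock_b": StockNTNScorer(seed=2)}
e1, e2 = np.eye(100)[3], np.eye(100)[17]
predicted = max(scorers, key=lambda s: scorers[s].score(e1, e2))
```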

The Effect of Shading on Pedestrians' Thermal Comfort in the E-W Street (동-서 가로에서 차양이 보행자의 열적 쾌적성에 미치는 영향)

  • Ryu, Nam-Hyong;Lee, Chun-Seok
    • Journal of the Korean Institute of Landscape Architecture
    • /
    • v.46 no.6
    • /
    • pp.60-74
    • /
    • 2018
  • This study investigated pedestrians' thermal environments on the north sidewalk of an east-west street during a summer heat wave. We carried out detailed measurements with four human-biometeorological stations on Dongjin Street, Jinju, Korea (N 35°10.73′~10.75′, E 128°55.90′~58.00′, elevation: 50 m). Two of the stations stood under a single row of street trees with a hedge (One-Tree) and under a double row of street trees with a hedge (Two-Tree), one station stood under a shelter and awning (Shelter), and the other stood in the sun (Sunlit). The measurement spots were instrumented with microclimate monitoring stations to continuously measure the microclimate and the radiation from the six cardinal directions at a height of 1.1 m, so as to calculate the Universal Thermal Climate Index (UTCI) from 24th July to 21st August 2018. The radiant temperatures of the sidewalk elements were measured with a reflective sphere and a thermal camera on 29th July 2018. The analysis of 9 days of 1-minute-interval human-biometeorological data for a man in a standing position from 10 a.m. to 4 p.m., and of one day's radiant temperatures of sidewalk elements from 1:16 p.m. to 1:35 p.m., showed the following. The shading of the street trees and the shelter mitigated heat stress by lowering the UTCI during mid- and late-summer daytime: One-Tree and Two-Tree lowered heat stress by 0.4~0.5 and 0.5~0.8 levels respectively, and Shelter by 0.3~1.0 levels, compared with the Sunlit spot. However, the thermal environments at One-Tree, Two-Tree, and Shelter during the heat wave still exposed users to "very strong heat stress", while the Sunlit spot exposed users to "very strong heat stress" and "extreme heat stress". The main heat load temperatures relative to body temperature (37°C) were 7.4°C~21.4°C (pavement), 14.7°C~15.8°C (road), 12.7°C (shelter canopy), 7.0°C (street furniture), and 3.5°C~6.4°C (building facade). The main heat load percentages were 34.9%~81.0% (pavement), 9.6%~25.2% (road), 24.8% (shelter canopy), 14.1%~15.4% (building facade), and 5.7% (street facility). Reducing the radiant temperature of the pavement, road, and building surfaces by shading is the most effective means to achieve outdoor thermal comfort for pedestrians on sidewalks. Therefore, increasing the projected canopy area and LAI of street trees through minimal training and pruning, and building dense roadside hedges, are essential for pedestrian thermal comfort. In addition, thermal liners, highly reflective materials, greening, etc. should be introduced to reduce the surface temperature of shelter and awning canopies, and retro-reflective materials should be applied to building facades to control reflected solar radiation. Pavement watering should also be used more aggressively to reduce the surface temperature of sidewalk pavement.
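
Mean radiant temperature, the main driver of the UTCI differences reported here, can be derived from six-directional radiation measurements with the standard integral-radiation method. The sketch below uses the commonly cited weighting factors for a standing person and typical absorption coefficients; these are general assumptions, not values taken from the paper.

```python
def mean_radiant_temperature(K, L, a_k=0.7, a_l=0.97):
    """K, L: dicts of short-/long-wave flux densities (W/m^2) for the
    directions east, west, south, north, up, down."""
    # Angular weighting factors commonly used for a standing person.
    W = {"east": 0.22, "west": 0.22, "south": 0.22, "north": 0.22,
         "up": 0.06, "down": 0.06}
    s_str = sum(W[d] * (a_k * K[d] + a_l * L[d]) for d in W)  # absorbed radiant flux
    sigma = 5.67e-8                                           # Stefan-Boltzmann constant
    return (s_str / (a_l * sigma)) ** 0.25 - 273.15           # Tmrt in deg C

# Illustrative flux values only.
K = {d: 120.0 for d in ("east", "west", "south", "north", "up", "down")}
L = {d: 450.0 for d in ("east", "west", "south", "north", "up", "down")}
print(mean_radiant_temperature(K, L))
```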

A study on Broad Quantification Calibration to various isotopes for Quantitative Analysis and its SUVs assessment in SPECT/CT (SPECT/CT 장비에서 정량분석을 위한 핵종 별 Broad Quantification Calibration 시행 및 SUV 평가를 위한 팬텀 실험에 관한 연구)

  • Hyun Soo, Ko;Jae Min, Choi;Soon Ki, Park
    • The Korean Journal of Nuclear Medicine Technology
    • /
    • v.26 no.2
    • /
    • pp.20-31
    • /
    • 2022
  • Purpose: Broad Quantification Calibration (B.Q.C) is the procedure that enables quantitative analysis to measure the Standard Uptake Value (SUV) on a SPECT/CT scanner. B.Q.C was performed with Tc-99m, I-123, I-131, and Lu-177, and phantom images were then acquired to check whether the SUVs were measured accurately. Because there is no standard for SUV testing in SPECT, we used the ACR Esser PET phantom as an alternative. The purpose of this study was to lay the groundwork for quantitative analysis with various isotopes on a SPECT/CT scanner. Materials and Methods: Siemens Symbia Intevo 16 and Intevo Bold SPECT/CT scanners were used for this study. The B.Q.C procedure has two steps: the first is a point-source sensitivity calibration, and the second is a volume sensitivity calibration that calculates the Volume Sensitivity Factor (VSF) using a cylinder phantom. To verify the SUV, we acquired images of the ACR Esser PET phantom and measured SUVmean in the background and SUVmax in the hot vials (25, 16, 12, 8 mm). SPSS was used to analyze the difference in SUV between Intevo 16 and Intevo Bold with the Mann-Whitney test. Results: The sensitivities (CPS/MBq) of Detectors 1 and 2 and the VSF (Intevo 16 D1 sensitivity/D2 sensitivity/VSF, followed by Intevo Bold) were 87.7/88.6/1.08 and 91.9/91.2/1.07 for Tc-99m, 79.9/81.9/0.98 and 89.4/89.4/0.98 for I-123, 124.8/128.9/0.69 and 130.9/126.8/0.71 for I-131, and 8.7/8.9/1.02 and 9.1/8.9/1.00 for Lu-177, respectively. The results of the SUV test with the ACR Esser PET phantom (Intevo 16 background SUVmean/25 mm SUVmax/16 mm/12 mm/8 mm, followed by Intevo Bold) were 1.03/2.95/2.41/1.96/1.84 and 1.03/2.91/2.38/1.87/1.82 for Tc-99m, 0.97/2.91/2.33/1.68/1.45 and 1.00/2.80/2.23/1.57/1.32 for I-123, 0.96/1.61/1.13/1.02/0.69 and 0.94/1.54/1.08/0.98/0.66 for I-131, and 1.00/6.34/4.67/2.96/2.28 and 1.01/6.21/4.49/2.86/2.21 for Lu-177. There was no statistically significant difference in SUV between Intevo 16 and Intevo Bold (p>0.05). Conclusion: In the past, only qualitative analysis was possible with a gamma camera. With a SPECT/CT scanner, however, it is possible to obtain not only anatomic localization and 3D tomography but also quantitative analysis with SUV measurements. We laid the groundwork for quantitative analysis with various isotopes (Tc-99m, I-123, I-131, Lu-177) by carrying out B.Q.C and verified the SUV measurements with the ACR phantom. Periodic calibration is needed to maintain the precision of quantitative evaluation. As a result, we can provide quantitative analysis on follow-up SPECT/CT exams and evaluate therapeutic response in theranostics.
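
To show how the calibration factors feed an SUV calculation, the sketch below converts counts to activity concentration using the detector sensitivity (CPS/MBq) and the VSF, then normalizes by injected activity per body weight. The decay correction and the exact unit handling are simplified assumptions for illustration, not the vendor's implementation, and the numbers are illustrative.

```python
def suv(voxel_cps_per_ml, sensitivity_cps_per_mbq, vsf,
        injected_mbq, body_weight_g):
    # Counts -> activity concentration (MBq/mL), then body-weight normalization.
    activity_conc_mbq_per_ml = voxel_cps_per_ml / (sensitivity_cps_per_mbq * vsf)
    return activity_conc_mbq_per_ml / (injected_mbq / body_weight_g)

# Illustrative numbers only.
print(suv(voxel_cps_per_ml=0.35, sensitivity_cps_per_mbq=87.7, vsf=1.08,
          injected_mbq=740, body_weight_g=70000))
```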

A Study of Equipment Accuracy and Test Precision in Dual Energy X-ray Absorptiometry (골밀도검사의 올바른 질 관리에 따른 임상적용과 해석 -이중 에너지 방사선 흡수법을 중심으로-)

  • Dong, Kyung-Rae;Kim, Ho-Sung;Jung, Woon-Kwan
    • Journal of radiological science and technology
    • /
    • v.31 no.1
    • /
    • pp.17-23
    • /
    • 2008
  • Purpose: Because equipment performance, which is essential to bone density scanning, and the precision/accuracy of the tester vary with the environment, quality management must be performed systematically. Equipment failures caused by overload due to aging equipment and a growing number of patients occurred frequently; thus, the replacement of equipment and additional purchases of new bone densitometry equipment caused compatibility problems in tracking patients. This study examines whether clinical changes in a patient's bone density can be reflected accurately and precisely when new equipment is used interchangeably with the existing equipment after replacement and expansion. Materials and methods: Two GE Lunar Prodigy Advance units (P1 and P2) and a HOLOGIC spine phantom (HSP) were used to measure equipment precision. Each device scanned the phantom 20 times to acquire precision data (Group 1). Tester precision was measured by scanning the same patient twice, 15 subjects per device, among 120 women (average age 48.78, 20-60 years old) (Group 2). In addition, tester precision and cross-calibration data were obtained by scanning the HSP 20 times on each device, based on the quality control data obtained every morning with the phantom (ASP) (Group 3). Each patient was scanned only once on each device, alternately, to obtain tester precision and cross-calibration data in 120 women (average age 48.78, 20-60 years old) (Group 4). Results: The equipment was stable according to the daily QC data, at 0.996 g/cm² with a change value (%CV) of 0.08. The mean±SD and %CV for the ALP in Group 1 were P1: 1.064±0.002 g/cm², %CV=0.190; P2: 1.061±0.003 g/cm², %CV=0.192. In Group 2 the mean±SD and %CV were P1: 1.187±0.002 g/cm², %CV=0.164; P2: 1.198±0.002 g/cm², %CV=0.163. In Group 3 the average error±2SD and %CV were P1: spine 0.001±0.03 g/cm², %CV=0.94, femur 0.001±0.019 g/cm², %CV=0.96; P2: spine 0.002±0.018 g/cm², %CV=0.55, femur 0.001±0.013 g/cm², %CV=0.48. In Group 4 the average error±2SD, %CV, and r values were spine: 0.006±0.024 g/cm², %CV=0.86, r=0.995; femur: 0±0.014 g/cm², %CV=0.54, r=0.998. Conclusion: Both the LUNAR ASP %CV and the HOLOGIC spine phantom results fall within the normal error range of ±2% defined by the ISCD. The BMD measurements remained relatively constant, showing excellent repeatability. The phantom is homogeneous, however, so it is limited in reflecting clinical factors such as variations in a patient's body weight or body fat. Accordingly, quality control using the phantom is considered useful for detecting mis-calibration of the equipment in use. When the results of Group 3 and Group 4 were compared, the values from measuring a patient twice on one device and from cross-measuring on the two devices all fell within 2SD on the Bland-Altman graph. An r value of 0.99 or higher in linear regression analysis indicated high precision and correlation; therefore, using the two compatible devices did not affect patient follow-up. Regular testing of the equipment and of the tester's capability, followed by appropriate calibration, is required to obtain reliable BMD values.
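
A duplicate-scan precision calculation of the kind used in DXA quality control can be sketched as follows: the root-mean-square standard deviation across paired measurements and the corresponding %CV. The data are illustrative, and the exact formula used in the paper is not spelled out in the abstract.

```python
import math

def precision_cv(paired_bmd):
    """paired_bmd: list of (scan1, scan2) BMD values in g/cm^2 per subject."""
    n = len(paired_bmd)
    # The SD of two duplicate measurements reduces to |a - b| / sqrt(2).
    rms_sd = math.sqrt(sum((a - b) ** 2 / 2 for a, b in paired_bmd) / n)
    grand_mean = sum(a + b for a, b in paired_bmd) / (2 * n)
    return 100 * rms_sd / grand_mean

pairs = [(1.186, 1.189), (1.201, 1.195), (1.178, 1.182)]  # illustrative values
print(precision_cv(pairs))
```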


Emoticon by Emotions: The Development of an Emoticon Recommendation System Based on Consumer Emotions (Emoticon by Emotions: 소비자 감성 기반 이모티콘 추천 시스템 개발)

  • Kim, Keon-Woo;Park, Do-Hyung
    • Journal of Intelligence and Information Systems
    • /
    • v.24 no.1
    • /
    • pp.227-252
    • /
    • 2018
  • The evolution of instant communication has mirrored the development of the Internet, and messenger applications are among the most representative manifestations of instant communication technologies. In messenger applications, senders use emoticons to supplement the emotions conveyed in the text of their messages. The fact that communication via messenger applications is not face-to-face makes it difficult for senders to communicate their emotions to message recipients. Emoticons have long been used as symbols that indicate the moods of speakers. At present, however, emoticon use is evolving into a means of conveying the psychological states of consumers who want to express individual characteristics and personality quirks while communicating their emotions to others. The fact that companies like KakaoTalk, Line, and Apple have begun conducting emoticon business, and that sales of related content are expected to increase gradually, testifies to the significance of this phenomenon. Nevertheless, despite the development of emoticons themselves and the growth of the emoticon market, no suitable emoticon recommendation system has yet been developed. Even KakaoTalk, a messenger application that commands more than 90% of the domestic market share in South Korea, merely groups emoticons into popularity, most-recent, or brief categories. This means consumers face the inconvenience of constantly scrolling around to locate the emoticons they want. An emoticon recommendation system would improve consumer convenience and satisfaction and increase the sales revenue of companies that sell emoticons. To recommend appropriate emoticons, it is necessary to quantify the emotions that consumers perceive in and feel toward emoticons. Such quantification enables us to analyze the characteristics and emotions felt by consumers who used similar emoticons, which in turn facilitates emoticon recommendations. One way to quantify emoticon use is metadata-ization, a means of structuring or organizing unstructured and semi-structured data to extract meaning. By structuring unstructured emoticon data through metadata-ization, we can easily classify emoticons based on the emotions consumers want to express. To determine emoticons' precise emotions, we had to consider detailed sub-expressions: not only the seven common emotional adjectives but also the metaphorical expressions that appear only in South Korea, as shown by previous emotion studies focusing on emoticon characteristics. We therefore collected detailed sub-expressions of emotion based on "Shape", "Color", and "Adumbration". Moreover, to design a highly accurate recommendation system, we considered both emoticon-technical indexes and emoticon-emotional indexes. We identified 14 features for the emoticon-technical indexes and selected 36 emotional adjectives. The 36 emotional adjectives consisted of contrasting pairs, which we reduced to 18, and we measured the 18 emotional adjectives using 40 emoticon sets randomly selected from the top-ranked emoticons in the KakaoTalk shop. We surveyed 277 consumers in their mid-twenties who had experience purchasing emoticons; we recruited them online and asked each to evaluate five different emoticon sets. After data acquisition, we conducted a factor analysis of the emoticon-emotional measures and extracted four factors that we named "Comic", "Softness", "Modernity", and "Transparency".
We analyzed both the relationship between the indexes and consumer attitude and the relationship between the emoticon-technical indexes and the emoticon-emotional factors. Through this process, we confirmed that the emoticon-technical indexes did not directly affect consumer attitudes but had a mediated effect on consumer attitudes through the emoticon-emotional factors. The results of the analysis revealed the mechanism consumers use to evaluate emoticons; they also showed that the emoticon-technical indexes affected the emoticon-emotional factors and that the emoticon-emotional factors affected consumer satisfaction. We therefore designed the emoticon recommendation system using only the four emoticon-emotional factors, creating a recommendation method that calculates the Euclidean distance between emoticons based on each factor's emotion scores. To check the accuracy of the emoticon recommendation system, we compared the emotional patterns of selected emoticons with those of the recommended emoticons; the emotional patterns corresponded in principle. We verified the emoticon recommendation system by testing prediction accuracy; the predictions were 81.02% accurate in the first test, 76.64% in the second, and 81.63% in the third. This study developed a methodology that can be used in various fields both academically and practically. We expect that the novel emoticon recommendation system we designed will increase emoticon sales for companies that conduct business in this domain and make consumer experiences more convenient. In addition, this study serves as an important first step in the development of an intelligent emoticon recommendation system. The emotional factors proposed in this study could be collected in an emotional library that could serve as an emotion index for evaluation when new emoticons are released. Moreover, by combining the accumulated emotional library with company sales data, sales information, and consumer data, companies could develop hybrid recommendation systems that would bolster convenience for consumers and serve as intellectual assets that companies could strategically deploy.
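
The distance-based recommendation step can be illustrated with a short sketch: each emoticon set is represented by scores on the four emotional factors ("Comic", "Softness", "Modernity", "Transparency"), and the sets closest to the consumer's target emotion profile are recommended. The scores below are illustrative placeholders, not survey results from the study.

```python
import math

# Hypothetical factor scores (Comic, Softness, Modernity, Transparency) per emoticon set.
emoticon_factors = {
    "set_a": (0.8, 0.2, 0.5, 0.1),
    "set_b": (0.3, 0.7, 0.4, 0.6),
    "set_c": (0.6, 0.5, 0.9, 0.3),
}

def recommend(target, catalog, top_n=2):
    def dist(scores):
        return math.sqrt(sum((s - t) ** 2 for s, t in zip(scores, target)))
    return sorted(catalog, key=lambda name: dist(catalog[name]))[:top_n]

# Recommend the sets whose emotional profile is closest to the target profile.
print(recommend((0.7, 0.3, 0.6, 0.2), emoticon_factors))
```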