• Title/Abstract/Keywords: time-weighted average standard

Search Results: 35 (processing time: 0.023 seconds)

A Folksonomy Ranking Framework: A Semantic Graph-based Approach (폭소노미 사이트를 위한 랭킹 프레임워크 설계: 시맨틱 그래프기반 접근)

  • Park, Hyun-Jung;Rho, Sang-Kyu
    • Asia Pacific Journal of Information Systems / v.21 no.2 / pp.89-116 / 2011
  • In collaborative tagging systems such as Delicious.com and Flickr.com, users assign keywords or tags to their uploaded resources, such as bookmarks and pictures, for future use or sharing. The collection of resources and tags generated by a user is called a personomy, and the collection of all personomies constitutes the folksonomy. The most significant need of folksonomy users is to efficiently find useful resources or experts on specific topics. An excellent ranking algorithm would assign higher ranking to more useful resources or experts. What resources are considered useful in a folksonomic system? Does a standard superior to frequency or freshness exist? A resource recommended by more users with more expertise should be worthy of attention. This ranking paradigm can be implemented through a graph-based ranking algorithm. Two well-known representatives of such a paradigm are PageRank by Google and HITS (Hypertext Induced Topic Selection) by Kleinberg. Both PageRank and HITS assign a higher evaluation score to pages linked to by more higher-scored pages. HITS differs from PageRank in that it utilizes two kinds of scores: authority and hub scores. The ranking objects of these algorithms are limited to Web pages, whereas the ranking objects of a folksonomic system are heterogeneous (i.e., users, resources, and tags). Therefore, uniformly applying the voting notion of PageRank and HITS based on the links in a folksonomy would be unreasonable. In a folksonomic system, each link corresponding to a property can have an opposite direction, depending on whether the property is in the active or the passive voice. The current research stems from the idea that a graph-based ranking algorithm could be applied to the folksonomic system using the concept of mutual interactions between entities, rather than the voting notion of PageRank or HITS. 
The concept of mutual interactions, proposed for ranking Semantic Web resources, enables the calculation of importance scores of various resources unaffected by link directions. The weights of a property representing the mutual interaction between classes are assigned depending on the relative significance of the property to the resource importance of each class. This class-oriented approach is based on the fact that, in the Semantic Web, there are many heterogeneous classes; thus, applying a different appraisal standard for each class is more reasonable. This is similar to the evaluation method of humans, where different items are assigned specific weights, which are then summed up to determine the weighted average. We can check for missing properties more easily with this approach than with predicate-oriented approaches. A user of a tagging system usually assigns more than one tag to the same resource, and more than one tag can share the same subject and object. When many users assign similar tags to the same resource, grading the users differently depending on the assignment order becomes necessary. This idea comes from studies in psychology wherein expertise involves the ability to select the most relevant information for achieving a goal. An expert should be someone who not only has a large collection of documents annotated with a particular tag, but also tends to add documents of high quality to his/her collection. Such documents are identified by the number, as well as the expertise, of users who have the same documents in their collections. In other words, there is a relationship of mutual reinforcement between the expertise of a user and the quality of a document. In addition, there is a need to rank entities related more closely to a certain entity. Considering the property of social media that the popularity of a topic is temporary, recent data should have more weight than old data. 
We propose a comprehensive folksonomy ranking framework in which all these considerations are addressed and that can be easily customized to each folksonomy site for ranking purposes. To examine the validity of our ranking algorithm and show the mechanism of adjusting property, time, and expertise weights, we first use a dataset designed to analyze the effect of each ranking factor independently. We then show the ranking results of a real folksonomy site with the ranking factors combined. Because the ground truth of a given dataset is not known when it comes to ranking, we inject simulated data, whose ranking results can be predicted, into the real dataset and compare the ranking results of our algorithm with those of a previous HITS-based algorithm. Our semantic ranking algorithm based on the concept of mutual interaction seems preferable to the HITS-based algorithm as a flexible folksonomy ranking framework. Some concrete points of difference are as follows. First, with the time concept applied to the property weights, our algorithm shows superior performance in lowering the scores of older data and raising the scores of newer data. Second, by applying the time concept to the expertise weights as well as to the property weights, our algorithm controls the conflicting influence of expertise weights and enhances the overall consistency of time-valued ranking. The expertise weights of the previous study can act as an obstacle to time-valued ranking because the number of followers increases as time goes on. Third, many new properties and classes can be included in our framework. The previous HITS-based algorithm, based on the voting notion, loses ground when the domain consists of more than two classes, or when other important properties, such as "sent through Twitter" or "registered as a friend," are added to the domain. Fourth, there is a big difference in calculation time and memory use between the two kinds of algorithms. 
While matrix multiplication has to be executed twice for the previous HITS-based algorithm, it is unnecessary with our algorithm. In our ranking framework, various folksonomy ranking policies can be expressed by combining the ranking factors, and our approach works even if the folksonomy site is not implemented with Semantic Web languages. Above all, the time weight proposed in this paper should be applicable to various domains, including social media, where time value is considered important.
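The time-weighting idea above can be illustrated in a few lines. The following is a minimal sketch, not the paper's actual algorithm: it assumes an exponential decay on link weights and a simple damped score propagation over an undirected graph of users, resources, and tags (so scores flow in both directions, mirroring mutual interaction rather than one-way voting); the half-life value is an arbitrary illustration.

```python
import math

def time_weight(age_days, half_life=30.0):
    # Exponential decay: recent links count more than old ones.
    # The 30-day half-life is an illustrative assumption.
    return 0.5 ** (age_days / half_life)

def rank(nodes, edges, iters=50, damping=0.85):
    # edges: list of (u, v, property_weight, age_days).
    # Scores propagate in BOTH directions along each edge, so the
    # result is unaffected by the nominal direction of a property.
    score = {n: 1.0 / len(nodes) for n in nodes}
    for _ in range(iters):
        new = {n: (1 - damping) / len(nodes) for n in nodes}
        for u, v, w, age in edges:
            w_t = w * time_weight(age)  # time-valued property weight
            new[u] += damping * w_t * score[v]
            new[v] += damping * w_t * score[u]
        norm = sum(new.values())       # renormalize to a distribution
        score = {n: s / norm for n, s in new.items()}
    return score
```

With one user tagging two resources, the resource tagged yesterday outranks the one tagged a year ago, which is exactly the behavior the time weight is meant to produce.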

$Gd(DTPA)^{2-}$-enhanced and Quantitative MR Imaging in Articular Cartilage (관절연골의 $Gd(DTPA)^{2-}$-조영증강 및 정량적 자기공명영상에 대한 실험적 연구)

  • Eun Choong-Ki;Lee Yeong-Joon;Park Auh-Whan;Park Yeong-Mi;Bae Jae-Ik;Ryu Ji Hwa;Baik Dae-Il;Jung Soo-Jin;Lee Seon-Joo
    • Investigative Magnetic Resonance Imaging / v.8 no.2 / pp.100-108 / 2004
  • Purpose: Early degeneration of articular cartilage is accompanied by a loss of glycosaminoglycan (GAG) and a consequent change in integrity. The purpose of this study was to biochemically quantify the loss of GAG and to evaluate $Gd(DTPA)^{2-}$-enhanced imaging and T1, T2, and rho relaxation maps for detection of early cartilage degeneration. Materials and Methods: A cartilage-bone block measuring $8mm\;\times\;10mm$ was acquired from the patella of each of three pigs. Quantitative analysis of cartilage GAG was performed by spectrophotometry using dimethylmethylene blue. Each cartilage block was cultured in one of three media: two culture media (0.2 mg/ml trypsin solution, and 1 mM $Gd(DTPA)^{2-}$ mixed trypsin solution) and a control medium (phosphate-buffered saline, PBS). The cartilage blocks were cultured for 5 hrs, during which MR images of the blocks were obtained at one-hour intervals (0 hr, 1 hr, 2 hr, 3 hr, 4 hr, 5 hr). Additional culture was then done for 24 hrs and 48 hrs. Both T1-weighted images (TR/TE, 450/22 ms) and a mixed-echo sequence (TR/TE, 760/21-168 ms; 8 echoes) were obtained at all time points using a field of view of 50 mm, slice thickness of 2 mm, and matrix of $256\times512$. The MRI data were analyzed with pixel-by-pixel comparisons. The cultured cartilage-bone blocks were microscopically examined using hematoxylin & eosin, toluidine blue, alcian blue, and trichrome stains. Results: In the quantitative analysis, the GAG concentration in the culture solutions was proportional to culture duration. The T1 signal of the cartilage-bone block cultured in the $Gd(DTPA)^{2-}$ mixed solution was significantly higher ($42\%$ on average, p<0.05) than that of the block cultured in the trypsin solution alone. The T1, T2, and rho relaxation times of cultured tissue were not significantly correlated with culture duration (p>0.05). 
However, a focal increase in T1 relaxation time in the superficial and transitional layers of cartilage was seen in the $Gd(DTPA)^{2-}$ mixed culture. Toluidine blue and alcian blue stains revealed multiple defects through the whole thickness of the cartilage cultured in trypsin media. Conclusion: The quantitative analysis showed gradual loss of GAG proportional to culture duration. Microimaging of cartilage with $Gd(DTPA)^{2-}$ enhancement and relaxation maps was available at a pixel size of $97.9\times195\;{\mu}m$. Loss of GAG over time was better demonstrated with $Gd(DTPA)^{2-}$-enhanced images than with T1, T2, or rho relaxation maps. Therefore, $Gd(DTPA)^{2-}$-enhanced T1-weighted imaging is superior for detection of early cartilage degeneration.


Selection of TI for Suppression Fat Tissue of SPAIR and Comparative Study of SPAIR and STIR of Brain Fast SE T2 Weighted Imaging (뇌의 고속스핀에코 T2강조영상에서 지방조직 억제를 위한 SPAIR의 반전시간(TI) 결정 및 STIR 영상과의 비교 연구)

  • Lee, Hoo-Min;Kim, Ham-Gyum;Kong, Seok-Kyo
    • Journal of Radiological Science and Technology / v.32 no.1 / pp.95-99 / 2009
  • The purpose of this research is to find an inversion time (TI) for SPAIR that satisfies two conditions, maintaining fat-tissue suppression while minimizing fat-tissue inhomogeneity, in T2-weighted fast spin-echo 3.0 T magnetic resonance imaging (MRI) of the brain, and to compare SPAIR with STIR, another fat-suppression technique. The inversion times of the SPAIR protocol were set to 1/2, 1/3, 1/6 and 1/12 of the SPAIR TR (420 msec), namely 210 msec (8 subjects), 140 msec (26 subjects), 70 msec (26 subjects) and 35 msec (18 subjects), and the STIR TI was set to 250 msec (26 subjects). With these parameter sets, we acquired 104 axial images of the brain. In an ROI ($50\;mm^2$) of each output image, the signal intensities of fatty tissue, muscular tissue, and background were measured, and the CNRs of fatty and muscular tissue were calculated. The inhomogeneity of the fatty tissue is SD/mean, where SD is the standard deviation and 'mean' is the average fatty-tissue signal. Consequently, the SPAIR TI was determined to be either 1/3 or 1/6 of TR (420 ms), i.e., 140 ms or 70 ms. Because the statistical difference between the two in fat-suppression ability and fatty-tissue inhomogeneity is very small (p < 0.001), selecting 140 ms seems to be the better choice for image quality. Meanwhile, comparing SPAIR (TI: 140 ms) with STIR, the difference in fat suppression was not statistically significant (p < 0.252), but the difference in image quality was (p < 0.01). In conclusion, SPAIR is better than STIR in image quality.
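The two image-quality measures above are straightforward to compute. A minimal sketch follows; note that the abstract defines inhomogeneity as SD/mean but does not spell out its CNR formula, so the form used here (tissue contrast over background noise SD) is a common assumption, not the paper's stated definition:

```python
import statistics

def inhomogeneity(fat_signals):
    # SD/mean of the fat-tissue ROI, as defined in the abstract
    # (also known as the coefficient of variation).
    return statistics.pstdev(fat_signals) / statistics.mean(fat_signals)

def cnr(s_fat, s_muscle, background_signals):
    # Assumed CNR form: contrast between the two tissues divided by
    # the standard deviation of the background (noise) ROI.
    return abs(s_fat - s_muscle) / statistics.pstdev(background_signals)
```

A perfectly uniform fat ROI gives an inhomogeneity of 0, and better fat suppression shows up as a smaller fat-muscle CNR.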


Research on ITB Contract Terms Classification Model for Risk Management in EPC Projects: Deep Learning-Based PLM Ensemble Techniques (EPC 프로젝트의 위험 관리를 위한 ITB 문서 조항 분류 모델 연구: 딥러닝 기반 PLM 앙상블 기법 활용)

  • Hyunsang Lee;Wonseok Lee;Bogeun Jo;Heejun Lee;Sangjin Oh;Sangwoo You;Maru Nam;Hyunsik Lee
    • KIPS Transactions on Software and Data Engineering / v.12 no.11 / pp.471-480 / 2023
  • Construction order volume in South Korea grew significantly, from 91.3 trillion won in public orders in 2013 to a total of 212 trillion won in 2021, particularly in the private sector. As the domestic and overseas markets grew, the scale and complexity of EPC (Engineering, Procurement, Construction) projects increased, and risk management of project management and ITB (Invitation to Bid) documents became a critical issue. The time granted to construction companies in the bidding process following an EPC project award is limited, and reviewing all the risk terms in the ITB document is extremely challenging due to manpower and cost constraints. Previous research attempted to categorize the risk terms in EPC contract documents and detect them with AI, but data-related problems, such as limited labeled data and class imbalance, restricted practical use. Therefore, this study aims to develop an AI model that categorizes contract terms in detail based on the FIDIC Yellow 2017 (Fédération Internationale des Ingénieurs-Conseils contract terms) standard, rather than defining and classifying risk terms as in previous research. A multi-class text classification function is necessary because the contract terms that need detailed review may vary depending on the scale and type of the project. To enhance the performance of the classification model, we developed an ELECTRA PLM (pre-trained language model) capable of efficiently learning the context of text data from the pre-training stage, and conducted a four-step experiment to validate the model's performance. As a result, an ensemble of the self-developed ITB-ELECTRA model and Legal-BERT achieved the best performance, with a weighted-average F1-score of 76% in the classification of 57 contract terms.
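An ensemble of two pre-trained language models of the kind described is typically realized by soft voting over class probabilities. The sketch below is a minimal illustration of that idea, not the paper's implementation; the equal 0.5 averaging weight and the toy logits are assumptions:

```python
import math

def softmax(logits):
    # Numerically stable softmax over one example's class logits.
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def ensemble_predict(logits_a, logits_b, weight_a=0.5):
    # Soft-voting ensemble of two classifiers (e.g. an ELECTRA-style
    # model and Legal-BERT) over contract-term labels. The averaging
    # weight is an illustrative assumption, not a detail from the paper.
    pa, pb = softmax(logits_a), softmax(logits_b)
    probs = [weight_a * a + (1 - weight_a) * b for a, b in zip(pa, pb)]
    return max(range(len(probs)), key=probs.__getitem__)
```

Averaging probabilities rather than hard labels lets a confident model outvote an uncertain one, which is often why such ensembles beat either model alone.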

A Study on Web-based Technology Valuation System (웹기반 지능형 기술가치평가 시스템에 관한 연구)

  • Sung, Tae-Eung;Jun, Seung-Pyo;Kim, Sang-Gook;Park, Hyun-Woo
    • Journal of Intelligence and Information Systems / v.23 no.1 / pp.23-46 / 2017
  • Although there have been cases of evaluating the value of specific companies or projects, centered on developed countries in North America and Europe since the early 2000s, systems and methodologies for estimating the economic value of individual technologies or patents have only gradually become active. There exist several online systems that qualitatively evaluate a technology's grade or patent rating, such as 'KTRS' of the KIBO and 'SMART 3.1' of the Korea Invention Promotion Association. However, a web-based technology valuation system, referred to as the 'STAR-Value system', which calculates quantitative values of a subject technology for various purposes such as business feasibility analysis, investment attraction, and tax/litigation, has been officially opened and is now spreading. In this study, we introduce the types of methodologies and evaluation models, the reference information supporting these theories, and how the associated databases are utilized, focusing on the various modules and frameworks embedded in the STAR-Value system. In particular, there are six valuation methods, including the discounted cash flow (DCF) method, a representative income-approach method that values anticipated future economic income at present, and the relief-from-royalty method, which calculates the present value of royalties, where the contribution of the subject technology to the business value created is considered as the royalty rate. We look at how the models and related supporting information (technology life, corporate (business) financial information, discount rate, industrial technology factors, etc.) can be used and linked in an intelligent manner. 
Based on classification information such as the International Patent Classification (IPC) or the Korea Standard Industry Classification (KSIC) of the technology to be evaluated, the STAR-Value system automatically returns metadata such as technology cycle time (TCT), sales growth rate and profitability data of similar companies or industry sectors, weighted average cost of capital (WACC), and indices of industrial technology factors, and applies adjustment factors to them, so that the calculated technology value has high reliability and objectivity. Furthermore, if the information on the potential market size of the target technology and the market share of the commercialization subject draws on data-driven information, or if the estimated value range of similar technologies by industry sector is provided from evaluation cases already completed and accumulated in the database, the STAR-Value system is anticipated to present a highly accurate value range in real time by intelligently linking its various support modules. Along with the explanation of the various valuation models and their primary variables presented in this paper, the STAR-Value system is intended to be utilized more systematically and in a data-driven way by supporting an optimal model selection guideline module, an intelligent technology value range reasoning module, and a similar-company-selection-based market share prediction module, among others. In addition, this research on the development and intelligence of the web-based STAR-Value system is significant in that it widely spreads a web-based system that can validate and apply in practice the theoretical feasibility of the technology valuation field, and it is expected to be utilized in various areas of technology commercialization.
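The two income-approach methods named above reduce to short present-value formulas. A minimal sketch follows (the cash-flow figures in the usage note are illustrative only, not data from the STAR-Value system):

```python
def dcf_value(cash_flows, discount_rate):
    # Discounted cash flow: present value of expected future income,
    # with cash_flows[t-1] received at the end of year t.
    return sum(cf / (1 + discount_rate) ** t
               for t, cf in enumerate(cash_flows, start=1))

def relief_from_royalty(revenues, royalty_rate, discount_rate):
    # Relief-from-royalty: present value of the royalties the owner is
    # "relieved" from paying; the royalty rate stands in for the
    # technology's contribution to the business value created.
    return dcf_value([r * royalty_rate for r in revenues], discount_rate)
```

For example, a single cash flow of 110 one year out at a 10% discount rate has a present value of 100, and a 5% royalty on flat revenues is simply 5% of their discounted sum.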