• Title/Summary/Keyword: multiple-access system

Search Results: 1,216

Quality Reporting of Radiomics Analysis in Mild Cognitive Impairment and Alzheimer's Disease: A Roadmap for Moving Forward

  • So Yeon Won;Yae Won Park;Mina Park;Sung Soo Ahn;Jinna Kim;Seung-Koo Lee
    • Korean Journal of Radiology
    • /
    • v.21 no.12
    • /
    • pp.1345-1354
    • /
    • 2020
  • Objective: To evaluate radiomics analysis in studies on mild cognitive impairment (MCI) and Alzheimer's disease (AD) using a radiomics quality score (RQS) system to establish a roadmap for further improvement in clinical use. Materials and Methods: PubMed MEDLINE and EMBASE were searched using the terms 'cognitive impairment' or 'Alzheimer' or 'dementia' and 'radiomic' or 'texture' or 'radiogenomic' for articles published until March 2020. From 258 articles, 26 relevant original research articles were selected. Two neuroradiologists assessed the quality of the methodology according to the RQS. Adherence rates for the following six key domains were evaluated: image protocol and reproducibility, feature reduction and validation, biologic/clinical utility, performance index, high level of evidence, and open science. Results: The hippocampus was the most frequently analyzed (46.2%) anatomical structure. Of the 26 studies, 16 (61.5%) used an open source database (14 from Alzheimer's Disease Neuroimaging Initiative and 2 from Open Access Series of Imaging Studies). The mean RQS was 3.6 out of 36 (9.9%), and the basic adherence rate was 27.6%. Only one study (3.8%) performed external validation. The adherence rate was relatively high for reporting the imaging protocol (96.2%), multiple segmentation (76.9%), discrimination statistics (69.2%), and open science and data (65.4%) but low for conducting test-retest analysis (7.7%) and biologic correlation (3.8%). None of the studies stated potential clinical utility, conducted a phantom study, performed cut-off analysis or calibration statistics, was a prospective study, or conducted cost-effectiveness analysis, resulting in a low level of evidence. Conclusion: The quality of radiomics reporting in MCI and AD studies is suboptimal. 
Validation using external datasets is necessary, and improvements are needed in feature reproducibility, feature selection, clinical utility, model performance indices, and the pursuit of a higher level of evidence.
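An adherence rate like those reported above is simply the fraction of checklist items a study satisfies. A minimal sketch, using illustrative items rather than the actual RQS rubric:

```python
def adherence_rate(checklist):
    # Fraction of reporting-quality items a study satisfies, as a percent.
    return sum(checklist.values()) / len(checklist) * 100

# Hypothetical per-study checklist covering six key RQS domains.
study = {
    "imaging protocol reported": True,
    "multiple segmentation": True,
    "test-retest analysis": False,
    "biologic correlation": False,
    "external validation": False,
    "open science and data": True,
}
print(round(adherence_rate(study), 1))  # → 50.0
```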

Conditional Generative Adversarial Network based Collaborative Filtering Recommendation System (Conditional Generative Adversarial Network(CGAN) 기반 협업 필터링 추천 시스템)

  • Kang, Soyi;Shin, Kyung-shik
    • Journal of Intelligence and Information Systems
    • /
    • v.27 no.3
    • /
    • pp.157-173
    • /
    • 2021
  • With the development of information technology, the amount of available information increases daily. However, having access to so much information makes it difficult for users to easily find what they seek. Users want a system that reduces information retrieval and learning time, saving them from personally reading and judging all available information. As a result, recommendation systems have become an increasingly important technology, essential to business. Collaborative filtering is used in various fields with excellent performance because recommendations are made based on similar user interests and preferences. However, limitations do exist. Sparsity, which occurs when user-item preference information is insufficient, is the main limitation of collaborative filtering. The evaluation values in the user-item matrix may be distorted depending on the popularity of the product, or there may be new users who have not yet provided any ratings. The lack of historical data to identify consumer preferences is referred to as data sparsity, and various methods have been studied to address this problem. However, most attempts to solve the sparsity problem are not optimal because they can be applied only when additional data such as users' personal information, social networks, or item characteristics are available. Another problem is that real-world rating data are mostly biased toward high scores, resulting in severe imbalances. One cause of this imbalanced distribution is purchasing bias: users who rate a product highly tend to buy it, while those with low expectations are less likely to purchase and thus rarely leave negative reviews. Due to these characteristics, reviews by users who purchase products are more likely to be positive than most users' actual preferences would suggest.
Therefore, the actual rating data, with its biased characteristics, causes over-learning of the high-incidence classes and distorts results. Applying collaborative filtering to such imbalanced data leads to poor recommendation performance due to excessive learning of the biased classes. Traditional oversampling techniques for this problem are likely to cause overfitting because they repeat the same data, which acts as noise during learning and reduces recommendation performance. In addition, most existing pre-processing methods for data imbalance are designed for binary classes. Binary-class imbalance techniques are difficult to apply to multi-class problems because they cannot model phenomena such as objects at cross-class boundaries or objects overlapping multiple classes. To address this, research has been conducted on converting multi-class problems into binary-class problems. However, such simplification can cause classification errors when the results of classifiers learned on the sub-problems are combined, resulting in the loss of important information about relationships beyond the selected items. Therefore, more effective methods are needed for multi-class imbalance problems. We propose a collaborative filtering model that uses a CGAN to generate realistic virtual data to populate the empty user-item matrix. The conditional vector y identifies the distributions of minority classes and generates data reflecting their characteristics. Collaborative filtering then maximizes the performance of the recommendation system via hyperparameter tuning. This process improves the accuracy of the model by addressing the sparsity problem of collaborative filtering while mitigating the data imbalance found in real data. Our model achieves superior recommendation performance over existing oversampling techniques on sparse real-world data.
SMOTE, Borderline-SMOTE, SVM-SMOTE, ADASYN, and GAN were used as comparative models, and our model demonstrates the highest prediction accuracy on the RMSE and MAE evaluation metrics. This study suggests that deep-learning-based oversampling can further improve the performance of recommendation systems on real data and can be used to build business recommendation systems.
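The RMSE and MAE metrics used for rating prediction above are simple to state. A minimal sketch on hypothetical held-out ratings, not the study's data:

```python
import math

def rmse(actual, predicted):
    # Root-mean-square error: penalizes large rating errors more heavily.
    return math.sqrt(sum((a - p) ** 2 for a, p in zip(actual, predicted)) / len(actual))

def mae(actual, predicted):
    # Mean absolute error: average magnitude of rating errors.
    return sum(abs(a - p) for a, p in zip(actual, predicted)) / len(actual)

# Hypothetical held-out ratings on a 1-5 scale.
actual = [5, 3, 4, 1, 2]
predicted = [4.5, 3.5, 4.0, 2.0, 2.5]
print(round(rmse(actual, predicted), 3), round(mae(actual, predicted), 3))  # → 0.592 0.5
```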

T-Cache: a Fast Cache Manager for Pipeline Time-Series Data (T-Cache: 시계열 배관 데이타를 위한 고성능 캐시 관리자)

  • Shin, Je-Yong;Lee, Jin-Soo;Kim, Won-Sik;Kim, Seon-Hyo;Yoon, Min-A;Han, Wook-Shin;Jung, Soon-Ki;Park, Se-Young
    • Journal of KIISE:Computing Practices and Letters
    • /
    • v.13 no.5
    • /
    • pp.293-299
    • /
    • 2007
  • Intelligent pipeline inspection gauges (PIGs) are inspection vehicles that move along within a (gas or oil) pipeline and acquire signals (also called sensor data) from their surrounding rings of sensors. By analyzing the signals captured by intelligent PIGs, we can detect pipeline defects, such as holes and curvatures, and other potential causes of gas explosions. Two major data access patterns are apparent when an analyzer accesses the pipeline signal data. The first is a sequential pattern, in which an analyst reads the sensor data only once, in sequential fashion. The second is a repetitive pattern, in which an analyzer repeatedly reads the signal data within a fixed range; this is the dominant pattern in analyzing the signal data. The existing PIG software reads signal data directly from the server at every user's request, incurring network transfer and disk access costs. It works well for the sequential pattern, but not for the more dominant repetitive pattern. This problem becomes serious in a client/server environment where several analysts analyze the signal data concurrently. To tackle this problem, we devise a fast in-memory cache manager, called T-Cache, that treats pipeline sensor data as multiple time-series and caches the time-series data efficiently. To the best of the authors' knowledge, this is the first research on caching pipeline signals on the client side. We propose the new concept of a signal cache line as the caching unit: a set of time-series signal data for a fixed distance. We also present the data structures used in T-Cache, including smart cursors, and its algorithms. Experimental results show that T-Cache performs much better for the repetitive pattern in terms of disk I/Os and elapsed time. Even with the sequential pattern, T-Cache shows almost the same performance as a system that does not use any caching, indicating that the caching overhead in T-Cache is negligible.
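The signal-cache-line idea — caching a fixed-distance block of time-series samples so that repetitive reads hit memory instead of the server — can be sketched roughly as follows. The line size, the `fetch_from_server` stand-in, and the LRU eviction policy are illustrative assumptions, not the paper's actual design (which includes structures such as smart cursors):

```python
from collections import OrderedDict

LINE_SIZE = 100  # samples per signal cache line (illustrative)

def fetch_from_server(line_id):
    # Stand-in for the network/disk fetch the cache is meant to avoid.
    return [line_id * LINE_SIZE + i for i in range(LINE_SIZE)]

class TCacheSketch:
    """LRU cache of fixed-distance signal cache lines (simplified)."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.lines = OrderedDict()  # line_id -> samples
        self.hits = self.misses = 0

    def read(self, position):
        line_id = position // LINE_SIZE
        if line_id in self.lines:
            self.hits += 1
            self.lines.move_to_end(line_id)  # mark as most recently used
        else:
            self.misses += 1
            if len(self.lines) >= self.capacity:
                self.lines.popitem(last=False)  # evict least recently used
            self.lines[line_id] = fetch_from_server(line_id)
        return self.lines[line_id][position % LINE_SIZE]

cache = TCacheSketch(capacity=2)
for _ in range(3):              # the dominant repetitive access pattern
    for pos in range(150, 160):
        cache.read(pos)
print(cache.hits, cache.misses)  # → 29 1
```

One server fetch serves all three passes over the range, which is exactly why the repetitive pattern benefits from client-side caching.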

WHICH INFORMATION MOVES PRICES: EVIDENCE FROM DAYS WITH DIVIDEND AND EARNINGS ANNOUNCEMENTS AND INSIDER TRADING

  • Kim, Chan-Wung;Lee, Jae-Ha
    • The Korean Journal of Financial Studies
    • /
    • v.3 no.1
    • /
    • pp.233-265
    • /
    • 1996
  • We examine the impact of public and private information on price movements using the thirty DJIA stocks and twenty-one NASDAQ stocks. We find that the standard deviation of daily returns on information days (dividend announcement, earnings announcement, insider purchase, or insider sale) is much higher than on no-information days. Only public information matters at the NYSE, probably due to masked identification of insiders. Earnings announcements have the greatest impact for both DJIA and NASDAQ stocks, and there is some evidence of a positive impact of insider sales on the return volatility of NASDAQ stocks. There has been considerable debate, e.g., French and Roll (1986), over whether market volatility is due to public information or private information, the latter gathered through costly search and revealed only through trading. Public information is composed of (1) marketwide public information such as regularly scheduled federal economic announcements (e.g., employment, GNP, leading indicators) and (2) company-specific public information such as dividend and earnings announcements. Policy makers and corporate insiders have better access to marketwide private information (e.g., a new monetary policy decision made in the Federal Reserve Board meeting) and company-specific private information, respectively, compared to the general public. Ederington and Lee (1993) show that marketwide public information accounts for most of the observed volatility patterns in interest rate and foreign exchange futures markets. Company-specific public information is explored by Patell and Wolfson (1984) and Jennings and Starks (1985). They show that dividend and earnings announcements induce higher-than-normal volatility in equity prices.
Kyle (1985), Admati and Pfleiderer (1988), Barclay, Litzenberger and Warner (1990), Foster and Viswanathan (1990), Back (1992), and Barclay and Warner (1993) show that the private information held by informed traders and revealed through trading influences market volatility. Cornell and Sirri (1992) and Meulbroek (1992) investigate actual insider trading activities in a tender offer case and in prosecuted illegal trading cases, respectively. This paper examines the aggregate and individual impact of marketwide information, company-specific public information, and company-specific private information on equity prices. Specifically, we use the thirty common stocks in the Dow Jones Industrial Average (DJIA) and twenty-one National Association of Securities Dealers Automated Quotations (NASDAQ) common stocks to examine how their prices react to information. Marketwide information (public and private) is estimated by the movement in the Standard and Poor's (S&P) 500 Index price for the DJIA stocks and the movement in the NASDAQ Composite Index price for the NASDAQ stocks. Dividend and earnings announcements are used as a subset of company-specific public information. Corporate insiders (major corporate officers, members of the board of directors, and owners of at least 10 percent of any equity class) have access to private information but cannot legally trade on it. Therefore, most insider transactions are not necessarily based on private information. Nevertheless, we hypothesize that market participants observe how insiders trade in order to infer information that they cannot possess, because insiders tend to buy (sell) when they have good (bad) information about their company. For example, Damodaran and Liu (1993) show that insiders of real estate investment trusts buy (sell) after they receive favorable (unfavorable) appraisal news, before the information in these appraisals is released to the public.
Price discovery in a competitive multiple-dealership market (NASDAQ) would be different from that in a monopolistic specialist system (NYSE). Consequently, we hypothesize that NASDAQ stocks are affected more by private information (or more precisely, insider trading) than the DJIA stocks. In the next section, we describe our choices of the fifty-one stocks and the public and private information set. We also discuss institutional differences between the NYSE and the NASDAQ market. In Section II, we examine the implications of public and private information for the volatility of daily returns of each stock. In Section III, we turn to the question of the relative importance of individual elements of our information set. Further analysis of the five DJIA stocks and the four NASDAQ stocks that are most sensitive to earnings announcements is given in Section IV, and our results are summarized in Section V.
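The paper's core volatility comparison — the standard deviation of daily returns on information days versus no-information days — can be sketched with hypothetical returns (not the study's data):

```python
import math

def std_dev(returns):
    # Sample standard deviation of daily returns.
    m = sum(returns) / len(returns)
    return math.sqrt(sum((r - m) ** 2 for r in returns) / (len(returns) - 1))

# Hypothetical daily returns (in percent); not the study's data.
info_day_returns = [2.1, -1.8, 3.0, -2.5, 1.9]   # announcement / insider-trade days
no_info_returns = [0.3, -0.2, 0.4, -0.1, 0.2]    # no-information days

print(std_dev(info_day_returns) > std_dev(no_info_returns))  # → True
```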


Empirical Study of Key Factors in Satisfaction with Subway Services (지하철 이용만족도 결정요인에 관한 실증적 연구 -서울지역을 중심으로-)

  • Shim, Jong-Seop;Jeon, Ki-Heung
    • Korean Business Review
    • /
    • v.13
    • /
    • pp.49-66
    • /
    • 2000
  • Despite the fact that understanding customer satisfaction with transportation services is a subject of great importance, the authors have so far found no systematic research on the issue. From this point of view, studying satisfaction with subway services can be extremely useful. An empirical study of the key factors in satisfaction with subway services is the point of departure and, we believe, will contribute to an overall increase in the use of subway services and in the public benefits derived from that usage. To achieve these goals, first, several items covering key factors in satisfaction with subway usage were systemized; second, the specific weights that subway passengers attach to those key factors were investigated. Knowledge of the system of satisfaction variables can provide deep insight into one's perceptual experience when using a subway. The results were as follows. Various interrelated factors compose a passenger's satisfaction with subway services. People do not just use the subway passively; a number of key factors, such as physical and personal services, exact timing, and ease of access, determine passenger satisfaction. To find the specific weights of these key factors, multiple regression analysis was employed. The results showed that satisfaction with the subway is determined by (in order of importance) ease of access, quality of physical services, friendliness of working staff, and timing exactness. According to the findings, passengers do not use the subway as a simple means of transportation; rather, they perceive it as a complex combination of environmental elements, and overall satisfaction depends on these various factors. Therefore, to learn passengers' satisfaction with subway services, the passengers' subway experience must be thoroughly studied and analyzed, and this is where the paper's value resides.
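The weighting step described above — multiple regression of overall satisfaction on the key factors — can be sketched in pure Python via the normal equations. The ratings below are hypothetical, not the survey's:

```python
def solve(A, b):
    # Gauss-Jordan elimination with partial pivoting for small linear systems.
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(n):
            if r != c and M[r][c] != 0:
                f = M[r][c] / M[c][c]
                M[r] = [x - f * y for x, y in zip(M[r], M[c])]
    return [M[i][n] / M[i][i] for i in range(n)]

def ols(X, y):
    # Ordinary least squares: solve (X'X) beta = X'y, with an intercept column.
    Xi = [[1.0] + list(row) for row in X]
    k = len(Xi[0])
    XtX = [[sum(r[i] * r[j] for r in Xi) for j in range(k)] for i in range(k)]
    Xty = [sum(r[i] * yi for r, yi in zip(Xi, y)) for i in range(k)]
    return solve(XtX, Xty)

# Hypothetical ratings: [ease of access, physical service quality] -> satisfaction.
X = [[1, 2], [2, 1], [3, 4], [4, 3], [5, 5]]
y = [4.0, 5.5, 9.0, 10.5, 13.5]
print([round(b, 2) for b in ols(X, y)])  # → [1.0, 2.0, 0.5] (intercept, then weights)
```

The fitted coefficients are the "specific weights": here ease of access (2.0) would dominate physical service quality (0.5), mirroring the ordering the study reports.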


Mammalian Reproduction and Pheromones (포유동물의 생식과 페로몬)

  • Lee, Sung-Ho
    • Development and Reproduction
    • /
    • v.10 no.3
    • /
    • pp.159-168
    • /
    • 2006
  • Rodents and many other mammals have two chemosensory systems that mediate responses to pheromones: the main and accessory olfactory systems (MOS and AOS, respectively). The chemosensory neurons associated with the MOS are located in the main olfactory epithelium, while those associated with the AOS are located in the vomeronasal organ (VNO). Pheromonal odorants access the lumen of the VNO via canals in the roof of the mouth, and are largely thought to be nonvolatile. The main pheromone receptor proteins consist of two superfamilies, V1Rs and V2Rs, that are structurally distinct and unrelated to the olfactory receptors expressed in the main olfactory epithelium. These two types of receptors are seven-transmembrane-domain G-protein-coupled proteins (V1Rs with $G_{\alpha i2}$, V2Rs with $G_{o\alpha}$). V2Rs are co-expressed with nonclassical MHC class Ib genes (M10 and eight other M1 family proteins). Another important molecular component of VNO neurons is TrpC2, a cation channel protein of the transient receptor potential (TRP) family thought to have a crucial role in signal transduction. There are four types of pheromones in mammalian chemical communication: primers, signalers, modulators, and releasers. Responses to these chemosignals can vary substantially within and between individuals. This variability can stem from the modulating effects of steroid hormones and/or non-steroid factors such as neurotransmitters on olfactory processing. Such modulation frequently augments or facilitates the effects that prevailing social and environmental conditions have on the reproductive axis. The best example is the pregnancy block effect (Bruce effect), caused by testosterone-dependent major urinary proteins (MUPs) in male mouse urine. Intriguingly, mouse GnRH neurons receive pheromone signals from both odor and pheromone relays in the brain and may also receive common odor signals.
Though it is quite controversial, recent studies reveal a complex interplay between reproduction and other functions in which GnRH neurons appear to integrate information from multiple sources and modulate a variety of brain functions.


INFLUENCES OF APICOECTOMY AND RETROGRADE CAVITY PREPARATION METHODS ON THE APICAL LEAKAGE (치근단절제 및 역충전와동 형성방법이 치근단누출에 미치는 영향)

  • Yang, Jeong-Ok;Kim, Sung-Kyo;Kwon, Tae-Kyung
    • Restorative Dentistry and Endodontics
    • /
    • v.23 no.2
    • /
    • pp.537-549
    • /
    • 1998
  • The purpose of this study was to evaluate the influence of root resection and retrograde cavity preparation methods on apical leakage in endodontic surgery. To investigate the effect of various root resection and retrograde cavity preparation methods on apical leakage, 71 roots of extracted human maxillary anterior teeth and 44 mesiobuccal roots of extracted human maxillary first molars were used. Root canals of all the specimens were prepared with the step-back technique and filled with gutta-percha by the lateral condensation method. Three millimeters of each root was resected at a 45-degree angle or perpendicular to the long axis of the tooth, according to the group. Retrograde cavities were prepared with ultrasonic instruments or a slow-speed round bur, and occlusal access cavities were filled with zinc oxide eugenol cement. Three coats of clear nail polish were applied to the lateral and coronal surfaces of the specimens, except for the apical 1 mm of the cut surface. All the specimens were immersed in 2% methylene blue solution for 7 days in an incubator at $37^{\circ}C$. The teeth were dissolved in 14 ml of 35% nitric acid solution, and the dye present within the root canal system was returned to solution. The leakage of dye was quantitatively measured spectrophotometrically. The obtained data were analysed statistically using two-way ANOVA and Duncan's multiple range test. The results were as follows: 1. No statistically significant difference was observed between the ultrasonic retrograde cavity preparation method and the slow-speed round bur technique without an apical bevel (p>0.05). 2. The ultrasonic retrograde preparation method showed significantly less apical leakage than the slow-speed round bur technique with a bevel (p<0.0001). 3. No statistically significant difference was found between beveled and non-beveled resected root surfaces with the ultrasonic technique (p>0.05). 4.
Non-beveled resected root surfaces showed significantly less apical leakage than beveled surfaces with the slow-speed round bur technique (p<0.0001). 5. No statistically significant difference in apical leakage was found between the group with retrograde cavities prepared parallel to the long axis of the tooth and the group prepared perpendicular to it (p>0.05). 6. Regarding isthmus preparation, the ultrasonic retrograde preparation method showed significantly less apical leakage than the slow-speed round bur technique in the mesiobuccal root of the maxillary molar without a bevel (p<0.0001).


EEPERF(Experiential Education PERFormance): An Instrument for Measuring Service Quality in Experiential Education (체험형 교육 서비스 품질 측정 항목에 관한 연구: 창의적 체험활동을 중심으로)

  • Park, Ky-Yoon;Kim, Hyun-Sik
    • Journal of Distribution Science
    • /
    • v.10 no.2
    • /
    • pp.43-52
    • /
    • 2012
  • As experiential education services grow, the need for proper management increases. Considering that adequate measures are essential for success in managing anything, it is important for managers to use a proper system of metrics to measure the performance of experiential education services. In spite of this need, however, little research has been done to develop a valid and reliable set of metrics for assessing the quality of experiential education services. The current study aims to develop a multi-item instrument for assessing the service quality of experiential education. The procedure was as follows. First, we generated a pool of possible metrics based on the diverse literature on service quality. We elicited possible metric items not only from general service quality metrics such as SERVQUAL and SERVPERF but also from educational service quality metrics such as HEdPERF and PESPERF. Second, specialist teachers in the experiential education area screened the initial metrics to boost face validity. Third, we proceeded with multiple rounds of empirical validation of those metrics, refining them to determine the final set to be used. Fourth, we examined predictive validity by checking the well-established positive relationship between each dimension of the metrics and customer satisfaction. In sum, starting with the initial pool of scale items elicited from the previous literature and purifying them empirically through surveys, we developed a four-dimensional systemized scale to measure the superiority of experiential education and named it "Experiential Education PERFormance" (EEPERF). Our findings indicate that students (consumers) perceive the superiority of the experiential education (EE) service in the following four dimensions: EE-empathy, EE-reliability, EE-outcome, and EE-landscape.
EE-empathy is a judgment in response to the question, "How empathetically does the experiential educational service provider interact with me?" Principal measures are "How well does the service provider understand my needs?" and "How well does the service provider listen to my voice?" Next, EE-reliability is a judgment in response to the question, "How reliably does the experiential educational service provider interact with me?" Major measures are "How reliable is the schedule here?" and "How credible is the service provider?" EE-outcome is a judgment in response to the question, "What results could I get from this experiential educational service encounter?" Representative measures are "How good is the information that I will acquire from this service encounter?" and "How useful is this service encounter in helping me develop creativity?" Finally, EE-landscape is a judgment about the physical environment. Essential measures are "How convenient is the access to the service encounter?" and "How well managed are the facilities?" We showed the reliability and validity of this system of metrics. All four dimensions influence customer satisfaction significantly. Practitioners may use the results in planning experiential educational service programs and evaluating each service encounter. The current study is expected to act as a stepping-stone for future scale improvement; in this pursuit, researchers may draw on the experience quality paradigm that has recently arisen.
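Empirical purification of a multi-item scale like EEPERF typically reports internal-consistency reliability, commonly Cronbach's alpha. A minimal sketch on hypothetical item responses — the abstract does not include raw data, and alpha is a standard choice here rather than necessarily the authors' exact statistic:

```python
def variance(xs):
    # Sample variance.
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

def cronbach_alpha(items):
    # items: one list of respondent scores per scale item.
    k = len(items)
    totals = [sum(scores) for scores in zip(*items)]  # each respondent's total score
    item_var = sum(variance(scores) for scores in items)
    return k / (k - 1) * (1 - item_var / variance(totals))

# Hypothetical 5-point responses for three EE-empathy items, five respondents.
items = [
    [4, 5, 3, 4, 2],
    [4, 4, 3, 5, 2],
    [5, 4, 2, 4, 3],
]
alpha = cronbach_alpha(items)
print(round(alpha, 2))  # → 0.86
```

Values around 0.7 or above are conventionally taken as acceptable internal consistency for a dimension's items.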


Design and Performance Evaluation of Selective DFT Spreading Method for PAPR Reduction in Uplink OFDMA System (OFDMA 상향 링크 시스템에서 PAPR 저감을 위한 선택적 DFT Spreading 기법의 설계와 성능 평가)

  • Kim, Sang-Woo;Ryu, Heung-Gyoon
    • The Journal of Korean Institute of Electromagnetic Engineering and Science
    • /
    • v.18 no.3 s.118
    • /
    • pp.248-256
    • /
    • 2007
  • In this paper, we propose a selective DFT spreading method to solve the high-PAPR problem in uplink OFDMA systems. A selective characteristic is added to DFT spreading, so the DFT spreading method is combined with the SLM method. However, to minimize the increase in computational complexity, our proposed method uses only one DFT spreading block, unlike the common SLM method. After the DFT, several copy branches are generated by multiplying with different matrices. Each matrix is obtained by linearly transforming the corresponding phase rotation in front of the DFT block, and applying it has much lower computational complexity than an additional DFT process. For simulation, we assume a 512-point IFFT, 300 effective sub-carriers, a per-user sub-carrier allocation of 1/4 or 1/3, and QPSK modulation. The simulation results show that with 4 copy branches, our proposed method achieves about 5.2 dB of PAPR reduction. This is about 1.8 dB better than the common DFT spreading method and 0.95 dB better than common SLM using 32 copy branches. Moreover, even with 2 copy branches, it outperforms SLM using 32 copy branches. In this comparison, the proposed method has 91.79% lower complexity than SLM using 32 copy branches at similar PAPR reduction performance, confirming the strong performance of the proposed method. We can also expect similar performance when all sub-carriers are allocated to a single user, as in OFDM.
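PAPR, the quantity being reduced, is easy to compute from time-domain samples. A pure-Python sketch with a toy 64-subcarrier QPSK symbol and a naive IDFT; the parameters are illustrative, not the paper's 512-point system, and using a spreading DFT of the same size as the IDFT is a simplification of real DFT-s-OFDM:

```python
import cmath
import math
import random

def dft(x):
    # Naive DFT, O(N^2); fine for a toy illustration.
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * math.pi * k * n / N) for n in range(N))
            for k in range(N)]

def idft(X):
    # Naive inverse DFT with 1/N normalization.
    N = len(X)
    return [sum(X[k] * cmath.exp(2j * math.pi * k * n / N) for k in range(N)) / N
            for n in range(N)]

def papr_db(x):
    # Peak-to-average power ratio of time-domain samples, in dB.
    powers = [abs(s) ** 2 for s in x]
    return 10 * math.log10(max(powers) / (sum(powers) / len(powers)))

random.seed(0)
qpsk = [complex(random.choice((-1, 1)), random.choice((-1, 1))) for _ in range(64)]

plain = idft(qpsk)        # plain OFDM symbol: subcarriers add coherently, high peaks
spread = idft(dft(qpsk))  # DFT-spread (here M = N): single-carrier-like envelope

print(round(papr_db(plain), 2))   # typically several dB for random QPSK
print(round(papr_db(spread), 2))  # ~0 dB: constant-modulus QPSK is recovered
```

With equal DFT and IDFT sizes the spread symbol collapses back to the constant-modulus QPSK sequence, which is the intuition for why DFT spreading lowers PAPR.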

Mapping the Research Landscape of Wastewater Treatment Wetlands: A Bibliometric Analysis and Comprehensive Review (폐수 처리 위한 습지의 연구 환경 매핑: 서지학적 분석 및 종합 검토)

  • C. C. Vispo;N. J. D. G. Reyes;H. S. Choi;M.S. Jeon;L. H. Kim
    • Journal of Wetlands Research
    • /
    • v.25 no.2
    • /
    • pp.145-158
    • /
    • 2023
  • Constructed wetlands (CWs) are effective technologies for urban wastewater management, utilizing natural physico-chemical and biological processes to remove pollutants. This study employed a bibliometric analysis approach to investigate progress and future research trends in the field of CWs. A comprehensive review of the 100 most recently published open-access articles was performed to analyze the performance of CWs in treating wastewater. Spain, China, Italy, and the United States were among the most productive countries in terms of the number of published papers. The most frequently used keywords in publications included water quality (n=19), phytoremediation (n=13), stormwater (n=11), and phosphorus (n=11), suggesting that the efficiency of CWs in improving water quality and removing nutrients has been widely investigated. Among the different types of CWs reviewed, hybrid CWs exhibited the highest removal efficiencies for BOD (88.67%) and TSS (95.67%), whereas VSSF and HSSF systems also showed high TSS removal efficiencies (83.25% and 78.83%, respectively). VSSF wetlands displayed the highest COD removal efficiency (71.82%). Generally, physical processes (e.g., sedimentation, filtration, adsorption) and biological mechanisms (i.e., biodegradation) contributed to the high removal efficiencies for TSS, BOD, and COD in CW systems. The hybrid CW system demonstrated the highest TN removal efficiency (60.78%) by integrating multiple treatment processes, including aerobic and anaerobic conditions, various vegetation types, and different media configurations, which enhanced microbial activity and allowed comprehensive removal of nitrogen compounds. The FWS system showed the highest TP removal efficiency (54.50%) due to the combined processes of settling sediment-bound phosphorus and plant uptake. Phragmites, Cyperus, Iris, and Typha were commonly used in CWs due to their superior phytoremediation capabilities.
The study emphasized the potential of CWs as sustainable alternatives for wastewater management, particularly in urban areas.
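The removal efficiencies compared above are inlet-versus-outlet concentration ratios. A minimal sketch; the influent/effluent values are hypothetical, chosen only to illustrate the hybrid-CW BOD figure:

```python
def removal_efficiency(influent, effluent):
    # Percent of a pollutant removed between wetland inlet and outlet.
    return (influent - effluent) / influent * 100

# Hypothetical BOD concentrations (mg/L) across a hybrid constructed wetland.
print(round(removal_efficiency(120.0, 13.6), 2))  # → 88.67
```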