• Title/Summary/Keyword: key block


The Heterogeneity of Flow Distribution and Partition Coefficient in [15O-H2O] Myocardium Positron Emission Tomography ([15O-H2O] 심근 양전자 단층 촬영에서 혈류 분포의 비균일성과 분배계수)

  • Ahn, Ji Young;Lee, Dong Soo;Kim, Kyung Min;Jeong, Jae Min;Chung, June-Key;Shin, Seung-Ae;Lee, Myung Chul;Koh, Chang-Soon
    • The Korean Journal of Nuclear Medicine
    • /
    • v.32 no.1
    • /
    • pp.32-49
    • /
    • 1998
  • For estimation of regional myocardial blood flow with O-15 water PET, a few modifications of the single-compartment model that account for the partial volume effect have been proposed. In this study, we attempted to quantify the degree of heterogeneity, to show the effect of tissue flow heterogeneity on the partition coefficient (${\lambda}$), and to find the relation between the perfusable tissue index (PTI) and ${\lambda}$ by computer simulation using two modified models. We simulated tissue curves for regions with homogeneous and heterogeneous blood flow over a wide flow range (0.2-4.0 ml/g/min). Each simulated heterogeneous tissue was composed of 4 subregions of the same or different block size, each with a different homogeneous flow, giving flow distributions of differing slope. We quantified the heterogeneity of the flow distribution of each heterogeneous tissue by the constitution heterogeneity (CH). For model I, we assumed that the tissue recovery coefficient was the product of the partial volume effect ($F_{MM}$) and PTI; using model I, PTI, flow, and $F_{MM}$ were estimated. For model II, we assumed that the partition coefficient was another variable that could represent the heterogeneity of the tissue flow distribution; using model II, PTI, flow, and ${\lambda}$ were estimated. For simulated tissue with homogeneous flow, both models gave exactly the same estimates of the three parameters. For simulated tissue with a heterogeneous flow distribution, model I estimated flow and $F_{MM}$ correctly as long as the increase in CH was moderate. In model II, flow and ${\lambda}$ decreased curvilinearly as CH increased. The degree of underestimation of ${\lambda}$ obtained using model II was correlated with CH, and the degree of underestimation of flow depended on the degree of underestimation of ${\lambda}$. PTI was somewhat overestimated and did not change with CH.
We conclude that the estimated ${\lambda}$ reflects the degree of heterogeneity of the tissue flow distribution. The degree of underestimation of ${\lambda}$ can thus be used to characterize the heterogeneity of tissue flow, and ${\lambda}$ can be used to recover the underestimated flow.
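The single-compartment kinetics behind these simulations can be sketched in a few lines. The sketch below assumes the standard Kety model $dC_t/dt = F\,C_a(t) - (F/{\lambda})\,C_t(t)$ with a hypothetical gamma-variate-like arterial input; the flows, time base, and Euler integration are illustrative assumptions, not the paper's actual simulation setup.

```python
import math

def tissue_curve(flow, lam, ca, dt):
    """Single-compartment (Kety) model, dCt/dt = F*Ca(t) - (F/lam)*Ct(t),
    integrated with a simple Euler step (illustrative, not the paper's code)."""
    ct, out = 0.0, []
    for a in ca:
        ct += dt * (flow * a - (flow / lam) * ct)
        out.append(ct)
    return out

dt = 1.0  # seconds per sample
t = [i * dt for i in range(240)]
# hypothetical arterial input function with a gamma-variate-like shape
ca = [ti * math.exp(-ti / 30.0) for ti in t]

lam = 0.91            # ml/g, a commonly assumed water partition coefficient
flows = [0.5, 1.0, 2.0, 3.0]  # ml/g/min; divide by 60 for per-second units
curves = [tissue_curve(f / 60.0, lam, ca, dt) for f in flows]

# heterogeneous region: equal-weight mixture of the four homogeneous subregions
mixed = [sum(c[i] for c in curves) / len(curves) for i in range(len(t))]
```

Fitting a single homogeneous-tissue model to `mixed` would be expected to return an apparent ${\lambda}$ below the true value, which is the underestimation the abstract describes.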


A Three-year Study on the Leaf and Soil Nitrogen Contents Influenced by Irrigation Frequency, Clipping Return or Removal and Nitrogen Rate in a Creeping Bentgrass Fairway (크리핑 벤트그라스 훼어웨이에서 관수회수.예지물과 질소시비수준이 엽조직 및 토양 질소함유량에 미치는 효과)

  • Kim, Kyung-Nam;Shearman, Robert
    • Asian Journal of Turfgrass Science
    • /
    • v.11 no.2
    • /
    • pp.105-115
    • /
    • 1997
  • Responses of 'Penncross' creeping bentgrass turf to various fairway cultural practices are not well established or supported by research results. This study was initiated to evaluate the effects of irrigation frequency, clipping return or removal, and nitrogen rate on leaf and soil nitrogen content in 'Penncross' creeping bentgrass (Agrostis palustris Huds.) turf. A 'Penncross' creeping bentgrass turf was established in 1988 on a Sharpsburg silty-clay loam (Typic Argiudoll). The experiment was conducted from 1989 to 1991 under nontraffic conditions. A split-split-plot experimental design was used. Daily or biweekly irrigation, clipping return or removal, and 5, 15, or 25 g N $m^{-2}$ $yr^{-1}$ were the main-, sub-, and sub-sub-plot treatments, respectively. Treatments were replicated 3 times in a randomized complete block design. The turf was mowed 4 times weekly at a 13 mm height of cut. Leaf tissue nitrogen content was analyzed twice in 1989 and three times in both 1990 and 1991. Leaf samples were collected from turfgrass plants in the treatment plots, dried immediately at 70°C for 48 hours, and evaluated for total-N content using the Kjeldahl method. Concurrently, six soil cores (18 mm diam. by 200 mm depth) were collected, air dried, and analyzed for total-N content. Nitrogen analyses of the soil and leaf samples were performed at the Soil and Plant Analytical Laboratory, University of Nebraska, Lincoln, USA. Data were analyzed as a split-split-plot with analysis of variance (ANOVA), using the General Linear Model procedures of the Statistical Analysis System. Leaf tissue nitrogen content in creeping bentgrass fairway turf varied with clipping recycling, nitrogen application rate, and time after establishment, increasing with both clipping return and nitrogen rate.
Plots with clipping return had 8% and 5% more nitrogen in the leaf tissue in 1989 and 1990, respectively, than plots with clipping removal. Plots receiving the high-N level (25 g N $m^{-2}$ $yr^{-1}$) had 10%, 17%, and 13% more nitrogen in leaf tissue in 1989, 1990, and 1991, respectively, than plots receiving the low-N level (5 g N $m^{-2}$ $yr^{-1}$). Overall observations during the study indicated that leaf tissue nitrogen content increased at every nitrogen rate with time after establishment. At the low-N level (5 g N $m^{-2}$ $yr^{-1}$), plots sampled in 1991 had 15% more leaf nitrogen content than plots sampled in 1989. A similar response was found at the high-N level (25 g N $m^{-2}$ $yr^{-1}$): plots analyzed in 1991 had 18% more than plots analyzed in 1989. No significant treatment effects on soil nitrogen content were observed over the first 3 years after establishment. Management of golf course turf should therefore differ strategically depending on whether clippings are returned, and turf fertilization programs should be approached differently where clippings are recycled. Regular analysis of golf course soil and leaf tissue is recommended, with the fertilization program developed from interpretation of those analytical results. In golf courses where clippings are recycled, the fertilization program should be adjusted to apply 20% to 30% less nitrogen than in areas where clippings are removed. Key words: Agrostis palustris Huds., 'Penncross' creeping bentgrass fairway, Irrigation frequency, Clipping return, Nitrogen rate, Leaf nitrogen content, Soil nitrogen content.
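The total-N figures above come from Kjeldahl determinations, whose back-titration arithmetic is simple enough to sketch. The function name and sample numbers below are illustrative assumptions, not values from the study.

```python
def kjeldahl_percent_n(titrant_ml, blank_ml, acid_normality, sample_g):
    """Total nitrogen (%) from a Kjeldahl back-titration:
    mg N = (titrant - blank) mL x acid normality x 14.007 mg/meq,
    then expressed as a percentage of the sample mass."""
    mg_n = (titrant_ml - blank_ml) * acid_normality * 14.007
    return mg_n / (sample_g * 1000.0) * 100.0

# hypothetical titration: 10.0 mL of 0.1 N acid, 0.2 mL blank, 0.25 g leaf sample
print(round(kjeldahl_percent_n(10.0, 0.2, 0.1, 0.25), 2))  # → 5.49
```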


Prediction of a hit drama with a pattern analysis on early viewing ratings (초기 시청시간 패턴 분석을 통한 대흥행 드라마 예측)

  • Nam, Kihwan;Seong, Nohyoon
    • Journal of Intelligence and Information Systems
    • /
    • v.24 no.4
    • /
    • pp.33-49
    • /
    • 2018
  • The impact of a TV drama's success on channel ratings and promotion effectiveness is very high, and its cultural and business impact has been demonstrated through the Korean Wave. Therefore, early prediction of a TV drama's blockbuster success is very important from the strategic perspective of the media industry. Previous studies have tried to predict a drama's ratings and success by various methods, but most have made simple predictions using intuitive factors such as the main actor and time slot, which limits their predictive power. In this study, we propose a model for predicting the popularity of a drama by analyzing customers' viewing patterns on the basis of various theories. This is not only a theoretical contribution but also a practical one, since the model can be used by actual broadcasting companies. We collected data on 280 TV mini-series dramas broadcast over terrestrial channels during the 10 years from 2003 to 2012. From these data, we selected the 45 most highly ranked and the 45 least highly ranked dramas and analyzed their viewing patterns over 11 steps. The assumptions and conditions for modeling are based on existing studies, on the opinions of actual broadcasters, and on data mining techniques. We then developed a prediction model by measuring the viewing-time distance (difference) using Euclidean and correlation methods, which we term similarity (the sum of distances). Through this similarity measure, we predicted the success of dramas from the distribution of viewers' initial viewing-time patterns over episodes 1-5. To confirm that the model is not sensitive to the choice of measure, several distance measures were applied and the model was checked for robustness. Once the model was established, we improved its predictive power further using a grid search.
Furthermore, when a new drama was broadcast, we classified viewers who had watched more than 70% of the total airtime as "passionate viewers" and compared the percentage of passionate viewers between the most highly ranked and the least highly ranked dramas, so that we could assess a drama's potential to become a blockbuster mini-series. We find that the initial viewing-time pattern is the key factor in predicting blockbuster dramas: our model correctly classified blockbuster dramas with 75.47% accuracy from the initial viewing-time pattern analysis. This paper thus shows a high prediction rate while suggesting an audience-rating method different from existing ones. Broadcasters currently rely heavily on a few famous actors, the so-called star system, and face more severe competition than ever because of rising program production costs, a long-term recession, and aggressive investment by general programming channels and large corporations; all are in a financially difficult situation. The basic revenue model of these broadcasters is advertising, and the sale of advertising rests on audience ratings as the basic index. The drama market carries demand uncertainty inherent in the nature of the product, even though dramas contribute heavily to the financial success of a broadcaster's content. Therefore, to minimize the risk of failure, analyzing the distribution of initial viewing time can provide practical help in establishing a response strategy (programming, marketing, story changes, etc.) for the companies involved. We also found that audience behavior is crucial to the success of a program, and we define a measure of how enthusiastically a program is watched.
By calculating the loyalty of these passionate viewers, we can successfully predict a program's success. This way of calculating loyalty can also be applied to loyalty on various platforms, and to marketing programs such as highlights, script previews, film adaptations, characters, games, and other marketing projects.
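The similarity computation described above, a distance between a new drama's early viewing-time distribution and the profiles of past hits and flops, can be sketched as follows. The 11-bin distributions, profile names, and nearest-profile classification rule below are hypothetical illustrations, not the paper's actual data or tuned model.

```python
import math

def euclidean(a, b):
    """Euclidean distance between two viewing-time distributions."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def correlation_distance(a, b):
    """1 - Pearson correlation, the paper's second distance family."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = math.sqrt(sum((x - ma) ** 2 for x in a))
    sb = math.sqrt(sum((y - mb) ** 2 for y in b))
    return 1.0 - cov / (sa * sb)

# hypothetical 11-bin viewing-time distributions (fraction of viewers per bin)
new_drama    = [0.02, 0.03, 0.04, 0.05, 0.06, 0.08, 0.10, 0.12, 0.15, 0.15, 0.20]
hit_profile  = [0.01, 0.02, 0.03, 0.04, 0.05, 0.07, 0.09, 0.12, 0.15, 0.17, 0.25]
flop_profile = [0.20, 0.18, 0.15, 0.12, 0.10, 0.08, 0.06, 0.05, 0.03, 0.02, 0.01]

# classify the new drama by its nearer profile
sim_hit = euclidean(new_drama, hit_profile)
sim_flop = euclidean(new_drama, flop_profile)
prediction = "hit" if sim_hit < sim_flop else "flop"
```

In this toy setup the new drama's distribution skews toward heavy viewing, so the nearest-profile rule labels it a hit; the paper's model aggregates such distances over episodes 1-5 and tunes the decision with a grid search.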

Performance Analysis and Comparison of Stream Ciphers for Secure Sensor Networks (안전한 센서 네트워크를 위한 스트림 암호의 성능 비교 분석)

  • Yun, Min;Na, Hyoung-Jun;Lee, Mun-Kyu;Park, Kun-Soo
    • Journal of the Korea Institute of Information Security & Cryptology
    • /
    • v.18 no.5
    • /
    • pp.3-16
    • /
    • 2008
  • A Wireless Sensor Network (WSN for short) is a wireless network consisting of distributed small devices called sensor nodes or motes. Recently, there has been extensive research on WSNs and on their security. For secure storage and secure transmission of sensed information, sensor nodes should be equipped with cryptographic algorithms, and these algorithms should be implemented efficiently since sensor nodes are highly resource-constrained devices. Some existing algorithms are already applicable to sensor nodes, including public key ciphers such as TinyECC and standard block ciphers such as AES. Stream ciphers, however, are still to be analyzed, since they were only recently standardized in the eSTREAM project. In this paper, we implement on the MicaZ platform nine of the ten software-oriented stream ciphers from the second and final phases of the eSTREAM project, and we evaluate their performance. In particular, we apply several optimization techniques to six ciphers, including SOSEMANUK, Salsa20, and Rabbit, which survived the final phase of the eSTREAM project. We also present the implementation results of hardware-oriented stream ciphers and AES-CFB for reference. According to our experiments, the encryption speeds of these software-based stream ciphers are in the range of 31-406 Kbps, so most of these ciphers are fairly acceptable for sensor nodes. In particular, the survivors SOSEMANUK, Salsa20, and Rabbit show throughputs of 406 Kbps, 176 Kbps, and 121 Kbps using 70 KB, 14 KB, and 22 KB of ROM and 2811 B, 799 B, and 755 B of RAM, respectively. From the viewpoint of encryption speed, these ciphers perform much better than software-based AES, which shows a speed of 106 Kbps.
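A throughput measurement of the kind reported above can be sketched in a few lines. Python's standard library includes none of the eSTREAM ciphers, so the sketch below uses SHAKE-128 as a stand-in keystream generator; the key, nonce, and message sizes are arbitrary illustrative choices, and the resulting Kbps figure says nothing about MicaZ performance.

```python
import hashlib
import time

def keystream_encrypt(key, nonce, plaintext):
    """XOR the plaintext with a keystream. SHAKE-128 stands in for a real
    stream cipher such as Salsa20; it is NOT one of the eSTREAM ciphers."""
    ks = hashlib.shake_128(key + nonce).digest(len(plaintext))
    return bytes(p ^ k for p, k in zip(plaintext, ks))

key, nonce = b"\x00" * 16, b"\x01" * 8   # hypothetical key/nonce sizes
msg = b"\xaa" * 4096                      # 4 KB test message

start = time.perf_counter()
ct = keystream_encrypt(key, nonce, msg)
elapsed = time.perf_counter() - start
kbps = len(msg) * 8 / 1000.0 / elapsed    # encryption throughput in Kbps
```

Because encryption is a keystream XOR, applying the same function to the ciphertext recovers the plaintext, which doubles as a correctness check for the benchmark.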

Design and Implementation of MongoDB-based Unstructured Log Processing System over Cloud Computing Environment (클라우드 환경에서 MongoDB 기반의 비정형 로그 처리 시스템 설계 및 구현)

  • Kim, Myoungjin;Han, Seungho;Cui, Yun;Lee, Hanku
    • Journal of Internet Computing and Services
    • /
    • v.14 no.6
    • /
    • pp.71-84
    • /
    • 2013
  • Log data, which record the multitude of information created when operating computer systems, are utilized in many processes, from carrying out computer system inspection and process optimization to providing customized user optimization. In this paper, we propose a MongoDB-based unstructured log processing system in a cloud environment for processing the massive amount of log data of banks. Most of the log data generated during banking operations come from handling a client's business. Therefore, in order to gather, store, categorize, and analyze the log data generated while processing the client's business, a separate log data processing system needs to be established. However, the realization of flexible storage expansion functions for processing a massive amount of unstructured log data and executing a considerable number of functions to categorize and analyze the stored unstructured log data is difficult in existing computer environments. Thus, in this study, we use cloud computing technology to realize a cloud-based log data processing system for processing unstructured log data that are difficult to process using the existing computing infrastructure's analysis tools and management system. The proposed system uses the IaaS (Infrastructure as a Service) cloud environment to provide a flexible expansion of computing resources and includes the ability to flexibly expand resources such as storage space and memory under conditions such as extended storage or rapid increase in log data. Moreover, to overcome the processing limits of the existing analysis tool when a real-time analysis of the aggregated unstructured log data is required, the proposed system includes a Hadoop-based analysis module for quick and reliable parallel-distributed processing of the massive amount of log data. 
Furthermore, because the HDFS (Hadoop Distributed File System) stores data by generating copies of block units of the aggregated log data, the proposed system offers automatic restore functions that allow it to continue operating after recovering from a malfunction. Finally, by establishing a distributed database using the NoSQL-based MongoDB, the proposed system provides methods for effectively processing unstructured log data. Relational databases such as MySQL have complex schemas that are inappropriate for processing unstructured log data; moreover, their strict schemas make it difficult to expand nodes by distributing stored data across them when the amount of data increases rapidly. NoSQL does not provide the complex computations that relational databases do, but it can easily expand through node dispersion when the amount of data increases rapidly; it is a non-relational database with a structure appropriate for processing unstructured data. NoSQL data models are usually classified as Key-Value, column-oriented, and document-oriented types. Of these, the representative document-oriented data model, MongoDB, which has a free schema structure, is used in the proposed system. MongoDB was chosen because its flexible schema structure makes it easy to process unstructured log data, it facilitates flexible node expansion when the amount of data is increasing rapidly, and it provides an Auto-Sharding function that automatically expands storage. The proposed system is composed of a log collector module, a log graph generator module, a MongoDB module, a Hadoop-based analysis module, and a MySQL module.
When the log data generated over the entire client business process of each bank are sent to the cloud server, the log collector module collects and classifies the data according to log type and distributes them to the MongoDB module and the MySQL module. The log graph generator module generates log-analysis results from the MongoDB module, the Hadoop-based analysis module, and the MySQL module per analysis time and type of aggregated log data, and provides them to the user through a web interface. Log data that require real-time analysis are stored in the MySQL module and provided in real time by the log graph generator module. The log data aggregated per unit time are stored in the MongoDB module and plotted in graphs according to the user's various analysis conditions. The aggregated log data in the MongoDB module are processed in a parallel-distributed manner by the Hadoop-based analysis module. A comparative evaluation against a log data processing system that uses only MySQL, covering log insertion and query performance, demonstrates the proposed system's superiority. Moreover, an optimal chunk size is identified through a log-insert performance evaluation of MongoDB over various chunk sizes.
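The log collector module's role, classifying incoming records by type and routing real-time logs to MySQL and bulk unstructured logs to MongoDB, can be sketched without a live database. The type names and routing rules below are hypothetical illustrations of that design, not the paper's actual implementation.

```python
from collections import defaultdict

# hypothetical rule: these log types need real-time analysis and go to MySQL;
# everything else is bulk unstructured data destined for MongoDB
REALTIME_TYPES = {"transaction", "auth_failure"}

def collect(logs):
    """Classify log records by type and route each to the appropriate
    storage module, mirroring the log collector module's job."""
    routes = defaultdict(list)
    for record in logs:
        target = "mysql" if record["type"] in REALTIME_TYPES else "mongodb"
        routes[target].append(record)
    return routes

logs = [
    {"type": "transaction", "msg": "transfer ok"},
    {"type": "batch_job", "msg": "nightly settlement"},
    {"type": "auth_failure", "msg": "bad pin"},
]
routes = collect(logs)
```

In the full system each route would end in an insert call (e.g., a MongoDB bulk insert for the unstructured queue), with the MongoDB side free to accept records whose fields vary from document to document.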