

A New Exploratory Research on Franchisor's Provision of Exclusive Territories (가맹본부의 배타적 영업지역보호에 대한 탐색적 연구)

  • Lim, Young-Kyun; Lee, Su-Dong; Kim, Ju-Young
    • Journal of Distribution Research / v.17 no.1 / pp.37-63 / 2012
  • In franchise business, exclusive sales territory (EST) protection is a very important issue from economic, social, and political points of view. It affects the growth and survival of both franchisor and franchisee and often raises social and political conflicts. When franchisees are unfamiliar with the related laws and regulations, the franchisor has a good chance of exploiting this. Exclusive sales territory protection by manufacturers and distributors (wholesalers or retailers) means a sales-area restriction by which only certain distributors have the right to sell products or services. A distributor granted an exclusive sales territory can protect its own territory but may be prohibited from entering other regions. Even though the exclusive sales territory is a quite critical problem in franchise business, there is little rigorous research on its reasons, results, and evaluation, or on future directions, based on empirical data. This paper addresses the problem not only in terms of logical and nomological validity but also through empirical validation. In pursuing an empirical analysis, we take into account the difficulties of real data collection and of statistical analysis techniques. We use a set of disclosure document data collected by the Korea Fair Trade Commission instead of the conventional survey method, which is often criticized for measurement error. Existing theories about exclusive sales territories can be summarized into two groups, as shown in the table below. The first concerns the effectiveness of exclusive sales territories from both the franchisor's and the franchisee's points of view: the outcome can be positive for franchisors but negative for franchisees, and positive in terms of sales but negative in terms of profit, so variables and viewpoints must be set properly. The second concerns the motives or reasons why exclusive sales territories are protected. These reasons can be classified into four groups: industry characteristics, franchise system characteristics, the capability to maintain exclusive sales territories, and strategic decisions. Within the four groups there are more specific variables and theories, as below. Based on these theories, we develop nine hypotheses, which are briefly shown in the last table below together with the results. To test the hypotheses, data were collected from the government (FTC) homepage, which is an open source. The sample consists of 1,896 franchisors and contains about three years of operating data, from 2006 to 2008. Within the sample, 627 franchisors have an exclusive-sales-territory protection policy, and those with such a policy are not evenly distributed over the 19 representative industries. Additional data were collected from other government agency homepages, such as Statistics Korea, and we combined data from various secondary sources to create meaningful variables, as shown in the table below. All variables are dichotomized by a mean or median split unless they are inherently dichotomous, since each hypothesis involves multiple variables and there is no solid statistical technique that incorporates all these conditions in a single test. This paper uses a simple chi-square test because the hypotheses and theories are built upon quite specific conditions, such as industry type, economic conditions, company history, and various strategic purposes; it is almost impossible to find samples satisfying all of them, and they cannot be manipulated in experimental settings. More advanced statistical techniques perform well on clean data without exogenous variables, but not on complex real-world data. The chi-square test is applied by grouping the sample into four cells using two criteria: whether franchisors protect an exclusive sales territory, and whether they satisfy the conditions of each hypothesis. A hypothesis is supported when the proportion of franchisors that satisfy the conditions and protect exclusive sales territories significantly exceeds the proportion that satisfy the conditions but do not protect them. In fact, the chi-square test is equivalent to a Poisson regression, which allows more flexible applications. As a result, only three hypotheses are accepted. When attitude toward risk is high, so that the royalty fee is determined by sales performance, EST protection produces poor results, as expected. When the franchisor protects ESTs in order to recruit franchisees more easily, EST protection produces better results. Also, when EST protection is intended to improve the efficiency of the franchise system as a whole, it shows better performance: high efficiency is achieved because ESTs prevent free riding by franchisees who would exploit others' marketing efforts, encourage proper investment, and distribute franchisees evenly across regions. The other hypotheses are not supported by the significance tests. Exclusive sales territories should be protected for proper motives and administered for mutual benefit. Legal restrictions driven by government agencies such as the FTC could be misused and cause misunderstandings, so real practices need more careful monitoring, and more rigorous studies by both academics and practitioners are required.
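
The 2×2 grouping described above corresponds to a standard chi-square test of independence. A minimal sketch follows, with illustrative cell counts rather than the paper's actual frequencies (only the totals n = 1,896 and 627 protectors are reported here):

```python
from scipy.stats import chi2_contingency

# Hypothetical 2x2 table for one hypothesis:
# rows    = condition satisfied / not satisfied
# columns = protects exclusive sales territory / does not
table = [[210, 310],
         [417, 959]]

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.4f}")
# The hypothesis is supported when the proportion protecting territories is
# significantly higher among franchisors that satisfy the condition (p < 0.05).
```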


A Comparative Study of Food Habits and Body Satisfaction of Middle School Students According to Clinical Symptoms (일부 남녀 중학생의 건강 관련 임상증상에 따른 식습관과 체형관심도에 관한 연구)

  • Sung, Chung-Ja
    • Journal of the Korean Society of Food Science and Nutrition / v.34 no.2 / pp.202-208 / 2005
  • This study was conducted to examine the food habits, nutrition knowledge, and actual food intake of adolescent middle school students by questionnaire. Questionnaires were completed by 524 students, divided into a healthy group (n=289) and an unhealthy group (n=235) according to clinical signs. The two groups were then asked further questions on food habits, nutrition knowledge, and nutritional attitude. The results were as follows. The mean age of all subjects was 14; heights of male and female students were 162.0 cm and 157.2 cm, and weights were 53.4 kg and 49.4 kg, respectively. Heights and weights of male students were greater than those of female students. The body mass index (BMI) of male and female students was 20.3 kg/m² and 20.0 kg/m², respectively, and all values were within normal ranges. There were no significant differences in mean age, height, weight, or BMI between the healthy and unhealthy groups. There was no significant difference in body image recognition between the two groups, although the rate of dissatisfaction with one's own body shape was significantly higher in the female unhealthy group (46.1%) than in the female healthy group (33.0%) (p<0.05). Attempts to control body weight during the previous year were also more common in the female unhealthy group (59.4%) than in the female healthy group (38.4%) (p<0.01). There was no significant difference between the two groups in nutrition knowledge or nutritional attitude scores. Meal frequency and meal patterns showed that eating breakfast fewer than four times a week was significantly more common in the female unhealthy group (44.0%) than in the female healthy group (30.7%) (p<0.01). Eating supper fewer than four times a week was also more common in the female unhealthy group (18.8%) than in the female healthy group (10.7%). The unhealthy group therefore showed a higher pattern of missing both breakfast and supper. The male unhealthy group (16.7%) dined out more frequently than the male healthy group (12.3%) (p<0.01), and the female unhealthy group also snacked significantly more frequently than the female healthy group. The unhealthy group additionally ate single-item meals more often than the healthy group, although this difference was not significant. The conclusion of this study is that adolescent Korean middle school students with a higher incidence of clinical symptoms, representing an unhealthy status, missed breakfast and supper, dined out, and snacked more frequently; their breakfast quality and body image satisfaction were also lower than those of the healthy group. These results indicate a high correlation between Korean adolescents' health status, food habits, and body image satisfaction. It is recommended that a more intensive program of nutrition education and monitoring be introduced into the current Korean middle school system in order to optimally support and maximize the health potential of the current population of Korean students.
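
Group differences in rates such as the breakfast-skipping comparison above (44.0% vs. 30.7%) are typically tested with a chi-square or two-proportion z-test. A minimal sketch, in which the female subgroup sizes are assumptions (the abstract reports only the overall group sizes of 235 and 289):

```python
from statsmodels.stats.proportion import proportions_ztest

# Assumed female subgroup sizes (hypothetical); rates from the abstract.
n_unhealthy, n_healthy = 120, 140
skipped = [round(0.440 * n_unhealthy), round(0.307 * n_healthy)]

stat, p = proportions_ztest(skipped, [n_unhealthy, n_healthy])
print(f"z = {stat:.2f}, p = {p:.4f}")  # compare with the p < 0.01 reported above
```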

A Study on the Application of Outlier Analysis for Fraud Detection: Focused on Transactions of Auction Exception Agricultural Products (부정 탐지를 위한 이상치 분석 활용방안 연구 : 농수산 상장예외품목 거래를 대상으로)

  • Kim, Dongsung; Kim, Kitae; Kim, Jongwoo; Park, Steve
    • Journal of Intelligence and Information Systems / v.20 no.3 / pp.93-108 / 2014
  • To support business decision making, interest in and efforts to analyze and use transaction data from different perspectives are increasing. Such efforts are not limited to customer management or marketing; they are also used for monitoring and detecting fraudulent transactions. Fraudulent transactions evolve into various patterns by taking advantage of information technology. To keep up with this evolution, there are many efforts to improve fraud detection methods and advanced application systems in terms of accuracy and ease of detection. As a case of fraud detection, this study aims to provide effective detection methods for auction exception agricultural products in the largest Korean agricultural wholesale market. The auction exception products policy exists to complement auction-based trades in the agricultural wholesale market. That is, most trades of agricultural products are performed by auction; however, specific products are designated as auction exception products when their total volume is relatively small, the number of wholesalers is small, or wholesalers have difficulty purchasing them. The auction exception policy, however, raises problems of fairness and transparency in transactions, which calls for fraud detection. In this study, to generate fraud detection rules, real large-scale trade transaction data from the market for 2008 to 2010 were analyzed, comprising more than 1 million transactions and over 1 billion US dollars in transaction volume. Agricultural transaction data have unique characteristics, such as frequent changes in supply volumes and turbulent time-dependent changes in price. Since this was the first attempt to identify fraudulent transactions in this domain, no training data set was available for supervised learning, so fraud detection rules were generated using an outlier detection approach, on the assumption that outlier transactions are more likely to be fraudulent than normal transactions. Outlier transactions were identified by comparing the daily, weekly, and quarterly average unit prices of product items; quarterly average unit prices of product items for specific wholesalers were also used. The reliability of the generated fraud detection rules was confirmed by domain experts. To determine whether a transaction is fraudulent, the normal distribution and the normalized Z-value concept are applied: the unit price of a transaction is transformed into a Z-value to calculate its occurrence probability, approximating the distribution of unit prices by a normal distribution. A modified Z-value of the unit price is used rather than the original Z-value, because for auction exception agricultural products the Z-values are influenced by the outlier fraud transactions themselves, the number of wholesalers being small. The modified Z-values are called Self-Eliminated Z-scores because they are calculated excluding the unit price of the specific transaction being checked for fraud. To show the usefulness of the proposed approach, a prototype fraud transaction detection system was developed using Delphi. The system consists of five main menus and related submenus. Its first functionality is to import transaction databases; the next important functions set up the fraud detection parameters. By changing these parameters, system users can control the number of potential fraud transactions. Execution functions provide the fraud detection results found under the given parameters, and the potential fraud transactions can be viewed on screen or exported as files. This study is an initial attempt to identify fraudulent transactions in auction exception agricultural products, and many research topics remain. First, the scope of the analyzed data was limited by data availability; more data on transactions, wholesalers, and producers should be included to detect fraud more accurately. Next, the scope of detection should be extended to fishery products. There are also many possibilities for applying other data mining techniques; for example, a time series approach is a potential technique for this problem. Finally, although outlier transactions are detected here based on unit prices, it is also possible to derive fraud detection rules based on transaction volumes.
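
A minimal sketch of the Self-Eliminated Z-score idea, assuming unit prices within one product/period group: each transaction is scored against the mean and standard deviation of the other transactions, so an extreme price cannot mask itself. Function and variable names are illustrative, not the prototype's.

```python
import numpy as np

def self_eliminated_z(prices):
    """Z-score of each unit price computed against the mean/std of the OTHER
    transactions in the same group, so an extreme (possibly fraudulent)
    price does not inflate its own baseline."""
    prices = np.asarray(prices, dtype=float)
    z = np.empty(len(prices))
    for i in range(len(prices)):
        rest = np.delete(prices, i)           # leave the tested price out
        z[i] = (prices[i] - rest.mean()) / rest.std(ddof=1)
    return z

# Flag transactions improbable under a normal approximation, e.g. |z| above
# a user-set detection parameter such as 3.0 (prices are made up).
prices = [1200, 1150, 1230, 1180, 5400, 1210]
flags = np.abs(self_eliminated_z(prices)) > 3.0
```

Because the small number of wholesalers lets one fraudulent price inflate the ordinary Z-score's denominator, excluding the transaction under test keeps the outlier conspicuous.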

A Study on the Meaning and Strategy of Keyword Advertising Marketing

  • Park, Nam Goo
    • Journal of Distribution Science / v.8 no.3 / pp.49-56 / 2010
  • At the initial stage of Internet advertising, banner advertising came into fashion. As the Internet developed into a central part of daily life and competition in the online advertising market grew fierce, there was not enough space for banner advertising, which rushed to portal sites only; all these factors were responsible for an upsurge in advertising prices. Consequently, the high-cost, low-efficiency problems of banner advertising were raised, which led to the emergence of keyword advertising as a new type of Internet advertising to replace its predecessor. In the early 2000s, when Internet advertising was taking off, display advertising, including banner advertising, dominated the Net. However, display advertising showed signs of gradual decline and registered negative growth in 2009, whereas keyword advertising grew rapidly and began to outdo display advertising as of 2005. Keyword advertising refers to the technique of exposing relevant advertisements at the top of search sites when one searches for a keyword. Instead of exposing advertisements to unspecified individuals as banner advertising does, keyword advertising, a targeted advertising technique, shows advertisements only when customers search for a desired keyword, so that only highly prospective customers are given a chance to see them; in this context it is also referred to as search advertising. It is regarded as more aggressive advertising, with a higher hit rate than earlier forms, in that instead of the seller discovering customers and running advertisements at them, as with TV, radio, or banner advertising, it exposes advertisements to customers who come visiting. Keyword advertising makes it possible for a company to seek publicity online simply by making use of a single word, achieving maximum efficiency at minimum cost. Its strong point is that customers can contact the products in question directly, making it more efficient than the advertisements of mass media such as TV and radio. Its weak point is that a company must register its advertisement on each and every portal site and finds it hard to exercise substantial supervision over the advertisement, with the possibility of advertising expenses exceeding profits. Keyword advertising serves as the most appropriate advertising method for the sales and publicity of small and medium enterprises, which need maximum advertising effect at low advertising cost. At present, keyword advertising is divided into CPC advertising and CPM advertising. The former, known as the most efficient technique, is based on a metered-rate system: the advertiser pays according to the number of clicks users make on the searched keyword. It is representatively adopted by Overture, Google's AdWords, Naver's Clickchoice, and Daum's Clicks. CPM advertising depends on a flat-rate payment system, making a company pay on the basis of the number of exposures rather than the number of clicks; the price is fixed per 1,000 exposures, and it is mainly adopted by Naver's Timechoice, Daum's Speciallink, and Nate's Speedup. At present, the CPC method is most frequently adopted. The weak point of the CPC method is that advertising costs can rise through repeated clicks from the same IP address. If a company makes good use of strategies that maximize the strong points of keyword advertising and complement its weak points, it is highly likely to turn its visitors into prospective customers. Accordingly, an advertiser should analyze customers' behavior and approach them in a variety of ways, trying hard to find out what they want; with this in mind, he or she should put multiple keywords to use when running ads. When first running an ad, the advertiser should give priority to keyword selection, considering how many search engine users will click the keyword in question and how much the advertisement will cost. As the popular keywords that search engine users frequently use carry a high cost per click, advertisers without much money at the initial phase should pay attention to detailed keywords suited to their budget. Detailed keywords, also referred to as peripheral or extension keywords, are combinations of major keywords. Most keyword ads are in the form of text. The biggest strength of text-based advertising is that it looks like search results and so arouses little antipathy; but precisely because most keyword advertising is text, it fails to attract much attention. Image-embedded advertising is easier to notice because of the images, but it is exposed on the lower part of the web page and is readily recognized as advertising, which leads to a low click-through rate; its strength, however, is that its prices are lower than those of text-based advertising. If a company owns a logo or a product that people recognize easily, it is well advised to make good use of image-embedded advertising to attract Internet users' attention. Advertisers should analyze their logs and examine customers' responses based on site events and product composition as a vehicle for monitoring behavior in detail. Keyword advertising also allows them to analyze the effects of exposed keywords through log analysis. Log analysis refers to a close analysis of the current situation of a site based on visitor information, namely the number of visitors, page views, and cookie values. A user's IP address, the pages used, access times, and cookie values are stored in the log files generated by each Web server. The log files contain a huge amount of data, and since it is almost impossible to analyze them directly, log analysis solutions are used. The generic information that can be extracted with log analysis tools includes total page views, average page views per day, basic page views, page views per visit, total hits, average hits per day, hits per visit, the number of visits, average visits per day, the net number of visitors, average visitors per day, one-time visitors, repeat visitors, and average usage hours. These data are also useful for analyzing the situation and current status of rival companies and for benchmarking. As keyword advertising exposes advertisements exclusively on search-result pages, competition among advertisers attempting to preoccupy popular keywords is fierce. Some portal sites keep giving priority to existing advertisers, whereas others give all advertisers the chance to purchase the keywords in question once the advertising contract expires. On sites that give priority to established advertisers, an advertiser relying on keywords sensitive to season and timing should purchase a vacant advertising slot so as not to miss the appropriate timing. Naver, however, does not give priority to existing advertisers for any keyword advertisements; in this case, one can preoccupy keywords by entering into a contract after confirming the contract period. This study is designed to take a look at keyword advertising marketing and to present effective strategies for it. At present, the Korean CPC advertising market is virtually monopolized by Overture, whose strong points are its CPC charging model and the registration of advertisements at the top of the most representative portal sites in Korea; these advantages make it the most appropriate medium for small and medium enterprises. However, Overture's CPC method has weak points too; it is not a perfect advertising model among the search advertisements on the online market. It is therefore absolutely necessary that small and medium enterprises, including independent shopping malls, complement the weaknesses of the CPC method and make good use of strategies that maximize its strengths, so as to increase their sales and create points of contact with customers.
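
The two charging models reduce to simple arithmetic. The sketch below, with hypothetical prices and click-through rate, shows how an advertiser might compare the cost of the same exposure under CPC and CPM:

```python
def cpc_cost(clicks, price_per_click):
    """Metered-rate (CPC) model: pay per click on the keyword ad."""
    return clicks * price_per_click

def cpm_cost(impressions, price_per_1000):
    """Flat-rate (CPM) model: pay per 1,000 exposures; clicks are free."""
    return impressions / 1000 * price_per_1000

# Hypothetical campaign: 100,000 exposures with a 2% click-through rate.
impressions, ctr = 100_000, 0.02
print(cpc_cost(impressions * ctr, 500))   # e.g. 500 KRW per click
print(cpm_cost(impressions, 5_000))       # e.g. 5,000 KRW per mille
```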


Evaluation of the Positional Uncertainty of a Liver Tumor using 4-Dimensional Computed Tomography and Gated Orthogonal Kilovolt Setup Images (사차원전산화단층촬영과 호흡연동 직각 Kilovolt 준비 영상을 이용한 간 종양의 움직임 분석)

  • Ju, Sang-Gyu; Hong, Chae-Seon; Park, Hee-Chul; Ahn, Jong-Ho; Shin, Eun-Hyuk; Shin, Jung-Suk; Kim, Jin-Sung; Han, Young-Yih; Lim, Do-Hoon; Choi, Doo-Ho
    • Radiation Oncology Journal / v.28 no.3 / pp.155-165 / 2010
  • Purpose: In order to evaluate the positional uncertainty of internal organs during radiation therapy for liver cancer, we measured inter- and intra-fractional variation of the tumor position and tidal amplitude using 4-dimensional computed tomography (4DCT) images and gated orthogonal kilovoltage (kV) setup images taken at every treatment session with the on-board imaging (OBI) and real-time position management (RPM) systems. Materials and Methods: Twenty consecutive patients who underwent 3-dimensional (3D) conformal radiation therapy for liver cancer participated in this study. All patients received a 4DCT simulation with an RT16 scanner and an RPM system. Lipiodol deposited near the target volume after transarterial chemoembolization, or the diaphragm, was chosen as a surrogate for evaluating the positional difference of internal organs. Two reference orthogonal (anterior and lateral) digitally reconstructed radiograph (DRR) images were generated using the CT image sets at the 0% and 50% respiratory phases. The maximum tidal amplitude of the surrogate was measured from the 3D conformal treatment plan. After the patient was set up with laser markings on the skin, orthogonal gated setup images at the 50% respiratory phase were acquired at each treatment session with OBI and registered to the reference DRR images by matching each beam center. Online inter-fractional variation was determined with the surrogate. After the patient setup error was adjusted, orthogonal setup images at the 0% and 50% respiratory phases were obtained, and the tidal amplitude of the surrogate was measured and compared with the 4DCT data. For the evaluation of intra-fractional variation, an orthogonal gated setup image at the 50% respiratory phase was acquired promptly after treatment and compared with the same image taken just before treatment. In addition, a statistical analysis was performed for quantitative evaluation. Results: Medians of inter-fractional variation for the twenty patients were 0.00 cm (range, -0.50 to 0.90 cm), 0.00 cm (range, -2.40 to 1.60 cm), and 0.00 cm (range, -1.10 to 0.50 cm) in the X (transaxial), Y (superior-inferior), and Z (anterior-posterior) directions, respectively. Significant inter-fractional variations over 0.5 cm were observed in four patients. In addition, the median tidal amplitude differences between 4DCT and the gated orthogonal setup images were -0.05 cm (range, -0.83 to 0.60 cm), -0.15 cm (range, -2.58 to 1.18 cm), and -0.02 cm (range, -1.37 to 0.59 cm) in the X, Y, and Z directions, respectively. Large differences of over 1 cm were detected in 3 patients in the Y direction, while differences of more than 0.5 cm but less than 1 cm were observed in 5 patients in the Y and Z directions. Median intra-fractional variation was 0.00 cm (range, -0.30 to 0.40 cm), -0.03 cm (range, -1.14 to 0.50 cm), and 0.05 cm (range, -0.30 to 0.50 cm) in the X, Y, and Z directions, respectively. Significant intra-fractional variation of over 1 cm was observed in 2 patients in the Y direction. Conclusion: Gated setup images provided clear image quality for the detection of organ motion without motion artifacts. Significant intra- and inter-fractional variations and tidal amplitude differences between 4DCT and gated setup images were detected in some patients during the treatment period and should therefore be considered when setting the target margin. Monitoring of positional uncertainty and an adaptive feedback system can enhance the accuracy of treatment.
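
The per-axis statistics reported above (median, range, and the number of patients exceeding a tolerance) can be summarized as in the following sketch; the displacement values are made up for illustration:

```python
import numpy as np

def summarize_axis(displacements_cm, threshold=0.5):
    """Median, range, and count of patients exceeding the tolerance,
    mirroring the per-axis statistics reported above."""
    d = np.asarray(displacements_cm, dtype=float)
    return {"median": float(np.median(d)),
            "range": (float(d.min()), float(d.max())),
            "n_over": int(np.sum(np.abs(d) > threshold))}

# e.g. hypothetical inter-fractional Y-direction shifts for a cohort (cm)
print(summarize_axis([0.0, -0.2, 1.6, 0.1, -2.4, 0.3]))
```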

Analysis and Implication on the International Regulations related to Unmanned Aircraft -with emphasis on ICAO, U.S.A., Germany, Australia- (세계 무인항공기 운용 관련 규제 분석과 시사점 - ICAO, 미국, 독일, 호주를 중심으로 -)

  • Kim, Dong-Uk; Kim, Ji-Hoon; Kim, Sung-Mi; Kwon, Ky-Beom
    • The Korean Journal of Air & Space Law and Policy / v.32 no.1 / pp.225-285 / 2017
  • In regard to the regulations related to the RPA (Remotely Piloted Aircraft), sometimes called the UA (Unmanned Aircraft) in other countries, ICAO stipulates detailed rules in the RPAS Manual (2015) based on the 1944 Chicago Convention and enacts provisions for the rules of UAS or RPAS. Other countries stipulate them in, for example, the Federal Aviation Regulations (14 CFR) and Public Law 112-95 in the United States; the Air Transport Act, Air Transport Order, and Air Transport Authorization Order (through the revision in the "Regulations on Operating Rules for Unmanned Aerial Systems"), based on EASA Regulation (EC) No. 216/2008, for unmanned aircraft under 150 kg in Germany; and the Civil Aviation Act (CAA 1998) and CASR Part 101 in Australia. Commonly, these laws exclude model aircraft flown for leisure and require pilots on the ground, not onboard the aircraft, capable of controlling the RPA. They also require all the management necessary to operate RPAs and pilots safely and efficiently within the structure of an unmanned aircraft system, within the scope of the regulations. Each country classifies the RPA as an aircraft of less than 25 kg; Australia and Germany further subdivide RPAs at lower weights. ICAO stipulates that all general aviation operations, including commercial operations, follow Annex 6 of the Chicago Convention, and this also applies to RPA operations; passenger transport using RPAs, however, is excluded. If the operational scope of an RPA includes the airspace of another country, the special permission of that country is required 7 days before the flight date, with a detailed flight plan submitted. In accordance with Federal Aviation Regulation Part 107 in the United States, a small non-leisure RPA may be operated within line of sight of a responsible navigator or observer during the day, at speeds of up to 161 km/h (87 knots) and heights of up to 122 m (400 ft) above the surface or water. The RPA must yield the flight path to other aircraft, and it is prohibited to carry dangerous materials or to operate more than two RPAs at the same time. In Germany, the regulations on UAS other than those for leisure and sports impose a duty to avoid airborne collisions, along with other provisions related to ground safety and individual privacy. Although commercial UAS of 5 kg or less can be operated freely without approval under the relaxed regulatory requirements, all UAS, regardless of weight, must be operated below an altitude of 100 m with continuous monitoring and pilot control. Australia was the first country to regulate unmanned aircraft, in 2001, and its regulations have influenced the unmanned aircraft rules of ICAO, the FAA, and EASA. In order to improve the utility of unmanned aircraft considered to be low-risk, the regulatory conditions were relaxed in the 2016 revision by adding the concept of the "excluded RPA", which can be operated without special permission even for commercial purposes. Furthermore, discussions on a new standards manual are being conducted to give the current regulations further flexibility.
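
For illustration only, the quantitative FAR Part 107 limits quoted above can be encoded as a simple rule check. This sketch covers just the four limits mentioned in the abstract and is in no way a compliance tool; names and structure are invented for the example.

```python
from dataclasses import dataclass

@dataclass
class SmallUasFlight:
    speed_kmh: float
    altitude_m: float
    daytime: bool
    within_line_of_sight: bool

def part107_basic_issues(f: SmallUasFlight) -> list[str]:
    """Check only the limits quoted in the abstract: 161 km/h (87 kt),
    122 m (400 ft), daytime operation, visual line of sight."""
    issues = []
    if f.speed_kmh > 161: issues.append("speed over 161 km/h (87 kt)")
    if f.altitude_m > 122: issues.append("altitude over 122 m (400 ft)")
    if not f.daytime: issues.append("night operation")
    if not f.within_line_of_sight: issues.append("beyond visual line of sight")
    return issues

print(part107_basic_issues(SmallUasFlight(150, 130, True, True)))
```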


Synthesis and Preliminary Evaluation of 9-(4-[¹⁸F]Fluoro-3-hydroxymethylbutyl)guanine ([¹⁸F]FHBG) in HSV1-tk Gene Transduced Hepatoma Cell (9-(4-[¹⁸F]Fluoro-3-hydroxymethylbutyl)guanine ([¹⁸F]FHBG)의 합성과 헤르페스 단순 바이러스 티미딘 키나아제 이입 간암 세포주에서의 기초 연구)

  • Moon, Byung-Seok; Lee, Tae-Sup; Lee, Myoung-Keun; Lee, Kyo-Chul; An, Gwang-Il; Chun, Kwon-Soo; Awh, Ok-Doo; Chi, Dae-Yoon; Choi, Chang-Woon; Lim, Sang-Moo; Cheon, Gi-Jeong
    • Nuclear Medicine and Molecular Imaging / v.40 no.4 / pp.218-227 / 2006
  • Purpose: The HSV1-tk reporter gene system is the most widely used system because of its advantage that, in HSV1-tk suicide gene therapy, direct monitoring is possible without introducing a separate reporter gene. In this study, we investigate the usefulness of the reporter probe (substrate) 9-(4-[¹⁸F]fluoro-3-hydroxymethylbutyl)guanine ([¹⁸F]FHBG) for non-invasive reporter gene imaging using PET in an HSV1-tk-expressing hepatoma model. Materials and Methods: Radiolabeled FHBG was prepared in 8 steps from a commercially available triester. The labeling reaction was carried out by no-carrier-added nucleophilic substitution with K[¹⁸F]/K2.2.2 in acetonitrile, using N2-monomethoxytrityl-9-[4-(tosyl)-3-monomethoxytritylmethylbutyl]guanine as the precursor, followed by deprotection with 1 N HCl. Preliminary biological properties of the probe were evaluated in MCA cells and in MCA-tk cells transduced with the HSV1-tk reporter gene. In vitro uptake and washout studies of [¹⁸F]FHBG were performed, and the correlation between the [¹⁸F]FHBG uptake ratio and the increasing number of MCA-tk cells, as a measure of the degree of gene expression, was analyzed. MicroPET scan images were obtained in Balb/c nude mice bearing MCA and MCA-tk tumors. Results: [¹⁸F]FHBG was purified by a reverse-phase semi-preparative HPLC system and collected at around 16-18 min. The radiochemical yield was about 20-25% (corrected for decay), the radiochemical purity was >95%, and the specific activity was >55.5 GBq/μmol. Specific accumulation of [¹⁸F]FHBG was observed in HSV1-tk gene-transduced MCA-tk cells but not in MCA cells, and a consecutive 1-hour washout experiment showed that more than 86% of the [¹⁸F]FHBG taken up was retained inside the cells. [¹⁸F]FHBG uptake showed a highly significant linear correlation (R² = 0.995) with the increasing percentage of MCA-tk cells. In microPET scan images, a remarkable difference in accumulation was observed between the two types of tumors. Conclusion: [¹⁸F]FHBG appears to be useful as a non-invasive PET imaging substrate in an HSV1-tk-expressing hepatoma model.
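
"Corrected for decay" means scaling the measured yield back to the start of synthesis using the fluorine-18 half-life (about 109.8 min). A minimal sketch, with an assumed synthesis time:

```python
import math

F18_HALF_LIFE_MIN = 109.77  # physical half-life of fluorine-18

def decay_corrected_yield(measured_fraction, elapsed_min):
    """Divide the measured yield by the decay factor exp(-ln2 * t / T_half)
    to express it as if no decay had occurred during synthesis."""
    decay_factor = math.exp(-math.log(2) * elapsed_min / F18_HALF_LIFE_MIN)
    return measured_fraction / decay_factor

# e.g. a yield measured as 12% after a hypothetical 90-minute synthesis
# corresponds to roughly 21% when corrected for decay.
print(f"{decay_corrected_yield(0.12, 90):.3f}")
```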

The Effects of Pergola Wisteria floribunda's LAI on Thermal Environment (그늘시렁 Wisteria floribunda의 엽면적지수가 온열환경에 미치는 영향)

  • Ryu, Nam-Hyong; Lee, Chun-Seok
    • Journal of the Korean Institute of Landscape Architecture / v.45 no.6 / pp.115-125 / 2017
  • This study investigated users' thermal environments under a pergola (L 7,200 × W 4,200 × H 2,700 mm) covered with Wisteria floribunda (Willd.) DC. according to variation in leaf area index (LAI). We carried out detailed measurements with two human-biometeorological stations on a popular square in Jinju, Korea (N 35°10′59.8″, E 128°05′32.0″, elevation 38 m). One station stood under the pergola, the other in the sun. The measurement spots were instrumented with microclimate monitoring stations to continuously measure air temperature, relative humidity, wind speed, and shortwave and longwave radiation from the six cardinal directions at a height of 0.6 m, so as to calculate the Universal Thermal Climate Index (UTCI), from 9 April to 27 September 2017. The LAI was measured using an LAI-2200C Plant Canopy Analyzer. Analysis of 18 days of 1-minute-interval human-biometeorological data, for radiation absorbed by a man in a sitting position from 10 am to 4 pm, showed the following. During the whole observation period, daily average air temperatures under the pergola were 0.7-2.3°C lower than those in the sun, while daily average wind speed and relative humidity under the pergola were 0.17-0.38 m/s and 0.4-3.1% higher, respectively. There was a significant relationship between LAI and Julian day number, expressed by the equation y = -0.0004x² + 0.1719x - 11.765 (R² = 0.9897). The average mean radiant temperature (Tmrt) under the pergola was 11.9-25.4°C lower than in the sun, and the maximum ΔTmrt was 24.1-30.2°C. The reduction ratio (%) of daily average Tmrt relative to the sun was significantly related to LAI, expressed by the equation y = 0.0678 ln(x) + 0.3036 (R² = 0.9454). The average UTCI under the pergola was 4.1-8.3°C lower than in the sun, and the maximum ΔUTCI was 7.8-10.2°C. The reduction ratio (%) of daily average UTCI relative to the sun was significantly related to LAI, expressed by the equation y = 0.0322 ln(x) + 0.1538 (R² = 0.8946). The shading of the vine-covered pergola was very effective at reducing the daytime UTCI absorbed by a man in a sitting position in summer, largely through the reduction in mean radiant temperature from sun protection, lowering thermal stress from very strong (UTCI > 38°C) and strong (UTCI > 32°C) down to strong (UTCI > 32°C) and moderate (UTCI > 26°C). A pergola covered with vines, used to shade outdoor spaces, is therefore essential to mitigating heat stress and can create better human thermal comfort, especially in cities in summer. However, during heat waves the thermal environment under the pergola still exposed users to "very strong" heat stress (UTCI > 38°C), so users should refrain from outdoor activities during heat waves.
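
The two fitted curves above can be evaluated directly to estimate how much a given LAI reduces daily average Tmrt and UTCI relative to full sun. A short sketch, treating the fitted y as a fraction of the value in the sun (an interpretive assumption):

```python
import math

def predicted_reduction_ratios(lai):
    """Evaluate the regression equations reported above: the fraction by
    which daily average Tmrt and UTCI under the pergola fall below the
    values in the sun."""
    tmrt_reduction = 0.0678 * math.log(lai) + 0.3036  # R^2 = 0.9454
    utci_reduction = 0.0322 * math.log(lai) + 0.1538  # R^2 = 0.8946
    return tmrt_reduction, utci_reduction

# e.g. at LAI = 3 the curves predict roughly 38% lower Tmrt and 19% lower UTCI.
print(predicted_reduction_ratios(3.0))
```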

A Study on the Growth Diagnosis and Management Prescription for Population of Retusa Fringe Trees in Pyeongji-ri, Jinan(Natural Monument No. 214) (진안 평지리 이팝나무군(천연기념물 제214호)의 생육진단 및 관리방안)

  • Rho, Jae-Hyun; Oh, Hyun-Kyung; Han, Sang-Yub; Choi, Yung-Hyun; Son, Hee-Kyung
    • Journal of the Korean Institute of Traditional Landscape Architecture / v.36 no.3 / pp.115-127 / 2018
  • This study attempted to clarify the value of this cultural asset through a diagnosis of the decline and death factors affecting the Population of Retusa Fringe Trees in Pyeongji-ri, Jinan (Natural Monument No. 214) and a corresponding management prescription. The results are as follows. First, many years have passed since the trees were designated as a natural monument in 1968, and despite maintenance work since 1973, including the removal of some of the buried soil when the elementary school fence was moved back after 2010, the covering-soil problem has not been fully resolved. Second, trees No. 1 and No. 3, among the surviving designated trees, have many dead branches, dull foliage, and a small amount of leaves; their vitality is "extremely bad". Tree No. 1 has already lost a large number of branches, and its leaf volume this year was so low that only two flowers bloomed. Tree No. 2 is also in a "bad" state, with small leaves, low leaf density, and deformed shoots. For tree No. 1, the largest of the group, there is the added concern that the covering soil is assumed to be paddy soil. Third, the soil is classified as silty loam (SiL), indicating a high silt content. The pH of the soil on the north side of tree No. 1 was 6.6, significantly different from the other samples, and the organic matter content was above the appropriate range, which appears to reflect continuous fertilization for protective management. Fourth, the root cause of the death and poor growth of the Jinan Pyeongji-ri fringe trees is judged to be chronic physiological deterioration due to the covering soil; this can also explain the newly planted successors and some of the deaths. Fifth, it is urgent to gradually remove the covering soil, which is presumed to be the cause of the initial damage. Above all, the covering soil, including the clayey soil buried around the root collars, should be removed only after its condition has been investigated in detail. After removal, a ventilation structure should be installed to improve root respiration, and the ground should be improved with masato (decomposed granite). The dead trees (Nos. 4, 5, and 6) should be removed, and the lower-layer vegetation mown. Dead branch stubs should be removed cleanly, and bark defects should be treated surgically to induce the development of dormant buds below the wound. Sixth, the underground roots should be surveyed to prepare measures that relieve soil compaction and improve soil respiration; rotten roots should be traced from the root collar and cut back to induce the generation of new roots. Seventh, mulching should be applied to suppress weeds, reduce trampling pressure, and retain soil moisture. In addition, consideration should be given to foliar fertilization, nutrient injection, and inorganic fertilizer management for a continuous nutrient supply. Future monitoring and forecasting plans should be developed to check for changes continuously.

Development of a complex failure prediction system using Hierarchical Attention Network (Hierarchical Attention Network를 이용한 복합 장애 발생 예측 시스템 개발)

  • Park, Youngchan; An, Sangjun; Kim, Mintae; Kim, Wooju
    • Journal of Intelligence and Information Systems / v.26 no.4 / pp.127-148 / 2020
  • The data center is a physical facility for accommodating computer systems and related components, and it is an essential foundation for next-generation core industries such as big data, smart factories, wearables, and smart homes. In particular, with the growth of cloud computing, proportional expansion of data center infrastructure is inevitable. Monitoring the health of data center facilities is a way to maintain and manage the system and prevent failure. If a failure occurs in some element of the facility, it may affect not only the relevant equipment but also other connected equipment, and may cause enormous damage. In particular, IT facility failures are irregular because of interdependence, and their causes are difficult to identify. Previous studies predicting failure in data centers treated each server as a single, isolated state, without assuming that devices interact. In this study, therefore, data center failures were classified into failures occurring inside the server (Outage A) and failures occurring outside the server (Outage B), and the analysis focused on complex failures occurring within servers. Failures outside the server include power, cooling, and user errors; since such failures can be prevented in the early stages of data center construction, various solutions are being developed. The causes of failures occurring within the server, on the other hand, are difficult to determine, and adequate prevention has not yet been achieved. In particular, server failures do not occur singly: one failure may cause failures in other servers, or a server may receive something that causes a failure from another server. In other words, while existing studies analyzed failures on the assumption of single servers that do not affect one another, this study assumes that failures have effects between servers. To define complex failure situations in the data center, failure history data for each piece of equipment in the data center were used. Four major failure types are considered in this study: Network Node Down, Server Down, Windows Activation Services Down, and Database Management System Service Down. The failures occurring for each device were sorted in chronological order, and when a failure occurred in one piece of equipment, any failure occurring in another piece of equipment within 5 minutes was defined as occurring simultaneously. After sequences were constructed for the devices that failed at the same time, the 5 devices that most frequently failed together within the sequences were selected, and the cases in which the selected devices failed simultaneously were confirmed through visualization. Since the server resource information collected for failure analysis is a time series with temporal flow, Long Short-Term Memory (LSTM), a deep learning algorithm that can predict the next state from previous states, was used. In addition, unlike the single-server case, a Hierarchical Attention Network deep learning model structure was used, in consideration of the fact that the level of multiple failures differs for each server; this algorithm increases prediction accuracy by giving more weight to servers with a greater impact on the failure. The study began by defining the failure types and selecting the analysis targets. In the first experiment, the same collected data were treated as a single-server state and as a multiple-server state and compared. The second experiment improved prediction accuracy for the complex-server case by optimizing each server's threshold. In the first experiment, under the single-server assumption, three of the five servers were predicted not to have failed even though failures actually occurred; under the multiple-server assumption, all five servers were correctly predicted to have failed. These experimental results support the hypothesis that there are effects between servers. This study confirmed that prediction performance was superior when multiple servers were assumed rather than a single server. In particular, applying the Hierarchical Attention Network algorithm, under the assumption that each server's effect differs, improved the analysis, and applying a different threshold for each server further improved prediction accuracy. This study showed that failures whose causes are difficult to determine can be predicted from historical data, and it presents a model that can predict failures occurring on servers in data centers. The results are expected to help prevent failures in advance.
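
A minimal sketch of the hierarchical idea described above, assuming a shared LSTM encoder per server and an attention layer that weights servers by their estimated impact before classification. Shapes, layer sizes, and names are illustrative, not the authors' exact architecture:

```python
import torch
import torch.nn as nn

class ServerAttentionFailureModel(nn.Module):
    def __init__(self, n_features, hidden=64):
        super().__init__()
        self.encoder = nn.LSTM(n_features, hidden, batch_first=True)
        self.attn = nn.Linear(hidden, 1)        # scores each server's encoding
        self.classifier = nn.Linear(hidden, 1)  # failure logit for the group

    def forward(self, x):
        # x: (batch, n_servers, seq_len, n_features) of server metrics
        b, s, t, f = x.shape
        _, (h, _) = self.encoder(x.reshape(b * s, t, f))
        h = h[-1].reshape(b, s, -1)              # one vector per server
        w = torch.softmax(self.attn(h), dim=1)   # attention weight per server
        context = (w * h).sum(dim=1)             # impact-weighted summary
        return self.classifier(context).squeeze(-1), w.squeeze(-1)

# Example with random data: 8 samples, 5 servers, 30 time steps, 12 metrics.
model = ServerAttentionFailureModel(n_features=12)
logits, weights = model(torch.randn(8, 5, 30, 12))
```

The attention weights can also be inspected per sample to see which servers the model attributes the most impact to, matching the motivation of weighting servers by their effect on the failure.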