

Results of Stereotactic Radiosurgery with Linear Accelerator for Intracranial Arteriovenous Malformation (두개강내 동정맥기형에서 선형가속기를 이용한 방사선수술의 결과)

  • Lee Kang Kyoo;Park Kyung Ran;Lee Jong Young;Lee Yong Ha
    • Radiation Oncology Journal
    • /
    • v.15 no.3
    • /
    • pp.215-224
    • /
    • 1997
  • Purpose: Stereotactic radiosurgery with external beam irradiation successfully obliterates carefully selected intracranial arteriovenous malformations (AVMs). We present clinical and radiological long-term results after treatment with a single high-dose irradiation using a linear accelerator. Materials and Methods: From January 1991 to June 1994, fifteen patients with intracranial AVM were treated in our hospital with stereotactic radiosurgery using a linear accelerator. The radiation was delivered using a 6 MV linear accelerator. The prescribed doses at the isocenter varied from 1800 to 2500 cGy (median: 2000 cGy) and were given as a single fraction. The radiation doses at the periphery of the lesion typically corresponded to the 80-90% isodose line. In 14 patients, complete clinical and/or radiological follow-up examinations were available. Results: Angiography was available in 13 patients, with a follow-up period from 18 to 27 months. Of these 13 patients, the overall complete obliteration rate was 92.3% (12 patients). This rate did not correlate with lesion size. Seizures, headaches and progressive neurologic deficits recovered completely. One patient experienced hemorrhage 2 months after treatment. One patient developed radiation-induced brain edema in the white matter surrounding the nidus 16 months after treatment, and showed complete resolution of the edema on MR images obtained 27 months after treatment. After a follow-up period of up to 6 years, no severe radiation-induced late complications had occurred. Conclusion: We conclude that stereotactic radiosurgery using a linear accelerator is an effective and safe therapy for symptomatic and surgically inaccessible intracranial AVMs, and that the results compare favorably to those of the more expensive and elaborate systems currently available for stereotactic radiosurgery.
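
As a minimal illustration of the prescription arithmetic described above (the numbers come from the abstract; the helper name is ours, not the paper's), the dose at the lesion periphery follows directly from the isocenter dose and the normalizing isodose line:

```python
def peripheral_dose(isocenter_dose_cGy, isodose_fraction):
    """Dose at the lesion periphery when the prescription is given at the
    isocenter and the periphery sits on a given isodose line
    (e.g. 0.8 for the 80% line)."""
    return isocenter_dose_cGy * isodose_fraction

# The median prescription of 2000 cGy normalized to the 80-90% isodose
# line corresponds to roughly 1600-1800 cGy at the lesion periphery.
low = peripheral_dose(2000, 0.80)
high = peripheral_dose(2000, 0.90)
```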


Performance Measurement of Diagnostic X-Ray System (진단용 X선 발생장치의 성능 측정)

  • You, Ingyu;Lim, Cheonghwan;Lee, Sangho;Lee, Mankoo
    • Journal of the Korean Society of Radiology
    • /
    • v.6 no.6
    • /
    • pp.447-454
    • /
    • 2012
  • To examine the performance of a diagnostic X-ray system, we tested linearity, reproducibility, and half-value layer (HVL). Linearity was examined with 4 irradiations under a given condition; we recorded the radiation output and calculated mR/mAs. The measured deviation should not exceed 0.1; a value above 0.1 indicates degraded linearity. Reproducibility was analyzed with 10 irradiations at 80 kVp, 200 mA, 20 mAs and at 120 kVp, 300 mA, 8 mAs. The values from these analyses were entered into the coefficient-of-variation (CV) equation; reproducibility is acceptable if the CV is lower than 0.05. HVL was measured with 3 irradiations without a filter, after which additional HVL filters of 0, 1, 2, and 4 mm thickness were inserted, repeating the measurement until the value fell below half of the value measured without an additional filter. We tested the linearity, reproducibility, and HVL of 5 diagnostic X-ray generators in this facility. The linearity of generators No. 1 and No. 5 did not satisfy the radiation-safety standard around 300-400 mA and 100-200 mA, respectively. The HVL of generator No. 1 was not satisfactory at 80 kVp. Outputs were higher in the three-phase equipment than in the single-phase equipment. Based on these results, the old generators need maintenance and component replacement. This would contribute to more exact diagnoses by improving image quality and decreasing patients' radiation exposure.
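
The pass/fail arithmetic described above can be sketched as follows (a minimal illustration; the function names and the deviation-ratio form of the linearity criterion are our assumptions, not code from the paper):

```python
import statistics

def linearity_ratio(outputs_mR, mAs_values):
    """mR/mAs for each exposure, then the deviation ratio
    (max - min) / (max + min); the abstract's limit is 0.1."""
    x = [o / m for o, m in zip(outputs_mR, mAs_values)]
    return (max(x) - min(x)) / (max(x) + min(x))

def coefficient_of_variation(outputs_mR):
    """CV of repeated exposures at one setting;
    reproducibility passes if CV < 0.05."""
    return statistics.stdev(outputs_mR) / statistics.mean(outputs_mR)
```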

THE EFFECT OF LOW DIETARY CALCIUM AND IRRADIATION ON MANDIBLE IN RATS (저칼슘식이와 방사선조사가 백서 악골에 미치는 영향의 실험적 연구)

  • Lee Sun-Ki;Lee Sang-Rae
    • Journal of Korean Academy of Oral and Maxillofacial Radiology
    • /
    • v.23 no.2
    • /
    • pp.229-250
    • /
    • 1993
This study was performed to investigate the morphological and structural changes of bone tissue and the effects of irradiation on the mandibular bodies of rats fed low-calcium diets. For this experiment, 160 seven-week-old Sprague-Dawley rats weighing about 150 g were selected and divided equally into a normal-diet group of 80 rats and a low-calcium-diet group with the remainder. Each group was then subdivided into two groups of 40 rats, one of which was exposed to radiation. Group 1 was composed of forty non-irradiated rats on the normal diet, Group 2 of forty irradiated rats on the normal diet, Group 3 of forty non-irradiated rats on the low-calcium diet, and Group 4 of forty irradiated rats on the low-calcium diet. The two irradiated groups received a single dose of 20 Gy to the jaw area only, delivered with a cobalt-60 teletherapy unit. Rats from the normal- and low-calcium-diet groups were serially sacrificed, ten at a time, on the 3rd, 7th, 14th, and 21st days after irradiation. After sacrifice, both sides of each rat's mandible were removed and fixed in 10% neutral formalin. The bone density of the mandibular body was measured with a bone mineral densitometer (Model DPX-alpha, Lunar Corp., U.S.A.). The TRIGA Mark III nuclear reactor at the Korea Atomic Energy Research Institute was used for neutron activation, and the calcium content of the mandibular body was then measured using a 4096-channel multichannel analyzer (EG&G ORTEC 919 MCA, U.S.A.). The mandibular body was also radiographed with a soft X-ray apparatus (Hitex Co., Ltd., Japan). The obtained microradiograms were observed under a light microscope and used for morphometric analysis with an image analyzer (Leco 2001 System, Leco Co., Canada). The morphometric analysis covered parameters such as the total area, the bone area, and the inner and outer perimeters of the bone. The results were as follows: 1. In the morphometric analysis, the total area and outer perimeter of the mandibular bodies of Group 3 were slightly smaller than those of Group 1. The mean bone width and bone area were much smaller than those of Group 1, and the inner perimeter of Group 3 was much longer than that of Group 1. The total area and outer perimeter of Groups 2 and 4 showed little difference. The mean bone width and bone area of Group 4 were smaller than those of Group 2, and the inner perimeter of Group 4 was longer than that of Group 2. 2. Remarkable decreases in the number and thickness of trabeculae, as well as resorption of the endosteal surface of cortical bone, were seen in the microradiograms of Groups 3 and 4 from the 3rd day of the experiment. On the 21st day, these findings were more pronounced in Group 4 than in Group 3. 3. The bone mineral density of Group 3 was lower than that of Group 1, and that of Group 4 lower than that of Group 2, on the 7th, 14th, and 21st days. Irradiation decreased bone mineral density regardless of diet. In the low-calcium-diet groups, bone mineral density had decreased much more by the 21st day than on the 3rd day of the experiment. 4. The calcium content of the mandible in Group 3 was lower than in Group 1 throughout the experiment; Group 4 showed the lowest calcium content. Irradiation decreased calcium content regardless of diet. In the low-calcium-diet groups, calcium content had decreased much more by the 21st day than on the 3rd day of the experiment. In conclusion, the present study demonstrated morphological changes and a decrease in bone mass due to bone resorption caused by the low-calcium diet, and showed that this resorption occurred in the spongy bone and on the endosteal surface of cortical bone. The problem of bone resorption must therefore be considered when elderly or postmenopausal women undergo radiotherapy, because irradiation appears to accelerate the resorption of osteoporotic bone.


Case study on the lake-land combined seismic survey for underground LPG storage construction (LPG 지하저장기지 건설을 위한 수륙혼합 탄성파탐사 사례)

  • Cha Seong-Soo;Park Keun-Pil;Lee Ho-Young;Lee Hee-Il;Kim Ho-Young
    • Korean Society of Exploration Geophysicists: Conference Proceedings
    • /
    • 2002.09a
    • /
    • pp.101-125
    • /
    • 2002
  • A lake seismic survey was carried out to investigate possible geohazards for construction of the underground LPG storage at Namyang Lake. The proposed survey site has combined land-lake geography, and furthermore the water depth of the lake is shallow. Therefore, various seismic methods were applied, such as marine single-channel high-resolution seismic reflection survey, sonobuoy refraction survey, land refraction survey, and land-lake combined refraction survey. The total survey amounts were 34 line-km of high-resolution lake seismic survey, 14 lines of sonobuoy refraction survey, 890 m of land refraction survey, and 8 lines of land-lake combined refraction survey. During the reflection survey, severe water reverberations from the lake bottom obscured subsurface profiling. These strong multiple events appeared over most of the survey area, except the northern and southern areas near the embankment, where mainly mud-dominated deposits appear to have accumulated. The sonobuoy refraction profiles showed the same phenomena as the reflection survey. Meanwhile, the results of the land-lake combined refraction survey showed relatively better quality; the land refraction survey, however, did not, owing to a low-velocity soil layer and electrical noise. The summarized results of the lake seismic survey are that the acoustic basement, with a relatively flat pattern, appeared 30 m below water level and showed three types of bedrock: fresh, moderately weathered, and weathered. According to the results of the combined refraction survey, the velocity distribution of the lake bottom shows three seismic velocity zones: >4.5 km/s, 4.5-4.0 km/s, and <4.0 km/s. The major fault lineament in the area showed a NW-SE trend, which differed from the Landsat image interpretation. Drilling confirmed the faults estimated by the seismic survey.


Assembly and Testing of a Visible and Near-infrared Spectrometer with a Shack-Hartmann Wavefront Sensor (샤크-하트만 센서를 이용한 가시광 및 근적외선 분광기 조립 및 평가)

  • Hwang, Sung Lyoung;Lee, Jun Ho;Jeong, Do Hwan;Hong, Jin Suk;Kim, Young Soo;Kim, Yeon Soo;Kim, Hyun Sook
    • Korean Journal of Optics and Photonics
    • /
    • v.28 no.3
    • /
    • pp.108-115
    • /
    • 2017
  • We report the assembly procedure and performance evaluation of a visible and near-infrared spectrometer for the 400-900 nm wavelength region, which is later to be combined with fore-optics (a telescope) to form an f/2.5 imaging spectrometer with a field of view of ±7.68°. The detector at the final image plane is a 640×480 charge-coupled device with a 24 μm pixel size. The spectrometer is in an Offner relay configuration consisting of two concentric spherical mirrors, the secondary of which is replaced by a convex grating mirror. A double-pass test method with an interferometer is often applied in the assembly of precision optics, but it was excluded from our study due to a large residual wavefront error (WFE) in the optical design of 210 nm (0.35λ at 600 nm) root-mean-square (RMS). We therefore used a single-pass test method with a Shack-Hartmann sensor. The final assembly was tested to have an RMS WFE increase of less than 90 nm over the entire field of view, a keystone of 0.08 pixels, a smile of 1.13 pixels, and a spectral resolution of 4.32 nm. During the procedure, we confirmed the validity of using a Shack-Hartmann wavefront sensor to monitor alignment in the assembly of an Offner-like spectrometer.
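
The RMS wavefront error quoted above is simply the standard deviation of the wavefront map about its mean (piston removed); a minimal sketch, with the function name ours:

```python
import numpy as np

def rms_wfe(wavefront_nm):
    """Root-mean-square wavefront error of a sampled wavefront map,
    after removing piston (the mean), in the input's units (here nm)."""
    w = np.asarray(wavefront_nm, dtype=float)
    return float(np.sqrt(np.mean((w - w.mean()) ** 2)))

# E.g. the design residual of 210 nm RMS at a 600 nm wavelength
# corresponds to 210 / 600 = 0.35 waves, as quoted in the abstract.
```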

A Study of Disposition of Archaeological Remains in Wolseong Fortress of Gyeongju: Using Ground Penetration Radar (GPR) (GPR탐사를 통해 본 경주 월성의 유적 분포 현황 연구)

  • Oh, Hyun Dok;Shin, Jong Woo
    • Korean Journal of Heritage: History & Science
    • /
    • v.43 no.3
    • /
    • pp.306-333
    • /
    • 2010
  • Previous studies on Wolseong fortress have focused on the capital system of the Silla Dynasty and on the recreation of the fortress, owing to the excavations in and around the Wolseong moat. Since the report on the geographical survey of Wolseong fortress was published and a trial GPR survey was executed there in 2004, academic interest has expanded to the inside of the fortress. In that context, preliminary research on the fortress, including a geophysical survey, was commenced. The GPR survey was conducted for a year from March 2007. The principal purpose of the recent 3D GPR survey was to provide visualization of subsurface images of the entire Wolseong fortress area. To obtain 3D GPR data, dense profile lines were laid out in grid form. The total area surveyed was 112,535 m². Depth slicing was applied at each level to examine how the layers of the remains had changed and overlapped over time. In addition, slice-overlay analysis was used to gather the reflections from each depth onto a single map. Isolated surface visualization, one of the 3D analysis methods, was also employed to gain a more in-depth understanding and more accurate interpretation of the remains. The GPR survey confirmed building sites whose archaeological features can be classified into 14 groups. Three interesting areas with large public building arrangements were found in Zone 2 in the far west, Zone 9 in the middle, and Zone 14 in the far east. Such areas must have been used for important public functions. This research has demonstrated that a 3D GPR survey can be effective over a vast area of archaeological remains, and that slice-overlay images can provide clearer, high-contrast images of objects and remains buried in the site.

Anomaly Detection for User Action with Generative Adversarial Networks (적대적 생성 모델을 활용한 사용자 행위 이상 탐지 방법)

  • Choi, Nam woong;Kim, Wooju
    • Journal of Intelligence and Information Systems
    • /
    • v.25 no.3
    • /
    • pp.43-62
    • /
    • 2019
  • At one time, the anomaly detection field was dominated by methods that judged abnormality from statistics derived from specific data. This was possible because data used to be low-dimensional, so classical statistical methods worked effectively. However, as data characteristics have grown complex in the era of big data, it has become more difficult to analyze and predict industrial data accurately in the conventional way. Supervised learning algorithms such as SVM and decision trees were therefore adopted. However, a supervised model predicts test data accurately only when the class distribution is balanced, and most data generated in industry have imbalanced classes, so the predictions of a supervised model are not always valid. To overcome these drawbacks, many studies now use unsupervised models that are not influenced by class distribution, such as autoencoders or generative adversarial networks. In this paper, we propose a method to detect anomalies using generative adversarial networks. AnoGAN, introduced by Thomas et al. (2017), is a model that performs anomaly detection on medical images and is composed of convolutional neural networks. In contrast, research on anomaly detection for sequence data using generative adversarial networks is scarce compared to that on image data. Li et al. (2018) proposed a model that classifies anomalies in numerical sequence data using LSTM, a type of recurrent neural network, but it has not been applied to categorical sequence data, nor does it use the feature-matching method of Salimans et al. (2016). This suggests that much remains to be studied on anomaly classification of sequence data with generative adversarial networks. To learn the sequence data, the generative adversarial network is built from LSTMs: the generator is a 2-stacked LSTM with 32-dim and 64-dim hidden unit layers, and the discriminator is an LSTM with a 64-dim hidden unit layer. In existing papers on anomaly detection for sequence data, anomaly scores are derived from the entropy of the probability of the actual data; in this paper, as mentioned earlier, anomaly scores are instead derived with the feature-matching technique. In addition, the latent-variable optimization process was designed with an LSTM to improve model performance. The modified generative adversarial model was more precise than the autoencoder in all experiments and approximately 7% higher in accuracy. In terms of robustness, the generative adversarial network also outperformed the autoencoder: because it learns the data distribution from real categorical sequences, it is not swayed by a single dominant normal pattern, whereas the autoencoder is. In the robustness test, the accuracy of the autoencoder was 92% and that of the generative adversarial network 96%; in terms of sensitivity, the autoencoder reached 40% and the generative adversarial network 51%. Experiments were also conducted to show how much performance changes with the optimization structure of the latent variables; sensitivity improved by about 1%. These results offer a new perspective on optimizing latent variables, which had previously received relatively little attention.
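
The AnoGAN-style scoring described above, which optimizes a latent vector so the generator reproduces the test sample and then scores the residual, can be sketched with a toy model (everything here, including the linear "generator" and the plain squared residual standing in for the feature-matching term, is an illustrative assumption, not the paper's LSTM architecture):

```python
import numpy as np

W = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])          # toy linear "generator": G(z) = W @ z

def G(z):
    return W @ z

def anomaly_score(x, z):
    """Squared reconstruction residual; the paper instead uses a
    feature-matching loss on the discriminator's features."""
    return float(np.sum((x - G(z)) ** 2))

def optimize_latent(x, steps=100, lr=0.1):
    """Gradient descent on z, mirroring AnoGAN's latent optimization
    (the paper designs this step with an LSTM instead)."""
    z = np.zeros(W.shape[1])
    for _ in range(steps):
        grad = 2 * W.T @ (G(z) - x)   # gradient of the squared residual
        z -= lr * grad
    return z, anomaly_score(x, z)

# A sample the generator can reproduce scores near zero; one outside
# the generator's range keeps a large residual and is flagged anomalous.
_, s_normal = optimize_latent(np.array([1.0, -1.0, 0.0]))
_, s_anomal = optimize_latent(np.array([1.0, 1.0, -2.0]))
```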

Estimation of Chlorophyll Contents in Pear Tree Using Unmanned Aerial Vehicle-Based Hyperspectral Imagery (무인기 기반 초분광영상을 이용한 배나무 엽록소 함량 추정)

  • Ye Seong Kang;Ki Su Park;Eun Li Kim;Jong Chan Jeong;Chan Seok Ryu;Jung Gun Cho
    • Korean Journal of Remote Sensing
    • /
    • v.39 no.5_1
    • /
    • pp.669-681
    • /
    • 2023
  • Studies have tried to apply remote sensing, a non-destructive survey method, in place of existing destructive surveys, which require relatively large labor input and a long time, to estimate chlorophyll content, an important indicator for evaluating the growth of fruit trees. This study was conducted to non-destructively evaluate the chlorophyll content of pear tree leaves using unmanned aerial vehicle-based hyperspectral imagery over two years (2021, 2022). The reflectance of the single bands of the pear tree canopy, extracted through image processing, was converted to band ratios to minimize the unstable radiation effects of changes over time. Estimation (calibration and validation) models were developed with the machine learning algorithms elastic-net, k-nearest neighbors (KNN), and support vector machine, using the band ratios as input variables. By comparing the performance of estimation models based on the full set of band ratios, key band ratios were selected that are advantageous for reducing computational cost and improving reproducibility. For all machine learning models using the full band ratios, calibration reached a coefficient of determination (R2) ≥ 0.67, root mean squared error (RMSE) ≤ 1.22 ㎍/cm2 and relative error (RE) ≤ 17.9%, and validation reached R2 ≥ 0.56, RMSE ≤ 1.41 ㎍/cm2 and RE ≤ 20.7%; from this comparison, four key band ratios were selected. There was no significant difference in validation performance between the machine learning models, so the KNN model with the highest calibration performance was used as the standard; its key band ratios were 710/714, 718/722, 754/758, and 758/762 nm. Its calibration performance was R2 = 0.80, RMSE = 0.94 ㎍/cm2, RE = 13.9%, and its validation performance was R2 = 0.57, RMSE = 1.40 ㎍/cm2, RE = 20.5%. Although the validation performance was not sufficient to estimate the chlorophyll content of pear tree leaves, it is meaningful that key band ratios were selected as a standard for future research. In future research, it is necessary to continuously secure additional datasets to improve estimation performance, to verify the reliability of the selected key band ratios, and to upgrade the estimation model so that it is reproducible in actual orchards.
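
A minimal sketch of the two steps above, forming band ratios from single-band reflectance and regressing chlorophyll with KNN (the numbers and the plain unweighted KNN are illustrative assumptions; the actual models were fit to UAV imagery with elastic-net, KNN, and SVM):

```python
import numpy as np

def band_ratios(reflectance, pairs):
    """Ratio of reflectance at two nearby wavelengths, e.g. 710/714 nm,
    to damp unstable radiation effects between acquisition dates."""
    return np.array([reflectance[a] / reflectance[b] for a, b in pairs])

def knn_predict(X_train, y_train, x, k=3):
    """Plain k-nearest-neighbors regression: average the chlorophyll
    content (e.g. in ug/cm^2) of the k training samples closest to x
    in band-ratio space."""
    dist = np.linalg.norm(X_train - x, axis=1)
    nearest = np.argsort(dist)[:k]
    return float(np.mean(y_train[nearest]))
```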

Contrast Media in Abdominal Computed Tomography: Optimization of Delivery Methods

  • Joon Koo Han;Byung Ihn Choi;Ah Young Kim;Soo Jung Kim
    • Korean Journal of Radiology
    • /
    • v.2 no.1
    • /
    • pp.28-36
    • /
    • 2001
  • Objective: To provide a systematic overview of the effects of various parameters on contrast enhancement within the same population, an animal experiment as well as a computer-aided simulation study was performed. Materials and Methods: In the animal experiment, single-level dynamic CT through the liver was performed at 5-second intervals for 3 minutes, starting just after the injection of contrast medium. Combinations of three different volumes (1, 2, 3 mL/kg), concentrations (150, 200, 300 mgI/mL), and injection rates (0.5, 1, 2 mL/sec) were used. The CT number of the aorta (A), portal vein (P), and liver (L) was measured in each image, and time-attenuation curves for A, P, and L were thus obtained. The degree of maximum enhancement (Imax) and the time to peak enhancement (Tmax) of A, P, and L were determined, and the time to equilibrium (Teq) was analyzed. In the computer-aided simulation model, a program based on the amount, flow, and diffusion coefficient of body fluid in the various compartments of the human body was designed. The input variables were the concentration, volume, and injection rate of the contrast medium used. The program generated the time-attenuation curves of A, P, and L, as well as liver-to-hepatocellular carcinoma (HCC) contrast curves. On each curve we calculated and plotted the optimal temporal window (the time period above the lower threshold, which in this experiment was 10 Hounsfield units), the total area under the curve above the lower threshold, and the area within the optimal range. Results: A. Animal experiment: At a given concentration and injection rate, an increased volume of contrast medium led to increases in Imax of A, P, and L, and Tmax of A, P, and L and Teq were prolonged in parallel with the increase in injection time; the time-attenuation curve shifted upward and to the right. For a given volume and injection rate, an increased concentration of contrast medium increased the degree of aortic, portal, and hepatic enhancement, though Tmax of A, P, and L remained the same; the time-attenuation curve shifted upward. For a given volume and concentration of contrast medium, changes in the injection rate had a prominent effect on aortic enhancement, and the enhancement of the portal vein and hepatic parenchyma also showed some increase, though less prominent. An increase in the rate of contrast injection shifted the time-enhancement curve to the left and upward. B. Computer simulation: At a faster injection rate, there was minimal change in the degree of hepatic attenuation, though the duration of the optimal temporal window decreased. The area between 10 and 30 HU was greatest when contrast medium was delivered at a rate of 2-3 mL/sec. Although the total area under the curve increased in proportion to the injection rate, most of this increase was above the upper threshold, so the temporal window was narrow and the optimal area decreased. Conclusion: Increases in volume, concentration, and injection rate all resulted in improved arterial enhancement. If cost is disregarded, increasing the injection volume is the most reliable way of obtaining good-quality enhancement. The optimal way of delivering a given amount of contrast medium can be calculated using a computer-based mathematical model.
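
The curve metrics defined in the Methods, namely the optimal temporal window, the total area above the lower threshold, and the area within the optimal range, can be computed from a sampled time-attenuation curve like this (a sketch under the abstract's 10/30 HU thresholds; the simulation program itself is not described in code):

```python
import numpy as np

def _trapezoid(y, t):
    """Trapezoidal integral of samples y over times t."""
    return float(np.sum((y[1:] + y[:-1]) * np.diff(t) / 2.0))

def window_metrics(t, hu, lower=10.0, upper=30.0):
    """Optimal temporal window (seconds spent above `lower`), total area
    above `lower`, and area within [lower, upper] for an enhancement
    curve sampled at times t (s) with values hu (Hounsfield units)."""
    window = _trapezoid((hu > lower).astype(float), t)
    total_area = _trapezoid(np.clip(hu - lower, 0.0, None), t)
    optimal_area = _trapezoid(np.clip(np.minimum(hu, upper) - lower, 0.0, None), t)
    return window, total_area, optimal_area
```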


A Study on the Meaning and Strategy of Keyword Advertising Marketing

  • Park, Nam Goo
    • Journal of Distribution Science
    • /
    • v.8 no.3
    • /
    • pp.49-56
    • /
    • 2010
  • At the initial stage of Internet advertising, banner advertising came into fashion. As the Internet developed into a central part of daily life and competition in the online advertising market grew fierce, there was not enough space for banner advertising, which rushed to portal sites only. All these factors were responsible for an upsurge in advertising prices. Consequently, the high-cost, low-efficiency problems of banner advertising were raised, which led to the emergence of keyword advertising as a new type of Internet advertising to replace its predecessor. In the early 2000s, when Internet advertising became activated, display advertising, including banner advertising, dominated the Net. However, display advertising showed signs of gradual decline and recorded negative growth in 2009, whereas keyword advertising grew rapidly and started to outdo display advertising from 2005. Keyword advertising is the technique of exposing relevant advertisements at the top of a search site's results when one searches for a keyword. Instead of exposing advertisements to unspecified individuals as banner advertising does, keyword advertising, a targeted advertising technique, shows advertisements only when customers search for a desired keyword, so that only highly prospective customers are given a chance to see them. In this context, it is also referred to as search advertising. It is regarded as more aggressive advertising with a higher hit rate than its predecessors in that, instead of the seller discovering customers and running advertisements for them, as with TV, radio or banner advertising, it exposes advertisements to customers who come visiting. Keyword advertising makes it possible for a company to seek publicity online simply by making use of a single word, and to achieve maximum efficiency at minimum cost.
The strong point of keyword advertising is that it lets customers contact the products in question directly, making it more efficient than advertisements in mass media such as TV and radio. Its weak points are that a company must register its advertisement on each and every portal site, finds it hard to exercise substantial supervision over its advertisements, and runs the risk of its advertising expenses exceeding its profits. Keyword advertising serves as the most appropriate method of advertising for the sales and publicity of small and medium enterprises, which need maximum advertising effect at low advertising cost. At present, keyword advertising is divided into CPC advertising and CPM advertising. The former is known as the most efficient technique and is also referred to as advertising based on the meter-rate system: a company pays for the number of clicks on the keyword that users have searched. This model is representatively adopted by Overture, Google's AdWords, Naver's Clickchoice, and Daum's Clicks. CPM advertising depends on the flat-rate payment system, making a company pay for its advertisement on the basis of the number of exposures, not the number of clicks. This method fixes the price of an advertisement on the basis of 1,000 exposures, and is mainly adopted by Naver's Timechoice, Daum's Speciallink, and Nate's Speedup. At present, the CPC method is most frequently adopted. The weak point of the CPC method is that advertising costs can rise through repeated clicks from the same IP. If a company makes good use of strategies that maximize the strong points of keyword advertising and complement its weak points, it is highly likely to turn its visitors into prospective customers.
Accordingly, an advertiser should analyze customers' behavior and approach them in a variety of ways, trying hard to find out what they want. With this in mind, he or she has to put multiple keywords to use when running ads. When first running an ad, the advertiser should give priority to keyword selection, considering how many search-engine users will click the keyword in question and how much the advertisement will cost. As the popular keywords that search-engine users frequently use carry a high unit cost per click, advertisers without much money at the initial phase should pay attention to detailed keywords suited to their budget. Detailed keywords, also referred to as peripheral or extension keywords, are combinations of major keywords. Most keywords are in the form of text. The biggest strong point of text-based advertising is that it looks like search results, arousing little antipathy; but it fails to attract much attention precisely because most keyword advertising is text. Image-embedded advertising is easy to notice thanks to its images, but it is exposed on the lower part of a web page and recognized as an advertisement, which leads to a low click-through rate. Its strong point, however, is that its prices are lower than those of text-based advertising. If a company owns a logo or a product that people recognize easily, it is well advised to make good use of image-embedded advertising to attract Internet users' attention. Advertisers should analyze their logs and examine customers' responses to the events and product composition of the sites in question, as a vehicle for monitoring their behavior in detail.
Keyword advertising also allows advertisers to analyze the advertising effects of exposed keywords through log analysis. Log analysis refers to a close analysis of the current situation of a site based on information about its visitors, drawn from visitor counts, page views, and cookie values. A user's IP address, the pages used, the times of use, and cookie values are stored in the log files generated by each web server. The log files contain a huge amount of data; as it is almost impossible to analyze them directly, one analyzes them using log-analysis solutions. The generic information that can be extracted from log-analysis tools includes total page views, average page views per day, basic page views, page views per visit, total hits, average hits per day, hits per visit, the number of visits, average visits per day, the net number of visitors, average visitors per day, one-time visitors, visitors who have come more than twice, and average usage hours. Such data are useful for analyzing the situation and current status of rival companies, as well as for benchmarking. As keyword advertising exposes advertisements exclusively on search-result pages, competition among advertisers attempting to preoccupy popular keywords is very fierce. Some portal sites keep giving priority to existing advertisers, whereas others give all advertisers the chance to purchase the keywords in question once the advertising contract is over.
On sites that give priority to established advertisers, an advertiser who relies on keywords sensitive to season and timeliness should purchase a vacant advertising slot in advance so as not to miss the right timing. Naver, however, does not give priority to existing advertisers for any keyword advertisement; in this case, one can preoccupy keywords by entering into a contract after confirming the contract period for the advertising. This study is designed to take a look at marketing for keyword advertising and to present effective strategies for keyword advertising marketing. At present, the Korean CPC advertising market is virtually monopolized by Overture, whose strong points are that it is based on the CPC charging model and that its advertisements are registered at the top of the most representative portal sites in Korea. These advantages make it the most appropriate medium for small and medium enterprises. However, the CPC method of Overture has its weak points, too: it is not a perfect advertising model for search advertising in the online market. It is therefore absolutely necessary that small and medium enterprises, including independent shopping malls, complement the weaknesses of the CPC method and make good use of strategies that maximize its strengths, so as to increase their sales and create a point of contact with customers.
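
The two billing models described above differ only in what is counted; a toy comparison of the arithmetic (all prices and counts here are illustrative assumptions, not figures from the study):

```python
def cpc_cost(clicks, cost_per_click):
    """CPC (meter-rate) billing: pay per click on the keyword ad."""
    return clicks * cost_per_click

def cpm_cost(impressions, price_per_thousand):
    """CPM (flat-rate) billing: pay per 1,000 exposures, regardless of clicks."""
    return impressions / 1000 * price_per_thousand

# Hypothetical numbers: 50,000 impressions at a 1% click-through rate
# gives 500 clicks; at a 100-won CPC that costs 50,000 won, while the
# same exposure at a 2,000-won CPM costs 100,000 won.
cpc_total = cpc_cost(500, 100)
cpm_total = cpm_cost(50000, 2000)
```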
