• Title/Summary/Keyword: Embedded-system


SHEAR BOND STRENGTH AND MICROLEAKAGE OF COMPOSITE RESIN ACCORDING TO TREATMENT METHODS OF CONTAMINATED SURFACE AFTER APPLYING A BONDING AGENT (접착제 도포후 오염된 표면의 처리방법에 따른 복합레진의 전단결합강도와 미세누출)

  • Park, Joo-Sik;Lee, Suck-Jong;Moon, Joo-Hoon;Cho, Young-Gon
    • Restorative Dentistry and Endodontics, v.24 no.4, pp.647-656, 1999
  • The purpose of this study was to investigate the shear bond strength and marginal microleakage of composite resin bonded to enamel and dentin according to different treatment methods when the applied bonding agent was contaminated by artificial saliva. For the shear bond strength test, the buccal and occlusal surfaces of one hundred twenty molar teeth were ground to expose enamel (n=60) and dentin (n=60) surfaces. The specimens were randomly assigned to a control group and five experimental groups of 10 samples each. In the control group, a bonding system (Scotchbond™ Multi-Purpose Plus) and a composite resin (Z-100™) were bonded to the specimens according to the manufacturer's directions. In the experimental groups, after polymerization of the adhesive, the enamel and dentin surfaces were contaminated with artificial saliva and then treated as follows. Experimental group 1: the artificial saliva was dried with compressed air. Experimental group 2: the artificial saliva was rinsed off with air-water spray and dried. Experimental group 3: the artificial saliva was rinsed off and dried, and an adhesive was applied. Experimental group 4: the artificial saliva was rinsed off and dried, the surface was etched with phosphoric acid, and an adhesive was applied. Experimental group 5: the artificial saliva was rinsed off and dried, the surface was etched with phosphoric acid, and both a primer and an adhesive were applied consecutively. Composite resin (Z-100™) was then bonded to the saliva-treated enamel and dentin surfaces. Shear bond strengths were measured with a universal testing machine (AGS-1000 4D, Shimadzu Co., Japan) at a crosshead speed of 5 mm/minute under a 50 kg load cell. Failure modes at the fracture sites were examined under a stereomicroscope. The data were analyzed by one-way ANOVA and Tukey's test. For the marginal microleakage test, Class V cavities were prepared on the buccal surfaces of sixty molars. The specimens were divided into a control group and experimental groups. Cavities in the experimental groups were contaminated with artificial saliva, and the surfaces received the same treatments as in the shear test. The cavities were filled with Z-100. Specimens were immersed in 0.5% basic fuchsin dye for 24 hours, embedded in transparent acrylic resin, and sectioned buccolingually with a diamond wheel saw. Four sections were obtained from each specimen. Marginal microleakage at enamel and dentin was scored under a stereomicroscope and averaged over the four sections. The data were analyzed by the Kruskal-Wallis test and Fisher's LSD. The results of this study were as follows. 1. The shear bond strength to enamel was lower in experimental group 1 (13.20 ± 2.94 MPa) and experimental group 2 (13.20 ± 2.94 MPa) than in the control group (20.03 ± 4.47 MPa), experimental group 4 (20.96 ± 4.25 MPa), and experimental group 5 (21.25 ± 4.48 MPa) (p < 0.05). 2. The shear bond strength to dentin was lower in experimental group 1 (9.35 ± 4.11 MPa) and experimental group 2 (9.83 ± 4.11 MPa) than in the control group (17.86 ± 4.03 MPa), experimental group 4 (15.04 ± 3.22 MPa), and experimental group 5 (14.33 ± 3.00 MPa) (p < 0.05). 3. On both enamel and dentin surfaces, experimental groups 1 and 2 showed many adhesive failures, whereas the control group and experimental groups 3, 4, and 5 showed mixed and cohesive failures. 4. Enamel marginal microleakage was highest in experimental group 1, with a significant difference from the other groups (p < 0.05). 5. Dentin marginal microleakage in experimental groups 1 and 2 was higher than in the other groups (p < 0.05). These results suggest that, when the polymerized bonding agent has been contaminated by saliva, re-etching with 35% phosphoric acid followed by re-application of the adhesive, or repeating all adhesive procedures, will have a good effect on both the shear bond strength and the microleakage of composite bonded to enamel and dentin.
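The statistical workflow named in this abstract (one-way ANOVA with Tukey's post-hoc test for shear bond strengths, Kruskal-Wallis for the ordinal microleakage scores) can be reproduced with standard scientific Python tooling. The sketch below is only an illustration under assumed example data; the group arrays are placeholders, not the study's measurements.

```python
# Sketch of the analyses named in the abstract: one-way ANOVA + Tukey HSD for
# shear bond strength (MPa), Kruskal-Wallis for ordinal microleakage scores.
# All group values below are illustrative placeholders, not the study data.
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(0)
groups = {
    "control": rng.normal(20.0, 4.5, 10),
    "exp1":    rng.normal(13.2, 2.9, 10),
    "exp2":    rng.normal(13.2, 2.9, 10),
}

# One-way ANOVA across the groups
f_stat, p_anova = stats.f_oneway(*groups.values())
print(f"ANOVA: F={f_stat:.2f}, p={p_anova:.4f}")

# Tukey's HSD post-hoc comparisons
values = np.concatenate(list(groups.values()))
labels = np.repeat(list(groups.keys()), [len(v) for v in groups.values()])
print(pairwise_tukeyhsd(values, labels, alpha=0.05))

# Kruskal-Wallis test on ordinal microleakage scores (0-3 scale assumed)
leakage = {"control": [0, 1, 0, 1, 0], "exp1": [2, 3, 3, 2, 3], "exp2": [2, 2, 3, 3, 2]}
h_stat, p_kw = stats.kruskal(*leakage.values())
print(f"Kruskal-Wallis: H={h_stat:.2f}, p={p_kw:.4f}")
```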


FRACTURE RESISTANCE OF THE THREE TYPES OF UNDERMINED CAVITY FILLED WITH COMPOSITE RESIN (복합 레진으로 수복된 세 가지 첨와형태 와동의 파절 저항성에 관한 연구)

  • Choi, Hoon-Soo;Shin, Dong-Hoon
    • Restorative Dentistry and Endodontics, v.33 no.3, pp.177-183, 2008
  • It has been reported that an esthetic composite resin restoration reinforces the strength of the remaining tooth structure while preserving natural tooth structure. However, it is unknown how much of the strength is recovered. The purpose of this study was to compare the fracture resistance of three types of undermined cavity filled with composite resin with that of non-cavitated natural teeth. Forty sound upper molars were allocated randomly into four groups of 10 teeth. After flattening the occlusal enamel, undermined cavities were prepared in thirty teeth to make three types of specimens with varying thicknesses of remaining occlusal structure (Groups 1-3). All cavities had a mesiodistal width of 5 mm and a buccolingual depth of 7 mm. The other ten natural teeth (Group 4) were used as a control group. Teeth in Group 1 had a remaining occlusal structure about 1 mm thick, composed mainly of enamel and a small amount of dentin. In Group 2, the remaining thickness was about 1.5 mm, including 0.5 mm of dentin. In Group 3, the thickness was about 2.0 mm, including 1 mm of dentin. Every effort was made to keep the remaining dentin thickness about 0.5 mm from the pulp space in the cavitated groups. All thicknesses were evaluated with a radiographic Length Analyzer program. After acid etching with 37% phosphoric acid, a one-bottle adhesive (Single Bond™, 3M/ESPE, USA) was applied following the manufacturer's recommendation, and the cavities were incrementally filled with a hybrid composite resin (Filtek Z-250™, 3M/ESPE, USA). The teeth were stored in distilled water for one day at room temperature and then finished and polished with the Sof-Lex system. All specimens were embedded in acrylic resin, and a static load was applied to the specimens with a 3 mm diameter stainless steel rod in a universal testing machine at a crosshead speed of 1 mm/min. The maximum load at fracture was recorded for each specimen. The data were statistically analyzed using one-way analysis of variance (ANOVA) and a Tukey test at the 95% confidence level. The results were as follows: 1. The fracture resistance of the undermined cavity filled with composite resin was about 75% of that of the natural tooth. 2. No significant difference in fracture loads of the composite resin restorations was found among the three cavitated groups. Within the limits of this study, it can be concluded that the fracture resistance of an undermined cavity filled with composite resin is lower than that of a natural tooth; however, the remaining tooth structure may be supported and saved by reinforcement with an adhesive restoration, even if that portion consists mainly of enamel and a little dentin.

CROSS-SECTIONAL MORPHOLOGY AND MINIMUM CANAL WALL WIDTHS IN C-SHAPED ROOT OF MANDIBULAR MOLARS (C-shaped canal의 절단면 분석을 통한 근관형태의 변화와 근관과 치아외벽간의 최소거리 분석에 관한 연구)

  • Song, Byung-Chul;Cho, Yong-Bum
    • Restorative Dentistry and Endodontics, v.32 no.1, pp.37-46, 2007
  • The C-shaped canal system is an anatomical variation seen mostly in mandibular second molars, although it can also occur in maxillary and other mandibular molars. The main anatomical feature of C-shaped canals is the presence of fins or a web connecting the individual root canals. The complexity of C-shaped canals prevents these canals from being cleaned, shaped, and obturated effectively during root canal therapy, and it sometimes leads to iatrogenic perforation from excessive preparation. The purpose of this study was to provide further knowledge of the anatomical configuration and the minimal thickness of the dentinal wall according to the level of the root. Thirty extracted mandibular second molars with fused roots and longitudinal grooves on the lingual or buccal surface of the root were collected from a native Korean population. Photographic images and radiographs were taken from the buccal, lingual, and apical directions. After the access cavity was prepared, the teeth were placed in 5.25% sodium hypochlorite solution for 2 hours to dissolve organic tissue from the root surface and the root canal system. After bench drying, all the teeth were embedded in self-curing resin. Each block was sectioned using a microtome (Accutom-50, Struers, Denmark) at intervals of 1 mm. The sectioned surfaces were photographed using a digital camera (Coolpix 995, Nikon, Japan) connected to a microscope. A total of 197 images were evaluated for canal configuration and for the minimal thickness of the dentinal wall between the canal and the external wall, using a 'Root Thickness Gauge Program' designed with Visual Basic. The results were as follows: 1. At the orifice level of all teeth, the most frequently observed configuration was Melton's Type C I (73%); however, the pattern changed to Types C II and C III when the sections were observed at the apical third. On the other hand, Type C III was observed at the orifice level of only 2 teeth, but this type could be seen in the apical region of the rest of the teeth. 2. The C-shaped canal showed a continuous, semicolon shape at the orifice level, but at the apical portion of the canal there was a high possibility of 2 or 3 separate canals. 3. The lingual wall was thinner than the buccal wall at the coronal, middle, and apical thirds of the root, but there were no statistically significant differences.
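The minimal wall thickness measurement described above (the shortest distance between the canal outline and the external root surface on each cross-sectional image) can be illustrated with a small contour-distance computation. The sketch below is a hypothetical stand-in for the Visual Basic 'Root Thickness Gauge Program'; the segmentation thresholds and the pixel-to-millimetre scale are assumptions, not values from the study.

```python
# Hypothetical sketch of a minimal dentinal wall thickness measurement:
# shortest distance between the canal contour and the external root contour
# on a cross-sectional image. Thresholds and pixel-to-mm scale are assumed.
import numpy as np
from scipy import ndimage
from scipy.spatial.distance import cdist
from skimage import io, measure

MM_PER_PIXEL = 0.02  # assumed calibration

def minimal_wall_thickness(image_path: str) -> float:
    gray = io.imread(image_path, as_gray=True)
    dentin = gray > 0.1                       # bright pixels taken as tooth structure
    root = ndimage.binary_fill_holes(dentin)  # whole root cross-section, canal filled in
    canal = root & ~dentin                    # canal lumen = filled region minus dentin
    outer_pts = np.vstack(measure.find_contours(root.astype(float), 0.5))
    canal_pts = np.vstack(measure.find_contours(canal.astype(float), 0.5))
    # Minimal canal-to-external-wall distance, converted to millimetres.
    return cdist(canal_pts, outer_pts).min() * MM_PER_PIXEL
```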

An Interdisciplinary Approach to the Human/Posthuman Discourses Emerging From Cybernetics and Artificial Intelligence Technology (4차 산업혁명 시대의 사이버네틱스와 휴먼·포스트휴먼에 관한 인문학적 지평 연구)

  • Kim, Dong-Yoon
    • Journal of Broadcast Engineering, v.24 no.5, pp.836-848, 2019
  • This paper aims at providing a critical view of cybernetics theory, especially the first generation on which artificial intelligence heavily depends today. It is a commonly accepted thought that the conception of artificial intelligence would not have been possible without the influence of N. Wiener's cybernetic, feedback-based information system. The founder of contemporary cybernetics had ethical concerns about avoiding the increasing entropy phenomena (social violence, economic misery, wars) produced through the negative dynamics of Western modernity, regarded as the most advanced form of humanism. In this changing civilizational atmosphere, the newly born cybernetic technology was thus firmly believed to be an antidote to these vices deeply rooted in humanism itself. But cybernetics has turned out to be a self-organizing, self-controlling mechanical system that entails the possibility of telegraphing the human brain (transformed into patterns) through the uploading of human brain neurons digitalized by the artificial intelligence embedded in computing technology. Against this background emerges the posthuman (or posthumanist) movement, whose concepts have been theorized mainly by ardent apostles such as N. K. Hayles, Neil Bedington, Laurent Alexandre, and Donna J. Haraway. The convergence of NBIC technologies, leading to the opening of an ever more digitalized society, has served as a catalyst to promote posthuman representations and different narratives, especially in the contemporary visual arts as well as in the humanities, including philosophy and fictional literature. Bruno Latour once wrote, "Modernity is often defined in terms of humanism, either as a way of saluting the birth of 'man' or as a way of announcing his death. But this habit is itself modern, because it remains asymmetrical. It overlooks the simultaneous birth of 'nonhumanity' - things, or objects, or beasts - and the equally strange beginning of a crossed-out God, relegated to the sidelines."4) These highly suggestive ideas enable us to better understand what kind of human being may emerge from the dazzlingly accelerating advancement of artificial intelligence technology. We wonder whether or not this newly born humankind will become, in essence, Homo Artificialis, a neuronal man stripped of his biological apparatus. In this unprecedented situation, humans will have to deal with enormous challenges involving the ethical, metaphysical, and existential implications for their lives.

A Study on the Aspects of Anti-Japanese and Pro-Japanese Literature Shown in Japanese Korean Literature History (일본 한국문학사에 나타난 항일문학과 친일문학 기술양상)

  • Son, Jiyoun
    • Cross-Cultural Studies, v.52, pp.133-164, 2018
  • The purpose of this paper is to focus on anti-Japanese literature and pro-Japanese literature as treated in histories of Korean literature written in Japan, and to observe the differences between Korean and Japanese perceptions of anti-Japanese and pro-Japanese literature. The analyzed texts are "Taste Korean Literature" by Saegusa Dosikatsu and "The Footsteps of Modern Literature of Chosun" by Shirakawa Yutaka, earnest modern Korean literary histories written from the perspective of Japanese writers; although he has written no overall history of literature, the perspective of Omura Masuo, at the forefront of Japanese research on modern and contemporary Korean literature, was also considered. The main results of the review are as follows. First, in Korean literary histories written in Japan, the frame of "pro-Japanese literature" is clearly embedded. This is clearly distinct from the approach of China or North Korea, and although it follows the narrative system of South Korean literary history, it also forms its own turning point between anti-Japanese and pro-Japanese literature. Second, even while following the narrative system of South Korean literary history, questions were constantly raised about the established Korean academic evaluation of anti-Japanese and pro-Japanese literature, and different readings were practiced; for example, they highly regard the literature of writers such as Kim Jong-han or Lee Seok-hoon, while not placing much importance on the pro-Japanese elements of Lee Gwang-soo that Korean academia considers important. Third, generous marks are given to writers with outstanding Japanese or to creative writing in Japanese. As a result, the internal logic of quite different pro-Japanese collaborators such as Chang Hyuk-ju, Kim Sa-ryang, Lee Seok-hoon, and Kim Yong-jae is dissolved by melting them into the same cage of "Japanese-language literature." The last point is the reading of the different inner thoughts of Kim Jong-han or Lee Seok-hoon, unlike outspoken pro-Japanese collaborators such as Lee Gwang-soo, Jang Hyuk-joo, or Kim Yong-je. These points require more in-depth analysis and will be pursued in follow-up work.

Development of a Model of Brain-based Evolutionary Scientific Teaching for Learning (뇌기반 진화적 과학 교수학습 모형의 개발)

  • Lim, Chae-Seong
    • Journal of The Korean Association For Science Education, v.29 no.8, pp.990-1010, 2009
  • To derive brain-based evolutionary educational principles, this study examined research on the structural and functional characteristics of the human brain, on the biological evolution occurring between and within organisms, and on the evolutionary attributes embedded in science itself and in individual scientists' activities. On the basis of the core characteristics of the human brain and the framework of universal Darwinism, or universal selectionism, consisting of generation-test-retention (g-t-r) processes, a Model of Brain-based Evolutionary Scientific Teaching for Learning (BEST-L) was developed. The model consists of three components, three steps, and an assessment part. The three components are the affective (A), behavioral (B), and cognitive (C) components. Each component consists of three steps: Diversifying → Emulating (Executing, Estimating, Evaluating) → Furthering (ABC-DEF). The model is 'brain-based' in its consecutive incorporation of the affective component, which is based on the limbic system of the human brain associated with emotions; the behavioral component, which is associated with the occipital lobes performing visual processing, the temporal lobes performing language generation and understanding, and the parietal lobes, which receive and process sensory information and execute motor activities of the body; and the cognitive component, which is based on the prefrontal lobes involved in thinking, planning, judging, and problem solving. On the other hand, the model is 'evolutionary' in that it proceeds through the diversifying step, which generates variants in each component, the emulating step, which tests and selects useful or valuable variants, and the furthering step, which extends or applies the selected ones. For the three ABC components, to reflect the importance of emotional factors as a starting point of scientific activity as well as the dominant role of the limbic system relative to the cortex, the model emphasizes the DARWIN (Driving Affective Realm for Whole Intellectual Network) approach.

A Study on Developing a VKOSPI Forecasting Model via GARCH Class Models for Intelligent Volatility Trading Systems (지능형 변동성트레이딩시스템개발을 위한 GARCH 모형을 통한 VKOSPI 예측모형 개발에 관한 연구)

  • Kim, Sun-Woong
    • Journal of Intelligence and Information Systems, v.16 no.2, pp.19-32, 2010
  • Volatility plays a central role in both academic and practical applications, especially in pricing financial derivative products and trading volatility strategies. This study presents a novel mechanism based on generalized autoregressive conditional heteroskedasticity (GARCH) models that is able to enhance the performance of intelligent volatility trading systems by predicting Korean stock market volatility more accurately. In particular, we embedded into our model the concept of volatility asymmetry documented widely in the literature. The newly developed Korean stock market volatility index of the KOSPI 200, the VKOSPI, is used as a volatility proxy. It is the price of a linear portfolio of KOSPI 200 index options and measures the effect of the expectations of dealers and option traders on stock market volatility over 30 calendar days. The KOSPI 200 index options market started in 1997 and has become the most actively traded market in the world; its trading volume of more than 10 million contracts a day is the highest of all stock index options markets. Therefore, analyzing the VKOSPI is of great importance in understanding the volatility inherent in option prices and can afford trading ideas for futures and options dealers. Using the VKOSPI as a volatility proxy avoids the statistical estimation problems associated with other measures of volatility, since the VKOSPI is the model-free expected volatility of market participants calculated directly from transacted option prices. This study estimates symmetric and asymmetric GARCH models for the KOSPI 200 index from January 2003 to December 2006 by the maximum likelihood procedure. The asymmetric GARCH models include the GJR-GARCH model of Glosten, Jagannathan and Runkle, the exponential GARCH model of Nelson, and the power autoregressive conditional heteroskedasticity (ARCH) model of Ding, Granger and Engle. The symmetric GARCH model is the basic GARCH(1, 1). Tomorrow's forecasted value and the change direction of stock market volatility are obtained by recursive GARCH specifications from January 2007 to December 2009 and are compared with the VKOSPI. Empirical results indicate that negative unanticipated returns increase volatility more than positive return shocks of equal magnitude decrease it, indicating the existence of volatility asymmetry in the Korean stock market. The point value and change direction of tomorrow's VKOSPI are estimated and forecasted by the GARCH models. A volatility trading system is developed using the forecasted change direction of the VKOSPI: if the VKOSPI is expected to rise tomorrow, a long straddle or strangle position is established; a short straddle or strangle position is taken if the VKOSPI is expected to fall tomorrow. Total profit is calculated as the cumulative sum of the VKOSPI percentage changes: if the forecasted direction is correct, the absolute value of the VKOSPI percentage change is added to the trading profit; it is subtracted from the trading profit if the forecasted direction is incorrect. For the in-sample period, the power ARCH model fits best on a statistical metric, the Mean Squared Prediction Error (MSPE), and the exponential GARCH model shows the highest Mean Correct Prediction (MCP). The power ARCH model also fits best for the out-of-sample period and provides the highest probability of predicting the VKOSPI change direction for tomorrow. Generally, the power ARCH model shows the best fit for the VKOSPI. All the GARCH models provide trading profits for the volatility trading system, and the exponential GARCH model shows the best performance, an annual profit of 197.56%, during the in-sample period. The GARCH models also produce trading profits during the out-of-sample period, except for the exponential GARCH model. During the out-of-sample period, the power ARCH model shows the largest annual trading profit, 38%. The volatility clustering and asymmetry found in this research are reflections of volatility non-linearity. This further suggests that combining the asymmetric GARCH models with artificial neural networks could significantly enhance the performance of the suggested volatility trading system, since artificial neural networks have been shown to effectively model nonlinear relationships.
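The forecasting and scoring logic described in this abstract can be sketched with the open-source `arch` package: recursively fit an asymmetric GARCH specification (GJR-GARCH, close to the asymmetric models named above) on index returns, forecast the next day's variance, call the direction, and add the absolute VKOSPI percentage change when the call is right and subtract it when wrong. The sketch below uses simulated placeholder data and a simple variance-rise rule as the direction call; it is an assumed illustration, not the authors' implementation.

```python
# Hedged sketch: recursive GJR-GARCH forecast + direction-based straddle P&L
# scoring as described in the abstract. All data are simulated placeholders.
import numpy as np
import pandas as pd
from arch import arch_model

rng = np.random.default_rng(1)
returns = pd.Series(rng.normal(0, 1.2, 1000))            # stand-in for KOSPI 200 daily returns (%)
vkospi = pd.Series(20 + rng.normal(0, 1.5, 1000)).abs()  # stand-in for the VKOSPI level

profit = 0.0
for t in range(950, len(returns) - 1):
    # Recursive estimation: refit GJR-GARCH(1,1) on all data up to day t.
    res = arch_model(returns.iloc[:t], vol="GARCH", p=1, o=1, q=1).fit(disp="off")
    sigma2_forecast = float(res.forecast(horizon=1).variance.iloc[-1, 0])
    sigma2_today = float(np.asarray(res.conditional_volatility)[-1]) ** 2

    predicted_up = sigma2_forecast > sigma2_today  # long straddle if volatility expected to rise
    vkospi_change = (vkospi.iloc[t + 1] - vkospi.iloc[t]) / vkospi.iloc[t] * 100
    actual_up = vkospi_change > 0

    # Add |change| when the direction call is right, subtract it when wrong.
    profit += abs(vkospi_change) if predicted_up == actual_up else -abs(vkospi_change)

print(f"Cumulative trading profit (VKOSPI %-change units): {profit:.1f}")
```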

A Study on the Meaning and Strategy of Keyword Advertising Marketing

  • Park, Nam Goo
    • Journal of Distribution Science, v.8 no.3, pp.49-56, 2010
  • At the initial stage of Internet advertising, banner advertising came into fashion. As the Internet developed into a central part of daily life and competition in the online advertising market grew fierce, there was not enough space for banner advertising, which rushed to portal sites only. All these factors were responsible for an upsurge in advertising prices. Consequently, the high-cost, low-efficiency problems of banner advertising were raised, which led to the emergence of keyword advertising as a new type of Internet advertising to replace its predecessor. In the early 2000s, when Internet advertising became active, display advertising, including banner advertising, dominated the Net. However, display advertising showed signs of gradual decline and registered negative growth in 2009, whereas keyword advertising grew rapidly and started to outdo display advertising as of 2005. Keyword advertising refers to the advertising technique that exposes relevant advertisements at the top of search sites when one searches for a keyword. Instead of exposing advertisements to unspecified individuals as banner advertising does, keyword advertising, a targeted advertising technique, shows advertisements only when customers search for a desired keyword, so that only highly prospective customers are given a chance to see them. In this context, it is also referred to as search advertising. It is regarded as more aggressive advertising with a higher hit rate than previous advertising in that, instead of the seller discovering customers and running advertisements for them as TV, radio, or banner advertising does, it exposes advertisements to visiting customers. Keyword advertising makes it possible for a company to seek publicity online simply by making use of a single word and to achieve maximum efficiency at minimum cost. The strong point of keyword advertising is that customers are allowed to contact the products in question directly, through advertising that is more efficient than that of mass media such as TV and radio. The weak point is that a company must register its advertisement on each and every portal site and finds it hard to exercise substantial supervision over its advertisement, with the possibility of advertising expenses exceeding profits. Keyword advertising serves as the most appropriate method of advertising for the sales and publicity of small and medium enterprises, which need maximum advertising effect at low advertising cost. At present, keyword advertising is divided into CPC advertising and CPM advertising. The former is known as the most efficient technique and is also referred to as advertising based on the meter-rate system: a company pays for the number of clicks on a searched keyword. This model is representatively adopted by Overture, Google's Adwords, Naver's Clickchoice, and Daum's Clicks. CPM advertising depends on the flat-rate payment system, making a company pay for its advertisement on the basis of the number of exposures, not the number of clicks. This method fixes a price for advertisement on the basis of 1,000 exposures and is mainly adopted by Naver's Timechoice, Daum's Speciallink, and Nate's Speedup. At present, the CPC method is most frequently adopted. The weak point of the CPC method is that advertising costs can rise through repeated clicks from the same IP. If a company makes good use of strategies for maximizing the strong points of keyword advertising and complementing its weak points, it is highly likely to turn its visitors into prospective customers. Accordingly, an advertiser should analyze customers' behavior and approach them in a variety of ways, trying hard to find out what they want. With this in mind, he or she has to put multiple keywords into use when running ads. When first running an ad, he or she should give priority to deciding which keyword to select. The advertiser should consider how many individuals using a search engine will click the keyword in question and how much he or she has to pay for the advertisement. As the popular keywords that search-engine users frequently use are expensive in terms of unit cost per click, advertisers without much money for advertising at the initial phase should pay attention to detailed keywords suitable to their budget. Detailed keywords, also referred to as peripheral keywords or extension keywords, can be described as combinations of major keywords. Most keywords are in the form of text. The biggest strong point of text-based advertising is that it looks like search results, causing little antipathy. But it fails to attract much attention because most keyword advertising is in the form of text. Image-embedded advertising is easy to notice because of its images, but it is exposed on the lower part of a web page and is clearly recognized as an advertisement, which leads to a low click-through rate; however, its prices are lower than those of text-based advertising. If a company owns a logo or a product that is easy for people to recognize, the company is well advised to make good use of image-embedded advertising to attract Internet users' attention. Advertisers should analyze their logs and examine customers' responses based on the events of the sites in question and the composition of products, as a vehicle for monitoring their behavior in detail. Keyword advertising also allows them to analyze the advertising effects of exposed keywords through log analysis. Log analysis refers to a close analysis of the current situation of a site, based on information about visitors derived from the number of visitors, page views, and cookie values. It is in the log files generated by each Web server that a user's IP, the pages used, the time of use, and cookie values are stored. The log files contain a huge amount of data; as it is almost impossible to analyze them directly, one analyzes them using log-analysis solutions. The generic information that can be extracted from log-analysis tools includes the total number of page views, the average number of page views per day, the number of basic page views, the number of page views per visit, the total number of hits, the average number of hits per day, the number of hits per visit, the number of visits, the average number of visits per day, the net number of visitors, the average number of visitors per day, one-time visitors, visitors who have come more than twice, and average usage hours. Such data are also useful for analyzing the situation and current status of rival companies, as well as for benchmarking. As keyword advertising exposes advertisements exclusively on search-result pages, competition among advertisers attempting to preoccupy popular keywords is very fierce. Some portal sites keep giving priority to existing advertisers, whereas others give all advertisers a chance to purchase the keywords in question after the advertising contract is over. If an advertiser relies on keywords sensitive to seasons and timeliness on sites that give priority to established advertisers, he or she may as well purchase a vacant advertising slot lest the appropriate timing for advertising be missed. However, Naver does not give priority to existing advertisers for any keyword advertisements; in this case, one can preoccupy keywords by entering into a contract after confirming the contract period for advertising. This study is designed to take a look at marketing for keyword advertising and to present effective strategies for keyword advertising marketing. At present, the Korean CPC advertising market is virtually monopolized by Overture. Its strong points are that Overture is based on the CPC charging model and that advertisements are registered at the top of the most representative portal sites in Korea. These advantages make it the most appropriate medium for small and medium enterprises to use. However, the CPC method of Overture has its weak points, too: it is not the only perfect advertising model among the search advertisements in the online market. So it is absolutely necessary that small and medium enterprises, including independent shopping malls, complement the weaknesses of the CPC method and make good use of strategies for maximizing its strengths so as to increase their sales and create a point of contact with customers.
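The CPC and CPM charging models described above differ only in the billing unit (clicks versus 1,000 exposures), so comparing them is simple arithmetic. The sketch below uses made-up rates and volumes purely to illustrate that comparison.

```python
# Illustrative comparison of CPC vs CPM keyword-advertising cost, following the
# charging models described in the abstract. All numbers are made-up examples.

def cpc_cost(clicks: int, cost_per_click: float) -> float:
    """Meter-rate model: pay per click on the searched keyword."""
    return clicks * cost_per_click

def cpm_cost(impressions: int, rate_per_thousand: float) -> float:
    """Flat-rate model: pay per 1,000 exposures, regardless of clicks."""
    return impressions / 1000 * rate_per_thousand

impressions = 50_000              # times the ad is shown
ctr = 0.02                        # assumed click-through rate
clicks = int(impressions * ctr)

print(f"CPC cost: {cpc_cost(clicks, cost_per_click=300):,.0f} KRW")
print(f"CPM cost: {cpm_cost(impressions, rate_per_thousand=5_000):,.0f} KRW")
```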


A comparative study on the fit and screw joint stability of ready-made abutment and CAD-CAM custom-made abutment (기성 지대주와 맞춤형 CAD-CAM 지대주의 적합 및 나사 안정성 비교)

  • Kim, Jong-Wook;Heo, Yu-Ri;Kim, Hee-Jung;Chung, Chae-Heon
    • The Journal of Korean Academy of Prosthodontics, v.51 no.4, pp.276-283, 2013
  • Purpose: The purpose of this study was to investigate the fit and screw joint stability of a ready-made abutment and CAD-CAM custom-made abutments. Materials and methods: The Osstem implant system was used. A ready-made abutment (Transfer abutment, Osstem Implant Co. Ltd, Busan, Korea), a CAD-CAM custom-made abutment (CustomFit abutment, Osstem Implant Co. Ltd, Busan, Korea), and a domestically manufactured CAD-CAM custom-made abutment (Myplant, Raphabio Co., Seoul, Korea) were fabricated, five of each, and screws were provided by each company. Fixtures and abutments were tightened to 30 Ncm according to the manufacturers' instructions, and the preload reverse torque values were then measured three times. The Kruskal-Wallis test was used for statistical analysis of the preload reverse torque values (α = .05). After the specimens were embedded in epoxy resin, wet cutting and polishing were performed, and FE-SEM imaging of the contact interface was carried out. Results: The preload reverse torque values were 26.0 ± 0.30 Ncm (ready-made abutment; Transfer abutment), 26.3 ± 0.32 Ncm (CAD-CAM custom-made abutment; CustomFit abutment), and 24.7 ± 0.67 Ncm (CAD-CAM custom-made abutment; Myplant). The domestically manufactured CAD-CAM custom-made abutment (Myplant) presented a statistically significantly lower preload reverse torque value than the ready-made abutment (Transfer abutment) and the CAD-CAM custom-made abutment (CustomFit abutment) manufactured by the same company (P=.027), and it showed a marginal gap at the fixture-abutment interface. Conclusion: Within the limitations of the present in-vitro study, the domestically manufactured CAD-CAM custom-made abutment (Myplant) showed lower screw joint stability and poorer fit between fixture and abutment.
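The group comparison reported here (a Kruskal-Wallis test on the repeated reverse torque measurements, α = .05) takes only a few lines; the sample arrays below are simulated around the reported group means and are not the study's raw data.

```python
# Sketch of the Kruskal-Wallis comparison of preload reverse torque values (Ncm).
# Samples are simulated around the reported means; they are not the study data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
transfer  = rng.normal(26.0, 0.30, 15)   # ready-made abutment, 5 specimens x 3 repeats
customfit = rng.normal(26.3, 0.32, 15)   # CAD-CAM custom-made abutment (same manufacturer)
myplant   = rng.normal(24.7, 0.67, 15)   # domestically manufactured CAD-CAM abutment

h_stat, p_value = stats.kruskal(transfer, customfit, myplant)
print(f"Kruskal-Wallis H = {h_stat:.2f}, p = {p_value:.4f}")  # compare against alpha = 0.05
```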

The nanoleakage patterns of experimental hydrophobic adhesives after load cycling (Load cycling에 따른 소수성 실험용 상아질 접착제의 nanoleakage 양상)

  • Sohn, Suh-Jin;Chang, Ju-Hae;Kang, Suk-Ho;Yoo, Hyun-Mi;Cho, Byeong-Hoon;Son, Ho-Hyun
    • Restorative Dentistry and Endodontics, v.33 no.1, pp.9-19, 2008
  • The purpose of this study was (1) to compare the nanoleakage patterns of a conventional 3-step etch-and-rinse adhesive system and two experimental hydrophobic adhesive systems, and (2) to investigate changes in the nanoleakage patterns after load cycling. Two kinds of hydrophobic experimental adhesives, an ethanol-containing adhesive (EA) and a methanol-containing adhesive (MA), were prepared. Thirty extracted human molars were embedded in resin blocks, and the occlusal thirds of the crowns were removed. The polished dentin surfaces were etched with a 35% phosphoric acid etching gel and rinsed with water. Scotchbond Multi-Purpose (MP), EA, and MA were used for the bonding procedure. Z-250 composite resin was built up on the adhesive-treated surfaces. Five teeth of each dentin adhesive group were subjected to mechanical load cycling. The teeth were sectioned into 2 mm thick slabs and then stained with 50% ammoniacal silver nitrate. Ten specimens from each group were examined under a scanning electron microscope in backscattered electron mode. All photographs were analyzed using image analysis software. Three regions of each specimen were used for evaluation of the silver uptake within the hybrid layer. The area of silver deposition was calculated and expressed as a gray value. Data were statistically analyzed by two-way ANOVA, and post-hoc multiple comparisons were made with Scheffe's test. Silver particles were observed in all groups; however, they were more sparsely distributed in the EA and MA groups than in the MP group (p < .0001). There were no changes in nanoleakage patterns after load cycling.
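The nanoleakage quantification described above (silver uptake within the hybrid layer expressed as a gray value from backscattered electron images) can be illustrated with a small image-analysis sketch. The threshold and region coordinates below are assumptions for illustration only, not the image-analysis software actually used in the study.

```python
# Hypothetical sketch of quantifying silver deposition in a backscattered
# electron image: silver appears bright, so a selected hybrid-layer region is
# summarized as a mean gray value and a bright-pixel area fraction.
# The threshold, region, and file name are assumed placeholders.
import numpy as np
from skimage import io

def silver_uptake(image_path: str, region: tuple, threshold: int = 200) -> dict:
    gray = np.asarray(io.imread(image_path))     # 8-bit grayscale BSE image assumed
    r0, r1, c0, c1 = region                      # hybrid-layer region of interest
    roi = gray[r0:r1, c0:c1]
    silver = roi >= threshold                    # bright pixels taken as silver deposits
    return {
        "mean_gray_value": float(roi.mean()),
        "silver_area_fraction": float(silver.mean()),
    }

# Example call with a placeholder file name and region.
# print(silver_uptake("specimen_01_bse.tif", region=(100, 160, 0, 512)))
```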