• Title/Abstract/Keywords: Ad meter

Search results: 10 items (processing time 0.02 s)

가청주파수 궤도회로의 진단 및 시험 장비 개선에 대한 연구 (A Study on the Improvement of Test and Diagnosis Device for Audio Frequency Track Circuit)

  • 강장규;김재철
    • 조명전기설비학회논문지
    • /
    • Vol. 24, No. 12
    • /
    • pp.147-155
    • /
    • 2010
  • We studied performance improvements to the TTM (TI21 Test Meter), the test and diagnosis device for the jointless audio-frequency track circuit used on Korean electric railways to the TI21 standard. The upgraded device is the AD-TTM (Advanced TI21 Test Meter), which can measure the alternating-frequency USB (upper signal band) and LSB (lower signal band). In the audio-frequency track circuit, signals at ±17 Hz about the nominal frequency are demodulated and supplied to the track relay through an AND gate, so a function that measures the error between the USB and LSB is important. The need for the AD-TTM will stand out in electric railway systems because it is simpler and more accurate than the former device.
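As a rough illustration of the USB/LSB error measurement the abstract describes, the sketch below compares each sideband's offset from the nominal frequency against the expected ±17 Hz shift. The 1699 Hz carrier and the sample readings are hypothetical, not values taken from the TI21 standard.

```python
def sideband_error(f_nominal, f_usb, f_lsb, expected_offset=17.0):
    """Deviation of each sideband from the expected +/-17 Hz offset,
    and the asymmetry (error) between the two sidebands, all in Hz."""
    usb_dev = (f_usb - f_nominal) - expected_offset
    lsb_dev = (f_nominal - f_lsb) - expected_offset
    return usb_dev, lsb_dev, usb_dev - lsb_dev

# Hypothetical readings around a 1699 Hz carrier:
usb_dev, lsb_dev, err = sideband_error(1699.0, 1716.2, 1682.1)
```

A symmetric signal gives `err` near zero; a large asymmetry between the two sidebands is what the AD-TTM's error-measurement function is meant to flag.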

슈퍼볼 애드미터와 광고 어필사이의 문화측면에서 연관성 연구 (A Study on the Relationships between Super Bowl Ad Meter and Advertising Appeals in Cultural Dimensions)

  • 김진설;이윤철
    • 국제지역연구
    • /
    • Vol. 20, No. 1
    • /
    • pp.183-208
    • /
    • 2016
  • The purpose of this study is to test whether Super Bowl commercials with high Ad Meter scores carry more of the advertising appeals associated with Hofstede's model. Its significance lies in analyzing American consumers' preferences for advertising, from a cultural perspective, at the Super Bowl, a quintessentially American sporting event. The study is based on a content analysis of the top-10 and bottom-10 Super Bowl commercials over the seven years from 2005 to 2011. The top and bottom commercials were identified from the Ad Meter scores published by USA Today, and two coders, together with the author, coded a total of 139 commercials. The top-10 group was found to show more of the youth, untamed, and magic appeals associated with the low uncertainty-avoidance dimension of Hofstede's model. In contrast, appeals such as ornamental, vanity, and status, associated with the high power-distance dimension, appeared mostly in the bottom-10 group. Based on these content-analysis findings, the relationship between particular appeals and score levels suggests which culturally grounded appeals to adopt when producing commercials targeting Super Bowl television viewers.

A Case Study on Energy focused Smart City, London of the UK: Based on the Framework of 'Business Model Innovation'

  • Song, Minzheong
    • International journal of advanced smart convergence
    • /
    • Vol. 9, No. 2
    • /
    • pp.8-19
    • /
    • 2020
  • We trace the energy-focused smart city evolution of the UK through the "Smart London Plan (SLP)" project. The theoretical logic of business model innovation is discussed and a research framework for the evolving energy-focused smart city is formulated. The starting point is the silo system. In the second stage, private investment in smart meters lays the foundation for the next stages. As a result, the UK's smart energy sector has evolved from smart meter installation through the smart grid to new business models such as the water-energy nexus and the microgrid. Before the government's smart meter installation, the electricity system was centralized. However, after a consumer engagement plan was set up to help consumers understand the benefits they can secure through smart meters, customer behavior changed. Data analytics firms enable a greater understanding of consumer behavior and help the energy industry become smart by controlling, securing and using that data to improve the energy system. In the third stage, distribution network operators (DNOs) were allowed access to smart meter data, and segmentation begins. In the fourth stage, through collaboration between Ofwat and Ofgem, it becomes possible to eliminate unnecessary duplication of work and reduce conflicts of interest between water and electricity. In the fifth stage, the smart meter and grid are integrated as an "adaptive" system, and a transition from DNO to DSO is accomplished for integrated operation. The microgrid is a prototype of an "adaptive" smart grid. The previous steps enable London to achieve platform leadership supporting the increasing electrification of the heating and transport sectors and the smart home.

Cassava Tops Ensiled With or Without Molasses as Additive Effects on Quality, Feed Intake and Digestibility by Heifers

  • Van Man, Ngo;Wiktorsson, Hans
    • Asian-Australasian Journal of Animal Sciences
    • /
    • Vol. 14, No. 5
    • /
    • pp.624-630
    • /
    • 2001
  • Two experiments were carried out on the effects of a molasses additive on cassava tops silage quality, and on its feed intake and digestibility by growing Holstein × local crossbred heifers. In the ensiling study, sixteen plastic bags of one meter diameter and two meters length were allocated in a 2 × 2 factorial design with four replicates: with and without the molasses additive, and with two storage times (2 and 3 months). The silage produced in the first experiment was used in the feed intake and digestibility study. Six crossbred Holstein heifers, 160-180 kg live weight, were randomly allocated in a 3 × 2 change-over design to three treatments: Guinea grass ad libitum; 70% of grass ad libitum with a supplement of non-molasses cassava silage ad libitum; and 70% of grass ad libitum with a supplement of molasses cassava silage ad libitum. Ensiling was shown to be a satisfactory method for preserving cassava tops. The HCN content was significantly reduced from 840 mg kg⁻¹ to 300 or 130 mg kg⁻¹, depending on the storage period. The tannin content was not significantly changed. The molasses additive resulted in lower pH, crude protein (CP) and NDF, and higher DM content, but did not otherwise affect the chemical composition. The voluntary feed intake per 100 kg live weight of the heifers was 2.59, 2.65 and 2.91 kg DM for the Guinea grass, non-molasses cassava tops silage and molasses cassava tops silage diets, respectively. Crude protein intake was significantly improved in the cassava tops silage diets. The apparent digestibility of DM, OM, CP, NDF and ADF decreased with the silage supplement diets. No significant difference in digestibility was found between the non-molasses and molasses silage diets. The digestibility coefficients of DM, OM, CP, NDF and ADF were 49.4, 52.1, 45.81, 36.6 and 27.7 for the non-molasses cassava tops silage and 49.7, 51.9, 47.55, 28.1 and 19.5 for the molasses cassava tops silage, respectively. It is concluded that cassava tops can be preserved successfully by ensiling and that cassava tops silage is a good feed resource for cattle.
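The intake figures above are reported per 100 kg of live weight, so scaling them to an individual animal is simple proportion. A minimal sketch (the 170 kg weight is illustrative, chosen as the mid-range of the 160-180 kg heifers):

```python
def daily_dm_intake(intake_per_100kg_lw, live_weight_kg):
    """Scale a DM intake reported per 100 kg live weight to one animal (kg/day)."""
    return intake_per_100kg_lw * live_weight_kg / 100.0

# Intakes reported in the study (kg DM per 100 kg live weight):
diets = {
    "guinea_grass": 2.59,
    "non_molasses_silage": 2.65,
    "molasses_silage": 2.91,
}
# Expected daily DM intake for a 170 kg heifer on each diet:
intake_170 = {diet: daily_dm_intake(v, 170) for diet, v in diets.items()}
```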

300m급 수중ROV 개발에 관한 연구 (A study on Development of 300m Class Underwater ROV)

  • 이종식;이판묵;홍석원
    • 한국해양공학회지
    • /
    • Vol. 8, No. 1
    • /
    • pp.50-61
    • /
    • 1994
  • A 300-meter-class ROV (CROV300) is composed of three parts: a surface unit, a tether cable and an underwater vehicle. The vehicle controller is based on two processors: an Intel 8097 16-bit one-chip micro-processor and a Texas Instruments TMS320E25 digital signal processor. In this paper, the surface controller, the vehicle controller and the peripheral devices interfaced with the processors are described. These controllers transmit and receive measured status data and control commands through RS422 serial communication. Depth, heading, trimming, camera tilting, and leakage signals are acquired through the embedded A/D converters of the 8097. On the other hand, the ROV's altitude and obstacle avoidance signals are processed by the DSP and periodically fetched by the 8097. The processor is interfaced with a 4-channel 12-bit D/A converter to generate control signals for the DC motors, and with several transistors to handle the relays for on/off switching of external devices.
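The 4-channel 12-bit D/A stage implies a mapping from a motor command to a DAC code. The paper does not give that mapping, so the sketch below assumes a unipolar converter whose mid-scale code corresponds to zero thrust (a common arrangement, but an assumption here).

```python
def motor_command_to_dac(command, v_ref=5.0, bits=12):
    """Map a normalized motor command in [-1.0, 1.0] to a DAC code and
    output voltage, assuming a unipolar DAC with mid-scale = zero thrust."""
    if not -1.0 <= command <= 1.0:
        raise ValueError("command out of range")
    full_scale = (1 << bits) - 1          # 4095 for a 12-bit converter
    code = round((command + 1.0) / 2.0 * full_scale)
    voltage = code / full_scale * v_ref
    return code, voltage
```

With 12 bits the full-thrust-reverse to full-thrust-forward range resolves to 4096 steps, i.e. about 1.2 mV per step at a 5 V reference.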


Time Slot Exchange Protocol in a Reservation Based MAC for MANET

  • Koirala, Mamata;Ji, Qi;Choi, Jae-Ho
    • 융합신호처리학회논문지
    • /
    • Vol. 10, No. 3
    • /
    • pp.181-185
    • /
    • 2009
  • Recently, attention to self-organizing mobile ad hoc networks has been escalating along with the progressive deployment of wireless networks in everyday life. Being readily deployable, the MANET (mobile ad hoc network) finds applications in emergency medical services, customized calling services, group-based communications, and military uses. In this paper we investigate a time slot exchange problem found in a time-slot-based MAC designed for IEEE 802.11b interfaces composing a MANET. The paper provides a method to maintain voice call quality by providing a new time slot when the channel assigned to the current slot becomes noisy with interference induced from other nodes belonging to the same and/or other subgroups. To assess the performance of the proposed algorithm, a set of simulations using the OPNET modeler was performed, assuming that the IEEE 802.11b interfaces operate under a modified MAC, a time-slot-based reservation MAC implemented in the PCF part of the superframe. For a real-time voice call service over a MANET covering a 500 × 500 meter area with up to 100 nodes, the simulation results were collected and analyzed with respect to packet loss rate and packet delay. The results show that the proposed time slot exchange protocol improves voice call quality over that of plain DCF.
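The core decision the abstract describes — move a call to a cleaner slot when interference degrades the assigned one — can be sketched as a selection rule. The slot numbering, SNR threshold, and reservation set below are illustrative; the actual protocol also negotiates the exchange with neighbouring nodes over the reservation MAC.

```python
def pick_replacement_slot(current_slot, slot_snr_db, reserved, threshold_db=15.0):
    """Return the cleanest free slot whose measured SNR clears the threshold,
    or keep the current slot if no better candidate exists."""
    candidates = [
        (snr, slot)
        for slot, snr in slot_snr_db.items()
        if slot != current_slot and slot not in reserved and snr >= threshold_db
    ]
    return max(candidates)[1] if candidates else current_slot

# Slot 2 is noisy; slot 0 is reserved by another node; slot 3 is clean.
new_slot = pick_replacement_slot(2, {0: 22.0, 1: 9.5, 2: 4.0, 3: 25.0}, reserved={0})
```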


김해지방의 강수의 산도 및 화학적 성분 특성 (The Characteristics of Chemical Components and Acidity in the Precipitation at Kimhae Area)

  • 박종길;황용식
    • 한국환경과학회지
    • /
    • Vol. 6, No. 5
    • /
    • pp.461-472
    • /
    • 1997
  • This study was carried out to investigate the characteristics of the chemical components and acidity of precipitation in the Kimhae area from March 1992 to June 1994. The pH values, the concentrations of soluble ions (Cl⁻, NO₂⁻, NO₃⁻, SO₄²⁻, PO₄³⁻, F⁻, Mg²⁺, Ca²⁺, Mn²⁺, K⁺) and insoluble metals (Cr, Si, Zn, Pb, Cu, Fe, Mn, Mg, Cd, V, Ca) were measured by pH meter, IC (Ion Chromatography) and ICP (Inductively Coupled Plasma). The data were analyzed for the daily and hourly distribution characteristics of acidity and chemical components, as well as the correlations between them. The results are as follows. 1. The pH of precipitation in the Kimhae area ranged from 3.45 to 6.80, with an average of pH 4.62, and the main chemical components were SO₄²⁻, Cl⁻ and NO₃⁻. The highest pH values and concentrations appeared in the initial rain, which might result from urbanization and industrialization in this area and from long-range transport from China. 2. The hourly concentrations of the main anions related to the pH of the rainwater ranked SO₄²⁻ > NO₃⁻ > Cl⁻. The hourly concentrations of heavy metals and each ion were highly correlated with the pH of the precipitation.
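Since pH is the negative logarithm of hydrogen-ion concentration, the reported values translate directly into acidity ratios. A small sketch (the pH 5.6 reference is the usual figure for rain in equilibrium with atmospheric CO₂, not a value from this study):

```python
def h_concentration(ph):
    """Hydrogen-ion concentration (mol/L) from a pH reading."""
    return 10.0 ** (-ph)

def acidity_vs_clean_rain(ph_sample, ph_clean=5.6):
    """How many times more acidic a sample is than unpolluted rain (pH 5.6)."""
    return h_concentration(ph_sample) / h_concentration(ph_clean)

ratio_mean = acidity_vs_clean_rain(4.62)   # the study's mean pH
ratio_min = acidity_vs_clean_rain(3.45)    # the most acidic sample
```

The mean pH of 4.62 is therefore roughly an order of magnitude more acidic than clean rain, and the pH 3.45 minimum more than a hundred times so.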


유역특성에 의한 합성단위도의 유도에 관한 연구 (Derivation of the Synthetic Unit Hydrograph Based on the Watershed Characteristics)

  • 서승덕
    • 한국농공학회지
    • /
    • Vol. 17, No. 1
    • /
    • pp.3642-3654
    • /
    • 1975
  • The purpose of this thesis is to derive a unit hydrograph that may be applied to ungauged watersheds from the relations between directly measurable unitgraph properties, such as the peak discharge (qp), the time to peak discharge (Tp), and the lag time (Lg), and watershed characteristics, such as the river length (L) from the given station to the upstream limit of the watershed in km, the river length from the station to the centroid of the watershed in km (Lca), and the main stream slope in meters per km (S). Another procedure, based on routing a time-area diagram through catchment storage and named the Instantaneous Unit Hydrograph (IUH), and the dimensionless unitgraph are also briefly analysed. The basic data (1969 to 1973) used in these studies are 9 recording level gauges and rating curves, 41 rain gauges and pluviographs, and 40 observed unitgraphs from the 9 sub-watersheds of the Nak Dong River basin. The results are summarized as follows. 1. The time in hours from the start of rise to the peak rate (Tp) generally occurred at 0.3 Tb (the time base of the hydrograph), with some indication of higher values for larger watersheds; the base flow is comparatively higher than in the other small watersheds. 2. The losses from rainfall were divided into an initial loss and a continuing loss. The initial loss may be defined as the portion of storm rainfall that is intercepted by vegetation, held in depression storage or infiltrated at a high rate early in the storm; the continuing loss is the loss that continues at a constant rate throughout the storm after the initial loss has been satisfied, and it approximates the nearly constant rate of infiltration (Φ-index method). The loss rate from this analysis was estimated at approximately 50 per cent of the rainfall excess during the period of surface runoff. 3. For the stream slope it is usual to consider the main stream only, without giving specific consideration to tributaries; it is desirable to develop a single measure of slope representative of the whole stream. Mean channel slopes of 1 meter per 200 meters and 1 meter per 1400 meters were found at Gazang and Jindong, respectively. These slopes are slightly low in the light of other river studies, so the flood concentration rate might be slightly low in the Nak Dong river basin. 4. The watershed lag (Lg, hrs) could be expressed by Lg = 0.253 (L·Lca)^0.4171, where the product L·Lca is a measure of the size and shape of the watershed. For the logarithms, the correlation coefficient for Lg was 0.97, showing that Lg is closely related to the watershed characteristics L and Lca. 5. An expression for the basin containing the slope takes the form Lg = 0.545 (L·Lca/√S)^0.346. For the logarithms, the correlation coefficient for Lg was again 0.97, showing that Lg is closely related to the basin characteristics as well; care is needed in analyses relating to the mean slopes. 6. The peak discharge per unit area of the unitgraph for the standard duration tr, in m³/sec/km², was given by qp = 10^(−0.52 − 0.0184 Lg), with an indication of lower values for watersheds with higher lag times. For the logarithms, the correlation coefficient for qp was 0.998, which is highly significant. The peak discharge of the unitgraph for an area A can therefore be expected to take the form Qp = qp·A (m³/sec). 7. Using the unitgraph parameter Lg, the base length of the unitgraph, in days, was adopted as Tb = 0.73 + 2.073 (Lg/24), with a highly significant correlation coefficient of 0.92. The constants of this equation are fixed by the procedure used to separate base flow from direct runoff. 8. The width W75 of the unitgraph at a discharge equal to 75 per cent of the peak discharge, in hours, and the width W50 at a discharge equal to 50 per cent of the peak discharge, in hours, can be estimated from W75 = 1.61/qp^1.05 and W50 = 2.5/qp^1.05, respectively. This provides a supplementary guide for sketching the unitgraph. 9. The above equations define the three factors necessary to construct the unitgraph for the duration tr. For a duration tR, the lag is LgR = Lg + 0.2(tR − tr), and this modified lag LgR is used in qp and Tb. If tr happens to be equal or close to tR, further assume qpR = qp. 10. The triangular hydrograph is a dimensionless unitgraph prepared from the 40 unitgraphs. Its equation is qp = K·A·Q/Tp, or qp = 0.21·A·Q/Tp, the constant 0.21 being specific to the Nak Dong River basin. 11. The base length of the time-area diagram for the IUH routing is C = 0.9 (L·Lca/√S)^(1/3). The correlation coefficient for C was 0.983, which is highly significant. The base length of the T-AD was set equal to the time from the midpoint of rainfall excess to the point of contraflexure. The constant K derived in these studies is K = 8.32 + 0.0213 L/√S, with a correlation coefficient of 0.964. 12. In the light of the results analysed in these studies, the average errors in the peak discharge of the synthetic unitgraph, the triangular unitgraph, and the IUH were estimated as 2.2, 7.7 and 6.4 per cent, respectively, relative to the peak of the observed average unitgraph. Each ordinate of the synthetic unitgraph closely approached the observed one.
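Items 4-8 above form a complete recipe for a synthetic unitgraph. The sketch below strings the regression equations together as reconstructed from the garbled source; the coefficients are those reported for the Nak Dong basin, and the input values are purely illustrative.

```python
import math

def synthetic_unitgraph(L_km, Lca_km, S_m_per_km):
    """Synthetic unitgraph parameters from the study's regressions.

    Returns lag Lg (h), peak discharge per unit area qp (m^3/s/km^2),
    base length Tb (days) and the widths W75/W50 (h) at 75%/50% of peak.
    """
    Lg = 0.545 * (L_km * Lca_km / math.sqrt(S_m_per_km)) ** 0.346
    qp = 10.0 ** (-0.52 - 0.0184 * Lg)
    Tb = 0.73 + 2.073 * (Lg / 24.0)
    W75 = 1.61 / qp ** 1.05
    W50 = 2.5 / qp ** 1.05
    return {"Lg": Lg, "qp": qp, "Tb": Tb, "W75": W75, "W50": W50}

# Illustrative basin: L = 50 km, Lca = 25 km, slope 5 m/km.
params = synthetic_unitgraph(L_km=50.0, Lca_km=25.0, S_m_per_km=5.0)
```

Multiplying `qp` by the drainage area gives the peak Qp = qp·A, and W75/W50 guide the sketching of the rising and falling limbs around it.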


지리정보기반의 재해 관리시스템 구축(I) -민간 보험사의 사례, 태풍의 경우- (GIS-based Disaster Management System for a Private Insurance Company in Case of Typhoons(I))

  • 장은미
    • 대한지리학회지
    • /
    • Vol. 41, No. 1
    • /
    • pp.106-120
    • /
    • 2006
  • Natural and man-made disasters are expected to be a theme that can integrate human and physical geography, but in practice there are few studies or systems that analyze them using geographic information. Typhoons Rusa and Maemi inflicted losses on private insurers as enormous as those they caused to individuals and the state in Korea, creating demand for a more scientific and rational estimation of natural disaster losses and for scenario building for reinsurance pricing. In this study, which takes typhoons as its case, nationwide geographic information was built in order to apply a wind speed prediction model along typhoon tracks. A 1:5,000 digital map was used as the base map; weather data and the addresses of insured objects were built as point data; the changes in central pressure of past observed typhoons were built as line data with latitude-longitude coordinates; and a land cover map was used to adjust the model parameters so as to improve the accuracy of the wind speeds. All data were transformed into a nationwide grid of 1 km cells so that they could be overlaid, yielding the typhoon wind model and the degree of potential damage for each cell. The accuracy of the wind speed model was verified against measurements at actual weather stations (overall mean R² = 0.68), and the accuracy of the prediction system was improved through a calibration process that adjusted highly variable stations. When damage sensitivity curves applying damage rates by wind speed were applied separately to residential, industrial and other areas and compared with actual claims payments, both overestimated and underestimated areas were observed. With this study and system, a private insurer can hold supporting data for reinsurance rates, establish resource allocation plans by running response scenarios for similar disasters, and enhance its external credibility. The system is to be completed as a comprehensive disaster model by adding a river flooding model, storm surge models for typhoons and earthquakes, and an inland inundation model.

A Study on the Meaning and Strategy of Keyword Advertising Marketing

  • Park, Nam Goo
    • 유통과학연구
    • /
    • Vol. 8, No. 3
    • /
    • pp.49-56
    • /
    • 2010
  • At the initial stage of Internet advertising, banner advertising came into fashion. As the Internet developed into a central part of daily life and competition in the online advertising market grew fierce, there was not enough space for banner advertising, which rushed to portal sites only. All these factors were responsible for an upsurge in advertising prices. Consequently, the high-cost, low-efficiency problems of banner advertising were raised, which led to the emergence of keyword advertising as a new type of Internet advertising to replace its predecessor. In the early 2000s, when Internet advertising took off, display advertising including banner advertising dominated the Net. However, display advertising showed signs of gradual decline, registering negative growth in 2009, whereas keyword advertising grew rapidly and started to outdo display advertising as of 2005. Keyword advertising refers to the technique of exposing relevant advertisements at the top of search sites when a user searches for a keyword. Instead of exposing advertisements to unspecified individuals as banner advertising does, keyword advertising, a targeted advertising technique, shows advertisements only when customers search for a desired keyword, so that only highly prospective customers are given a chance to see them. In this context, it is also referred to as search advertising. It is regarded as more aggressive advertising with a higher hit rate than earlier forms in that, instead of the seller discovering customers and running advertisements at them as TV, radio or banner advertising does, it exposes advertisements to visiting customers. Keyword advertising makes it possible for a company to seek publicity online with nothing more than a single word and to achieve maximum efficiency at minimum cost. 
The strong point of keyword advertising is that it lets customers reach the products in question directly, making it more efficient than advertising in mass media such as TV and radio. Its weak point is that a company must register its advertisement on each and every portal site and finds it hard to exercise substantial supervision over its advertisements, so advertising expenses may exceed profits. Keyword advertising serves as the most appropriate method of advertising for the sales and publicity of small and medium enterprises, which need maximum advertising effect at a low advertising cost. At present, keyword advertising is divided into CPC advertising and CPM advertising. The former is known as the most efficient technique and is also referred to as advertising based on the meter-rate system: a company pays according to the number of clicks a searched keyword receives. This model is representatively adopted by Overture, Google's AdWords, Naver's Clickchoice, and Daum's Clicks. CPM advertising depends on a flat-rate payment system, making a company pay for its advertisement on the basis of the number of exposures rather than the number of clicks. This method fixes a price per 1,000 exposures and is mainly adopted by Naver's Timechoice, Daum's Speciallink, and Nate's Speedup. At present, the CPC method is most frequently adopted. Its weak point is that advertising costs can rise through repeated clicks from the same IP. If a company makes good use of strategies for maximizing the strong points of keyword advertising and complementing its weak points, it is highly likely to turn visitors into prospective customers. 
Accordingly, an advertiser should analyze customers' behavior and approach them in a variety of ways, trying hard to find out what they want. With this in mind, he or she has to put multiple keywords to use when running ads. When first running an ad, the advertiser should give priority to keyword selection, considering how many search engine users will click the keyword in question and how much the advertisement will cost. As the popular keywords that search engine users frequently enter carry a high unit cost per click, advertisers without much money for advertising at the initial phase should pay attention to detailed keywords suitable to their budget. Detailed keywords, also referred to as peripheral or extension keywords, are combinations of major keywords. Most keyword ads are in the form of text. The biggest strong point of text-based advertising is that it looks like search results and so provokes little antipathy, but it fails to attract much attention precisely because most keyword advertising is text. Image-embedded advertising is easier to notice because of the image, but it is exposed on the lower part of the web page and is clearly recognizable as an advertisement, which leads to a low click-through rate; its strong point is that its prices are lower than those of text-based advertising. If a company owns a logo or a product that people easily recognize, it is well advised to make good use of image-embedded advertising to attract Internet users' attention. Advertisers should analyze their logs and examine customers' responses based on site events and the composition of products as a vehicle for monitoring their behavior in detail. 
Besides, keyword advertising allows advertisers to analyze the effects of exposed keywords through log analysis. Log analysis refers to a close analysis of a site's current situation based on information about visitors, drawing on the number of visitors, page views, and cookie values. A user's IP, the pages used, the time of use, and cookie values are stored in the log files generated by each Web server. The log files contain a huge amount of data; as it is almost impossible to analyze them directly, one analyzes them using log analysis solutions. The generic information that can be extracted from log analysis tools includes total page views, average page views per day, basic page views, page views per visit, total hits, average hits per day, hits per visit, the number of visits, average visits per day, the net number of visitors, average visitors per day, one-time visitors, visitors who have come more than twice, and average usage hours. Such data are useful for analyzing the situation and current status of rival companies as well as for benchmarking. As keyword advertising exposes advertisements exclusively on search-result pages, competition among advertisers attempting to preoccupy popular keywords is very fierce. Some portal sites keep giving priority to existing advertisers, whereas others give all advertisers a chance to purchase the keywords in question once the advertising contract is over. 
On sites that give priority to established advertisers, an advertiser who relies on keywords sensitive to season and timeliness may as well purchase a vacant advertising slot lest he or she miss the appropriate timing for advertising. Naver, however, does not give priority to existing advertisers for keyword advertisements; there, one can preoccupy keywords by entering into a contract after confirming the contract period for advertising. This study is designed to examine marketing for keyword advertising and to present effective strategies for keyword advertising marketing. At present, the Korean CPC advertising market is virtually monopolized by Overture, whose strong points are its CPC charging model and the registration of advertisements at the top of the most representative portal sites in Korea. These advantages make it the most appropriate medium for small and medium enterprises. However, the CPC method of Overture has its weak points too: it is not the only, or a perfect, advertising model among the search advertisements in the online market. It is therefore absolutely necessary that small and medium enterprises, including independent shopping malls, complement the weaknesses of the CPC method and make good use of strategies for maximizing its strengths so as to increase their sales and create points of contact with customers.
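The CPC/CPM distinction described above reduces to two simple cost formulas, and which model is cheaper for a given campaign depends on the click-through rate. A sketch with made-up figures (the 500 and 4,000 KRW rates and the 1.2% click-through rate are illustrative, not actual portal pricing):

```python
def cpc_cost(clicks, price_per_click):
    """Total spend under the pay-per-click (meter-rate) model."""
    return clicks * price_per_click

def cpm_cost(impressions, price_per_1000):
    """Total spend under the pay-per-1,000-exposures (flat-rate) model."""
    return impressions / 1000 * price_per_1000

impressions = 100_000
clicks = int(impressions * 0.012)         # 1.2% click-through rate
spend_cpc = cpc_cost(clicks, 500)         # 1,200 clicks at 500 KRW each
spend_cpm = cpm_cost(impressions, 4_000)  # 100 blocks of 1,000 exposures
```

With these numbers CPM comes out cheaper; at a lower click-through rate the comparison flips toward CPC, which is why the text recommends detailed keywords for budget-constrained advertisers.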
