• Title/Summary/Keyword: Running Cost


Development of a n-path algorithm for providing travel information in general road network (일반가로망에서 교통정보제공을 위한 n-path 알고리듬의 개발)

  • Lim, Yong-Taek
    • Journal of Korean Society of Transportation
    • /
    • v.22 no.4 s.75
    • /
    • pp.135-146
    • /
    • 2004
  • To improve the effectiveness of travel information, a set of rational alternative paths must be provided to users driving in a real road network. For this purpose, k-shortest path algorithms have generally been used. Although a k-shortest path algorithm can provide several alternative paths, it has the inherent limitation that the derived paths overlap heavily, which may lead to incorrect travel information for users. Moreover, networks containing the turn prohibitions common in real-world road networks are difficult for traditional network optimization techniques to handle, because banned and penalized turns cannot be described appropriately in the standard node/link network definition, where intersections are represented by nodes only. This problem can be solved by an expansion technique that adds extra links and nodes to the network to describe turn penalties, but the overwhelming additional work makes that method impractical for large networks and for dynamic cases. This paper proposes a link-based shortest path algorithm for providing travel information in real road networks with turn prohibitions. It provides efficient alternative paths while accounting for overlaps among them: each new path is built with regard to its degree of overlap with the paths already found, and path building stops when the overlapping ratio exceeds a given criterion. Because the proposed algorithm builds the shortest path on link-end costs instead of node costs and constructs origin-destination paths by link connection, no network expansion is required, saving both network modification time and computing time. Numerical examples are used to test the proposed model.
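The overlap-limited stopping rule described in the abstract can be sketched in code. The following is a minimal illustration, not the paper's link-based algorithm: it collects alternative paths by repeatedly running Dijkstra's algorithm, penalizing already-used links, and stopping once a new path's link-overlap ratio with any earlier path exceeds a criterion. All function names and the penalty scheme are assumptions for illustration.

```python
import heapq

def dijkstra(graph, src, dst):
    """Shortest path in a directed graph {node: [(neighbor, cost), ...]}."""
    dist, prev = {src: 0.0}, {}
    pq = [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == dst:
            break
        if d > dist.get(u, float("inf")):
            continue
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(pq, (nd, v))
    if dst not in dist:
        return None
    path, u = [dst], dst
    while u != src:
        u = prev[u]
        path.append(u)
    return list(reversed(path))

def overlap_ratio(path_a, path_b):
    """Fraction of path_a's links that also appear in path_b."""
    links_a = set(zip(path_a, path_a[1:]))
    links_b = set(zip(path_b, path_b[1:]))
    return len(links_a & links_b) / len(links_a)

def alternative_paths(graph, src, dst, max_overlap=0.5, penalty=2.0, k=3):
    """Collect up to k paths; stop when a new path overlaps too heavily."""
    g = {u: list(vs) for u, vs in graph.items()}
    paths = []
    while len(paths) < k:
        p = dijkstra(g, src, dst)
        if p is None or any(overlap_ratio(p, q) > max_overlap for q in paths):
            break
        paths.append(p)
        used = set(zip(p, p[1:]))  # discourage reuse of these links
        g = {u: [(v, w * penalty if (u, v) in used else w) for v, w in vs]
             for u, vs in g.items()}
    return paths
```

On a small diamond network A→{B, C}→D this yields the two disjoint routes and then stops, since any further path would fully overlap an earlier one.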

Estimate on Economical Optimum Scale of Public Livestock Manure Treatment Plant (가축분뇨 공공처리시설의 경제적 적정규모 설정)

  • Kim, J.H.;Park, C.H.;Kwag, J.H.;Choi, D.Y.;Jeong, K.H.;Chung, U.S.;Chung, Y.B.;Yoo, Y.H.
    • Journal of Animal Environmental Science
    • /
    • v.14 no.1
    • /
    • pp.23-30
    • /
    • 2008
  • The objective of this study was to estimate the optimum scale of a PLMTP (Public Livestock Manure Treatment Plant) for efficient management of the public sector, using a long-run cost function. An economic analysis was performed on a survey of 52 PLMTP records collected by the Ministry of Environment in 2007. The main results can be summed up as follows. The optimum scale under the given environmental conditions turned out to be $180{\sim}200m^3$/day, which is about $1.5{\sim}1.6$ times the average scale of the sample plants, $146m^3$/day. This gap between the optimum and current scales suggests that there remains room for further expansion of scale.
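The kind of long-run cost analysis behind such an estimate can be illustrated with a standard textbook form. Assuming a quadratic long-run total cost function TC(q) = c0 + c1·q + c2·q², the average cost AC(q) = c0/q + c1 + c2·q is minimized where dAC/dq = 0, i.e. at q* = sqrt(c0/c2). The coefficients below are hypothetical, not the paper's estimates.

```python
import math

def optimum_scale(c0, c1, c2):
    """Scale q* minimizing average cost AC(q) = c0/q + c1 + c2*q,
    derived from the quadratic long-run total cost TC(q) = c0 + c1*q + c2*q**2.
    Setting dAC/dq = -c0/q**2 + c2 = 0 gives q* = sqrt(c0/c2)."""
    return math.sqrt(c0 / c2)

def average_cost(q, c0, c1, c2):
    """Long-run average cost at scale q (cost per m^3/day, say)."""
    return c0 / q + c1 + c2 * q
```

With c0 = 100, c1 = 5, c2 = 0.25, the optimum scale is q* = 20, and the average cost there is lower than at any neighboring scale.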


A Task Scheduling Algorithm with Environment-specific Performance Enhancement Method (환경 특성에 맞는 성능 향상 기법을 사용하는 태스크 스케줄링 알고리즘)

  • Song, Inseong;Yoon, Dongsung;Park, Taeshin;Choi, Sangbang
    • Journal of the Institute of Electronics and Information Engineers
    • /
    • v.54 no.5
    • /
    • pp.48-61
    • /
    • 2017
  • The IaaS service of a cloud computing environment is attractive for running large-scale parallel applications thanks to its innate characteristic that a user can utilize a desired number of high-performance virtual machines without maintenance cost. The total execution time of a parallel application in a high-performance computing environment depends on the task scheduling algorithm. Most studies on task scheduling in cloud computing environments try to reduce the user's cost; studies that try to reduce total execution time are rarely carried out. In this paper, we propose a task scheduling algorithm called HAGD and a performance enhancement method, called the group task duplication method, which HAGD utilizes. The group task duplication method simplifies previous task duplication methods, and HAGD chooses between the group task duplication method and a task insertion method according to the characteristics of the computing environment and the application. Performance evaluations show that the proposed algorithm provides superior normalized total execution time regardless of these characteristics.
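The duplication idea underlying such schedulers can be sketched briefly. This is a simplified illustration of task duplication in general, not the paper's HAGD or its group task duplication method: a task's start time on a virtual machine may be reduced by re-executing (duplicating) a parent task locally instead of waiting for its output to arrive over the network. For simplicity, a duplicated parent is assumed to be immediately runnable on the target VM.

```python
def earliest_start(vm_ready, finish, exec_local, comm):
    """Earliest start time of a task on one VM, with optional duplication
    of one parent task onto that VM.
    vm_ready:      time at which the VM becomes free
    finish[p]:     parent p's finish time on its original VM
    exec_local[p]: p's execution time if duplicated on this VM
    comm[p]:       data transfer delay from p's VM to this VM
    """
    arrival = {p: finish[p] + comm[p] for p in finish}
    best = max([vm_ready] + list(arrival.values()))  # no duplication
    for p in finish:  # try duplicating each parent in turn
        others = [arrival[q] for q in finish if q != p]
        with_dup = max([vm_ready + exec_local[p]] + others)
        best = min(best, with_dup)
    return best
```

If a parent finishes at t=5 on another VM and its data takes 4 time units to transfer, but re-running it locally takes only 3, duplication lets the child start at t=3 instead of t=9.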

The Monitoring Study of Exchange Cycle of Automatic Transmission Fluid (자동변속기유(ATF) 교환주기 모니터링 연구)

  • Lim, Young-Kwan;Jung, Choong-Sub;Lee, Jeong-Min;Han, Kwan-Wook;Na, Byung-Ki
    • Applied Chemistry for Engineering
    • /
    • v.24 no.3
    • /
    • pp.274-278
    • /
    • 2013
  • Automatic transmission fluid (ATF) is the specialized fluid used in vehicle automatic transmissions. Recently, vehicle manufacturers have usually guaranteed fluid change intervals of 80,000~100,000 km, or no exchange at all. However, according to a survey by the Korea Institute of Petroleum Management, most drivers in the Republic of Korea change the ATF every 50,000 km of driving or less, which can cause both serious environmental contamination from the used ATF and an increase in driving cost. In this study, various physical properties such as flash point, pour point, kinematic viscosity, dynamic viscosity at low temperature, total acid number, and the four-ball test were investigated for fresh ATF and for ATF used over actual vehicle driving distances of 50,000 km and 100,000 km. Most physical properties remained within the ATF specification, but the foam characteristics of the oil used for 100,000 km fell outside the specification for fresh ATF. Therefore, an ATF exchange cycle of every 80,000~100,000 km of driving is recommended, which would contribute greatly to preventing environmental pollution and reducing driving cost.

Intelligence e-Learning System Supporting Participation of Students based on Face Recognition (학습자 참여를 유도하기 위한 얼굴인식 기반 지능형 e-Learning 시스템)

  • Bae, Kyoung-Yul;Joung, Jin-Oo;Min, Seung-Wook
    • Journal of Intelligence and Information Systems
    • /
    • v.13 no.2
    • /
    • pp.43-53
    • /
    • 2007
  • The e-Learning system is emerging as the next educational trend, supporting remote and multimedia education. However, students mainly stay at remote places, and it is hard to verify whether a student is really studying at a given moment. To solve this problem, solutions such as instructor supervision through real-time motion pictures or message exchange have been proposed. Unfortunately, establishing a motion-exchange system is expensive, and the infringement on students' privacy can reduce their motivation. Accordingly, we propose a new intelligent system based on face recognition that reduces the system cost. The e-Learning system, running on a web page, checks the student's status through motion images, and the images are transferred to the instructor. For this study, 20 students and one instructor took part in capturing and recognizing the face images. The results show that the system prevents students from leaving the lecture and improves their attention.


A study of Modeling and Simulation for Analyzing DDoS Attack Damage Scale and Defence Mechanism Expense (DDoS 공격 피해 규모 및 대응기법 비용분석을 위한 모델링 및 시뮬레이션 기술연구)

  • Kim, Ji-Yeon;Lee, Ju-Li;Park, Eun-Ji;Jang, Eun-Young;Kim, Hyung-Jong
    • Journal of the Korea Society for Simulation
    • /
    • v.18 no.4
    • /
    • pp.39-47
    • /
    • 2009
  • Recently, the threat of DDoS attacks is increasing, and many companies plan to deploy DDoS defense solutions in their networks. A DDoS attack typically transmits heavy traffic to networks or servers, which then cannot handle normal service requests because they run out of resources. Since it is very hard to prevent a DDoS attack beforehand, a strategic plan is very important. In this work, we conducted modeling and simulation of DDoS attacks while varying the number of servers and estimated the duration for which services remain available. The modeling and simulation were conducted using OPNET Modeler. The simulation results can be used as a parameter in a trade-off analysis between DDoS defense cost and the service's value. In addition, we present a way of estimating cost effectiveness in the deployment of a DDoS defense system.
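The trade-off such a simulation explores can be illustrated with a toy discrete-time model (not the paper's OPNET model): attack traffic in excess of total server capacity drains a shared resource pool, and the service fails when the pool is exhausted. All parameter names and values below are hypothetical.

```python
def service_available_time(n_servers, capacity_per_server, normal_load,
                           attack_rate, resource_pool, duration, step=1.0):
    """Return the time at which the service fails under a constant-rate
    DDoS attack, or `duration` if it survives the whole run.  Load in
    excess of total capacity drains the shared resource pool each step."""
    capacity = n_servers * capacity_per_server
    t, resources = 0.0, resource_pool
    while t < duration:
        excess = max(0.0, normal_load + attack_rate - capacity)
        resources -= excess * step
        if resources <= 0:
            return t  # resources exhausted: service unavailable
        t += step
    return duration
```

Repeating the run for different values of `n_servers` gives the availability-duration curve that can then be traded off against defense cost.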

A Study on the Meaning and Strategy of Keyword Advertising Marketing

  • Park, Nam Goo
    • Journal of Distribution Science
    • /
    • v.8 no.3
    • /
    • pp.49-56
    • /
    • 2010
  • At the initial stage of Internet advertising, banner advertising came into fashion. As the Internet developed into a central part of daily life and competition in the on-line advertising market grew fierce, there was not enough space for banner advertising, which rushed to portal sites only. All these factors were responsible for an upsurge in advertising prices. Consequently, the high-cost and low-efficiency problems of banner advertising were raised, which led to the emergence of keyword advertising as a new type of Internet advertising to replace its predecessor. In the early 2000s, when Internet advertising first became active, display advertising, including banner advertising, dominated the Net. However, display advertising showed signs of gradual decline and registered negative growth in 2009, whereas keyword advertising grew rapidly and started to outdo display advertising as of 2005. Keyword advertising refers to the advertising technique that exposes relevant advertisements at the top of the results page when one searches for a keyword. Instead of exposing advertisements to unspecified individuals like banner advertising, keyword advertising, a targeted advertising technique, shows advertisements only when customers search for a desired keyword, so that only highly prospective customers are given a chance to see them. In this context, it is also referred to as search advertising. It is regarded as more aggressive advertising with a higher hit rate than earlier forms in that, instead of the seller discovering customers and running advertisements at them as with TV, radio, or banner advertising, it exposes advertisements to customers who come looking. Keyword advertising makes it possible for a company to seek publicity on line simply by making use of a single word, and to achieve maximum efficiency at minimum cost.
The strong point of keyword advertising is that customers are allowed to directly contact the products in question, making it more efficient than advertising in mass media such as TV and radio. The weak point is that a company must register its advertisement on each and every portal site, finds it hard to exercise substantial supervision over its advertisements, and runs the risk of its advertising expenses exceeding its profits. Keyword advertising serves as one of the most appropriate advertising methods for the sales and publicity of small and medium enterprises, which need maximum advertising effect at low advertising cost. At present, keyword advertising is divided into CPC advertising and CPM advertising. The former is known as the most efficient technique and is also referred to as advertising based on the meter-rate system: a company pays according to the number of clicks its searched keyword receives. This model is representatively adopted by Overture, Google's Adwords, Naver's Clickchoice, and Daum's Clicks. CPM advertising depends on a flat-rate payment system, making a company pay for its advertisement on the basis of the number of exposures, not the number of clicks. This method fixes the price of an advertisement per 1,000 exposures, and is mainly adopted by Naver's Timechoice, Daum's Speciallink, and Nate's Speedup. At present, the CPC method is most frequently adopted. The weak point of the CPC method is that advertising costs can rise through repeated clicks from the same IP. If a company makes good use of strategies that maximize the strong points of keyword advertising and complement its weak points, it is highly likely to turn its visitors into prospective customers.
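The two charging models differ only in what is billed, which a few lines of arithmetic make concrete. This is a generic sketch of CPC and CPM billing with hypothetical prices, not any portal's actual rate card.

```python
def cpc_cost(clicks, price_per_click):
    """CPC (meter-rate): pay for each click on the advertised keyword."""
    return clicks * price_per_click

def cpm_cost(impressions, price_per_1000):
    """CPM (flat-rate): pay per 1,000 exposures, regardless of clicks."""
    return impressions / 1000 * price_per_1000

def break_even_ctr(price_per_click, price_per_1000):
    """Click-through rate at which both models cost the same per impression."""
    return price_per_1000 / (1000 * price_per_click)
```

Below the break-even click-through rate, CPC is the cheaper model; above it, CPM is, which is one way to reason about which contract suits a given keyword.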
Accordingly, an advertiser should analyze customers' behavior and approach them in a variety of ways, trying hard to find out what they want. With this in mind, he or she has to put multiple keywords into use when running ads. When first running an ad, the advertiser should give priority to keyword selection: how many search-engine users will click the keyword in question, and how much the advertisement will cost. As the popular keywords that search-engine users frequently enter are expensive in terms of unit cost per click, advertisers without much money at the initial phase should pay attention to detailed keywords suitable to their budget. Detailed keywords are also referred to as peripheral keywords or extension keywords, and are typically combinations of major keywords. Most keyword ads are in the form of text. The biggest strong point of text-based advertising is that it looks like search results, causing little antipathy; but for the same reason it fails to attract much attention. Image-embedded advertising is easy to notice because of its images, but it is exposed on the lower part of a web page and is clearly recognizable as an advertisement, which leads to a low click-through rate. Its strong point, however, is that its prices are lower than those of text-based advertising. If a company owns a logo or a product that people easily recognize, it is well advised to make good use of image-embedded advertising so as to attract Internet users' attention. Advertisers should analyze their logs and examine customers' responses based on site events and product composition, as a vehicle for monitoring customer behavior in detail.
Besides, keyword advertising allows advertisers to analyze the advertising effect of exposed keywords through log analysis. Log analysis refers to a close analysis of the current situation of a site based on information about visitors: the number of visitors, page views, and cookie values. A user's IP, the pages used, the time of use, and cookie values are stored in the log files generated by each Web server. The log files contain a huge amount of data; as it is almost impossible to analyze them directly, one analyzes them using log-analysis solutions. The generic information that can be extracted from log-analysis tools includes the total number of page views, average page views per day, basic page views, page views per visit, total hits, average hits per day, hits per visit, the number of visits, average visits per day, the net number of visitors, average visitors per day, one-time visitors, visitors who have come more than twice, and average usage hours. These data are useful for analyzing the situation and current status of rival companies, as well as for benchmarking. As keyword advertising exposes advertisements exclusively on search-result pages, competition among advertisers attempting to preoccupy popular keywords is very fierce. Some portal sites keep giving priority to existing advertisers, whereas others give all advertisers a chance to purchase the keywords in question after the advertising contract is over.
On sites that give priority to established advertisers, an advertiser who relies on keywords sensitive to season and timeliness should purchase a vacant advertising slot in advance, lest he or she miss the appropriate timing for advertising. However, Naver does not give priority to existing advertisers for any keyword advertisements; in this case, one can preoccupy keywords by entering into a contract after confirming the contract period for advertising. This study is designed to take a look at marketing for keyword advertising and to present effective strategies for keyword advertising marketing. At present, the Korean CPC advertising market is virtually monopolized by Overture. Its strong points are that Overture is based on the CPC charging model and that its advertisements are registered at the top of the most representative portal sites in Korea. These advantages make it the most appropriate medium for small and medium enterprises. However, the CPC method of Overture has its weak points, too: it is not the only, nor a perfect, advertising model among the search advertisements in the on-line market. So it is absolutely necessary that small and medium enterprises, including independent shopping malls, complement the weaknesses of the CPC method and make good use of strategies for maximizing its strengths so as to increase their sales and create a point of contact with customers.


Analyzing Contextual Polarity of Unstructured Data for Measuring Subjective Well-Being (주관적 웰빙 상태 측정을 위한 비정형 데이터의 상황기반 긍부정성 분석 방법)

  • Choi, Sukjae;Song, Yeongeun;Kwon, Ohbyung
    • Journal of Intelligence and Information Systems
    • /
    • v.22 no.1
    • /
    • pp.83-105
    • /
    • 2016
  • Measuring an individual's subjective wellbeing in an accurate, unobtrusive, and cost-effective manner is a core success factor of the wellbeing support system, which is a type of medical IT service. However, measurements with a self-report questionnaire and wearable sensors, despite being very accurate, are cost-intensive and obtrusive when the wellbeing support system must run in real time. Recently, inferring the state of subjective wellbeing with conventional sentiment analysis on unstructured data has been proposed as an alternative that resolves the drawbacks of the self-report questionnaire and wearable sensors. However, this approach does not consider contextual polarity, which results in lower measurement accuracy. Moreover, there is no sentiment wordnet or ontology for the subjective wellbeing area. Hence, this paper proposes a method to extract keywords and their contextual polarity representing the subjective wellbeing state from unstructured text on online websites, in order to improve the accuracy of the sentiment analysis. The proposed method is as follows. First, a set of general sentiment words is prepared. SentiWordNet was adopted; it is the most widely used dictionary and contains about 100,000 words (nouns, verbs, adjectives, and adverbs) with polarities from -1.0 (extremely negative) to 1.0 (extremely positive). Second, corpora on subjective wellbeing (SWB corpora) were obtained by crawling online text. A survey was conducted to prepare a learning dataset that includes individuals' opinions and their self-reported wellness levels, such as stress and depression. The participants were asked to respond with their feelings about online news on two topics. Next, three data sources were extracted from the SWB corpora: demographic information, psychographic information, and the structural characteristics of the text (e.g., the number of words used, simple statistics on the special characters used).
These were considered to adjust the level of a specific SWB factor. Finally, a set of reasoning rules was generated for each wellbeing factor to estimate an individual's SWB from the text the individual wrote. The experimental results suggest that using contextual polarity for each SWB factor (e.g., stress, depression) significantly improves the estimation accuracy compared to conventional sentiment analysis methods based on SentiWordNet alone. Although literature on Korean sentiment analysis is available, such studies used only a limited set of sentiment words; with so few words, many sentences are overlooked when estimating the level of sentiment. The proposed method, in contrast, can identify multiple sentiment-neutral words as sentiment words in the context of a specific SWB factor. The results also suggest that a specific type of sentiment-word dictionary containing contextual polarity needs to be constructed alongside a common-sense-based dictionary such as SenticNet. These efforts will enrich and enlarge the application area of sentic computing. The study is helpful to practitioners and managers of wellness services in that a couple of characteristics of unstructured text have been identified for improving SWB measurement. Consistent with the literature, the results showed that gender and age affect the SWB state when the individual is exposed to an identical cue from the online text. In addition, the length of the textual response and the usage pattern of special characters were found to indicate the individual's SWB. These findings imply that better SWB measurement should involve collecting the textual structure along with the individual's demographic conditions. In the future, the proposed method should be improved by automated identification of contextual polarity, in order to enlarge the vocabulary in a cost-effective manner.
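The core idea, overriding a general lexicon with factor-specific contextual polarities, can be sketched as follows. This is an illustrative simplification, not the paper's reasoning rules; the example words and polarity values are hypothetical.

```python
def swb_score(tokens, base_polarity, contextual_polarity):
    """Average polarity of a tokenized text for one SWB factor.
    Factor-specific contextual polarities override the general lexicon,
    so a sentiment-neutral word can still carry polarity in context."""
    total, hits = 0.0, 0
    for tok in tokens:
        if tok in contextual_polarity:   # contextual polarity wins
            total += contextual_polarity[tok]
            hits += 1
        elif tok in base_polarity:       # fall back to the general lexicon
            total += base_polarity[tok]
            hits += 1
    return total / hits if hits else 0.0
```

With a general lexicon {'good': 0.8} and a stress-specific contextual entry {'deadline': -0.6}, the text "good deadline" scores (0.8 - 0.6) / 2 = 0.1, whereas scoring on the general lexicon alone would ignore "deadline" entirely.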

Effects of Secondary Task on Driving Performance -Control of Vehicle and Analysis of Motion signal- (동시과제가 운전 수행 능력에 미치는 영향 -차량 통제 및 동작신호 해석을 중심으로-)

  • Mun, Kyung-Ryoul;Choi, Jin-Seung;Kang, Dong-Won;Bang, Yun-Hwan;Kim, Han-Soo;Lee, Su-Jung;Yang, Jae-Woong;Kim, Ji-Hye;Choi, Mi-Hyun;Ji, Doo-Hwan;Min, Byung-Chan;Chung, Soon-Cheol;Taek, Gye-Rae
    • Science of Emotion and Sensibility
    • /
    • v.13 no.4
    • /
    • pp.613-620
    • /
    • 2010
  • The purpose of this study was to quantitatively evaluate the effects of secondary tasks during simulated driving, using variables indicating vehicle control and smoothness of motion. Fifteen healthy adults with 1~2 years of driving experience participated. Nine markers were attached to each subject's upper (shoulder, elbow, wrist) and lower (knee, ankle, toe) limbs, and all subjects were instructed to keep a 30 m distance from the front vehicle running at 80 km/h. Sending a text message (STM) and searching navigation (SN) were selected as the secondary tasks. The experiment consisted of driving alone for 1 min and driving with a secondary task for 1 min, defined as the driving and cognition blocks, respectively. To quantify the effects of the secondary tasks, the coefficients of variation of inter-vehicle distance and lane keeping (APCV and MLCV) and the jerk-cost function (JC) were analyzed. APCV increased by 222.1% in the SN block. MLCV increased by 318.2% in STM and 308.4% in SN. JC increased at the drivers' elbow, knee, ankle, and toe; in particular, the total mean JC of the lower limbs increased by 218.2% in STM and 294.7% in SN. In conclusion, performing secondary tasks while driving decreased smoothness of motion, as shown by the increased JC, and disturbed vehicle control, as shown by the increased APCV and MLCV.
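A jerk-cost function is conventionally the integral of squared jerk (the third time derivative of position) over a movement, with lower values indicating smoother motion. A minimal discrete approximation, assuming uniformly sampled 1-D position data rather than the study's 3-D marker trajectories, might look like:

```python
def jerk_cost(positions, dt):
    """Discrete jerk-cost: integral of squared jerk (the third time
    derivative of position) over the movement.  Lower = smoother motion."""
    jc = 0.0
    for i in range(len(positions) - 3):
        # third-order forward difference approximates the jerk
        jerk = (positions[i + 3] - 3 * positions[i + 2]
                + 3 * positions[i + 1] - positions[i]) / dt ** 3
        jc += jerk ** 2 * dt
    return jc
```

A constant-acceleration trajectory (position proportional to t²) has zero jerk and hence zero jerk-cost, while any abrupt change in acceleration raises the cost.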


Vessel Tracking Algorithm using Multiple Local Smooth Paths (지역적 다수의 경로를 이용한 혈관 추적 알고리즘)

  • Jeon, Byunghwan;Jang, Yeonggul;Han, Dongjin;Shim, Hackjoon;Park, Hyungbok;Chang, Hyuk-Jae
    • Journal of the Institute of Electronics and Information Engineers
    • /
    • v.53 no.6
    • /
    • pp.137-145
    • /
    • 2016
  • A novel tracking method is proposed to find the coronary artery using a high-order curve model in coronary CTA (Computed Tomography Angiography). The proposed method quickly generates numerous artificial trajectories represented by high-order curves, each with its own cost. Only the high-ranked trajectories located in the target structure are selected according to their costs, and an optimal curve is then found as the centerline. After tracking, the optimal curve segments are connected, at the points they share, into a single piecewise smooth curve. We demonstrate that the high-order curve is a proper model for classification of the coronary artery. The experimental results on a public data set show that the proposed method is comparable to the state-of-the-art methods in both accuracy and running time.
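The cost-ranked trajectory selection can be sketched in a deliberately simplified 1-D form. This is an illustration only, not the paper's method: the curve is cubic, and `vesselness` stands in for whatever tubular-structure response the real method evaluates along each candidate in the CTA volume.

```python
def cubic_point(coeffs, t):
    """Evaluate a 1-D cubic curve a*t^3 + b*t^2 + c*t + d at parameter t."""
    a, b, c, d = coeffs
    return a * t ** 3 + b * t ** 2 + c * t + d

def trajectory_cost(coeffs, vesselness, samples=10):
    """Cost of one candidate trajectory: negative mean response along the
    curve, so curves centered in the vessel get the lowest cost."""
    vals = [vesselness(cubic_point(coeffs, i / (samples - 1)))
            for i in range(samples)]
    return -sum(vals) / len(vals)

def best_trajectory(candidates, vesselness):
    """Select the lowest-cost candidate as the local centerline segment."""
    return min(candidates, key=lambda c: trajectory_cost(c, vesselness))
```

Connecting the winning segments at their shared endpoints then yields a piecewise smooth centerline, as the abstract describes.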