• Title/Summary/Keyword: speed distribution


A Study of Anomaly Detection for ICT Infrastructure using Conditional Multimodal Autoencoder (ICT 인프라 이상탐지를 위한 조건부 멀티모달 오토인코더에 관한 연구)

  • Shin, Byungjin;Lee, Jonghoon;Han, Sangjin;Park, Choong-Shik
    • Journal of Intelligence and Information Systems
    • /
    • v.27 no.3
    • /
    • pp.57-73
    • /
    • 2021
  • Maintenance and failure prevention through anomaly detection of ICT infrastructure is becoming increasingly important. System monitoring data is multidimensional time series data, which is difficult to handle because both the characteristics of multidimensional data and those of time series data must be considered. With multidimensional data, correlation between variables must be taken into account; existing probability-based, linear, and distance-based methods degrade because of the curse of dimensionality. Time series data, in turn, is typically preprocessed with sliding windows and time series decomposition for autocorrelation analysis, techniques that further increase the dimensionality of the data and therefore need to be supplemented. Anomaly detection is a long-established research field: statistical methods and regression analysis were used in the early days, and machine learning and artificial neural network approaches are now actively studied. Statistical methods are difficult to apply to non-homogeneous data and do not detect local outliers well. Regression-based detection learns a regression formula under parametric statistical assumptions and flags anomalies by comparing predicted and actual values; its performance deteriorates when the model is weak or the data contain noise or outliers, so it is restricted to training data free of them. The autoencoder, an artificial neural network trained to reproduce its input as closely as possible, has many advantages over probability and linear models, cluster analysis, and supervised learning: it can be applied to data that satisfy neither a probability distribution nor a linearity assumption, and it can be trained unsupervised, without labeled data.
However, autoencoders are limited in identifying local outliers in multidimensional data, and the characteristics of time series data greatly increase data dimensionality. In this study, we propose a Conditional Multimodal Autoencoder (CMAE) that improves anomaly detection performance by considering local outliers and time series characteristics. First, a Multimodal Autoencoder (MAE) is applied to mitigate the local-outlier limitation of multidimensional data. Multimodal architectures are commonly used to learn different types of input, such as voice and images; the different modals share the autoencoder bottleneck and thereby learn their correlations. Second, a Conditional Autoencoder (CAE) is used to learn the characteristics of time series data effectively without increasing data dimensionality. Conditional inputs are usually categorical variables, but in this study time is used as the condition so that periodicity can be learned. The proposed CMAE model was verified by comparison with a Unimodal Autoencoder (UAE) and a Multimodal Autoencoder (MAE). Reconstruction performance for 41 variables was examined in the proposed and comparison models. Reconstruction performance differs by variable: the loss is small for the Memory, Disk, and Network modals in all three autoencoder models, so reconstruction works well there; the Process modal shows no significant difference across the three models; and the CPU modal performs best in CMAE. ROC curves were prepared to evaluate anomaly detection performance in the proposed and comparison models, and AUC, accuracy, precision, recall, and F1-score were compared. On every indicator, the ordering was CMAE, MAE, then UAE.
In particular, CMAE achieved a recall of 0.9828, confirming that it detects almost all anomalies. Its accuracy also improved, to 87.12%, and its F1-score of 0.8883 makes it well suited to anomaly detection. In practical terms, the proposed model has an additional advantage beyond the performance improvement: techniques such as time series decomposition and sliding windows add procedures that must be managed, and the dimensionality they introduce can slow inference, whereas the proposed model is easy to apply to practical tasks in terms of both inference speed and model management.
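The reconstruction-error principle shared by the UAE, MAE, and CMAE compared above can be sketched with a linear stand-in: a model is fitted to normal monitoring data, and samples whose reconstruction error exceeds a threshold are flagged as anomalies. The sketch below uses PCA as a linear autoencoder on synthetic metrics; the function names, dimensions, and threshold choice are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def fit_linear_autoencoder(X, n_components):
    """Fit a linear autoencoder (equivalent to PCA) on normal training data."""
    mu = X.mean(axis=0)
    # Principal axes via SVD of the centered data
    _, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
    W = Vt[:n_components].T          # tied encoder/decoder weights
    return mu, W

def reconstruction_error(X, mu, W):
    """Per-sample squared reconstruction error: the anomaly score."""
    Z = (X - mu) @ W                 # encode through the bottleneck
    X_hat = Z @ W.T + mu             # decode
    return ((X - X_hat) ** 2).sum(axis=1)

rng = np.random.default_rng(0)
# 500 "normal" samples: 8 metrics driven by 2 latent factors plus noise
latent = rng.normal(size=(500, 2))
mixing = rng.normal(size=(2, 8))
X_train = latent @ mixing + 0.1 * rng.normal(size=(500, 8))

mu, W = fit_linear_autoencoder(X_train, n_components=2)
# Flag anything worse than the 99th percentile of normal reconstruction error
threshold = np.quantile(reconstruction_error(X_train, mu, W), 0.99)

anomaly = X_train[:1] + 5.0          # shift one sample far off the normal manifold
anomaly_score = reconstruction_error(anomaly, mu, W)[0]
```

A trained network would replace the PCA step; the scoring and thresholding logic is the same.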

An Exploratory Study on Channel Equity of Electronic Goods (가전제품 소비자의 Channel Equity에 관한 탐색적 연구)

  • Suh, Yong-Gu;Lee, Eun-Kyung
    • Journal of Global Scholars of Marketing Science
    • /
    • v.18 no.3
    • /
    • pp.1-25
    • /
    • 2008
  • Ⅰ. Introduction Retailers in the 21st century are being told that future retailers are those who can execute seamless multi-channel access. The reason is that retailers should be where shoppers want them, when they want them: anytime, anywhere, and in multiple formats. Multi-channel access is considered one of the top 10 trends for all business in the next decade (Patricia T. Warrington, et al., 2007), and most firms use both direct and indirect channels in their markets. Given this trend, we need to evaluate channel equity more systematically than before, as this issue is expected to get more attention from consumers as well as from brand managers. Consumers are becoming very confused about where to shop for durable goods, as there are at least 6-7 retail options. On the other hand, manufacturers have to deal with category killers, their dealer networks, Internet shopping malls, and other distribution channels, and they hope their retail channels behave like extensions of their own companies. They would like their products to be foremost in the retailer's mind: the first to be proposed and effectively communicated to potential customers. To make this hope a reality, they should know each channel's advantages and disadvantages from consumer perspectives. In addition, customer satisfaction is the key determinant of retail customer loyalty. However, there is only a little research on the effects of shopping satisfaction and perceptions on consumers' channel choices. The purpose of this study was to assess Korean consumers' channel choice and satisfaction towards the channels they prefer to use for electronic goods shopping. The Korean electronic goods retail market is a good example of a multi-channel shopping environment.
As the Korean retail market has been undergoing significant structural changes since it was opened to global retailers in 1996, new formats such as hypermarkets, Internet shopping malls and category killers have arrived over the last decade. Korean electronic goods shoppers have seven major channels: (1) category killers, (2) hypermarkets, (3) manufacturer dealer shops, (4) Internet shopping malls, (5) department stores, (6) TV home-shopping, and (7) speciality shopping arcades. The Korean retail sector has been modernized with amazing speed over the last decade. An overall summary of the major retail channels is as follows: hypermarkets have been the number 1 retailer type in sales volume since 2003; non-store retailing has been number 2 since 2007; department stores are now number 3; and small-scale category killers are growing rapidly, in the area of electronics and office products in particular. We try to evaluate each channel's equity using a consumer survey. The survey was done by telephone interview with 1,000 housewives living nationwide. Sampling was done according to the 2005 national census, and the average interview time was 10 to 15 minutes. Ⅱ. Research Summary We have found that the seven major retail channels compete with each other within Korean consumers' minds in terms of price and service. Each channel seems to have its unique selling points. Department stores were perceived as the best electronic goods shopping destinations due to after-sales service. Internet shopping malls were perceived as the convenient channel owing to price checking. Category killers and hypermarkets were more attractive in both price merits and location convenience. On the other hand, manufacturers' dealer networks were pulling customers mainly by location and after-sales service. Category killers and hypermarkets were the most popular retail channels for Korean consumers. However, category killers compete mainly with department stores and shopping arcades, while hypermarkets tend to compete with Internet and TV home-shopping channels.
Regarding channel satisfaction, the top 3 channels were service-driven retailers: department stores (4.27), dealer shops (4.21), and Internet shopping malls (4.21). Speciality shopping arcades (3.98) were the least satisfying channels for Korean consumers. Ⅲ. Implications We try to identify the whole picture of the multi-channel retail shopping environment and its implications in the context of Korean electronic goods. From manufacturers' perspectives, multi-channel distribution may cause channel conflicts. Furthermore, inter-channel competition draws much more attention as hypermarkets and category killers have grown rapidly in recent years. At the same time, from consumers' perspectives, 'buy where' is becoming an important buying decision, as it decides the level of shopping satisfaction. We need to develop the concept of 'channel equity' to manage multi-channel distribution effectively. Firms should measure and monitor their prime channel equity on a regular basis to maximize their channel potential. A prototype channel equity positioning map has been developed. We expect more studies to develop the concept of 'channel equity' in the future.


DEVELOPMENT OF STATEWIDE TRUCK TRAFFIC FORECASTING METHOD BY USING LIMITED O-D SURVEY DATA (한정된 O-D조사자료를 이용한 주 전체의 트럭교통예측방법 개발)

  • 박만배
    • Proceedings of the KOR-KST Conference
    • /
    • 1995.02a
    • /
    • pp.101-113
    • /
    • 1995
  • The objective of this research is to test the feasibility of developing a statewide truck traffic forecasting methodology for Wisconsin by using Origin-Destination surveys, traffic counts, classification counts, and other data that are routinely collected by the Wisconsin Department of Transportation (WisDOT). Development of a feasible model will permit estimation of future truck traffic for every major link in the network. This will provide the basis for improved estimation of future pavement deterioration. Pavement damage rises exponentially as axle weight increases, and trucks are responsible for most of the traffic-induced damage to pavement. Consequently, forecasts of truck traffic are critical to pavement management systems. The Pavement Management Decision Supporting System (PMDSS) prepared by WisDOT in May 1990 combines pavement inventory and performance data with a knowledge base consisting of rules for evaluation, problem identification and rehabilitation recommendation. Without a reasonable truck traffic forecasting methodology, PMDSS is not able to project pavement performance trends in order to make assessments and recommendations in future years. However, none of WisDOT's existing forecasting methodologies has been designed specifically for predicting truck movements on a statewide highway network. For this research, the Origin-Destination survey data available from WisDOT, including two stateline areas, one county, and five cities, are analyzed and the zone-to-zone truck trip tables are developed. The resulting Origin-Destination Trip Length Frequency (OD TLF) distributions by trip type are applied to the Gravity Model (GM) for comparison with comparable TLFs from the GM. The gravity model is calibrated to obtain friction factor curves for the three trip types: Internal-Internal (I-I), Internal-External (I-E), and External-External (E-E). Both "macro-scale" calibration and "micro-scale" calibration are performed.
The comparison of the statewide GM TLF with the OD TLF for the macro-scale calibration does not provide suitable results because the available OD survey data do not represent an unbiased sample of statewide truck trips. For the "micro-scale" calibration, "partial" GM trip tables that correspond to the OD survey trip tables are extracted from the full statewide GM trip table. These "partial" GM trip tables are then merged and a partial GM TLF is created. The GM friction factor curves are adjusted until the partial GM TLF matches the OD TLF. Three friction factor curves, one for each trip type, resulting from the micro-scale calibration produce a reasonable GM truck trip model. A key methodological issue for GM calibration involves the use of multiple friction factor curves versus a single friction factor curve for each trip type in order to estimate truck trips with reasonable accuracy. A single friction factor curve for each of the three trip types was found to reproduce the OD TLFs from the calibration database. Given the very limited trip generation data available for this research, additional refinement of the gravity model using multiple friction factor curves for each trip type was not warranted. In traditional urban transportation planning studies, the zonal trip productions and attractions and region-wide OD TLFs are available. However, for this research, the information available for the development of the GM model is limited to Ground Counts (GC) and a limited set of OD TLFs. The GM is calibrated using the limited OD data, but the OD data are not adequate to obtain good estimates of truck trip productions and attractions. Consequently, zonal productions and attractions are estimated using zonal population as a first approximation. Then, Selected Link based (SELINK) analyses are used to adjust the productions and attractions and possibly recalibrate the GM.
The SELINK adjustment process involves identifying the origins and destinations of all truck trips that are assigned to a specified "selected link" as the result of a standard traffic assignment. A link adjustment factor is computed as the ratio of the actual volume for the link (ground count) to the total assigned volume. This link adjustment factor is then applied to all of the origin and destination zones of the trips using that "selected link". Selected link based analyses are conducted by using both 16 selected links and 32 selected links. The SELINK analysis using 32 selected links provides the least %RMSE in the screenline volume analysis. In addition, the stability of the GM truck estimating model is preserved by using 32 selected links with three SELINK adjustments; that is, the GM remains calibrated despite substantial changes in the input productions and attractions. The coverage of zones provided by 32 selected links is satisfactory. Increasing the number of repetitions beyond four is not reasonable because the stability of the GM model in reproducing the OD TLF reaches its limits. The total volume of truck traffic captured by 32 selected links is 107% of total trip productions. But more importantly, SELINK adjustment factors for all of the zones can be computed. Evaluation of the travel demand model resulting from the SELINK adjustments is conducted by using screenline volume analysis, functional class and route specific volume analysis, area specific volume analysis, production and attraction analysis, and Vehicle Miles of Travel (VMT) analysis. Screenline volume analysis using four screenlines with 28 check points is used for evaluation of the adequacy of the overall model. The total trucks crossing the screenlines are compared to the ground count totals. LV/GC ratios of 0.958 by using 32 selected links and 1.001 by using 16 selected links are obtained.
The %RMSE for the four screenlines is inversely proportional to the average ground count totals by screenline. The magnitude of %RMSE for the four screenlines resulting from the fourth and last GM run by using 32 and 16 selected links is 22% and 31% respectively. These results are similar to the overall %RMSE achieved for the 32 and 16 selected links themselves, of 19% and 33% respectively. This implies that the SELINK analysis results are reasonable for all sections of the state. Functional class and route specific volume analysis is possible by using the available 154 classification count check points. The truck traffic crossing the Interstate highways (ISH) with 37 check points, the US highways (USH) with 50 check points, and the State highways (STH) with 67 check points is compared to the actual ground count totals. The magnitude of the overall link volume to ground count ratio by route does not provide any specific pattern of over- or underestimation. However, the %RMSE for the ISH shows the least value while that for the STH shows the largest value. This pattern is consistent with the screenline analysis and the overall relationship between %RMSE and ground count volume groups. Area specific volume analysis provides another broad statewide measure of the performance of the overall model. The truck traffic in the North area with 26 check points, the West area with 36 check points, the East area with 29 check points, and the South area with 64 check points is compared to the actual ground count totals. The four areas show similar results. No specific patterns in the LV/GC ratio by area are found. In addition, the %RMSE is computed for each of the four areas. The %RMSEs for the North, West, East, and South areas are 92%, 49%, 27%, and 35% respectively, whereas the average ground counts are 481, 1383, 1532, and 3154 respectively. As for the screenline and volume range analyses, the %RMSE is inversely related to average link volume.
The SELINK adjustments of productions and attractions resulted in a very substantial reduction in the total in-state zonal productions and attractions. The initial in-state zonal trip generation model can now be revised with a new trip production trip rate (total adjusted productions/total population) and a new trip attraction trip rate. Revised zonal production and attraction adjustment factors can then be developed that only reflect the impact of the SELINK adjustments that cause increases or decreases from the revised zonal estimates of productions and attractions. Analysis of the revised production adjustment factors is conducted by plotting the factors on the state map. The east area of the state, including the counties of Brown, Outagamie, Shawano, Winnebago, Fond du Lac, and Marathon, shows comparatively large values of the revised adjustment factors. Overall, both small and large values of the revised adjustment factors are scattered around Wisconsin. This suggests that more independent variables beyond just population are needed for the development of the heavy truck trip generation model. More independent variables, including zonal employment data (office employees and manufacturing employees) by industry type, zonal private trucks owned, and zonal income data, which are not available currently, should be considered. A plot of the frequency distribution of the in-state zones as a function of the revised production and attraction adjustment factors shows the overall adjustment resulting from the SELINK analysis process. Overall, the revised SELINK adjustments show that the productions for many zones are reduced by a factor of 0.5 to 0.8 while the productions for a relatively few zones are increased by factors from 1.1 to 4, with most of the factors in the 3.0 range. No obvious explanation for the frequency distribution could be found. The revised SELINK adjustments overall appear to be reasonable.
The heavy truck VMT analysis is conducted by comparing the 1990 heavy truck VMT that is forecasted by the GM truck forecasting model, 2.975 billion, with the WisDOT computed data. This gives an estimate that is 18.3% less than the WisDOT computation of 3.642 billion VMT. The WisDOT estimates are based on sampling the link volumes for USH, STH, and CTH. This implies potential error in sampling the average link volume. The WisDOT estimate of heavy truck VMT cannot be tabulated by the three trip types, I-I, I-E (E-I), and E-E. In contrast, the GM forecasting model shows that the proportion of E-E VMT out of total VMT is 21.24%. In addition, tabulation of heavy truck VMT by route functional class shows that the proportion of truck traffic traversing the freeways and expressways is 76.5%. Only 14.1% of total freeway truck traffic is I-I trips, while 80% of total collector truck traffic is I-I trips. This implies that freeways are traversed mainly by I-E and E-E truck traffic while collectors are used mainly by I-I truck traffic. Other tabulations, such as average heavy truck speed by trip type, average travel distance by trip type, and the VMT distribution by trip type, route functional class and travel speed, are useful information for highway planners to understand the characteristics of statewide heavy truck trip patterns. Heavy truck volumes for the target year 2010 are forecasted by using the GM truck forecasting model. Four scenarios are used. For better forecasting, ground-count-based segment adjustment factors are developed and applied. ISH 90 & 94 and USH 41 are used as example routes. The forecasting results using the ground-count-based segment adjustment factors are satisfactory for long range planning purposes, but additional ground counts would be useful for USH 41. Sensitivity analysis provides estimates of the impacts of the alternative growth rates, including information about changes in the trip types using key routes.
The network-based GM can easily model scenarios with different rates of growth in rural versus urban areas, small versus large cities, and in-state zones versus external stations.


Comparative Study on the Methodology of Motor Vehicle Emission Calculation by Using Real-Time Traffic Volume in the Kangnam-Gu (자동차 대기오염물질 산정 방법론 설정에 관한 비교 연구 (강남구의 실시간 교통량 자료를 이용하여))

  • 박성규;김신도;이영인
    • Journal of Korean Society of Transportation
    • /
    • v.19 no.4
    • /
    • pp.35-47
    • /
    • 2001
  • Traffic represents one of the largest sources of primary air pollutants in urban areas. As a consequence, numerous abatement strategies are being pursued to decrease the ambient concentration of pollutants. A characteristic of most of these strategies is a requirement for accurate data on both the quantity and spatial distribution of emissions to air, in the form of an atmospheric emission inventory database. In the case of traffic pollution, such an inventory must be compiled using activity statistics and emission factors for each vehicle type. The majority of inventories are compiled using passive data from either surveys or transportation models, and by their very nature they tend to be out of date by the time they are compiled. Current trends are towards integrating urban traffic control systems with assessments of the environmental effects of motor vehicles. In this study, a methodology for motor vehicle emission calculation using real-time traffic data was studied and applied to estimating CO emissions in a test area in Seoul. Traffic data, which are required on a street-by-street basis, are obtained from the induction loops of the traffic control system. The speed-related mass of CO emitted from vehicle tailpipes was calculated from the traffic-system data, considering traffic volume, vehicle composition, average velocity, and link length. The result was then compared with that of an emission calculation method based on the VKT (Vehicle Kilometers Travelled) of each vehicle category.
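A minimal sketch of the two estimation routes compared in the study: a speed-dependent per-link calculation versus a flat VKT-based one. The emission-factor curve and all numbers below are placeholders, not the study's measured factors.

```python
def co_emission_grams(volume_veh_h, link_length_km, avg_speed_kmh):
    """Link CO emission per hour: volume * speed-dependent emission factor * length.
    The emission-factor curve below is a hypothetical placeholder, not measured data."""
    # Hypothetical speed-dependent CO factor (g/veh-km):
    # high in congestion, lower when traffic is free-flowing
    factor = 60.0 / max(avg_speed_kmh, 5.0) + 2.0
    return volume_veh_h * factor * link_length_km

def co_emission_vkt(vkt_veh_km, fleet_avg_factor=4.0):
    """Alternative VKT-based estimate with a single fleet-average factor (g/veh-km)."""
    return vkt_veh_km * fleet_avg_factor

# Same link, two methods: 1,200 veh/h over a 0.5 km link at a congested 20 km/h
speed_based = co_emission_grams(1200, 0.5, 20.0)   # 1200 * 5.0 g/veh-km * 0.5 km
vkt_based = co_emission_vkt(1200 * 0.5)            # 600 veh-km * 4.0 g/veh-km
```

The gap between the two estimates on congested links is exactly what a speed-aware, real-time inventory is meant to capture.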


A Polarization-based Frequency Scanning Interferometer and the Measurement Processing Acceleration based on Parallel Programing (편광 기반 주파수 스캐닝 간섭 시스템 및 병렬 프로그래밍 기반 측정 고속화)

  • Lee, Seung Hyun;Kim, Min Young
    • Journal of the Institute of Electronics and Information Engineers
    • /
    • v.50 no.8
    • /
    • pp.253-263
    • /
    • 2013
  • The Frequency Scanning Interferometry (FSI) system, one of the most promising optical surface measurement techniques, generally achieves superior optical performance compared with other 3-dimensional measuring methods, as its hardware structure is fixed in operation and only the light frequency is scanned within a specific spectral band, without vertical scanning of the target surface or the objective lens. An FSI system collects a set of interference fringe images by changing the frequency of the light source. It then transforms the intensity data of the acquired images into frequency information and calculates the height profile of target objects by frequency analysis based on the Fast Fourier Transform (FFT). However, it still suffers from optical noise on target surfaces and relatively long processing times due to the number of images acquired in the frequency scanning phase. 1) A Polarization-based Frequency Scanning Interferometry (PFSI) system is proposed for robustness to optical noise. It consists of a tunable laser for the light source, a ${\lambda}/4$ plate in front of the reference mirror, a ${\lambda}/4$ plate in front of the target object, a polarizing beam splitter, a polarizer in front of the image sensor, a polarizer in front of the fiber-coupled light source, and a ${\lambda}/2$ plate between the PBS and the polarizer of the light source. Using the proposed system, the problem of low-contrast fringe images can be solved by means of the polarization technique, and the light distribution of the object beam and reference beam can be controlled. 2) A signal processing acceleration method is proposed for PFSI based on a parallel processing architecture, consisting of parallel processing hardware and software such as the Graphics Processing Unit (GPU) and the Compute Unified Device Architecture (CUDA). As a result, the processing time reaches the tact-time level of real-time processing.
Finally, the proposed system is evaluated in terms of accuracy and processing speed through a series of experiment and the obtained results show the effectiveness of the proposed system and method.
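The FFT-based height recovery step described above can be illustrated for a single pixel: as the source frequency is scanned, the intensity oscillates at a rate proportional to the optical path difference, and locating the FFT peak recovers that difference. All constants below are assumed values, not the paper's experimental parameters.

```python
import numpy as np

# In FSI, each pixel's fringe intensity oscillates as the optical frequency is
# scanned; the oscillation rate encodes the optical path difference (OPD).
c = 3e8                                   # speed of light, m/s
n_samples = 512                           # images acquired in the scan (assumed)
delta_nu = 2e9                            # frequency step per image, Hz (assumed)
opd = 0.003                               # true OPD of this pixel, m (assumed)

k = np.arange(n_samples)
# Ideal fringe signal at one pixel: I(k) = 1 + cos(2*pi * OPD/c * delta_nu * k)
intensity = 1.0 + np.cos(2 * np.pi * opd / c * delta_nu * k)

# FFT of the mean-removed signal; the peak bin gives cycles per scan
spectrum = np.abs(np.fft.rfft(intensity - intensity.mean()))
peak_bin = int(np.argmax(spectrum))
# bin index -> oscillations per sample -> OPD estimate
opd_estimate = peak_bin / n_samples * c / delta_nu
```

The bin quantization limits resolution, which is why practical systems refine the peak location; the paper's GPU/CUDA acceleration parallelizes exactly this per-pixel transform over the whole image stack.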

Design and Implementation of Game Server using the Efficient Load Balancing Technology based on CPU Utilization (게임서버의 CPU 사용율 기반 효율적인 부하균등화 기술의 설계 및 구현)

  • Myung, Won-Shig;Han, Jun-Tak
    • Journal of Korea Game Society
    • /
    • v.4 no.4
    • /
    • pp.11-18
    • /
    • 2004
  • The on-line games of the past were played by only two persons exchanging data over one-to-one connections, whereas recent ones (e.g. MMORPGs: Massively Multi-player Online Role-playing Games) enable tens of thousands of people to be connected simultaneously. Specifically, Korea has established an excellent network infrastructure that is hard to find anywhere else in the world: almost every household has high-speed Internet access. What made this possible was, in part, a high density of population that accelerated the formation of good Internet infrastructure. However, this rapid increase in the use of on-line games may lead to surging traffic exceeding the limited Internet communication capacity, so that the connection to the games becomes unstable or the server fails. Expanding the servers could solve this problem, but this measure is very costly. To deal with this problem, the present study proposes a load distribution technology that connects, in the form of local clusters, the game servers divided by the content they serve in each on-line game, reduces the load on specific servers using a load balancer, and enhances server performance for efficient operation. In this paper, a cluster system is proposed in which each game server in the system provides a different content service and loads are distributed efficiently using game server resource information such as CPU utilization. Game servers having different contents are mutually connected and managed with a network file system to maintain the information consistency required to support resource information updates, deletions, and additions. Simulation studies show that our method performs better than other traditional methods: in terms of response time, our method shows shorter latency than RR (Round Robin) and LC (Least Connection) by about 12% and 10% respectively.
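The CPU-utilization-based selection described above, contrasted with Round Robin, can be sketched as follows; the server names, utilization figures, and class names are illustrative assumptions, not the paper's implementation.

```python
import itertools

class CpuAwareBalancer:
    """Route each new connection to the server reporting the lowest CPU utilization.
    Stands in for the paper's resource-information-based distribution."""
    def __init__(self, servers):
        self.cpu = {name: 0.0 for name in servers}

    def report(self, name, utilization):
        # Periodic resource-information update from each content server
        self.cpu[name] = utilization

    def pick(self):
        return min(self.cpu, key=self.cpu.get)

class RoundRobinBalancer:
    """Baseline RR policy: cycle through servers regardless of load."""
    def __init__(self, servers):
        self._cycle = itertools.cycle(servers)

    def pick(self):
        return next(self._cycle)

# Hypothetical content servers of one game cluster
servers = ["quest", "battle", "trade"]
lb = CpuAwareBalancer(servers)
lb.report("quest", 0.85)
lb.report("battle", 0.30)
lb.report("trade", 0.60)
chosen = lb.pick()                        # least-loaded content server

rr = RoundRobinBalancer(servers)
rr_order = [rr.pick() for _ in range(4)]  # ignores utilization entirely
```

The design point the paper makes is that RR and LC ignore actual resource consumption, which varies by content; routing on reported CPU utilization avoids piling connections onto an already hot server.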


Numerical Simulation of Residual Currents and Low Salinity Dispersions by Changjiang Discharge in the Yellow Sea and the East China Sea (황해 및 동중국해에서 양쯔강의 담수유입량 변동에 따른 잔차류 및 저염분 확산 수치모의)

  • Lee, Dae-In;Kim, Jong-Kyu
    • Journal of the Korean Society for Marine Environment & Energy
    • /
    • v.10 no.2
    • /
    • pp.67-85
    • /
    • 2007
  • A three-dimensional hydrodynamic model with a fine grid is applied to simulate the barotropic tides, tidal currents, residual currents and salinity dispersion in the Yellow Sea and the East China Sea. Data inputs include seasonal hydrography, mean wind and river input, and oceanic tides. Computed tidal distributions of the four major tides ($M_2,\;S_2,\;K_1$ and $O_1$) are presented, and the results are in good agreement with observations in the domain. The model reproduces the tidal charts well. The tidal residual current is relatively strong around the west coast of Korea, including Cheju Island, and the southern coast of China. The current by $M_2$ has a maximum speed of 10 cm/s in the vicinity of Cheju Island, with an anti-clockwise circulation in the Yellow Sea. The general tendency of the current, however, is to flow eastward in the South Sea. The surface residual current simulated with $M_2$ and with $M_2+S_2+K_1+O_1$ tidal forcing shows slightly different patterns in the East China Sea. The model shows that the southerly wind reduces the southward current created by freshwater discharge. In summer, during high runoff (mean Yangtze discharge about $50,000\;m^3/s$), a low-salinity plume-like structure (with S < 30.0 psu) extending some 160 km toward the northeast was found, and Changjiang Diluted Water (CDW), below salinity 26 psu, was found within about 95 km. The offshore dispersion of the Changjiang outflow water is enhanced by the prevailing southerly wind. It is estimated that the inertia of the river discharge alone cannot reach the sea around Cheju Island. It is noted that the spatial and temporal distributions of salinity and other materials are controlled by a mixture of Changjiang discharge, prevailing wind, advection by the flowing warm current, and tidal current.


UNDERWATER DISTRIBUTION OF VESSEL NOISE (선박소음의 수중분포에 관한 연구)

  • PARK Jung Hee
    • Korean Journal of Fisheries and Aquatic Sciences
    • /
    • v.10 no.4
    • /
    • pp.227-235
    • /
    • 1977
  • The noise pressure scattered underwater on account of the engine revolution of a pole and liner, Kwan-Ak-San(G. T. 234.96), was measured at the locations of Lat. $34^{\circ}47'N$, Long. $128^{\circ}53'E$ on the 16th of August 1976 and Lat. $34^{\circ}27'N$, Long. $128^{\circ}23'E$ on the 28th of July, 1977. The noise pressure passed through each observation point (Nos. 1 to 5), which was established at every 10m distance at circumference of outside hull was recorded when the vessel was cruising and drifted. In case of drifting, the revolution of engine was fixed at 600 r. p. m. and the noise was recorded at every 10 m distance apart from observation point No. 3 in both horizontal and vertical directions with $90^{\circ}$ toward the stern-bow line. In case of cruising, the engine was kept in a full speed at 700 r.p.m. and the sounds passed through underwater in 1 m depth were also recorded while the vessel moved back and forth. The noise pressure was analyzed with sound level meter (Bruel & Kjar 2205, measuring range 37-140 dB) at the anechoic chamber in the Institute of Marine Science, National Fisheries University of Busan. The frequency and sound waves of the noise were analyzed in the Laboratory of Navigation Instrument. From the results, the noise pressure was closely related to the engine revolution shelving that the noise pressure marked 100 dB when .400 r. p. m. and increase of 100 r. p. m. resulted in 1 dB increase in noise pressure and the maximum appeared at 600 r. p. m. (Fig.5). When the engine revolution was fixed at 700 r. p. m., the noise pressures passed through each observation point (Nos. 1 to 5) placed at circumference of out side hull were 75,78,76,74 and 68 dB, the highest at No.2, in case of keeping under way while 75,76,77,70 and 67 dB, the highest at No.3 in case of drifting respectively (Fig.5). 
When the vessel plied a 1,400 m course at 700 rpm, the noise pressures were 67 dB at 0 m, 64 dB at 600 m and 56 dB at 1,400 m on the forward run, and 72 dB at 0 m, 66 dB at 600 m and 57 dB at 1,400 m on the backward run, indicating Doppler effects of 5 dB at 0 m and 3 dB at 200 m (Fig. 6). With the vessel drifting and the engine speed held at 600 rpm, the noise pressures at 1, 10, 20, 30, 40 and 50 m depth below observation point No. 7 (20 m horizontally from point No. 3) were 68, 75, 62, 59, 55 and 51 dB, respectively (Fig. 8-B), whereas the noise pressures at observation points Nos. 6, 7, 8, 9 and 10 at 10 m depth were 64, 75, 55, 58, 58 and 52 dB, respectively (Fig. 8-A).
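The quantitative relations reported above can be sketched numerically. The values below (100 dB at 400 rpm, ~1 dB per additional 100 rpm, and the forward/backward noise pressures from Fig. 6) are taken from the abstract; the linear form of the rpm model and the function names are illustrative assumptions, not the authors' fitted curve.

```python
# Sketch of the rpm-to-noise-pressure relation reported in the abstract:
# roughly 100 dB at 400 rpm, gaining about 1 dB per additional 100 rpm.
# A simple linear model for illustration only.

def noise_pressure_db(rpm: float) -> float:
    """Estimated radiated noise pressure (dB) for a given engine speed."""
    return 100.0 + (rpm - 400.0) / 100.0  # +1 dB per 100 rpm above 400 rpm

# Forward/backward noise pressures (dB) at 700 rpm, from the abstract (Fig. 6).
forward = {0: 67, 600: 64, 1400: 56}    # forward run, by distance (m)
backward = {0: 72, 600: 66, 1400: 57}   # backward run, by distance (m)

# Difference between the two runs at each distance (Doppler-related offset).
doppler_offset = {d: backward[d] - forward[d] for d in forward}
print(noise_pressure_db(600), doppler_offset)
```

At 0 m the offset reproduces the 5 dB Doppler effect cited in the abstract; the offsets shrink with distance as the attenuated signals converge.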


EFFECT OF CHLORHEXIDINE ON MICROTENSILE BOND STRENGTH OF DENTIN BONDING SYSTEMS (Chlorhexidine 처리가 상아질 접착제의 미세인장결합강도에 미치는 영향)

  • Oh, Eun-Hwa;Choi, Kyoung-Kyu;Kim, Jong-Ryul;Park, Sang-Jin
    • Restorative Dentistry and Endodontics
    • /
    • v.33 no.2
    • /
    • pp.148-161
    • /
    • 2008
  • The purpose of this study was to evaluate the effect of chlorhexidine (CHX) on the microtensile bond strength (${\mu}TBS$) of dentin bonding systems. Dentin collagenolytic and gelatinolytic activities can be suppressed by protease inhibitors, indicating that inhibition of MMPs (matrix metalloproteinases) could be beneficial in preserving hybrid layers; CHX is known as an inhibitor of MMP activity in vitro. The experiment proceeded as follows. First, flat occlusal surfaces were prepared on the mid-coronal dentin of extracted third molars. The GI (glass ionomer) group was treated with dentin conditioner and then with 2% CHX. The SM (Scotchbond Multipurpose) and SB (Single Bond) groups were treated with CHX after acid-etching with 37% phosphoric acid. The TS (Clearfil Tri-S) group was treated with CHX and then with adhesive. Hybrid composite Z-250 and resin-modified glass ionomer Fuji-II LC were built up on the experimental dentin surfaces. Half of the specimens were subjected to 10,000 thermocycles, while the others were tested immediately. A two-way ANOVA was performed on the resulting data to assess the ${\mu}TBS$ before and after thermocycling and the effect of CHX; all statistical tests were carried out at the 95% confidence level. The failure mode of the test specimens was observed under a scanning electron microscope (SEM). Within the limits of this study, the results were as follows. 1. In all experimental groups treated with 2% chlorhexidine, the microtensile bond strength increased, and thermocycling decreased the microtensile bond strength (P > 0.05). 2. Compared with the thermocycled groups without chlorhexidine, those with both thermocycling and chlorhexidine showed higher microtensile bond strength, with significant differences especially in the GI and TS groups. 3. SEM analysis of the failure mode distribution revealed adhesive failure at the hybrid layer in most specimens, and a shift of the failure site from the bottom to the top of the hybrid layer in the chlorhexidine groups. Application of 2% chlorhexidine after acid-etching proved to preserve the durability of the hybrid layer and the microtensile bond strength of dentin bonding systems.
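The study's statistical design (two factors: CHX application and thermocycling) can be sketched as a balanced two-way ANOVA. This is a minimal from-scratch illustration of the analysis type named in the abstract; the bond-strength numbers below are synthetic placeholders, not the study's data, and the factor labels are assumed names.

```python
# Minimal balanced two-way ANOVA sketch (factor A: CHX yes/no,
# factor B: thermocycled vs immediate). Synthetic example data only.

data = {
    ("CHX", "thermo"):     [32.0, 30.5, 31.2],
    ("CHX", "immediate"):  [35.1, 34.2, 36.0],
    ("none", "thermo"):    [24.3, 25.1, 23.8],
    ("none", "immediate"): [30.2, 29.5, 31.0],
}

def two_way_anova(data):
    cells = list(data)
    n = len(next(iter(data.values())))              # replicates per cell (balanced)
    a_levels = sorted({a for a, _ in cells})
    b_levels = sorted({b for _, b in cells})
    mean = lambda xs: sum(xs) / len(xs)
    grand = mean([x for v in data.values() for x in v])
    a_mean = {a: mean([x for (ai, b) in cells if ai == a for x in data[(ai, b)]])
              for a in a_levels}
    b_mean = {b: mean([x for (a, bi) in cells if bi == b for x in data[(a, bi)]])
              for b in b_levels}
    cell_mean = {c: mean(data[c]) for c in cells}

    # Sums of squares for main effects, interaction, and error.
    ss_a = n * len(b_levels) * sum((a_mean[a] - grand) ** 2 for a in a_levels)
    ss_b = n * len(a_levels) * sum((b_mean[b] - grand) ** 2 for b in b_levels)
    ss_ab = n * sum((cell_mean[(a, b)] - a_mean[a] - b_mean[b] + grand) ** 2
                    for a, b in cells)
    ss_err = sum((x - cell_mean[c]) ** 2 for c in cells for x in data[c])

    df_a, df_b = len(a_levels) - 1, len(b_levels) - 1
    df_err = len(cells) * (n - 1)
    ms_err = ss_err / df_err
    return {
        "F_chx": (ss_a / df_a) / ms_err,
        "F_thermo": (ss_b / df_b) / ms_err,
        "F_interaction": (ss_ab / (df_a * df_b)) / ms_err,
    }

print(two_way_anova(data))
```

Each F statistic would then be compared against the F distribution's critical value at the 95% confidence level, as in the study's tests.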

A Study on the Stability and Sludge Energy Efficiency Evaluation of Torrefied Wood Flour Natural Material Based Coagulant (반탄화목분 천연재료 혼합응집제의 안정성 및 슬러지 에너지화 가능성 평가에 관한 연구)

  • PARK, Hae Keum;KANG, Seog Goo
    • Journal of the Korean Wood Science and Technology
    • /
    • v.48 no.3
    • /
    • pp.271-282
    • /
    • 2020
  • Sewage treatment plants are part of the social infrastructure of cities; according to the 2017 sewage statistics, the sewage distribution rate in Korea has reached 94%. PAC (poly aluminum chloride) accounts for 58% of coagulant use in Korean sewage treatment plants. Although it contains a large amount of impurities (heavy metals) relative to the quality standards, there have been insufficient efforts to reinforce those standards or to technically improve its quality, which has resulted in secondary pollution problems from injecting excessive coagulant. The increased use of chemicals has also raised the annual amount of sewage sludge generated as of 2017, along with the need to reuse sludge. This study therefore aims to verify the possibility of reusing sludge by evaluating the heavy-metal stability of a coagulant mixture of torrefied wood flour and natural materials injected during water treatment, and by evaluating the sedimentation and heating value of the resulting sewage sludge. In an analysis of heavy metals (Cr, Fe, Zn, Cu, Cd, As, Pb, and Ni) in the coagulant mixture and in PAC (10%), Cr, Cd, Pb, Ni, and Hg were not detected in the coagulant mixture. As for Zn, while the concentration permitted in the drinking-water quality standards is 3 mg/L, only 0.007 mg/L was detected in the coagulant mixture. Fe, Cu, and As were found at up to more than double the amounts in PAC (10%) compared with the coagulant mixture. An analysis of sludge sedimentation also found that the coagulant mixture settled up to twice as fast as the conventional coagulant, PAC (10%). The dry-basis lower heating value of sewage sludge produced by injecting the coagulant mixture was 3,378 kcal/kg, while that of sludge generated with PAC (10%) was 3,171 kcal/kg; although both met the requirements for use as auxiliary fuel at thermal power plants, the coagulant mixture developed in this study secured a heating value about 200 kcal/kg higher. Therefore, using the coagulant mixture rather than PAC (10%) for water treatment is expected to be more environmentally stable and effective, as it generates sludge with better stability against heavy metals, faster sedimentation, and a higher heating value.
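The heating-value comparison above is simple arithmetic; a quick check, using only the two dry-basis lower heating values stated in the abstract (the abstract does not give a numeric auxiliary-fuel threshold, so none is assumed here):

```python
# Dry-basis lower heating values of sewage sludge (kcal/kg), from the abstract.
LHV_MIXTURE = 3378   # sludge from the torrefied wood flour coagulant mixture
LHV_PAC = 3171       # sludge from conventional PAC (10%)

gain = LHV_MIXTURE - LHV_PAC
print(f"heating-value gain: {gain} kcal/kg")  # 207 kcal/kg, i.e. the ~200 reported
```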