• Title/Summary/Keyword: Data Management Method

Establishment of Sample Preparation Method to Enhance Recovery of Food-borne Pathogens from Produce (농산물 중 식중독세균 검출을 위한 전처리법 확립)

  • Kim, Se-Ri;Choi, Song-Yi;Seo, Min-Kyoung;Lee, Ji-Young;Kim, Won-Il;Yoon, Yohan;Ryu, Kyoung Yul;Yun, Jong-Cul;Kim, Byung-Seok
    • Journal of Food Hygiene and Safety
    • /
    • v.28 no.3
    • /
    • pp.279-285
    • /
    • 2013
  • To establish a sample preparation method for the detection of food-borne pathogens from lettuce, perilla leaves, cucumber, pepper, and cherry tomato, the influences of diluent composition, processing time, and the proportion of diluent to sample were examined. Each produce item was inoculated with 6.0 log $CFU/cm^2$ of Escherichia coli O157:H7, Salmonella Typhimurium, Listeria monocytogenes, Staphylococcus aureus, and Bacillus cereus, and then treated with 0.1% peptone water or D/E neutralizing broth. Processing times were 30, 60, 90, and 120 s, and the proportions of diluent to sample were 2 : 1, 4 : 1, 9 : 1, and 19 : 1. The number of bacteria recovered after treatment with D/E neutralizing broth was higher than that after treatment with 0.1% peptone water (P<0.05). In cherry tomato, the population of S. Typhimurium recovered with D/E broth was higher than that recovered with 0.1% peptone water by 1.05 log $CFU/cm^2$ (P<0.05). No difference in pathogen counts was observed across processing times. The optimum proportions of diluent to perilla leaf, iceberg lettuce, cucumber, green pepper, and tomato were 9 : 1, 4 : 1, 2 : 1, 2 : 1, and 2 : 1, respectively. These data suggest that D/E neutralizing broth should be recommended as the diluent, and that the diluent volume applied to produce should be determined in proportion to produce surface area per weight (g).
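
As a small illustration of the quantities this abstract works with, the sketch below computes the diluent volume implied by the optimum diluent-to-sample ratios and converts a plate count back to log CFU/cm^2. It is a hedged example: the function names, plating volume, colony counts, and surface area are illustrative assumptions, not values from the study.

```python
# Minimal sketch (not the paper's protocol): diluent volume from a diluent:sample ratio,
# and back-calculation of log CFU/cm^2 from a plate count.
import math

# Optimum diluent:sample ratios reported in the abstract (diluent parts per 1 part sample).
OPTIMUM_RATIO = {
    "perilla leaf": 9, "iceberg lettuce": 4, "cucumber": 2, "green pepper": 2, "cherry tomato": 2,
}

def diluent_volume_ml(sample_weight_g: float, produce: str) -> float:
    """Diluent volume (mL), assuming 1 g of sample pairs with 1 mL of diluent."""
    return OPTIMUM_RATIO[produce] * sample_weight_g

def log_cfu_per_cm2(colonies: int, plated_ml: float, diluent_ml: float, area_cm2: float) -> float:
    """Back-calculate log CFU/cm^2 from a single-dilution plate count (illustrative)."""
    total_cfu = colonies * (diluent_ml / plated_ml)
    return math.log10(total_cfu / area_cm2)

# Example: 25 g of cucumber rinsed in D/E neutralizing broth at the 2:1 optimum ratio.
vol = diluent_volume_ml(25.0, "cucumber")            # 50 mL
print(vol, log_cfu_per_cm2(180, 0.1, vol, 90.0))     # hypothetical count and surface area
```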

An Embedding/Extracting Method of Audio Watermark Information for High Quality Stereo Music (고품질 스테레오 음악을 위한 오디오 워터마크 정보 삽입/추출 기술)

  • Bae, Kyungyul
    • Journal of Intelligence and Information Systems
    • /
    • v.24 no.2
    • /
    • pp.21-35
    • /
    • 2018
  • Since the introduction of MP3 players, CD recordings have gradually been vanishing, and the music consuming environment has been shifting to mobile devices. The introduction of smart devices has increased the utilization of music through the playback, mass storage, and search functions integrated into smartphones and tablets. At the time of the initial MP3 player supply, the bitrate of compressed music content was generally 128 Kbps; however, as the demand for high quality music increased, 384 Kbps content appeared. Recently, music content in the FLAC (Free Lossless Audio Codec) format, which uses lossless compression, is becoming popular. The download services of many music sites in Korea are divided into unlimited downloads with technical protection and limited downloads without technical protection. Digital Rights Management (DRM) technology is used as the technical protection measure for unlimited downloads, but such music can only be played on authenticated devices with DRM installed; even music purchased by the user cannot be used on other devices. On the contrary, for music that is limited in quantity but not technically protected, there is no way to take action against anyone who redistributes it, and for high quality music such as FLAC, the loss is greater. In this paper, the author proposes an audio watermarking technology for copyright protection of high quality stereo music. Two kinds of information, "Copyright" and "Copy_free", are generated using a turbo code. The two watermarks are composed of 9 bytes (72 bits), and when the turbo code is applied for error correction, the amount of information to be inserted increases to 222 bits. The 222-bit watermark was then expanded to 1024 bits to be robust against additional errors and finally used as the watermark inserted into the stereo music. The turbo code allows the original data to be recovered as long as less than about 15% of the code is damaged by attacks on the watermarked content. Because the code is extended to 1024 bits and the 222 bits can be recovered from partially damaged content with higher probability, the watermark itself is made more resistant to attack. The proposed algorithm uses quantization in the DCT domain so that the watermark can be detected efficiently and the SNR can be improved when stereo music is converted into mono. As a result, the SNR exceeded 40 dB on average, a sound quality improvement of over 10 dB compared with traditional quantization methods; this is a very significant result because it corresponds to roughly a tenfold relative improvement in sound quality. In addition, samples shorter than 1 second are sufficient for extraction, and the watermark can be completely extracted from music samples of less than one second even after MP3 compression at a bitrate of 128 Kbps, whereas the conventional quantization method largely fails to extract the watermark even from 10-second samples, ten times longer. In this study, since the length of the watermark embedded into music is 72 bits, it provides sufficient capacity to embed the necessary information for music and enough bits to identify music distributed all over the world: $2^{72}$ is about $4 \times 10^{21}$, so it can be used as an identifier and applied to the copyright protection of high quality music services.
The proposed algorithm can be used not only for high quality audio but also for the development of watermarking algorithms for other multimedia such as UHD (Ultra High Definition) TV and high-resolution images. In addition, with the development of digital devices, users are demanding high quality music, and artificial intelligence assistants are emerging alongside high quality music and streaming services. The results of this study can be used to protect the rights of copyright holders in these industries.
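
The abstract describes quantization-based embedding in the DCT domain with a 72-bit payload expanded for error resilience. The sketch below shows a generic quantization-index-modulation (QIM) style embed/extract on block DCT coefficients in the same spirit; the block size, coefficient index, and quantization step are illustrative assumptions rather than the paper's parameters, and the turbo-code expansion is omitted.

```python
# A minimal QIM-style sketch of quantization-based watermark embedding in block DCT
# coefficients (illustrative parameters, not the paper's).
import numpy as np
from scipy.fft import dct, idct

BLOCK, COEFF, STEP = 1024, 16, 0.05   # samples per block, target coefficient, quantization step

def embed(signal: np.ndarray, bits: np.ndarray) -> np.ndarray:
    out = signal.copy()
    for i, bit in enumerate(bits):
        c = dct(out[i * BLOCK:(i + 1) * BLOCK], norm="ortho")
        # Quantize the chosen coefficient to a multiple of STEP (bit 0) or offset by STEP/2 (bit 1).
        q = np.round(c[COEFF] / STEP) * STEP
        c[COEFF] = q + (STEP / 2 if bit else 0.0)
        out[i * BLOCK:(i + 1) * BLOCK] = idct(c, norm="ortho")
    return out

def extract(signal: np.ndarray, n_bits: int) -> np.ndarray:
    bits = []
    for i in range(n_bits):
        c = dct(signal[i * BLOCK:(i + 1) * BLOCK], norm="ortho")
        frac = (c[COEFF] % STEP) / STEP            # near 0/1 -> bit 0, near 0.5 -> bit 1
        bits.append(1 if 0.25 <= frac < 0.75 else 0)
    return np.array(bits)

# Example: embed a 72-bit payload (the paper expands it to 1024 bits) into mono PCM samples.
rng = np.random.default_rng(0)
audio = rng.standard_normal(1024 * 72)
payload = rng.integers(0, 2, 72)
assert np.array_equal(extract(embed(audio, payload), 72), payload)
```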

A Study on Risk Assessment Method for Earthquake-Induced Landslides (지진에 의한 산사태 위험도 평가방안에 관한 연구)

  • Seo, Junpyo;Eu, Song;Lee, Kihwan;Lee, Changwoo;Woo, Choongshik
    • Journal of the Society of Disaster Information
    • /
    • v.17 no.4
    • /
    • pp.694-709
    • /
    • 2021
  • Purpose: In this study, an earthquake-induced landslide risk assessment was conducted to provide basic data for efficient and preemptive damage prevention, both for siting erosion control works before an earthquake and for predicting and prioritizing the restoration of damaged areas after one. Method: The study analyzed previous studies abroad to examine evaluation methodologies, derive evaluation factors, and examine the applicability of the landslide hazard map currently used in Korea. In addition, an earthquake-induced landslide hazard map was produced on a pilot basis for the fault zone and epicenter of the Pohang earthquake using a seismic attenuation relationship. Result: A review of earthquake-induced landslide risk assessment studies showed that China accounted for 44% of them, Italy 16%, the U.S. 15%, Japan 10%, and Taiwan 8%. As for the evaluation method, statistical models were the most common at 59%, followed by physical models at 23%. The factors frequently used in the statistical models were altitude, distance from the fault, gradient, slope aspect, country rock, and topographic curvature. Since Korea's landslide hazard map reflects topography, geology, and forest floor conditions, it was found reasonable to use it to evaluate the risk of earthquake-induced landslides. As a result of evaluating landslide risk based on the fault zone and epicenter in the Pohang area, the risk grades changed to reflect the impact of the earthquake. Conclusion: It is effective to use the landslide hazard map to evaluate the risk of earthquake-induced landslides at the regional scale. The risk map based on the fault zone is effective for selecting target sites for preventive erosion control works to prevent damage from earthquake-induced landslides. In addition, the risk map based on the epicenter can be used for efficient follow-up management, such as prioritizing damage prevention measures, investigating the status of landslide damage after an earthquake, or restoring damaged areas.
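
To make the "statistical model" style of assessment concrete, the sketch below combines the commonly cited factors (altitude, distance from the fault, gradient, curvature) into a weighted susceptibility score and bins it into grades. The weights, normalizations, and grade thresholds are purely illustrative assumptions, not values derived from the reviewed studies.

```python
# Hedged sketch of a factor-weighted landslide susceptibility score and grading.
import numpy as np

def risk_score(altitude_m, fault_dist_km, gradient_deg, curvature):
    """Weighted linear score on normalized factors (higher = more susceptible); weights assumed."""
    return (
        0.2 * np.clip(altitude_m / 1500.0, 0, 1)
        + 0.4 * np.clip(1.0 - fault_dist_km / 50.0, 0, 1)   # closer to the fault -> riskier
        + 0.3 * np.clip(gradient_deg / 45.0, 0, 1)
        + 0.1 * np.clip(curvature, 0, 1)
    )

def risk_grade(score):
    """Bin a 0-1 score into five grades (1 = lowest risk, 5 = highest)."""
    return int(np.digitize(score, [0.2, 0.4, 0.6, 0.8])) + 1

print(risk_grade(risk_score(420.0, 3.0, 28.0, 0.4)))   # hypothetical grid cell near the fault
```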

Evaluation for applicability of river depth measurement method depending on vegetation effect using drone-based spatial-temporal hyperspectral image (드론기반 시공간 초분광영상을 활용한 식생유무에 따른 하천 수심산정 기법 적용성 검토)

  • Gwon, Yeonghwa;Kim, Dongsu;You, Hojun
    • Journal of Korea Water Resources Association
    • /
    • v.56 no.4
    • /
    • pp.235-243
    • /
    • 2023
  • Due to the revision of the River Act and the enactment of the Act on the Investigation, Planning, and Management of Water Resources, regular riverbed change surveys have become mandatory, and a system is being prepared so that local governments can manage water resources in a planned manner. Riverbed topography is conventionally measured with contact-type depth measurements such as level surveys or echo sounders, which have low spatial resolution and do not allow continuous surveying owing to constraints in data acquisition. Therefore, depth measurement methods using remote sensing, such as LiDAR or hyperspectral imaging, have recently been developed; these allow a wider area to be surveyed than contact-type methods by acquiring hyperspectral images from a lightweight hyperspectral sensor mounted on a frequently operated drone and applying an optimal band ratio search algorithm to estimate the depth. In the existing hyperspectral remote sensing technique, specific physical quantities are analyzed after the hyperspectral images acquired along the drone's flight path are matched into a single areal image. Previous studies focus primarily on applying this technology to measure the bathymetry of sandy rivers, whereas other bed materials are rarely evaluated. In this study, the existing hyperspectral image-based water depth estimation technique is applied to a river with vegetation, and spatio-temporal and cross-sectional hyperspectral imaging are performed for two cases in the same reach, before and after the vegetation is removed. The results show that depth estimation is more accurate in the absence of vegetation; in the presence of vegetation, the depth is estimated with the top of the vegetation recognized as the bed. In addition, highly accurate depth estimation is achieved not only with conventional cross-sectional hyperspectral imaging but also with spatio-temporal hyperspectral imaging. As such, the possibility of monitoring bed fluctuations (water depth fluctuations) using spatio-temporal hyperspectral imaging is confirmed.
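
The depth-estimation step rests on an optimal band ratio search: for each pair of bands, the log ratio of reflectances is regressed against known depths, and the pair with the best fit is retained. A minimal sketch of that search is shown below, under the assumption of a simple point-wise calibration dataset; the array shapes and data are illustrative, not the study's measurements.

```python
# Hedged sketch of an optimal band-ratio search for spectrally based depth estimation.
import numpy as np

def optimal_band_ratio(reflectance: np.ndarray, depth: np.ndarray):
    """reflectance: (n_points, n_bands) at surveyed points; depth: (n_points,) in metres."""
    n_bands = reflectance.shape[1]
    best = (None, None, -np.inf, None)
    for i in range(n_bands):
        for j in range(n_bands):
            if i == j:
                continue
            x = np.log(reflectance[:, i] / reflectance[:, j])
            A = np.column_stack([x, np.ones_like(x)])               # linear model: depth ~ a*x + b
            coef, *_ = np.linalg.lstsq(A, depth, rcond=None)
            r2 = 1 - np.sum((depth - A @ coef) ** 2) / np.sum((depth - depth.mean()) ** 2)
            if r2 > best[2]:
                best = (i, j, r2, coef)
    return best  # (band_i, band_j, R^2, [slope, intercept])

# Depth at a new pixel is then slope * ln(Ri/Rj) + intercept using the selected band pair.
```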

An Analysis of Accessibility to Hydrogen Charging Stations in Seoul Based on Location-Allocation Models (입지배분모형 기반의 서울시 수소충전소 접근성 분석)

  • Sang-Gyoon Kim;Jong-Seok Won;Yong-Beom Pyeon;Min-Kyung Cho
    • Journal of the Society of Disaster Information
    • /
    • v.20 no.2
    • /
    • pp.339-350
    • /
    • 2024
  • Purpose: This study analyzes the accessibility of the 10 hydrogen charging stations in Seoul and identifies areas that are difficult to access. The purpose is to re-analyze accessibility after adding new locations selected for equity and safety of placement, and then to draw implications by comparing the improvement effects. Method: Areas with weak access were identified by applying the location-allocation model and the service area model based on network analysis in the ArcGIS program. The 'Minimize Facilities' option was applied for site selection, in consideration of the need for rapid access to the small number of hydrogen charging stations. The limit distance for arrival within a given time was derived by applying the 2022 average vehicle traffic speed (23.1 km/h, Seoul Open Data Square) to three categories: 3,850 m (10 minutes), 5,775 m (15 minutes), and 7,700 m (20 minutes). In order to minimize conflicts over the installation of hydrogen charging stations, the special standards of the Ministry of Trade, Industry and Energy were applied to derive candidate sites for additional stations from existing gas stations and LPG/CNG charging stations. Result: The analysis confirmed that accessibility was significantly improved by installing 5 new hydrogen charging stations at relatively safe gas stations and LPG/CNG charging stations in areas that cannot reach the existing 10 stations within 20 minutes. Nevertheless, some areas with poor access still remain. Conclusion: By using the location-allocation model to identify areas where access to hydrogen charging stations is difficult and to prioritize installation, decision-making on station siting based on scientific evidence can be supported.
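
Two parts of this method lend themselves to a quick numerical sketch: the limit distances implied by the 23.1 km/h average speed, and a "minimize facilities" style site selection. The greedy routine below is a simplified stand-in for ArcGIS's location-allocation solver, with hypothetical planar coordinates; it is not the study's implementation.

```python
# Hedged sketch: travel-time limit distances and a greedy facility-covering heuristic.
import math

SPEED_KMH = 23.1
for minutes in (10, 15, 20):
    print(minutes, "min ->", round(SPEED_KMH * 1000 / 60 * minutes), "m")   # 3850, 5775, 7700 m

def greedy_minimize_facilities(candidates, demands, limit_m):
    """Pick candidate sites until every demand point lies within limit_m of a chosen site."""
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])
    uncovered, chosen = set(range(len(demands))), []
    while uncovered:
        best = max(candidates, key=lambda c: sum(dist(c, demands[k]) <= limit_m for k in uncovered))
        covered = {k for k in uncovered if dist(best, demands[k]) <= limit_m}
        if not covered:          # remaining demand cannot be reached from any candidate
            break
        chosen.append(best)
        uncovered -= covered
    return chosen

# Hypothetical candidate stations and demand points (metres in a local planar frame).
print(greedy_minimize_facilities([(0, 0), (6000, 0), (0, 7000)],
                                 [(500, 300), (5800, 900), (100, 6500)], 3850))
```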

DEVELOPMENT OF STATEWIDE TRUCK TRAFFIC FORECASTING METHOD BY USING LIMITED O-D SURVEY DATA (한정된 O-D조사자료를 이용한 주 전체의 트럭교통예측방법 개발)

  • 박만배
    • Proceedings of the KOR-KST Conference
    • /
    • 1995.02a
    • /
    • pp.101-113
    • /
    • 1995
  • The objective of this research is to test the feasibility of developing a statewide truck traffic forecasting methodology for Wisconsin by using Origin-Destination surveys, traffic counts, classification counts, and other data that are routinely collected by the Wisconsin Department of Transportation (WisDOT). Development of a feasible model will permit estimation of future truck traffic for every major link in the network. This will provide the basis for improved estimation of future pavement deterioration. Pavement damage rises exponentially as axle weight increases, and trucks are responsible for most of the traffic-induced damage to pavement. Consequently, forecasts of truck traffic are critical to pavement management systems. The Pavement Management Decision Supporting System (PMDSS) prepared by WisDOT in May 1990 combines pavement inventory and performance data with a knowledge base consisting of rules for evaluation, problem identification and rehabilitation recommendation. Without a reasonable truck traffic forecasting methodology, PMDSS is not able to project pavement performance trends in order to make assessments and recommendations for future years. However, none of WisDOT's existing forecasting methodologies has been designed specifically for predicting truck movements on a statewide highway network. For this research, the Origin-Destination survey data available from WisDOT, including two stateline areas, one county, and five cities, are analyzed and the zone-to-zone truck trip tables are developed. The resulting Origin-Destination Trip Length Frequency (OD TLF) distributions by trip type are applied to the Gravity Model (GM) for comparison with comparable TLFs from the GM. The gravity model is calibrated to obtain friction factor curves for the three trip types, Internal-Internal (I-I), Internal-External (I-E), and External-External (E-E). Both "macro-scale" calibration and "micro-scale" calibration are performed. The comparison of the statewide GM TLF with the OD TLF for the macro-scale calibration does not provide suitable results because the available OD survey data do not represent an unbiased sample of statewide truck trips. For the "micro-scale" calibration, "partial" GM trip tables that correspond to the OD survey trip tables are extracted from the full statewide GM trip table. These "partial" GM trip tables are then merged and a partial GM TLF is created. The GM friction factor curves are adjusted until the partial GM TLF matches the OD TLF. Three friction factor curves, one for each trip type, resulting from the micro-scale calibration produce a reasonable GM truck trip model. A key methodological issue for GM calibration involves the use of multiple friction factor curves versus a single friction factor curve for each trip type in order to estimate truck trips with reasonable accuracy. A single friction factor curve for each of the three trip types was found to reproduce the OD TLFs from the calibration data base. Given the very limited trip generation data available for this research, additional refinement of the gravity model using multiple friction factor curves for each trip type was not warranted. In traditional urban transportation planning studies, the zonal trip productions and attractions and region-wide OD TLFs are available. However, for this research, the information available for the development of the GM model is limited to Ground Counts (GC) and a limited set of OD TLFs.
The GM is calibrated using the limited OD data, but the OD data are not adequate to obtain good estimates of truck trip productions and attractions. Consequently, zonal productions and attractions are estimated using zonal population as a first approximation. Then, Selected Link based (SELINK) analyses are used to adjust the productions and attractions and possibly recalibrate the GM. The SELINK adjustment process involves identifying the origins and destinations of all truck trips that are assigned to a specified "selected link" as the result of a standard traffic assignment. A link adjustment factor is computed as the ratio of the actual volume for the link (ground count) to the total assigned volume. This link adjustment factor is then applied to all of the origin and destination zones of the trips using that "selected link". Selected link based analyses are conducted by using both 16 selected links and 32 selected links. The SELINK analysis using 32 selected links provides the lowest %RMSE in the screenline volume analysis. In addition, the stability of the GM truck estimating model is preserved by using 32 selected links with three SELINK adjustments; that is, the GM remains calibrated despite substantial changes in the input productions and attractions. The coverage of zones provided by 32 selected links is satisfactory. Increasing the number of repetitions beyond four is not reasonable because the stability of the GM model in reproducing the OD TLF reaches its limits. The total volume of truck traffic captured by 32 selected links is 107% of total trip productions. But more importantly, SELINK adjustment factors for all of the zones can be computed. Evaluation of the travel demand model resulting from the SELINK adjustments is conducted by using screenline volume analysis, functional class and route specific volume analysis, area specific volume analysis, production and attraction analysis, and Vehicle Miles of Travel (VMT) analysis. Screenline volume analysis using four screenlines with 28 check points is used to evaluate the adequacy of the overall model. The total trucks crossing the screenlines are compared to the ground count totals. LV/GC ratios of 0.958 using 32 selected links and 1.001 using 16 selected links are obtained. The %RMSE for the four screenlines is inversely proportional to the average ground count totals by screenline. The magnitude of %RMSE for the four screenlines resulting from the fourth and last GM run using 32 and 16 selected links is 22% and 31% respectively. These results are similar to the overall %RMSE achieved for the 32 and 16 selected links themselves of 19% and 33% respectively. This implies that the SELINK analysis results are reasonable for all sections of the state. Functional class and route specific volume analysis is possible by using the available 154 classification count check points. The truck traffic crossing the Interstate highways (ISH) with 37 check points, the US highways (USH) with 50 check points, and the State highways (STH) with 67 check points is compared to the actual ground count totals. The magnitude of the overall link volume to ground count ratio by route does not show any specific pattern of over- or underestimation. However, the %RMSE for the ISH shows the smallest value while that for the STH shows the largest value. This pattern is consistent with the screenline analysis and the overall relationship between %RMSE and ground count volume groups.
Area specific volume analysis provides another broad statewide measure of the performance of the overall model. The truck traffic in the North area with 26 check points, the West area with 36 check points, the East area with 29 check points, and the South area with 64 check points is compared to the actual ground count totals. The four areas show similar results. No specific patterns in the LV/GC ratio by area are found. In addition, the %RMSE is computed for each of the four areas. The %RMSEs for the North, West, East, and South areas are 92%, 49%, 27%, and 35% respectively, whereas the average ground counts are 481, 1383, 1532, and 3154 respectively. As for the screenline and volume range analyses, the %RMSE is inversely related to average link volume. The SELINK adjustments of productions and attractions resulted in a very substantial reduction in the total in-state zonal productions and attractions. The initial in-state zonal trip generation model can now be revised with a new trip production rate (total adjusted productions/total population) and a new trip attraction rate. Revised zonal production and attraction adjustment factors can then be developed that reflect only the impact of the SELINK adjustments that cause increases or decreases from the revised zonal estimates of productions and attractions. Analysis of the revised production adjustment factors is conducted by plotting the factors on the state map. The east area of the state, including the counties of Brown, Outagamie, Shawano, Winnebago, Fond du Lac, and Marathon, shows comparatively large values of the revised adjustment factors. Overall, both small and large values of the revised adjustment factors are scattered around Wisconsin. This suggests that more independent variables beyond population alone are needed for the development of the heavy truck trip generation model. More independent variables, including zonal employment data (office employees and manufacturing employees) by industry type, zonal private trucks owned, and zonal income data, which are not currently available, should be considered. A plot of the frequency distribution of the in-state zones as a function of the revised production and attraction adjustment factors shows the overall adjustment resulting from the SELINK analysis process. Overall, the revised SELINK adjustments show that the productions for many zones are reduced by a factor of 0.5 to 0.8, while the productions for relatively few zones are increased by factors from 1.1 to 4, with most of those factors in the 3.0 range. No obvious explanation for the frequency distribution could be found. The revised SELINK adjustments overall appear to be reasonable. The heavy truck VMT analysis is conducted by comparing the 1990 heavy truck VMT forecasted by the GM truck forecasting model, 2.975 billion, with the WisDOT computed data. This gives an estimate that is 18.3% less than the WisDOT computation of 3.642 billion VMT. The WisDOT estimates are based on sampling the link volumes for USH, STH, and CTH, which implies potential error in sampling the average link volume. The WisDOT estimate of heavy truck VMT cannot be tabulated by the three trip types, I-I, I-E (E-I), and E-E. In contrast, the GM forecasting model shows that the proportion of E-E VMT out of total VMT is 21.24%. In addition, tabulation of heavy truck VMT by route functional class shows that the proportion of truck traffic traversing the freeways and expressways is 76.5%.
Only 14.1% of total freeway truck traffic is I-I trips, while 80% of total collector truck traffic is I-I trips. This implies that freeways are traversed mainly by I-E and E-E truck traffic while collectors are used mainly by I-I truck traffic. Other tabulations, such as average heavy truck speed by trip type, average travel distance by trip type, and the VMT distribution by trip type, route functional class and travel speed, provide useful information for highway planners to understand the characteristics of statewide heavy truck trip patterns. Heavy truck volumes for the target year 2010 are forecasted by using the GM truck forecasting model. Four scenarios are used. For better forecasting, ground count-based segment adjustment factors are developed and applied. ISH 90 & 94 and USH 41 are used as example routes. The forecasting results using the ground count-based segment adjustment factors are satisfactory for long range planning purposes, but additional ground counts would be useful for USH 41. Sensitivity analysis provides estimates of the impacts of the alternative growth rates, including information about changes in the trip types using key routes. The network-based GM can easily model scenarios with different rates of growth in rural versus urban areas, small versus large cities, and in-state zones versus external stations.
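
The core computations described above are a gravity-model trip table driven by friction factors and the SELINK link adjustment factor (ground count divided by assigned link volume). The sketch below illustrates both with a toy three-zone example; the zonal values and friction factors are illustrative assumptions, not the Wisconsin data.

```python
# Hedged sketch: singly constrained gravity model and a SELINK-style link adjustment factor.
import numpy as np

def gravity_trip_table(productions, attractions, friction):
    """T_ij = P_i * (A_j * F_ij) / sum_k(A_k * F_ik)  (production-constrained gravity model)."""
    weights = attractions[None, :] * friction                  # A_j * F_ij
    return productions[:, None] * weights / weights.sum(axis=1, keepdims=True)

def selink_adjustment(ground_count, assigned_volume):
    """Link adjustment factor applied to every O-D zone whose trips use the selected link."""
    return ground_count / assigned_volume

P = np.array([1200.0, 800.0, 500.0])        # zonal truck trip productions (hypothetical)
A = np.array([900.0, 700.0, 900.0])         # zonal truck trip attractions (hypothetical)
F = np.array([[1.0, 0.6, 0.2],              # friction factors by trip length (hypothetical)
              [0.6, 1.0, 0.5],
              [0.2, 0.5, 1.0]])
T = gravity_trip_table(P, A, F)
factor = selink_adjustment(ground_count=1450.0, assigned_volume=1320.0)
print(T.round(1), round(factor, 3))
```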

The Causes of Conflict and the Effect of Control Mechanisms on Conflict Resolution between Manufacturer and Supplier (제조-공급자간 갈등 원인과 거래조정 방식의 갈등관리 효과)

  • Rhee, Jin Hwa
    • Journal of Distribution Research
    • /
    • v.17 no.4
    • /
    • pp.55-80
    • /
    • 2012
  • I. Introduction Developing relationships between companies is a very important issue for ensuring a competitive advantage in today's business environment (Bleeke & Ernst 1991; Mohr & Spekman 1994; Powell 1990). Partnerships between companies are based on shared goals, mutual understanding, and a professional level of interdependence. Through such partnerships and cooperative efforts, companies can achieve efficiency and effectiveness in their business (Mohr and Spekman, 1994). However, it is difficult to expect these ideal results from the B2B transaction alone. According to agency theory, a well-accepted theory in business strategy, organization, and marketing, two independent companies have fundamentally different corporate purposes. There is also a higher chance of opportunism and conflict owing to human (organizational) traits such as self-interest, bounded rationality, and risk aversion, and to environmental factors such as information asymmetry (Eisenhardt 1989). That is, especially in partnerships between a principal (buyer) and an agent (supplier) within the supply chain, the business contract itself does not provide competitive advantage; managing the partnership is the key to success. Therefore, managing the partnership between manufacturer and supplier and finding the causes of conflict are essential to improving B2B performance. In conclusion, based on prior research and agency theory, this study clarifies how business hazards cause conflict in the supply chain and then identifies how the resulting conflict is managed by two control mechanisms. II. Research model III. Method In order to validate our research model, this study gathered questionnaires from small and medium sized enterprises (SMEs). In Korea, SMEs are defined as firms with fewer than 300 employees and capital under 8 billion won (about 7.2 million dollars). We asked about the manufacturer's perception of the relationship with its biggest supplier, and our key informants were limited to persons responsible for buying (e.g., CEOs, executives, purchasing managers, and so on). In detail, we contacted our initial sample (about 1,200 firms) by telephone, introduced our research motivation, and sent questionnaires by e-mail, mail, and direct survey. We received 361 responses and eliminated 32 inappropriate questionnaires, leaving 329 manufacturers' data for analysis. The purpose of this study is to identify the antecedent role of business hazards (environmental dynamism, asset specificity) and to investigate the moderating effect of control mechanisms (formal control, social control) on the conflict-performance relationship. To identify the moderating effect of the control mechanisms, we compare regression weights between low and high groups (split by the level of exercised control). Therefore we chose structural equation modeling, which is suited to multi-group analysis. The data analysis was performed with AMOS 17.0 software, and the model fits are statistically good (CMIN/DF=1.982, p<.000, CFI=.936, IFI=.937, RMSEA=.056). IV. Result V. Discussion Results show that the higher the environmental dynamism and asset specificity (toward a particular supplier) a buyer (manufacturer) faces, the more B2B conflict exists, and this conflict affects relationship quality and financial outcomes negatively. In addition, social control and formal control significantly weaken the negative effect of conflict on relationship quality.
However, unlike the conflict-resolution effect of the control mechanisms on relationship quality, financial outcomes are changed by neither social control nor formal control. We can explain these results by the characteristics of our sample, SMEs (Small and Medium sized Enterprises). The financial outcomes of these SMEs (manufacturers, or principals) are affected more readily by their customers (usually major companies) than by their suppliers (agents). Moreover, in recent years most companies have suffered from financial problems because of the global economic recession, which makes it hard to evaluate the contribution of the supplier (agent). Therefore we also support the suggestion of Gladstein (1984) and Poppo & Zenger (2002) that relational performance variables can capture the focal outcomes of a relationship (exchange) better than financial performance variables. This study has implications in that it empirically tests the sources of conflict and investigates the effect of resolution methods for B2B conflict; in particular, it finds a significant moderating effect of formal control, which past B2B management studies in Korea have ignored.
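
The moderation test described above compares the conflict-to-performance regression weight between low- and high-control groups (the study itself uses multi-group SEM in AMOS 17.0). The sketch below reproduces that group-split comparison on simulated data; the data-generating assumptions and the median split are illustrative only.

```python
# Hedged sketch: comparing regression weights between low- and high-control groups.
import numpy as np

rng = np.random.default_rng(1)
n = 329                                        # sample size reported in the abstract
conflict = rng.normal(size=n)
formal_control = rng.normal(size=n)
# Assumed data-generating process: relationship quality suffers from conflict,
# less so when formal control is high (i.e., moderation).
quality = -0.5 * conflict + 0.3 * conflict * (formal_control > 0) + rng.normal(scale=0.5, size=n)

def slope(x, y):
    """OLS regression weight of y on x (with intercept)."""
    X = np.column_stack([x, np.ones_like(x)])
    return np.linalg.lstsq(X, y, rcond=None)[0][0]

high = formal_control > np.median(formal_control)
print("conflict -> quality, low control :", round(slope(conflict[~high], quality[~high]), 3))
print("conflict -> quality, high control:", round(slope(conflict[high], quality[high]), 3))
```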

Estimation of SCS Runoff Curve Number and Hydrograph by Using Highly Detailed Soil Map(1:5,000) in a Small Watershed, Sosu-myeon, Goesan-gun (SCS-CN 산정을 위한 수치세부정밀토양도 활용과 괴산군 소수면 소유역의 물 유출량 평가)

  • Hong, Suk-Young;Jung, Kang-Ho;Choi, Chol-Uong;Jang, Min-Won;Kim, Yi-Hyun;Sonn, Yeon-Kyu;Ha, Sang-Keun
    • Korean Journal of Soil Science and Fertilizer
    • /
    • v.43 no.3
    • /
    • pp.363-373
    • /
    • 2010
  • "Curve number" (CN) indicates the runoff potential of an area. The US Soil Conservation Service (SCS) CN method is a simple, widely used, and efficient method for estimating the runoff from a rainfall event in a particular area, especially in ungauged basins. Soil map requests from end-users for CN-based rainfall-runoff estimation accounted for up to about 80% of total soil map use. This study introduces the use of soil maps for hydrologic and watershed management, focusing on the hydrologic soil group, and presents a case study assessing effective rainfall and the runoff hydrograph based on the SCS-CN method in a small watershed. The area ratios of the hydrologic soil groups based on the detailed soil map (1:25,000) of Korea were 42.2% (A), 29.4% (B), 18.5% (C), and 9.9% (D) for HSG 1995, and 35.1% (A), 15.7% (B), 5.5% (C), and 43.7% (D) for HSG 2006, respectively. The D group in HSG 2006 accounted for 43.7% of the total, of which 34.1% was reclassified from the A, B, and C groups of HSG 1995. The similarity between HSG 1995 and HSG 2006 was about 55%. Our study area was an approximately 44 $km^2$ catchment located in Sosu-myeon, Goesan-gun, Chungcheongbuk-do. We used a digital elevation model (DEM) to delineate the catchment. The soils were classified into 4 hydrologic soil groups on the basis of measured infiltration rates and a model of the representative soils of the study area reported by Jung et al. (2006). Digital soil maps (1:5,000) were used to classify hydrologic soil groups at the soil series level. Using high resolution satellite images, we delineated the boundary of each field or other parcel on screen and then surveyed the land use and cover of each. We calculated a CN for each parcel and used those data, together with the land use and cover map and the hydrologic soil map, to estimate runoff. CN values, which range from 0 (no runoff) to 100 (all precipitation runs off), were 73 for the catchment by HSG 1995 and 79 by HSG 2006. The runoff response, peak runoff and time-to-peak, was examined using the SCS triangular synthetic unit hydrograph, and the results with HSG 2006 showed better agreement with the field-observed data than those with HSG 1995.
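
The runoff estimate behind these CN values follows the standard SCS relations, with effective rainfall Q = (P − 0.2S)² / (P + 0.8S), where S = 25400/CN − 254 (mm), and the triangular unit hydrograph peak Qp = 0.208·A·Q/Tp. The sketch below applies them with CN = 79 from the abstract; the storm depth, duration, and time of concentration are illustrative assumptions, not the study's inputs.

```python
# Hedged sketch of the SCS-CN effective-rainfall formula and the SCS triangular unit
# hydrograph peak (SI units).
def scs_runoff_mm(p_mm: float, cn: float) -> float:
    """Effective rainfall Q = (P - 0.2S)^2 / (P + 0.8S), with S = 25400/CN - 254 (mm)."""
    s = 25400.0 / cn - 254.0
    ia = 0.2 * s
    return 0.0 if p_mm <= ia else (p_mm - ia) ** 2 / (p_mm - ia + s)

def scs_triangular_peak(area_km2: float, q_mm: float, tc_hr: float, duration_hr: float):
    """Time to peak Tp = D/2 + 0.6*Tc (hr) and peak discharge Qp = 0.208*A*Q/Tp (m^3/s)."""
    tp = duration_hr / 2.0 + 0.6 * tc_hr
    return tp, 0.208 * area_km2 * q_mm / tp

q = scs_runoff_mm(p_mm=80.0, cn=79)                    # CN = 79 from the HSG 2006 classification
print(round(q, 1), scs_triangular_peak(44.0, q, tc_hr=2.0, duration_hr=1.0))
```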

Factors Related to Waiting and Staying Time for Patient Care in Emergency Care Center (응급의료센터 내원환자 진료시 소요시간과 관련된 요인)

  • Han, Nam Sook;Park, Jae Yong;Lee, Sam Beom;Do, Byung Soo;Kim, Seok Beom
    • Quality Improvement in Health Care
    • /
    • v.7 no.2
    • /
    • pp.138-155
    • /
    • 2000
  • Background: Factors related to waiting and staying time for patient care in the emergency care center (ECC) were examined for one month, from Apr. 1 to Apr. 30, 1997, at the ECC of Yeungnam University Hospital in Taegu metropolitan city, to obtain baseline data for a strategy of effective management of emergency patients. Method: The study subjects were the 1,742 patients who visited the ECC, and the data were obtained from ECC medical records and direct surveys. Results: The mean interval between ECC admission time and initial care time by the ECC duty residents was 83.1 minutes for male patients and 84.9 minutes for female patients, and the mean ECC staying time (the interval between admission and final disposition from the ECC) was 718.0 minutes in men and 670.5 minutes in women. The mean staying time was longer for older patients; both initial care time and staying time were longest for medical aid patients and shortest for patients covered by worker's accident compensation insurance. The factors associated with the initial care time were the number of ECC patients, the presence of true emergency patients, whether the patient was in cardiopulmonary resuscitation (CPR) status on admission, and whether the patient had previously been endotracheally intubated; the initial care time was much more delayed in patients without previous medical records. The ECC staying time was longer in patients referred from the out-patient department, in patients transferred from other hospitals, in patients with previous records, and in patients for whom the order-communicating system was only partly used; it was also much longer in patients with drug intoxication, in CPR patients, in medical department patients, in transfused patients, and in patients whose care involved 3 or more departments. According to the number of duty interns, the ECC staying time with four interns was longer than with five interns, and after an admission order was written it was also longer when no beds were available. As described above, the ECC staying time differed significantly (P<0.01) according to the patient's age, the laboratory orders, and the X-ray films checked, and also according to the availability of beds, the laboratory and/or special laboratory orders, the X-ray films checked, the final disposing department, transfer to another hospital or not, home medication or not, admission or not, the grade of beds, the year grade of the residents, the cause of the ECC visit, CPR status on admission, surgical operation or not, and whether the patient knew personnel in the hospital. Conclusion: The authors concluded that long ECC staying times could be relieved by establishing a legally supported system that can differentiate true emergency from non-emergency patients, and that ECC staying times could be shortened by ordering only the laboratory tests that are definitely necessary and by managing beds more flexibly for ECC admissions; these measures would reduce the load on ECC personnel and improve the quality of care for emergency patients.

A Study on the Meaning and Strategy of Keyword Advertising Marketing

  • Park, Nam Goo
    • Journal of Distribution Science
    • /
    • v.8 no.3
    • /
    • pp.49-56
    • /
    • 2010
  • At the initial stage of Internet advertising, banner advertising came into fashion. As the Internet became a central part of daily life and competition in the on-line advertising market grew fierce, there was not enough space for banner advertising, which rushed only to portal sites, and advertising prices surged. Consequently, the high-cost, low-efficiency problems of banner advertising were raised, which led to the emergence of keyword advertising as a new type of Internet advertising to replace its predecessor. In the early 2000s, when Internet advertising became active, display advertisement including banner advertising dominated the Net. However, display advertising showed signs of gradual decline and registered negative growth in 2009, whereas keyword advertising grew rapidly and began to outdo display advertising as of 2005. Keyword advertising refers to the technique of exposing relevant advertisements at the top of search sites when a user searches for a keyword. Instead of exposing advertisements to unspecified individuals as banner advertising does, keyword advertising, a targeted advertising technique, shows advertisements only when customers search for a desired keyword, so that only highly prospective customers are given a chance to see them; in this context it is also referred to as search advertising. It is regarded as more aggressive advertising with a higher hit rate than previous forms in that, instead of the seller discovering customers and running advertisements for them as with TV, radio, or banner advertising, it exposes advertisements to customers who come looking. Keyword advertising makes it possible for a company to seek publicity on line simply by making use of a single word and to achieve maximum efficiency at minimum cost. Its strong point is that customers can directly reach the products in question through advertising that is more efficient than that of mass media such as TV and radio. Its weak point is that a company must register its advertisement on each and every portal site and finds it hard to exercise substantial supervision over the advertisement, with the possibility of advertising expenses exceeding profits. Keyword advertising serves as the most appropriate advertising method for the sales and publicity of small and medium enterprises, which need maximum advertising effect at low cost. At present, keyword advertising is divided into CPC advertising and CPM advertising. The former is known as the most efficient technique and is also referred to as metered-rate advertising: a company pays according to the number of clicks on the searched keyword. This model is representatively adopted by Overture, Google's Adwords, Naver's Clickchoice, and Daum's Clicks. CPM advertising depends on a flat-rate payment system, making a company pay for its advertisement on the basis of the number of exposures rather than the number of clicks; the price is fixed per 1,000 exposures, and it is mainly adopted by Naver's Timechoice, Daum's Speciallink, and Nate's Speedup. At present, the CPC method is the most frequently adopted.
The weak point of the CPC method is that advertising costs can rise through repeated clicks from the same IP. If a company makes good use of strategies for maximizing the strong points of keyword advertising and complementing its weak points, it is highly likely to turn visitors into prospective customers. Accordingly, an advertiser should analyze customers' behavior and approach them in a variety of ways, trying hard to find out what they want; with this in mind, he or she has to put multiple keywords into use when running ads. When first running an ad, the advertiser should give priority to which keyword to select, considering how many search-engine users will click the keyword in question and how much the advertisement will cost. As the popular keywords that search-engine users frequently use are expensive in terms of unit cost per click, advertisers without much money at the initial phase should pay attention to detailed keywords suitable to their budget. Detailed keywords, also referred to as peripheral or extension keywords, can be seen as combinations of major keywords. Most keyword ads are in the form of text. The biggest strong point of text-based advertising is that it looks like search results, causing little antipathy; but it fails to attract much attention precisely because most keyword advertising is text. Image-embedded advertising is easy to notice because of the images, but it is exposed on the lower part of a web page and is recognized as an advertisement, which leads to a low click-through rate; its strong point is that its prices are lower than those of text-based advertising. If a company owns a logo or a product that people easily recognize, it is well advised to make good use of image-embedded advertising to attract Internet users' attention. Advertisers should analyze their logs and examine customers' responses based on site events and product composition as a vehicle for monitoring behavior in detail. In addition, keyword advertising allows them to analyze the advertising effect of exposed keywords through log analysis. Log analysis refers to a close analysis of the current situation of a site based on information about visitors, the number of visitors and page views, and cookie values. A user's IP, the pages used, the time of use, and cookie values are stored in the log files generated by each Web server. The log files contain a huge amount of data, and since it is almost impossible to analyze them directly, one analyzes them with log-analysis solutions. The generic information that can be extracted from log-analysis tools includes the total number of page views, the average number of page views per day, the number of basic page views, the number of page views per visit, the total number of hits, the average number of hits per day, the number of hits per visit, the number of visits, the average number of visits per day, the net number of visitors, the average number of visitors per day, one-time visitors, visitors who have come more than twice, and average usage hours.
These data are also useful for analyzing the situation and current status of rival companies and for benchmarking. As keyword advertising exposes advertisements exclusively on search-result pages, competition among advertisers attempting to preoccupy popular keywords is very fierce. Some portal sites keep giving priority to existing advertisers, whereas others give all advertisers a chance to purchase the keywords in question after the advertising contract is over. If an advertiser wants to rely on keywords sensitive to season and timeliness on sites that give priority to established advertisers, he or she may as well purchase a vacant advertising slot so as not to miss the appropriate timing. However, Naver does not give priority to existing advertisers for any keyword advertisements; in this case, one can preoccupy keywords by entering into a contract after confirming the contract period. This study is designed to examine marketing for keyword advertising and to present effective strategies for keyword advertising marketing. At present, the Korean CPC advertising market is virtually monopolized by Overture. Its strong points are that Overture is based on the CPC charging model and that its advertisements are registered at the top of the most representative portal sites in Korea, which makes it the most appropriate medium for small and medium enterprises. However, the CPC method of Overture has weak points, too; it is not the only perfect advertising model among search advertisements in the on-line market. So it is absolutely necessary that small and medium enterprises, including independent shopping malls, complement the weaknesses of the CPC method and make good use of strategies for maximizing its strengths so as to increase their sales and create a point of contact with customers.
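
To make the CPC/CPM distinction discussed above concrete, the sketch below compares the cost of a hypothetical campaign under per-click and per-thousand-exposure pricing. The bid amounts, impression count, and click-through rate are illustrative assumptions only.

```python
# Hedged sketch: campaign cost under CPC (pay per click) versus CPM (pay per 1,000 exposures).
def cpc_cost(clicks: int, cost_per_click: float) -> float:
    return clicks * cost_per_click

def cpm_cost(impressions: int, cost_per_mille: float) -> float:
    return impressions / 1000.0 * cost_per_mille

impressions, ctr = 200_000, 0.012              # assume 1.2% of viewers click the ad
clicks = int(impressions * ctr)
print("CPC:", cpc_cost(clicks, 300.0), "won")          # pay only for the 2,400 clicks
print("CPM:", cpm_cost(impressions, 2_000.0), "won")   # pay for every 1,000 exposures
```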
