• Title/Summary/Keyword: Input Duration Time

Search Result 103, Processing Time 0.024 seconds

Characteristics of Exposure to Humidifier Disinfectants and Their Association with the Presence of a Person Who Experienced Adverse Health Effects in General Households in Korea (일반 가구의 가습기살균제 노출 특성 및 건강이상 경험과의 연관성)

  • Lee, Eunsun;Cheong, Hae-Kwan;Paek, Domyung;Kim, Solhwee;Leem, Jonghan;Kim, Pangyi;Lee, Kyoung-Mu
    • Journal of Environmental Health Sciences / v.46 no.3 / pp.285-296 / 2020
  • Objective: The objective of this study was to describe the characteristics of exposure to humidifier disinfectants (HDs) and their association with the presence of a person who experienced adverse health effects in general households in Korea. Methods: During the month of December 2016, a nationwide online survey was conducted among adults over 20 years of age who had experience of using HDs. The survey collected information on exposure characteristics and the experience of health effects. The final respondents consisted of 1,555 people who provided information on themselves and their household members during the period of HD use. Exposure characteristics at the household level included average days of HD use per week, average hours of HD use per day, the duration within which one bottle of HD was emptied, average input frequency of HD, amount of HD (cc) used each time, and active ingredients of the HD products (PHMG, CMIT/MIT, PGH, or others). The risk of the presence of a person who experienced adverse health effects in the household was evaluated by estimating odds ratios (ORs) and 95% confidence intervals (CIs) adjusted for monthly income and region using a multiple logistic regression model. Subgroup analyses were conducted for households with a child (≤7 years) and households with a newborn infant during HD use. Results: The level of exposure to HDs tended to be higher for households with a child or newborn infant for several variables, including average days of HD use per week (P<0.0001) and average hours of HD use per day (P<0.0001). The proportion of households in which at least one person experienced adverse health effects such as rhinitis, asthma, pneumonia, or atopy/skin disease was 20.6% for all households, 25.3% for households with children, and 29.9% for households with newborn infants. The presence of a person who experienced adverse health effects in the household was significantly associated with average hours of HD use per day (Ptrend<0.001), the duration within which one bottle of HD was emptied (Ptrend<0.001), average input frequency of HD (Ptrend<0.001), amount of HD per use (Ptrend=0.01), and use of HDs containing PHMG (OR=2.23, 95% CI=1.45-3.43). Similar results were observed in the subgroup analyses. Conclusion: Our results suggest that the level of exposure to HDs tended to be higher for households with a child or newborn infant, and that exposure to HDs is significantly associated with the presence of a person who experienced adverse health effects in the household.
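The adjusted ORs above come from a multiple logistic regression, but the arithmetic behind an odds ratio and its 95% CI can be illustrated with a crude, unadjusted 2×2 sketch. The counts below are hypothetical, not the study's data, and the study additionally adjusted for income and region:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and Wald 95% CI from a 2x2 table:
    a = exposed with outcome, b = exposed without,
    c = unexposed with outcome, d = unexposed without."""
    or_ = (a * d) / (b * c)
    se_log_or = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - z * se_log_or)
    hi = math.exp(math.log(or_) + z * se_log_or)
    return or_, lo, hi

# Hypothetical counts (not the study's data): PHMG-product households
# vs. others, by presence of a person with adverse health effects.
or_, lo, hi = odds_ratio_ci(60, 240, 40, 320)
```

A CI whose lower bound stays above 1, as in the paper's PHMG result, is what marks the association as statistically significant.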

A Study on the Automatic Speech Control System Using DMS model on Real-Time Windows Environment (실시간 윈도우 환경에서 DMS모델을 이용한 자동 음성 제어 시스템에 관한 연구)

  • 이정기;남동선;양진우;김순협
    • The Journal of the Acoustical Society of Korea / v.19 no.3 / pp.51-56 / 2000
  • In this paper, we studied an automatic speech control system using voice recognition in a real-time Windows environment. The reference pattern applied is the variable DMS model, which is proposed to speed up execution, and the one-stage DP algorithm using this model is used as the recognition algorithm. The recognition vocabulary set is composed of control command words frequently used in the Windows environment. In this paper, an automatic speech-period detection algorithm for on-line voice processing in the Windows environment is implemented. The variable DMS model, which applies a variable number of sections in consideration of the duration of the input signal, is proposed. Unnecessary recognition target words are sometimes generated, so the model is reconstructed on-line to handle this efficiently. The Perceptual Linear Predictive analysis method, which generates a feature vector from the extracted features of the voice, is applied. According to the experimental results, recognition is faster in the proposed model because of its small computational load. The multi-speaker-independent and multi-speaker-dependent recognition rates are 99.08% and 99.39%, respectively; in a noisy environment the recognition rate is 96.25%.

  • PDF
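The one-stage DP recognition above rests on dynamic-programming alignment of feature sequences against reference patterns. A minimal, illustrative sketch of the underlying alignment idea (plain DTW between two toy feature sequences, not the paper's variable DMS model):

```python
def dtw_distance(seq_a, seq_b):
    """Dynamic time warping distance between two feature sequences
    (lists of feature vectors): the DP alignment idea that one-stage
    DP recognition extends to connected words."""
    inf = float("inf")
    n, m = len(seq_a), len(seq_b)
    D = [[inf] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            # Euclidean local distance between frame feature vectors.
            cost = sum((x - y) ** 2 for x, y in zip(seq_a[i - 1], seq_b[j - 1])) ** 0.5
            D[i][j] = cost + min(D[i - 1][j], D[i][j - 1], D[i - 1][j - 1])
    return D[n][m]

# Toy 1-D "feature" sequences: the same contour at different timings.
ref = [(0.0,), (1.0,), (2.0,), (1.0,), (0.0,)]
test = [(0.0,), (0.0,), (1.0,), (2.0,), (1.0,), (0.0,)]
```

Because the warping path absorbs the timing difference, the two sequences above align at zero cost even though their lengths differ.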

Contrast Media in Abdominal Computed Tomography: Optimization of Delivery Methods

  • Joon Koo Han;Byung Ihn Choi;Ah Young Kim;Soo Jung Kim
    • Korean Journal of Radiology / v.2 no.1 / pp.28-36 / 2001
  • Objective: To provide a systematic overview of the effects of various parameters on contrast enhancement within the same population, an animal experiment as well as a computer-aided simulation study was performed. Materials and Methods: In the animal experiment, single-level dynamic CT through the liver was performed at 5-second intervals for 3 minutes just after the injection of contrast medium. Combinations of three different volumes (1, 2, 3 mL/kg), concentrations (150, 200, 300 mgI/mL), and injection rates (0.5, 1, 2 mL/sec) were used. The CT number of the aorta (A), portal vein (P), and liver (L) was measured in each image, and time-attenuation curves for A, P, and L were thus obtained. The degree of maximum enhancement (Imax) and time to peak enhancement (Tmax) of A, P, and L were determined, and times to equilibrium (Teq) were analyzed. In the computer-aided simulation model, a program based on the amount, flow, and diffusion coefficient of body fluid in various compartments of the human body was designed. The input variables were the concentrations, volumes, and injection rates of the contrast media used. The program generated the time-attenuation curves of A, P, and L, as well as liver-to-hepatocellular carcinoma (HCC) contrast curves. On each curve we calculated and plotted the optimal temporal window (the time period above the lower threshold, which in this experiment was 10 Hounsfield units), the total area under the curve above the lower threshold, and the area within the optimal range. Results: A. Animal experiment: At a given concentration and injection rate, an increased volume of contrast medium led to increases in Imax of A, P, and L. In addition, Tmax of A, P, and L and Teq were prolonged in parallel with increases in injection time, and the time-attenuation curve shifted upward and to the right. For a given volume and injection rate, an increased concentration of contrast medium increased the degree of aortic, portal, and hepatic enhancement, though Tmax of A, P, and L remained the same; the time-attenuation curve shifted upward. For a given volume and concentration of contrast medium, changes in the injection rate had a prominent effect on aortic enhancement, while portal venous and hepatic parenchymal enhancement also showed some, though less prominent, increase. An increase in the rate of contrast injection shifted the time-enhancement curve to the left and upward. B. Computer simulation: At a faster injection rate, there was minimal change in the degree of hepatic attenuation, though the duration of the optimal temporal window decreased. The area between 10 and 30 HU was greatest when contrast medium was delivered at a rate of 2-3 mL/sec. Although the total area under the curve increased in proportion to the injection rate, most of this increase was above the upper threshold, and thus the temporal window narrowed and the optimal area decreased. Conclusion: Increases in volume, concentration, and injection rate all resulted in improved arterial enhancement. If cost is disregarded, increasing the injection volume is the most reliable way of obtaining good-quality enhancement. The optimal way of delivering a given amount of contrast medium can be calculated using a computer-based mathematical model.

  • PDF
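The qualitative behavior described above (a larger volume or faster rate raising the peak and shifting it in time) can be sketched with a toy one-compartment model. All parameter values below (washout constant, distribution volume) are illustrative assumptions, not the paper's multi-compartment simulation:

```python
def enhancement_curve(volume_ml, conc_mgi_ml, rate_ml_s,
                      k_out=0.02, v_dist=5000.0, dt=1.0, t_end=180.0):
    """Toy one-compartment model: contrast infused at a constant rate
    mixes into a distribution volume v_dist (mL) and washes out with
    rate constant k_out (1/s); returns (times, concentration curve)."""
    t_inj = volume_ml / rate_ml_s      # injection duration (s)
    mass = 0.0                         # mg of iodine in the compartment
    times, curve = [], []
    steps = int(t_end / dt) + 1
    for n in range(steps):
        t = n * dt
        infusion = conc_mgi_ml * rate_ml_s if t < t_inj else 0.0
        mass += (infusion - k_out * mass) * dt   # forward Euler step
        times.append(t)
        curve.append(mass / v_dist)              # proportional to enhancement

    return times, curve

# 100 mL of 300 mgI/mL contrast at 2 mL/s, as one example combination.
times, curve = enhancement_curve(100.0, 300.0, 2.0)
i_max = max(curve)
t_max = times[curve.index(i_max)]
```

Even this toy model reproduces the study's direction of effect: raising the injection rate with the same volume and concentration produces a higher, earlier peak.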

Computation of Criterion Rainfall for Urban Flood by Logistic Regression (로지스틱 회귀에 의한 도시 침수발생의 한계강우량 산정)

  • Kim, Hyun Il;Han, Kun Yeun
    • KSCE Journal of Civil and Environmental Engineering Research / v.39 no.6 / pp.713-723 / 2019
  • Due to climate change and varying rainfall patterns, it is difficult to estimate the rainfall criterion that causes inundation in urban drainage districts. It is necessary to examine the results of inundation analysis considering the detailed topography of the watershed, the drainage system, and various rainfall scenarios. In this study, various rainfall scenarios were considered using probabilistic rainfall and Huff's time distribution method in order to identify the rainfall characteristics affecting inundation of the Hyoja drainage basin. Flood analysis was performed with SWMM and a two-dimensional inundation analysis model, and the parameters of SWMM were optimized with a flood trace map and a genetic algorithm (GA). By linking SWMM and the two-dimensional flood analysis model, the fitness ratio between the existing flood trace and the simulated inundation map turned out to be 73.6%. The occurrence of inundation under each rainfall scenario was identified, and the rainfall criterion could be estimated through the logistic regression method. By reflecting the results of one- and two-dimensional flood analysis and AWS/ASOS data during 2010~2018, the rainfall criteria for inundation occurrence were estimated as 72.04 mm, 146.83 mm, and 203.06 mm for rainfall durations of 1, 2, and 3 hr, respectively. The rainfall criterion can be re-estimated as continuously observed rainfall data are input. The methodology presented in this study is expected to provide a quantitative rainfall criterion for urban drainage areas, as well as basic data for flood warning and evacuation plans.
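The criterion-rainfall idea, the depth at which a fitted logistic model crosses p = 0.5, can be sketched with a tiny pure-Python fit on hypothetical data. The depths and flood flags below are invented, not the study's simulated scenarios:

```python
import math

def fit_logistic_1d(x, y, lr=0.5, iters=5000):
    """Plain gradient-ascent fit of p(flood) = sigmoid(b0 + b1*x)."""
    b0 = b1 = 0.0
    n = len(x)
    for _ in range(iters):
        g0 = g1 = 0.0
        for xi, yi in zip(x, y):
            p = 1.0 / (1.0 + math.exp(-(b0 + b1 * xi)))
            g0 += yi - p
            g1 += (yi - p) * xi
        b0 += lr * g0 / n
        b1 += lr * g1 / n
    return b0, b1

# Hypothetical 1-hr rainfall depths (mm) and 0/1 inundation flags from
# imagined simulations; depths are scaled by 100 for stable gradients.
rain_mm = [20, 30, 40, 50, 60, 70, 80, 90, 100, 110]
flood = [0, 0, 0, 0, 0, 1, 0, 1, 1, 1]
b0, b1 = fit_logistic_1d([r / 100.0 for r in rain_mm], flood)
# Criterion rainfall: the depth where p = 0.5, i.e. b0 + b1*x = 0.
criterion_mm = -b0 / b1 * 100.0
```

Re-running the fit as new observed events accumulate is exactly the re-estimation loop the abstract describes.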

Combined analysis of meteorological and hydrological drought for hydrological drought prediction and early response - Focussing on the 2022-23 drought in the Jeollanam-do - (수문학적 가뭄 예측과 조기대응을 위한 기상-수문학적 가뭄의 연계분석 - 2022~23 전남지역 가뭄을 대상으로)

  • Jeong, Minsu;Hong, Seok-Jae;Kim, Young-Jun;Yoon, Hyeon-Cheol;Lee, Joo-Heon
    • Journal of Korea Water Resources Association / v.57 no.3 / pp.195-207 / 2024
  • This study selected major drought events that occurred in the Jeonnam region from 1991 to 2023 and examined both meteorological and hydrological drought occurrence mechanisms. The daily drought index was calculated using rainfall and dam storage as input data, and the propagation characteristics from meteorological drought to hydrological drought were analyzed. The characteristics of the 2022-23 drought, which recently occurred in the Jeonnam region and caused serious damage, were evaluated. Compared to historical droughts, the hydrological drought of 2022-23 lasted 334 days, the second longest after 2017-2018, and its severity, at -1.76, was evaluated as the most severe. A linked analysis of the SPI (Standardized Precipitation Index) and the SRSI (Standardized Reservoir Storage Index) suggests that SPI(6) can be used proactively to respond to hydrological drought. Furthermore, by confirming the similarity between SRSI and SPI(12) in long-term drought monitoring, the applicability of SPI(12) to hydrological drought monitoring in ungauged basins was also confirmed. Through this study, it was confirmed that long-term dryness occurring during the summer rainy season can transition into a serious level of hydrological drought. Therefore, for preemptive drought response, it is necessary to use real-time monitoring results of various drought indices and to understand the propagation from meteorological to agricultural to hydrological drought in order to secure a sufficient drought response period.
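The standardized indices above share one recipe: accumulate the input over a window and standardize each accumulation against the historical record. A simplified z-score sketch on invented rainfall (operational SPI fits a gamma distribution before transforming to a standard normal; a plain z-score stands in for that here):

```python
import statistics

def standardized_index(series, window):
    """Z-score sketch of a standardized drought index: accumulate the
    input over `window` steps, then standardize each accumulation
    against the record of accumulations. (Operational SPI/SRSI fit a
    distribution first; a plain z-score stands in for it here.)"""
    sums = [sum(series[i - window:i]) for i in range(window, len(series) + 1)]
    mu = statistics.mean(sums)
    sd = statistics.stdev(sums)
    return [(s - mu) / sd for s in sums]

# Hypothetical monthly rainfall (mm) ending in a pronounced dry spell.
rain = [120, 90, 150, 110, 130, 100, 140, 95, 30, 20, 25, 15]
spi6 = standardized_index(rain, 6)   # 6-month accumulation, like SPI(6)
```

The same function applied to reservoir storage instead of rainfall gives the SRSI-style companion index, which is what makes the linked SPI-SRSI analysis possible.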

Development of a deep neural network model to estimate solar radiation using temperature and precipitation (온도와 강수를 이용하여 일별 일사량을 추정하기 위한 심층 신경망 모델 개발)

  • Kang, DaeGyoon;Hyun, Shinwoo;Kim, Kwang Soo
    • Korean Journal of Agricultural and Forest Meteorology / v.21 no.2 / pp.85-96 / 2019
  • Solar radiation is an important variable for the estimation of energy balance and the water cycle in natural and agricultural ecosystems. A deep neural network (DNN) model was developed to estimate daily global solar radiation. Temperature and precipitation, which are more widely available from weather stations than variables such as sunshine duration, were used as inputs to the DNN model. Five-fold cross-validation was applied to train and test the DNN models. Meteorological data at 15 weather stations in Korea were collected for a long-term period, e.g., >30 years. The DNN model obtained from the cross-validation had a relatively small RMSE (3.75 MJ m⁻² d⁻¹) for estimates of daily solar radiation at the Suwon weather station, and explained about 68% of the variation in observed solar radiation there. It was found that the measurements of solar radiation in 1985 and 1998 were considerably low for short periods of time compared with sunshine duration, which suggests that quality assessment of the solar radiation observations is needed in further studies. When data for those years were excluded from the analysis, the DNN model had slightly better agreement statistics; for example, the values of R² and RMSE were 0.72 and 3.55 MJ m⁻² d⁻¹, respectively. Our results indicate that a DNN would be useful for developing a solar radiation estimation model using temperature and precipitation, which are usually available in downscaled scenario data for future climate conditions. Such a DNN model would thus be useful for the impact assessment of climate change on crop production, where solar radiation is a required input variable to crop models.
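The agreement statistics quoted above, RMSE and R², are straightforward to compute; a minimal sketch on hypothetical observed and estimated values:

```python
import math

def rmse(obs, est):
    """Root-mean-square error between observed and estimated values."""
    return math.sqrt(sum((o - e) ** 2 for o, e in zip(obs, est)) / len(obs))

def r_squared(obs, est):
    """Coefficient of determination: share of observed variance explained."""
    mean_obs = sum(obs) / len(obs)
    ss_res = sum((o - e) ** 2 for o, e in zip(obs, est))
    ss_tot = sum((o - mean_obs) ** 2 for o in obs)
    return 1.0 - ss_res / ss_tot

# Hypothetical observed vs. estimated daily solar radiation (MJ m-2 d-1),
# not the study's Suwon data.
obs = [12.1, 18.4, 22.0, 9.5, 15.3, 25.2, 20.1]
est = [13.0, 17.2, 21.5, 11.0, 14.8, 23.9, 21.0]
```

Dropping suspect years, as the study did for 1985 and 1998, changes these two numbers directly, which is why the reported R² rose from 0.68 to 0.72.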

Case Analysis of the Promotion Methodologies in the Smart Exhibition Environment (스마트 전시 환경에서 프로모션 적용 사례 및 분석)

  • Moon, Hyun Sil;Kim, Nam Hee;Kim, Jae Kyeong
    • Journal of Intelligence and Information Systems / v.18 no.3 / pp.171-183 / 2012
  • With the development of technologies, the exhibition industry has received much attention from governments and companies as an important means of marketing, and exhibitors have come to consider exhibitions a new marketing channel. However, the growing size of exhibitions in net square feet and number of visitors naturally creates a competitive environment. Therefore, to make use of effective marketing tools in this environment, exhibitors have planned and implemented many promotion techniques. In particular, a smart environment that lets them provide real-time information to visitors enables various kinds of promotion. However, promotions that ignore visitors' varied needs and preferences can lose their original purposes and functions: indiscriminate promotions that visitors perceive as spam cannot achieve their goals. What is needed is an approach using an STP strategy, which segments visitors on the right evidence (Segmentation), selects the target visitors (Targeting), and provides proper services to them (Positioning). To use an STP strategy in the smart exhibition environment, we consider its characteristics. First, an exhibition is defined as a market event of a specific duration, held at intervals. Accordingly, exhibitors should plan different events and promotions for each exhibition; when traditional STP strategies are adopted, a system may have to provide services using insufficient information on existing visitors, and its performance must still be guaranteed. Second, for automatic segmentation, cluster analysis, a common data mining technique, can be adopted. In the smart exhibition environment, information on visitors can be acquired in real time, and services using this information should also be provided in real time. However, many clustering algorithms have a scalability problem: they hardly work on large databases and require domain knowledge to determine input parameters. Therefore, a suitable methodology must be selected and fitted so that real-time services can be provided. Finally, the data available in the smart exhibition environment should be exploited. Since there are useful data such as booth visit records and event participation records, the STP strategy for the smart exhibition is based not only on demographic segmentation but also on behavioral segmentation. In this study, we therefore analyze a case of a promotion methodology by which exhibitors can provide differentiated services to segmented visitors in the smart exhibition environment. First, considering the characteristics of the smart exhibition environment, we draw evidence for segmentation and fit the clustering methodology for providing real-time services. There are many studies on classifying visitors, but we adopt a segmentation methodology based on visitors' behavioral traits. Through direct observation, Veron and Levasseur classified visitors into four groups, likening visitors' traits to animals (butterfly, fish, grasshopper, and ant). Because the variables of their classification, such as the number of visits and the average time per visit, can be estimated in the smart exhibition environment, it provides a theoretical and practical background for our system. Next, we construct a pilot system which automatically selects suitable visitors according to the objectives of promotions and instantly delivers promotion messages to them. That is, based on our segmentation methodology, the system automatically selects visitors suited to the characteristics of each promotion. We applied this system to a real exhibition environment and analyzed the resulting data. As a result of classifying visitors into four types through their behavioral patterns in the exhibition, we provide insights for researchers who build smart exhibition environments, together with promotion strategies fitting each cluster. First, visitors of the ANT type show a high response rate for all promotion messages except experience promotions: they are attracted by tangible benefits in the exhibition area and dislike promotions requiring a long time. In contrast, visitors of the GRASSHOPPER type show a high response rate only for experience promotions. Second, visitors of the FISH type show a preference for coupon and content promotions; although they do not look around in detail, they prefer to obtain further information such as brochures. Exhibitors that want to convey much information in a limited time should pay attention to visitors of this type. Consequently, these promotion strategies are expected to give exhibitors insights when they plan and organize their activities, and to improve their performance.
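Behavioral segmentation on features such as visit counts and average visit time can be sketched with a minimal k-means on hypothetical visitor data. The paper's system and the four animal types involve more than this toy clustering, and the visitor tuples below are invented:

```python
import math
import random

def kmeans(points, k, iters=50, seed=0):
    """Minimal k-means over (visit count, average visit minutes) pairs."""
    rng = random.Random(seed)
    centers = [list(p) for p in rng.sample(points, k)]
    labels = [0] * len(points)
    for _ in range(iters):
        # Assignment step: nearest center by Euclidean distance.
        for i, p in enumerate(points):
            labels[i] = min(range(k), key=lambda c: math.dist(p, centers[c]))
        # Update step: move each center to the mean of its members.
        for c in range(k):
            members = [p for i, p in enumerate(points) if labels[i] == c]
            if members:
                centers[c] = [sum(dim) / len(members) for dim in zip(*members)]
    return labels, centers

# Hypothetical visitors: (booth visits, average minutes per visit).
visitors = [(2, 30), (3, 28), (15, 2), (14, 3),
            (4, 4), (5, 5), (12, 25), (13, 27)]
labels, centers = kmeans(visitors, 4)
```

As the abstract notes, input parameters such as k require domain knowledge; here k = 4 is chosen only to mirror the four behavioral types.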

Twitter Issue Tracking System by Topic Modeling Techniques (토픽 모델링을 이용한 트위터 이슈 트래킹 시스템)

  • Bae, Jung-Hwan;Han, Nam-Gi;Song, Min
    • Journal of Intelligence and Information Systems / v.20 no.2 / pp.109-122 / 2014
  • People nowadays create a tremendous amount of data on Social Network Services (SNS). In particular, the incorporation of SNS into mobile devices has resulted in massive amounts of data generation, greatly influencing society. This is an unmatched phenomenon in history, and we now live in the Age of Big Data. SNS data satisfies the conditions of Big Data: the amount of data (volume), data input and output speeds (velocity), and the variety of data types (variety). If one can discover the trend of an issue in SNS Big Data, this information can be used as an important new source for the creation of value, because it covers the whole of society. In this study, a Twitter Issue Tracking System (TITS) is designed and built to meet the need to analyze SNS Big Data. TITS extracts issues from Twitter texts and visualizes them on the web. The proposed system provides the following four functions: (1) provide the topic keyword set corresponding to the daily ranking; (2) visualize the daily time-series graph of a topic over the duration of a month; (3) present the importance of a topic through a treemap based on a score system and frequency; (4) visualize the daily time-series graph of keywords via keyword search. The present study analyzes the Big Data generated by SNS in real time. SNS Big Data analysis requires various natural language processing techniques, including stop-word removal and noun extraction, for processing various unrefined forms of unstructured data. In addition, such analysis requires the latest big data technology to rapidly process large amounts of real-time data, such as the Hadoop distributed system or NoSQL, an alternative to relational databases. We built TITS on Hadoop to optimize big data processing, because Hadoop is designed to scale up from single-node computing to thousands of machines. Furthermore, we use MongoDB, which is classified as a NoSQL database. MongoDB is an open-source, document-oriented database that provides high performance, high availability, and automatic scaling. Unlike existing relational databases, MongoDB has no schemas or tables, and its most important goals are data accessibility and data processing performance. In the Age of Big Data, visualization is attractive to the Big Data community because it helps analysts examine data easily and clearly. TITS therefore uses the d3.js library as a visualization tool. This library is designed for creating Data-Driven Documents that bind the document object model (DOM) to data; interaction with the data is easy, and it is useful for managing real-time data streams with smooth animation. In addition, TITS uses Bootstrap, a set of pre-configured plug-in style sheets and JavaScript libraries, to build the web system. The TITS Graphical User Interface (GUI) is designed using these libraries and is capable of detecting issues on Twitter in an easy and intuitive manner. The proposed work demonstrates the quality of our issue detection techniques by matching detected issues with corresponding online news articles. The contributions of the present study are threefold. First, we suggest an alternative approach to real-time big data analysis, which has become an extremely important issue. Second, we apply a topic modeling technique used in various research areas, including Library and Information Science (LIS), and on this basis confirm the utility of storytelling and time-series analysis. Third, we develop a web-based system and make it available for the real-time discovery of topics. The present study conducted experiments with nearly 150 million tweets in Korea during March 2013.
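Function (1), a daily topic-keyword ranking, can be caricatured with plain term counting. TITS itself applies topic modeling rather than raw counts, and the tweets below are invented:

```python
from collections import Counter

STOPWORDS = frozenset({"the", "a", "is"})

def daily_topic_ranking(tweets_by_day, top_n=3):
    """Per-day keyword ranking by raw term frequency after stop-word
    removal; a crude stand-in for the topic-modeling ranking in TITS."""
    ranking = {}
    for day, tweets in tweets_by_day.items():
        counts = Counter(
            word
            for tweet in tweets
            for word in tweet.lower().split()
            if word not in STOPWORDS
        )
        ranking[day] = [word for word, _ in counts.most_common(top_n)]
    return ranking

# Invented tweets for two days.
tweets = {
    "2013-03-01": ["the election is coming", "election debate tonight",
                   "debate highlights"],
    "2013-03-02": ["storm warning issued", "storm damage reported",
                   "warning extended"],
}
rank = daily_topic_ranking(tweets)
```

The per-day frequencies this produces are also the raw material for the time-series graphs and treemap weights in functions (2)-(4).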

Linearity Estimation of PET/CT Scanner in List Mode Acquisition (List Mode에서 PET/CT Scanner의 직선성 평가)

  • Choi, Hyun-Jun;Kim, Byung-Jin;Ito, Mikiko;Lee, Hong-Jae;Kim, Jin-Ui;Kim, Hyun-Joo;Lee, Jae-Sung;Lee, Dong-Soo
    • The Korean Journal of Nuclear Medicine Technology / v.16 no.1 / pp.86-90 / 2012
  • Purpose: Quantification of myocardial blood flow (MBF) using dynamic PET imaging has the potential to assess coronary artery disease. Rb-82 plays a key role in the clinical assessment of myocardial perfusion using PET. However, MBF can be overestimated due to underestimation of the left ventricular input function at the beginning of the acquisition when the scanner has a non-linear relationship between count rate and activity concentration due to scanner dead time. Therefore, in this study, we evaluated the count-rate linearity as a function of activity concentration in PET data acquired in list mode. Materials & Methods: A cylindrical phantom (diameter 12 cm, length 10.5 cm) filled with 296 MBq of F-18 solution and 800 mL of water was used to estimate the linearity of the Biograph 40 True Point PET/CT scanner. PET data were acquired in list mode for 10 min per frame at one bed position for different activity concentration levels over 7 half-lives. The images were reconstructed by OSEM and FBP algorithms. Prompt, net true, and random counts of the PET data were measured as a function of activity concentration. Total and background counts were measured by drawing ROIs on the phantom images, and linearity was measured using background correction. Results: The prompt count rates in list mode increased linearly in proportion to the activity concentration. At low activity concentrations (<30 kBq/mL), the prompt net true and random count rates increased with the activity concentration. At high activity concentrations (>30 kBq/mL), the rate of increase of the prompt net true counts decreased slightly, while that of the random counts increased. There was no difference in image-intensity linearity between the OSEM and FBP algorithms. Conclusion: The Biograph 40 True Point PET/CT scanner showed good count-rate linearity even at a high activity concentration (~370 kBq/mL). This result indicates that the scanner is useful for the quantitative analysis of data in dynamic cardiac studies using Rb-82, N-13, O-15, and F-18.

  • PDF
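The linearity check above compares measured count rate against the known activity as the F-18 source decays through its half-lives. A sketch combining exponential decay with a non-paralyzable dead-time model (the dead-time constant below is an illustrative assumption, not the scanner's):

```python
import math

F18_HALF_LIFE_MIN = 109.77

def decayed_activity(a0_kbq_ml, minutes):
    """F-18 activity concentration after `minutes` of decay."""
    return a0_kbq_ml * math.exp(-math.log(2.0) * minutes / F18_HALF_LIFE_MIN)

def measured_rate(true_rate_cps, tau_s=1e-6):
    """Non-paralyzable dead-time model: the measured rate falls below
    the true rate as activity rises, producing the count-rate
    non-linearity the phantom acquisition checks for."""
    return true_rate_cps / (1.0 + true_rate_cps * tau_s)

# Activity concentrations at each of 7 half-lives of decay,
# starting from ~370 kBq/mL as in the phantom study.
concs = [decayed_activity(370.0, n * F18_HALF_LIFE_MIN) for n in range(8)]
```

Plotting measured against true rate over such a decay series is what reveals whether the scanner stays linear up to the highest concentration.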

Some Statistical Characteristics of Substorms Under Northward IMF Conditions (북쪽방향 IMF 조건하에서 발생하는 서브스톰의 통계적 특성)

  • Lee, Ji-Hee;Lee, D.Y.;Choi, K.C.;Jeong, Y.
    • Journal of Astronomy and Space Sciences / v.26 no.4 / pp.451-466 / 2009
  • While substorms are known to generally occur under southward IMF conditions, they can sometimes occur even under northward IMF conditions. In this paper, we studied the substorms that occurred in May of 2000 to 2002 to examine some statistical characteristics of the IMF and solar wind associated with northward IMF substorms. We focused on cases where two or more substorms occurred successively under northward IMF conditions. Also, by checking the Sym-H index associated with each substorm, we determined whether such northward IMF substorm occurrences are associated with storm times. We also examined statistical properties at geosynchronous altitude in terms of magnetic field dipolarization and energetic particle injection. The following results were obtained. (i) Most of the northward IMF substorms occurred under average solar wind conditions. The majority occurred within a 2-hr duration of the northward IMF Bz state, but a non-negligible number of substorms occurred after a longer duration of northward IMF Bz. (ii) While most of the substorms occurred in isolation from magnetic storm times, those that occurred during magnetic storms show higher average IMF and solar wind values than the isolated substorms. (iii) About 55% of the substorms were associated with an IMF clock angle that could possibly allow dayside reconnection, while the other 45% were associated with more or less purely northward IMF conditions. For the latter cases, therefore, the energy input from the solar wind into the magnetosphere must be supplied by some mechanism other than dayside reconnection. (iv) For most of the substorms, the magnetic field dipolarizations and energetic particle injections at geosynchronous altitude were generally weak, but several events showed strong dipolarizations and injections.
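The clock-angle classification in result (iii) reduces to the angle of the IMF in the GSM y-z plane. A small sketch; the 45° cut used below to separate "possibly reconnection-favorable" from "purely northward" intervals is an illustrative assumption, not the paper's criterion:

```python
import math

def clock_angle_deg(by, bz):
    """IMF clock angle in the GSM y-z plane: 0 deg is purely
    northward (Bz > 0, By = 0), 180 deg is purely southward."""
    return math.degrees(math.atan2(abs(by), bz))

def reconnection_favorable(by, bz, threshold_deg=45.0):
    """Crude split of northward-IMF intervals: clock angles beyond
    threshold_deg are treated as possibly allowing dayside
    reconnection (the 45 deg cut is an illustrative assumption)."""
    return clock_angle_deg(by, bz) > threshold_deg

northward = clock_angle_deg(0.0, 5.0)   # purely northward IMF
tilted = clock_angle_deg(5.0, 5.0)      # strong By with Bz > 0
```

Applying such a split to each event's IMF record is how the 55%/45% partition between reconnection-favorable and purely northward cases can be tabulated.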