• Title/Abstract/Keyword: flow patterns

Search results: 1,697 items (processing time: 0.031 seconds)

End-use Analysis of Household Water by Metering (가정용수의 용도별 사용 원단위 분석)

  • Kim, Hwa Soo;Lee, Doo Jin;Kim, Ju Whan;Jung, Kwan Soo
    • KSCE Journal of Civil and Environmental Engineering Research / Vol. 28, No. 5B / pp.595-601 / 2008
  • The purpose of this study is to investigate the trends and patterns of various kinds of household water use in Korea by metering. Water use is classified into toilet, washbowl, bathing, laundry, kitchen, and miscellaneous components. Flow meters were installed in 140 households sampled from across Korea, and data were gathered through a web-based collection system from 2002 to 2006, together with pre-surveyed information such as occupation, income, number of family members, housing type, age, floor area, water-saving devices, and education. Reliable data were selected by the upper fence method for each observed water use component, and statistical characteristics were estimated for each residential type to determine liters per capita per day (lpcd). The estimated indoor water use ranges from 150 lpcd to 169 lpcd by housing type, in the order of high-rise apartment, multi-family house, and single house. By consumption, toilet use (38.5 lpcd) is the largest, followed by laundry (30.8 lpcd), kitchen (28.4 lpcd), bathtub (24.7 lpcd), and washbowl (15.4 lpcd). The results are compared with water use in the U.K. and U.S.; as lifestyles have become more western, the pattern of water use in Korea tends to resemble the U.S. pattern. Compared with the 1985 survey by Bradley, total use has increased by about thirty liters with economic growth, with only small changes in the overall pattern. In the 1985 survey, toilet water accounted for almost half of total use and laundry was the lowest at 11%; this study shows that toilet water has decreased to about 39 liters (28% of the total) owing to the spread of water-saving devices and conservation campaigns. It appears that the spread of large washing machines has reduced hand laundry and increased laundry water use. The unit water amount of each household end use can be applied as a design factor for water and wastewater facilities, and serves as basic information for establishing water demand forecasting and conservation policy.
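
The abstract mentions screening observations with the upper fence method before estimating per-capita figures. A minimal sketch of that kind of screening is shown below; the Tukey-style Q3 + 1.5*IQR cutoff, the function name, and the example readings are assumptions for illustration, not details taken from the paper.

```python
import numpy as np

def upper_fence_filter(values):
    """Keep observations at or below the upper fence (Q3 + 1.5 * IQR).

    This is the common Tukey-style upper fence; the exact rule used in the
    paper is not specified in the abstract, so treat this as an assumption.
    """
    values = np.asarray(values, dtype=float)
    q1, q3 = np.percentile(values, [25, 75])
    upper_fence = q3 + 1.5 * (q3 - q1)
    return values[values <= upper_fence]

# Hypothetical daily toilet-use readings (liters per capita per day)
toilet_lpcd = [35.2, 40.1, 38.7, 36.9, 120.0, 39.4, 37.8]
reliable = upper_fence_filter(toilet_lpcd)
print(reliable.mean())  # mean after removing the outlier reading
```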

Fish Community Characteristics and Distribution Aspect of Rhodeus pseudosericeus(Cyprinidae) in the Geumdangcheon(Stream), a Tributary of the Hangang Drainage System of Korea (한강 지류 금당천의 어류군집 특징과 멸종위기종 한강납줄개의 서식양상)

  • Mee-Sook Han;Myeong-Hun Ko
    • Korean Journal of Environment and Ecology / Vol. 37, No. 2 / pp.151-162 / 2023
  • This study investigated the characteristics of fish communities and the habitat status of the endangered species Rhodeus pseudosericeus in the Geumdang Stream, Korea, from March to October 2021. A total of 1,698 fish belonging to 5 families and 25 species were collected from 7 survey stations during the survey period. The dominant species was Zacco platypus (relative abundance 46.5%), and the subdominant species was Squalidus gracilis majimae (16.7%), followed by Rhynchocypris oxycephalus (12.0%), Z. koreanus (5.7%), Pungtungia herzi (3.2%), R. pseudosericeus (2.0%), R. notatus (1.9%), and Acheilognathus rhombeus (1.8%). Nine Korean endemic species (36.0%) were collected, including R. pseudosericeus, R. uyekii, Sarcocheilichthys variegatus wakiyae, Microphysogobio yaluensis, S. gracilis majimae, Z. koreanus, Cobitis nalbanti, Iksookimia koreensis, and Odontobutis interrupta. One exotic species, Micropterus salmoides, designated as an invasive alien species (IAS), was collected downstream. The investigation of the habitat patterns of the endangered species (class II) R. pseudosericeus showed a habitat range of about 6 to 7 km in the middle reaches of the Geumdang Stream (RP-1 to RP-4); this species inhabited stream margins with water depths of 0.3 to 1.0 m, slow flow, and abundant aquatic plants. According to the community analysis, the overall dominance and evenness indices were low while the diversity and richness indices were high, and the community structure was largely divided into upstream and middle-downstream areas. River health evaluated with the fish assessment index was rated good (3 stations), normal (3 stations), or bad (1 station), and water quality was evaluated as good both upstream and downstream. Compared with previous studies, the number of species was similar; 13 previously recorded species did not appear in this survey, while 6 species appeared for the first time. Disturbance factors included river construction, numerous weirs, and the presence of the ecosystem-disturbing species M. salmoides. Because the Geumdang Stream has high conservation value as a habitat for many species of the subfamily Acheilognathinae, including the endangered R. pseudosericeus, continued attention and systematic conservation measures are required.
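
The community analysis in this abstract reports dominance, evenness, diversity, and richness indices. A small sketch of how such indices are commonly computed from abundance counts is given below (Shannon diversity, Pielou evenness, McNaughton dominance, Margalef richness); the abstract does not state which formulations the authors used, so these specific choices and the example counts are assumptions.

```python
import math

def community_indices(counts):
    """Compute common community indices from species abundance counts.

    counts: list of individuals per species at a station.
    Formulations assumed here: Shannon H', Pielou evenness J',
    McNaughton dominance (two most abundant species), Margalef richness.
    """
    n = sum(counts)
    s = len(counts)
    proportions = [c / n for c in counts if c > 0]
    shannon = -sum(p * math.log(p) for p in proportions)          # diversity H'
    evenness = shannon / math.log(s) if s > 1 else 0.0            # Pielou J'
    dominance = sum(sorted(counts, reverse=True)[:2]) / n         # McNaughton DI
    richness = (s - 1) / math.log(n) if n > 1 else 0.0            # Margalef R
    return {"diversity": shannon, "evenness": evenness,
            "dominance": dominance, "richness": richness}

# Hypothetical abundances for one survey station (not the paper's data)
print(community_indices([790, 284, 204, 97, 54, 34, 33, 31]))
```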

Determining Spatial and Temporal Variations of Surface Particulate Organic Carbon (POC) using in situ Measurements and Remote Sensing Data in the Northeastern Gulf of Mexico during El Niño and La Niña (현장관측 및 원격탐사 자료를 이용한 북동 멕시코 만에서 El Niño와 La Niña 기간 동안 표층 입자성 유기탄소의 시/공간적 변화 연구)

  • Son, Young-Baek;Gardner, Wilford D.
    • The Sea: Journal of the Korean Society of Oceanography / Vol. 15, No. 2 / pp.51-61 / 2010
  • Surface particulate organic carbon (POC) concentration was measured in the northeastern Gulf of Mexico on 9 cruises from November 1997 to August 2000 to investigate seasonal and spatial variability in relation to synchronous remote sensing data (Sea-viewing Wide Field-of-view Sensor (SeaWiFS) ocean color, sea surface temperature (SST), sea surface height anomaly (SSHA), and sea surface wind (SSW)) and recorded river discharge. Surface POC concentrations are higher (>100 mg/m³) on the inner shelf and near the Mississippi Delta and decrease across the shelf and slope. Inter-annual variations of surface POC concentration are relatively higher during 1997 and 1998 (El Niño) than during 1999 and 2000 (La Niña) in the study area. This phenomenon is directly related to the discharge of the Mississippi River and other major rivers, which is associated with global climate variability such as ENSO events. Although river runoff into the northern Gulf of Mexico coast is highest in early spring and lowest in late summer and fall, wide-ranging POC plumes are observed during the summer cruises, with lower concentrations and narrower dispersion of POC during the spring and fall cruises. During summer, river discharge decreases markedly compared to spring, but increasing temperature causes strong stratification of the water column and increased buoyancy in near-surface waters. Low-density plumes containing higher POC concentrations extend out over the shelf and slope, with spatial patterns controlled by the Loop Current and eddies that dominate offshore circulation. Whether river discharge is normal or anomalous during the spring and fall seasons, increasing wind stress and decreasing temperature cause vertical mixing, with higher surface POC concentrations confined to the inner shelf.

Assessment of Methane Production Rate Based on Factors of Contaminated Sediments (오염퇴적물의 주요 영향인자에 따른 메탄발생 생성률 평가)

  • Dong Hyun Kim;Hyung Jun Park;Young Jun Bang;Seung Oh Lee
    • Journal of Korean Society of Disaster and Security / Vol. 16, No. 4 / pp.45-59 / 2023
  • The global focus on mitigating climate change has traditionally centered on carbon dioxide, but attention has recently shifted toward methane as a crucial factor in climate change adaptation. Natural settings, particularly aquatic environments such as wetlands, reservoirs, and lakes, play a significant role as sources of greenhouse gases. The accumulation of organic contaminants on lake and reservoir beds can lead to microbial decomposition of sedimentary material, generating greenhouse gases, notably methane, under anaerobic conditions. The escalation of methane emissions in freshwater is attributed to the growing impact of non-point sources, alterations of water bodies for diverse purposes, and the introduction of structures such as river crossings that disrupt natural flow patterns. Furthermore, the effects of climate change, including rising water temperatures and the ensuing hydrological and water quality challenges, contribute to an acceleration of methane emissions into the atmosphere. Methane emissions occur through various pathways, with ebullition fluxes, in which methane bubbles form in and are released from bed sediments, recognized as a major mechanism. This study employs Biochemical Methane Potential (BMP) tests to analyze and quantify the factors influencing methane gas emissions. Methane production rates are measured under diverse conditions, including temperature, substrate type (glucose), shear velocity, and sediment properties. Additionally, numerical simulations are conducted to analyze the relationship between fluid shear stress on the sand bed and methane ebullition rates. The findings reveal that biochemical factors significantly influence methane production, whereas shear velocity primarily affects methane ebullition. Sediment properties are identified as influential factors impacting both methane production and ebullition. Overall, this study establishes empirical relationships between bubble dynamics, the Weber number, and methane emissions, presenting a formula to estimate methane ebullition flux. Future research, incorporating specific conditions such as water depth, effective shear stress below the sediment's tensile strength, and organic matter, is expected to contribute to the development of biogeochemical and hydro-environmental impact assessment methods suitable for in-situ applications.
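
The abstract relates bubble dynamics to methane ebullition through the Weber number but does not reproduce the fitted formula. The sketch below only computes the conventional Weber number for a rising bubble, We = ρu²d/σ; the parameter values and the idea of correlating We with an ebullition flux are illustrative assumptions, not the paper's actual relationship.

```python
def weber_number(density, velocity, diameter, surface_tension):
    """Weber number We = rho * u^2 * d / sigma for a bubble of diameter d
    rising with velocity u through water (inertia vs. surface tension)."""
    return density * velocity**2 * diameter / surface_tension

# Hypothetical values for a methane bubble in fresh water at ~20 degrees C
rho_water = 998.0        # kg/m^3
sigma_water = 0.0728     # N/m, air/water surface tension
bubble_d = 0.004         # m, 4 mm bubble
rise_velocity = 0.25     # m/s, typical small-bubble rise speed

we = weber_number(rho_water, rise_velocity, bubble_d, sigma_water)
print(f"Weber number: {we:.2f}")
# The paper reportedly fits an empirical curve of ebullition flux against We;
# that fit is not given in the abstract, so it is not reproduced here.
```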

Methacholine Responsiveness of Bronchial and Extrathoracic Airway in Patients with Chronic Cough (만성 기침 환자에서 기관지와 흉곽외 기도의 Methacholine 유발검사의 의의)

  • Shim, Jae-Jeong;Kim, Je-Hyeong;Lee, Sung-Yong;Kwan, Young-Hwan;Lee, So-Ra;Lee, Sang-Yeub;Lee, Sang-Hwa;Suh, Jung-Kyung;Cho, Jae-Youn;In, Kwang-Ho;Yoo, Se-Hwa;Kang, Kyung-Ho
    • Tuberculosis and Respiratory Diseases / Vol. 44, No. 4 / pp.853-860 / 1997
  • Background: Chronic cough, defined as a cough persisting for three weeks or longer, is a common symptom for which outpatient care is sought. The most common etiologies of chronic cough are postnasal drip, asthma, and gastroesophageal reflux. The methacholine challenge is a useful diagnostic study in the evaluation of chronic cough, and is particularly useful in patients with asthmatic symptoms. Patients with chronic cough may have dysfunction of the bronchial and extrathoracic airways. To evaluate whether dysfunction of the bronchial and extrathoracic airways causes chronic cough, we assessed bronchial (BHR) and extrathoracic airway (EAHR) responsiveness to inhaled methacholine in patients with chronic cough. Method: 111 patients with chronic cough were enrolled in the study. Enrolled patients had no recorded diagnosis of asthma, bronchopulmonary disease, hypertension, heart disease, or systemic disease, and no current treatment with a bronchodilator or corticosteroid. They comprised 46 patients with cough alone, 24 with wheeze, 22 with dyspnea, and 19 with wheeze and dyspnea. The inhaled methacholine concentration causing a 20% fall in forced expiratory volume in one second (PC20FEV1) and that causing a 25% fall in maximal mid-inspiratory flow (PC25MIF50) were used as indices of bronchial and extrathoracic airway hyperresponsiveness, respectively. Results: There were four response patterns to the methacholine challenge: BHR in 27 patients, EAHR in 16 patients, combined BHR and EAHR in 8 patients, and no hyperresponsiveness in 60 patients. Among patients with cough alone, there were BHR in 3 patients, EAHR in 9 patients, and combined BHR and EAHR in 2 patients. Among patients with wheeze and/or dyspnea, there were BHR in 24 patients, EAHR in 7 patients, and combined BHR and EAHR in 6 patients. Compared with patients with wheeze and/or dyspnea, patients with cough alone more commonly showed EAHR than BHR, whereas in patients with wheeze and/or dyspnea BHR was more common than EAHR. Conclusion: These results show that among patients with hyperresponsiveness to methacholine, those with dyspnea and/or wheezing had mainly bronchial hyperresponsiveness, whereas those with chronic cough alone had mainly extrathoracic airway hyperresponsiveness.
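
PC20FEV1 in this abstract is the provocative concentration of methacholine producing a 20% fall in FEV1. A common way to obtain it from a dose-response series is log-linear interpolation between the last two concentrations, as sketched below; the interpolation formula is standard practice rather than a detail stated in this paper, and the example values are illustrative.

```python
import math

def provocative_concentration(c1, c2, fall1, fall2, target_fall=20.0):
    """Log-linear interpolation of a provocative concentration (e.g. PC20).

    c1, c2: second-to-last and last methacholine concentrations (mg/mL)
    fall1, fall2: percent falls in FEV1 at c1 and c2 (fall1 < target <= fall2)
    """
    log_pc = (math.log10(c1)
              + (math.log10(c2) - math.log10(c1))
              * (target_fall - fall1) / (fall2 - fall1))
    return 10 ** log_pc

# Hypothetical dose-response step: 12% fall at 2 mg/mL, 26% fall at 4 mg/mL
print(round(provocative_concentration(2.0, 4.0, 12.0, 26.0), 2))  # PC20 in mg/mL
```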


Bankruptcy Forecasting Model using AdaBoost: A Focus on Construction Companies (적응형 부스팅을 이용한 파산 예측 모형: 건설업을 중심으로)

  • Heo, Junyoung;Yang, Jin Yong
    • Journal of Intelligence and Information Systems / Vol. 20, No. 1 / pp.35-48 / 2014
  • According to the 2013 construction market outlook report, liquidations of construction companies are expected to continue due to the ongoing residential construction recession. Bankruptcies of construction companies have a greater social impact than those in other industries, yet because of their distinctive capital structure and debt-to-equity ratio, they are harder to forecast than bankruptcies in other industries. The construction industry operates on greater leverage, with high debt-to-equity ratios and project cash flows concentrated in the second half of a project. The economic cycle greatly influences construction companies, so downturns tend to rapidly increase their bankruptcy rates, and high leverage coupled with increased bankruptcy rates places a greater burden on banks providing loans to construction companies. Nevertheless, bankruptcy prediction research has concentrated mainly on financial institutions, and construction-specific studies are rare. Bankruptcy prediction models based on corporate financial data have been studied for some time in various ways, but those models target companies in general and may not be appropriate for forecasting bankruptcies of construction companies, which typically carry disproportionately large liquidity risks. The construction industry is capital-intensive, operates on long timelines with large-scale investment projects, and has comparatively longer payback periods than other industries; with such a distinctive capital structure, the criteria used to judge the financial risk of companies in general are difficult to apply to construction firms. The Altman Z-score, first published in 1968, is a commonly used bankruptcy forecasting model. It estimates the likelihood of a company going bankrupt with a simple formula, classifies the result into three categories, and evaluates corporate status as dangerous, moderate, or safe. A company in the "dangerous" category has a high likelihood of bankruptcy within two years, while one in the "safe" category has a low likelihood; for companies in the "moderate" category, the risk is difficult to forecast. Many of the construction firms in this study fell into the "moderate" category, which made their risk difficult to forecast. Along with the development of machine learning, recent studies of corporate bankruptcy forecasting have adopted such techniques. Pattern recognition, a representative application area of machine learning, is applied to bankruptcy forecasting: patterns are analyzed from a company's financial information and then judged as belonging to either the bankruptcy risk group or the safe group.
The representative machine learning models previously used in bankruptcy forecasting are artificial neural networks, adaptive boosting (AdaBoost), and the support vector machine (SVM), along with many hybrid studies combining them. Existing studies using the traditional Z-score technique or machine learning have focused on companies in non-specific industries, so industry-specific characteristics are not considered. In this paper, we confirm that AdaBoost is the most appropriate forecasting model for construction companies when analyzed by company size. We classified construction companies into three groups, large, medium, and small, based on capital, and analyzed the predictive ability of AdaBoost for each group. The experimental results show that AdaBoost has greater predictive ability than the other models, especially for the group of large companies with capital of more than 50 billion won.
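
The abstract compares AdaBoost with other classifiers on financial data grouped by company size. A minimal sketch of that kind of setup with scikit-learn is shown below; the synthetic features, labels, and split are placeholders, since the paper's actual variables and dataset are not given in the abstract.

```python
import numpy as np
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Hypothetical financial-ratio features and bankruptcy labels for construction
# firms in one capital-size group (placeholders, not the paper's data).
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 6))        # e.g. debt ratio, current ratio, ROA, ...
y = (X[:, 0] - 0.5 * X[:, 2] + rng.normal(scale=0.5, size=500) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=42, stratify=y)

# AdaBoost with decision-stump base learners (scikit-learn's default)
model = AdaBoostClassifier(n_estimators=200, learning_rate=0.5, random_state=42)
model.fit(X_train, y_train)
print("test accuracy:", accuracy_score(y_test, model.predict(X_test)))
```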

A Study on Knowledge Entity Extraction Method for Individual Stocks Based on Neural Tensor Network (뉴럴 텐서 네트워크 기반 주식 개별종목 지식개체명 추출 방법에 관한 연구)

  • Yang, Yunseok;Lee, Hyun Jun;Oh, Kyong Joo
    • Journal of Intelligence and Information Systems / Vol. 25, No. 2 / pp.25-38 / 2019
  • Selecting high-quality information that meets the interests and needs of users from the overflowing mass of content is becoming increasingly important. In this flood of information, efforts are being made to better reflect the user's intention in search results rather than treating an information request as a simple string, and large IT companies such as Google and Microsoft are focusing on knowledge-based technologies, including search engines, that provide users with satisfaction and convenience. Finance in particular is a field where text data analysis is expected to be useful, because new information is constantly generated and the earlier the information is obtained, the more valuable it is. Automatic knowledge extraction can be effective in areas such as the financial sector, where the information flow is vast and new information continues to emerge. However, automatic knowledge extraction faces several practical difficulties. First, it is hard to build corpora from different fields with the same algorithm and to extract good-quality triples. Second, producing labeled text data manually becomes more difficult as the extent and scope of knowledge increase and patterns are constantly updated. Third, performance evaluation is difficult because of the characteristics of unsupervised learning. Finally, defining the problem of automatic knowledge extraction is not easy because of the ambiguous conceptual nature of knowledge. To overcome these limits and improve the semantic performance of stock-related information search, this study extracts knowledge entities using a neural tensor network and evaluates their performance. Unlike previous references, the purpose of this study is to extract knowledge entities related to individual stock items. Various but relatively simple data processing methods are applied in the presented model to solve the problems of previous research and to enhance the effectiveness of the model. From these processes, this study has three significances. First, it provides a practical and simple automatic knowledge extraction method that can be applied directly. Second, it presents the possibility of performance evaluation through a simple problem definition. Finally, the expressiveness of the knowledge is increased by generating input data on a sentence basis without complex morphological analysis. The results of the empirical analysis and an objective performance evaluation method are also presented. For the empirical study to confirm the usefulness of the presented model, experts' reports on 30 individual stocks, the top 30 items by frequency of publication from May 30, 2017 to May 21, 2018, are used. The total number of reports is 5,600; 3,074 reports, about 55% of the total, are designated as the training set, and the remaining 45% as the testing set. Before constructing the model, all reports in the training set are classified by stock, and their entities are extracted using the KKMA named entity recognition tool. For each stock, the top 100 entities by appearance frequency are selected and vectorized with one-hot encoding. Then, using a neural tensor network, one score function per stock is trained.
Thus, when a new entity from the testing set appears, its score can be calculated with every score function, and the stock whose function yields the highest score is predicted as the item related to that entity. To evaluate the presented model, we check its predictive power and whether the score functions are well constructed by calculating the hit ratio over all reports in the testing set. As a result of the empirical study, the presented model shows 69.3% hit accuracy on the testing set of 2,526 reports. This hit ratio is meaningfully high despite several constraints on the research. Looking at the prediction performance for each stock, only three stocks, LG ELECTRONICS, KiaMtr, and Mando, show performance far below the average; this result may be due to interference with other similar items and the generation of new knowledge. In this paper, we propose a methodology for finding the key entities, or combinations of entities, needed to search for related information in accordance with the user's investment intention. Graph data are generated using only the named entity recognition tool and applied to the neural tensor network without a learning corpus or word vectors for the field. The empirical test confirms the effectiveness of the presented model as described above. However, there are also limits and points to complement; most notably, the especially poor performance on a few stocks shows the need for further research. Finally, through the empirical study, we confirmed that the learning method presented here can be used to match new text information semantically with the related stocks.
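
The abstract describes training one neural tensor network score function per stock over one-hot entity vectors and predicting the stock whose function scores a new entity highest. A small sketch of that scoring scheme is given below, following the standard NTN form score(e) = uᵀ tanh(eᵀW[1:k]e + Ve + b); the dimensions, random parameters, and vocabulary are illustrative assumptions, not the paper's trained model.

```python
import numpy as np

rng = np.random.default_rng(7)
d, k = 100, 4          # entity vector size (top-100 one-hot), tensor slices

def make_score_fn():
    """One NTN-style score function (one per stock in the paper's setup).

    score(e) = u^T tanh(e^T W[1:k] e + V e + b), with randomly initialized
    parameters here; the per-stock training step is omitted from this sketch.
    """
    W = rng.normal(scale=0.1, size=(k, d, d))   # bilinear tensor slices
    V = rng.normal(scale=0.1, size=(k, d))      # linear term
    b = np.zeros(k)
    u = rng.normal(scale=0.1, size=k)
    def score(e):
        bilinear = np.array([e @ W[i] @ e for i in range(k)])
        return float(u @ np.tanh(bilinear + V @ e + b))
    return score

# One score function per (hypothetical) stock
stocks = ["StockA", "StockB", "StockC"]
score_fns = {s: make_score_fn() for s in stocks}

# A new entity from a test report, one-hot encoded over the top-100 vocabulary
entity = np.zeros(d)
entity[17] = 1.0

# Predict the stock whose score function rates this entity highest
predicted = max(stocks, key=lambda s: score_fns[s](entity))
print(predicted)
```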