• Title/Summary/Keyword: Additional filter


Development of integrated microbubble and microfilter system for liquid fertilizer production by removing total coliform and improving reduction of suspended solid in livestock manure (가축분뇨 내 대장균 제거와 부유물질 저감 효율 향상을 통한 추비 생산용 미세기포 부상분리와 마이크로 필터 연계 시스템 개발)

  • Jang, Jae Kyung;Lee, Donggwan;Paek, Yee;Lee, Taeseok;Lim, Ryu Gap;Kim, Taeyoung
    • Journal of the Korea Academia-Industrial cooperation Society
    • /
    • v.22 no.2
    • /
    • pp.139-147
    • /
    • 2021
  • Livestock manure is used as an organic fertilizer to replace chemical fertilizers after sufficient fermentation in an aerobic bioreactor. However, liquid manure disposal problems recur because soil spraying is restricted during the summer, when crops are growing. To use liquid fertilizer (LF) as an additional nutrient source for crops, it is necessary to reduce the amount of suspended solids (SS) in the LF and to ensure its safety with respect to pathogenic microorganisms. This study examined simultaneous SS removal and E. coli sterilization in the LF using a microbubble (MB) generator with an inserted FeMgO catalyst; the remaining SS were then further removed using an integrated microbubble and microfilter system. During the flotation process in the MB device, 57.9% of the SS was removed, and coliforms were no longer detected (16,200 → 0 MPN/100 mL). By optimizing the hydraulic retention time (HRT) of the integrated system, the SS removal efficiency reached 92.9% at an HRT of 0.1 h. Analysis of the treated LF showed that 64.5%, 70.1%, 54.9%, and 51.5% of the TCOD, SCOD, PO4-P, and TN, respectively, had been removed. The effluent from the integrated system has a lower SS content than the existing LF and contains no coliforms; therefore, it can be used directly as an additional fertilizer.
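
The efficiencies reported above follow the standard removal formula, (influent − effluent) / influent × 100. Below is a minimal Python sketch of that calculation; the SS influent/effluent pair is an illustrative placeholder chosen only to reproduce the reported 57.9%, while the coliform values are the ones given in the abstract.

```python
# Removal efficiency across a treatment step:
#     removal (%) = (influent - effluent) / influent * 100

def removal_efficiency(influent: float, effluent: float) -> float:
    """Percent removal of a constituent across a treatment step."""
    if influent <= 0:
        raise ValueError("influent concentration must be positive")
    return (influent - effluent) / influent * 100.0

# Coliform reduction reported in the abstract (16,200 -> 0 MPN/100 mL):
print(f"Coliform removal: {removal_efficiency(16200, 0):.1f}%")  # 100.0%

# Hypothetical SS concentrations consistent with the reported 57.9%:
ss_in, ss_out = 1000.0, 421.0  # placeholder values, not measurements
print(f"SS removal: {removal_efficiency(ss_in, ss_out):.1f}%")   # 57.9%
```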

Imaging Characteristics of Computed Radiography Systems (CR 시스템의 종류와 I.P 크기에 따른 정량적 영상특성평가)

  • Jung, Ji-Young;Park, Hye-Suk;Cho, Hyo-Min;Lee, Chang-Lae;Nam, So-Ra;Lee, Young-Jin;Kim, Hee-Joung
    • Progress in Medical Physics
    • /
    • v.19 no.1
    • /
    • pp.63-72
    • /
    • 2008
  • With recent advances in medical imaging systems and the picture archiving and communication system (PACS), the installation of digital radiography has accelerated over the past few years. Computed radiography (CR), well established as a low-cost foundation for digital x-ray imaging, is widely used in clinical applications. This study analyzed the imaging characteristics of two CR systems with different pixel sizes through the modulation transfer function (MTF), noise power spectrum (NPS), and detective quantum efficiency (DQE). In addition, the influence of radiation dose on the imaging characteristics was assessed quantitatively. A standard beam quality, RQA5, based on an International Electrotechnical Commission (IEC) standard, was used for the x-ray imaging studies. The spatial resolution at 10% MTF for the Agfa CR system with I.P sizes of 8×10 inches and 14×17 inches was 3.9 cycles/mm and 2.8 cycles/mm, respectively; for the Fuji CR system, the corresponding values were 3.4 cycles/mm and 3.2 cycles/mm. There was a difference in spatial resolution for the 14×17 inch plates, although radiation dose did not affect the MTF. The NPS of the Agfa CR system was similar for the two pixel sizes (100 µm for the 8×10 inch I.P and 150 µm for the 14×17 inch I.P). For both systems, the NPS improved with increased radiation dose because of the larger number of photons. The DQE of the Agfa CR system at 1.5 cycles/mm was 11% for the 8×10 inch I.P and 8.8% for the 14×17 inch I.P. For both systems, a higher radiation dose led to a lower DQE. Measuring the DQE across these imaging characteristics plays a very important role in determining equipment efficiency and reducing patient radiation dose. In conclusion, the results of this study can serve as a baseline for optimizing imaging systems and their imaging characteristics by measuring the MTF, NPS, and DQE at different radiation dose levels.
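
For readers connecting the three metrics: the DQE is commonly computed from the presampled MTF, the normalized noise power spectrum (NNPS), and the incident photon fluence q as DQE(f) = MTF²(f) / (q · NNPS(f)). The sketch below is illustrative only; the MTF curve, NNPS level, and fluence are toy values, not measurements from this study, chosen so the result lands near the ~11% the abstract reports at 1.5 cycles/mm.

```python
import numpy as np

# DQE(f) = MTF(f)^2 / (q * NNPS(f)), where q is the incident photon
# fluence (photons/mm^2; SNR_in^2 for a Poisson beam) and NNPS is the
# noise power spectrum normalized by the squared large-area signal.

def dqe(mtf: np.ndarray, nnps: np.ndarray, fluence: float) -> np.ndarray:
    """Detective quantum efficiency from presampled MTF and normalized NPS."""
    return mtf**2 / (fluence * nnps)

f = np.linspace(0.1, 3.0, 30)      # spatial frequency, cycles/mm
mtf = np.exp(-0.6 * f)             # toy presampled MTF curve
nnps = np.full_like(f, 5.0e-6)     # toy white NNPS level, mm^2
q = 3.0e5                          # toy fluence, photons/mm^2

# DQE at 1.5 cycles/mm with these toy inputs (~0.110, i.e. ~11%):
print(f"DQE at 1.5 cycles/mm: {dqe(mtf, nnps, q)[np.argmin(np.abs(f - 1.5))]:.3f}")
```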


Enhancement of the Deformable Image Registration Accuracy Using Image Modification of MV CBCT (Megavoltage Cone-beam CT 영상의 변환을 이용한 변환 영상 정합의 정확도 향상)

  • Kim, Min-Joo;Chang, Ji-Na;Park, So-Hyun;Kim, Tae-Ho;Kang, Young-Nam;Suh, Tae-Suk
    • Progress in Medical Physics
    • /
    • v.22 no.1
    • /
    • pp.28-34
    • /
    • 2011
  • To perform adaptive radiation therapy (ART), a high degree of deformable registration accuracy is essential. The purpose of this study was to determine whether changing MV CBCT intensities, using a predefined modification level and a filtering process, can improve registration accuracy. To obtain the modification level, cheese phantom images were acquired with both kilovoltage CT (kV CT) and megavoltage cone-beam CT (MV CBCT). From these images, the modification level of the MV CBCT was defined from the relationship between the Hounsfield units (HUs) of the kV CT and MV CBCT images. A Gaussian smoothing filter was applied to reduce the noise of the MV CBCT images. The intensity of the MV CBCT image was then changed to that of the kV CT image so that the two images share the same intensity range, as if they had been obtained from the same modality. Demons deformable registration, which is efficient and easy to perform, was applied. A deformable lung phantom, built in the laboratory to imitate changes over the breathing cycle, was imaged with kV CT and MV CBCT, and the resulting images were processed with the proposed method. As a result of the deformable image registration, the correlation coefficient, used as a quantitative similarity measure, increased by 6.07% in the cheese phantom and by 18% in the deformable lung phantom. For an additional evaluation of the lung phantom registration, the center coordinates of markers inserted inside the phantom were measured to calculate the vector difference. The vector differences were 1.39 mm and 2.23 mm with and without intensity modification of the MV CBCT images, respectively. In summary, our method quantitatively improved the accuracy of deformable registration and could be a useful way to improve image registration accuracy. Further study was also suggested in this paper.
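
A minimal sketch of such a pipeline using SimpleITK, assuming a simple linear intensity mapping: the slope, offset, smoothing sigma, file names, and demons settings below are illustrative assumptions, since the paper derives its modification level from the cheese phantom HU relationship rather than a fixed linear fit.

```python
import SimpleITK as sitk

# Sketch: (1) remap MV CBCT intensities toward the kV CT HU range,
# (2) suppress noise with Gaussian smoothing, (3) run demons deformable
# registration. Assumes both images are already on the same grid
# (rigidly pre-aligned); file names are hypothetical.

kv_ct = sitk.ReadImage("kv_ct.nii.gz", sitk.sitkFloat32)      # fixed image
mv_cbct = sitk.ReadImage("mv_cbct.nii.gz", sitk.sitkFloat32)  # moving image

# (1) hypothetical linear intensity modification, MV CBCT -> kV CT HU
slope, offset = 1.8, -120.0
mv_mod = mv_cbct * slope + offset

# (2) Gaussian smoothing to reduce MV CBCT noise (sigma in mm, assumed)
mv_mod = sitk.SmoothingRecursiveGaussian(mv_mod, sigma=1.5)

# (3) demons registration; returns a displacement field
demons = sitk.DemonsRegistrationFilter()
demons.SetNumberOfIterations(100)
demons.SetStandardDeviations(1.0)   # smoothing of the update field
disp_field = demons.Execute(kv_ct, mv_mod)

# warp the modified MV CBCT onto the kV CT grid and save
transform = sitk.DisplacementFieldTransform(disp_field)
warped = sitk.Resample(mv_mod, kv_ct, transform, sitk.sitkLinear, -1000.0)
sitk.WriteImage(warped, "mv_cbct_registered.nii.gz")
```

Demons registration assumes the two inputs share an intensity scale, which is exactly why the intensity modification step precedes it here.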

Prevalence of Extended-Spectrum β-Lactamase-Producing Clinical Isolates of Escherichia coli in a University Hospital, Korea (국내 대학병원에서 분리된 Escherichia coli의 Extended-spectrum β-Lactamase (ESBL) 현황)

  • Lee Kyenam;Kim Woo-Joo;Lee Yeonhee
    • Korean Journal of Microbiology
    • /
    • v.40 no.4
    • /
    • pp.295-301
    • /
    • 2004
  • Recently, the rapid increase and global spread of extended-spectrum β-lactamase-producing clinical isolates has become a serious problem. The incidence of extended-spectrum β-lactamase-producing clinical isolates of Escherichia coli in Korea and their susceptibility to antimicrobial agents were investigated. A total of 233 E. coli isolates were obtained from the urine of hospitalized patients at Guro Hospital, Korea University, in 2001. One hundred and eighty-four isolates (78.9%) were resistant to ampicillin, 80 isolates (34.3%) to cephalothin, 93 isolates (39.9%) to gentamicin, and 64 isolates (27.5%) to norfloxacin. Among the 233 isolates, 17 (7.3%) were positive by the double disk synergy test. When minimal inhibitory concentrations were assayed with six additional antimicrobial agents, 13 isolates (76.5%) were multi-drug resistant, showing resistance to antimicrobial agents from at least four different classes. The extended-spectrum β-lactamases were characterized by isoelectric focusing gel electrophoresis and DNA sequencing: TEM-1 in 5 isolates, TEM-15 in 1 isolate, TEM-20 in 1 isolate, TEM-52 in 4 isolates, TEM-1 and AmpC in 2 isolates, TEM-1 and OXA-30 in 1 isolate, TEM-1 and OXA-33 in 1 isolate, and TEM-1, CTX-M-3, and AmpC in 1 isolate; SHV was not detected. Antimicrobial resistance genes were transferred to an animal isolate of E. coli (CCARM No. 1203) by the filter mating method. The extended-spectrum β-lactamase producers in this study showed little correlation to one another, as determined by random amplified polymorphic DNA analysis and pulsed-field gel electrophoresis. This contradicts the general hypothesis that extended-spectrum β-lactamase producers in one hospital arise from clonal spread.
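
The multi-drug resistance criterion used above (resistance to agents from at least four different antimicrobial classes) can be expressed as a small tally. The sketch below uses a hypothetical class map and example isolate, not the paper's actual panel.

```python
# Count an isolate as multi-drug resistant if its resistant agents span
# at least `min_classes` antimicrobial classes. The class map is an
# illustrative subset, not the study's full agent panel.

ANTIMICROBIAL_CLASS = {
    "ampicillin": "penicillins",
    "cephalothin": "cephalosporins",
    "gentamicin": "aminoglycosides",
    "norfloxacin": "fluoroquinolones",
    "tetracycline": "tetracyclines",
    "chloramphenicol": "phenicols",
}

def is_multidrug_resistant(resistant_agents, min_classes=4):
    """True if the resistant agents span at least `min_classes` classes."""
    classes = {ANTIMICROBIAL_CLASS[a] for a in resistant_agents}
    return len(classes) >= min_classes

# hypothetical isolate resistant to agents from four classes
isolate = ["ampicillin", "cephalothin", "gentamicin", "norfloxacin"]
print(is_multidrug_resistant(isolate))  # True
```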

The Effective Approach for Non-Point Source Management (효과적인 비점오염원관리를 위한 접근 방향)

  • Park, Jae Hong;Ryu, Jichul;Shin, Dong Seok;Lee, Jae Kwan
    • Journal of Wetlands Research
    • /
    • v.21 no.2
    • /
    • pp.140-146
    • /
    • 2019
  • To manage non-point sources, the paradigm of the system should change so that non-point source management is systematized from the beginning of land use and development. The method of national subsidy support and the operation plan for non-point source management areas need to change. To increase the effectiveness of non-point source reduction projects, a minimum support ratio should be provided, with additional support granted according to each local government's performance. A new system should be established to evaluate the performance of non-point source reduction projects and to monitor their operational effectiveness. Related rules should be established that lead local governments to administer responsibly, so that they carry out non-point source reduction projects faithfully, achieve the planned results, and sustain maintenance. Alternative solutions are needed for problems such as the use of a 100 µm filter in automatic sampling and analysis, the timely collection and analysis of water samples during rainfall, and the effective operation and management of the non-point source monitoring network. As alternatives, improving the performance of sampling and analysis equipment and operating base stations should be considered. In addition, countermeasures are needed if the pollutant reductions achieved by nationally subsidized non-point source reduction facilities are to be counted as development load under the TMDLs. As an alternative, incentive-based support for part of the maintenance cost of non-point source reduction facilities, depending on the amount of pollutants reduced, could be considered.

Feasibility of Deep Learning Algorithms for Binary Classification Problems (이진 분류문제에서의 딥러닝 알고리즘의 활용 가능성 평가)

  • Kim, Kitae;Lee, Bomi;Kim, Jong Woo
    • Journal of Intelligence and Information Systems
    • /
    • v.23 no.1
    • /
    • pp.95-108
    • /
    • 2017
  • Recently, AlphaGo, the Baduk (Go) artificial intelligence program by Google DeepMind, won a landmark victory against Lee Sedol. Many people thought a machine could not beat a human at Go because, unlike chess, the number of possible move sequences exceeds the number of atoms in the universe, but the result was the opposite of what was predicted. After the match, artificial intelligence came into focus as a core technology of the fourth industrial revolution and attracted attention from various application domains. In particular, deep learning drew attention as the core artificial intelligence technique behind the AlphaGo algorithm. Deep learning is already being applied to many problems and performs especially well in image recognition. It also performs well on high-dimensional data such as voice, images, and natural language, where existing machine learning techniques struggled to achieve good performance. In contrast, it is difficult to find deep learning research on traditional business data and structured data analysis. In this study, we investigated whether the deep learning techniques studied so far can be used not only for recognizing high-dimensional data but also for binary classification problems in traditional business data analysis, such as customer churn analysis, marketing response prediction, and default prediction, and we compared their performance with that of traditional artificial neural network models. The experimental data are the telemarketing response data of a bank in Portugal, with input variables such as age, occupation, loan status, and the number of previous telemarketing contacts, and a binary target variable recording whether the customer opened an account. To evaluate the applicability of deep learning algorithms to binary classification, we compared models using the CNN and LSTM algorithms and the dropout technique, which are widely used in deep learning, against MLP models, a traditional artificial neural network. Since not all network design alternatives can be tested, the experiment used restricted settings for the number of hidden layers, the number of neurons per hidden layer, the number of output filters, and the application of dropout. The F1 score, rather than overall accuracy, was used to show how well the models classify the class of interest. The methods for applying each deep learning technique were as follows. The CNN algorithm recognizes features by reading values adjacent to a given value; however, business data fields are usually independent, so the distance between fields carries no meaning. We therefore set the CNN filter size to the number of fields, so that the whole record's characteristics are learned at once, and added a hidden layer to make decisions based on the extracted features. For the model with two LSTM layers, the second layer reads the input in the reverse direction from the first, to reduce the influence of each field's position. For the dropout technique, neurons were dropped with a probability of 0.5 in each hidden layer. The experimental results show that the model with the highest F1 score was the CNN model with dropout, followed by the MLP model with two hidden layers and dropout. The experiments yielded several findings. First, models using dropout make slightly more conservative predictions than those without it and generally classify better. Second, CNN models classify better than MLP models; this is interesting because CNN performed well on binary classification problems, to which it has rarely been applied, as well as in fields where its effectiveness is already proven. Third, the LSTM algorithm appears unsuitable for binary classification problems because its training time is too long relative to the performance improvement. From these results, we can confirm that some deep learning algorithms can be applied to business binary classification problems.
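
A minimal Keras sketch of the best-performing configuration as described (a 1-D convolution whose filter width spans all fields, an added hidden layer, and dropout at 0.5); the field count, layer widths, training settings, and synthetic data are assumptions, not the paper's exact setup.

```python
import numpy as np
from sklearn.metrics import f1_score
from tensorflow import keras
from tensorflow.keras import layers

n_fields = 16  # hypothetical field count (age, occupation, loan status, ...)

model = keras.Sequential([
    layers.Input(shape=(n_fields, 1)),
    # filter width = number of fields: one pass reads the whole record,
    # as the abstract describes
    layers.Conv1D(filters=32, kernel_size=n_fields, activation="relu"),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),    # added hidden layer for decisions
    layers.Dropout(0.5),                    # neurons dropped with p = 0.5
    layers.Dense(1, activation="sigmoid"),  # opens an account: yes/no
])
model.compile(optimizer="adam", loss="binary_crossentropy")

# synthetic stand-in for the Portuguese bank telemarketing data
rng = np.random.default_rng(0)
X = rng.random((512, n_fields, 1), dtype=np.float32)
y = rng.integers(0, 2, size=512)

model.fit(X, y, epochs=2, batch_size=32, verbose=0)
pred = (model.predict(X, verbose=0).ravel() > 0.5).astype(int)
print(f"F1 score: {f1_score(y, pred):.3f}")  # the paper's evaluation metric
```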