• Title/Summary/Keyword: Process Filtering


The Flow-rate Measurements in a Multi-phase Flow Pipeline by Using a Clamp-on Sealed Radioisotope Cross Correlation Flowmeter (투과 감마선 계측신호의 Cross correlation 기법 적용에 의한 다중상 유체의 유량측정)

  • Kim, Jin-Seop;Kim, Jong-Bum;Kim, Jae-Ho;Lee, Na-Young;Jung, Sung-Hee
    • Journal of Radiation Protection and Research / v.33 no.1 / pp.13-20 / 2008
  • The flow rates in a multi-phase flow pipeline were evaluated quantitatively by means of clamp-on sealed radioisotopes and a cross-correlation signal-processing technique. The flow rates were calculated by determining the transit time between two sealed gamma sources using a cross-correlation function following FFT filtering, and were then corrected with the vapor fraction in the pipeline, which was measured by the $\gamma$-ray attenuation method. The pipeline model was manufactured from acrylic resin (ID = 8 cm, L = 3.5 m, t = 10 mm), and the multi-phase flow patterns were realized by injection of compressed $N_2$ gas. Two sealed gamma sources of $^{137}$Cs (E = 0.662 MeV, $\Gamma$ factor = 0.326 $R{\cdot}h^{-1}{\cdot}m^2{\cdot}Ci^{-1}$) of 20 mCi and 17 mCi, and radiation detectors consisting of 2"${\times}$2" NaI(Tl) scintillation counters (Eberline, SP-3), were used for this study. Under the given conditions (distance between the two sources: 4D (D: inner diameter), N/S ratio: 0.12~0.15, sampling time $\Delta t$: 4 msec), the measured flow rates showed a maximum relative error of 1.7% compared to the real ones after the vapor-content corrections (6.1%~9.2%). A subsequent experiment showed that the closer the distance between the two sealed sources, the more precise the measured flow rates. Provided that additional studies on the selection of radioisotopes and their activity, and on the optimization of the experimental geometry, are carried out, it is anticipated that this radioisotope application for flow rate measurements can serve as an important tool for monitoring multi-phase facilities in the petrochemical and refinery industries and can contribute economically to their maintenance and control.
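
The core of the method above is estimating the transit time of density fluctuations between the two gamma detectors from the peak of their cross-correlation function. The following is a minimal illustrative sketch (not the authors' code), assuming two already-filtered detector count-rate signals sampled at 4 ms, a source spacing of 4D with D = 8 cm, and a hypothetical vapor fraction for a simple (1 − vapor fraction) correction used here only for illustration.

```python
import numpy as np

def transit_time(upstream, downstream, dt):
    """Estimate the transit time between two detector signals via cross-correlation.

    upstream, downstream: 1-D count-rate signals of equal length, already filtered.
    dt: sampling interval in seconds (e.g. 0.004 s as in the abstract).
    """
    u = upstream - upstream.mean()
    d = downstream - downstream.mean()
    corr = np.correlate(d, u, mode="full")      # full cross-correlation
    lags = np.arange(-len(u) + 1, len(u))       # lag of the peak = transit time in samples
    return lags[np.argmax(corr)] * dt

# Hypothetical usage with simulated signals: the downstream detector sees the
# same density fluctuations delayed by the transit time, plus noise.
rng = np.random.default_rng(0)
dt, delay_samples = 0.004, 50                    # 4 ms sampling, 0.2 s true delay
x = rng.normal(size=5000)
up = x + 0.1 * rng.normal(size=5000)
down = np.roll(x, delay_samples) + 0.1 * rng.normal(size=5000)

tau = transit_time(up, down, dt)                 # ~0.2 s
velocity = (4 * 0.08) / tau                      # source spacing 4D with D = 0.08 m
area = np.pi * 0.08**2 / 4
q_apparent = velocity * area                     # m^3/s, before vapor correction
q_liquid = q_apparent * (1 - 0.07)               # illustrative 7% vapor fraction
print(tau, velocity, q_liquid)
```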

Enhancement of Inter-Image Statistical Correlation for Accurate Multi-Sensor Image Registration (정밀한 다중센서 영상정합을 위한 통계적 상관성의 증대기법)

  • Kim, Kyoung-Soo;Lee, Jin-Hak;Ra, Jong-Beom
    • Journal of the Institute of Electronics Engineers of Korea SP / v.42 no.4 s.304 / pp.1-12 / 2005
  • Image registration is a process to establish the spatial correspondence between images of the same scene that are acquired at different view points, at different times, or by different sensors. This paper presents a new algorithm for robust registration of images acquired by multiple sensors having different modalities, here EO (electro-optic) and IR (infrared) sensors. Two approaches, feature-based and intensity-based, are generally possible for image registration. In the former, selection of accurate common features is crucial for high performance, but features in the EO image are often not the same as those in the IR image; hence, this approach is inadequate for registering EO/IR images. In the latter, normalized mutual information (NMI) has been widely used as a similarity measure due to its high accuracy and robustness, and NMI-based image registration methods assume that the statistical correlation between the two images is global. Unfortunately, EO and IR images often do not satisfy this assumption, so registration accuracy is not high enough for some applications. In this paper, we propose a two-stage NMI-based registration method based on the analysis of statistical correlation between EO/IR images. In the first stage, for robust registration, we propose two preprocessing schemes: extraction of statistically correlated regions (ESCR) and enhancement of statistical correlation by filtering (ESCF). For each image, ESCR automatically extracts the regions that are highly correlated with the corresponding regions in the other image, and ESCF adaptively filters each image to enhance the statistical correlation between them. In the second stage, the two output images are registered using the NMI-based algorithm. The proposed method provides promising results for various EO/IR sensor image pairs in terms of accuracy, robustness, and speed.
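
As a reference point for the similarity measure used above, a common way to compute NMI is from a joint intensity histogram, NMI(A, B) = (H(A) + H(B)) / H(A, B). The sketch below is a generic illustration on synthetic images, not the paper's two-stage method (which would first apply its ESCR/ESCF preprocessing); the bin count and translation search range are arbitrary assumptions.

```python
import numpy as np

def normalized_mutual_information(img_a, img_b, bins=64):
    """NMI = (H(A) + H(B)) / H(A, B), computed from a joint intensity histogram.

    img_a, img_b: 2-D arrays of identical shape (e.g. an EO patch and a warped IR patch).
    """
    hist_2d, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    p_ab = hist_2d / hist_2d.sum()
    p_a = p_ab.sum(axis=1)
    p_b = p_ab.sum(axis=0)

    def entropy(p):
        p = p[p > 0]
        return -np.sum(p * np.log(p))

    return (entropy(p_a) + entropy(p_b)) / entropy(p_ab.ravel())

# Hypothetical usage: evaluate NMI over candidate translations and keep the best.
rng = np.random.default_rng(1)
eo = rng.random((128, 128))
ir = np.roll(eo, (3, -2), axis=(0, 1)) + 0.05 * rng.random((128, 128))

best = max(
    ((dy, dx, normalized_mutual_information(eo, np.roll(ir, (-dy, -dx), axis=(0, 1))))
     for dy in range(-5, 6) for dx in range(-5, 6)),
    key=lambda t: t[2],
)
print(best)  # expected to peak near the true offset (3, -2)
```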

Development and Analysis of COMS AMV Target Tracking Algorithm using Gaussian Cluster Analysis (가우시안 군집분석을 이용한 천리안 위성의 대기운동벡터 표적추적 알고리듬 개발 및 분석)

  • Oh, Yurim;Kim, Jae Hwan;Park, Hyungmin;Baek, Kanghyun
    • Korean Journal of Remote Sensing / v.31 no.6 / pp.531-548 / 2015
  • Atmospheric Motion Vectors (AMVs) derived from satellite images have shown a Slow Speed Bias (SSB) in comparison with rawinsonde observations. The causes of SSB originate from tracking, selection, and height assignment errors, the last of which is known to be the leading error. However, recent works have shown that height assignment error cannot fully explain the cause of SSB. This paper attempts a new approach to examine the possibility of reducing the SSB of COMS AMVs by using a new target tracking algorithm. Tracking error can be caused by the averaging of various wind patterns within a target and by changes of cloud shape during the search process over time. To overcome this problem, a Gaussian Mixture Model (GMM) is adopted to extract the coldest cluster as the target, since the shape of such a target is less subject to deformation. Then, an image filtering scheme is applied that weights the selected coldest pixels more heavily than the others, which makes the target easier to track. When the AMVs derived from our algorithm with the sum-of-squared-distance method and the current COMS AMVs are compared with rawinsonde, our products show a noticeable improvement over the COMS products, with an increase in mean wind speed of $2.7\;ms^{-1}$ and an SSB reduction of 29%. However, the bias statistics show a negative impact for mid/low levels with our algorithm, and the number of vectors is reduced by 40% relative to COMS. Therefore, further study is required to improve the accuracy of mid/low-level winds and to increase the number of AMV vectors.
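
To illustrate the two ingredients described above (GMM-based extraction of the coldest cluster, then weighted template matching), here is a minimal sketch on synthetic brightness temperatures using scikit-learn's GaussianMixture. It is not the operational COMS code; the number of mixture components, the weighting factors, and the box sizes are hypothetical choices.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def coldest_cluster_mask(bt, n_components=3, random_state=0):
    """Fit a 1-D Gaussian mixture to brightness temperatures in a target box and
    return a boolean mask of the coldest cluster (the assumed tracking target)."""
    gmm = GaussianMixture(n_components=n_components, random_state=random_state)
    labels = gmm.fit_predict(bt.reshape(-1, 1))
    coldest = np.argmin(gmm.means_.ravel())          # component with the lowest mean BT
    return (labels == coldest).reshape(bt.shape)

def weighted_ssd(target, candidate, weight):
    """Sum of squared differences, weighting the coldest pixels more heavily."""
    return np.sum(weight * (target - candidate) ** 2)

# Hypothetical usage: pick the search-box offset that minimizes the weighted SSD.
rng = np.random.default_rng(2)
target = 230 + 20 * rng.random((16, 16))             # brightness temperatures (K)
mask = coldest_cluster_mask(target)
weight = np.where(mask, 1.0, 0.2)                    # illustrative weighting only
search = 230 + 20 * rng.random((48, 48))
search[10:26, 12:28] = target                        # embed the target at offset (10, 12)

offsets = [(i, j) for i in range(48 - 16 + 1) for j in range(48 - 16 + 1)]
best = min(offsets,
           key=lambda o: weighted_ssd(target, search[o[0]:o[0]+16, o[1]:o[1]+16], weight))
print(best)  # expected (10, 12)
```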

Esterification of Indonesia Tropical Crop Oil by Amberlyst-15 and Property Analysis of Biodiesel (인도네시아 열대작물 오일의 Amberlyst-15 촉매 에스테르화 반응 및 바이오디젤 물성 분석)

  • Lee, Kyoung-Ho;Lim, Riky;Lee, Joon-Pyo;Lee, Jin-Suk;Kim, Deog-Keun
    • Journal of the Korean Applied Science and Technology / v.36 no.1 / pp.324-332 / 2019
  • Most countries, including Korea and Indonesia, have strong policies for implementing biofuels such as biodiesel. A shortage of oil feedstock is the main barrier to increasing the supply of biodiesel fuel. In this study, in order to improve the stability of feedstock supply and lower the biodiesel production cost, the feasibility of biodiesel production using two types of Indonesian tropical crop oils, pressed at different harvesting times, was investigated. R. trisperma oils, a highly productive non-edible feedstock, were investigated for biodiesel production by esterification and transesterification because of their high impurity and free fatty acid contents. The oils provided from Indonesia required filtering and water removal to increase the efficiency of the esterification and transesterification reactions. The esterification used a heterogeneous acid catalyst, Amberlyst-15. Before the reaction, the acid values of the two oils were 41 and 17 mg KOH/g, respectively. After the pre-esterification reaction, the acid values were 3.7 and 1.8 mg KOH/g, respectively, corresponding to conversions of about 90%, and the free fatty acid content was reduced to below 2%. Afterwards, transesterification was performed using KOH as the base catalyst. The prepared biodiesel showed a FAME content of about 93%, and the total glycerol content was 0.43%. It did not meet the quality specification (FAME 96.5%, total glycerol 0.24%), since the tested oils were found to contain an uncommon fatty acid generally not found in vegetable oils, $\alpha$-eleostearic acid, at contents of 10.7~33.4%. Further research on reaction optimization and product purification is therefore required to meet the fuel quality standards. If biodiesel production technology using such under-utilized non-edible feedstock oils is successfully developed, a stable feedstock supply for biodiesel production may be possible in the future.
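
As a quick check of the reported conversions, assuming conversion is taken as the relative drop in acid value during pre-esterification (a common convention, not stated explicitly in the abstract), the reported numbers work out as follows.

```python
def esterification_conversion(av_initial, av_final):
    """Conversion of free fatty acids estimated from the drop in acid value (mg KOH/g)."""
    return (av_initial - av_final) / av_initial * 100

# Acid values reported in the abstract for the two oils, before and after pre-esterification.
for av0, av1 in [(41, 3.7), (17, 1.8)]:
    print(f"{av0} -> {av1} mg KOH/g: conversion ~ {esterification_conversion(av0, av1):.0f}%")
# ~91% and ~89%, consistent with the reported "about 90%"
```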

Application of diversity of recommender system according to user preference change (사용자 선호도 변화에 따른 추천시스템의 다양성 적용)

  • Na, Hyeyeon;Nam, Kihwan
    • Journal of Intelligence and Information Systems / v.26 no.4 / pp.67-86 / 2020
  • Recommender systems have an increasingly large influence on users and businesses. Recently, the importance of e-commerce has grown rapidly during the world-wide COVID-19 pandemic, and recommender systems sit at the center of e-commerce. Top-ranked e-commerce managers have mentioned that recommender systems have a major influence on customers' purchases; about 50% of Netflix and Amazon sales are attributed to their recommender systems. Most algorithms have focused on improving the accuracy of recommender systems regardless of novelty, diversity, serendipity, etc. However, recommender systems that offer only high accuracy cannot secure long-term business profit because they generate sales polarization. In addition, customers do not enjoy shopping with an accuracy-only recommender system because their preferences change constantly. Therefore, recommender systems reflecting various values need to be developed for high user satisfaction. Reranking is the most useful methodology for realizing diversity in a recommender system. In this paper, diversity is introduced by building similarity with users who have different preferences, using an algorithm based on the categories of each user's purchased items. This is distinguished from past research, which changed the recommendation algorithm without considering each user's diversity preference level. We tried to discover each user's diversity preference level and observed how the effect differed according to that level. In addition, a graph-based recommender system, rather than collaborative filtering, was used to express diversity through the user network. Amazon Grocery and Gourmet Food data were used because low-involvement products, such as habitual purchases, foods, and low-priced goods, have a high probability of revealing customer diversity. First, a bipartite graph of users and items is constructed to build the graph-based recommender system; unipartite graphs of users and of items are also established to express the diversity of the recommender system. The edge weights of each unipartite graph play a crucial role and are based on the Jaccard distance of item categories. Two important results can be observed from the user unipartite network: first, each user's diversity preference level can be read from the network, and second, dissimilar users can be discovered in it. Through this process, high diversity is achieved with only a small accuracy loss, and optimization for higher accuracy is possible by controlling the diversity ratio. This paper makes three important theoretical contributions. First, it expands recommender system research toward user satisfaction with various values. Second, a new graph-based recommender system is developed. Third, an evaluation indicator for diversity is constructed. Practically, recommender systems are useful for corporate profit, and this paper contributes closely to business. Above all, long-term business profit can be improved by using a recommender system with diversity, and such a system can provide the right service according to each user's diversity level. Lastly, based on the results, companies selling low-involvement products can benefit greatly.
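
The user unipartite graph described above is weighted by the Jaccard distance between users' purchased-item category sets. Below is a minimal sketch of that construction with made-up purchase histories; how the weights feed into the diversity-oriented reranking is the paper's contribution and is not reproduced here.

```python
import itertools
import networkx as nx

def jaccard_distance(a, b):
    """1 - |A ∩ B| / |A ∪ B| over two sets of item categories."""
    return 1 - len(a & b) / len(a | b) if (a | b) else 0.0

# Hypothetical purchase histories: user -> set of item categories bought.
user_categories = {
    "u1": {"coffee", "tea", "snacks"},
    "u2": {"coffee", "snacks"},
    "u3": {"spices", "baking", "tea"},
}

# User unipartite graph weighted by category Jaccard distance: large-weight
# neighbours are dissimilar users, which a diversity-oriented reranking can exploit.
G = nx.Graph()
for u, v in itertools.combinations(user_categories, 2):
    G.add_edge(u, v, weight=jaccard_distance(user_categories[u], user_categories[v]))

for u, v, d in G.edges(data=True):
    print(u, v, round(d["weight"], 2))
```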

Geochemical Equilibria and Kinetics of the Formation of Brown-Colored Suspended/Precipitated Matter in Groundwater: Suggestion to Proper Pumping and Turbidity Treatment Methods (지하수내 갈색 부유/침전 물질의 생성 반응에 관한 평형 및 반응속도론적 연구: 적정 양수 기법 및 탁도 제거 방안에 대한 제안)

  • 채기탁;윤성택;염승준;김남진;민중혁
    • Journal of the Korean Society of Groundwater Environment / v.7 no.3 / pp.103-115 / 2000
  • The formation of brown-colored precipitates is one of the serious problems frequently encountered in the development and supply of groundwater in Korea, because the water then exceeds the drinking water standard in terms of color, taste, turbidity, and dissolved iron concentration, and often causes scaling problems within the water supply system. In groundwaters from the Pajoo area, brown precipitates typically form within a few hours after pumping-out. In this paper we examine the formation of the brown precipitates using equilibrium thermodynamic and kinetic approaches, in order to understand the origin and geochemical pathway of the generation of turbidity in groundwater. The results of this study are used to suggest not only a proper pumping technique to minimize the formation of precipitates but also an optimal design of water treatment methods to improve the water quality. The bed-rock groundwater in the Pajoo area belongs to the Ca-$HCO_3$ type, which evolved through water/rock (gneiss) interaction. Based on SEM-EDS and XRD analyses, the precipitates are identified as amorphous, Fe-bearing oxides or hydroxides. Based on multi-step filtration with pore sizes of 6, 4, 1, 0.45 and 0.2 $\mu$m, the precipitates mostly fall in the colloidal size range (0.45 to 1 $\mu$m) but are concentrated (about 81%) in the range of 1 to 6 $\mu$m in terms of mass (weight) distribution. The large amounts of dissolved iron possibly originated from the dissolution of clinochlore in cataclasite, which contains high amounts of Fe (up to 3 wt.%). The calculation of saturation indices (using the computer code PHREEQC), as well as the examination of pH-Eh stability relations, also indicates that the final precipitates are Fe-oxy-hydroxides formed by the change of water chemistry (mainly oxidation) due to exposure to oxygen during the pumping-out of Fe(II)-bearing, reduced groundwater. After pumping-out, the groundwater shows progressive decreases of pH, DO and alkalinity with elapsed time, whereas turbidity increases and then decreases with time. The decrease of dissolved Fe concentration as a function of elapsed time after pumping-out is expressed by the regression equation Fe(II) = 10.1 exp(-0.0009t). The oxidation reaction due to the influx of free oxygen during the pumping and storage of groundwater results in the formation of the brown precipitates, which depends on time, $P_{O_2}$ and pH. In order to obtain drinkable water quality, therefore, the precipitates should be removed by filtering after stepwise storage and aeration in tanks with sufficient volume for a sufficient time. The particle size distribution data also suggest that step-wise filtration would be cost-effective. To minimize scaling within wells, continued (if possible) pumping within the optimum pumping rate is recommended, because this technique will be most effective for minimizing the mixing between deep Fe(II)-rich water and shallow $O_2$-rich water. The simultaneous pumping of shallow $O_2$-rich water in different wells is also recommended.
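
The kinetic result above is a first-order decay fit, Fe(II) = 10.1 exp(-0.0009 t). A small sketch of what that rate constant implies is given below; the time units of t and the concentration units are those used in the paper and are not restated in the abstract, so they are left generic here.

```python
import numpy as np

def dissolved_fe(t, fe0=10.1, k=0.0009):
    """Regression from the abstract: Fe(II) = 10.1 * exp(-0.0009 t).
    t is elapsed time after pumping-out in the paper's time units; fe0 is in the
    paper's concentration units (neither is restated in the abstract)."""
    return fe0 * np.exp(-k * t)

# Characteristic times implied by the fitted rate constant alone:
half_life = np.log(2) / 0.0009      # time for Fe(II) to halve, ~770 time units
t90 = np.log(10) / 0.0009           # time for a 90% decrease, ~2560 time units
print(half_life, t90)
print(dissolved_fe(np.array([0, 500, 1000, 2000])))
```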


A Proposal of a Keyword Extraction System for Detecting Social Issues (사회문제 해결형 기술수요 발굴을 위한 키워드 추출 시스템 제안)

  • Jeong, Dami;Kim, Jaeseok;Kim, Gi-Nam;Heo, Jong-Uk;On, Byung-Won;Kang, Mijung
    • Journal of Intelligence and Information Systems / v.19 no.3 / pp.1-23 / 2013
  • To discover significant social issues, such as unemployment, economic crisis, and social welfare, that urgently need to be solved in modern society, researchers in the existing approach usually collect opinions from professional experts and scholars through either online or offline surveys. However, such a method is not always effective. Due to the expense involved, a large number of survey replies is seldom gathered, and in some cases it is also hard to find professionals dealing with specific social issues. Thus, the sample set is often small and may carry some bias. Furthermore, regarding a given social issue, several experts may draw totally different conclusions because each expert has a subjective point of view and a different background. In this case, it is considerably hard to figure out what the current social issues are and which of them are really important. To surmount the shortcomings of the current approach, in this paper we develop a prototype system that semi-automatically detects social issue keywords representing social issues and problems from about 1.3 million news articles issued by about 10 major domestic presses in Korea from June 2009 until July 2012. Our proposed system consists of (1) collecting and extracting texts from the collected news articles, (2) identifying only news articles related to social issues, (3) analyzing the lexical items of Korean sentences, (4) finding a set of topics regarding social keywords over time based on probabilistic topic modeling, (5) matching relevant paragraphs to a given topic, and (6) visualizing social keywords for easy understanding. In particular, we propose a novel matching algorithm relying on generative models. The goal of our proposed matching algorithm is to best match paragraphs to each topic. Technically, using a topic model such as Latent Dirichlet Allocation (LDA), we can obtain a set of topics, each of which has relevant terms and their probability values. In our problem, given a set of text documents (e.g., news articles), LDA produces a set of topic clusters, and each topic cluster is then labeled by human annotators, where each topic label stands for a social keyword. For example, suppose there is a topic (e.g., Topic1 = {(unemployment, 0.4), (layoff, 0.3), (business, 0.3)}) and a human annotator labels Topic1 as "Unemployment Problem". In this example, it is non-trivial to understand what happened to the unemployment problem in our society by looking at the social keyword alone; we have no idea of the detailed events that occurred. To tackle this matter, we develop a matching algorithm that computes the probability of a paragraph given a topic, relying on (i) the topic terms and (ii) their probability values. For instance, given a set of text documents, we segment each document into paragraphs, while, using LDA, we extract a set of topics from the same documents. Based on our matching process, each paragraph is assigned to the topic it best matches, so that each topic ends up with several best-matched paragraphs. For example, given a topic (e.g., "Unemployment Problem") and its best-matched paragraph (e.g., "Up to 300 workers lost their jobs in XXX company in Seoul"), we can grasp the detailed information behind the social keyword, such as "300 workers", "unemployment", "XXX company", and "Seoul". In addition, our system visualizes social keywords over time. Therefore, through our matching process and keyword visualization, most researchers will be able to detect social issues easily and quickly. Through this prototype system, we have detected various social issues appearing in our society and have also shown the effectiveness of our proposed methods in our experimental results. Note that our proof-of-concept system is available at http://dslab.snu.ac.kr/demo.html.
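
To make the paragraph-to-topic matching step concrete, here is a toy sketch in the spirit of the abstract's example. The topic is the abstract's Topic1 padded with extra hypothetical low-probability terms (a real LDA topic has a long tail of such terms), and the score is a simple smoothed average log-probability of the paragraph's words under the topic's word distribution, standing in for the paper's generative matching algorithm rather than reproducing it.

```python
import math

# Topic1 from the abstract, padded with hypothetical low-probability terms
# (a real LDA topic distribution covers the whole vocabulary).
topic1 = {
    "unemployment": 0.30, "layoff": 0.22, "business": 0.18,
    "workers": 0.08, "jobs": 0.07, "company": 0.05,
}

def paragraph_score(paragraph, topic, epsilon=1e-6):
    """Average log-probability of the paragraph's words under one topic's
    word distribution; words unseen in the topic get a small smoothing probability."""
    words = paragraph.lower().split()
    return sum(math.log(topic.get(w, epsilon)) for w in words) / len(words)

paragraphs = [
    "Up to 300 workers lost their jobs in XXX company in Seoul",
    "The national soccer team won the qualifying match yesterday",
]
best = max(paragraphs, key=lambda p: paragraph_score(p, topic1))
print(best)  # the unemployment-related paragraph matches Topic1 best
```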