• Title/Summary/Keyword: weighted possibility mean

Search Results: 14

SOLVING BI-OBJECTIVE TRANSPORTATION PROBLEM UNDER NEUTROSOPHIC ENVIRONMENT

  • S. SANDHIYA;ANURADHA DHANAPAL
    • Journal of applied mathematics & informatics
    • /
    • v.42 no.4
    • /
    • pp.831-854
    • /
    • 2024
  • The transportation problem (TP) is one of the earliest and most significant applications of the linear programming problem (LPP). It is a specific type of LPP that deals mainly with logistics and is connected to day-to-day distribution activities. Nowadays, decision makers (DMs) aim to reduce both the transportation cost and the transportation time of the distribution system, which motivates the bi-objective transportation problem (BOTP) studied in this research. In real life, the transportation parameters are inherently uncertain due to insufficient data, imprecise judgement, environmental circumstances, etc. In view of this, the neutrosophic bi-objective transportation problem (NBOTP) is introduced in this paper. The problem is formulated by assigning single-valued trapezoidal neutrosophic numbers (SVTrNNs) to the coefficients of the objective functions and to the supply and demand constraints. The DM's aim is to determine the optimal compromise solution of the NBOTP. An extended weighted possibility mean for single-valued trapezoidal neutrosophic numbers, based on [40], is proposed to transform the single-valued trapezoidal neutrosophic BOTP (SVTrNBOTP) into its deterministic BOTP. The transformed deterministic BOTP is then solved using the dripping method [10]. Numerical examples illustrate the applicability, effectiveness, and usefulness of the solution approach. A sensitivity analysis (SA) determines the sensitivity ranges for the objective functions of the deterministic BOTP. Finally, the optimal compromise solution obtained from the proposed approach gives a better result than existing approaches, and directions for future research are discussed.
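
For reference, the classical (Carlsson-Fullér) possibility mean of a trapezoidal fuzzy number, on which extended weighted possibility means for SVTrNNs build, can be sketched as below. This is a generic sketch, not the paper's exact operator; the weight `w`, standing in for a truth/indeterminacy/falsity degree, is purely illustrative:

```python
def possibilistic_mean(a1, a2, a3, a4, w=1.0):
    """Weighted possibility mean of a trapezoidal fuzzy number
    (a1, a2, a3, a4) with core [a2, a3], in the classical
    Carlsson-Fuller sense: the average of the lower possibilistic
    mean (a1 + 2*a2)/3 and the upper one (2*a3 + a4)/3, scaled by
    an illustrative weight w."""
    lower = (a1 + 2 * a2) / 3  # integral of 2*gamma * left alpha-cut endpoint
    upper = (2 * a3 + a4) / 3  # integral of 2*gamma * right alpha-cut endpoint
    return w * (lower + upper) / 2

# A crisp transport cost from a trapezoidal estimate (4, 6, 8, 10):
crisp_cost = possibilistic_mean(4, 6, 8, 10)
```

Applying such a mean to each fuzzy coefficient is what turns a fuzzy BOTP into a crisp (deterministic) one that standard multi-objective methods can solve.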

Pattern Analysis of Sea Surface Temperature Distribution in the Southeast Sea of Korea Using a Weighted Mean Center (가중공간중심을 활용한 한국 남동해역의 표층수온 분포 패턴 분석)

  • KIM, Bum-Kyu;YOON, Hong-Joo;KIM, Tae-Hoon;CHOI, Hyun-Woo
    • Journal of the Korean Association of Geographic Information Studies
    • /
    • v.23 no.3
    • /
    • pp.263-274
    • /
    • 2020
  • In the Southeast Sea of Korea, a cold water mass forms intensively every summer, causing frequent abnormal sea conditions. To analyze spatial changes in the sea surface temperature (SST) distribution in this area, ocean buoy data observed at Gori and Jeongja and reanalyzed SST data from GHRSST Level 4 were used for June to September 2018. The buoy data were used to analyze time-series water temperature changes at the two stations, and the GHRSST data were used to calculate the daily SST variance and the weighted mean center (WMC) across the study area. When the buoy water temperature dropped, the variance of SST in the study area tended to increase, but this did not appear consistently over the entire period, because GHRSST is a reanalysis product that does not reflect sensitive changes in coastal water temperature. There is thus a limit to grasping small-scale local water temperature changes along the coast, or to detecting the location and extent of the cold water zone, using only the statistical variance of SST over the whole sea area. When the WMC was instead used to quantitatively determine the spatial location of the cold water mass, the WMC was located northwest of the mean center (MC) of the study area whenever a cold water zone occurred. This means that the location and extent of the cold surface water distribution can be identified quantitatively through the WMC of SST, suggesting the future usability of the WMC for detecting the scale of cold water zones and the extent of their regional spread.
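
The weighted mean center used here is simply the first moment of the SST grid. A minimal sketch, with a hypothetical coldness weighting (e.g. each cell's temperature deficit relative to the regional mean), might look like:

```python
def weighted_mean_center(lons, lats, weights):
    """Weighted mean center (WMC): the weight-averaged coordinate
    of the grid cells. With equal weights it reduces to the
    ordinary mean center (MC)."""
    total = sum(weights)
    lon = sum(w * x for x, w in zip(lons, weights)) / total
    lat = sum(w * y for y, w in zip(lats, weights)) / total
    return lon, lat

# Weighting each cell by how much colder it is than the regional mean
# pulls the WMC toward the cold water zone, e.g.:
# weights = [max(0.0, mean_sst - sst) for sst in cell_ssts]
```

Comparing the WMC against the unweighted MC then gives the displacement direction of the anomaly (here, northwest when a cold water zone occurs).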

Protein-Protein Interaction Prediction using Interaction Significance Matrix (상호작용 중요도 행렬을 이용한 단백질-단백질 상호작용 예측)

  • Jang, Woo-Hyuk;Jung, Suk-Hoon;Jung, Hwie-Sung;Hyun, Bo-Ra;Han, Dong-Soo
    • Journal of KIISE:Software and Applications
    • /
    • v.36 no.10
    • /
    • pp.851-860
    • /
    • 2009
  • Recently, among computational methods for protein-protein interaction prediction, many domain-based methods built on domain-domain relations have been developed. However, the collaboration of multiple domains is commonly ignored because of its computational complexity. In this paper, we implemented a protein interaction prediction system based on the Interaction Significance (IS) matrix, which quantifies the influence of a domain combination pair on a protein interaction. Unlike conventional domain combination methods, the IS matrix contains weighted domain combinations and domain combination pair power, which represent the possibility of domain collaboration and of being the main actor in a protein interaction. Sensitivity of about 63% and specificity of about 94% were measured when using interaction data from DIP and IntAct, with Pfam-A as the domain database. In addition, prediction accuracy gradually increased with the size of the learning set. The prediction software and learning data are currently available on the web site.

Retrieval Biases Analysis on Estimation of GNSS Precipitable Water Vapor by Tropospheric Zenith Hydrostatic Models (GNSS 가강수량 추정시 건조 지연 모델에 의한 복원 정밀도 해석)

  • Nam, JinYong;Song, DongSeob
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography
    • /
    • v.37 no.4
    • /
    • pp.233-242
    • /
    • 2019
  • The ZHD (Zenith Hydrostatic Delay) model is an important parameter in estimating GNSS (Global Navigation Satellite System) PWV (Precipitable Water Vapor), along with the weighted mean temperature. The ZWD (Zenith Wet Delay) tends to accumulate the ZHD error, so biases from the ZHD affect the precision of GNSS PWV. In this paper, we compared the accuracy of GNSS PWV against radiosonde PWV using three ZHD models: Saastamoinen, Hopfield, and Black. We also adopted the KWMT (Korean Weighted Mean Temperature) model and the mean temperature observed by radiosonde in the retrieval processing of GNSS PWV. To this end, one year of GNSS observation data from a total of 5 GNSS permanent stations in Korea was processed to produce PWVs, and the GNSS PWVs were compared with radiosonde PWVs to evaluate biases. The PWV biases using the mean temperature estimated by the KWMT model are smaller than those using the radiosonde mean temperature. We could also confirm that the Saastamoinen ZHD, the model most widely used in GNSS meteorology, is not valid in South Korea, because the possibility of biases due to the latitude or height of a GNSS station cannot be excluded.
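
For context, the Saastamoinen ZHD discussed above is commonly written in the standard form whose denominator corrects for gravity as a function of latitude and station height. The sketch below assumes that textbook form and is not taken from this paper:

```python
import math

def saastamoinen_zhd(pressure_hpa, lat_deg, height_m):
    """Saastamoinen zenith hydrostatic delay in meters, from surface
    pressure (hPa), station latitude (deg), and height (m). The
    denominator models gravity variation with latitude and height,
    the very terms the abstract suggests may bias PWV in Korea."""
    f = (1.0
         - 0.00266 * math.cos(2.0 * math.radians(lat_deg))
         - 2.8e-7 * height_m)
    return 0.0022768 * pressure_hpa / f

# e.g. 1013 hPa at 37.5 N, 50 m gives roughly 2.3 m of delay
zhd = saastamoinen_zhd(1013.0, 37.5, 50.0)
```

Subtracting the ZHD from the total zenith delay leaves the ZWD, which is scaled to PWV by a factor depending on the weighted mean temperature, so any ZHD bias propagates directly into the PWV estimate.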

A Study on the Safety and Usability of University Dormitory Buildings (대학 기숙사 건물의 안전성 및 사용성 평가 연구)

  • Chae, Kyoung-Hun;Heo, Seok-Jae;Hur, Moo-Won
    • Journal of the Korean Institute of Educational Facilities
    • /
    • v.26 no.2
    • /
    • pp.3-10
    • /
    • 2019
  • This study evaluated the vibration serviceability and safety of dormitory buildings after students living on the 12th and 14th floors reported feeling uncomfortable. Acceleration was measured under free vibration and single-person walking. The slab stiffness was then calculated, and serviceability and safety were compared against international standards. The natural frequency of the slab was 6.8 Hz, whereas the natural frequency of a typical slab is around 15 Hz, so the evaluated slab is judged to be a flexible floor structure. The possibility of resonance in daily life is considered high because the natural frequency is low and close to a harmonic component of walking vibration. As a result, the RMS acceleration level is within the tolerance range defined by ISO 10137, but the 13th floor exceeds the reference limit, so a sensitive person could detect the vibration somewhat in the lying position.

Pollution Characteristics of Rainwater at Jeju Island during 2009~2010 (2009~2010년 제주지역 강우의 오염 특성 연구)

  • Kim, Ki-Ju;Bu, Jun-Oh;Kim, Won-Hyung;Lee, Yoon-Sang;Hyeon, Dong-Rim;Kang, Chang-Hee
    • Journal of Korean Society for Atmospheric Environment
    • /
    • v.29 no.6
    • /
    • pp.818-829
    • /
    • 2013
  • Rainwater samples were collected in the Jeju area during 2009~2010, and the major ionic species were analyzed. In the comparison of ion balance, conductivity, and acid fraction for the validation of the analytical data, the correlation coefficients showed a good linear relationship within the range 0.966~0.990. The volume-weighted mean pH and electric conductivity in the Jeju area were 4.9 and 17.8 μS/cm, respectively. The volume-weighted mean concentrations of ionic species in rainwater were in the order Cl⁻ > Na⁺ > nss-SO₄²⁻ > NH₄⁺ > NO₃⁻ > Mg²⁺ > H⁺ > nss-Ca²⁺ > HCOO⁻ > K⁺ > PO₄³⁻ > CH₃COO⁻ > NO₂⁻ > F⁻ > HCO₃⁻ > CH₃SO₃⁻. The ionic strength of rainwater was 0.26±0.21 mM during the study period. The composition ratios of ionic species were 50.1% for marine sources (Na⁺, Mg²⁺, Cl⁻), 30.9% for anthropogenic sources (NH₄⁺, nss-SO₄²⁻, NO₃⁻), 4.7% for the soil source (nss-Ca²⁺), and 3.1% for organic acids (HCOO⁻, CH₃COO⁻). From the seasonal comparison, the concentrations of NO₃⁻, nss-Ca²⁺, and nss-SO₄²⁻ increased in the winter and spring seasons, indicating a reasonable possibility of long-range transport from the Asian continent. The acidifying contributions of the major inorganic acids (nss-SO₄²⁻ and NO₃⁻) and organic acids (HCOO⁻ and CH₃COO⁻) were 87.6% and 12.4%, respectively. In the comparison by sectional inflow pathway of air masses during the rainy sampling days, the concentrations of nss-SO₄²⁻ and NO₃⁻ were relatively high when the air mass moved from continental China into the Jeju area.
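
The volume-weighted means reported above weight each rain event's concentration by its rainfall volume; for pH the averaging must be done on the H⁺ concentration, not on the pH values themselves. A minimal sketch:

```python
import math

def volume_weighted_mean(concs, volumes):
    """Volume-weighted mean (VWM) concentration: each event's
    concentration weighted by its rainfall volume."""
    return sum(c * v for c, v in zip(concs, volumes)) / sum(volumes)

def vwm_ph(phs, volumes):
    """VWM pH: convert to H+ concentration, volume-average,
    then convert back to the pH scale."""
    h = [10.0 ** -p for p in phs]
    return -math.log10(volume_weighted_mean(h, volumes))
```

Averaging pH directly would overstate the mean pH, since pH is logarithmic and acidic (high-H⁺) events dominate the true volume-weighted acidity.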

Evidence for the Luminosity Evolution of Type Ia Supernovae from the Ages of Early-type Host Galaxies

  • Lee, Young-Wook;Kang, Yijung;Kim, Young-Lo;Lim, Dongwook;Chung, Chul
    • The Bulletin of The Korean Astronomical Society
    • /
    • v.38 no.2
    • /
    • pp.56.1-56.1
    • /
    • 2013
  • Type Ia supernova (SNe Ia) cosmology provides the only direct evidence for the presence of dark energy. This result is based on the assumption that the look-back time evolution of SNe Ia luminosity, after light-curve shape correction, is negligible. However, the most recent compilation of SNe Ia data shows a systematic difference in the Hubble residual (HR) between E and Sd/Irr galaxies, indicating that the light-curve fitters used by the SNe Ia community cannot correct for a large portion of the population age effect. In order to investigate this possibility more directly, we have obtained low-resolution spectra for 30 nearby early-type host galaxies. This data set is used to estimate the luminosity-weighted mean ages and metallicities of the host galaxies by employing population synthesis models. We find an interesting trend between host galaxy age and HR, in the sense that younger galaxies have positive residuals (i.e., the light-curve-corrected SNe Ia luminosity is fainter). This result is largely independent of the choice of population synthesis model. Taken at face value, this age (evolution) effect can mimic a large fraction of the HR used in the discovery of dark energy. The result is significant at the 1.4-3 sigma level, depending on the light-curve fitter adopted, and further observations and analyses are certainly required to confirm the trend reported here.

Robo-Advisor Algorithm with Intelligent View Model (지능형 전망모형을 결합한 로보어드바이저 알고리즘)

  • Kim, Sunwoong
    • Journal of Intelligence and Information Systems
    • /
    • v.25 no.2
    • /
    • pp.39-55
    • /
    • 2019
  • Recently, banks and large financial institutions have introduced many robo-advisor products. A robo-advisor uses financial engineering algorithms to produce an optimal asset allocation portfolio for investors without human intervention. Since its first introduction on Wall Street in 2008, the market has grown to 60 billion dollars and is expected to expand to 2,000 billion dollars by 2020. Because robo-advisor algorithms present asset allocation output to investors, mathematical or statistical asset allocation strategies are applied. The mean-variance optimization model developed by Markowitz is the typical asset allocation model: a simple but quite intuitive strategy in which assets are allocated to minimize portfolio risk while maximizing expected portfolio return using optimization techniques. Despite its theoretical background, both academics and practitioners find that the standard mean-variance optimization portfolio is very sensitive to the expected returns calculated from past price data, and corner solutions allocating to only a few assets are often found. The Black-Litterman optimization model overcomes these problems by starting from a neutral Capital Asset Pricing Model equilibrium point. Implied equilibrium returns for each asset are derived from the equilibrium market portfolio through reverse optimization. The Black-Litterman model then uses a Bayesian approach to combine subjective views on the price forecasts of one or more assets with the implied equilibrium returns, resulting in new estimates of risk and expected return. These new estimates can produce an optimal portfolio via the well-known Markowitz mean-variance optimization algorithm. If the investor has no views on his asset classes, the Black-Litterman model produces the same portfolio as the market portfolio. But what if the subjective views are incorrect? Surveys of the performance of stocks recommended by securities analysts show very poor results, so incorrect views combined with implied equilibrium returns may produce very poor portfolio output for Black-Litterman users. This paper suggests an objective investor views model based on Support Vector Machines (SVMs), which have shown good performance in stock price forecasting. An SVM is a discriminative classifier defined by a separating hyperplane; linear, radial basis, and polynomial kernel functions are used to learn the hyperplanes. Input variables for the SVM are the returns, standard deviations, Stochastics %K, and price parity degree for each asset class. The SVM outputs expected stock price movements and their probabilities, which are used as input variables in the intelligent views model. The stock price movements are categorized into three phases: down, neutral, and up. The expected stock movements form the P matrix, and their probabilities are used in the Q matrix. The implied equilibrium returns vector is combined with the intelligent views matrix, yielding the Black-Litterman optimal portfolio. For comparison, the Markowitz mean-variance optimization model and a risk parity model are used, with the value-weighted and equal-weighted market portfolios as benchmark indexes. We collect the 8 KOSPI 200 sector indexes from January 2008 to December 2018, comprising 132 monthly index values; the training period is 2008 to 2015 and the testing period is 2016 to 2018. Our suggested intelligent views model combined with implied equilibrium returns produced the optimal Black-Litterman portfolio. In the out-of-sample period, this portfolio outperformed the well-known Markowitz mean-variance portfolio, the risk parity portfolio, and the market portfolio. The total return of the 3-year Black-Litterman portfolio was 6.4%, the highest value; the maximum drawdown was -20.8%, the lowest value; and the Sharpe ratio, which measures return per unit of risk, was the highest at 0.17. Overall, our suggested views model shows the possibility of replacing subjective analysts' views with an objective view model for practitioners applying robo-advisor asset allocation algorithms in real trading.
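
The reverse-optimization and view-blending steps described above can be sketched with the standard Black-Litterman formulas. The risk aversion `delta`, scalar `tau`, and view uncertainty `omega` below are illustrative placeholders, not values from the paper:

```python
import numpy as np

def implied_returns(delta, sigma, w_mkt):
    """Reverse optimization: equilibrium returns pi = delta * Sigma * w,
    recovered from the market-capitalization weights w_mkt."""
    return delta * sigma @ w_mkt

def black_litterman(pi, sigma, P, Q, omega, tau=0.05):
    """Posterior expected returns combining equilibrium returns pi
    with views (P picks assets, Q states expected returns) of
    uncertainty omega, via the Black-Litterman master formula."""
    ts_inv = np.linalg.inv(tau * sigma)
    om_inv = np.linalg.inv(omega)
    A = ts_inv + P.T @ om_inv @ P          # posterior precision
    b = ts_inv @ pi + P.T @ om_inv @ Q     # precision-weighted means
    return np.linalg.solve(A, b)
```

With extremely uncertain views (large `omega`) the posterior collapses back to the implied equilibrium returns, which is why an investor with no views recovers the market portfolio.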

An Evaluation of Solid Removal Efficiency in Coagulation System for Treating Combined Sewer Overflows by Return Sludge (CSOs처리를 위한 응집침전시스템에서 슬러지 반송에 의한 고형물 처리효율평가)

  • Ha, Sung-Ryong;Lee, Seung-Chul
    • Journal of Korean Society of Environmental Engineers
    • /
    • v.35 no.3
    • /
    • pp.171-178
    • /
    • 2013
  • In this study, the sludge that occurs during the initial operation of a coagulation system developed for the treatment of combined sewer overflows (CSOs) was returned to the flocculation reactor. The purposes of this study were to analyze the characteristics of the flocs generated through sludge recycling and the settling characteristics of the sludge, and to evaluate the possibility that the high concentration of particulate matter in the initial CSO inflow could be used as a weighted coagulant additive. The concentration of treated CSO pollutants was low at the beginning of the influent, which carries a large amount of particulate matter over 20 μm, and then gradually increased. The flocs generated from the returned sludge were similar in size to flocs generated through the injection of micro-sands, and the settling velocity with returned-sludge injection decreased from 55.1 cm/min to 21.5 cm/min. The SVI value of the sludge accumulated at the bottom of the sedimentation tank was 72, and the settled sludge volume decreased rapidly within 10 minutes owing to consolidation; these results mean that the recycled sludge has good settling characteristics. A returned-sludge condition of 0.1% return against 0.3% extraction balanced settlement and extraction, and was adequate to maintain proper concentrations of 100~200 mg/L of TS and 50~100 mg/L of VS in the flocculation reactor. Using return sludge containing CSO particulate matter as a weighted coagulant additive secured stable treated water quality despite dynamic changes in influent water quality, and can furthermore be expected to reduce the alum dosage along with sludge production.

Estimating Benzene Exposure Level over Time and by Industry Type through a Review of Literature on Korea

  • Park, Donguk;Choi, Sangjun;Ha, Kwonchul;Jung, Hyejung;Yoon, Chungsik;Koh, Dong-Hee;Ryu, Seunghun;Kim, Soogeun;Kang, Dongmug;Yoo, Kyemook
    • Safety and Health at Work
    • /
    • v.6 no.3
    • /
    • pp.174-183
    • /
    • 2015
  • The major purpose of this study is to construct a retrospective exposure assessment for benzene through a review of the literature on Korea. Airborne benzene measurements reported in 34 articles were reviewed, and a total of 15,729 individual measurements were compiled. Weighted arithmetic means [AM(w)] and their variances calculated across studies were summarized by 5-year period (prior to the 1970s through the 2010s) and by industry type. Industries were classified according to the Korea Standard Industrial Classification (KSIC) using information provided in the literature. We estimated quantitative retrospective exposure to benzene for each cell in the matrix formed by combining time period and KSIC. Analysis of the AM(w) indicated reductions in exposure levels over time, regardless of industry, with a mean level of 50.4 ppm (n = 2,289) prior to the 1980-1984 period, which dropped to 2.8 ppm (n = 305) in the 1990-1994 period and to 0.1 ppm (n = 294) in the 1995-1999 period. There has been no improvement since the 2000s, with estimated AM(w) values of 4.3 ppm (n = 6,211) for the 2005-2009 period and 4.5 ppm (n = 3,358) for the 2010-2013 period. A comparison by industry found no consistent patterns in the measurement results. Our estimated benzene measurements can be used not only to determine the possibility of retrospective exposure to benzene, but also to estimate the level of quantitative or semiquantitative retrospective exposure.
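
Pooling measurement results across studies as above amounts to a sample-size-weighted arithmetic mean. A minimal sketch (the weighting by measurement count n is an assumption here, since the paper's exact weighting scheme is not given in the abstract):

```python
def pooled_weighted_mean(means, ns):
    """Sample-size-weighted arithmetic mean AM(w): each study's mean
    contributes in proportion to its number of measurements."""
    return sum(m * n for m, n in zip(means, ns)) / sum(ns)

# Two hypothetical studies in the same period/industry cell:
# 2.0 ppm from 1 measurement, 4.0 ppm from 3 measurements.
amw = pooled_weighted_mean([2.0, 4.0], [1, 3])
```

Weighting by n keeps a small study with few samples from dominating the exposure estimate for a period-by-industry cell.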