• Title/Summary/Keyword: Application Selection


Magnetic separation device for paramagnetic materials operated in a low magnetic field

  • Mishima, F.; Nomura, N.; Nishijima, S.
    • Progress in Superconductivity and Cryogenics, v.24 no.3, pp.19-23, 2022
  • We have been developing a magnetic separation device that can separate paramagnetic materials in low magnetic fields. Magnetic separation of paramagnetic particles with small particle sizes is desired for volume reduction of contaminated soil in Fukushima and for separation of iron scale from the water supply systems of power plants. However, implementation has been difficult because the magnetic field required for paramagnetic materials is high, and installing such a magnet on site is problematic. Therefore, we have developed a magnetic separation system that combines a selection tube with magnetic separation and can separate small paramagnetic particles in a low magnetic field. The selection tube classifies suspended particles by exploiting the phenomenon that, when the suspension flows upward, a particle comes to rest where the gravity acting on it balances the drag force. In this balanced condition, particles can be captured even with small magnetic forces. In this study, we calculated the size of paramagnetic particles trapped in a selection tube under a high gradient magnetic field. The results show that the combination of the selection tube and HGMS (high gradient magnetic separation) can separate small paramagnetic particles under a low magnetic field with high efficiency, and this paper shows its potential applications.
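The balance condition the abstract describes can be sketched numerically. The snippet below is a minimal illustration, assuming Stokes drag for the settling particle and the standard gradient-field force on a weakly magnetic sphere; the material properties (a hematite-like particle in water) and the field values are illustrative assumptions, not the paper's experimental conditions.

```python
import math

def suspended_diameter(u, rho_p, rho_f, mu, g=9.81):
    """Particle diameter (m) whose Stokes settling velocity equals the
    upward flow velocity u (m/s), i.e. the size held at rest in the tube."""
    return math.sqrt(18.0 * mu * u / ((rho_p - rho_f) * g))

def magnetic_force(chi, d, B, dBdx, mu0=4e-7 * math.pi):
    """Magnetic force (N) on a paramagnetic sphere of diameter d in a
    gradient field: F = (chi * V / mu0) * B * dB/dx."""
    volume = math.pi * d**3 / 6.0
    return chi * volume * B * dBdx / mu0

# Example: hematite-like particle (chi ~ 2e-3 SI) in water, 1 mm/s upflow
d = suspended_diameter(u=1e-3, rho_p=5200.0, rho_f=1000.0, mu=1e-3)
F = magnetic_force(chi=2e-3, d=d, B=0.5, dBdx=100.0)
print(f"suspended diameter ~ {d*1e6:.1f} um, magnetic force ~ {F:.2e} N")
```

Because the drag and gravity cancel at this diameter, even the small force computed here is unopposed and can pull the particle toward the matrix, which is the point of combining the tube with HGMS.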

Lossless Compression for Hyperspectral Images based on Adaptive Band Selection and Adaptive Predictor Selection

  • Zhu, Fuquan; Wang, Huajun; Yang, Liping; Li, Changguo; Wang, Sen
    • KSII Transactions on Internet and Information Systems (TIIS), v.14 no.8, pp.3295-3311, 2020
  • With the wide application of hyperspectral images, compressing them becomes more and more important. The conventional recursive least squares (CRLS) algorithm has great potential for lossless compression of hyperspectral images. The prediction accuracy of CRLS is closely related to the correlations between the reference bands and the current band, and to the similarity between pixels in the prediction context. Based on this characteristic, we present an improved CRLS with adaptive band selection and adaptive predictor selection (CRLS-ABS-APS). First, a k-means clustering algorithm based on spectral vector correlation coefficients is employed to generate a clustering map. Next, an adaptive band selection strategy based on inter-spectral correlation coefficients selects the reference bands for each band. Then, an adaptive predictor selection strategy based on the clustering map selects the optimal CRLS predictor for each pixel. In addition, a double snake scan mode further improves the similarity of the prediction context, and a recursive average estimation method accelerates the local average calculation. Finally, the prediction residuals are entropy coded by an arithmetic encoder. Experiments on the Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) 2006 data set show that CRLS-ABS-APS achieves average bit rates of 3.28 bpp, 5.55 bpp, and 2.39 bpp on the three subsets, respectively. The results indicate that CRLS-ABS-APS effectively improves compression with lower computational complexity and outperforms the current state-of-the-art methods.
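The adaptive band selection step can be illustrated with a small sketch: for each band, pick the previous bands with the highest inter-spectral correlation as references. This is only one element of CRLS-ABS-APS (clustering, predictor selection, the double snake scan, and arithmetic coding are omitted), and the toy data cube is an assumption for demonstration.

```python
import numpy as np

def select_reference_bands(cube, band, k=2):
    """Pick the k previous bands most correlated with `band` as its
    reference bands (one element of the ABS idea; the rest of the
    compression pipeline is omitted)."""
    target = cube[band].ravel()
    scores = []
    for b in range(band):
        ref = cube[b].ravel()
        r = np.corrcoef(target, ref)[0, 1]   # inter-spectral correlation
        scores.append((abs(r), b))
    scores.sort(reverse=True)
    return [b for _, b in scores[:k]]

# Toy cube: 5 bands of 8x8 pixels; bands 3 and 4 are noisy copies of band 0
rng = np.random.default_rng(0)
cube = rng.normal(size=(5, 8, 8))
cube[3] = cube[0] + 0.05 * rng.normal(size=(8, 8))
cube[4] = cube[0] + 0.05 * rng.normal(size=(8, 8))
print(select_reference_bands(cube, band=4, k=2))
```

For band 4 the sketch picks bands 0 and 3, since those carry nearly the same signal, which is exactly the property a linear predictor can exploit.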

Project Selection of Six Sigma Using Group Fuzzy AHP and GRA (그룹 Fuzzy AHP와 GRA를 이용한 식스시그마 프로젝트 선정방안)

  • Yoo, Jung-Sang; Choi, Sung-Woon
    • Journal of the Korea Convergence Society, v.10 no.11, pp.149-159, 2019
  • Six sigma is an innovative management movement that improves business processes by adapting to the paradigms and trends of the market and customers. Suitable selection of a six sigma project can greatly reduce costs, improve quality, and enhance customer satisfaction. Studies on the selection of six sigma projects exist, but few have addressed selecting the right project under an incomplete-information environment. The purpose of this study is to propose integrated MCDM techniques for correct project selection under incomplete information. The six sigma project selection process involves four steps: 1) determination of project selection criteria; 2) calculation of the relative importance of team members' competencies; 3) assessment with a project preference scale; 4) final ranking of the projects. This study proposes a combined method applying the group fuzzy Analytic Hierarchy Process (AHP), an easily defuzzified trapezoidal fuzzy number (TrFN), and grey relational analysis (GRA). Both the weights of the project selection criteria and the relative importance of team members' competencies are evaluated by group fuzzy AHP. Project preferences are assessed on an easily defuzzified TrFN scale in the case of incomplete information.
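The scoring and ranking steps can be sketched as follows, assuming the common average defuzzification (a+b+c+d)/4 for a trapezoidal fuzzy number and a textbook grey relational analysis with distinguishing coefficient 0.5; the paper's specific scales and criteria weights are not reproduced, so the numbers below are purely illustrative.

```python
import numpy as np

def defuzzify_trfn(a, b, c, d):
    """Simple average defuzzification of a trapezoidal fuzzy number
    (a, b, c, d); one of several 'easy' crisp conversions."""
    return (a + b + c + d) / 4.0

def grey_relational_grades(scores, weights, zeta=0.5):
    """Grey relational analysis: compare each alternative (row) with the
    ideal reference sequence (column-wise best of the normalized scores)."""
    scores = np.asarray(scores, dtype=float)
    norm = (scores - scores.min(0)) / (scores.max(0) - scores.min(0))
    ref = norm.max(0)                      # ideal reference sequence
    delta = np.abs(ref - norm)             # deviation sequences
    coeff = (delta.min() + zeta * delta.max()) / (delta + zeta * delta.max())
    return coeff @ np.asarray(weights)     # weighted grey relational grade

# Three candidate projects scored on three criteria (first one via TrFN)
scores = [[defuzzify_trfn(6, 7, 8, 9), 0.8, 0.6],
          [defuzzify_trfn(3, 4, 5, 6), 0.9, 0.9],
          [defuzzify_trfn(1, 2, 3, 4), 0.4, 0.3]]
weights = [0.5, 0.3, 0.2]                 # e.g. from group fuzzy AHP
grades = grey_relational_grades(scores, weights)
print(grades, "-> best project:", int(np.argmax(grades)))
```

The weights fed into GRA would come from the group fuzzy AHP step in the paper's process; here they are hypothetical placeholders.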

A new sample selection model for overdispersed count data (과대산포 가산자료의 새로운 표본선택모형)

  • Jo, Sung Eun; Zhao, Jun; Kim, Hyoung-Moon
    • The Korean Journal of Applied Statistics, v.31 no.6, pp.733-749, 2018
  • Sample selection arises as a result of the partial observability of the outcome of interest in a study. Heckman introduced a sample selection model to analyze such data and proposed a full maximum likelihood estimation method under the assumption of normality. Recently, sample selection models for binomial and Poisson response variables have been proposed. Based on the theory of symmetry-modulated distributions, we extend these to a model for overdispersed count data. Such data without sample selection are often modeled using the negative binomial distribution; hence we propose a sample selection model for overdispersed count data using the negative binomial distribution. A real-data application is presented. Simulation studies reveal that our estimation method, based on the profile log-likelihood, is stable.
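The mechanism the abstract combines, selection on a latent index plus an overdispersed count outcome, can be simulated in a few lines. This sketch uses a log-normal frailty rather than the paper's negative binomial formulation, but it exhibits the same two features: variance exceeding the mean, and an observed-sample mean biased by selection.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 20000

# Latent errors: a correlated normal pair links selection and outcome
rho = 0.7
e_sel, e_out = rng.multivariate_normal([0, 0], [[1, rho], [rho, 1]], n).T

# Selection equation: the outcome is observed only when the index > 0
observed = (0.2 + e_sel) > 0

# Overdispersed count outcome: Poisson with log-normal frailty, so
# Var(y) > E(y) -- qualitatively like a negative binomial outcome
mu = np.exp(0.5 + 0.8 * e_out)
y = rng.poisson(mu)

print("population mean:", y.mean())
print("observed-sample mean:", y[observed].mean())   # selection bias
print("overdispersion: var/mean =", y.var() / y.mean())
```

Because the selection and outcome errors are positively correlated, the observed sample over-represents large counts, which is exactly the bias a sample selection model corrects for.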

A Survey on the M&V to guarantee the energy saving performance of ESCO (ESCO 에너지절약 M&V 방법의 선택 및 적용방안 연구)

  • Lim, Ki Choo
    • Journal of Energy Engineering, v.23 no.4, pp.123-129, 2014
  • The ESCO industry should guarantee energy saving performance with M&V, as developed countries do. Application of ESCO M&V is a necessary condition for guaranteeing energy saving performance. This study recommends a goal, direction, and order of application, and suggests how to select among M&V Options A, B, C, and D for each energy conservation technology in Korea, with reference to the IPMVP and to examples applied in the US and Japan. In the future, a study is needed on guidelines for the M&V plan report and result report based on the goal, direction, and selection of the M&V option.
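The common thread of the IPMVP options the study compares is that savings are never metered directly: they are computed as the difference between baseline and reporting-period energy, plus or minus routine adjustments. A minimal sketch, with hypothetical numbers:

```python
def avoided_energy(baseline_kwh, reporting_kwh, adjustment_kwh=0.0):
    """IPMVP-style avoided energy: (baseline - reporting period)
    +/- routine adjustments such as weather normalization. The same
    arithmetic underlies Options A-D; the options differ in how the
    baseline and reporting-period figures are measured or simulated."""
    return baseline_kwh - reporting_kwh + adjustment_kwh

# Example: adjust the baseline up by 500 kWh for a colder reporting winter
savings = avoided_energy(baseline_kwh=120_000, reporting_kwh=100_000,
                         adjustment_kwh=500)
print(savings)  # 20500
```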

Knee-driven many-objective sine-cosine algorithm

  • Hongxia, Zhao; Yongjie, Wang; Maolin, Li
    • KSII Transactions on Internet and Information Systems (TIIS), v.17 no.2, pp.335-352, 2023
  • When solving multi-objective optimization problems, the blindness of the population's evolution direction gradually emerges as the number of objectives increases, and convergence and diversity become difficult to balance. Many-objective optimization problems challenge some classic multi-objective optimization algorithms because of the huge objective space. The sine-cosine algorithm is a new nature-inspired optimization algorithm that uses a sine and cosine mathematical model to solve optimization problems. In this paper, a knee-driven many-objective sine-cosine algorithm (MaSCA-KD) is proposed. First, a Latin hypercube population initialization strategy generates the initial population, ensuring that it is evenly distributed in the decision space. Second, special points in the population, such as the nadir point and knee points, are adopted to increase selection pressure and guide population evolution. During environmental selection, population diversity is promoted through diversity criteria. Through these strategies, a balance between population convergence and diversity is achieved. Experiments on the WFG series of benchmark problems show that MaSCA-KD is competitive with existing algorithms. The algorithm performs well and can serve as an alternative tool for many-objective optimization problems.
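The Latin hypercube initialization step is concrete enough to sketch: each decision variable's range is divided into as many strata as there are individuals, one sample is drawn per stratum, and the strata are shuffled independently per dimension. The bounds and population size below are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def latin_hypercube_population(pop_size, dim, lower, upper, rng=None):
    """Latin hypercube initialization: split each decision variable's range
    into pop_size equal strata, sample one point per stratum, then shuffle
    the strata independently per dimension for an even spread."""
    rng = np.random.default_rng(rng)
    samples = (rng.random((pop_size, dim))
               + np.arange(pop_size)[:, None]) / pop_size
    for d in range(dim):                  # decouple the dimensions
        rng.shuffle(samples[:, d])
    return lower + samples * (upper - lower)

pop = latin_hypercube_population(10, 3, lower=0.0, upper=2.0, rng=42)
print(pop.shape)  # (10, 3)
# Each column has exactly one point in each of the 10 strata of [0, 2)
```

Compared with plain uniform sampling, this guarantees that no region of any single variable's range is left empty, which is why it is a popular way to seed evolutionary populations.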

An Adaptive Grid Resource Selection Method Using Statistical Analysis of Job History (작업 이력의 통계 분석을 통한 적응형 그리드 자원 선택 기법)

  • Hur, Cin-Young; Kim, Yoon-Hee
    • Journal of KIISE: Computer Systems and Theory, v.37 no.3, pp.127-137, 2010
  • As large-scale computational applications in various scientific domains are executed over many integrated sets of grid computing resources, managing and controlling their execution has become more difficult. It is beneficial to consult the job history generated by many application executions in order to identify an application's characteristics and to decide grid resource selection policies meaningfully. In this paper, we apply a statistical technique, the Plackett-Burman design with fold-over (PBDF), to analyze grid environments and application execution history. The PBDF design identifies the main factors in grid environments and applications and ranks them by how strongly they affect execution time. The most influential factors are used to select reference job profiles, and a preferable resource is then chosen based on those profiles. An application is run on the selected resource, and its execution result is added to the job history. A factor's credit is adjusted according to the actual execution time. As a proof of concept, we analyzed job history from an aerospace research grid system to characterize its grid resources and applications. We built the JARS algorithm and simulated it with the analyzed job history. The simulation results show good reliability and considerable performance in a grid environment with frequently crashing resources.
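The factor-ranking idea behind PBDF can be sketched with a standard 8-run Plackett-Burman design: each factor's main effect is the mean response at its high level minus the mean at its low level, and factors are ranked by effect magnitude. The fold-over half, which the paper uses to de-alias effects, is omitted for brevity, and the synthetic execution times are an assumption for demonstration (only factors 0 and 3 truly matter).

```python
import numpy as np

# 8-run two-level Plackett-Burman design (rows = runs, cols = factors),
# built from the N=8 generator + + + - + - - plus an all-minus row
gen = np.array([1, 1, 1, -1, 1, -1, -1])
design = np.array([np.roll(gen, i) for i in range(7)] + [-np.ones(7, int)])

# Synthetic execution times: factors 0 and 3 matter, the rest are noise
rng = np.random.default_rng(7)
times = 100 + 15 * design[:, 0] - 8 * design[:, 3] + rng.normal(0, 1, 8)

# Main effect of factor j: mean(time | +1) - mean(time | -1)
effects = np.array([times[design[:, j] == 1].mean()
                    - times[design[:, j] == -1].mean() for j in range(7)])
ranking = np.argsort(-np.abs(effects))    # most influential factors first
print(ranking[:2])
```

With an orthogonal design like this, the two planted factors dominate the ranking; in the paper's setting the "factors" would be resource and application attributes and the response the measured execution time.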