• Title/Summary/Keyword: Linux Cluster

Search results: 89

M-VIA Implementation on a Gigabit Ethernet Card (기가비트 이더넷상에서의 M-VIA 구현)

  • 윤인수;정상화
    • Journal of KIISE:Computer Systems and Theory
    • /
    • v.29 no.12
    • /
    • pp.648-654
    • /
    • 2002
  • The Virtual Interface Architecture (VIA) is an industry standard for communication over system area networks (SANs). M-VIA is a software implementation of the VIA technology for Linux. In this paper, we implemented M-VIA on an AceNIC Gigabit Ethernet card by developing a new AceNIC driver for M-VIA, and we analyzed the M-VIA data segmentation process. When the Gigabit Ethernet MTU is larger than 1514 bytes, the default M-VIA data segmentation size leaves much room for improvement, so we experimented with various MTU and M-VIA data segmentation sizes and compared their performance.
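
A minimal sketch of the idea behind the segmentation experiment above: the usable payload per Ethernet frame is the MTU minus per-frame overhead, so a message is split into that many pieces, and a larger (jumbo-frame) MTU means fewer segments. The overhead constant and MTU values below are illustrative assumptions, not figures from the paper.

```python
# Illustrative sketch: splitting a send buffer into MTU-limited segments.
# HEADER_OVERHEAD and the MTU values are assumptions for illustration only.

HEADER_OVERHEAD = 14 + 4  # assumed per-frame Ethernet header + trailer bytes

def segment(buffer: bytes, mtu: int) -> list[bytes]:
    """Split a buffer into chunks that each fit one frame of the given MTU."""
    payload = mtu - HEADER_OVERHEAD
    return [buffer[i:i + payload] for i in range(0, len(buffer), payload)]

if __name__ == "__main__":
    data = bytes(64 * 1024)           # a 64 KB message
    for mtu in (1514, 4096, 9000):    # standard frame vs. jumbo-frame MTUs
        segs = segment(data, mtu)
        print(f"MTU {mtu}: {len(segs)} segments of up to {mtu - HEADER_OVERHEAD} bytes")
```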

Analysis of the Load Balancing Algorithms according to the Request Patterns on the LVS Cluster Systems (LVS 클러스터 시스템의 요구 패턴에 따른 부하 분산 알고리즘 분석)

  • Li, Shan-Hong;Kim, Sung-Ki;Na, Yong-Hee;Min, Byoung-Joon
    • Proceedings of the Korea Information Processing Society Conference
    • /
    • 2002.11a
    • /
    • pp.151-154
    • /
    • 2002
  • To cope with the ever-increasing volume of service requests from Internet users, cluster systems with load-balancing capability are increasingly being used. In this study, we analyze the load-balancing response characteristics of the RR (Round Robin), WRR (Weighted Round Robin), LC (Least Connection), and WLC (Weighted Least Connection) algorithms, which are used to provide clients with better response performance, under seven patterns of incoming client requests, and discuss the results. For this purpose, based on measurements of a real system, we classified the variation in the volume of client requests arriving per unit time into seven patterns, and obtained the load-balancing response characteristics for the seven request patterns on a Linux Virtual Server (LVS) cluster system. Through this study we were able to suggest the optimal load-balancing algorithm for each pattern of client request variation. The results will serve as a useful reference for future research on efficient dynamic load balancing.

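A minimal sketch of two of the schedulers compared above, weighted round robin (WRR) and least connection (LC); the backend list, weights, and connection counts are hypothetical.

```python
import itertools

# Hypothetical backend servers with static weights and live connection counts.
servers = [
    {"name": "web1", "weight": 3, "connections": 0},
    {"name": "web2", "weight": 2, "connections": 0},
    {"name": "web3", "weight": 1, "connections": 0},
]

# Weighted round robin (simple form): each server appears in the cycle
# `weight` times; LVS interleaves more smoothly, but the effect is the same.
_wrr_cycle = itertools.cycle(
    [s for s in servers for _ in range(s["weight"])]
)

def pick_wrr():
    return next(_wrr_cycle)

# Least connection: choose the server with the fewest active connections.
def pick_lc():
    return min(servers, key=lambda s: s["connections"])

if __name__ == "__main__":
    for _ in range(6):
        s = pick_wrr()
        s["connections"] += 1
        print("WRR ->", s["name"])
    print("LC  ->", pick_lc()["name"])
```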

Dynamic Scheduling based on Host Load Information in a Wireless Internet Proxy Server Cluster Environment (무선 인터넷 프록시 서버 클러스터 환경에서 호스트 부하 정보에 기반한 동적 스케줄링)

  • Park, Hong-Joo;Kwak, Hu-Keun;Chung, Kyu-Sik
    • Proceedings of the Korean Information Science Society Conference
    • /
    • 2005.07a
    • /
    • pp.310-312
    • /
    • 2005
  • In a wireless Internet proxy server cluster, the load balancer distributes user requests across the servers (hosts). The Linux Virtual Server (LVS) is a software load balancer that provides several scheduling methods, but it has the drawback of not reflecting the dynamically changing load of each server (host) when distributing requests. As an improvement, there is dynamic scheduling, which sets an upper bound and a lower bound on the number of concurrent connections per server and distributes requests accordingly; however, these bounds are fixed even though the appropriate values can change with the content requested by users. In this paper, we propose a scheduling method based on host load information. The proposed method distributes user requests based on the load information of each host and, considering that the upper and lower bounds can change with the users' requests, sets no fixed bounds; instead, requests are distributed appropriately according to the requested content. Experiments were performed on 16 computers. The results show a 13% performance decrease compared with the existing scheduling method when all users request the same content, and a 102% performance improvement over the existing method otherwise.

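A generic sketch, not the authors' algorithm, of a dispatcher in the spirit of the abstract above: requests are routed by content hash (preserving cache affinity in a proxy cluster) unless the preferred host reports a load well above the cluster average, in which case the least-loaded host is chosen. The host names, load metric, and the 1.5x threshold are assumptions.

```python
# Generic sketch (not the paper's algorithm): content-hash placement with a
# load-based escape hatch instead of fixed upper/lower connection bounds.
import hashlib

class Host:
    def __init__(self, name):
        self.name = name
        self.load = 0.0          # periodically reported host load

HOSTS = [Host("proxy1"), Host("proxy2"), Host("proxy3")]   # hypothetical
OVERLOAD_FACTOR = 1.5            # assumed threshold relative to average load

def dispatch(url: str) -> Host:
    digest = int(hashlib.md5(url.encode()).hexdigest(), 16)
    preferred = HOSTS[digest % len(HOSTS)]
    average = sum(h.load for h in HOSTS) / len(HOSTS)
    # Keep cache affinity unless the preferred host is clearly overloaded.
    if average > 0 and preferred.load > OVERLOAD_FACTOR * average:
        return min(HOSTS, key=lambda h: h.load)
    return preferred

if __name__ == "__main__":
    HOSTS[0].load, HOSTS[1].load, HOSTS[2].load = 0.9, 0.2, 0.3
    print("request goes to", dispatch("http://example.com/index.wml").name)
```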

A Study on Distributed System Construction and Numerical Calculation Using Raspberry Pi

  • Ko, Young-ho;Heo, Gyu-Seong;Lee, Sang-Hyun
    • International Journal of Advanced Smart Convergence
    • /
    • v.8 no.4
    • /
    • pp.194-199
    • /
    • 2019
  • As system performance increases, data is increasingly processed in parallel rather than one item at a time. Today's CPU architectures are built to exploit multiple cores, and data-processing methods are accordingly being developed to enable parallel processing. Desktop CPUs now ship with more and more cores, data volumes are growing exponentially, and the need for large-scale data processing is also growing as artificial intelligence develops. The neural networks used in artificial intelligence consist of matrices, which makes them well suited to parallel processing. Against this backdrop, this paper aims to speed up processing by building a cluster of Raspberry Pi boards and implementing a parallel processing system on it. The Raspberry Pi is a credit-card-sized single-board computer made by the Raspberry Pi Foundation in the UK, originally developed for education in schools and developing countries; it is inexpensive, and because it is widely used, the information needed to work with it is easy to find. A distributed processing system must be supported by software that connects multiple computers in parallel and runs on the assembled system. The Raspberry Pi boards are connected to a switch hub, each board communicates over the internal network, and parallel processing is implemented with the Message Passing Interface (MPI). Parallel programs can be written in Python, and C or Fortran can also be used. The system was tested by multiplying a two-dimensional array of size 10000 by 0.1 in parallel. The tests showed a reduction in computation time, with the speedup scaling up to the maximum number of cores in the system. The system in this paper was built from Linux-based single-board computers, and testing on systems in other environments is thought to be necessary.
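
A minimal sketch of the kind of MPI test described above: each rank scales its own block of rows of a 10000x10000 array by 0.1, and the slowest rank's time is reported. It assumes the mpi4py and numpy packages are available on every node; the paper states only that MPI and Python were used.

```python
# Minimal mpi4py sketch: scale a 10000x10000 array by 0.1, split by rows.
# Run with e.g.:  mpiexec -n 4 python scale.py
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
size = comm.Get_size()

N = 10000                      # full matrix is N x N
rows = N // size               # assume N divides evenly for simplicity

# Each rank builds (or would receive) its own block of rows.
local = np.ones((rows, N), dtype=np.float64)

t0 = MPI.Wtime()
local *= 0.1                   # the actual computation under test
elapsed = comm.reduce(MPI.Wtime() - t0, op=MPI.MAX, root=0)

if rank == 0:
    print(f"{size} ranks, slowest rank took {elapsed:.4f} s")
```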

A Study on Web Services for Sequence Similarity search in the Workflow Environment (워크플로우 환경에서의 대규모 서열 유사성 검색 웹 서비스에 관한 연구)

  • Jun, Jin-Young
    • Journal of the Korea Society of Computer and Information
    • /
    • v.13 no.6
    • /
    • pp.41-49
    • /
    • 2008
  • In recent years, research that studies life phenomena using workflow management tools has been active in bioinformatics. A workflow management tool is a foundation that enables researchers to collaborate through the re-use and sharing of services, and a variety of workflow management tools, including the MyGrid project's Taverna, Kepler, and BioWMS, have been developed and released as open source. Based on web service technology, such a tool can model and automate services located at geographically distant sites within a single workspace. Many tools and databases used in bioinformatics are provided as web services and are used from workflow management tools. In this situation, developing web services for sequence similarity search, a basic operation in bioinformatics, and offering them as a stable service is essential for the field. In this paper, the similarity search speed for biological sequence data was improved on a Linux cluster, and by developing the search as a web service and linking it with a workflow management tool, sequence similarity searches could be completed in a short time.

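A minimal sketch of exposing a similarity search as a web service, in the spirit of the abstract above. It uses only the Python standard library and shells out to a BLAST+ blastn binary against a database named mydb; both the choice of BLAST+ and the database name are assumptions, since the paper does not name the search program or framework.

```python
# Minimal sketch: a web service endpoint that runs a sequence similarity
# search and returns the raw result. The use of BLAST+ (blastn) and the
# database name "mydb" are assumptions for illustration.
import subprocess
import tempfile
from http.server import BaseHTTPRequestHandler, HTTPServer

class SearchHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # The request body is expected to be a FASTA-formatted query.
        query = self.rfile.read(int(self.headers["Content-Length"]))
        with tempfile.NamedTemporaryFile(suffix=".fasta") as f:
            f.write(query)
            f.flush()
            result = subprocess.run(
                ["blastn", "-query", f.name, "-db", "mydb", "-outfmt", "6"],
                capture_output=True, text=True,
            )
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.end_headers()
        self.wfile.write(result.stdout.encode())

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), SearchHandler).serve_forever()
```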

A Dynamic Server Load Balancing based on Power Information for Saving Energy in a Server Cluster Environment (서버 클러스터 환경에서 에너지 절약을 위한 전력 정보 기반의 동적 서버 부하분산)

  • Kim, Dong-Jun;Kang, Na-Myong;Kwon, Hui-Ung;Kwak, Hu-Keun;Kim, Young-Jong;Chung, Kyu-Sik
    • Proceedings of the Korea Information Processing Society Conference
    • /
    • 2011.04a
    • /
    • pp.171-174
    • /
    • 2011
  • In a server cluster, the load balancer distributes user requests across the servers. The Linux Virtual Server (LVS) is a software load balancer that provides several scheduling methods, but it has the drawback of not reflecting the dynamically changing load of each server when distributing requests. As an improvement, there is dynamic scheduling, which sets an upper bound and a lower bound on the number of concurrent connections per server and distributes requests accordingly; however, these bounds are fixed even though the appropriate values can change with the state of the servers. In this paper, we propose a scheduling method based on server power information that overcomes the drawbacks of the existing load-balancing methods. The proposed method estimates energy consumption from the load information of each server and periodically updates the LVS weight table based on the power figures. The load balancer then assigns the traffic requested by clients according to the energy consumption state of each server, so that the load is distributed in a way that minimizes energy consumption. In addition, considering that the upper and lower bounds can change with the state of the servers, no fixed bounds are set; instead, requests are distributed appropriately according to server state. Experiments were performed with 15 PCs. Compared with the best-performing existing load-balancing algorithm, the proposed method showed nearly the same performance and power consumption when the servers had identical capabilities, and a 50.2% performance improvement and a 27.3% reduction in power consumption when the servers had different capabilities.
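
A minimal sketch of the weight-update loop described above: per-server power figures are sampled periodically and turned into LVS weights through ipvsadm, so servers that currently draw less power receive more requests. The inverse-power weighting, the refresh period, and the read_power() stub are assumptions, not the paper's exact model.

```python
# Illustrative sketch: periodically map per-server power readings to LVS
# weights via ipvsadm. The inverse-power mapping and the read_power()
# data source are assumptions, not the paper's exact model.
import subprocess
import time

VIP = "192.0.2.10:80"                       # hypothetical virtual service
SERVERS = ["10.0.0.1:80", "10.0.0.2:80"]    # hypothetical real servers

def read_power(server: str) -> float:
    """Placeholder: return the server's current power draw in watts.
    A real system would use a power meter or a load-based estimate."""
    return 150.0

def update_weights() -> None:
    for server in SERVERS:
        watts = read_power(server)
        weight = max(1, int(1000 / watts))  # assumed: less power -> higher weight
        # Edit the real server's weight in the LVS virtual service.
        subprocess.run(["ipvsadm", "-e", "-t", VIP, "-r", server, "-w", str(weight)])

if __name__ == "__main__":
    while True:
        update_weights()
        time.sleep(10)   # refresh period (assumed)
```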

Design and Implementation of a Metadata Structure for Large-Scale Shared-Disk File System (대용량 공유디스크 파일 시스템에 적합한 메타 데이타 구조의 설계 및 구현)

  • 이용주;김경배;신범주
    • Journal of KIISE:Computer Systems and Theory
    • /
    • v.30 no.1
    • /
    • pp.33-49
    • /
    • 2003
  • Recently, there have been large storage demands for manipulating multimedia data. To meet these demands, one of the major research directions is the SAN (Storage Area Network), which serves local file requests directly from shared-disk storage and eliminates the server bottlenecks that limit performance and availability. A SAN also improves network latency and bandwidth through new channel interfaces such as FC (Fibre Channel). However, traditional local file systems and distributed file systems are not well suited to such a storage network, and research is lacking on metadata structures for large numbers of inode objects such as files and directories. In this paper, we describe the architecture and design issues of our shared-disk file system and present an efficient bitmap that provides well-formed block allocation on each host, an extent-based semi-flat structure for storing large files, and a two-phase directory structure based on extendible hashing. We also describe a detailed algorithm for implementing the file system's device driver in the Linux kernel, and we compare our file system with a general-purpose file system (EXT2) and a shared-disk file system (GFS) in terms of file creation, directory creation, and I/O rate.
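
A minimal, generic sketch of extendible hashing, the technique underlying the two-phase directory structure mentioned above; the bucket capacity and in-memory layout are illustrative and do not reflect the file system's on-disk format.

```python
# Generic extendible hashing sketch (not the paper's on-disk layout):
# a directory of pointers to buckets, doubling only when a full bucket
# must split at the current global depth.
BUCKET_CAPACITY = 4   # illustrative

class Bucket:
    def __init__(self, depth):
        self.depth = depth          # local depth
        self.items = {}

class ExtendibleHash:
    def __init__(self):
        self.global_depth = 1
        self.directory = [Bucket(1), Bucket(1)]

    def _index(self, key):
        return hash(key) & ((1 << self.global_depth) - 1)

    def insert(self, key, value):
        bucket = self.directory[self._index(key)]
        if key in bucket.items or len(bucket.items) < BUCKET_CAPACITY:
            bucket.items[key] = value
            return
        self._split(bucket)
        self.insert(key, value)     # retry after the split

    def _split(self, bucket):
        if bucket.depth == self.global_depth:
            self.directory = self.directory * 2   # double the directory
            self.global_depth += 1
        new = Bucket(bucket.depth + 1)
        bucket.depth += 1
        bit = 1 << (bucket.depth - 1)
        # Entries whose newly significant hash bit is 1 move to the new bucket.
        for key in [k for k in bucket.items if hash(k) & bit]:
            new.items[key] = bucket.items.pop(key)
        # Repoint the directory slots that should now reference the new bucket.
        for i in range(len(self.directory)):
            if self.directory[i] is bucket and i & bit:
                self.directory[i] = new

    def lookup(self, key):
        return self.directory[self._index(key)].items.get(key)

if __name__ == "__main__":
    d = ExtendibleHash()
    for name in ("a.txt", "b.txt", "c.txt", "d.txt", "e.txt", "f.txt"):
        d.insert(name, f"inode-{name}")
    print(d.lookup("e.txt"), "global depth:", d.global_depth)
```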

Construction of web-based Database for Haliotis SNP (웹기반 전복류 (Haliotis) SNP 데이터베이스 구축)

  • Jeong, Ji-Eun;Lee, Jae-Bong;Kang, Se-Won;Baek, Moon-Ki;Han, Yeon-Soo;Choi, Tae-Jin;Kang, Jung-Ha;Lee, Yong-Seok
    • The Korean Journal of Malacology
    • /
    • v.26 no.2
    • /
    • pp.185-188
    • /
    • 2010
  • A web-based SNP database for the genus Haliotis was constructed on an Intel Server Platform ZSS130 with dual Xeon 3.2 GHz CPUs running a Linux-based (CentOS) operating system. Haliotis-related sequences (2,830 nucleotide sequences and 9,102 EST sequences) were downloaded through the NCBI taxonomy browser. To eliminate vector sequences, a vector-masking step was performed with the cross_match software against a vector sequence database. In addition, poly-A tails were removed using the trimest software from the EMBOSS package. The processed sequences were clustered and assembled with the TGICL package (TIGR tools), which incorporates the CAP3 software. A web-based interface (Haliotis SNP Database, http://www.haliotis.or.kr) was developed to make optimal use of the clustered assemblies. The Clustering Res. menu shows the contig sequences produced by the clustering, the alignment results, and the sequences in each cluster. Any sequence can also be compared against the Haliotis-related sequences through the BLAST menu. The Search menu is equipped with its own search engine, so all information in the database can be searched by gene name, accession number, and/or species name. Taken together, this web-based SNP database for Haliotis will be valuable for developing Haliotis SNPs in the future.
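
A simplified illustration of the poly-A trimming step mentioned above. The paper used trimest from the EMBOSS package; this standalone sketch only strips a trailing run of A's of at least an assumed minimum length from each FASTA record read on standard input.

```python
# Simplified illustration of poly-A trimming (the paper used EMBOSS trimest):
# strip a trailing run of at least MIN_TAIL A's from each FASTA record.
import re
import sys

MIN_TAIL = 8   # assumed minimum poly-A length to trim

def read_fasta(handle):
    header, seq = None, []
    for line in handle:
        line = line.strip()
        if line.startswith(">"):
            if header is not None:
                yield header, "".join(seq)
            header, seq = line, []
        elif line:
            seq.append(line)
    if header is not None:
        yield header, "".join(seq)

def trim_polya(seq: str) -> str:
    return re.sub(rf"A{{{MIN_TAIL},}}$", "", seq, flags=re.IGNORECASE)

if __name__ == "__main__":
    for header, seq in read_fasta(sys.stdin):
        print(header)
        print(trim_polya(seq))
```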

Benchmark Results of a Monte Carlo Treatment Planning system (몬데카를로 기반 치료계획시스템의 성능평가)

  • Cho, Byung-Chul
    • Progress in Medical Physics
    • /
    • v.13 no.3
    • /
    • pp.149-155
    • /
    • 2002
  • Recent advances in radiation transport algorithms, computer hardware performance, and parallel computing have made the clinical use of Monte Carlo based dose calculations possible. To compare the speed and accuracy of dose calculations between different codes, a set of benchmark tests was proposed at the XIIth ICCR (International Conference on the Use of Computers in Radiation Therapy, Heidelberg, Germany, 2000). A Monte Carlo treatment planning system comprising 28 Intel Pentium CPUs of various types was implemented for routine clinical use. The purpose of this study was to evaluate the performance of our system using these benchmark tests. The benchmark procedure comprises three parts: (a) the speed of photon beam dose calculation inside a given phantom of 30.5 cm × 39.5 cm × 30 cm depth, filled with 5 mm³ voxels, to within 2% statistical uncertainty; (b) the speed of electron beam dose calculation inside the same phantom; and (c) the accuracy of photon and electron beam calculations inside a heterogeneous slab phantom, compared with the reference EGS4/PRESTA results. In the speed benchmark, it took 5.5 minutes to reach less than 2% statistical uncertainty for an 18 MV photon beam. Although the net calculation for electron beams was an order of magnitude faster than for the photon beam, the overall calculation time was similar because of the overhead needed to maintain parallel processing. Since our Monte Carlo code is EGSnrc, an improved version of EGS4, the accuracy tests of our system showed, as expected, very good agreement with the reference data. In conclusion, our Monte Carlo treatment planning system produces clinically meaningful results. Although more efficient codes such as MCDOSE and VMC++ have been developed, BEAMnrc based on the EGSnrc code system may be used for routine clinical Monte Carlo treatment planning in conjunction with clustering techniques.

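A toy illustration of the "within 2% statistical uncertainty" stopping rule used in the benchmark above: histories are run in batches, and the run stops once the standard error of the batch means falls below 2% of the estimate. The scoring function is a stand-in random variable, not a radiation transport calculation.

```python
# Toy illustration of a Monte Carlo stopping rule based on statistical
# uncertainty: run histories in batches and stop when the relative standard
# error of the batch means drops below the target (2% here).
import math
import random

def score_one_history() -> float:
    """Stand-in for the quantity scored by one particle history."""
    return random.expovariate(1.0)

def run_until(target_rel_uncertainty=0.02, batch_size=10_000, max_batches=1_000):
    batch_means = []
    for _ in range(max_batches):
        batch_means.append(
            sum(score_one_history() for _ in range(batch_size)) / batch_size
        )
        if len(batch_means) < 2:
            continue
        n = len(batch_means)
        mean = sum(batch_means) / n
        var = sum((m - mean) ** 2 for m in batch_means) / (n - 1)
        rel = math.sqrt(var / n) / mean
        if rel < target_rel_uncertainty:
            return mean, rel, n * batch_size
    return mean, rel, n * batch_size

if __name__ == "__main__":
    mean, rel, histories = run_until()
    print(f"estimate {mean:.4f} +/- {100 * rel:.2f}% after {histories} histories")
```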