• Title/Summary/Keyword: Parallel Processing

A Comparative Study on Application of Material in Traditional Residences of Korea, China and Japan - Focusing on Representative Upper-class Houses - (한·중·일 전통주거의 재료적용 특성 비교 연구 - 각국 대표 상류주택을 중심으로 -)

  • Kim, Hwi Kyung; Choi, Kyung Ran
    • Korea Science and Art Forum / v.19 / pp.293-305 / 2015
  • While the unique cultural traits of each country are valued in their own right, they have also become essential to establishing a country's cultural identity. This study compares the residential architectural cultures of East Asia and identifies Korea's own unique traits by determining the application characteristics of the traditional architecture of Korea, China, and Japan through a practical investigation of materials, a basic element of architectural form. A literature survey and a field study were conducted in parallel, and the buildings investigated were Mucheomdang House in Korea, Prince Gong Mansion in China, and Dokyudo Building in Japan. Construction materials in Korea, China, and Japan include natural materials such as wood, stone, and clay, and artificial materials such as metal, paper, roof tiles, plug, and glass, and the buildings were constructed from combinations of these materials. This commonality is often found in the architectural composition. In the interior composition, however, the three countries clearly differed in the choice and application of materials, reflecting each country's climate, processing methods, and living culture. First, since each country selected materials under the influence of its own vegetation and climate, the living environment of each country can be seen in its residences. Korea and Japan show certain similarities, such as a floor-sitting culture and paper finishes in the interior, while China is clearly different. In particular, regarding material processing, artificial processing was minimized in Korea, giving a rough and plain impression, whereas the use of straight timbers in Japan produced an ordered and refined architectural expression. China showed the highest proportion of artificially processed materials among the three countries, which is closely associated with its coloring culture; technology for fine architectural materials such as bricks and glass was also highly advanced in China. Thus, comparing architectural materials reveals how immaterial elements such as natural character, functionality, and aesthetics were applied to residences in Korea, Japan, and China.

Comparison of the wall clock time for extracting remote sensing data in Hierarchical Data Format using Geospatial Data Abstraction Library by operating system and compiler (운영 체제와 컴파일러에 따른 Geospatial Data Abstraction Library의 Hierarchical Data Format 형식 원격 탐사 자료 추출 속도 비교)

  • Yoo, Byoung Hyun; Kim, Kwang Soo; Lee, Jihye
    • Korean Journal of Agricultural and Forest Meteorology / v.21 no.1 / pp.65-73 / 2019
  • MODIS (Moderate Resolution Imaging Spectroradiometer) data in Hierarchical Data Format (HDF) have been processed using the Geospatial Data Abstraction Library (GDAL). Because of the relatively large data size, it is preferable to build and install the data analysis tool for the greatest computing performance, which differs by operating system and by the form of distribution, e.g., source code or binary package. The objective of this study was to examine the performance of GDAL for processing HDF files, which would guide construction of a computer system for remote sensing data analysis. Execution times were compared between the environments under which GDAL was installed. The wall clock time was measured while extracting data for each variable in a MODIS data file, using a tool built by linking against GDAL under combinations of operating system (Ubuntu and openSUSE), compiler (GNU and Intel), and distribution form. The MOD07 product, which contains atmosphere data, was processed for eight 2-D variables and two 3-D variables. GDAL compiled with the Intel compiler under Ubuntu had the shortest computation time. Under openSUSE, GDAL compiled with the GNU and Intel compilers performed best for the 2-D and 3-D variables, respectively. The wall clock time was considerably longer for GDAL compiled with the "--with-hdf4=no" configuration option or installed through the RPM package manager under openSUSE. These results indicate that the choice of environment under which GDAL is installed, e.g., operating system or compiler, can have a considerable impact on the performance of a system for processing remote sensing data. Applying parallel computing approaches would further improve the performance of data processing for HDF files, which merits further evaluation.
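For readers unfamiliar with GDAL's HDF4 handling, the sketch below (the Python bindings rather than the authors' compiled tool; the MOD07 file name is hypothetical) shows how such per-variable extraction can be timed:

```python
# A sketch, not the authors' tool: time per-variable extraction from a
# MODIS HDF4 file through GDAL's Python bindings. The file name below is
# hypothetical; subdatasets are discovered via GetSubDatasets().
import time
from osgeo import gdal

path = "MOD07_L2.A2019001.0000.061.hdf"        # hypothetical MOD07 granule
ds = gdal.Open(path)

start = time.perf_counter()
for name, _description in ds.GetSubDatasets():  # one entry per HDF variable
    arr = gdal.Open(name).ReadAsArray()         # extract the variable into memory
elapsed = time.perf_counter() - start
print(f"wall clock time: {elapsed:.3f} s")
```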

Acceleration of computation speed for elastic wave simulation using a Graphic Processing Unit (그래픽 프로세서를 이용한 탄성파 수치모사의 계산속도 향상)

  • Nakata, Norimitsu; Tsuji, Takeshi; Matsuoka, Toshifumi
    • Geophysics and Geophysical Exploration / v.14 no.1 / pp.98-104 / 2011
  • Numerical simulation in exploration geophysics provides important insights into subsurface wave propagation phenomena. Although elastic wave simulations take longer to compute than acoustic simulations, an elastic simulator can construct more realistic wavefields including shear components. Therefore, it is suitable for exploring the responses of elastic bodies. To overcome the long duration of the calculations, we use a Graphic Processing Unit (GPU) to accelerate the elastic wave simulation. Because a GPU has many processors and a wide memory bandwidth, we can use it in a parallelised computing architecture. The GPU board used in this study is an NVIDIA Tesla C1060, which has 240 processors and a 102 GB/s memory bandwidth. Although NVIDIA's CUDA provides a parallel computing architecture, we must still optimise the usage of the different types of memory on the GPU device, and the sequence of calculations, to obtain a significant speedup of the computation. In this study, we simulate two-dimensional (2D) and three-dimensional (3D) elastic wave propagation using the Finite-Difference Time-Domain (FDTD) method on GPUs. In the wave propagation simulation, we adopt the staggered-grid method, one of the conventional FD schemes, since it achieves sufficient accuracy for numerical modelling in geophysics. Our simulator optimises memory usage on the GPU device to reduce data access times, using faster memory as much as possible; this is a key factor in GPU computing. By using one GPU device and optimising its memory usage, we reduced the computation time by a factor of more than 14 in the 2D simulation, and more than six in the 3D simulation, compared with one CPU. Furthermore, by using three GPUs, we succeeded in accelerating the 3D simulation 10 times.
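A rough NumPy rendering of the staggered-grid velocity-stress update that such a simulator applies at every time step (uniform medium and grid spacing chosen for illustration; not the authors' CUDA code) is:

```python
# Illustrative 2D staggered-grid velocity-stress FDTD update: the stencil
# whose repeated application the paper maps onto GPU threads. Material
# parameters and grid sizes are placeholder values.
import numpy as np

nx = nz = 200
dx, dt = 5.0, 5e-4
rho, lam, mu = 2000.0, 1.0e10, 1.0e10          # uniform medium (illustrative)
vx, vz = np.zeros((nz, nx)), np.zeros((nz, nx))
sxx, szz, sxz = (np.zeros((nz, nx)) for _ in range(3))

def fdtd_step():
    # update particle velocities from stress gradients (interior nodes)
    vx[1:-1, 1:-1] += dt / rho * ((sxx[1:-1, 2:] - sxx[1:-1, 1:-1]) +
                                  (sxz[1:-1, 1:-1] - sxz[:-2, 1:-1])) / dx
    vz[1:-1, 1:-1] += dt / rho * ((szz[2:, 1:-1] - szz[1:-1, 1:-1]) +
                                  (sxz[1:-1, 1:-1] - sxz[1:-1, :-2])) / dx
    # update stresses from velocity gradients
    dvx = (vx[1:-1, 1:-1] - vx[1:-1, :-2]) / dx
    dvz = (vz[1:-1, 1:-1] - vz[:-2, 1:-1]) / dx
    sxx[1:-1, 1:-1] += dt * ((lam + 2 * mu) * dvx + lam * dvz)
    szz[1:-1, 1:-1] += dt * ((lam + 2 * mu) * dvz + lam * dvx)
    sxz[1:-1, 1:-1] += dt * mu * ((vx[2:, 1:-1] - vx[1:-1, 1:-1]) +
                                  (vz[1:-1, 2:] - vz[1:-1, 1:-1])) / dx
```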

Quality of Working Life (직장생활에 대한 새로운 인식)

  • Kim, Young-Hwan (김영환)
    • Journal of Korean Society of Industrial and Systems Engineering / v.4 no.4 / pp.43-61 / 1981
  • Interest in the quality of working life is spreading rapidly, and the phrase has entered the popular vocabulary. That this should be so is probably due in large measure to changes in the values of society, nowadays accelerated as never before by the concerns and demands of younger people. However topical the concept has become, there is very little agreement on its definition. Rather, the term appears to have become a kind of depository for a variety of sometimes contradictory meanings attributed to it by different groups. A list of all the elements it is held to cover would include availability and security of employment, adequate income, safe and pleasant physical working conditions, reasonable hours of work, equitable treatment and democracy in the workplace, the possibility of self-development, control over one's work, a sense of pride in craftsmanship or product, wider career choices, and flexibility in matters such as the time of starting work, the number of working days in the week, job sharing, and so on: altogether an array that encompasses a variety of traditional aspirations and many new ones reflecting entry into the post-industrial era. The term "quality of working life" was introduced by Professor Louis E. Davis and his colleagues in the late 1960s to call attention to the prevailing and needlessly poor quality of life at the workplace. In their usage it referred to the quality of the relationship between the worker and his working environment as a whole, and was intended to emphasize the human dimension so often forgotten among the technical and economic factors in job design. Treating workers as if they were elements or cogs in the production process is not only an affront to the dignity of human life, but also a serious underestimation of the human capabilities needed to operate more advanced technologies. When tasks demand high levels of vigilance, technical problem-solving skills, self-initiated behavior, and social and communication skills, it is imperative that our concepts of man be of requisite complexity. Our aim is not just to protect workers' life and health but to give them an informed interest in their job and the opportunity to express their views and exercise control over everything that affects their working life. Certainly, so far as his work is concerned, a man must feel better protected, but he must also have a greater feeling of freedom and responsibility. Something parallel but wholly different is happening in Europe: industrial democracy. What has happened in Europe has been discrete, fixed, finalized, and legalized. Developing countries driving toward industrialization, like the R.O.K., will have to bear this human complexity in mind when designing work and its environment. Increasing attention is needed to the contradiction between autocratic rule at the workplace and democratic rights in society.

Coordinated alteration of mRNA-microRNA transcriptomes associated with exosomes and fatty acid metabolism in adipose tissue and skeletal muscle in grazing cattle

  • Muroya, Susumu; Ogasawara, Hideki; Nohara, Kana; Oe, Mika; Ojima, Koichi; Hojito, Masayuki
    • Asian-Australasian Journal of Animal Sciences / v.33 no.11 / pp.1824-1836 / 2020
  • Objective: On the hypothesis that grazing prompts the organs of cattle to secrete or internalize circulating microRNAs (c-miRNAs) in parallel with changes in energy metabolism, we aimed to clarify biological events in the adipose, skeletal muscle, and liver tissues of grazing Japanese Shorthorn (JSH) steers by a transcriptomic approach. Methods: The subcutaneous fat (SCF), biceps femoris muscle (BFM), and liver of JSH steers after three months of grazing or housing were analyzed using microarray and quantitative polymerase chain reaction (qPCR), followed by gene ontology (GO) and functional annotation analyses. Results: The transcriptomic results indicated that SCF was highly responsive to grazing compared to BFM and liver. 'Exosome', 'Carbohydrate metabolism', and 'Lipid metabolism' were extracted as relevant GO terms in SCF, BFM, and/or liver from the mRNAs altered more than 1.5-fold in grazing steers. The qPCR analyses showed a trend of upregulated gene expression related to exosome secretion and internalization (charged multivesicular body protein 4A, vacuolar protein sorting-associated protein 4B, vesicle associated membrane protein 7, caveolin 1) in the BFM and SCF, as well as upregulation of lipolysis-associated mRNAs (carnitine palmitoyltransferase 1A, hormone-sensitive lipase, perilipin 1, adipose triglyceride lipase, fatty acid binding protein 4) and most of the microRNAs (miRNAs) in SCF. Moreover, gene expression related to fatty acid uptake and inter-organ signaling (solute carrier family 27 member 4 and angiopoietin-like 4) was upregulated in BFM, suggesting activation of SCF-BFM organ crosstalk for energy metabolism. Meanwhile, expression of plasma exosomal miR-16a, miR-19b, miR-21-5p, and miR-142-5p was reduced. According to bioinformatic analyses, the c-miRNA target genes are associated with the terms 'Endosome', 'Caveola', 'Endocytosis', and 'Carbohydrate metabolism', and with pathways related to environmental information processing and the endocrine system. Conclusion: Exosome and fatty acid metabolism-related gene expression was altered in the SCF of grazing cattle, and could be regulated by miRNAs such as miR-142-5p. These changes occurred coordinately in both the SCF and BFM, suggesting involvement of exosomes in SCF-BFM organ crosstalk to modulate energy metabolism.

A Comprehensive Groundwater Modeling using Multicomponent Multiphase Theory: 1. Development of a Multidimensional Finite Element Model (다중 다상이론을 이용한 통합적 지하수 모델링: 1. 다차원 유한요소 모형의 개발)

  • Joon Hyun Kim
    • Journal of Korea Soil Environment Society / v.1 no.1 / pp.89-102 / 1996
  • An integrated model is presented to describe underground flow and mass transport using a multicomponent multiphase approach. The comprehensive governing equation is derived by considering mass and force balances of chemical species over four phases (water, oil, air, and soil) in a schematic elementary volume. Compact and systematic notations for the relevant variables and equations are introduced to facilitate the inclusion of complex migration and transformation processes and of variable spatial dimensions. The resulting nonlinear system is solved by a multidimensional finite element code. The developed code, with dynamic array allocation, is sufficiently flexible to work across a wide spectrum of computers, including an IBM ES 9000/900 vector facility, an SP2 cluster machine, Unix workstations, and PCs, for one-, two-, and three-dimensional problems. To reduce computation time and storage requirements, the system equations are decoupled and solved using a banded global matrix solver, with vector and parallel processing on the IBM 9000. To avoid the numerical oscillations of nonlinear problems in the convection-dominated transport case, the techniques of upstream weighting, mass lumping, and element-wise parameter evaluation are applied. The instability and convergence criteria of the nonlinear problems are studied for the one-dimensional analogues of the FEM and FDM. The modeling capability is demonstrated in the simulation of three-dimensional composite multiphase TCE migration. The comprehensive simulation features of the code are presented in a companion paper in this issue for specific groundwater flow and contamination problems.
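As a small illustration of one of the stabilization techniques named above, the following sketch (an illustrative 1D linear-element matrix, not the paper's code) shows mass lumping, which replaces the consistent FEM mass matrix with a diagonal matrix of its row sums to suppress oscillations in convection-dominated transport:

```python
# Mass lumping sketch: row-sum the consistent mass matrix onto the diagonal.
# The matrix pattern below is the standard 1D linear-element example.
import numpy as np

M_consistent = np.array([[2.0, 1.0, 0.0],
                         [1.0, 4.0, 1.0],
                         [0.0, 1.0, 2.0]]) / 6.0   # consistent mass matrix

M_lumped = np.diag(M_consistent.sum(axis=1))       # lumped (diagonal) version
print(M_lumped)
```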

Efficient FPGA Logic Design for Rotatory Vibration Data Acquisition (회전체 진동 데이터 획득을 위한 효율적인 FPGA 로직 설계)

  • Lee, Jung-Sik; Ryu, Deung-Ryeol
    • Journal of the Institute of Electronics Engineers of Korea IE (전자공학회논문지 IE) / v.47 no.4 / pp.18-27 / 2010
  • This paper presents an efficient data acquisition system for the vibration of rotating machinery. The system consists of an analog stage, with signal filters and amplifiers, and a digital stage, with an ADC, a DSP, an FPGA, and FIFO memory. The vibration signals acquired from the sensors pass through the analog stage under the control of the FPGA, are converted from analog to digital, and are stored in the FIFO memory. The DSP then processes the vibration data held in the FIFO memory. For analysis and diagnosis of rotating machinery, the vibration factors are defined as the RMS, peak-to-peak, average, gap, and FFT of the vibration data together with digital filtering on the DSP, and vibration events must be tracked and passed to a warning system. Processing all of these analysis steps, including event tracking, takes time: acquisition and processing must run continuously, yet while the DSP is busy processing one input signal it cannot attend to newly acquired data, so data on several channels can be lost. To use the DSP and FPGA efficiently and reduce this data loss, the design moves part of the signal processing from the DSP into the FPGA and restructures the processing as a parallel system. The resulting design achieves faster processing and a more efficient data acquisition system than a single-DSP system.
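A minimal sketch of the vibration factors listed above, computed here with NumPy on a synthetic signal rather than on the DSP/FPGA hardware (the sampling rate and rotor frequency are placeholder values):

```python
# Compute RMS, peak-to-peak, average, and FFT spectrum of one block of
# vibration samples; synthetic 30 Hz rotor tone plus noise stands in for
# the sensor data held in the FIFO.
import numpy as np

fs = 10_240                                   # sampling rate in Hz (illustrative)
t = np.arange(fs) / fs                        # one second of samples
signal = np.sin(2 * np.pi * 30.0 * t) + 0.1 * np.random.randn(fs)

rms = np.sqrt(np.mean(signal ** 2))           # RMS level
peak_to_peak = signal.max() - signal.min()    # peak-to-peak amplitude
average = signal.mean()                       # average (DC offset)
spectrum = np.abs(np.fft.rfft(signal)) / len(signal)  # one-sided FFT magnitude
freqs = np.fft.rfftfreq(len(signal), 1 / fs)

print(f"RMS={rms:.3f}, P-P={peak_to_peak:.3f}, avg={average:.4f}")
print(f"dominant frequency: {freqs[spectrum.argmax()]:.1f} Hz")
```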

Benchmark Results of a Monte Carlo Treatment Planning System (몬데카를로 기반 치료계획시스템의 성능평가)

  • Cho, Byung-Chul
    • Progress in Medical Physics / v.13 no.3 / pp.149-155 / 2002
  • Recent advances in radiation transport algorithms, computer hardware performance, and parallel computing make the clinical use of Monte Carlo based dose calculations possible. To compare the speed and accuracy of dose calculations between different codes, benchmark tests were proposed at the XIIth ICCR (International Conference on the Use of Computers in Radiation Therapy, Heidelberg, Germany, 2000). A Monte Carlo treatment planning system comprising 28 Intel Pentium CPUs of various specifications was implemented for routine clinical use. The purpose of this study was to evaluate the performance of our system using these benchmark tests. The benchmark procedure comprises three parts: a) speed of photon beam dose calculation inside a given phantom of 30.5 cm $\times$ 39.5 cm $\times$ 30 cm deep, filled with 5 ㎣ voxels, to within 2% statistical uncertainty; b) speed of electron beam dose calculation inside the same phantom; c) accuracy of photon and electron beam calculations inside a heterogeneous slab phantom, compared with reference EGS4/PRESTA calculations. In the speed benchmark tests, it took 5.5 minutes to achieve less than 2% statistical uncertainty for 18 MV photon beams. Although the net calculation for electron beams was an order of magnitude faster than for photon beams, the overall calculation time was similar owing to the overhead of maintaining parallel processing. Since our Monte Carlo code is EGSnrc, an improved version of EGS4, the accuracy tests of our system showed, as expected, very good agreement with the reference data. In conclusion, our Monte Carlo treatment planning system gives clinically meaningful results. Although more efficient codes such as MCDOSE and VMC++ have been developed, BEAMnrc, based on the EGSnrc code system, may be used for routine clinical Monte Carlo treatment planning in conjunction with clustering techniques.
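The reason parallelization shortens the time to a target statistical uncertainty is the $1/\sqrt{N}$ scaling of Monte Carlo error; the toy sketch below (a generic estimator, not the EGSnrc benchmark) illustrates that scaling, which is why splitting histories across CPUs cuts wall clock time roughly linearly, minus parallel overhead:

```python
# Toy demonstration that Monte Carlo relative uncertainty falls as
# 1/sqrt(N): batch_mean() stands in for scoring dose over N histories.
import numpy as np

rng = np.random.default_rng(0)

def batch_mean(n_histories: int) -> float:
    # placeholder for a dose-scoring run of n_histories particle histories
    return rng.exponential(1.0, n_histories).mean()

for n in (10_000, 100_000, 1_000_000):
    estimates = [batch_mean(n) for _ in range(20)]
    rel_err = np.std(estimates) / np.mean(estimates)
    print(f"N = {n:>9,d}  relative uncertainty ~ {rel_err:.4f}")
```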

Evaluation of MR-SENSE Reconstruction by Filtering Effect and Spatial Resolution of the Sensitivity Map for the Simulation-Based Linear Coil Array (선형적 위상배열 코일구조의 시뮬레이션을 통한 민감도지도의 공간 해상도 및 필터링 변화에 따른 MR-SENSE 영상재구성 평가)

  • Lee, D.H.; Hong, C.P.; Han, B.S.; Kim, H.J.; Suh, J.J.; Kim, S.H.; Lee, C.H.; Lee, M.W.
    • Journal of Biomedical Engineering Research / v.32 no.3 / pp.245-250 / 2011
  • Parallel imaging techniques provide several advantages for a multitude of MRI applications. In the SENSE technique in particular, sensitivity maps are always required to determine the reconstruction matrix, and a number of different approaches using coil sensitivity information have been demonstrated to improve image quality. Many filtering methods, such as the adaptive matched filter and nonlinear diffusion techniques, have also been proposed to optimize the suppression of background noise and to improve image quality. In this study, we performed SENSE reconstructions in computer simulations to determine the most suitable method, examining the filtering effect and the order of the polynomial fit as the spatial resolution of the sensitivity map was varied. The image was obtained on a 0.32 T (Magfinder II, Genpia, Korea) MRI system using a spin-echo pulse sequence (TR/TE = 500/20 ms, FOV = 300 mm, matrix = $128{\times}128$, thickness = 8 mm). For the simulation, the image was multiplied by four linear-array coil sensitivities formed from 2D Gaussian distributions, and complex white Gaussian noise was added. Two image-processing approaches, polynomial fitting and filtering, were applied according to the spatial resolution of the sensitivity map, and each coil image was subsampled at reduction factors (r-factors) of 2 and 4. The results were compared using the mean geometry factor (g-factor) and the artifact power (AP) at r-factors of 2 and 4. As the spatial resolution of the sensitivity map and the r-factor were varied, the polynomial fit methods gave better results than the general filtering methods. Although this was a computer simulation study with a linear coil array rather than an experiment, our method may be useful for determining the optimal sensitivity map for a linear coil array.
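For reference, the geometry factor used in such comparisons can be computed directly from the coil sensitivity matrix of the pixels folded together at a given r-factor; the sketch below uses a random complex sensitivity matrix for a hypothetical four-coil linear array (not the authors' simulation, and with unit noise covariance assumed):

```python
# SENSE geometry factor for one aliased pixel set:
#   g_i = sqrt( [(S^H S)^{-1}]_ii * [S^H S]_ii ),  g >= 1.
# S has one row per coil and one column per folded pixel (r = 2 here).
import numpy as np

n_coils, n_fold = 4, 2
S = np.random.randn(n_coils, n_fold) + 1j * np.random.randn(n_coils, n_fold)

SHS = S.conj().T @ S                     # S^H S
SHS_inv = np.linalg.inv(SHS)
g = np.sqrt(np.real(np.diag(SHS_inv) * np.diag(SHS)))
print(g)                                 # larger g => more noise amplification
```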

Parallel Computation For The Edit Distance Based On The Four-Russians' Algorithm (4-러시안 알고리즘 기반의 편집거리 병렬계산)

  • Kim, Young Ho; Jeong, Ju-Hui; Kang, Dae Woong; Sim, Jeong Seop
    • KIPS Transactions on Computer and Communication Systems / v.2 no.2 / pp.67-74 / 2013
  • Approximate string matching problems have been studied in diverse fields. Recently, fast approximate string matching algorithms have been used to reduce the time and cost of next generation sequencing. To measure the number of errors between two strings, we use a distance function such as the edit distance. Given two strings X (|X| = m) and Y (|Y| = n) over an alphabet ${\Sigma}$, the edit distance between X and Y is the minimum number of edit operations needed to convert X into Y. The edit distance can be computed using the well-known dynamic programming technique in O(mn) time and space. It can also be computed using the Four-Russians' algorithm, whose preprocessing step runs in $O((3{\mid}{\Sigma}{\mid})^{2t}t^2)$ time and $O((3{\mid}{\Sigma}{\mid})^{2t}t)$ space and whose computation step runs in O(mn/t) time and O(mn) space, where t is the block size. In this paper, we present a parallelized version of the computation step of the Four-Russians' algorithm. Our algorithm computes the edit distance between X and Y in O(m+n) time using m/t threads. We implemented both the sequential version and our parallelized version of the Four-Russians' algorithm using CUDA to compare execution times. When t = 1 and t = 2, our algorithm runs about 10 times and 3 times faster than the sequential algorithm, respectively.
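For context, here is a minimal Python version of the O(mn) dynamic-programming baseline that the Four-Russians' algorithm and its parallelization accelerate (neither the block preprocessing nor the CUDA kernel is reproduced here):

```python
# Classic O(mn) edit-distance DP: d[i][j] is the distance between the
# first i characters of x and the first j characters of y.
def edit_distance(x: str, y: str) -> int:
    m, n = len(x), len(y)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i                        # delete all of x[:i]
    for j in range(n + 1):
        d[0][j] = j                        # insert all of y[:j]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if x[i - 1] == y[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution/match
    return d[m][n]

print(edit_distance("kitten", "sitting"))  # prints 3
```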