• Title/Summary/Keyword: Order of Magnitude

A Unified Software Architecture for Storage Class Random Access Memory (스토리지 클래스 램을 위한 통합 소프트웨어 구조)

  • Baek, Seung-Jae;Choi, Jong-Moo
    • Journal of KIISE:Computer Systems and Theory
    • /
    • v.36 no.3
    • /
    • pp.171-180
    • /
    • 2009
  • Slowly but surely, we are seeing the emergence of a variety of embedded systems that employ Storage Class RAM (SCRAM) such as FeRAM, MRAM and PRAM. SCRAM has not only a DRAM characteristic, namely random byte-unit access, but also a disk characteristic, namely non-volatility. In this paper, we propose a new software architecture that allows SCRAM to be used for main memory and for secondary storage simultaneously. The proposed software architecture has two core modules: a SCRAM driver and a SCRAM manager. The SCRAM driver manages SCRAM directly and exports the low-level interfaces required by upper-layer software modules, including traditional file systems, buddy systems and our SCRAM manager. The SCRAM manager treats file objects and memory objects as a single object type and deals with them in a unified way, so that they can be interchanged without copy overhead. Experiments conducted on a real embedded board with FeRAM have shown that the SCRAM driver indeed supports both the traditional FAT file system and the buddy system seamlessly. The results also reveal that the SCRAM manager makes effective use of both characteristics of SCRAM and performs an order of magnitude better than the traditional file system and buddy system.
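
The copy-free interchange of memory objects and file objects described above can be pictured with a small sketch. This is a minimal Python illustration under assumed names (ScramObject, ScramManager, publish_as_file); it is not the interface actually exported by the SCRAM driver or manager in the paper, only the idea that converting between the two object kinds is a metadata change over a shared buffer.

```python
# Minimal sketch (not the paper's actual API): a unified object layer in which
# "file" and "memory" views share one SCRAM-backed buffer, so converting a
# memory object into a file object is a metadata update, not a data copy.
from dataclasses import dataclass


@dataclass
class ScramObject:
    """A region of SCRAM usable either as a memory object or as a file object."""
    buffer: bytearray                  # stands in for a byte-addressable SCRAM region
    kind: str = "memory"               # "memory" or "file"
    name: str | None = None            # file name once published to the file namespace


class ScramManager:
    """Hypothetical manager that interchanges objects without copying their data."""

    def __init__(self) -> None:
        self._namespace: dict[str, ScramObject] = {}

    def alloc_memory_object(self, size: int) -> ScramObject:
        return ScramObject(bytearray(size))

    def publish_as_file(self, obj: ScramObject, name: str) -> ScramObject:
        # Zero-copy conversion: only the object's metadata changes.
        obj.kind, obj.name = "file", name
        self._namespace[name] = obj
        return obj

    def open_as_memory(self, name: str) -> ScramObject:
        # The returned object aliases the same buffer that backs the file.
        obj = self._namespace[name]
        obj.kind = "memory"
        return obj


if __name__ == "__main__":
    mgr = ScramManager()
    m = mgr.alloc_memory_object(16)
    m.buffer[:5] = b"hello"
    f = mgr.publish_as_file(m, "greeting.txt")   # no bytes are copied
    assert mgr.open_as_memory("greeting.txt").buffer is m.buffer
```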

Seismic Studies on Ground Motion using the Multicomponent Complex Trace Analysis Method (다성분 복소 트레이스 분석법을 이용한 지진파 입자운동 연구)

  • Lee, So-Young;Kim, Ki-Young;Kim, Han-Joon
    • Journal of the Korean Geophysical Society
    • /
    • v.3 no.1
    • /
    • pp.37-48
    • /
    • 2000
  • In order to investigate in-line ground motions caused by earthquakes, we examine the multicomponent complex trace analysis method (MCTAM) on synthetic data and apply it to real earthquake data. The experiment with synthetic data gives correct information on the arrival times, duration of individual phases, and approaching angles of body waves. Rayleigh waves are also easily identified with the MCTAM. A deep earthquake with a magnitude of 7.3 was chosen to test various polarization attributes of ground motion. For P waves, the instantaneous phase difference between the vertical and the in-line horizontal components ${\phi}(t)$, the instantaneous reciprocal ellipticity ${\rho}(t)$, and the approaching angle ${\tau}(t)$ are computed to be ${\pm}180^{\circ}$, $0{\sim}0.25$, and $-30^{\circ}{\sim}-45^{\circ}$, respectively. For S waves, ${\phi}(t)$ tends to vary, ${\rho}(t)$ takes values of $0{\sim}0.3$, and ${\tau}(t)$ remains near vertical. A relatively low-frequency signal registered just prior to the S-wave event is interpreted as a P-wave phase based on its polarization characteristics. The velocities of the P and S waves are computed to be 8.633 km/s and 4.762 km/s, and their ray-path parameters 0.074 s/km and 0.197 s/km. The dynamic Poisson's ratio obtained from the P- and S-wave velocities is 0.281.
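
As a quick check of the quoted values, the dynamic Poisson's ratio follows from the standard relation $\nu = (V_p^2 - 2V_s^2)/(2(V_p^2 - V_s^2))$, and the complex trace in this kind of analysis is conventionally built with a Hilbert transform. The Python sketch below (with illustrative signal names only; the MCTAM attribute definitions are not spelled out in the abstract) reproduces 0.281 from $V_p = 8.633$ km/s and $V_s = 4.762$ km/s.

```python
# Sketch of two computations related to the abstract, assuming standard definitions
# (scipy.signal.hilbert for the complex trace; the usual Vp/Vs relation for the
# dynamic Poisson's ratio). Signal names and the example traces are illustrative only.
import numpy as np
from scipy.signal import hilbert


def complex_trace(x: np.ndarray) -> np.ndarray:
    """Analytic (complex) trace of a real seismogram component."""
    return hilbert(x)


def instantaneous_phase(x: np.ndarray) -> np.ndarray:
    """Instantaneous phase of one component, in degrees."""
    return np.degrees(np.angle(complex_trace(x)))


def dynamic_poisson_ratio(vp: float, vs: float) -> float:
    """nu = (Vp^2 - 2 Vs^2) / (2 (Vp^2 - Vs^2))."""
    return (vp**2 - 2.0 * vs**2) / (2.0 * (vp**2 - vs**2))


if __name__ == "__main__":
    # Phase difference phi(t) between a vertical and an in-line horizontal component
    t = np.linspace(0.0, 1.0, 500)
    vertical = np.sin(2 * np.pi * 5 * t)
    inline = np.sin(2 * np.pi * 5 * t + np.pi)        # 180 degrees out of phase
    phi = instantaneous_phase(vertical) - instantaneous_phase(inline)

    # Reproduces the quoted value: 8.633 km/s and 4.762 km/s give nu ~ 0.281
    print(round(dynamic_poisson_ratio(8.633, 4.762), 3))
```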

Evaluation on the Radiological Shielding Design of a Hot Cell Facility (핫셀시설의 방사선 안전성 평가)

  • 조일제;국동학;구정회;정원명;유길성;이은표;박성원
    • Journal of Nuclear Fuel Cycle and Waste Technology(JNFCWT)
    • /
    • v.2 no.1
    • /
    • pp.1-11
    • /
    • 2004
  • A hot cell facility for research activities related to the lithium reduction of spent fuel, designed to permit safe handling of source materials with radioactivity levels up to 1,385 TBq, is planned to be built. To meet this goal, the facility is designed to keep gamma and neutron radiation below the recommended dose rates in normally occupied areas. Calculations performed with QAD-CGGP and MCNP-4C are used to evaluate whether the proposed engineering design concepts provide acceptable dose rates during normal operation of the hot cell facility. The maximum effective gamma dose rates on the surfaces of the facility, at the operation area, and at the service area calculated by QAD-CGGP are estimated to be $2.10{\times}10^{-3}$, $2.97{\times}10^{-3}$ and $1.01{\times}10^{-1}$ mSv/h, respectively, and those calculated by MCNP-4C are $1.60{\times}10^{-3}$, $2.99{\times}10^{-3}$ and $7.88{\times}10^{-2}$ mSv/h, respectively. The dose rates contributed by neutrons are one order of magnitude lower than those of the gamma sources. Therefore, it is confirmed that the radiological design of the hot cell facility satisfies the Korean criteria of 0.01 mSv/h for the operation area and 0.15 mSv/h for the service (maintenance) area.
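
A quick arithmetic check of the reported numbers against the stated criteria can be written as below. The dose-rate values and limits are those quoted in the abstract; the mapping of the three values to facility surface, operation area and service area follows the order of the sentence and is my reading of it, not something the abstract states explicitly.

```python
# Simple arithmetic check of the quoted results against the stated criteria.
# The numbers are taken directly from the abstract; the surface/operation/service
# ordering of the three values is an assumption.

LIMIT_OPERATION_AREA = 0.01    # mSv/h, Korean criterion quoted in the abstract
LIMIT_SERVICE_AREA = 0.15      # mSv/h

# Maximum effective gamma dose rates (mSv/h): facility surface, operation area, service area
qad_cggp = [2.10e-3, 2.97e-3, 1.01e-1]
mcnp_4c = [1.60e-3, 2.99e-3, 7.88e-2]

for code, (surface, operation, service) in (("QAD-CGGP", qad_cggp), ("MCNP-4C", mcnp_4c)):
    ok = operation < LIMIT_OPERATION_AREA and service < LIMIT_SERVICE_AREA
    print(f"{code}: operation {operation:.2e} < {LIMIT_OPERATION_AREA}, "
          f"service {service:.2e} < {LIMIT_SERVICE_AREA} -> {'pass' if ok else 'fail'}")
```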

Genetic Analysis of Carcass Traits in Hanwoo with Different Slaughter End-points (세가지 도축 종료 시점을 공변량으로 하는 한우 도체형질에 대한 유전능력 분석모형)

  • Choy, Y.H.;Yoon, H.B.;Choi, S.B.;Chung, H.W.
    • Journal of Animal Science and Technology
    • /
    • v.47 no.5
    • /
    • pp.703-710
    • /
    • 2005
  • Data from Hanwoo steers and bull calves were analyzed to see the phenotypic and genetic relationships between carcass traits from four different covariance models. Four models fit test station and test period as fixed effect of contemporary group and sire as random effect assuming paternal half-sib relationships among animals. Each model fits one of linear covariate (s) of different slaughter end points-age at slaughter in the first order, age at slaughter in the first and second order, slaughter weight or back fat thickness at 12-13th rib of cold carcass. Age at slaughter in its second order was not significant. Age at slaughter accounted for signifi- cant amount of genetic variances and covariances of carcass traits. Heritability estimates of back fat thickness, rib eye area, carcass weight, marbling score and dressing percentage were 0.34, 0.22, 0.24, 0.42 and 0.18, respectively at constant age basis. The genetic correlation between carcass weight and the other variables were all positive and low to high in magnitude. Genetic correlations between back fat thickness and rib eye area and between marbling score and dressing percentage were low but negative. Variance and covariance structure between these traits were shifted to a great extent when these variables were regressed on slaughter weight or on back fat thickness. These two covariates counteracted to each other but they adjusted each carcass variable or their interrelationship according to differential growth of body components, bone, muscle and fat. Slaughter weight tended to decrease genetic variances and covariances of carcass weight and between component traits and back fat thickness tended to increase those of rib eye area and between rib eye area and carcass weight.

Developmental Difference in Metacognitive Accuracy between High School Students and College Students (메타인지 정확성의 발달 차이 연구: 고등학생과 대학생 데이터)

  • Bae, Jinhee;Cho, Hye-Seung;Kim, Kyungil
    • Korean Journal of Cognitive Science
    • /
    • v.26 no.1
    • /
    • pp.53-67
    • /
    • 2015
  • Metacognitive monitoring is a higher-order cognitive activity: accurately understanding one's own cognitive processes enables effective control of performance. The brain area most associated with metacognition is the prefrontal cortex (PFC), which matures late, so it can be inferred that monitoring ability is still developing in late adolescence. In this study, we explored the developmental difference in monitoring accuracy between high school students and college students by measuring JOLs (Judgments of Learning). Participants were asked to study Spanish-Korean word pairs and to judge their future memory performance. Both groups believed that they would remember the word pairs better than their actual performance showed. Absolute bias scores, which measure how far predictions deviate from actual scores, showed an interaction between group and task difficulty. Specifically, participants judged their learning state quite accurately in the easy task condition, whereas in the difficult task condition both groups predicted their learning inaccurately, and the magnitude of the bias was larger for the high school students.
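
The abstract does not give the exact scoring formula; a common definition of an absolute bias score is the absolute difference between mean judged recall and actual recall, which the short sketch below assumes.

```python
# Minimal sketch of an absolute bias score, assuming the common definition
# |mean JOL - mean recall accuracy|; the paper's exact scoring is not given
# in the abstract, so this is illustrative only.
from statistics import mean


def absolute_bias(jols: list[float], recalled: list[bool]) -> float:
    """Absolute difference between mean predicted recall (0-1) and actual recall rate."""
    predicted = mean(jols)
    actual = mean(1.0 if r else 0.0 for r in recalled)
    return abs(predicted - actual)


if __name__ == "__main__":
    # Overconfidence: a mean JOL of 0.75 versus an actual recall rate of 0.50
    print(absolute_bias([0.8, 0.7, 0.9, 0.6], [True, False, True, False]))  # 0.25
```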

Performance Evaluation of Hash Join Algorithm on Flash Memory SSDs (플래쉬 메모리 SSD 기반 해쉬 조인 알고리즘의 성능 평가)

  • Park, Jang-Woo;Park, Sang-Shin;Lee, Sang-Won;Park, Chan-Ik
    • Journal of KIISE:Computing Practices and Letters
    • /
    • v.16 no.11
    • /
    • pp.1031-1040
    • /
    • 2010
  • Hash join is one of the core algorithms in database management systems. If a hash join cannot complete in one pass because the available memory is insufficient (i.e., hash table overflow), however, it incurs a few sequential writes and many random reads. With a hard disk as the temporary storage for hash joins, the I/O time is dominated by slow random reads in the probing phase. Meanwhile, flash-memory-based SSDs (flash SSDs) are becoming popular, and in the foreseeable future flash SSDs will replace hard disks in enterprise databases. In contrast to a hard disk, a flash SSD has no mechanical components and offers low random-read latency, and thus it can boost hash join performance. In this paper, we investigate several important and practical issues that arise when a flash SSD is used as temporary storage for hash join. First, we reveal the I/O patterns of hash join in detail and explain why a flash SSD can outperform a hard disk by more than an order of magnitude. Second, we present and analyze the impact of cluster size (i.e., the I/O unit in hash join) on performance. Finally, we empirically demonstrate that, while a commercial query optimizer is error-prone in predicting execution time with a hard disk as temporary storage, it can precisely estimate execution time with a flash SSD. In summary, we show that, when used as temporary storage for hash join, a flash SSD provides more reliable cost estimation as well as fast performance.
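
For readers unfamiliar with the algorithm being measured, the sketch below shows the textbook two-phase (partition, then build/probe) hash join that the partitioning, probing-phase and cluster-size discussion refers to. It is an in-memory Python illustration of the control flow, not the paper's implementation; a real system writes the partitions ("clusters") to temporary storage between the two phases, which is exactly where the hard disk versus flash SSD difference arises.

```python
# Textbook two-phase (partition, then build/probe) hash join sketch. This in-memory
# version only illustrates control flow; real systems spill partitions to temporary
# storage in phase 1 and read them back (often randomly) while probing in phase 2.
from collections import defaultdict
from typing import Any, Iterable


def hash_join(build_rows: Iterable[tuple[Any, Any]],
              probe_rows: Iterable[tuple[Any, Any]],
              n_partitions: int = 8) -> list[tuple[Any, Any, Any]]:
    # Phase 1: partition both inputs by a hash of the join key (sequential writes).
    build_parts = defaultdict(list)
    probe_parts = defaultdict(list)
    for key, payload in build_rows:
        build_parts[hash(key) % n_partitions].append((key, payload))
    for key, payload in probe_rows:
        probe_parts[hash(key) % n_partitions].append((key, payload))

    # Phase 2: for each partition, build a hash table and probe it.
    out = []
    for p in range(n_partitions):
        table = defaultdict(list)
        for key, payload in build_parts[p]:
            table[key].append(payload)
        for key, payload in probe_parts[p]:
            for match in table[key]:
                out.append((key, match, payload))
    return out


if __name__ == "__main__":
    r = [(1, "r1"), (2, "r2"), (3, "r3")]
    s = [(2, "s2"), (3, "s3"), (3, "s3b")]
    print(hash_join(r, s))   # [(2, 'r2', 's2'), (3, 'r3', 's3'), (3, 'r3', 's3b')]
```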

A Meta-analysis of variables related to Empowerment of social workers (사회복지사의 임파워먼트와 관련된 변인에 관한 메타분석)

  • Lee, Jung-Gun
    • Journal of the Korea Academia-Industrial cooperation Society
    • /
    • v.21 no.12
    • /
    • pp.401-410
    • /
    • 2020
  • The purpose of this study is to reveal the magnitude of the correlation effect sizes between variables related to the empowerment of social workers. To this end, 30 studies published in Korea were analyzed through meta-analysis. The results are as follows. First, the overall effect size across the variable groups was medium; the job-related positive variables showed the largest effect size among the variable groups, followed by the organization-related variables, the personal psychology-related variables, the job-related negative variables, and the personal background variables, in that order. Second, among the personal background variables, all factors except position were found to have a small, or nearly small, effect size. Self-esteem, a personal psychology-related variable, showed a medium effect size close to a large one. Among the organization-related variables, organizational commitment and transformational leadership showed large effect sizes, and organizational culture showed a medium effect size. In addition, job satisfaction, a positive job-related variable, showed a large effect size, burnout, a negative job-related variable, also showed a large effect size, and turnover intention showed a medium effect size.
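
The abstract does not spell out the pooling procedure or the effect-size cutoffs, so the sketch below is only a generic illustration of how correlation effect sizes are commonly combined in a meta-analysis (Fisher's z transformation with sample-size weights) and labeled against Cohen's conventional 0.1/0.3/0.5 benchmarks; the example correlations are made up.

```python
# Generic illustration of pooling correlation effect sizes in a meta-analysis:
# Fisher's z transformation, a sample-size weighted mean, then back-transformation.
# Not the paper's exact procedure; the example correlations are invented.
import math


def pool_correlations(rs: list[float], ns: list[int]) -> float:
    """Back-transformed weighted mean of Fisher z-transformed correlations."""
    zs = [math.atanh(r) for r in rs]
    weights = [n - 3 for n in ns]                      # inverse-variance weights for z
    z_bar = sum(w * z for w, z in zip(weights, zs)) / sum(weights)
    return math.tanh(z_bar)


def label_effect(r: float) -> str:
    """Cohen's conventional benchmarks for correlations (0.1 / 0.3 / 0.5)."""
    r = abs(r)
    if r >= 0.5:
        return "large"
    if r >= 0.3:
        return "medium"
    if r >= 0.1:
        return "small"
    return "negligible"


if __name__ == "__main__":
    r_pooled = pool_correlations([0.42, 0.55, 0.38], [120, 95, 210])
    print(round(r_pooled, 2), label_effect(r_pooled))
```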

Partial Denoising Boundary Image Matching Based on Time-Series Data (시계열 데이터 기반의 부분 노이즈 제거 윤곽선 이미지 매칭)

  • Kim, Bum-Soo;Lee, Sanghoon;Moon, Yang-Sae
    • Journal of KIISE
    • /
    • v.41 no.11
    • /
    • pp.943-957
    • /
    • 2014
  • Removing noise, called denoising, is essential for more intuitive and more accurate results in boundary image matching. This paper deals with the partial denoising problem, which allows a limited amount of partial noise embedded in boundary images. To solve this problem, we first define partial denoising time-series, which can be generated from an original image time-series by removing a variety of partial noises, and propose an efficient mechanism that quickly obtains those partial denoising time-series in the time-series domain rather than the image domain. We next present the partial denoising distance, the minimum distance from a query time-series to all possible partial denoising time-series generated from a data time-series, and we use this partial denoising distance as the similarity measure in boundary image matching. Using the partial denoising distance, however, incurs severe computational overhead since a large number of partial denoising time-series must be considered. To solve this problem, we derive a tight lower bound for the partial denoising distance and formally prove its correctness. We also propose range and k-NN search algorithms that exploit the partial denoising distance in boundary image matching. Through extensive experiments, we finally show that our lower-bound-based approach improves search performance by up to an order of magnitude in partial denoising-based boundary image matching.
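
The paper's tight lower bound is not given in the abstract, so the sketch below only illustrates the generic lower-bound filtering pattern that such range search algorithms use: a cheap bound prunes candidates and the expensive exact distance is computed only for survivors. Both distance functions here are stand-ins, not the partial denoising distance itself.

```python
# Generic lower-bound filtering pattern for range search over time-series: prune with
# a cheap lower bound, then verify survivors with the exact (expensive) distance.
# The paper's partial denoising distance and its tight lower bound are not given in
# the abstract, so both functions below are stand-ins.
import math
from typing import Callable, Iterable


def range_search(query: list[float],
                 data: Iterable[list[float]],
                 eps: float,
                 lower_bound: Callable[[list[float], list[float]], float],
                 exact_dist: Callable[[list[float], list[float]], float]) -> list[list[float]]:
    results = []
    for series in data:
        if lower_bound(query, series) > eps:      # pruned without the expensive distance
            continue
        if exact_dist(query, series) <= eps:
            results.append(series)
    return results


def euclidean(a: list[float], b: list[float]) -> float:
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))


if __name__ == "__main__":
    # Stand-in bound: |mean(a) - mean(b)| * sqrt(n) never exceeds the Euclidean
    # distance (Cauchy-Schwarz), so it is a valid, if loose, lower bound.
    def mean_bound(a, b):
        n = min(len(a), len(b))
        return abs(sum(a[:n]) / n - sum(b[:n]) / n) * math.sqrt(n)

    q = [0.0, 0.1, 0.2, 0.3]
    db = [[0.0, 0.1, 0.2, 0.35], [5.0, 5.1, 5.2, 5.3]]
    print(range_search(q, db, eps=0.1, lower_bound=mean_bound, exact_dist=euclidean))
```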

A Method for Building a Data Warehouse System Linked with the Internet and an Intranet (인터넷, 인트라넷과 연계되는 데이타웨어하우스 시스템의 구축방안)

  • 박주석;김찬수
    • Proceedings of the Korean Operations and Management Science Society Conference
    • /
    • 1996.10a
    • /
    • pp.73-77
    • /
    • 1996
  • Information becomes a powerful competitive weapon for an enterprise when it is in the hands of decision makers. To satisfy decision makers' need for information, data are extracted from operational systems and stored in a data warehouse. The data warehouse stores historical data organized by key business dimensions. Delivering data warehouse information to decision makers, however, requires considerable support under a conventional client/server system: once data have been extracted and organized for user access, analysis software must be installed on each user's computer, and new operators must be hired to serve external users. Diverse user requirements and continual turnover of users place a serious support burden on the enterprise. A client/server system also has a locational limitation in providing information to users outside the enterprise. The intranet and the Internet offer an answer to these problems of the client/server environment. An intranet not only simplifies access to the data warehouse but also provides a new level of information sharing and joint analysis among decision makers, and the Internet gives convenient access to anyone outside the enterprise who wants to use the information the enterprise provides. In other words, the goal of a data warehouse and the goals of the intranet and Internet coincide: easy access to data. Based on this observation, this paper proposes a method for building a data warehouse system that operates over an intranet and the Internet.

A study on the multifrontal method in interior point method (내부점 선형계획법에서의 멀티프런탈방법에 관한 연구)

  • 김병규;박순달
    • Proceedings of the Korean Operations and Management Science Society Conference
    • /
    • 1995.09a
    • /
    • pp.370-380
    • /
    • 1995
  • The interior point method has recently attracted attention as a solution technique for linear programming, showing excellent results not only in computational complexity but also in running time. At every iteration the method must solve a symmetric positive definite linear system, and this step accounts for 80-90% of the total running time of the interior point method. The speed of the interior point method therefore depends on solving symmetric positive definite linear systems efficiently. Such systems are solved by triangular factorization; when the factorization is carried out by Gaussian elimination, not every entry of the matrix is needed at every step. Information about all entries is not required at the same time: only the columns involved in the current elimination step are needed, and the method built on this idea is the frontal method. The frontal method has the advantage of being well suited to large linear programming problems. Extending the frontal method to process several fronts at once gives the multifrontal method, and because the algorithm itself is well suited to parallel processing, much research has also been carried out on its parallelization. This study surveys the research to date on the frontal method based on the elimination tree and on the supernodal frontal method, which introduces the notion of supernodes into the frontal method, and investigates efficient data structures suited to the frontal method and parallel algorithms applicable to the multifrontal method. As an expected outcome, developing such data structures and parallel algorithms will help improve the running speed of interior point methods for linear programming.
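
The symmetric positive definite solve that the abstract says consumes 80-90% of an interior point iteration is, in the usual primal-dual setting, the normal-equations system $AD^2A^{T}\Delta y = r$. The sketch below sets up and factors such a system with a dense Cholesky factorization as a stand-in for the sparse frontal/multifrontal factorization discussed in the paper; the matrices are randomly generated for illustration.

```python
# Sketch of the SPD solve that dominates each interior point iteration: form the
# normal-equations matrix A D^2 A^T and factor it. A dense Cholesky factorization
# (scipy.linalg.cho_factor) stands in for the sparse frontal/multifrontal
# factorization the paper studies; the data here are made up.
import numpy as np
from scipy.linalg import cho_factor, cho_solve

rng = np.random.default_rng(0)

# Constraint matrix A (m x n) and a strictly positive diagonal scaling D^2
m, n = 4, 8
A = rng.standard_normal((m, n))
d = rng.uniform(0.5, 2.0, size=n)           # positive diagonal of D^2 at the current iterate
rhs = rng.standard_normal(m)                # right-hand side of the normal equations

# M = A D^2 A^T is symmetric positive definite when A has full row rank
M = A @ np.diag(d) @ A.T

# The factorization step is where frontal/multifrontal methods spend their effort
factor = cho_factor(M, lower=True)
dy = cho_solve(factor, rhs)

print(np.allclose(M @ dy, rhs))             # True: the step solves the SPD system
```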
