• Title/Summary/Keyword: small universe

A Study of Dark Photon at the Electron-Positron Collider Experiments Using KISTI-5 Supercomputer

  • Park, Kihong;Cho, Kihyeon
    • Journal of Astronomy and Space Sciences, v.38 no.1, pp.55-63, 2021
  • The universe is well known to consist of dark energy, dark matter, and standard model (SM) particles. Dark matter dominates the matter density of the universe and is thought to be linked with the dark photon, a hypothetical hidden-sector particle similar to the photon of electromagnetism and proposed as a force carrier. Due to the extremely small cross-section of dark matter, a large amount of data needs to be processed, so the central processing unit (CPU) time must be optimized. In this work, using MadGraph5 as a simulation toolkit, we examined the CPU time and the cross-section of dark matter production at an electron-positron collider for three parameters: the center-of-mass energy, the dark photon mass, and the coupling constant. The signal process involved a dark photon that couples only to heavy leptons, and we dealt only with the case of the dark photon decaying into two muons. We used a simplified model that covers dark matter and dark photon particles as well as the SM particles. To compare simulation CPU times, one or more cores of the KISTI-5 supercomputer (Nurion Knights Landing and Skylake nodes) and a local Linux machine were used. Our results can help optimize high-energy physics software through high-performance computing and enable users to incorporate parallel processing.
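
As a rough illustration of the kind of parameter scan described in this abstract (not the authors' actual workflow), the sketch below times an external event-generation command for each combination of center-of-mass energy, dark photon mass, and coupling constant. The executable path, the run-script syntax, and the parameter names are placeholders; a real MadGraph5 study would prepare proper process and parameter cards for each grid point.

```python
# Minimal sketch of a CPU-time scan over (sqrt(s), dark photon mass, coupling).
# The mg5_aMC invocation and the "run_point.cmd" script are placeholders only.
import itertools
import subprocess
import time

sqrt_s_values = [3.0, 10.0, 91.2]      # GeV, hypothetical e+e- energies
mass_values = [0.5, 1.0, 2.0]          # GeV, hypothetical dark photon masses
coupling_values = [1e-3, 1e-2]         # hypothetical coupling constants

results = []
for sqrt_s, m_ap, eps in itertools.product(sqrt_s_values, mass_values, coupling_values):
    # Placeholder run script for this grid point (syntax is illustrative only).
    with open("run_point.cmd", "w") as f:
        f.write(f"set ebeam1 {sqrt_s / 2}\n")
        f.write(f"set ebeam2 {sqrt_s / 2}\n")
        f.write(f"set MAP {m_ap}\n")       # hypothetical parameter name
        f.write(f"set epsilon {eps}\n")    # hypothetical parameter name
        f.write("launch\n")

    start = time.perf_counter()
    # Hypothetical call; the actual executable path and arguments depend on the setup.
    subprocess.run(["./bin/mg5_aMC", "run_point.cmd"], check=True)
    elapsed = time.perf_counter() - start

    results.append((sqrt_s, m_ap, eps, elapsed))
    print(f"sqrt(s)={sqrt_s} GeV, m_A'={m_ap} GeV, eps={eps}: {elapsed:.1f} s")
```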

ACCELERATION OF COSMIC RAYS AT LARGE SCALE COSMIC SHOCKS IN THE UNIVERSE

  • KANG HYESUNG;JONES T. W.
    • Journal of The Korean Astronomical Society, v.35 no.4, pp.159-174, 2002
  • Cosmological hydrodynamic simulations of large-scale structure in the universe have shown that accretion shocks and merger shocks form due to flow motions associated with the gravitational collapse of nonlinear structures. The estimated speeds and curvature radii of these shocks can be as large as a few thousand km/s and several Mpc, respectively. According to diffusive shock acceleration theory, populations of cosmic-ray particles can be injected and accelerated to very high energies by astrophysical shocks in tenuous plasmas. In order to explore cosmic-ray acceleration at cosmic shocks, we have performed nonlinear numerical simulations of cosmic-ray (CR) modified shocks with the newly developed CRASH (Cosmic Ray Amr SHock) numerical code. We adopted the Bohm diffusion model for CRs, based on the hypothesis that strong Alfven waves are self-generated by streaming CRs. The shock formation simulation includes a plasma-physics-based 'injection' model that transfers a small proportion of the thermal proton flux through the shock into low-energy CRs for acceleration. We found that, for strong accretion shocks, CRs can absorb most of the shock kinetic energy, and the accretion shock speed is reduced by up to 20% compared to pure gas dynamic shocks. For merger shocks with small Mach numbers, however, the energy transfer to CRs is only about 10-20%, with an associated CR particle fraction of 10^-3. Nonlinear feedback due to the CR pressure is insignificant in the latter shocks. Although detailed results depend on the models for particle diffusion and injection, these calculations show that cosmic shocks in large-scale structure could provide acceleration sites for extragalactic cosmic rays of the highest energies.
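
For orientation, the standard test-particle result of diffusive shock acceleration relates the sonic Mach number of a shock to the slope of the accelerated momentum spectrum through the compression ratio. The sketch below evaluates this textbook relation for a few Mach numbers; it is not the nonlinear, CR-modified calculation performed with CRASH.

```python
# Test-particle diffusive shock acceleration: f(p) ∝ p^(-q),
# with q = 3r/(r-1) and compression ratio r = (gamma+1)M^2 / ((gamma-1)M^2 + 2).
GAMMA = 5.0 / 3.0  # adiabatic index of the thermal gas

def compression_ratio(mach):
    return (GAMMA + 1.0) * mach**2 / ((GAMMA - 1.0) * mach**2 + 2.0)

def spectral_index(mach):
    r = compression_ratio(mach)
    return 3.0 * r / (r - 1.0)

for mach in (2.0, 3.0, 10.0, 100.0):  # weak merger shocks to strong accretion shocks
    print(f"M = {mach:6.1f}: r = {compression_ratio(mach):.2f}, q = {spectral_index(mach):.2f}")
```

Strong shocks approach r = 4 and q = 4, while low-Mach-number merger shocks give steeper spectra, consistent with the weaker acceleration reported above.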

Redshift Space Distortion on the Small Scale Clustering of Structure

  • Park, Hyunbae;Sabiu, Cristiano;Li, Xiao-dong;Park, Changbom;Kim, Juhan
    • The Bulletin of The Korean Astronomical Society, v.42 no.2, pp.78.3-78.3, 2017
  • The positions of galaxies in comoving Cartesian space vary under different cosmological parameter choices, inducing a redshift-dependent scaling in the galaxy distribution. The shape of the two-point correlation function of galaxies exhibits a significant redshift evolution when the galaxy sample is analyzed under a cosmology differing from the true, simulated one. In our previous works, we made use of this geometrical distortion to constrain the values of the cosmological parameters governing the expansion history of the universe. The current work continues that effort as a strategy to constrain cosmological parameters using redshift-invariant physical quantities. We now aim to understand the redshift evolution of the full shape of the small-scale, anisotropic galaxy clustering and give a firmer theoretical footing to our previous works.
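
The geometric distortion exploited here arises because observed redshifts are mapped to comoving coordinates through an assumed expansion history. The sketch below, a generic flat LambdaCDM calculation with illustrative parameter values (not the analysis pipeline of this work), shows how the comoving distance to a fixed redshift shifts when Omega_m is varied.

```python
# Comoving distance D_C(z) = (c/H0) * integral_0^z dz' / E(z'),
# with E(z) = sqrt(Om*(1+z)^3 + (1-Om)) for a flat LambdaCDM cosmology.
import numpy as np

C_KM_S = 299792.458  # speed of light in km/s

def comoving_distance(z, h0=67.0, omega_m=0.31, n_steps=10000):
    zs = np.linspace(0.0, z, n_steps)
    e_of_z = np.sqrt(omega_m * (1.0 + zs) ** 3 + (1.0 - omega_m))
    return (C_KM_S / h0) * np.trapz(1.0 / e_of_z, zs)  # Mpc

z = 0.5
for om in (0.25, 0.31, 0.40):  # illustrative parameter choices
    print(f"Omega_m = {om:.2f}: D_C(z={z}) = {comoving_distance(z, omega_m=om):7.1f} Mpc")
```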

Cosmological Information from the Small-scale Redshift Space Distortions

  • Tonegawa, Motonari;Park, Changbom;Zheng, Yi;Kim, Juhan;Park, Hyunbae;Hong, Sungwook
    • The Bulletin of The Korean Astronomical Society, v.44 no.2, pp.52.3-52.3, 2019
  • We present our first attempt at understanding the dual impact of the large-scale density and velocity environment on the formation of the very first astrophysical objects in the Universe. Following the recently developed quasi-linear perturbation theory for this effect, we introduce our publicly available initial condition generator, BCCOMICS (Baryon Cold dark matter COsMological Initial Condition generator for Small scales), which so far provides the most self-consistent treatment of this physics beyond the usual linear perturbation theory. From a suite of uniform-grid N-body + hydrodynamics simulations initialized with BCCOMICS, we find that the formation of the first astrophysical objects is strongly affected by both the density and the velocity environment. Overdensity and streaming velocity (of baryons against cold dark matter) are found to have positive and negative impacts, respectively, on the formation of astrophysical objects, which we quantify in terms of various physical variables.
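
As a schematic of what a cosmological initial-condition generator does at its core (not the BCCOMICS algorithm itself, which additionally treats the baryon-CDM streaming velocity and the large-scale environment), the sketch below draws a Gaussian random overdensity field with a toy power-law power spectrum on a uniform periodic grid.

```python
# Gaussian random overdensity field delta(x) with P(k) ∝ k^n (toy spectrum;
# real generators use a transfer function and proper normalization).
import numpy as np

def gaussian_random_field(n_grid=64, box_size=1.0, spectral_index=-2.0, seed=42):
    rng = np.random.default_rng(seed)
    kfreq = np.fft.fftfreq(n_grid, d=box_size / n_grid) * 2.0 * np.pi
    kx, ky, kz = np.meshgrid(kfreq, kfreq, kfreq, indexing="ij")
    kmag = np.sqrt(kx**2 + ky**2 + kz**2)
    kmag[0, 0, 0] = 1.0                      # avoid division by zero at k = 0

    # White noise, colored in Fourier space by sqrt(P(k)).
    noise_k = np.fft.fftn(rng.standard_normal((n_grid, n_grid, n_grid)))
    delta_k = noise_k * np.sqrt(kmag**spectral_index)
    delta_k[0, 0, 0] = 0.0                   # enforce zero mean overdensity

    delta = np.real(np.fft.ifftn(delta_k))
    return delta / delta.std()               # normalize to unit variance

field = gaussian_random_field()
print("field mean, std:", field.mean(), field.std())
```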

The clumping factor of the IGM at the epoch of reionization in the SPHINX simulations

  • Yoo, Taehwa;Kimm, Taysun
    • The Bulletin of The Korean Astronomical Society, v.46 no.1, pp.58.2-58.2, 2021
  • The clumping factor of the intergalactic medium (IGM) is one of the most important quantities that determine the process of cosmic reionization. However, theoretical attempts to predict the clumping factor have been hampered by the finite resolution of simulations, because small-scale structures in the IGM were under-resolved. We use high-resolution (~10 pc) cosmological radiation-hydrodynamic simulations, SPHINX, to estimate the clumping factor of the IGM. We find that the global clumping factors (C_HII > 3) are higher than previously estimated (C_HII = 3), indicating that resolving the small structures is indeed crucial to accurately model the reionization history of the Universe. We also discuss the local clumping factors, which should be useful for making analytic predictions of local ionization histories.
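
The clumping factor quoted above is, in essence, the mean of the squared density over the square of the mean density, C = <n^2> / <n>^2, evaluated over IGM cells. The short sketch below computes it for an arbitrary density array (with optional cell weights, e.g. volume weights), purely to illustrate the definition; it is not the SPHINX analysis itself.

```python
# Clumping factor C = <n^2> / <n>^2 of a density field; C = 1 for a uniform medium.
import numpy as np

def clumping_factor(density, weights=None):
    density = np.asarray(density, dtype=float)
    if weights is None:
        weights = np.ones_like(density)
    mean_n = np.average(density, weights=weights)
    mean_n2 = np.average(density**2, weights=weights)
    return mean_n2 / mean_n**2

# Toy example: a log-normal density field, more clumped than a uniform one.
rng = np.random.default_rng(0)
n_hii = np.exp(rng.normal(0.0, 1.0, size=100000))
print("clumping factor:", clumping_factor(n_hii))
```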

Status Report of the NISS and SPHEREx Missions

  • Jeong, Woong-Seob;Park, Sung-Joon;Moon, Bongkon;Lee, Dae-Hee;Park, Won-Kee;Lee, Duk-Hang;Ko, Kyeongyeon;Pyo, Jeonghyun;Kim, Il-Joong;Park, Youngsik;Nam, Ukwon;Kim, Minjin;Ko, Jongwan;Im, Myungshin;Lee, Hyung Mok;Lee, Jeong-Eun;Shin, Goo-Hwan;Chae, Jangsoo;Matsumoto, Toshio
    • The Bulletin of The Korean Astronomical Society, v.41 no.1, pp.58.2-58.2, 2016
  • The NISS (Near-infrared Imaging Spectrometer for Star formation history) onboard NEXTSat-1 is a near-infrared instrument optimized for the first small satellite of the NEXTSat series. Its capability for both imaging and low-resolution spectroscopy with a field of view of 2 × 2 deg. in the near-infrared range from 0.9 to 3.8 μm is a unique function of the NISS. The major scientific mission is to study the cosmic star formation history in the local and distant universe. The flight model of the NISS is being developed and tested. After integration into NEXTSat-1, it will be tested under space environment conditions. The NISS will be launched in 2017 and operated for two years. As an extension of the NISS, SPHEREx (Spectro-Photometer for the History of the Universe, Epoch of Reionization, and Ices Explorer) is a NASA SMEX (Small Explorer) mission proposed together with KASI (PI institute: Caltech). It will perform an all-sky near-infrared spectral survey to probe the origin of our Universe, explore the origin and evolution of galaxies, and explore whether planets around other stars could harbor life. SPHEREx is designed to have a wider FoV of 3.5 × 7 deg. as well as a wider spectral range from 0.7 to 4.8 μm. After passing the first selection process, SPHEREx is under Phase-A study. The final selection will be made at the end of 2016. Here, we report the current status of the NISS and SPHEREx missions.

Discovery of a New Mechanism of Dust Destruction in Strong Radiation Fields and Implications

  • Hoang, Thiem;Tram, Le Ngoc;Lee, Hyseung;Ahn, Sang-hyeon
    • The Bulletin of The Korean Astronomical Society, v.44 no.1, pp.44.3-44.3, 2019
  • Massive stars, supernovae, and kilonovae are among the most luminous radiation sources in the universe. Observations usually show near- to mid-infrared (NIR-MIR, 1-5 micron) emission excess from H II regions around young massive star clusters (YMSCs) and anomalous dust extinction and polarization toward Type Ia supernovae (SNe Ia). The popular explanation for such NIR-MIR excess and unusual dust properties is the predominance of small grains (size a < 0.05 micron) relative to large grains (a > 0.1 micron) in the local environment of these strong radiation sources. Why small grains are predominant in these environments remains a mystery. Here we report a new mechanism of dust destruction based on centrifugal stress within extremely fast rotating grains spun up by radiative torques, namely the RAdiative Torque Disruption (RATD) mechanism, which can resolve this question. We find that RATD can destroy large grains located within a distance of ~1 pc from a massive star of luminosity L ~ 10^4 L_sun and from a supernova. This increases the abundance of small grains relative to large grains and successfully reproduces the observed NIR-MIR excess and anomalous dust extinction/polarization. We show that small grains produced by RATD can also explain the steep far-UV rise in extinction curves toward starburst and high-redshift galaxies, as well as the decrease of the escape fraction of Ly-alpha photons observed from H II regions surrounding YMSCs.
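
To make the disruption criterion concrete: in the RATD picture a grain of radius a and density rho is torn apart once the centrifugal tensile stress rho * omega^2 * a^2 / 4 exceeds its tensile strength S_max, giving a critical angular velocity omega_crit = (2/a) * sqrt(S_max/rho). The sketch below evaluates this for illustrative grain parameters; the numerical values are typical assumptions, not those of the paper.

```python
# Critical rotation rate for rotational disruption (RATD):
# centrifugal stress S = rho * omega^2 * a^2 / 4 reaches the tensile strength S_max
# at omega_crit = (2 / a) * sqrt(S_max / rho).  All quantities in cgs units.
import math

def omega_crit(a_cm, s_max=1.0e9, rho=3.0):
    """a_cm: grain radius [cm]; s_max: tensile strength [erg/cm^3]; rho: density [g/cm^3]."""
    return (2.0 / a_cm) * math.sqrt(s_max / rho)

for a_um in (0.05, 0.1, 0.25):                 # grain radii in microns
    a_cm = a_um * 1.0e-4
    print(f"a = {a_um:.2f} um: omega_crit ~ {omega_crit(a_cm):.2e} rad/s")
```

Larger grains reach the critical stress at lower rotation rates, which is why strong radiation fields preferentially remove the large grains and enhance the small-grain abundance.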

Memory Organization for a Fuzzy Controller.

  • Jee, K.D.S.;Poluzzi, R.;Russo, B.
    • Proceedings of the Korean Institute of Intelligent Systems Conference, 1993.06a, pp.1041-1043, 1993
  • Fuzzy logic based control theory has gained much interest in the industrial world thanks to its ability to formalize and solve, in a very natural way, many problems that are very difficult to quantify at an analytical level. This paper shows a solution for treating membership functions inside hardware circuits. The proposed hardware structure optimizes the memory size by using a particular form of the vectorial representation. The process of memorizing fuzzy sets, i.e. their membership functions, has always been one of the more problematic issues for hardware implementation, due to the quite large memory space that is needed. To simplify such an implementation, it is common [1,2,8,9,10,11] to limit the membership functions to those having a triangular, trapezoidal, or otherwise pre-defined shape. These kinds of functions cover a large spectrum of applications with limited memory usage, since they can be memorized by specifying very few parameters (height, base, critical points, etc.). This, however, results in a loss of computational power due to the computation required at intermediate points. A solution to this problem is obtained by discretizing the universe of discourse U, i.e. by fixing a finite number of points and memorizing the values of the membership functions at such points [3,10,14,15]. Such a solution provides a satisfying computational speed and a very high precision of definition, and it gives users the opportunity to choose membership functions of any shape. However, it can also incur significant memory waste: for each of the given fuzzy sets, many elements of the universe of discourse may have a membership value equal to zero. It has also been noticed that in almost all cases the points common among fuzzy sets, i.e. points with non-null membership values in more than one set, are very few. More specifically, in many applications, for each element u of U there exist at most three fuzzy sets for which the membership value is not null [3,5,6,7,12,13]. Our proposal is based on these hypotheses. Moreover, we use a technique that, even though it does not restrict the shapes of the membership functions, strongly reduces the computational time for the membership values and optimizes the function memorization. Figure 1 represents a term set whose characteristics are common for fuzzy controllers and to which we will refer in the following. This term set has a universe of discourse with 128 elements (so as to have good resolution), 8 fuzzy sets that describe the term set, and 32 levels of discretization for the membership values. Clearly, the numbers of bits necessary for the given specifications are 5 for 32 truth levels, 3 for 8 membership functions, and 7 for 128 levels of resolution. The memory depth is given by the dimension of the universe of discourse (128 in our case) and is represented by the memory rows. The length of a word of memory is defined by Length = nfm * (dm(m) + dm(fm)), where nfm is the maximum number of non-null values over the elements of the universe of discourse, dm(m) is the dimension of the values of the membership function m, and dm(fm) is the dimension of the word representing the index of the membership function. In our case, Length = 3 * (5 + 3) = 24. The memory dimension is therefore 128 * 24 bits. If we had chosen to memorize all values of the membership functions, we would have needed to memorize on each memory row the membership value of every fuzzy set; the word dimension would then be 8 * 5 bits, and the dimension of the memory would have been 128 * 40 bits. Consistently with our hypothesis, in Fig. 1 each element of the universe of discourse has a non-null membership value in at most three fuzzy sets. Focusing on elements 32, 64, and 96 of the universe of discourse, they will be memorized as follows. The computation of the rule weights is done by comparing the bits that represent the index of the membership function with the word of the program memory. The output bus of the program memory (μCOD) is given as input to a comparator (combinatory net). If the index is equal to the bus value, then one of the non-null weights derives from the rule and is produced as output; otherwise the output is zero (Fig. 2). It is clear that the memory dimension of the antecedent is reduced in this way, since only non-null values are memorized. Moreover, the time performance of the system is equivalent to that of a system using vectorial memorization of all weights. The dimensioning of the word is influenced by some parameters of the input variable. The most important parameter is the maximum number of membership functions (nfm) having a non-null value for each element of the universe of discourse. From our study in the field of fuzzy systems, we see that typically nfm ≤ 3 and there are at most 16 membership functions. At any rate, such a value can be increased up to the physical dimensional limit of the antecedent memory. A less important role in the optimization of the word dimension is played by the number of membership functions defined for each linguistic term. The table below shows the required word dimension as a function of these parameters and compares our proposed method with the method of vectorial memorization [10]. Summing up, the characteristics of our method are: users are not restricted to membership functions with specific shapes; the number of fuzzy sets and the resolution of the vertical axis have a very small influence on the memory space; weight computations are done by a combinatorial network, so the time performance of the system is equivalent to that of the vectorial method; and the number of non-null membership values for any element of the universe of discourse is limited. Such a constraint is usually not very restrictive, since many controllers obtain good precision with only three non-null weights. The method briefly described here has been adopted by our group in the design of an optimized version of the coprocessor described in [10].
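
The memory-sizing argument above can be checked with a few lines of arithmetic. The sketch below reproduces the 128 × 24-bit versus 128 × 40-bit comparison for the example term set (128 elements, 8 fuzzy sets, 32 truth levels, at most 3 non-null memberships per element); it only restates the abstract's numbers, not the hardware design itself.

```python
# Word length and memory size for the sparse memorization scheme versus full
# vectorial memorization, following the example term set in the abstract.
import math

u_size = 128        # elements in the universe of discourse
n_sets = 8          # fuzzy sets in the term set
n_levels = 32       # discretization levels of the membership values
nfm = 3             # max number of non-null memberships per element

bits_value = math.ceil(math.log2(n_levels))   # 5 bits per membership value
bits_index = math.ceil(math.log2(n_sets))     # 3 bits per fuzzy-set index

word_length = nfm * (bits_value + bits_index)     # Length = nfm*(dm(m)+dm(fm)) = 24
sparse_bits = u_size * word_length                # 128 * 24 bits
vectorial_bits = u_size * n_sets * bits_value     # 128 * 40 bits

print(f"word length:      {word_length} bits")
print(f"sparse memory:    {sparse_bits} bits")
print(f"vectorial memory: {vectorial_bits} bits")
print(f"saving:           {100 * (1 - sparse_bits / vectorial_bits):.0f}%")
```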

MAGNETIC FIELD IN THE LOCAL UNIVERSE AND THE PROPAGATION OF UHECRS

  • DOLAG KLAUS;GRASSO DARIO;SPRINGEL VOLKER;TKACHEV IGOR
    • Journal of The Korean Astronomical Society, v.37 no.5, pp.427-431, 2004
  • We use simulations of large-scale structure formation to study the build-up of magnetic fields (MFs) in the intergalactic medium. Our basic assumption is that cosmological MFs grow in a magnetohydrodynamical (MHD) amplification process driven by structure formation out of a magnetic seed field present at high redshift. This approach is motivated by previous simulations of the MFs in galaxy clusters which, under the same hypothesis that we adopt here, succeeded in reproducing Faraday rotation measurements (RMs) in clusters of galaxies. Our ΛCDM initial conditions for the dark matter density fluctuations have been statistically constrained by the observed large-scale density field within a sphere of 110 Mpc around the Milky Way, based on the IRAS 1.2-Jy all-sky redshift survey. As a result, the positions and masses of prominent galaxy clusters in our simulation coincide closely with their real counterparts in the Local Universe. We find excellent agreement between the RMs of our simulated galaxy clusters and observational data. The improved numerical resolution of our simulations compared to previous work also allows us to study the MF in large-scale filaments, sheets, and voids. By tracing the propagation of ultra-high-energy (UHE) protons in the simulated MF, we construct full-sky maps of the expected deflection angles of protons with arrival energies E = 10^20 eV and 4 × 10^19 eV, respectively. Accounting only for the structures within 110 Mpc, we find that strong deflections are produced only if UHE protons cross galaxy clusters. The total area on the sky covered by these structures is, however, very small. Over still larger distances, multiple crossings of sheets and filaments may give rise to noticeable deflections over a significant fraction of the sky; the exact amount and angular distribution depend on the model adopted for the magnetic seed field. Based on our results, we argue that over a large fraction of the sky the deflections are likely to remain smaller than the present experimental angular sensitivity. Therefore, we conclude that forthcoming air shower experiments should be able to locate sources of UHE protons and shed more light on the nature of cosmological MFs.
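
For scale, the deflection of a UHE proton crossing a region of roughly coherent field can be estimated from its Larmor radius, r_L = E/(eB), which is about 110 Mpc for E = 10^20 eV in a 1 nG field, with deflection angle theta ≈ L/r_L over a path length L. The sketch below evaluates this order-of-magnitude estimate with illustrative field strengths and path lengths; it is not the trajectory integration performed in the simulations.

```python
# Order-of-magnitude deflection of a UHE proton over a path L through a coherent
# magnetic field B: Larmor radius r_L = E / (e * B), deflection theta ~ L / r_L.
import math

ERG_PER_EV = 1.602e-12      # erg per eV
E_CHARGE = 4.803e-10        # elementary charge [esu]
CM_PER_MPC = 3.086e24       # cm per Mpc

def larmor_radius_mpc(energy_ev, b_ngauss):
    energy_erg = energy_ev * ERG_PER_EV
    b_gauss = b_ngauss * 1.0e-9
    return energy_erg / (E_CHARGE * b_gauss) / CM_PER_MPC

def deflection_deg(energy_ev, b_ngauss, path_mpc):
    return math.degrees(path_mpc / larmor_radius_mpc(energy_ev, b_ngauss))

for energy_ev in (4.0e19, 1.0e20):
    r_l = larmor_radius_mpc(energy_ev, b_ngauss=1.0)
    theta = deflection_deg(energy_ev, b_ngauss=1.0, path_mpc=10.0)
    print(f"E = {energy_ev:.1e} eV: r_L ~ {r_l:.0f} Mpc, theta(10 Mpc, 1 nG) ~ {theta:.1f} deg")
```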

Korean Contribution to All-Sky Near-infrared Spectro-Photometric Survey

  • Jeong, Woong-Seob;Pyo, Jeonghyun;Park, Sung-Joon;Moon, Bongkon;Lee, Dae-Hee;Park, Won-Kee;Lee, Duk-Hang;Ko, Kyeongyeon;Kim, Il-Joong;Kim, Minjin;Yang, Yujin;Ko, Jongwan;Song, Yong-Seon;Yu, Young Sam;Im, Myungshin;Lee, Hyung Mok;Lee, Jeong-Eun;Shim, Hyunjin;Matsumoto, Toshio
    • The Bulletin of The Korean Astronomical Society, v.41 no.2, pp.37.3-37.3, 2016
  • SPHEREx (Spectro-Photometer for the History of the Universe, Epoch of Reionization, and Ices Explorer) is one of the candidates for NASA's Astrophysics Small Explorer (SMEX) mission, proposed together with KASI (PI institute: Caltech). It will perform an all-sky near-infrared spectral survey to probe the origin of the Universe and of water in planetary systems, and to explore the evolution of galaxies. SPHEREx is designed to cover a wide field of view of 3.5 × 7 deg. as well as a wide spectral range from 0.7 to 4.8 μm by using four linear variable filters. SPHEREx is under Phase-A study to finalize the conceptual design and test plan of the instrument. The international partner, KASI, will contribute to SPHEREx in hardware as well as in the major science cases. The final selection will be made in early 2017. Here, we report the current status of the SPHEREx mission.
