• Title/Summary/Keyword: Q-method (Q-방법)


Construction of Jacket Matrices Based on q-ary M-sequences (q-ary M-sequences에 근거한 재킷 행렬 설계)

  • Balakannan, S.P.;Kim, Jeong-Ki;Borissov, Yuri;Lee, Moon-Ho
    • Journal of the Institute of Electronics Engineers of Korea TC, v.45 no.7, pp.17-21, 2008
  • As with binary pseudo-random sequences, q-ary m-sequences possess very good properties which make them useful in many applications. We therefore construct a class of Jacket matrices by applying additive characters of the finite field $F_q$ to the entries of all shifts of a q-ary m-sequence. In this paper, we generalize the method of obtaining conventional Hadamard matrices from binary PN-sequences, and in this way we propose a Jacket matrix construction based on q-ary m-sequences.
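
The construction described above generalizes the familiar recipe of turning the cyclic shifts of a binary PN-sequence into a Hadamard matrix. Below is a minimal sketch of that idea for a prime q, not the paper's exact Jacket construction: the field GF(3), the feedback polynomial $x^2+x+2$ (assumed primitive), and the additive character $\psi(x)=e^{2\pi ix/3}$ are illustrative choices.

```python
import numpy as np

q, m = 3, 2                      # prime field GF(3), LFSR of degree 2
N = q**m - 1                     # period of the m-sequence (8)

# q-ary m-sequence from the recursion of x^2 + x + 2 over GF(3):
# s[t+2] = -s[t+1] - 2*s[t] = 2*s[t+1] + s[t]  (mod 3)
s = [0, 1]                       # any nonzero initial state
while len(s) < N:
    s.append((2 * s[-1] + s[-2]) % q)

omega = np.exp(2j * np.pi / q)   # additive character psi(x) = omega**x

# Rows are the character values of all N cyclic shifts of the sequence.
S = np.array([[s[(t + i) % N] for t in range(N)] for i in range(N)])
C = omega ** S

# For an m-sequence, distinct rows have Hermitian inner product -1; this is
# the property that lets a bordered version behave like a Hadamard/Jacket matrix.
G = C @ C.conj().T
print(np.allclose(np.diag(G), N))                   # self inner products equal N
print(np.allclose(G + 1 - (N + 1) * np.eye(N), 0))  # cross inner products equal -1
```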

A Case Study for Evaluating Groundwater Condition in RMR and Q Rock Mass Classification on Hard Rock Tunnel (RMR 및 Q 분류시 지하수 조건 평가방법에 관한 사례 연구)

  • 이대혁;이철욱;김호영
    • Tunnel and Underground Space, v.13 no.5, pp.353-361, 2003
  • For RMR and Q rock mass classification at the design and construction stages, the groundwater condition is usually evaluated from experience because of the restrictions of the available methods. Based on results from the Taejon LNG Pilot Cavern, for which joint water pressure, groundwater inflow rate, and a hydraulic conductivity model were acquired, estimates from numerical analyses and analytical solutions were compared to verify each evaluation method. As a result, the Raymer (2001) approach was found to be efficient for estimating the inflow rate and the corresponding value.

A Performance Improvement Technique for Nash Q-learning using Macro-Actions (매크로 행동을 이용한 내시 Q-학습의 성능 향상 기법)

  • Sung, Yun-Sik;Cho, Kyun-Geun;Um, Ky-Hyun
    • Journal of Korea Multimedia Society, v.11 no.3, pp.353-363, 2008
  • A multi-agent system has a longer learning period and larger state space than a single-agent system. In this paper, we suggest a new method to reduce the learning time of Nash Q-learning in a multi-agent environment: we apply Macro-actions to Nash Q-learning to improve the learning speed. In the Nash Q-learning scheme, when agents select actions, rewards are accumulated as in Macro-actions. In the experiments, we compare Nash Q-learning using Macro-actions with general Nash Q-learning. First, we observed how many times the agents achieve their goals; agents using Nash Q-learning with 4 Macro-actions perform 9.46% better than agents using Nash Q-learning with only 4 primitive actions. Second, when agents use Macro-actions, Q-values are accumulated 2.6 times more often. Finally, agents using Macro-actions select about 44% fewer actions. As a result, the agents select fewer actions, the Macro-actions improve the Q-value updates, and the agents' learning speed improves.

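The core mechanism in the abstract, accumulating the rewards of a whole macro-action before a single Q-update, can be illustrated with a toy single-agent sketch (the Nash-equilibrium computation of true Nash Q-learning is omitted). The corridor environment, the macro-action definitions, and all hyperparameters below are illustrative assumptions, not taken from the paper.

```python
import random
from collections import defaultdict

# Toy corridor: states 0..9, goal at 9; primitive actions move one cell.
GOAL, N_STATES = 9, 10
PRIMITIVE = {"left": -1, "right": +1}
MACROS = {"left4": ["left"] * 4, "right4": ["right"] * 4}  # fixed primitive sequences
ACTIONS = list(PRIMITIVE) + list(MACROS)

alpha, gamma, eps = 0.1, 0.95, 0.1
Q = defaultdict(float)

def step(state, prim):
    nxt = min(max(state + PRIMITIVE[prim], 0), N_STATES - 1)
    return nxt, (1.0 if nxt == GOAL else -0.01)

for episode in range(500):
    s = 0
    while s != GOAL:
        a = (random.choice(ACTIONS) if random.random() < eps
             else max(ACTIONS, key=lambda x: Q[(s, x)]))
        # Execute the whole (macro-)action, accumulating the discounted reward --
        # the "rewards are accumulated" idea from the abstract.
        prims, s2, R, g = MACROS.get(a, [a]), s, 0.0, 1.0
        for prim in prims:
            s2, r = step(s2, prim)
            R += g * r
            g *= gamma
            if s2 == GOAL:
                break
        # A single Q-update per (macro-)action, discounted by its actual length.
        target = R + g * max(Q[(s2, b)] for b in ACTIONS) * (s2 != GOAL)
        Q[(s, a)] += alpha * (target - Q[(s, a)])
        s = s2

print(max(ACTIONS, key=lambda a: Q[(0, a)]))   # greedy choice at the start state
```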

Generalized Extending Method for q-ary LCZ Sequence Sets (q진 LCZ 수열군의 일반화된 확장 생성 방법)

  • Chung, Jung-Soo;Kim, Young-Sik;Jang, Ji-Woong;No, Jong-Seon;Chung, Ha-Bong
    • The Journal of Korean Institute of Communications and Information Sciences, v.33 no.11C, pp.874-879, 2008
  • In this paper, a new extension method for q-ary low correlation zone (LCZ) sequence sets is proposed, which generalizes the extension of binary LCZ sequence sets by Kim, Jang, No, and Chung. Using this method, a q-ary LCZ sequence set with parameters $(N, M, L, \epsilon)$ is extended into a q-ary LCZ sequence set with parameters $(pN, pM, p[(L+1)/p]-1, p\epsilon)$, where p is a prime and p|q.
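
To make the parameter map concrete (an illustrative numerical example, not taken from the paper): extending a quaternary (q = 4) LCZ sequence set with parameters $(N, M, L, \epsilon) = (255, 8, 15, 1)$ by the prime p = 2, which divides q, gives $(pN, pM, p[(L+1)/p]-1, p\epsilon) = (510, 16, 2\cdot 8 - 1, 2) = (510, 16, 15, 2)$; the set size and period double while the length of the low correlation zone is essentially preserved.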

Subjectivity Study on Broadcasting of Civil Defense Exercise in Nation : Focused on Fire-fighting Officers (국가 민방위 훈련 방송에 대한 주관성 연구 : 소방공무원을 중심으로)

  • Lee, Jei-Young;Kim, Jee-Hee
    • Journal of Convergence for Information Technology, v.9 no.12, pp.216-226, 2019
  • The purpose of this study was to provide basic data for developing strategic programs on the broadcasting of national civil defense exercises, focused on fire-fighting officers. A Q-population (concourse) of 33 statements was selected based on the media-related literature review described above and on interviews targeting the general public. As the next step, representative statements were chosen randomly and reduced to a final sample of 25 statements for the purposes of this study. The methodology of a Q-study does not infer the characteristics of the population from the sample, and selection of the P-sample is likewise not governed by probabilistic sampling methods. Finally, 41 people were selected as the P-sample in this research.

A Study on the Subjectivity of Semi-Professional Athletes on Talent Donation Activities

  • Young-Seol Yu
    • Journal of the Korea Society of Computer and Information, v.29 no.2, pp.169-177, 2024
  • The purpose of this paper is to explore the meaning of talent donation by semi-professional athletes using Q methodology. The Q factor analysis identified three factors. The first factor, the cautious-donation type, represents an understanding of the need for donation, the need to prepare for donation, information about donation targets, and a careful attitude toward talent donation. The second factor, the donation-authenticity type, represents donation that starts from a good mind, authenticity as a premise of donation, and the necessity of programs for donation activities. The third factor, the trust-in-the-endowment-institution type, represents trust in the target organization of the donation, interest in donation-related incidents, and the necessity of various donation methods in the sports field.

Teacher's Belief in Young Children's Play : A Q-approach (유아 놀이에 대한 교사의 신념 분석 : Q-방법론적 접근)

  • Kim, Young Sook;Kim, Sung Soo
    • Korean Journal of Child Studies, v.22 no.4, pp.257-269, 2001
  • This study identified and explained the prototypes of teachers' beliefs about children's play using Stephenson's Q-methodology. The sample consisted of 35 kindergarten teachers. A Q-deck of 48 cards, derived in part from a review of related literature, was developed by the researchers and sorted by the subjects. The obtained Q-sort scores were analyzed by factor analysis. The findings revealed that the subjects were divided into 3 belief types: "life-experienced", "developmental-disciplinary", and "cultural-environmental". Likely explanations for these results are considered, and implications are discussed.

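What distinguishes a Q-study computationally is that the factor analysis runs over persons rather than over items: each teacher's Q-sort is one variable, and teachers who correlate strongly define a shared belief type. The following is a minimal sketch of that person-correlation step, with random data standing in for the real 48-statement sorts and principal components standing in for the centroid/varimax extraction typically used in Q-methodology.

```python
import numpy as np

rng = np.random.default_rng(0)

# Columns are persons: one Q-sort (ranks of 48 statements) per kindergarten teacher.
n_statements, n_teachers = 48, 35
sorts = rng.standard_normal((n_statements, n_teachers))   # stand-in Q-sort data

# 1. Correlation matrix BETWEEN PERSONS (35 x 35), the defining step of a Q-analysis.
R = np.corrcoef(sorts, rowvar=False)

# 2. Extract factors from R (principal components as a simple stand-in).
eigvals, eigvecs = np.linalg.eigh(R)
order = np.argsort(eigvals)[::-1][:3]                     # keep three factors
loadings = eigvecs[:, order] * np.sqrt(eigvals[order])    # person-by-factor loadings

# 3. Assign each person to the factor on which they load most strongly.
types = np.argmax(np.abs(loadings), axis=1)
for k in range(3):
    print(f"belief type {k}: {np.sum(types == k)} teachers")
```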

A study of $Q_{Lg}^{-1}$ by the reversed two station method in the crust of central South Korea (Reversed Two Station Method (RSTM)에 의한 중부지방 $Q_{Lg}^{-1}$ 연구)

  • Cheong, Tae-Woong
    • Journal of the Korean Geophysical Society, v.5 no.3, pp.211-218, 2002
  • The reversed two station method (RSTM) devised by Chun et al. (1987) is widely used to obtain $Q_{Lg}^{-1}$ from Lg-wave data with hypocentral distances greater than 90 km. By applying the RSTM to Lg data from central South Korea with hypocentral distances between 95 and 381 km, we obtained a high $Q_{Lg}^{-1}$. The value of $Q_{Lg}^{-1}$ is very similar to that of southeastern South Korea, which was derived by the same method for similar distances. The studied hypocentral range seems to bias $Q_{Lg}^{-1}$ toward high values, because the decay rate in this range is higher than 0.5, the typical decay rate of surface waves.

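The bias mentioned in the last sentence can be seen from the standard Lg amplitude model (a sketch with assumed notation, not the paper's derivation): $A(f, r) = S(f)\,r^{-\gamma}\exp(-\pi f r Q_{Lg}^{-1}/v)$, where $S(f)$ is the source term, $v$ the Lg group velocity, and $\gamma$ the geometric-spreading exponent, conventionally fixed at 0.5 for surface-wave-like Lg. Taking the amplitude ratio between two distances $r_1 < r_2$ on the same path removes $S(f)$; fitting that ratio with the assumed $\gamma_0 = 0.5$ when the true decay exponent $\gamma$ is larger forces the extra distance decay into the attenuation term, inflating the estimate by roughly $\Delta Q_{Lg}^{-1} \approx (\gamma - 0.5)\,v\,\ln(r_2/r_1)\,/\,[\pi f (r_2 - r_1)]$.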

Doubly-robust Q-estimation in observational studies with high-dimensional covariates (고차원 관측자료에서의 Q-학습 모형에 대한 이중강건성 연구)

  • Lee, Hyobeen;Kim, Yeji;Cho, Hyungjun;Choi, Sangbum
    • The Korean Journal of Applied Statistics, v.34 no.3, pp.309-327, 2021
  • Dynamic treatment regimes (DTRs) are decision-making rules designed to provide personalized treatment to individuals in multi-stage randomized trials. Unlike classical methods, in which all individuals are prescribed the same type of treatment, DTRs prescribe patient-tailored treatments that take into account individual characteristics that may change over time. The Q-learning method, one of the regression-based algorithms for finding optimal treatment rules, has become popular because it is easy to implement. However, the performance of the Q-learning algorithm relies heavily on the correct specification of the Q-function for the response, especially in observational studies. In this article, we examine a number of doubly-robust weighted least-squares estimating methods for Q-learning in high-dimensional settings, where treatment models for the propensity score and penalization for sparse estimation are also investigated. We further consider flexible ensemble machine-learning methods for the treatment model to achieve double robustness, so that the optimal decision rule is correctly estimated as long as at least one of the outcome model or the treatment model is correct. Extensive simulation studies show that the proposed methods work well with practical sample sizes. The practical utility of the proposed methods is demonstrated with a real-data example.
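
A single-stage, low-dimensional sketch of the weighted least-squares idea behind such estimators (weights $|A-\hat{\pi}(X)|$, as in dWOLS-type methods) is given below; the simulation, variable names, and model choices are illustrative assumptions, and the paper's penalized high-dimensional and ensemble components are omitted.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n, p = 2000, 5
X = rng.standard_normal((n, p))

# Confounded treatment and outcome; the true blip (treatment effect) is 1 + 2*X1.
true_propensity = 1 / (1 + np.exp(-(0.8 * X[:, 0] - 0.5 * X[:, 1])))
A = rng.binomial(1, true_propensity)
Y = X[:, 0] + X[:, 1] ** 2 + A * (1.0 + 2.0 * X[:, 0]) + rng.standard_normal(n)

# 1. Treatment (propensity score) model.
pi_hat = LogisticRegression().fit(X, A).predict_proba(X)[:, 1]

# 2. Balancing weights |A - pi_hat|: the source of the double robustness --
#    the blip estimate stays consistent if either the treatment-free outcome
#    model or the propensity model is correctly specified.
w = np.abs(A - pi_hat)

# 3. Weighted least squares for the Q-function  Y ~ 1 + X + A + A*X.
design = np.column_stack([np.ones(n), X, A, A[:, None] * X])
sw = np.sqrt(w)
beta, *_ = np.linalg.lstsq(design * sw[:, None], Y * sw, rcond=None)

# 4. The A and A*X coefficients estimate the blip; treat when psi0 + psi'x > 0.
psi = beta[p + 1:]
print("estimated blip coefficients:", np.round(psi, 2))  # roughly [1, 2, 0, 0, 0, 0]
```

In this sketch the treatment-free part of the outcome model is deliberately misspecified (the $X_2^2$ term is omitted from the fit), so the correctly specified propensity model and the balancing weights are what keep the blip estimate near its true value.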

Design and implementation of Robot Soccer Agent Based on Reinforcement Learning (강화 학습에 기초한 로봇 축구 에이전트의 설계 및 구현)

  • Kim, In-Cheol
    • The KIPS Transactions: Part B, v.9B no.2, pp.139-146, 2002
  • The robot soccer simulation game is a dynamic multi-agent environment. In this paper we suggest a new reinforcement learning approach to each agent's dynamic positioning in such a dynamic environment. Reinforcement learning is the machine-learning paradigm in which an agent learns, from indirect and delayed reward, an optimal policy for choosing sequences of actions that produce the greatest cumulative reward. Reinforcement learning therefore differs from supervised learning in that no input-output pairs are presented as training examples. Furthermore, model-free reinforcement learning algorithms like Q-learning do not require defining or learning any model of the surrounding environment. Nevertheless, these algorithms can learn the optimal policy if the agent can visit every state-action pair infinitely often. However, the biggest problem of monolithic reinforcement learning is that its straightforward application does not scale up to more complex environments, due to the intractably large space of states. In order to address this problem, we suggest Adaptive Mediation-based Modular Q-Learning (AMMQL) as an improvement of the existing Modular Q-Learning (MQL). While simple modular Q-learning combines the results from each learning module in a fixed way, AMMQL combines them more flexibly by assigning a different weight to each module according to its contribution to rewards. Therefore, in addition to resolving the problem of a large state space effectively, AMMQL can show higher adaptability to environmental changes than pure MQL. In this paper we use the AMMQL algorithm as the learning method for dynamic positioning of the robot soccer agent, and implement a robot soccer agent system called Cogitoniks.
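
A schematic sketch of the adaptive weighted mediation idea described above follows; it is not the authors' AMMQL algorithm itself, and the module abstractions, the credit rule, and all constants are illustrative assumptions.

```python
import random
from collections import defaultdict

ACTIONS = ["up", "down", "left", "right"]

class Module:
    """One learning module with its own state abstraction and Q-table."""
    def __init__(self, abstract):
        self.abstract = abstract          # maps the global state to a local state
        self.Q = defaultdict(float)
        self.weight = 1.0                 # adapted from the module's reward contribution

    def q(self, state, action):
        return self.Q[(self.abstract(state), action)]

    def update(self, s, a, r, s2, alpha=0.1, gamma=0.9):
        ls, ls2 = self.abstract(s), self.abstract(s2)
        best = max(self.Q[(ls2, b)] for b in ACTIONS)
        self.Q[(ls, a)] += alpha * (r + gamma * best - self.Q[(ls, a)])

def select_action(modules, state, eps=0.1):
    """Mediation step: act greedily on the weighted sum of module Q-values."""
    if random.random() < eps:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: sum(m.weight * m.q(state, a) for m in modules))

def adapt_weights(modules, state, action, reward, beta=0.05):
    """Shift weight toward modules whose own greedy choice produced the reward."""
    for m in modules:
        agreed = action == max(ACTIONS, key=lambda a: m.q(state, a))
        m.weight = max(m.weight + beta * reward * (1.0 if agreed else -1.0), 0.01)
    total = sum(m.weight for m in modules)
    for m in modules:
        m.weight /= total                 # keep the weights normalized

# Illustrative use: one module sees only the x coordinate, the other only y.
modules = [Module(lambda s: s[0]), Module(lambda s: s[1])]
state, reward, next_state = (2, 3), 1.0, (2, 4)
action = select_action(modules, state)
for m in modules:
    m.update(state, action, reward, next_state)
adapt_weights(modules, state, action, reward)
print(action, [round(m.weight, 3) for m in modules])
```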