• Title/Abstract/Keyword: Selecting action

Search results: 87 items

AMT 차량의 변속제어 특성에 관한 연구 (Characteristics of transmission control of an AMT vehicle)

  • 공진영;송창섭
    • 한국정밀공학회지
    • /
    • Vol. 23, No. 3
    • /
    • pp.86-93
    • /
    • 2006
  • This study investigates the characteristics of an AMT (Automated Manual Transmission), which is composed of a clutch part and a transmission part. When a shifting signal is received from the controller, the clutch is disengaged first, the shifting action (including the selecting action) follows, and the clutch is engaged last. The transmission shifting response is affected by various parameters of the clutch and transmission control elements. Analytical results are in fair agreement with experimental results. It is found that the operating pressure level matters most for the AMT response, while other parameters, such as the natural frequency and damping ratio of the control valve, are less important (see the sketch below).
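
The paper's full clutch/valve model is not reproduced in this abstract; the sketch below is a minimal, hypothetical lumped-parameter model (all numbers assumed) showing how the operating pressure level and the control valve's natural frequency and damping ratio can jointly shape the shift response.

```python
# Hypothetical lumped model, NOT the authors' simulation: the control
# valve is a 2nd-order lag (natural frequency wn, damping ratio zeta)
# whose output pressure drives a shift piston of area A over a stroke.

def shift_time(p_set, wn=80.0, zeta=0.7, A=2e-4, m=0.5, c=40.0,
               stroke=0.01, dt=1e-5, t_end=0.5):
    """Time for the shift sleeve to travel its stroke (all SI units)."""
    p = dp = x = v = t = 0.0
    while t < t_end:
        ddp = wn**2 * (p_set - p) - 2.0 * zeta * wn * dp  # valve dynamics
        dp += ddp * dt
        p += dp * dt
        v += (A * p - c * v) / m * dt                     # piston dynamics
        x += v * dt
        t += dt
        if x >= stroke:                                   # sleeve engaged
            return t
    return float("nan")

# Raising the operating pressure level shortens the shift markedly in
# this sketch, echoing the finding that pressure dominates the response.
for p_set in (5e5, 10e5, 20e5):                           # Pa, assumed
    print(f"p = {p_set:.0e} Pa -> t_shift = {shift_time(p_set)*1e3:.1f} ms")
```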

SWOT분석을 토대로 한 서비스 FMEA에서의 개선조치전략 (Corrective Action Strategy based on SWOT Analysis in Service FMEA)

  • ;권혁무
    • 품질경영학회지
    • /
    • Vol. 40, No. 1
    • /
    • pp.25-38
    • /
    • 2012
  • Service FMEA may yield several possible corrective actions for each failure mode with a large RPN. Corrective actions for each service failure are usually interrelated with the customers and environmental elements of the service system. SWOT analysis provides an effective way to analyze the inner and outer environmental impacts of each corrective action. In this paper, we suggest a method for selecting and ranking corrective action strategies in service operations based on SWOT analysis. Every candidate corrective action strategy is evaluated and ranked on the basis of the impact factors of the SWOT variables, the correlations between possible corrective actions and SWOT variables, and the RPNs of the service failures. The most desirable set of corrective actions is then selected considering the preference score of each corrective action, the required resources, and the budgetary allowance (see the sketch below). The proposed methodology is demonstrated with an illustrative example.
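
A minimal scoring sketch of that ranking-and-selection idea, with invented actions, RPNs, impact factors, and correlations (the paper's exact formulas may differ):

```python
# Hypothetical data: candidate action -> (RPN of addressed failure,
# required resources, correlation with each SWOT variable).
actions = {
    "retrain staff":      (240, 30, {"S": 0.8, "W": 0.6, "O": 0.4}),
    "add self-checkout":  (180, 50, {"O": 0.9, "T": 0.3}),
    "extend call center": (240, 40, {"W": 0.7, "T": 0.5}),
}
impact = {"S": 0.9, "W": 0.8, "O": 0.7, "T": 0.6}  # assumed impact factors

def preference(rpn, corrs):
    """Preference score: RPN weighted by SWOT impact x correlation."""
    return rpn * sum(impact[v] * c for v, c in corrs.items())

ranked = sorted(actions.items(), reverse=True,
                key=lambda kv: preference(kv[1][0], kv[1][2]))

budget, chosen = 80, []
for name, (rpn, cost, corrs) in ranked:    # greedy pick within budget
    if cost <= budget:
        chosen.append(name)
        budget -= cost
print("selected corrective actions:", chosen)
```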

고위공직 후보자-엔지니어-최고경영자 교육 프로그램의 액션러닝 프로세스 분석 (An Analysis of Action Learning Process in Education Programs for Senior Officials, Engineers, Chief Executive Officers)

  • 정현곤;문승한
    • 디지털융복합연구
    • /
    • Vol. 10, No. 1
    • /
    • pp.87-104
    • /
    • 2012
  • The purpose of this study is to analyze the action learning process of each education program, examining the orientation, clarification of the task, data activities, search for alternatives and selection of an action plan, and execution and results of the action learning process in each course. The action learning course for candidates for senior public office should raise its outcomes through policy site visits and analysis of hands-on cases; for the POSCO engineer action learning course, it is important to systematize the knowledge acquired through action learning problem solving into the company's intellectual assets; and the cross-industry convergence CEO action learning course should set its direction through the opinions of virtual task sponsors, such as the consumer groups that buy the company's products or its shareholders.

지능형 사이버공격 대비 상황 탄력적 / 실행 중심의 사이버 대응 메커니즘 (A situation-Flexible and Action-Oriented Cyber Response Mechanism against Intelligent Cyber Attack)

  • 김남욱;엄정호
    • 디지털산업정보학회논문지
    • /
    • Vol. 16, No. 3
    • /
    • pp.37-47
    • /
    • 2020
  • In the 4th industrial revolution, cyberspace will evolve toward hyper-connectivity, super-convergence, and super-intelligence as advanced information and communication technologies develop, connecting the nation's core infrastructure into a single network. As 4th-industrial-revolution technologies are applied to cyber attack techniques, attacks are evolving in intelligent and sophisticated ways. Against intelligent cyber attacks, policy-oriented, preplanned, and hierarchical cyber response strategies can hardly guarantee self-defense in cyberspace. Therefore, this research proposes a situation-flexible and action-oriented cyber response mechanism that can respond flexibly by selecting the optimal smart security solution according to changes in the cyber attack steps. The proposed mechanism operates the smart security solutions according to action-oriented detailed strategies, and an artificial intelligence-based decision-making system is used to select the smart security technology with the best responsiveness (a hypothetical selection sketch follows).
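
As a purely hypothetical illustration of that selection step (the solution names, attack steps, and responsiveness scores are all invented, not from the paper), one action-oriented rule is simply to pick, at each observed attack step, the solution with the highest estimated responsiveness:

```python
# Invented responsiveness table: solution -> {attack step: score}.
responsiveness = {
    "smart firewall":  {"recon": 0.9, "delivery": 0.6, "c2": 0.3},
    "behavior EDR":    {"recon": 0.2, "delivery": 0.7, "c2": 0.8},
    "deception decoy": {"recon": 0.6, "delivery": 0.4, "c2": 0.9},
}

def select_solution(step):
    """Action-oriented selection: best solution for the current step."""
    return max(responsiveness, key=lambda s: responsiveness[s].get(step, 0.0))

for step in ("recon", "delivery", "c2"):   # attack steps as they unfold
    print(step, "->", select_solution(step))
```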

Seamless Mobility of Heterogeneous Networks Based on Markov Decision Process

  • Preethi, G.A.;Chandrasekar, C.
    • Journal of Information Processing Systems
    • /
    • Vol. 11, No. 4
    • /
    • pp.616-629
    • /
    • 2015
  • A mobile terminal can expect a number of handoffs within a call's duration. During a mobile call, when a mobile node moves from one cell to another, it should connect to another access point within its range; if its own network cannot provide support, it must change over to another base station. When moving to another network, quality of service parameters need to be considered. In our study we used the Markov decision process approach for seamless handoff, as it gives optimal results for selecting a network compared with other multiple-attribute decision-making methods. We used a network cost function for selecting the handoff network and a connection reward function based on the values of the quality of service parameters, and we also examined the packet delivery ratio for constant bit rate and transmission control protocol traffic. We used the policy iteration algorithm to determine the optimal policy (a compact sketch follows). Our enhanced handoff algorithm outperforms previous multiple-attribute decision-making methods.
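
A compact policy iteration sketch on a toy handoff MDP (the paper's cost and reward functions are replaced here by invented numbers; states are the serving networks and actions pick the next network):

```python
import numpy as np

nets, gamma = ["WLAN", "UMTS", "WiMAX"], 0.9
n = len(nets)
# P[a, s, s']: probability of landing on network s' when action a
# (hand off to network a) is taken while served by network s.
P = np.stack([np.full((n, n), 0.1) + 0.7 * np.eye(n)[a] for a in range(n)])
# Net reward of choosing network a: QoS-based connection reward - cost.
reward = np.array([0.8, 0.5, 0.6]) - np.array([0.3, 0.1, 0.2])

policy = np.zeros(n, dtype=int)
while True:
    # Policy evaluation: solve (I - gamma * P_pi) V = r_pi exactly.
    P_pi = np.array([P[policy[s], s] for s in range(n)])
    V = np.linalg.solve(np.eye(n) - gamma * P_pi, reward[policy])
    # Policy improvement: greedy target network in every state.
    Q = np.array([[reward[a] + gamma * P[a, s] @ V for a in range(n)]
                  for s in range(n)])
    new_policy = Q.argmax(axis=1)
    if np.array_equal(new_policy, policy):
        break
    policy = new_policy

for s in range(n):
    print(f"serving {nets[s]:5s} -> optimal choice: {nets[policy[s]]}")
```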

Subjective Point Prediction Algorithm for Decision Analysis

  • Kim, Soung-Hie
    • 한국경영과학회지
    • /
    • Vol. 8, No. 1
    • /
    • pp.31-40
    • /
    • 1983
  • An uncertain, dynamically evolving process has been a continuing challenge for decision problems. The changes of the dynamic random variable (drv) that characterize such a process are very important to the decision-maker in selecting a course of action in a world perceived as uncertain, complex, and dynamic. Using the proposed subjective point prediction algorithm, based on a modified recursive filter, the decision-maker obtains plausible points that are revised periodically with the passage of time (a generic sketch follows).
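
The modified recursive filter itself is not given in this abstract; the sketch below shows only the generic recursive point-prediction pattern in the same spirit (the gain and the drv observations are assumed):

```python
def recursive_predict(observations, gain=0.4):
    """Return the sequence of plausible points; `gain` weighs how much
    each new observation corrects the previous prediction."""
    points = []
    estimate = observations[0]             # initial subjective point
    for y in observations:
        estimate += gain * (y - estimate)  # recursive correction step
        points.append(estimate)
    return points

drv = [10.0, 12.5, 11.0, 14.0, 15.5]       # assumed drv observations
print(recursive_predict(drv))
```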


A Fuzzy BOXES Scheme for the Cartpole Control

  • Kwon, Sung-Gyu
    • 제어로봇시스템학회:학술대회논문집
    • /
    • 제어로봇시스템학회 ICCAS 2005
    • /
    • pp.1710-1715
    • /
    • 2005
  • Two fuzzy controllers are coordinated to control a cartpole so that the pole is balanced and the cart is brought back to the track origin. The coordination is provided by a BOXES scheme, established by evaluating the outcomes of the control actions taken by each fuzzy controller. It is found that the control scheme selects the proper fuzzy controller well, so that the pole is balanced quickly while the cart moves back to the track origin steadily (a minimal sketch of such a coordinator follows).
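
A minimal sketch of a BOXES-style coordinator (the quantization, scores, and update rule are assumptions for illustration, not the paper's design):

```python
# The cartpole state is quantized into coarse "boxes"; each box keeps a
# running score per fuzzy controller, updated from the outcome of that
# controller's action, and the better-scoring controller is selected.

N_BOXES = 9
scores = [[0.0, 0.0] for _ in range(N_BOXES)]  # [pole-balancer, cart-centering]

def box_index(x, theta):
    """Coarse 3x3 quantization of cart position [m] and pole angle [rad]."""
    qx = 0 if x < -0.5 else (2 if x > 0.5 else 1)
    qt = 0 if theta < -0.05 else (2 if theta > 0.05 else 1)
    return qx * 3 + qt

def select_controller(x, theta):
    s = scores[box_index(x, theta)]
    return 0 if s[0] >= s[1] else 1            # which fuzzy controller acts

def update(x, theta, ctrl, outcome):
    """Credit the acting controller with the observed outcome (+/-)."""
    scores[box_index(x, theta)][ctrl] += outcome

# e.g. after the cart-centering controller did well near the origin:
update(0.1, 0.0, 1, +1.0)
print(select_controller(0.1, 0.0))             # -> 1
```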


PWM 인버터용 SNUBBER 설계 (Design of Snubber for PWM Inverter)

  • 오진석
    • 한국안전학회지
    • /
    • Vol. 8, No. 4
    • /
    • pp.95-100
    • /
    • 1993
  • A power transistor switching circuit has a shunt snubber (a dv/dt-limiting capacitor) and a series snubber (a di/dt-limiting inductor). The shunt snubber is used to reduce the turn-off switching loss, and the series snubber is used to reduce the turn-on switching loss. Design procedures are derived for selecting the capacitance, inductance, and resistance that limit the peak voltage and current values. The snubber action is analyzed and applied to the design of a safe PWM inverter (a first-pass sizing sketch follows).
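
A first-pass sizing sketch using textbook snubber rules (the paper's exact procedure may differ; all circuit values below are assumed):

```python
# Textbook first-pass snubber sizing, NOT necessarily the paper's rules.
V_d  = 300.0    # DC link voltage [V]            (assumed)
I_o  = 20.0     # load current at switching [A]  (assumed)
t_f  = 0.5e-6   # transistor current fall time [s]
t_r  = 0.3e-6   # transistor current rise time [s]
t_on = 10e-6    # minimum on-time available to reset the snubber [s]

C_s = I_o * t_f / (2 * V_d)  # shunt C: v reaches V_d just as i reaches 0
L_s = V_d * t_r / (2 * I_o)  # series L: i reaches I_o just as v reaches 0
R_s = t_on / (3 * C_s)       # R: discharge C_s within ~3 time constants

print(f"C_s = {C_s*1e9:.1f} nF, L_s = {L_s*1e6:.2f} uH, R_s = {R_s:.0f} ohm")
print(f"peak discharge current = {V_d/R_s:.2f} A")  # check device rating
```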


마르코프 결정 과정에서 시뮬레이션 기반 정책 개선의 효율성 향상을 위한 시뮬레이션 샘플 누적 방법 연구 (A Simulation Sample Accumulation Method for Efficient Simulation-based Policy Improvement in Markov Decision Process)

  • 황시랑;최선한
    • 한국멀티미디어학회논문지
    • /
    • Vol. 23, No. 7
    • /
    • pp.830-839
    • /
    • 2020
  • As a popular mathematical framework for modeling decision making, the Markov decision process (MDP) has been widely used to solve problems in many engineering fields. An MDP consists of a set of discrete states, a finite set of actions, and the rewards received after reaching a new state by taking an action from the previous state. The objective of an MDP is to find an optimal policy, that is, the best action to take in each state so as to maximize the expected discounted reward of the policy (EDR). In practice, the MDP is typically unknown, so simulation-based policy improvement (SBPI), which improves a given base policy sequentially by selecting the best action in each state depending on rewards observed via simulation, can be a practical way to find the optimal policy. However, the efficiency of SBPI is still a concern, since many simulation samples are required to estimate the EDR precisely for each action in each state. In this paper, we propose a method that selects the best action in each state accurately from a small number of simulation samples, thereby improving the efficiency of SBPI. The proposed method accumulates the simulation samples observed in previous states, so the EDR can be estimated precisely even with a small number of samples in the current state (a toy sketch follows). Comparative experiments against the existing method demonstrate that the proposed method improves the efficiency of SBPI.
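
A toy sketch of the accumulation idea (the MDP, horizon, and estimator here are invented; the point is only that every rollout contributes return samples for all states it visits, so later SBPI steps need fewer fresh simulations):

```python
import random

GAMMA, N_STATES, ACTIONS, HORIZON = 0.95, 4, (0, 1), 30
archive = {}                      # (state, action) -> list of returns

def step(s, a):                   # toy transition: action 1 favors "up"
    s2 = min(N_STATES - 1, s + 1) if random.random() < 0.4 + 0.4 * a \
         else max(0, s - 1)
    return s2, (1.0 if s2 == N_STATES - 1 else 0.0)

def rollout(s, a, base_policy):
    """Simulate one episode and record a discounted-return sample for
    EVERY (state, action) pair visited, not just the starting one."""
    traj = []
    for _ in range(HORIZON):
        s2, r = step(s, a)
        traj.append((s, a, r))
        s, a = s2, base_policy[s2]
    g = 0.0
    for s, a, r in reversed(traj):            # accumulate tail returns
        g = r + GAMMA * g
        archive.setdefault((s, a), []).append(g)

def improved_action(s, base_policy, n_new=20):
    """SBPI step: estimate each action's discounted reward from the
    accumulated plus a few fresh samples, then pick the best."""
    for a in ACTIONS:
        for _ in range(n_new):
            rollout(s, a, base_policy)
    return max(ACTIONS, key=lambda a: sum(archive[(s, a)]) /
                                      len(archive[(s, a)]))

base = [0] * N_STATES
print("improved policy:", [improved_action(s, base) for s in range(N_STATES)])
```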