Funding
This work was supported by a Korea Institute of Energy Technology Evaluation and Planning (KETEP) grant funded by the Ministry of Trade, Industry and Energy (MOTIE) (No. 20224000000150). This paper was also supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF), funded by the Ministry of Education in 2022 (No. 2020R1I1A3073651).
References
- Financial Supervisory Service, "'22 Capital Market Risk Analysis Report," pp. 82-87, 2022.
- Zhou, Feng, et al. "EMD2FNN: A strategy combining empirical mode decomposition and factorization machine based neural network for stock market trend prediction." Expert Systems with Applications, Vol. 115, pp. 136-151, 2019. DOI: https://doi.org/10.1016/j.eswa.2018.07.065
- Mukherji, Sandip, Manjeet S. Dhatt, and Yong H. Kim. "A fundamental analysis of Korean stock returns." Financial Analysts Journal, Vol. 53, No. 3, pp. 75-80, 1997. DOI: https://doi.org/10.2469/faj.v53.n3.2086
- Achelis, S. B. "Technical Analysis from A to Z." McGraw Hill, 2001.
- Orlando, J. M. "Algorithmic presentation to European Central Bank - BNP Paribas." 2007.
- El Akraoui, Bouchra, and Cherki Daoui. "Deep Reinforcement Learning for Bitcoin Trading." International Conference on Business Intelligence, Springer, Cham, pp. 82-93, 2022. DOI: https://doi.org/10.1007/978-3-031-06458-6_7
- J.Y. Park, S.S. Hong, M.G. Park, H. Lee. "An Implementation of Stock Investment Service based on Reinforcement Learning." The Journal of the Convergence on Culture Technology (JCCT), Vol. 7, No. 4, pp. 807-814, November 2021. DOI: https://doi.org/10.17703/JCCT.2021.7.4.807
- Azhikodan, Akhil Raj, Anvitha G. K. Bhat, and Mamatha V. Jadhav. "Stock trading bot using deep reinforcement learning." Innovations in Computer Science and Engineering, pp. 41-49, 2019. DOI: https://doi.org/10.1007/978-981-10-8201-6_5
- Moody, John, et al. "Performance functions and reinforcement learning for trading systems and portfolios." Journal of Forecasting, Vol. 17, No. 5-6, pp. 441-470, 1998. DOI: https://doi.org/10.1002/(SICI)1099-131X(1998090)17:5/6<441::AID-FOR707>3.0.CO;2-%23
- Yu, Shui-Ling, and Zhe Li. "Stock price prediction based on ARIMA-RNN combined model." 4th International Conference on Social Science (ICSS 2017), pp. 1-6, 2017.
- Hirsa, Ali, et al. "Deep reinforcement learning on a multi-asset environment for trading." arXiv preprint arXiv:2106.08437, 2021. DOI: https://doi.org/10.48550/arXiv.2106.08437
- Zejnullahu, Frensi, Maurice Moser, and Joerg Osterrieder. "Applications of Reinforcement Learning in Finance--Trading with a Double Deep Q-Network." arXiv preprint arXiv:2206.14267, 2022. DOI: https://doi.org/10.48550/arXiv.2206.14267
- Bollinger, John. "Using Bollinger Bands." Stocks & Commodities, Vol. 10, No. 2, pp. 47-51, 1992.
- Mnih, Volodymyr, et al. "Playing Atari with deep reinforcement learning." arXiv preprint arXiv:1312.5602, 2013. DOI: https://doi.org/10.48550/arXiv.1312.5602
- Konda, Vijay, and John Tsitsiklis. "Actor-critic algorithms." Advances in Neural Information Processing Systems, Vol. 12, 1999.
- Mnih, Volodymyr, et al. "Asynchronous methods for deep reinforcement learning." International Conference on Machine Learning, PMLR, pp. 1928-1937, 2016.
- Xiong, Zhuoran, et al. "Practical deep reinforcement learning approach for stock trading." arXiv preprint arXiv:1811.07522, 2018. DOI: https://doi.org/10.48550/arXiv.1811.07522
- Liu, Xiao-Yang, et al. "FinRL: A deep reinforcement learning library for automated stock trading in quantitative finance." arXiv preprint arXiv:2011.09607, 2020. DOI: https://doi.org/10.48550/arXiv.2011.09607
- Kim, Sung-Hyeock, et al. "Influence on overfitting and reliability due to change in training data." International Journal of Advanced Culture Technology (IJACT), Vol. 5, No. 2, pp. 82-89, June 2017. DOI: https://doi.org/10.17703/IJACT.2017.5.2.82