How to Cite

Kaixian Xu, Zhaoyan Zhang, Alan Wilson, Yu Qiao, Lisang Zhou, & Ying Jiang. (2024). Generalizable Multi-Agent Framework for Quantitative Trading of US Education Funds. Innovations in Applied Engineering and Technology, 3(1), 1–12. https://doi.org/10.62836/iaet.v3i1.362

Generalizable Multi-Agent Framework for Quantitative Trading of US Education Funds

Abstract

Quantitative trading of specialized financial instruments like US education funds requires comprehensive analysis of both market dynamics and external influencing factors. This paper proposes a novel multi-agent framework that integrates collaborative agents for market analysis, macroeconomic trend assessment, and policy change evaluation, along with a multi-level reflection mechanism for continuous strategy optimization. Through extensive experiments using a comprehensive dataset from 2018 to 2024, the framework demonstrates superior performance compared to traditional rule-based strategies and machine learning approaches, achieving higher returns, better risk-adjusted performance, and enhanced risk management capabilities. The integration of multi-agent collaboration, non-market factor analysis, and adaptive strategy refinement provides a robust solution for achieving long-term investment goals in dynamic market environments.

Keywords: multi-agent framework; quantitative trading; non-market factor analysis; multi-level reflection mechanism; US education funds
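The abstract describes the framework only at the architecture level, and no implementation accompanies this page. As a rough orientation, the Python sketch below shows one way the described pieces could fit together: three collaborative analysis agents (market, macroeconomic, policy) whose signals are aggregated into a trading decision, plus a reflection step that adapts agent weights from realized returns. Every name here (MarketAgent, MacroAgent, PolicyAgent, ReflectiveTrader, reflect) is a hypothetical stand-in, not the authors' code, and the single-level weight update is a simplification of the paper's multi-level reflection mechanism.

```python
# Illustrative sketch only: all class and method names are hypothetical
# stand-ins for the components described in the abstract.
from dataclasses import dataclass, field


@dataclass
class Signal:
    """One agent's view: a score in [-1, 1] (sell..buy) plus a rationale."""
    score: float
    rationale: str


class MarketAgent:
    def analyze(self, prices: list[float]) -> Signal:
        # Toy momentum proxy: compare last price with a short moving average.
        avg = sum(prices[-5:]) / min(len(prices), 5)
        score = max(-1.0, min(1.0, (prices[-1] - avg) / avg))
        return Signal(score, "short-term momentum vs. 5-period average")


class MacroAgent:
    def analyze(self, rate_change_bps: float) -> Signal:
        # Rising rates are treated here as a headwind for fund holdings.
        return Signal(-rate_change_bps / 100.0, "interest-rate trend")


class PolicyAgent:
    def analyze(self, policy_sentiment: float) -> Signal:
        # policy_sentiment in [-1, 1], e.g. from scoring education-policy news.
        return Signal(policy_sentiment, "education-policy news sentiment")


@dataclass
class ReflectiveTrader:
    """Aggregates agent signals; weights adapt via a simple reflection step."""
    weights: dict = field(
        default_factory=lambda: {"market": 1.0, "macro": 1.0, "policy": 1.0}
    )

    def decide(self, signals: dict[str, Signal]) -> float:
        # Weighted average of agent scores, yielding a position in [-1, 1].
        total = sum(self.weights.values())
        return sum(self.weights[k] * s.score for k, s in signals.items()) / total

    def reflect(self, signals: dict[str, Signal], realized_return: float) -> None:
        # One reflection level: upweight agents whose signal agreed in sign
        # with the realized outcome, downweight those that disagreed.
        for name, sig in signals.items():
            agreed = sig.score * realized_return > 0
            self.weights[name] *= 1.1 if agreed else 0.9


if __name__ == "__main__":
    trader = ReflectiveTrader()
    signals = {
        "market": MarketAgent().analyze([100, 101, 102, 101, 103]),
        "macro": MacroAgent().analyze(rate_change_bps=25),
        "policy": PolicyAgent().analyze(policy_sentiment=0.4),
    }
    position = trader.decide(signals)              # aggregate decision in [-1, 1]
    trader.reflect(signals, realized_return=0.02)  # adapt weights after the fact
    print(f"position={position:+.3f}, weights={trader.weights}")
```

Running the script prints an aggregate position and the post-reflection weights; the actual framework would replace these toy heuristics with the full market, macroeconomic, and policy analyses evaluated in the paper.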

Supporting Agencies

  1. Funding: This research was supported by the U.S. National Science Foundation under Grant No. 2339596.