Tags: reinforcement-learning, stable-baselines3, finance, stock-trading, deep-reinforcement-learning, dqn, ppo, a2c
Instructions to use AdityaaXD/Multi-Agent_Reinforcement_Learning_Trading_System_Models with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- stable-baselines3
How to use AdityaaXD/Multi-Agent_Reinforcement_Learning_Trading_System_Models with stable-baselines3:
```python
from huggingface_sb3 import load_from_hub

# Download a checkpoint zip from the Hub; returns a local file path.
checkpoint = load_from_hub(
    repo_id="AdityaaXD/Multi-Agent_Reinforcement_Learning_Trading_System_Models",
    filename="{MODEL FILENAME}.zip",
)
```

- Notebooks
- Google Colab
- Kaggle
Could you please provide the training environment code, or at least the observation spec
#1
by arunbabuc - opened
Thanks for sharing! Could you please provide the training environment code, or at least the observation spec (shape + feature order)? I’m trying to integrate this into the AI trade project, but the current RL agent expects a 3008‑feature observation (a 60‑day window + portfolio state). Without the env/feature spec, it’s hard to reliably align inputs.
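For concreteness, here is one way such an observation could be assembled. Only the 3008 total and the 60-day window come from this thread; the 50 per-day market features, the 8-value portfolio state, and the flattening order are purely hypothetical numbers chosen so that 60 × 50 + 8 = 3008 — the real spec has to come from the environment code:

```python
import numpy as np

# Hypothetical decomposition of a 3008-feature observation:
# 60-day window x 50 market features per day, plus 8 portfolio-state
# values. The 50/8 split and the ordering below are assumptions.
WINDOW, N_FEATURES, PORTFOLIO_DIM = 60, 50, 8

market_window = np.random.randn(WINDOW, N_FEATURES).astype(np.float32)
portfolio_state = np.zeros(PORTFOLIO_DIM, dtype=np.float32)

# Flatten the window first, then append the portfolio state (a guess).
obs = np.concatenate([market_window.ravel(), portfolio_state])
assert obs.shape == (3008,)
```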
https://github.com/ADITYA-tp01/Multi-Agent-Reinforcement-Learning-Trading-System-Data
I'll add this link to the model card.
Thank you!