List of Publications
> [!TIP]
> Please open a pull request to add missing entries!
Publications that use CORL algorithms or its benchmarked results:
- Lu, C., Ball, P. J., Teh, Y. W., & Parker-Holder, J. (2023). Synthetic Experience Replay. arXiv preprint arXiv:2303.06614.
- Beeson, A., & Montana, G. (2023). Balancing Policy Constraint and Ensemble Size in Uncertainty-Based Offline Reinforcement Learning. arXiv preprint arXiv:2303.14716.
- Nikulin, A., Kurenkov, V., Tarasov, D., & Kolesnikov, S. (2023). Anti-Exploration by Random Network Distillation. arXiv preprint arXiv:2301.13616.
- Bhargava, P., Chitnis, R., Geramifard, A., Sodhani, S., & Zhang, A. (2023). Sequence Modeling is a Robust Contender for Offline Reinforcement Learning. arXiv preprint arXiv:2305.14550.
- Hu, X., Ma, Y., Xiao, C., Zheng, Y., & Meng, Z. (2023). In-Sample Policy Iteration for Offline Reinforcement Learning. arXiv preprint arXiv:2306.05726.
- Lian, S., Ma, Y., Liu, J., Zheng, Y., & Meng, Z. (2023). HIPODE: Enhancing Offline Reinforcement Learning with High-Quality Synthetic Data from a Policy-Decoupled Approach. arXiv preprint arXiv:2306.06329.
- He, H., Bai, C., Xu, K., Yang, Z., Zhang, W., Wang, D., ... & Li, X. (2023). Diffusion Model is an Effective Planner and Data Synthesizer for Multi-Task Reinforcement Learning. arXiv preprint arXiv:2305.18459.
- Liu, J., Ma, Y., Hao, J., Hu, Y., Zheng, Y., Lv, T., & Fan, C. (2023). Prioritized Trajectory Replay: A Replay Memory for Data-Driven Reinforcement Learning. arXiv preprint arXiv:2306.15503.
- Chitnis, R., Xu, Y., Hashemi, B., Lehnert, L., Dogan, U., Zhu, Z., & Delalleau, O. (2023). IQL-TD-MPC: Implicit Q-Learning for Hierarchical Model Predictive Control. arXiv preprint arXiv:2306.00867.
- Kurenkov, V., Nikulin, A., Tarasov, D., & Kolesnikov, S. (2023). Katakomba: Tools and Benchmarks for Data-Driven NetHack. arXiv preprint arXiv:2306.08772.
- Lian, S., Ma, Y., Liu, J., Hao, J., Zheng, Y., & Meng, Z. (2023, July). A Policy-Decoupled Method for High-Quality Data Augmentation in Offline Reinforcement Learning. In ICML Workshop on New Frontiers in Learning, Control, and Dynamical Systems.