Unlocking Real-World Autonomous Agents
Staff - Faculty of Informatics
Date: 7 June 2024 / 10:30 - 11:15
USI East Campus, Room C1.04
Speaker: Enrico Marchesini - MIT
Abstract: The uncertainty arising from real-world settings poses new challenges for autonomous learning systems. Autonomous agents struggle to explore high-dimensional environments and must provably guarantee safe behavior under partial information in order to operate in our society. In addition, multiple agents (or humans) must learn to cooperate, yet typically rely on temporally extended actions. Deep reinforcement learning has the potential to address these challenges with new solutions tailored to exploration, safety, and multi-agent systems. I will present our recent work on cooperative multi-agent systems, which allows us to abstract the learning process to a higher macro-action layer, where agents operate asynchronously and execute temporally extended actions, or skills, of unknown duration.
Biography: Enrico Marchesini is a Postdoc in the Laboratory for Information and Decision Systems at MIT, advised by Priya Donti. Previously, he was a Postdoc in the Khoury College of Computer Sciences at Northeastern University with Christopher Amato, and he completed his Ph.D. in Computer Science at the University of Verona (Italy), advised by Alessandro Farinelli. His research focuses on fostering real-world applications of deep reinforcement learning; to this end, he investigates novel real-world problems in order to develop algorithms that tackle open challenges for autonomous systems.
Host: Prof. Marc Langheinrich