A central challenge in large-scale engineering systems, such as energy and transportation networks, is enabling autonomous decision-making among interacting agents. Game theory provides a natural framework for modeling and analyzing such problems. In practice, however, each agent typically has only partial information about the costs and actions of the others, which makes decentralized learning a key tool for developing effective strategies. In this talk, I will discuss recent advances in decentralized learning for static and Markov games under bandit feedback. I will outline algorithms with convergence guarantees and highlight directions for future research.