The explosive growth of machine learning and data-driven methodologies has revolutionized numerous fields. Yet translating these successes to the domain of dynamical physical systems remains a significant challenge. Closing the loop from data to actions in these systems is difficult: it demands sample efficiency and computational feasibility, along with many other requirements such as verifiability, robustness, and safety. In this talk, we present a framework that bridges this gap by introducing novel representations for developing nonlinear stochastic control and reinforcement learning algorithms. Our approach enables efficient, safe, robust, and scalable decision-making with provable guarantees. We further demonstrate how these representations help close the sim-to-real gap, enhance data efficiency in imitation learning, and enable scalable computation of localized policies for large-scale nonlinear networked systems. Lastly, we will briefly present our latest work on using diffusion models to represent control policies and on training diffusion policies online, along with their applications to manipulation tasks.