Score-based diffusion models (SDMs) have emerged as a powerful tool for sampling from the posterior distribution in Bayesian inverse problems. Existing methods, however, often require multiple evaluations of the forward map to generate a single sample, resulting in significant computational costs for large-scale inverse problems. To address this limitation, we propose a scalable diffusion posterior sampling (SDPS) method tailored to linear nonparametric inverse problems, which avoids forward model evaluations during sampling by shifting the computational effort to an offline training phase. In this phase, a task-dependent score function is learned using the linear forward operator. Crucially, the conditional posterior score is then derived exactly from the trained score via affine transformations, eliminating the need for conditional score approximations. The approach is shown to extend to infinite-dimensional diffusion models and is supported by a rigorous convergence analysis. We validate SDPS through high-dimensional computed tomography (CT) and image deblurring experiments. Based on joint work with Fabian Schneider, Matti Lassas, Maarten V. de Hoop, and Tapio Helin.
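For orientation, the following is a minimal sketch of the standard Bayesian setup such methods start from, assuming a linear Gaussian observation model $y = Ax + \eta$ with noise covariance $\Gamma$ (the symbols $A$, $\Gamma$, $\eta$ are illustrative and not taken from the abstract). Bayes' rule splits the posterior score into the prior score plus an explicit linear correction:

\[
\nabla_x \log p(x \mid y) \;=\; \nabla_x \log p(x) + A^{*}\,\Gamma^{-1}\,(y - Ax).
\]

Along the diffusion, however, the analogous time-$t$ conditional score $\nabla_{x_t} \log p_t(x_t \mid y)$ is generally intractable, which is what forces repeated forward-map evaluations or conditional score approximations in standard diffusion posterior samplers; the exact affine-transformation derivation used by SDPS is not reproduced in this sketch.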