Controlling the distribution of a multi-agent system without centralized control or explicit communication between agents has gained significant interest in robotics and controls research. The problem is especially challenging when the agents evolve on subsets of $\mathbb{R}^n$; prior works have largely considered agents that evolve on graphs. Moreover, we would like to prevent agent state transitions at the equilibrium distribution, since such transitions needlessly expend energy. We address this problem by assuming that each agent evolves independently and identically according to a discrete-time Markov process. We then stabilize the Kolmogorov forward equation associated with the Markov process to an arbitrary target distribution that has an $L^\infty$ density and does not necessarily have a connected support on the state space. The Kolmogorov forward equation is the mean-field model of the system, and it is stabilized using a density-dependent transition kernel as the control parameter. Further, we show that the Markov process can be constructed in such a way that the operator that pushes forward measures is the identity map at the target distribution, so that no further transitions are required once the target is reached. To achieve this, the transition kernel is defined as a function of the current agent distribution, resulting in a nonlinear Markov process. Moreover, we design the transition kernel to be decentralized in the sense that it depends only on the local density measured by each agent. We prove the existence of such a decentralized control law that globally stabilizes the target distribution. Further, to implement our control approach on a finite $N$-agent system, we smooth the mean-field dynamics via mollification. Numerical simulations show that as $N$ increases, the agent distribution in the $N$-agent simulations converges to the solution of the mean-field model, and the number of agent state transitions at equilibrium decreases to zero.
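To make the overall idea concrete, the following is a minimal sketch, not the construction proved in the paper, of an $N$-agent simulation with a decentralized, density-dependent transition rule. All specifics are illustrative assumptions: the state space is taken as $[0,1]$ discretized into bins purely for simplicity, each agent measures only the empirical density of its own bin, and an agent leaves an over-populated bin for a uniformly random neighbor with probability $\max(0,\, 1 - f^*(x)/\rho(x))$, where $f^*$ is the target density and $\rho$ the measured local density. This hypothetical rule only mimics the qualitative behavior described above: when the empirical density matches the target, the leave probability vanishes everywhere, so state transitions stop at equilibrium.

```python
# Illustrative sketch (hypothetical rule, not the paper's kernel or proof):
# N agents on a binned interval, each applying a decentralized,
# density-dependent transition probability that vanishes at the target.

import numpy as np

rng = np.random.default_rng(0)

N = 5000   # number of agents
K = 20     # number of bins discretizing [0, 1]
T = 200    # number of time steps

# Target density on the bins (a simple ramp, normalized to sum to 1).
f_star = np.linspace(1.0, 3.0, K)
f_star /= f_star.sum()

# Initialize all agents uniformly at random over the bins.
agents = rng.integers(0, K, size=N)

transitions_per_step = []
for t in range(T):
    # Empirical (local) density measured in each bin.
    counts = np.bincount(agents, minlength=K)
    rho = counts / N

    moved = 0
    new_agents = agents.copy()
    for i in range(N):
        x = agents[i]
        if rho[x] <= 0.0:
            continue
        # Decentralized rule: leave with probability max(0, 1 - f*/rho),
        # i.e., only when the local density exceeds the target density.
        p_leave = max(0.0, 1.0 - f_star[x] / rho[x])
        if rng.random() < p_leave:
            # Move to a uniformly chosen neighboring bin (reflecting at the ends).
            step = rng.choice([-1, 1])
            new_agents[i] = min(max(x + step, 0), K - 1)
            moved += 1
    agents = new_agents
    transitions_per_step.append(moved)

final_rho = np.bincount(agents, minlength=K) / N
print("L1 distance to target:", np.abs(final_rho - f_star).sum())
print("transitions in last step:", transitions_per_step[-1])
```

In this toy setting one can observe the two effects reported in the abstract: the empirical distribution approaches the target as $N$ grows, and the number of transitions per step decays as the system nears equilibrium. The paper's actual kernel, its mollified mean-field dynamics, and the global stability guarantee are developed in the body of the work.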