Fluid models have been the main tools for analyzing Internet congestion control. By capturing how the average rate of each flow evolves, a fluid model predicts the equilibrium point to which the system trajectory converges and provides conditions under which this convergence is ensured, i.e., under which the system is stable. However, due to the inherent randomness in the network caused by random packet arrivals or random packet marking, the actual system evolution is always stochastic in nature. In this paper, we show that we can be better off taking a stochastic approach to congestion control. We first prove that the equilibrium point of a fluid model can be quite different from the true average rate of the corresponding stochastic system. After describing the notion of stability under each of the two approaches, we show that requiring a stable fluid model can impose unnecessarily severe restrictions on the choice of system parameters such as buffer size or link utilization. In particular, we show that under fluid models there exists a fundamental tradeoff between link utilization and the buffer-size requirement for large systems, whereas in a more realistic stochastic setting no such tradeoff exists. This implies that current congestion control design can be made far more flexible, to the benefit of efficient use of network resources.
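As a toy illustration of the first claim (a minimal sketch, not the paper's actual model), consider an AIMD source whose packets are marked i.i.d. with probability p. The fluid model dw/dt = 1/RTT - (w/2)(w/RTT)p has equilibrium w* = sqrt(2/p), while the time-average window of the corresponding stochastic system is strictly smaller, since the drift balance fixes E[W^2] ≈ 2/p and Var(W) > 0. All parameter values below (p = 0.001, 200,000 RTTs) are illustrative choices, not taken from the paper.

```python
import random
import math

def simulate_aimd(p, rtts, seed=1):
    """Time-average congestion window of a toy AIMD source.

    Each RTT the window grows by one packet; each of the ~w packets
    sent in that RTT is marked independently with probability p, and
    any mark in an RTT halves the window (multiplicative decrease).
    """
    rng = random.Random(seed)
    w, total = 2.0, 0.0
    for _ in range(rtts):
        w += 1.0                      # additive increase: +1 packet per RTT
        # probability that at least one of the w packets is marked
        if rng.random() < 1.0 - (1.0 - p) ** w:
            w = max(w / 2.0, 1.0)     # multiplicative decrease
        total += w
    return total / rtts

p = 0.001
# Fluid model: dw/dt = 1/RTT - (w/2)*(w/RTT)*p = 0  =>  w* = sqrt(2/p)
fluid_eq = math.sqrt(2.0 / p)
avg = simulate_aimd(p, rtts=200_000)
print(f"fluid equilibrium ~ {fluid_eq:.1f}, stochastic average ~ {avg:.1f}")
```

With these values the fluid equilibrium is about 44.7 packets, while the simulated stochastic average settles a few percent below it, so the fluid fixed point overestimates the true average rate.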