Reducing the energy usage of data center networks is an important problem; however, we would like to achieve this goal without compromising the throughput and loss characteristics of these networks. Studies have shown that data center networks typically operate at loads between 5% and 25%, yet their energy draw is the same as if they were operating at maximum load. To this end, we examine the problem of reducing the power consumption of data center networks by merging traffic. The main idea is that low traffic from N links is merged to create K ≤ N streams of high traffic. These streams are fed to K switch interfaces that run at maximum rate, while the remaining interfaces are switched to the lowest possible rate. We show that this merging can be accomplished with minimal latency and energy cost (less than 0.1 W in total) while simultaneously giving us a deterministic way of switching link rates between maximum and minimum. We examine the idea of traffic merging in three different data center network topologies: flattened butterfly, mesh, and hypercube. In addition to analysis, we simulate these networks and, using previously developed traffic models, show that the flattened butterfly achieves 49% energy savings at 5% per-link load and 20% savings at 50% load; somewhat lower savings are obtained for the other two networks. The packet losses are statistically insignificant, and the maximum latency increase is less than 3 μs. These results show that energy-proportional data center networks are indeed possible.
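
The merging rule described above can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: the function name, the unit-capacity normalization, and the assumption that K is simply the aggregate load rounded up are all our own simplifications.

```python
import math

def merged_streams(n_links, per_link_load, max_rate=1.0):
    """Number of full-rate interfaces K needed to carry the
    aggregate traffic of n_links, each offering per_link_load
    (a fraction in (0, 1]) of max_rate.

    Illustrative assumption: the aggregate offered load is packed
    onto the fewest interfaces running at max_rate; the remaining
    n_links - K interfaces drop to the lowest rate.
    """
    aggregate = n_links * per_link_load * max_rate
    k = max(1, math.ceil(aggregate / max_rate))
    return min(k, n_links)

# Under this model, 40 links at 5% load merge onto 2 full-rate
# interfaces, leaving 38 interfaces at the lowest rate.
print(merged_streams(40, 0.05))  # → 2
print(merged_streams(40, 0.50))  # → 20
```

Because K follows deterministically from the offered load, the decision of which interfaces run at maximum rate and which drop to minimum requires no traffic prediction, which is the property the abstract refers to as deterministic rate switching.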