Network routers on the Internet provide strategic control over the growing demands that an increasingly diverse traffic mix places on limited and expensive bandwidth. A router consists of two major components: a switch fabric and multiple linecards. The functionalities of a linecard fall into three categories: packet classification, statistics accounting, and packet scheduling. Packet classification includes functions such as routing table lookup, admission control, and deep packet inspection. Statistics accounting stores essential information, such as flow statistics and network counting sketches, for the purposes of traffic monitoring and traffic management. Packet scheduling includes functions such as packet buffering, packet shaping, rate control, and hierarchical queue management. Sophisticated algorithms have been developed to improve throughput and fairness on the network; however, the costs of implementing these algorithms consistently outweigh their performance gains. This dissertation focuses on bridging the gap between advanced algorithms and their implementations in real-world network equipment by adopting a throughput- and fairness-driven design of network traffic managers, which incorporate most of the functionality of statistics accounting and packet scheduling while providing worst-case performance guarantees for the whole router system. First, we exploit parallelism to design high-throughput statistics counter arrays that are robust against adversarial traffic. Second, we develop robust pipelined memory systems with worst-case performance guarantees for network processing. In these memory systems, every memory operation completes within a fixed delay, which greatly simplifies the design of network processors.
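To make the counter-array idea concrete, the following C sketch illustrates a common hybrid organization: narrow, fast on-chip counters absorb per-packet increments, and each one is flushed into a wide off-chip counter when it is about to overflow. All names and sizes here are illustrative assumptions, not the specific architecture developed in this dissertation.

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

#define NUM_COUNTERS 1024
#define ONCHIP_BITS  8                       /* width of fast counters */
#define ONCHIP_MAX   ((1u << ONCHIP_BITS) - 1)

/* Hypothetical hybrid counter array: narrow on-chip counters take the
 * per-packet increments; when one saturates, its value is flushed into
 * a wide off-chip counter (modeled here as a plain array). */
typedef struct {
    uint8_t  onchip[NUM_COUNTERS];           /* fast, narrow counters */
    uint64_t offchip[NUM_COUNTERS];          /* slow, wide counters   */
} counter_array;

void ca_init(counter_array *ca) { memset(ca, 0, sizeof *ca); }

void ca_increment(counter_array *ca, int i) {
    if (ca->onchip[i] == ONCHIP_MAX) {       /* about to overflow     */
        ca->offchip[i] += ca->onchip[i];     /* flush to off-chip     */
        ca->onchip[i] = 0;
    }
    ca->onchip[i]++;
}

uint64_t ca_read(const counter_array *ca, int i) {
    return ca->offchip[i] + ca->onchip[i];   /* true count = sum      */
}
```

The design point this sketch captures is that off-chip memory is touched only once every `ONCHIP_MAX` increments per counter, which is what makes high aggregate update rates feasible; an adversary who concentrates traffic on a few counters stresses exactly this flushing path, which is why robustness against adversarial traffic matters.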
Third, we present novel succinct priority index data structures that support line-rate scheduling of packets across a large number of priority queues, which is essential for providing quality of service with per-flow queueing. Finally, we propose several reservation-based packet buffer architectures with interleaved memories that exploit known packet departure times to achieve simplicity and determinism. These architectures scale to the growing packet storage requirements of routers, providing fine-grained per-flow queue buffering while matching increasing line rates. Together, these approaches significantly improve overall system throughput while providing better fairness, quality of service, and worst-case performance guarantees than existing solutions.
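To illustrate what a priority index structure does, the following C sketch shows a textbook two-level bitmap index: bit q of a leaf word is set iff queue q is non-empty, a summary word records which leaf words are non-zero, and the highest-priority (lowest-numbered) non-empty queue is found with two find-first-set steps. This is a minimal sketch of the general idea, assuming GCC/Clang's `__builtin_ctzll`; it is not the succinct structure proposed in this dissertation.

```c
#include <assert.h>
#include <stdint.h>

#define NUM_QUEUES 4096                      /* 64 leaf words x 64 bits */

/* Two-level bitmap priority index over NUM_QUEUES priority queues. */
typedef struct {
    uint64_t summary;                        /* bit w set iff leaf[w] != 0 */
    uint64_t leaf[64];                       /* bit q&63 of leaf[q>>6]     */
} prio_index;

void pi_set(prio_index *pi, int q) {         /* mark queue q non-empty */
    pi->leaf[q >> 6] |= 1ull << (q & 63);
    pi->summary     |= 1ull << (q >> 6);
}

void pi_clear(prio_index *pi, int q) {       /* mark queue q empty */
    pi->leaf[q >> 6] &= ~(1ull << (q & 63));
    if (pi->leaf[q >> 6] == 0)
        pi->summary &= ~(1ull << (q >> 6));
}

int pi_highest(const prio_index *pi) {       /* -1 if all queues empty */
    if (pi->summary == 0) return -1;
    int w = __builtin_ctzll(pi->summary);    /* first non-empty leaf word */
    return (w << 6) + __builtin_ctzll(pi->leaf[w]);
}
```

Because each lookup costs a constant two priority-encode operations regardless of how many queues are occupied, structures of this family can sustain one scheduling decision per packet slot, which is the property "at line rate" refers to.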