Traffic shaping is a technique for controlling network traffic by prioritising the use of network resources; it guarantees a certain amount of bandwidth to selected traffic, based on predefined policy rules. Traffic shaping uses traffic classification, policy rules, queue disciplines and quality of service (QoS).
The need for traffic shaping arises because network bandwidth is an expensive resource that is shared among many parties in an organisation, and some applications require guaranteed bandwidth and priority. Traffic shaping lets you: (1) control network services, (2) limit bandwidth, and (3) guarantee quality of service. Intelligently managed traffic shaping improves network latency, service availability and bandwidth utilisation.
NetEM capabilities include delay (it delays each packet); loss (it drops some packets); duplication (it duplicates some packets); and corruption (it introduces a single bit error at a random offset in a packet).
How NetEM works
NetEM consists of two components — a tiny kernel module that implements a queuing discipline, and a command-line utility (tc) to configure it. The kernel module has been integrated since kernel 2.6.8 (and 2.4.28), and the command is part of the iproute2 package. The command-line utility communicates with the kernel via the netlink socket interface; it encodes its requests into a standard message format, which the kernel decodes.
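Before trying any of the examples, it is worth checking that the netem qdisc and the tc utility are actually available. A minimal sanity check, assuming netem is built as a module (it is named sch_netem; on kernels with netem built in, lsmod will show nothing and that is fine):
# lsmod | grep sch_netem
# modprobe sch_netem
# tc -V
The last command simply reports the iproute2 version, confirming that tc is installed.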
The queuing layer sits between the network device and the protocol output. The default queuing discipline is a simple FIFO packet queue. A queuing discipline exposes two key interfaces: one queues packets to be sent, and the other releases packets to the network device for transmission. The queuing discipline makes the policy decision of which packets to send, based on its current settings.
Terminology
- qdisc — A queue discipline (qdisc) is a set of rules that determine the order in which arrivals are serviced. It is a packet queue with an algorithm that decides when to send each packet.
- Classless qdisc — A qdisc with no configurable internal subdivision.
- Classful qdisc — A qdisc that may contain classes; classful qdiscs allow packet classification (Class-Based Queueing and others).
- Root qdisc — The root qdisc is attached to each network interface — either classful or classless.
- egress qdisc — Works on outgoing traffic only.
- ingress qdisc — Works on incoming traffic.
- Filter — Classification of packets into classes is performed using filters (see the sketch below this list).
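To make the terminology concrete, here is a minimal sketch of how a classful root qdisc, a classless child qdisc and a filter fit together. It attaches a prio qdisc as the root of eth1, hangs a netem qdisc under its third band, and uses a u32 filter to steer only traffic to one host (10.0.0.5 is a placeholder address) into that delayed band:
# tc qdisc add dev eth1 root handle 1: prio
# tc qdisc add dev eth1 parent 1:3 handle 30: netem delay 100ms
# tc filter add dev eth1 parent 1: protocol ip prio 3 u32 match ip dst 10.0.0.5/32 flowid 1:3
Traffic matching the filter is sent to class 1:3 and delayed by 100 ms; everything else follows the normal prio bands untouched. Delete this root qdisc (tc qdisc del dev eth1 root) before trying the examples that follow.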
Here are some examples that can be used for network throttling.
Network emulation examples
Note: I have used eth1 for the interface in the examples below; you should use the name of the specific Ethernet card where traffic needs to be throttled.
To add constant delay to every packet going out through a specific interface, use the following command:
# tc qdisc add dev eth1 root netem delay 80ms
Now a ping test to this host should show an increase of 80 ms in the delay to replies.
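For example (192.168.1.10 is a placeholder for a host reached through eth1):
# ping -c 4 192.168.1.10
The round-trip times reported should be roughly 80 ms higher than they were before the netem qdisc was added.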
We can also add variable delay (jitter, or random variance). Most wide-area networks like the Internet have some jitter associated with them. The following command keeps the 80 ms of delay and adds +/- 10 ms of jitter to it (change is used because the root netem qdisc already exists):
# tc qdisc change dev eth1 root netem delay 80ms 10ms
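NetEM also accepts an optional correlation value, so that the jitter applied to a packet depends partly on the jitter applied to the previous one (this is an approximation rather than true statistical correlation):
# tc qdisc change dev eth1 root netem delay 80ms 10ms 25%
Here each packet's delay variation depends roughly 25 per cent on that of the previous packet.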
To see what queueing discipline (qdisc) has been applied to an interface, use:
# tc qdisc show dev eth1
To turn off/delete the qdisc from a specific interface (in this case, eth1), execute the command given below:
# tc qdisc del dev eth1 root
Typically, the delay in a network is not uniform. It is more common to use something like a normal distribution to describe the variation in delay. NetEM can accept a non-uniform distribution:
# tc qdisc change dev eth1 root netem delay 100ms 20ms distribution normal
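Besides normal, the iproute2 package ships a few other delay distribution tables (typically pareto and paretonormal, installed as .dist files under a directory such as /usr/lib/tc; the exact path varies by distribution), which are selected the same way:
# tc qdisc change dev eth1 root netem delay 100ms 20ms distribution pareto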
Packet loss can be replicated:
# tc qdisc change dev eth1 root netem loss 0.1%
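As with delay, an optional correlation can be added so that losses tend to come in bursts rather than being spread evenly (again an approximation rather than true correlation):
# tc qdisc change dev eth1 root netem loss 0.3% 25%
This drops roughly 0.3 per cent of packets, with each successive loss probability depending by about a quarter on the previous one.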
Packet duplication and corruption can also be configured, each with its own keyword (the corrupt option requires kernel 2.6.16 or later):
# tc qdisc change dev eth1 root netem duplicate 1%
# tc qdisc change dev eth1 root netem corrupt 1%
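These options can also be combined in a single command; for instance, a rough emulation of a lossy, jittery WAN link (the values here are arbitrary examples) might look like this:
# tc qdisc change dev eth1 root netem delay 80ms 10ms loss 0.1%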
Setting up the throttling
In the current environment (as seen in Figure 1), the Linux server plays the role of a router and a NetEM bandwidth throttling device; all the traffic goes through this server. Our requirement was to throttle the bandwidth by adding delay, packet loss, jitter, etc., and also to throttle incoming traffic on TCP port 7001 on eth1 down to 512 kbit/s.
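Since the server acts as a router, IP forwarding has to be enabled. That step is not shown in the commands below, but on most distributions it amounts to something like:
# sysctl -w net.ipv4.ip_forward=1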
To set up the throttling for incoming traffic, I ran the following commands:
# tc qdisc add dev eth1 root handle 1: cbq avpkt 1000 bandwidth 10Mbit
# tc class add dev eth1 parent 1: classid 1:1 cbq rate 512kbit allot 1500 prio 3 bounded isolated
# tc filter add dev eth1 parent 1: protocol ip u32 match ip protocol 6 0xff match ip dport 7001 0xffff flowid 1:1
The first command created a new root queuing discipline of type CBQ (class-based queuing, used to both shape and prioritise) on interface eth1, with the interface bandwidth set to 10 Mbit/s and the average packet size (avpkt) set to 1000 bytes.
The second command created a child class under the root qdisc, 1:, and named it 1:1, with an allot of 1500 bytes (the amount of data the class may dequeue per round) and the rate limited to 512 kbit/s. The prio 3 option defines the priority of the queue; the smaller the number, the higher the priority when you have multiple queues with different priorities. Here I had only a single queue, so that didn't matter. The bounded and isolated options make sure the class will not borrow spare bandwidth from its siblings, nor lend its own spare bandwidth to them.
The last command is the filter, which queues all packets going to TCP port 7001 (dport is the destination port) into class 1:1, which is the throttle queue. Here u32 is the filter type.
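To check that the shaping is actually taking effect, the statistics counters can be inspected (the interface and classid here match the example above):
# tc -s qdisc show dev eth1
# tc -s class show dev eth1
# tc filter show dev eth1
The sent and dropped counters for class 1:1 should increase while traffic to port 7001 is flowing.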