First Instance
ECS on Alibaba Cloud:
root@iZj6c5vmzbgujq3xi2m45aZ:~# phoronix-test-suite system-info
Phoronix Test Suite v5.2.1
System Information
Hardware:
Processor: Intel Xeon E5-2682 v4 @ 2.50GHz (1 Core), Motherboard:
Alibaba Cloud ECS, Chipset: Intel 440FX- 82441FX PMC, Memory: 1 x 1024 MB RAM,
Disk: 40GB, Graphics: Cirrus Logic GD 5446, Network: Red Hat Virtio device
Software:
OS: Ubuntu 18.04, Kernel: 4.15.0-45-generic (x86_64), File-System:
ext4, Screen Resolution: 1024x768, System Layer: KVM
Second Instance
VM on Azure:
root@testbenchmark:~# phoronix-test-suite system-info
Phoronix Test Suite v5.2.1
System Information
Hardware:
Processor: Intel Xeon Platinum 8171M @ 2.10GHz (1 Core),
Motherboard: Microsoft Virtual Machine v7.0, Chipset: Intel 440BX/ZX/DX,
Memory: 1 x 1024 MB Microsoft, Disk: 32GB Virtual Disk + 4GB Virtual Disk,
Graphics: Microsoft Hyper-V virtual VGA
Software:
OS: Ubuntu 18.04, Kernel: 5.0.0-1035-azure (x86_64), File-System:
ext4, Screen Resolution: 1152x864, System Layer: Microsoft Hyper-V Server
|                   | ECS                                      | Azure VM                                     |
| Processor         | Intel Xeon E5-2682 v4 @ 2.50GHz (1 Core) | Intel Xeon Platinum 8171M @ 2.10GHz (1 Core) |
| Motherboard       | Alibaba Cloud ECS                        | Microsoft Virtual Machine v7.0               |
| Chipset           | Intel 440FX- 82441FX PMC                 | Intel 440BX/ZX/DX                            |
| Memory            | 1 x 1024 MB RAM                          | 1 x 1024 MB Microsoft                        |
| Disk              | 40GB                                     | 32GB Virtual Disk + 4GB Virtual Disk         |
| Graphics          | Cirrus Logic GD 5446                     | Microsoft Hyper-V virtual VGA                |
| Network           | Red Hat Virtio device                    | -                                            |
| OS                | Ubuntu 18.04                             | Ubuntu 18.04                                 |
| Kernel            | 4.15.0-45-generic (x86_64)               | 5.0.0-1035-azure (x86_64)                    |
| File-System       | ext4                                     | ext4                                         |
| Screen Resolution | 1024x768                                 | 1152x864                                     |
| System Layer      | KVM                                      | Microsoft Hyper-V Server                     |
NOTE: This test and comparison are very high level; the purpose is to show how I usually do benchmarking and reporting.
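As an example of that workflow, beyond system-info, a single Phoronix Test Suite benchmark can be run on each instance and the saved results compared afterwards. The test profile below (pts/compress-gzip) is an illustrative choice, not necessarily one used for this comparison:
# list the test profiles that can run on this system (optional)
phoronix-test-suite list-available-tests
# download, build, and run one benchmark; results are saved
# under ~/.phoronix-test-suite/test-results/ for later comparison
phoronix-test-suite benchmark pts/compress-gzip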
Network workload
To install iperf on Ubuntu Linux, use ‘sudo apt-get install -y iperf’.
The iPerf tool requires two systems, because one system must act as a server while the other acts as a client. The client connects to the server whose network speed we are testing.
Sender (client):
ECS on Alibaba Cloud: 47.75.190.46 (Internet IP)
Listening server:
VM on Azure Cloud: 138.91.242.171 (Public IP) -> allowed UDP/TCP communication over port 5001
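Port 5001 has to be reachable on the listening side; this means a rule in the cloud firewall (the network security group on Azure) and, if an OS firewall is enabled on the VM, a local rule as well. A minimal sketch, assuming ufw is in use:
# allow iPerf's default port 5001 for both TCP and UDP
sudo ufw allow 5001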
root@testwork:~# iperf -s
------------------------------------------------------------
Server listening on TCP port 5001
TCP window size: 128 KByte (default)
------------------------------------------------------------
[  4] local 10.0.0.4 port 5001 connected with 47.75.190.46 port 50792
[ ID] Interval       Transfer     Bandwidth
[  4]  0.0-16.6 sec  2.25 MBytes  1.14 Mbits/sec
root@iZj6c5vmzbgujq3xi2m45aZ:~# iperf -c 138.91.242.171
------------------------------------------------------------
Client connecting to 138.91.242.171, TCP port 5001
TCP window size: 85.0 KByte (default)
------------------------------------------------------------
[  3] local 172.31.146.175 port 50792 connected with 138.91.242.171 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-10.2 sec  2.25 MBytes  1.85 Mbits/sec
From the above output, we can see that I got a speed of 1.85 Mbits/sec. The output shows a few more fields:
Interval: the time duration over which the data is transferred.
Transfer: the amount of data transferred.
Bandwidth: the rate at which the data is transferred.
Using iPerf, we can also test the maximum throughput achieved via UDP connections. The UDP report adds two columns: Jitter (the variation in packet arrival times) and Lost/Total Datagrams (how many packets never reached the server).
root@testwork:~# iperf -s -u
------------------------------------------------------------
Server listening on UDP port 5001
Receiving 1470 byte datagrams
UDP buffer size: 208 KByte (default)
------------------------------------------------------------
[  3] local 10.0.0.4 port 5001 connected with 47.75.190.46 port 39304
[ ID] Interval       Transfer     Bandwidth       Jitter   Lost/Total Datagrams
[  3]  0.0-10.0 sec  1.25 MBytes  1.05 Mbits/sec  0.305 ms    0/  893 (0%)
root@iZj6c5vmzbgujq3xi2m45aZ:~# iperf -c 138.91.242.171 -u
------------------------------------------------------------
Client connecting to 138.91.242.171, UDP port 5001
Sending 1470 byte datagrams, IPG target: 11215.21 us (kalman adjust)
UDP buffer size: 208 KByte (default)
------------------------------------------------------------
[  3] local 172.31.146.175 port 39304 connected with 138.91.242.171 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-10.0 sec  1.25 MBytes  1.05 Mbits/sec
[  3] Sent 893 datagrams
[  3] Server Report:
[  3]  0.0-10.0 sec  1.25 MBytes  1.05 Mbits/sec  0.000 ms    0/  893 (0%)
iPerf limits the bandwidth for UDP clients to 1 Mbit per second by default. We can change this with the -b flag, followed by the maximum bandwidth rate we wish to test against. When testing for network speed, set this number above the maximum bandwidth cap of the receiving server:
iperf -c 138.91.242.171 -u -b 1000m
This tells the client that we want to push up to 1000 Mbits per second if possible.
root@testwork:~# iperf -s -u -b 1000m
------------------------------------------------------------
Server listening on UDP port 5001
Receiving 1470 byte datagrams
UDP buffer size: 208 KByte (default)
------------------------------------------------------------
[  3] local 10.0.0.4 port 5001 connected with 47.75.190.46 port 47812
[ ID] Interval       Transfer     Bandwidth       Jitter   Lost/Total Datagrams
[  3]  0.0-10.1 sec  1.43 MBytes  1.19 Mbits/sec  3.330 ms 662244/663261 (1e+02%)
root@iZj6c5vmzbgujq3xi2m45aZ:~# iperf -c 138.91.242.171 -u -b 1000m
------------------------------------------------------------
Client connecting to 138.91.242.171, UDP port 5001
Sending 1470 byte datagrams, IPG target: 11.76 us (kalman adjust)
UDP buffer size: 208 KByte (default)
------------------------------------------------------------
[  3] local 172.31.146.175 port 47812 connected with 138.91.242.171 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-10.0 sec   930 MBytes   777 Mbits/sec
[  3] Sent 663261 datagrams
[  3] Server Report:
[  3]  0.0-10.1 sec  1.43 MBytes  1.19 Mbits/sec  0.000 ms 662244/663261 (0%)
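To read this result: the client offered 777 Mbits/sec, but the server report shows that 662244 of the 663261 datagrams were lost, so only about 1.19 Mbits/sec actually made it across the link. Rather than jumping straight to 1000m, it can help to step -b up gradually and find the last rate with near-zero loss; the target rates below are illustrative, not the ones used in this test:
# step the offered UDP rate upward; the highest rate that still
# shows near-zero loss approximates the usable bandwidth
for rate in 1m 2m 5m 10m 20m; do
    iperf -c 138.91.242.171 -u -b $rate -t 10
done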
We may also want to test both servers for the maximum amount of throughput. This can easily be done using the built-in bidirectional testing feature iPerf offers (the -d flag). We can also use the parameters below to put maximum load on the network; iPerf will sum the overall result at the end (see the combined example after this list):
· -w - increase the TCP window size
· -t 30 - the time in seconds for the test to run (by default, it is 10 seconds)
· -P 8 - the number of parallel threads (streams) to get the maximum channel load
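Putting these together, a maximum-load run between the same two hosts might look like the sketch below; the 512 KB window size is an illustrative value, not one taken from this test:
# 30-second bidirectional TCP test with 8 parallel streams
# and an enlarged TCP window
iperf -c 138.91.242.171 -d -w 512k -t 30 -P 8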
Network Outbound Bandwidth:
For other performance benchmarking, refer to: