A comparison of several VPS providers. It uses Ansible to run a series of automated benchmark tests against the VPS servers that you specify, so anyone can reproduce the tests and compare the results with their own. All test results are available for the sake of independence and transparency.
WARNING: A work in progress!

ATTENTION

If you like this project, and you would like to have more plans and providers in the comparison, please take a look at this issue.

VPS Comparison

A comparison between some VPS providers that have data centers located in Europe.

Initially I'm comparing only entry-level plans, below $5/month.

What I'm trying to show here is basically everything I would want to know before signing up with any of them. If I save you a few hours of research, like the ones I spent, I'll be glad!

VPS Providers

|-
| Company      | OVH          | Linode            | DigitalOcean  | Scaleway   | Vultr            |
|-
| Foundation   | 1999         | 2003              | 2011          | 2013       | 2014             |
| Headquarters | Roubaix (FR) | Galloway, NJ (US) | New York (US) | Paris (FR) | Matawan, NJ (US) |
| Market       | 3° largest   |                   | 2° largest    |            |                  |
| Website      | OVH          | Linode            | DigitalOcean  | Scaleway   | Vultr            |
|-

Notes:

  • The companies are sorted by the year of foundation.
  • Linode was spun-off from a company providing ColdFusion hosting (TheShore.net) that was founded in 1999.
  • Scaleway is a cloud division of Online.net (1999), itself subsidiary of the Iliad group (1990) owner also of the famous French ISP Free.
  • Vultr Holdings LLC is owned by Choopa LLC founded in 2000.
  • The Market numbers are extracted from Wikipedia and other sources.

Billing

|-
|                    | OVH | Linode | DigitalOcean | Scaleway | Vultr |
|-
| Credit Card        | Yes | Yes    | Yes          | Yes      | Yes   |
| PayPal             | Yes | Yes    | Yes          | No       | Yes   |
| Bitcoin            | No  | No     | No           | No       | Yes   |
| Affiliate/Referral | Yes | Yes    | Yes          | No       | Yes   |
| Coupon Codes       | Yes | Yes    | Yes          | Yes      | Yes   |
|-

Note:

  • Linode needs a credit card associated with the account before you can pay with PayPal.

General Features

|-
|                            | OVH    | Linode   | DigitalOcean | Scaleway                         | Vultr        |
|-
| European data centers      | 3      | 2        | 3            | 2                                | 4            |
| Documentation              | Docs   | Docs     | Docs         | Docs                             | Docs         |
| Doc. subjective valuation  | 6/10   | 9/10     | 9/10         | 6/10                             | 8/10         |
| Uptime guaranteed (SLA)    | 99,95% | 99,9%    | 99,99%       | 99,9%                            | 100%         |
| Outage refund/credit (SLA) | Yes    | Yes      | Yes          | No                               | Yes          |
| API                        | Yes    | Yes      | Yes          | Yes                              | Yes          |
| API Docs                   | API Docs | API Docs | API Docs   | API Docs                         | API Docs     |
| Services status page       | Status | Status   | Status       | Status                           | Status       |
| Support Quality            |        |          |              |                                  |              |
| Account Limits             |        |          | 10 instances | Limited instances (e.g. 50 VC1S) | 10 instances |
| Legal/ToS                  | ToS    | ToS      | ToS          | ToS                              | ToS          |
|-

Note:

  • Scaleway has four grades of SLA; the first, basic one is free, but if you want something better you have to pay a monthly fee.
  • One of the reasons the Linode documentation is so good and detailed is that they pay you $250 for writing a guide for them, if it's good enough to publish. They are a small team (about 70 people), so it makes sense.
  • The Linode API is not a RESTful API yet, but they are working on an upcoming one.
  • The default limits can usually be increased by asking support, but not at Scaleway, where you would have to pay for a higher level of support. Those limits are set by the providers to prevent account abuse.
  • Scaleway also imposes many more limits that can be looked up in the account settings (e.g. 100 images or 25 snapshots).
  • Vultr also imposes a maximum instance cost of $150 per month and requires a deposit when charges exceed $50.

European data centers

  • OVH: Gravelines (FR), Roubaix (FR), Strasbourg (FR). It also has a data center in Paris (FR), but it is not available for these plans.
  • Linode: Frankfurt (DE), London (GB)
  • DigitalOcean: Amsterdam (NL), Frankfurt (DE), London (GB)
  • Scaleway: Amsterdam (NL), Paris (FR)
  • Vultr: Amsterdam (NL), Frankfurt (DE), London (GB), Paris (FR)

Control Panel

Features

|-
|                                     | OVH               | Linode                | DigitalOcean                                  | Scaleway     | Vultr                                  |
|-
| Subjective control panel evaluation | 5/10              | 6/10                  | 8/10                                          | 5/10         | 9/10                                   |
| Graphs                              | Traffic, CPU, RAM | CPU, Traffic, Disk IO | CPU, RAM, Disk IO, Disk usage, Bandwidth, Top | No           | Monthly Bandwidth, CPU, Disk, Network  |
| Subjective graphs valuation         | 5/10              | 8/10                  | 9/10                                          | 0/10         | 8/10                                   |
| Monthly usage per instance          | No                | Yes                   | No                                            | No           | Bandwidth, Credits                     |
| KVM Console                         | Yes               | Yes (Glish)           | Yes (VNC)                                     | Yes          | Yes                                    |
| Power management                    | Yes               | Yes                   | Yes                                           | Yes          | Yes                                    |
| Reset root password                 | Yes               | Yes                   | Yes                                           | No           | No                                     |
| Reinstall instance                  | Yes               | Yes                   | Yes                                           | No           | Yes                                    |
| First provision time                | Several hours     | <1 min                | <1 min                                        | Some minutes | Some minutes                           |
| Median reinstall time               | ~12,5 min         | ~50 s                 | ~35 s                                         | N/A          | ~2,1 min                               |
| Upgrade instance                    | Yes               | Yes                   | Yes                                           | No           | Yes                                    |
| Change Linux Kernel                 | No                | Yes                   | CentOS                                        | Yes          | No                                     |
| Recovery mode                       | No                | Yes                   | Yes                                           | Yes          | Boot with custom ISO                   |
| Tag instances                       | No                | Yes                   | Yes                                           | Yes          | Yes                                    |
| Responsive design (mobile UI)       | No                | No                    | No                                            | No           | Yes                                    |
| Android App                         | Only in France    | Yes                   | Unofficial                                    | No           | Unofficial                             |
| iOS App                             | Yes               | Yes                   | Unofficial                                    | No           | Unofficial                             |
|-

Notes:

  • The OVH panel has a very old interface: effective, but antique and cumbersome.
  • Linode also has an old interface, powerful but not friendly. In the coming months, though, they are going to deliver a new control panel, currently in beta.
  • Linode lets you choose the Linux kernel version in the profile of your instance.
  • Being able to reset the root password from the control panel is not a good security measure IMHO; it's useful, but you already have the KVM console for that.
  • In Vultr you can copy or reveal the masked default root password, but not reset it. This is necessary because the password is never sent by email.
  • You can reinstall an instance using the same OS/app or choosing another one.
  • The Linode reinstall time (they call it rebuild) does not include the boot time; the instance is not started automatically.
  • In Vultr you can use a custom ISO, or choose one from the library (such as SystemRescueCD or Trinity Rescue Kit), to boot your instance and perform recovery tasks.
  • Linode has an additional console (Lish) that allows you to control your instance even when it is unreachable by SSH, and to perform rescue or management tasks.
  • In Scaleway you have to set a root password first to get access to the KVM console.
  • The Scaleway control panel at the basic account/SLA level is very limited and counter-intuitive; I don't know if this improves with higher levels.
  • With Scaleway it once happened to me that provisioning took more than 45 minutes, and I had to cancel the operation (which was not easy, by the way).
  • In OVH the first provision of a VPS server is a manual process, and you have to pass a weird identification protocol along the way, including an incoming phone call in my case.

Instance creation

Operating Systems

|-
|          | OVH                                | Linode                                                            | DigitalOcean                   | Scaleway                              | Vultr                                       |
|-
| Linux    | Arch Linux, CentOS, Debian, Ubuntu | Arch, CentOS, Debian, Fedora, Gentoo, OpenSUSE, Slackware, Ubuntu | CentOS, Debian, Fedora, Ubuntu | Alpine, CentOS, Debian, Gentoo, Ubuntu | CentOS, Debian, Fedora, Ubuntu             |
| BSD      | No                                 | No                                                                | FreeBSD                        | No                                    | FreeBSD, OpenBSD                            |
| Windows  | No                                 | No                                                                | No                             | No                                    | Windows 2012 R2 (16$) or Windows 2016 (16$) |
| Other OS | No                                 | No                                                                | CoreOS                         | No                                    | CoreOS                                      |
|-

Note:

  • OVH also offers two Linux desktop distributions: Kubuntu and OVH Release 3.

One-click Apps

|-
|                | OVH           | Linode | DigitalOcean          | Scaleway        | Vultr                  |
|-
| Docker         | Yes           | No     | Yes                   | Yes             | Yes                    |
| Stacks         | LAMP          | No     | LAMP, LEMP, ELK, MEAN | LEMP, ELK       | LAMP, LEMP             |
| Drupal         | Yes           | No     | Yes                   | Yes             | Yes                    |
| WordPress      | Yes           | No     | Yes                   | No              | Yes                    |
| Joomla         | Yes           | No     | No                    | No              | Yes                    |
| Django         | No            | No     | Yes                   | No              | No                     |
| RoR            | No            | No     | Yes                   | No              | No                     |
| GitLab         | No            | No     | Yes                   | Yes             | Yes                    |
| Node.js        | No            | No     | Yes                   | Yes             | No                     |
| E-Commerce     | PrestaShop    | No     | Magento               | PrestaShop      | Magento, PrestaShop    |
| Personal cloud | Cozy          | No     | NextCloud, ownCloud   | OwnCloud, Cozy  | NextCloud, ownCloud    |
| Panels         | Plesk, cPanel | No     | No                    | Webmin          | cPanel (15$), Webmin   |
|-

Notes:

  • Some providers offer more one-click apps that I do not include here to save space.
  • Some of these apps require, in some providers, a bigger and more expensive plan than the entry-level ones below $5 analyzed here.
  • Linode does not offer you any one-click app. Linode is old-school: you can do it yourself, and Linode gives you plenty of detailed documentation to do it that way.
  • OVH uses Ubuntu, Debian or CentOS as the OS for its apps.
  • DigitalOcean uses Ubuntu as the OS for all of its apps.
  • Vultr uses CentOS as the OS for all of its apps.
  • OVH also offers Dokku on Ubuntu.
  • Do you really need a panel (like cPanel)? They are usually a considerable security risk, with several vulnerabilities and admin rights.

Other features

|-
|                     | OVH | Linode       | DigitalOcean | Scaleway | Vultr |
|-
| ISO images library  | No  | No           | No           | No       | Yes   |
| Custom ISO image    | No  | Yes          | No           | Yes      | Yes   |
| Install scripts     | No  | StackScripts | Cloud-init   | No       | iPXE  |
| Preloaded SSH keys  | Yes | No           | Yes          | Yes      | Yes   |
|-

Notes:

  • Linode lets you install virtually any OS on your instance in the old-school way, almost as if you were dealing with bare metal. The instance does not even boot itself at the end; you have to boot it yourself from the control panel.
  • The Vultr ISO image library includes several ISOs, such as Alpine, Arch, Finnix, FreePBX, pfSense, RancherOS, SystemRescueCD, and Trinity Rescue Kit.
  • The Vultr "Custom ISO image" feature allows you to install virtually any OS supported by KVM and the server architecture.
  • Linode does not preload your SSH keys into the instance automatically, but it's trivial to do it manually anyway (ssh-copy-id).
  • Scaleway has a curious way to provide custom images, a service called Image Builder: you create an instance with the Image Builder, and from there you can create your own image using a Docker-based build system that produces images able to run on real hardware.

Security

|-
|                              | OVH  | Linode  | DigitalOcean | Scaleway | Vultr |
|-
| 2FA                          | Yes  | Yes     | Yes          | Yes      | Yes   |
| Restrict access IPs          | Yes  | Yes     | No           | No       | No    |
| Account Login Logs           | No   | Partial | Yes          | No       | No    |
| SSL Quality                  | A-   | A+      | A+           | A        | A     |
| DNS Spy Report               | B    | B       | B            | B        | C     |
| HTTP Security headers        | F    | E       | C            | F        | D     |
| Send root password by email  | Yes  | No      | No           | No       | No    |
| Account password recovery    | Link | Link    | Link         | Link     | Link  |
|-

Notes:

  • Sending plain text passwords by email is a very bad security practice.
  • OVH sends you the root password by email: optionally if you use SSH keys, and always (in plain text) if you don't.
  • Linode never sends you the root password, because you are the one who sets it (and who even boots the instance for the first time).
  • DigitalOcean sends you the password, in plain text, only if you don't use SSH keys.
  • Vultr never sends you the root password, only the ones needed for one-click apps.
  • Linode only registers the last login time of each user, and does not register the IP.
  • Account password recovery should always work through a reset link sent by email, and should never return your current password (let alone in plain text). You never know, though... and if you find a provider doing that, you don't need to know anything else: get out of there as soon as possible and never reuse that password (or any other).
  • The DNS Spy report is very useful for those who are going to use the provider to manage their domains.

Plans (≤5$)

Features

|-
|                        | OVH              | Linode                  | DigitalOcean                          | Scaleway            | Vultr                                  | Vultr                                  |
|-
| Name                   | VPS SSD 1        | Linode 1024             | 5bucks                                | VC1S                | 20GB SSD                               | 25GB SSD                               |
| Monthly Price          | 3,62€            | 5$                      | 5$                                    | 2,99€               | 2,5$                                   | 5$                                     |
| CPU / Threads          | 1/1              | 1/1                     | 1/1                                   | 1/2                 | 1/1                                    | 1/1                                    |
| CPU model              | Xeon E5v3 2.4GHz | Xeon E5-2680 v3 2.5GHz  | Xeon E5-2650L v3 1.80 GHz             | Atom C2750 2.4 GHz  | Intel Xeon 2.4 GHz                     | Intel Xeon 2.4 GHz                     |
| RAM                    | 2 GB             | 1 GB                    | 512 MB                                | 2 GB                | 512 MB                                 | 1 GB                                   |
| SSD Storage            | 10 GB            | 20 GB                   | 20 GB                                 | 50 GB               | 20 GB                                  | 25 GB                                  |
| Traffic                |                  | 1 TB                    | 1 TB                                  |                     | 500 GB                                 | 1 TB                                   |
| Bandwidth (In / Out)   | 100/100 Mbps     | 40/1 Gbps               | 1/10 Gbps                             | 200/200 Mbps        | 1/10 Gbps                              | 1/10 Gbps                              |
| Virtualization         | KVM              | KVM (Qemu)              | KVM                                   | KVM (Qemu)          | KVM (Qemu)                             | KVM (Qemu)                             |
| Anti-DDoS Protection   | Yes              | No                      | No                                    | No                  | 10$                                    | 10$                                    |
| Backups                | No               | 2$                      | 1$                                    | No                  | 0,5 $                                  | 1$                                     |
| Snapshots              | 2,99$            | Free (up to 3)          | 0,05$ per GB                          | 0,02 € per GB       | Free (Beta)                            | Free (Beta)                            |
| IPv6                   | Yes              | Yes                     | Optional                              | Optional            | Optional                               | Optional                               |
| Additional public IP   | 2$ (up to 16)    | Yes                     | Floating IPs (0,006$ hour if inactive) | 0,9€ (up to 10)    | 2$ (up to 2) / 3$ floating IPs         | 2$ (up to 2) / 3$ floating IPs         |
| Private Network        | No               | Optional                | Optional                              | No (dynamic IPs)    | Optional                               | Optional                               |
| Firewall               | Yes (by IP)      | No                      | Yes (by group)                        | Yes (by group)      | Yes (by group)                         | Yes (by group)                         |
| Block Storage          | From 5€ - 50GB   | No                      | From 10$ - 100GB                      | From 1€ - 50GB      | From 1$ - 10GB                         | From 1$ - 10GB                         |
| Monitoring             | Yes (SLA)        | Yes (metrics, SLA)      | Beta (metrics, performance, SLA)      | No                  | No                                     | No                                     |
| Load Balancer          | 13$              | 20$                     | 20$                                   | No                  | High availability (floating IPs & BGP) | High availability (floating IPs & BGP) |
| DNS Zone               | Yes              | Yes                     | Yes                                   | No                  | Yes                                    | Yes                                    |
| Reverse DNS            | Yes              | Yes                     | Yes                                   | Yes                 | Yes                                    | Yes                                    |
|-

Note:

  • OVH hides its real CPU, but what they claim on their website matches the hardware information reported in the tests (an E5-2620 v3 or E5-2630 v3).
  • Vultr also hides the real CPU, but it could be a Xeon E5-2620/2630 v3 for the 20GB SSD plan, and probably a v4 for the 25GB SSD one.
  • The Vultr $2.50/month plan is currently only available in Miami, FL and New York, NJ.
  • OVH throttles the network speed to 1 Mbps after you exceed 10 TB of monthly traffic.
  • The prices for DigitalOcean and Vultr do not include taxes (VAT) for European countries.
  • Linode allows you to have additional public IPs for free, but you have to request them from support and justify that you need them.
  • Linode's Longview monitoring system is free for up to 10 clients, but there is also a professional version that starts at $20/month for three clients.
  • Linode does not currently support block storage, but they are working on offering the service in the upcoming months.
  • Linode snapshots (called Images) are limited to 2GB per Image, with a total of 10GB of Image storage and 3 Images per account. Disks of recently rebuilt instances are automatically stored as Images.
  • Scaleway also offers, for the same price, a BareMetal plan (with 4 ARM cores), but as it is a dedicated server I do not include it here.
  • Scaleway does not offer Anti-DDoS protection, but they maintain that they use Online.net's standard protection.
  • Scaleway uses dynamic IPs as private IPs by default, and you can only opt for static IPs if you remove the public IP from the instance.
  • Scaleway uses dynamic IPv6, meaning that the IPv6 address will change if you stop your server. You can't even opt to reserve an IPv6 address.

Tests

All the numbers shown here can be found in the /logs folder of this repository. Keep in mind that I usually show averages over several iterations of the same test.

The graphs are generated with gnuplot directly from the tables of this README.org org-mode file. The tables themselves are generated automatically by a Python script (/ansible/roles/common/files/gather_data.py) that gathers the data contained in the log files. To be able to add more tests without touching the script, the criteria used to gather the data and generate the tables are stored in a separate JSON file (/ansible/roles/common/files/criteria.json). The output of that script is a /logs/tables.org file that contains tables like this:

|-
| | Do-5Bucks-Ubuntu | Linode-Linode1024-Ubuntu | Ovh-Vpsssd1-Ubuntu | Scaleway-Vc1S-Ubuntu | Vultr-20Gbssd-Ubuntu | Vultr-25Gbssd-Ubuntu
|-
| Lynis (hardening index) |59 | 67 | 62 | 64 | 60 | 60
| Lynis (tests performed) |220 | 220 | 220 | 225 | 230 | 231
|-

That does not look much like a table, but thanks to the awesome org-mode table manipulation features, a single Ctrl-c Ctrl-c key combination turns it into this:

|-------------------------+------------------+--------------------------+--------------------+----------------------+----------------------+----------------------|
|                         | Do-5Bucks-Ubuntu | Linode-Linode1024-Ubuntu | Ovh-Vpsssd1-Ubuntu | Scaleway-Vc1S-Ubuntu | Vultr-20Gbssd-Ubuntu | Vultr-25Gbssd-Ubuntu |
|-------------------------+------------------+--------------------------+--------------------+----------------------+----------------------+----------------------|
| Lynis (hardening index) |               59 |                       67 |                 62 |                   64 |                   60 |                   60 |
| Lynis (tests performed) |              220 |                      220 |                220 |                  225 |                  230 |                  231 |
|-------------------------+------------------+--------------------------+--------------------+----------------------+----------------------+----------------------|

And finally, with a little magic from org-mode, org-plot and gnuplot, that table automatically generates a graph like the ones shown here from only a few lines of text (see this file in raw mode to see how) and the Ctrl-c " g key combination over those lines. Thus, the only manual step is to copy/paste those tables from that file into this one; with only two key combinations per table/graph the job is almost done (you can move/add/delete columns very easily with org-mode).
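For reference, the org-plot header for a table like the one above looks roughly like this (a sketch only; the exact options used in the raw README.org may differ):

```
#+PLOT: title:"Lynis" ind:1 deps:(2 3 4 5 6 7) type:2d with:histograms set:"yrange [0:]"
```

Placed on the line before a table, `ind:1` takes the first column as labels, `deps` selects the data columns, and Ctrl-c " g renders the histogram with gnuplot.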

There is another Python script (/ansible/roles/common/files/clean_ips.py) that automatically removes any public IPv4/IPv6 address from the log files (only from those that need it).
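The core of such an IP scrubber is a couple of regular expressions. A minimal sketch of the idea (an assumption for illustration; not necessarily how clean_ips.py is actually implemented):

```python
import re

# Loose match for dotted-quad IPv4 addresses (does not validate octet ranges).
IPV4 = re.compile(r'\b(?:\d{1,3}\.){3}\d{1,3}\b')
# Loose match for hex-grouped IPv6 forms, including :: abbreviations.
IPV6 = re.compile(r'\b(?:[0-9A-Fa-f]{1,4}:){2,7}[0-9A-Fa-f:]+\b')

def scrub(text):
    """Replace anything that looks like a public IPv4/IPv6 with a placeholder."""
    text = IPV4.sub('X.X.X.X', text)
    return IPV6.sub('X:X::X', text)

print(scrub('ssh root@203.0.113.7 or 2001:db8::1 worked'))
```

Being deliberately loose, the IPv6 pattern can also catch colon-separated timestamps, which is usually acceptable for log anonymization.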

WARNING

Performance tests can be affected by location, data center and VPS host neighbors. This is inherent to the very nature of the VPS service and can vary very significantly between instances of the same plan. For example, during the tests performed for this comparison I found that, in one plan (not included here, because it costs more than $5/month), a new instance that would usually score around ~1700 in UnixBench only achieved an index of 629.8. That's a considerable amount of lost performance in a VPS server... for the same price! Performance can also vary over time, due to the VPS host neighbors. Because of this, I discarded any instance that reported poor performance, and I only show "typical" values for a given plan.

Automation

I have chosen Ansible to automate the tests and collect information from the VPS servers because, once the roles are written down, it's pretty easy for anyone to replicate them and get their own results with little effort.

The first thing you have to do is edit the /ansible/hosts file to use your own servers. The template provided contains no real IPs, but it serves as a guide on how to manage them. For example, for this server:

[digitalocean]
do-5bucks-ubuntu          ansible_host=X.X.X.X   ansible_python_interpreter=/usr/bin/python3

You have to put your own server's IP there. The interpreter path is only needed when a Python 2 interpreter is not available by default (as in Ubuntu). I'm also using group variables to declare the default user of a server, and I'm grouping servers by provider. So a complete example for a new provider, using a new instance running Ubuntu, would look like this:

[new_provider]
new_provider-plan_name-ubuntu   ansible_host=X.X.X.X   ansible_python_interpreter=/usr/bin/python3

[new_provider:vars]
ansible_user=root

And you can add as many servers/providers as you want. If you are already familiar with Ansible, you can adjust the inventory file (/ansible/hosts) as you need.

Then you can start testing the servers/providers by running the Ansible playbook, but it's a good idea to test access first with a ping (from the /ansible folder):

$ ansible all -m ping

If it's the first time you are SSHing into a server, you will probably be asked to add it to the ~/.ssh/known_hosts file.

Then you can easily execute all the tasks on the servers with:

$ ansible-playbook site.yml -f 6

With the -f 6 option you specify how many forks to create to execute the tasks in parallel; the default is 5, but as I use 6 VPS plans here, I also use 6 forks.

You can also run only selected tasks/roles by using tags. You can list all the available tasks:

$ ansible-playbook site.yml --list-tasks

And run only the tags that you want:

$ ansible-playbook site.yml -t benchmark

All the roles are set to store the logs of the tests in the /logs/ folder using the /logs/server_name folder structure.

WARNING:

All the tests included here are as "atomic" as possible; that is, each of them tries to leave the server in a state as close as possible to the one it was in before the test, except that I keep the logs. By the way, the logs are stored in the /tmp folder intentionally, because they will disappear when you reboot the instance. There are three main reasons why I try to make the tests as atomic as possible instead of factoring out common tasks and performing them only once:

  • Some plans have so little disk space available that, if I did not erase auxiliary files and packages between tests, they would soon run out of space; worse, some of them to the point of becoming unreachable over SSH (e.g. OVH), making manual intervention in the control panel necessary and ruining the automation advantage that Ansible gives us.
  • I want as little interference between tests as possible, and I try to always perform them in a state close to the instance's default. Some of them (e.g. lynis, ports) change their results significantly if they are performed after some of the package/configuration changes made by other tests.
  • This way, with a clever use of the Ansible tags, you can perform individual tests without needing to execute the entire Ansible playbook.

Perhaps the only major drawback of this approach is that it consumes more time overall when you perform all the tests together.

Location and OS

All the instances were allocated in London (GB), except for OVH VPS SSD 1 in Gravelines (FR) and Scaleway VC1S in Paris (FR).

All the instances were running Ubuntu 16.04 LTS.

Currently the Vultr 20GB SSD plan is sold out and temporarily unavailable, so I only performed some tests (and some in a previous version) on an instance that I deleted before new ones became unavailable. I intend to retake the tests as soon as the plan is available again.

System Performance

UnixBench

UnixBench, as described in its page:

The purpose of UnixBench is to provide a basic indicator of the performance of a Unix-like system; hence, multiple tests are used to test various aspects of the system’s performance. These test results are then compared to the scores from a baseline system to produce an index value, which is generally easier to handle than the raw scores. The entire set of index values is then combined to make an overall index for the system.

Keep in mind that this index is heavily influenced by raw CPU power and does not reflect other aspects, like disk performance, very well. In this index, more is better.

I only execute this test once, because it takes some time -about 30-45 minutes, depending on the server- and the variations between runs are almost never significant.

|-
| Plan                         | OVH VPS SSD 1 | Linode 1024 | DO 5bucks | Scaleway VC1S | Vultr 20GB SSD | Vultr 25GB SSD |
|-
| UnixBench (index, 1 thread)  | 1598.1        | 1248.6      | 1264.6    | 629.8         | 1555.1         | 1579.9         |
| UnixBench (index, 2 threads) |               |             |           | 1115.1        |                |                |
|-

./img/unixbench.png

Individual test indexes of the UnixBench benchmark.

In this table I show the individual test results that compose the UnixBench benchmark index.

|-
| Plan                                    | OVH VPS SSD 1 | Linode 1024 | DO 5bucks | Scaleway VC1S | Vultr 20GB SSD | Vultr 25GB SSD |
|-
| Dhrystone 2 using register variables    | 2510.2        | 2150.0      | 2061.0    | 1057.9        | 2530.5         | 2474.5         |
| Double-Precision Whetstone              | 583.6         | 539.7       | 474.6     | 367.5         | 578.2          | 656.9          |
| Execl Throughput                        | 1038.9        | 941.8       | 799.5     | 400.0         | 963.8          | 1027.8         |
| File Copy 1024 bufsize 2000 maxblocks   | 2799.5        | 1972.7      | 2222.5    | 1094.4        | 2775.3         | 2608.8         |
| File Copy 256 bufsize 500 maxblocks     | 1908.7        | 1286.2      | 1440.1    | 752.6         | 1888.8         | 1851.4         |
| File Copy 4096 bufsize 8000 maxblocks   | 3507.1        | 2435.6      | 2692.6    | 1729.9        | 3248.4         | 3212.1         |
| Pipe Throughput                         | 1846.5        | 1472.1      | 1468.7    | 894.0         | 1813.6         | 1789.6         |
| Pipe-based Context Switching            | 744.0         | 623.2       | 597.2     | 60.3          | 739.0          | 746.3          |
| Process Creation                        | 904.5         | 690.5       | 706.8     | 288.2         | 848.1          | 949.9          |
| Shell Scripts (1 concurrent)            | 1883.2        | 1442.0      | 1501.9    | 801.9         | 1787.8         | 1851.2         |
| Shell Scripts (8 concurrent)            | 1725.0        | 1144.4      | 1362.7    | 1221.8        | 1665.9         | 1679.1         |
| System Call Overhead                    | 2410.1        | 2034.4      | 1955.6    | 1154.7        | 2461.0         | 2366.4         |
|-

./img/unixbench_detailed.png

Notes:

  • Scaleway VC1S is the only plan that offers two CPU threads, so in the table and the graph I only show the single-thread numbers, for a fairer comparison.
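As a sanity check: the overall UnixBench index is the geometric mean of the individual test indexes, and recomputing it from the OVH column above almost exactly reproduces the reported 1598.1:

```python
from math import prod

# Individual UnixBench test indexes for OVH VPS SSD 1 (from the table above).
ovh = [2510.2, 583.6, 1038.9, 2799.5, 1908.7, 3507.1,
       1846.5, 744.0, 904.5, 1883.2, 1725.0, 2410.1]

# The overall index is the geometric mean of the individual indexes.
index = prod(ovh) ** (1 / len(ovh))
print(round(index, 1))  # ~1598, matching the reported overall index
```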

Sysbench

Notes:

  • I'm only using one thread here for the Scaleway plan, for a fairer comparison.

Sysbench is a popular benchmarking tool that can test CPU, file I/O, memory, threads, mutex and MySQL performance. One of its key features is that it is scriptable and can perform complex tests, but I rely here on several well-known standard tests, basically to make them easy to compare with others you can find across the web.

Sysbench cpu

In this test the CPU verifies a given prime number using a brute-force algorithm that performs all the divisions between that number and every number from 2 up to its square root. It's a classic CPU stress test, and a more powerful CPU usually takes less time, thus less is better.
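The kind of work sysbench does here can be sketched in a few lines; a hypothetical (and much slower, pure-Python) version of the trial-division check:

```python
from math import isqrt

def is_prime(n: int) -> bool:
    """Trial division: try every divisor from 2 up to sqrt(n)."""
    if n < 2:
        return False
    for d in range(2, isqrt(n) + 1):
        if n % d == 0:
            return False
    return True

# sysbench's cpu test verifies primes up to a limit (the --cpu-max-prime
# option); counting primes below 10000 exercises the same loop.
print(sum(is_prime(n) for n in range(10000)))  # 1229 primes below 10000
```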

|-
| Plan                  | OVH VPS SSD 1 | Linode 1024 | DO 5bucks | Scaleway VC1S | Vultr 20GB SSD | Vultr 25GB SSD |
|-
| Sysbench CPU (seconds) | 31.922       | 37.502      | 39.080    | 46.130        | 30.222         | 30.544         |
|-

./img/sysbench_cpu.png

Sysbench memory

This test measures memory performance: it allocates a memory buffer and reads/writes from it randomly until the whole buffer has been covered. In this test, more is better.

|-
| Plan                           | OVH VPS SSD 1 | Linode 1024 | DO 5bucks | Scaleway VC1S | Vultr 20GB SSD | Vultr 25GB SSD |
|-
| Sysbench RAM rand read (Mb/s)  | 2279.750      | 1334.162    | 1262.542  | 1228.898      |                | 2146.132       |
| Sysbench RAM rand write (Mb/s) | 2196.174      | 1310.624    | 1221.276  | 1181.516      |                | 2062.046       |
|-

./img/sysbench_ram_mb.png

|-
| Plan                           | OVH VPS SSD 1 | Linode 1024 | DO 5bucks | Scaleway VC1S | Vultr 20GB SSD | Vultr 25GB SSD |
|-
| Sysbench RAM rand read (IOPS)  | 2334463       | 1366183     | 1292842   | 1258393       |                | 2197641        |
| Sysbench RAM rand write (IOPS) | 2248883       | 1342079     | 1250589   | 1209873       |                | 2111535        |
|-

./img/sysbench_ram_iops.png
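The two tables above are mutually consistent: dividing throughput by operations per second gives a 1 KiB block per operation, which is sysbench memory's default block size (--memory-block-size=1K). For the OVH column, for example:

```python
# Throughput (MB/s) and operations/s for OVH, from the two tables above.
mb_per_s = 2279.750
ops_per_s = 2334463

# Implied block size in KiB: (MB/s * 1024 KiB/MB) / (ops/s).
block_kib = mb_per_s * 1024 / ops_per_s
print(round(block_kib, 3))  # ≈ 1.0 KiB per operation
```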

Sysbench fileio

Here it's the file system that is put to the test. It measures disk input/output operations with random reads and writes. The numbers are more reliable when the total file size is much greater than the amount of available memory, but due to the disk space limitations of some plans I had to restrict it to only 8GB. In this test, more is better.

Notes:

  • It's very clear that something is going on with OVH in this plan: in all the tests of this kind that I did, the numbers were always close to (or even exactly) 1000 IOPS and around 4 MB/s. The only explanation that occurs to me is that they are limited on purpose. Some other clients with this plan seem not to have this problem, while others complain about the same results I got.
|-
| Plan                            | OVH VPS SSD 1 | Linode 1024 | DO 5bucks | Scaleway VC1S | Vultr 20GB SSD     | Vultr 25GB SSD |
|-
| Sysbench file rand read (Mb/s)  | 4.813         | 19.240      | 48.807    | 41.353        | Temp. unavailable  | 23.022         |
| Sysbench file rand write (Mb/s) | 4.315         | 5.529       | 21.400    | 2.482         | Temp. unavailable  | 17.510         |
|-

./img/sysbench_fileio_mb.png

|-
| Plan                            | OVH VPS SSD 1 | Linode 1024 | DO 5bucks | Scaleway VC1S | Vultr 20GB SSD     | Vultr 25GB SSD |
|-
| Sysbench file rand read (IOPS)  | 1232          | 4925        | 12495     | 10586         | Temp. unavailable  | 5984           |
| Sysbench file rand write (IOPS) | 1105          | 1415        | 5478      | 635           | Temp. unavailable  | 4482           |
|-

./img/sysbench_fileio_iops.png
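Here, too, the MB/s and IOPS figures are consistent with each other: dividing throughput by operations per second yields a 4 KiB block size in every column, which suggests the test was run with --file-block-size=4K (sysbench fileio's default is 16K; an inference from the numbers, not something stated in the logs quoted here):

```python
# (MB/s, IOPS) pairs for random reads, from the two tables above
# (OVH, Linode, DigitalOcean, Scaleway).
pairs = [(4.813, 1232), (19.240, 4925), (48.807, 12495), (41.353, 10586)]

# Implied block size in KiB for each plan: (MB/s * 1024) / IOPS.
blocks = [round(mbps * 1024 / iops, 1) for mbps, iops in pairs]
print(blocks)  # [4.0, 4.0, 4.0, 4.0]
```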

Sysbench oltp (database)

Here the test measures database performance. I used the MySQL database for these tests, but the results should also apply to the MariaDB database. More requests per second is better, while a lower 95th percentile is better.
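In the table below, the "request approx. 95%" row means that 95% of the requests completed within that many milliseconds. A minimal sketch of how such a percentile is read off a latency sample (nearest-rank method; sysbench's exact interpolation may differ):

```python
from math import ceil

def percentile(samples, pct):
    """Nearest-rank percentile: smallest value covering pct% of the sample."""
    ordered = sorted(samples)
    rank = ceil(pct / 100 * len(ordered))
    return ordered[rank - 1]

# Hypothetical request latencies in ms.
latencies = [180, 190, 195, 200, 202, 205, 210, 215, 240, 400]
print(percentile(latencies, 95))  # 400: only 400 covers 95% of 10 samples
```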

|-
| Plan                     | OVH VPS SSD 1 | Linode 1024 | DO 5bucks | Scaleway VC1S | Vultr 20GB SSD | Vultr 25GB SSD |
|-
| DB R/W (requests/second) | 245.590       | 212.422     | 32.266    | 176.700       | 245.127        | 243.832        |
| request approx. 95% (ms) | 203.210       | 242.100     | 218.490   | 268.086       | 203.410        | 205.786        |
|-

./img/sysbench_oltp.png

fio

fio is a benchmarking tool used to measure the performance of I/O operations, usually oriented to disk workloads, although you can use it to measure network, CPU and memory I/O as well. It's scriptable and can simulate complex workloads, but I use it here in a simple way to measure disk performance. In this test, more is better.

|-
| Plan             | OVH VPS SSD 1 | Linode 1024 | DO 5bucks | Scaleway VC1S | Vultr 20GB SSD | Vultr 25GB SSD |
|-
| Read IO (MB/s)   | 3.999         | 111.622     | 581.851   | 266.779       | 249.672        | 244.385        |
| Write IO (MB/s)  | 3.991         | 93.6        | 35.317    | 84.684        | 192.748        | 194.879        |
|-

./img/fio_io.png

|-
| Plan       | OVH VPS SSD 1 | Linode 1024 | DO 5bucks | Scaleway VC1S | Vultr 20GB SSD | Vultr 25GB SSD |
|-
| Read IOPS  | 999           | 27905       | 145487    | 66694         | 62417          | 60913          |
| Write IOPS | 997           | 23399       | 8828      | 21170         | 48186          | 48719          |
|-

./img/fio_iops.png

dd

A classic: the ubiquitous dd tool has been used forever by tons of sysadmins for diverse purposes. I use here a pair of well-known quick tests to measure CPU and disk performance. They are not very reliable (e.g. the disk test is only a sequential operation), but they are good enough to get an idea, and I include them because many people use them. In the CPU test less is better, and the opposite in the disk test.

|-
| Plan             | OVH VPS SSD 1 | Linode 1024 | DO 5bucks | Scaleway VC1S | Vultr 20GB SSD | Vultr 25GB SSD |
|-
| dd CPU (seconds) | 2.684         | 2.935       | 3.292     | 4.199         | 2.667          | 2.715          |
|-

./img/dd_cpu.png

|-
| Plan         | OVH VPS SSD 1 | Linode 1024 | DO 5bucks | Scaleway VC1S | Vultr 20GB SSD | Vultr 25GB SSD |
|-
| dd IO (MB/s) | 550           | 467.4       | 702.6     | 163.6         | 477            | 458.2          |
|-

./img/dd_io.png

compiler

This test measures the time in seconds that a server takes to compile the MariaDB server. It is not a synthetic test, and it gives you a more realistic workload for comparison. It also helps to reveal the flaws that some plans have due to their limitations (e.g. CPU power in Scaleway and available memory in DO). In this test, less is better.

|-
| Plan                      | OVH VPS SSD 1 | Linode 1024 | DO 5bucks     | Scaleway VC1S | Vultr 20GB SSD     | Vultr 25GB SSD |
|-
| Compile MariaDB (seconds) | 1904.7        | 3070.2      | out of memory | 5692.7        | Temp. unavailable  | 2069.3         |
|-

./img/compile_mariadb.png

Notes:

  • The compilation in DO fails at 65%, after about 35 minutes; the process is killed when it runs out of memory.

transcode video

In this test the measure is the frames per second achieved transcoding a video with ffmpeg (or avconv in Debian). This is also a more realistic way to compare the plans, because it is a more real workload (even if it is not usually performed on VPS servers) and it stresses the CPU heavily, while also making good use of the disk and memory. In this test, more is better.

|-
| Plan | OVH VPS SSD 1 | Linode 1024 | DO 5bucks     | Scaleway VC1S | Vultr 20GB SSD     | Vultr 25GB SSD |
|-
| FPS  | 5.9           | 4.7         | out of memory | 3.2           | Temp. unavailable  | 5.6            |
|-

./img/transcode.png

Note:

  • In DO the process is killed when it runs out of memory.

Network Performance

downloads

This test tries to measure the average network speed downloading a 100 Mbit file, and the average sustained speed downloading a 10 Gbit file, from various locations. I include some files that are on the same provider network as the plans compared here, to see how much influence this factor has (remember that Scaleway belongs to Online.net). The bash script used includes more files and locations, but I only use some of them to limit the monthly bandwidth usage of the plans. In this test, more is better.

100Mbit file IPv4

|-
| Plan              | OVH VPS SSD 1 | Linode 1024 | DO 5bucks | Scaleway VC1S | Vultr 20GB SSD     | Vultr 25GB SSD |
|-
| Cachefly CDN      | 11.033        | 84.367      | 123       | 82.567        | Temp. unavailable  | 182.333        |
| DigitalOcean (GB) | 11.9          | 90.767      | 137       | 79.633        |                    | 148.333        |
| LeaseWeb (NL)     | 11.9          | 100.067     | 87.867    | 105.667       |                    | 162.333        |
| Linode (GB)       | 11.9          | 110.667     | 125.333   | 77.233        |                    | 134.667        |
| Online.net (FR)   | 11.9          | 17.906      | 6.200     | 110.3         |                    | 73.267         |
| OVH (FR)          | 12            | 43.105      | 3.9       | 41.8          |                    |                |
| Softlayer (FR)    | 11.8          | 34.067      | 77.267    | 52.1          |                    | 79.533         |
| Vultr (GB)        | 11.9          | 32.867      | 121.667   | 60.2          |                    | 195            |
|-

./img/downloads_100v4.png

100Mbit file IPv6

|-
| Plan              | OVH VPS SSD 1 | Linode 1024 | DO 5bucks | Scaleway VC1S | Vultr 20GB SSD     | Vultr 25GB SSD |
|-
| DigitalOcean (GB) |               | 89.7        | 145.667   | 113           | Temp. unavailable  | 146            |
| LeaseWeb (NL)     |               | 98.7        | 13.6      | 109.967       |                    | 174.333        |
| Linode (GB)       |               | 109.667     | 126.333   | 111.333       |                    | 113.333        |
| Softlayer (FR)    |               | 42.223      | 91.567    | 31.233        |                    | 63.633         |
|-

./img/downloads_100v6.png

10Gbit file IPv4

|-
| Plan            | OVH VPS SSD 1 | Linode 1024 | DO 5bucks | Scaleway VC1S | Vultr 20GB SSD     | Vultr 25GB SSD |
|-
| CDN77 (NL)      | 11.967        | 91.6        | 65.9      | 120.667       | Temp. unavailable  | 161.667        |
| Online.net (FR) | 11.933        | 21.467      | 64.333    | 117.333       |                    | 158.333        |
| OVH (FR)        | 11.967        | 54.2        | 41.15     | 37.867        |                    | 158            |
|-

./img/downloads_10gv4.png

speedtest

This test uses the speedtest.net service to measure the average download/upload network speed from the VPS server. To do that I use the awesome speedtest-cli Python script, which makes it possible from the command line.

Keep in mind that this test is not very reliable, because it depends a lot on the network capabilities and status of the speedtest nodes (I always try to choose the fastest node in each city). But it gives you an idea of the network interconnections of each provider.

In these tests more is better.

Nearest location

|-
| Plan                     | OVH VPS SSD 1 | Linode 1024 | DO 5bucks | Scaleway VC1S | Vultr 20GB SSD     | Vultr 25GB SSD |
|-
| Nearest Download (Mb/s)  | 99.487        | 719.030     | 743.270   | 815.250       | Temp. unavailable  | 584.740        |
| Nearest Upload (Mb/s)    | 80.552        | 273.677     | 464.403   | 288.130       |                    | 94.037         |
|-

./img/speedtest_near.png

European cities download

|-
| Plan      | OVH VPS SSD 1 | Linode 1024 | DO 5bucks | Scaleway VC1S | Vultr 20GB SSD     | Vultr 25GB SSD |
|-
| Madrid    | 98.940        | 390.947     | 376.187   | 367.177       | Temp. unavailable  | 535.477        |
| Barcelona | 98.550        | 319.777     | 489.210   | 558.573       |                    | 796.617        |
| Paris     | 96.237        | 343.067     | 720.700   | 339.764       |                    | 93.723         |
| London    | 98.897        | 1395.290    | 1260.607  | 766.277       |                    | 3050.463       |
| Berlin    | 94.233        | 309.860     | 525.137   | 453.267       |                    | 943.980        |
| Rome      | 98.910        | 321.69      | 527.560   | 636.857       |                    | 964.350        |
|-

./img/speedtest_eur_down.png

European cities upload

PlanOVH VPS SSD 1Linode 1024DO 5bucksScaleway VC1SVultr 20GB SSDVultr 25GB SSD
Madrid87.937151.977172.43757.333Temp. unavailable128.560
Barcelona85.670152.757148.08041.480177.963
Paris91.173182.267337.737199.737169.450
London86.360302.350282.380107.260489.013
Berlin86.35399.223206.17075.100194.157
Rome87.387116.9044.35059.053121.390

./img/speedtest_eur_up.png

Web Performance

I’m going to use two popular blog platforms to benchmark the web performance of each instance: WordPress and Ghost. To minimize the hassle, avoid any controversies (Apache vs Nginx, which DB, which PHP, what cache to use, etc.) and make the whole process easier, I’m going to use the Bitnami stacks to install both programs. Even though I’m not especially fond of the Bitnami stacks (I would use other components), being self-contained helps a lot to keep the task atomic and to revert the server to its previous state at the end. Using two real products, even with dummy blog pages, makes a great difference compared to using only a “Hello world!” HTML page, especially with WordPress, which also stresses the database heavily.

The Bitnami WordPress stack uses Apache 2.4, MySQL 5.7, PHP 7, Varnish 4.1 and WordPress 4.7.

The Ghost stack uses Apache 2.4, Node.js 6.10, SQLite 3.7, Python 2.7 and Ghost 0.11.

To perform the tests I’m also going to use two other popular tools: ApacheBench (aka ab) and wrk. To do the tests properly, you have to run them from another machine, and even though I could use a new instance to test all the other instances, I think the local computer is enough to test plans of this size. There is a drawback, though: you need a good internet connection, preferably with low latency and plenty of bandwidth, because all the tests are performed in parallel. I’m using a symmetric fiber optic connection with enough bandwidth, so I did not have any constraint on my side. With bigger plans, and especially with wrk testing more simultaneous connections, it could eventually become a problem; in that case a good VPS server from which to run the tests would probably be a better solution. I could use an online service, but that would make reproducing these tests on their own more difficult and costly for anyone. I could also use other tools (Locust, Gatling, etc.), but they have more requirements and would cause trouble sooner on the local machine. Besides, wrk on its own is enough to saturate almost any VPS web server, with very small requirements on the local machine, and faster.

To avoid installing or compiling any software on the local machine, especially wrk, which is not packaged in all distributions, I’m going to use two Docker images (williamyeh/wrk and jordi/ab) to perform the tests. Under the circumstances of these tests, using Docker causes almost no performance loss on the local machine, so it is more than enough. But if we wanted to stress bigger plans harder, it would be wiser to install both tools locally and run the tests with them directly.
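As a sketch, the two Docker-based invocations can be wrapped in a small helper script like the one below. The image names come from the text above, and I assume both images set their tool as the entrypoint; the target URL is a placeholder, and the durations (3 minutes) and concurrency values (25 for ab, 100 for wrk) are examples taken from the test steps described later:

```shell
# Helper script wrapping the two Docker-based load testers.
# Point the target argument at the blog URL of the instance under test.
cat > run-bench.sh <<'EOF'
#!/bin/sh
TARGET="$1"    # e.g. http://203.0.113.10/wordpress/ (placeholder address)

# ApacheBench: 3-minute run (-t 180) with 25 concurrent keep-alive connections
docker run --rm jordi/ab -t 180 -c 25 -k "$TARGET"

# wrk: 3-minute run with 100 connections over 4 threads
docker run --rm williamyeh/wrk -t4 -c100 -d180s "$TARGET"
EOF
chmod +x run-bench.sh
```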

In any case, there comes a point, no matter which software I use to perform the tests (but especially with wrk), where the requests against WordPress are so numerous that the system runs out of memory: the MySQL database is killed, and eventually the Apache server is killed too if the test goes on long enough, leaving the server unavailable for a few minutes (sometimes it never recovers on its own and I had to restart it from the control panel). After all, it’s a kind of mini DDoS attack that we are performing here. This could be improved a lot with other stack components and a careful configuration; the point here is that all the instances are tested with the same configuration. Thus, I’m not trying to measure the maximum capacity of a server so much as to compare them all under the same circumstances. To avoid losing the SSH connection to the servers (and having to intervene manually), I limit the connections up to a certain point, pause the playbook for five minutes, and then restart the stack before performing the next test.

On the servers with less than 1GB of memory available, to be able to install the stacks, I set up a 512MB swap file. But to perform the tests I deactivate that swap, to compare all of the instances under the conditions offered by default.
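A minimal sketch of that swap setup, assuming a Linux instance with util-linux. The real file would normally live at `/swapfile`, and `swapon`/`swapoff` need root, so they are shown commented here:

```shell
# Create a 512MB file and format it as swap space.
fallocate -l 512M ./swapfile     # or: dd if=/dev/zero of=./swapfile bs=1M count=512
chmod 600 ./swapfile             # swap files must not be world-readable
mkswap ./swapfile                # format the file as swap
# swapon ./swapfile              # enable it before installing the stacks (root)
# swapoff ./swapfile             # disable it again before benchmarking (root)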

WordPress

ApacheBench

Requests

This graph shows the number of requests achieved with several concurrent connections in 3 minutes; more valid requests is better.

10 connections
PlanOVH VPS SSD 1Linode 1024DO 5bucksScaleway VC1SVultr 20GB SSDVultr 25GB SSD
Total (requests)45623496392139014180
Failed (requests)00000
Valid (requests)456234963921390104180

./img/web_wp_ab_10_requests.png

25 connections

I truncate the top of the graph here because the excess of invalid requests from DO would misrepresent the most important value, the successful requests.

PlanOVH VPS SSD 1Linode 1024DO 5bucksScaleway VC1SVultr 20GB SSDVultr 25GB SSD
Total (requests)541243101865342485293
Failed (requests)001854200
Valid (requests)54124310111424805293

./img/web_wp_ab_25_requests.png

50 connections

I truncate the top of the graph here because the excess of invalid requests from DO would misrepresent the most important value, the successful requests.

PlanOVH VPS SSD 1Linode 1024DO 5bucksScaleway VC1SVultr 20GB SSDVultr 25GB SSD
Total (requests)559739763028339005267
Failed (requests)003020100
Valid (requests)5597397682390005267

./img/web_wp_ab_50_requests.png

Requests per second

This graph shows the requests per second achieved with several concurrent connections in 3 minutes; more is better.

10 connections
PlanOVH VPS SSD 1Linode 1024DO 5bucksScaleway VC1SVultr 20GB SSDVultr 25GB SSD
Requests per second (mean, RPS)25.3419.4021.7821.6623.22

./img/web_wp_ab_10_rps.png
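As a sanity check, since each run lasts 3 minutes (180 s), the mean RPS reported by ab is essentially the total request count divided by the test duration. Taking the OVH VPS SSD 1 numbers from the tables above:

```python
# Relation between ab's request totals and its mean requests/second
# for a 180-second run; numbers taken from the OVH VPS SSD 1 column.
DURATION_S = 180
total_requests = 4562          # OVH, 10 connections

rps = total_requests / DURATION_S
print(round(rps, 2))           # → 25.34, matching ab's reported mean
```

The small discrepancies in other columns come from the actual run time not being exactly 180 seconds.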

25 connections
PlanOVH VPS SSD 1Linode 1024DO 5bucksScaleway VC1SVultr 20GB SSDVultr 25GB SSD
Requests per second (mean, RPS)30.0623.9323.6029.40

./img/web_wp_ab_25_rps.png

Note:

  • The requests per second for DO are not a valid number, because ab makes no distinction between valid and failed requests.
50 connections
PlanOVH VPS SSD 1Linode 1024DO 5bucksScaleway VC1SVultr 20GB SSDVultr 25GB SSD
Requests per second (mean, RPS)31.0922.0921.6729.26

./img/web_wp_ab_50_rps.png

Note:

  • The requests per second for DO are not a valid number, because ab makes no distinction between valid and failed requests.
Time per request

This other chart shows the mean time per request and the time under which 95% of all requests are served. Less is better.

10 connections
PlanOVH VPS SSD 1Linode 1024DO 5bucksScaleway VC1SVultr 20GB SSDVultr 25GB SSD
Time per request (mean, ms)394.565515.481459.156461.720430.655
95% request under this time (ms)624840779711725

./img/web_wp_ab_10_times.png
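The mean time per request follows directly from the concurrency level and the mean RPS (it is the per-request latency seen across all concurrent connections). Again with the OVH VPS SSD 1 values above:

```python
# ab's mean time per request (ms) equals concurrency / RPS.
# Values for the OVH VPS SSD 1 plan at 10 concurrent connections.
concurrency = 10
rps = 25.34                    # from the requests-per-second table

time_per_request_ms = concurrency / rps * 1000
print(round(time_per_request_ms, 1))   # → 394.6 ms, matching the table
```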

25 connections
PlanOVH VPS SSD 1Linode 1024DO 5bucksScaleway VC1SVultr 20GB SSDVultr 25GB SSD
Time per request (mean, ms)831.5391044.7501059.526850.458
95% request under this time (ms)1157149214861195

./img/web_wp_ab_25_times.png

Note:

  • The times for DO are not valid numbers because ab makes no distinction between valid and failed requests.
50 connections
PlanOVH VPS SSD 1Linode 1024DO 5bucksScaleway VC1SVultr 20GB SSDVultr 25GB SSD
Time per request (mean, ms)1608.0172263.6032307.7531708.784
95% request under this time (ms)2289312230822489

./img/web_wp_ab_50_times.png

Note:

  • The times for DO are not valid numbers because ab makes no distinction between valid and failed requests.

wrk

In these tests, using wrk’s capacity to saturate almost any server, I increase the connections in three steps (100, 150, 200), each under a 3-minute load, to see how the performance of each server degrades. I could use a linear plot, but that would mean changing the gathering Python script, and I think it’s clear enough this way.

Of course, the key here is the amount of memory: the plans that support more load are also the ones that have more memory.

More valid requests is better.

100 connections

I truncate the top of the graph here because the excess of invalid requests from Vultr (the database is killed too soon) would misrepresent the most important value, the successful requests.

PlanOVH VPS SSD 1Linode 1024DO 5bucksScaleway VC1SVultr 20GB SSDVultr 25GB SSD
Total (requests)519044111487246705644
Timeout (requests)40883956110642574297
Failed (requests)14773
Valid (requests)1102455-100741301347

./img/web_wp_wrk_100.png

150 connections

I truncate the top of the graph here because the excess of invalid requests from Vultr & DO (the database is killed too soon) would misrepresent the most important value, the successful requests.

PlanOVH VPS SSD 1Linode 1024DO 5bucksScaleway VC1SVultr 20GB SSDVultr 25GB SSD
Total (requests)6870455432561480994838
Timeout (requests)664744347524709824
Failed (requests)3246494154
Valid (requests)223120-6551000-140

./img/web_wp_wrk_150.png

200 connections

I truncate the top of the graph here because the excess of invalid requests from several plans (the database is killed too soon) would misrepresent the most important value, the successful requests.

PlanOVH VPS SSD 1Linode 1024DO 5bucksScaleway VC1SVultr 20GB SSDVultr 25GB SSD
Total (requests)279674471088172269153795
Timeout (requests)50031071104036301216
Failed (requests)227744383787401899653088
Valid (requests)190-198-963650-509

./img/web_wp_wrk_200.png

Ghost

The same tests with wrk as above, but against Ghost, a faster and more efficient blog platform than WordPress. More valid requests is better.

100 connections

PlanOVH VPS SSD 1Linode 1024DO 5bucksScaleway VC1SVultr 20GB SSDVultr 25GB SSD
Total (requests)126831855272124745912123244
Timeout (requests)96959259392
Failed (requests)9065
Valid (requests)1267358543211257458190123152

./img/web_ghost_wrk_100.png

150 connections

PlanOVH VPS SSD 1Linode 1024DO 5bucksScaleway VC1SVultr 20GB SSDVultr 25GB SSD
Total (requests)127445884246901045924121817
Timeout (requests)157147864156156
Failed (requests)63794
Valid (requests)127288882774352457680121661

./img/web_ghost_wrk_150.png

200 connections

PlanOVH VPS SSD 1Linode 1024DO 5bucksScaleway VC1SVultr 20GB SSDVultr 25GB SSD
Total (requests)126968863818368544906122767
Timeout (requests)2362291182238230
Failed (requests)80257
Valid (requests)126732861522246446680122537

./img/web_ghost_wrk_200.png

Default Security

Warning: security in a VPS is your responsibility, nobody else’s. But taking a look at the default security applied to a provider’s default instances can give you an idea of the care they take in this matter. And maybe it also gives you a good hint of how much they care about the security of their own systems.

Lynis

Lynis is a security audit tool that helps you to harden and test compliance on your computers, among other things. As part of that it computes a hardening index that indicates how secure your server is. This index should be taken with caution: it’s not an absolute value, only a reference. It does not yet cover all the security measures of a machine and may not be well balanced enough for an effective comparison. In this test, more is better, but take into account that the number of tests performed also has an impact on the index (the number of tests executed is dynamic and depends on the system features detected).

PlanOVH VPS SSD 1Linode 1024DO 5bucksScaleway VC1SVultr 20GB SSDVultr 25GB SSD
Lynis (hardening index)62 (220)67 (220)59 (220)64 (225)60 (230)60 (231)

./img/lynis.png

Notes:

  • The numbers between round brackets are the number of tests performed on each server.

Open ports

This test uses nmap (and netstat to double-check) to see which network ports and protocols are open by default in each instance.

open ports IPv4

PlanOVH VPS SSD 1Linode 1024DO 5bucksScaleway VC1SVultr 20GB SSDVultr 25GB SSD
Open TCP ports22 (ssh)22 (ssh)22 (ssh)22 (ssh)22 (ssh)
Open UDP ports68 (dhcpc)68 (dhcpc), 123 (ntp)68 (dhcpc), 123 (ntp)

open ports IPv6

PlanOVH VPS SSD 1Linode 1024DO 5bucksScaleway VC1SVultr 20GB SSDVultr 25GB SSD
Open TCP ports22 (ssh)22 (ssh)22 (ssh)
Open UDP ports22 (ssh)22 (ssh)22 (ssh)

open protocols

PlanOVH VPS SSD 1Linode 1024DO 5bucksScaleway VC1SVultr 20GB SSDVultr 25GB SSD
Open protocols IPv41 (icmp), 2 (igmp), 6 (tcp), 17 (udp), 103 (pim), 136 (udplite), 255 (unknown)1 (icmp), 2 (igmp), 4 (ipv4), 6 (tcp), 17 (udp), 41 (ipv6), 47 (gre), 50 (esp), 51 (ah), 64 (sat), 103 (pim), 108 (ipcomp), 132 (sctp), 136 (udplite), 242 (unknown), 255 (unknown)1 (icmp), 2 (igmp), 6 (tcp), 17 (udp), 103 (pim), 136 (udplite), 255 (unknown)1 (icmp), 2 (igmp), 6 (tcp), 17 (udp), 136 (udplite), 255 (unknown)1 (icmp), 2 (igmp), 6 (tcp), 17 (udp), 103 (pim), 136 (udplite), 196 (unknown), 255 (unknown)
Open protocols IPv60 (hopopt), 4 (ipv4), 6 (tcp), 17 (udp), 41 (ipv6), 43 (ipv6-route), 44 (ipv6-frag), 47 (gre), 50 (esp), 51 (ah), 58 (ipv6-icmp), 59 (ipv6-nonxt), 60 (ipv6-opts), 108 (ipcomp), 132 (sctp), 136 (udplite), 255 (unknown)0 (hopopt), 6 (tcp), 17 (udp), 43 (ipv6-route), 44 (ipv6-frag), 58 (ipv6-icmp), 59 (ipv6-nonxt), 60 (ipv6-opts), 136 (udplite), 255 (unknown)0 (hopopt), 6 (tcp), 17 (udp), 43 (ipv6-route), 44 (ipv6-frag), 58 (ipv6-icmp), 59 (ipv6-nonxt), 60 (ipv6-opts), 103 (pim), 136 (udplite), 255 (unknown)

Custom DIY distro install

OVHLinodeDigitalOceanScalewayVultr
Distro install in instancePartialPartialYesYesYes

TODO: pending to automate this as well.

Notes:

  • To test the “Distro install in instance” I use an installation script to install Arch Linux from an official Debian instance. The purpose is to test whether you are restricted in any way from using a different OS than the ones officially supported.
  • The “Distro install” script fails partially on OVH and Linode and requires manual intervention. That does not mean that you cannot do it, only that you’ll probably need more work to achieve it.