
Estimating the capacity of a VPS node

Seraphim

Active Member
Lately I've been wondering this just as kind of an offhand thing.

On average, how many VPSs can be made per physical CPU in a system?

I do know the RAM and hard drives are also limiting factors, but someone recently told me that a single quad-core CPU would not be able to reliably support enough VPSs to actually use much more than 16 GB of RAM, and that a node with more than that should have more than one CPU in it.

Anyone here with experience designing VPS nodes that they'd be willing to share? I already know to provision with lots of RAM, and I plan on at least RAID1 but preferably RAID5 for storage. But this CPU thing bothers me, because you can't just break the assignments down numerically like you can for RAM and storage on a non-oversold setup.
 
As long as you are using recent hardware (E3s, E5s), a quad core will absolutely be able to handle more than 16GB worth of VPSes. Keep in mind that a quad core with Hyper-Threading = 8 threads. That means if you sold 1GB VPSes against 16GB of memory, each one would be guaranteed 50% of a thread at bare minimum. And I don't know about you, but all of the VPSes that I have just idle at around 1% CPU usage.

tl;dr you could probably fit around 32GB-48GB per E3-1230 as long as you're not selling ridiculously low-memory VPSes like 256MB. Just make sure your RAID array will be able to handle the I/O load. 4x250GB with a hardware RAID card and BBU/NAND cache and you should be golden. The more drives, the better, though.
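To make that guarantee concrete, here's a rough back-of-the-envelope sketch (Python, using only the example numbers from this thread -- the node RAM and per-VPS plan size are assumptions, not recommendations):

    # Worst-case CPU share per VPS on a quad core with Hyper-Threading.
    threads = 4 * 2                 # 4 cores x 2 threads each (HT)
    node_ram_gb = 16                # RAM sold to customers on the node
    vps_ram_gb = 1                  # per-VPS plan size

    vps_count = node_ram_gb // vps_ram_gb      # 16 VPSes
    share_per_vps = threads / vps_count        # 0.5 of a thread each, worst case
    print(f"{vps_count} VPSes, each guaranteed {share_per_vps:.2f} threads")

Run the same numbers at 32GB or 48GB and the worst-case share drops to 0.25 or about 0.17 of a thread, which still works out fine if most guests really do idle at ~1% CPU.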

If you don't go RAID-10 you're basically shooting yourself in the foot, and you can expect even less than 16GB a node. RAID-5 is okay, but it is actually worse than a RAID-1 array when it comes to write performance. So needless to say, if you go RAID-5, expect less than a single SATA 7200 in regards to write speed -- at least on small writes, anyway. Larger ones tend to be a little better.
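A rough way to quantify that, as a sketch only: the usual rule of thumb is a write penalty of 2 back-end I/Os per random write for RAID-1/10 and 4 for RAID-5, and the ~75 random IOPS per 7200 RPM SATA drive below is an assumed figure, not a benchmark:

    # Approximate small random-write IOPS per RAID level (rule of thumb only).
    PER_DISK_IOPS = 75   # assumed for a single 7200 RPM SATA drive

    def array_write_iops(disks, write_penalty, per_disk=PER_DISK_IOPS):
        """Usable random-write IOPS for the whole array."""
        return disks * per_disk / write_penalty

    print("single drive  :", array_write_iops(1, 1))   # ~75
    print("4-disk RAID-10:", array_write_iops(4, 2))   # ~150
    print("4-disk RAID-5 :", array_write_iops(4, 4))   # ~75, about one bare drive

Which lines up with the point above: on small writes a 4-disk RAID-5 behaves roughly like one drive (or worse, once parity calculation overhead is counted), while RAID-10 scales with spindle count.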
 
Generally speaking, the CPU is never going to be maxed out, or come anywhere close to it, on most hosting platforms. VPS is no exception. 99.9% of the time, disk I/O is going to be the bottleneck.

As Tyler mentioned, the E3-1230 is a great platform for VPS. To provide a good VPS service, you don't want to go crazy with the RAM anyway. At 32GB of RAM, you're talking 128 256MB VPSs on a single system. I don't care what kind of CPU setup you use, that isn't going to work. Even 32 1GB accounts might be stretching it, since they will probably see higher usage. You see hosts boasting about 48GB and 96GB of RAM on their VPS machines all the time. Who the hell wants that? I never go over 24GB, and that will be a mix of high-RAM accounts and low-RAM accounts.
 
Definitely was going to use Sandy Bridge or newer CPU types. Though I wasn't sure how involved the I/O bottlenecking gets on an active machine, and I was thinking of using a RAID1 to start and then later adding a second RAID1 made out of SSDs for file-intensive clients.

What do you guys think of the SSD-hybrid drives? They seem to use a small (a gig or two) SSD as an extra cache for a normal hard drive. It might be possible to take advantage of that for bursts of filesystem activity to avoid congestion, although any sustained heavy transfers would quickly swamp the cache. I don't think they're worth the extra cost and the reduced reliability, but maybe I'm wrong about them.

From what Tyler is saying though, I should figure .25-.5 threads per VPS to make sure that they don't wait too long for CPU time.
 
To provide a good VPS service, you don't want to go crazy with the RAM anyway. At 32GB of RAM, you're talking 128 256MB VPSs on a single system. I don't care what kind of CPU setup you use, that isn't going to work. Even 32 1GB accounts might be stretching it, since they will probably see higher usage. You see hosts boasting about 48GB and 96GB of RAM on their VPS machines all the time. Who the hell wants that? I never go over 24GB, and that will be a mix of high-RAM accounts and low-RAM accounts.

Kiloserve does great with their 128GB RAM, 48-CPU-core servers. http://kiloserve.com/48-core-nodes
My VPS runs great and is fast all the time.
 
Definitely was going to use Sandy Bridge or newer CPU types. Though I wasn't sure how involved the I/O bottlenecking gets on an active machine, and I was thinking of using a RAID1 to start and then later adding a second RAID1 made out of SSDs for file-intensive clients.

What do you guys think of the SSD-hybrid drives? They seem to use a small (a gig or two) SSD as an extra cache for a normal hard drive. It might be possible to take advantage of that for bursts of filesystem activity to avoid congestion, although any sustained heavy transfers would quickly swamp the cache. I don't think they're worth the extra cost and the reduced reliability, but maybe I'm wrong about them.

From what Tyler is saying though, I should figure .25-.5 threads per VPS to make sure that they don't wait too long for CPU time.

No, I was just saying that if every VPS was maxing out its CPU, each would still be able to use 50% of a thread. Like I said, my VPSes just sit there at 1% CPU most of the time, and realistically, even if quite a few of them did use a lot of CPU, that doesn't mean every VPS on the node would be using all the CPU it can.

Your bottleneck will be I/O, not CPU, especially if you use the E3s and E5s. Just don't go crazy on low-memory VPSes and you will be fine.

RAID-1 is suicide on a VPS node. And as far as SSDs go -- don't. RAID (even hardware RAID) will actually hamper the performance of SSDs, and they will wear out extremely fast on a VPS node where people run MySQL servers, etc. Probably less than two months' worth of service on a full, active node before they are dead.

And RAID-1 won't help you at all as far as redundancy goes for SSDs, because they would both still be making the same reads and writes. They'll likely both die at the exact same time.
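Whether "less than two months" is realistic depends entirely on the drive's rated endurance and the node's write volume, but the arithmetic behind that kind of estimate is simple. A sketch only -- both numbers below are made-up placeholders, not specs for any particular SSD or measurements from a real node:

    # Crude SSD lifespan estimate: rated endurance vs. sustained writes.
    rated_endurance_tb = 70          # assumed total TB-written rating
    node_writes_gb_per_day = 500     # assumed write volume on a busy node

    days = rated_endurance_tb * 1024 / node_writes_gb_per_day
    print(f"~{days:.0f} days (~{days / 30:.1f} months) to reach rated endurance")

Drop the endurance rating or raise the write load and a couple of months is not far-fetched.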
 
Kiloserve does great with their 128GB RAM, 48-CPU-core servers. http://kiloserve.com/48-core-nodes
My VPS runs great and is fast all the time.

Which is actually about 2.7GB of RAM per core. For a system with 8 threads, like a quad + HT, 24GB of RAM would be comparable performance-wise as long as not all of it was assigned. But that's the kind of magic number I was looking for: roughly how much RAM should be allotted per core to make sure the node runs light and quick while still making a good bottom line.
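For anyone who wants to reuse that magic number on different hardware, the ratio generalizes directly. A quick sketch using only the figures already quoted in this thread:

    # RAM-per-core ratio from the 128GB / 48-core example, applied to an
    # 8-thread box (quad core + HT). Illustrative only.
    ram_per_core_gb = 128 / 48            # ~2.7 GB per core
    threads = 8

    suggested_ram_gb = ram_per_core_gb * threads
    print(f"~{suggested_ram_gb:.0f} GB of customer RAM for an 8-thread node")   # ~21 GB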

As for the RAID thing, I am pretty sure they won't fail at exactly the same time because the drives internally rearrange themselves, but you are completely correct that there would be little usable performance gain and it would just waste a good SSD. However, I beg to differ on SSDs not being used in VPS nodes successfully, as a number of providers I've seen offer VPSs with SSD-backed storage for filesystem-intensive clients. Although their reliability still lags behind conventional hard drive technology, they are holding up a lot better than they used to and offer tremendous performance gains for applications suited to them. Intel has SSDs out now with MTBF figures on the order of 1.2 million hours, as good as a lot of the Western Digital conventional drives.

Though the price tag of SSDs is still keeping them from being widely used.
 
I'm friends with a host that had two servers with a RAID-1 array (two SSDs in each server) fail at the same time -- both servers, all four SSDs. The servers had been provisioned on the same day back when he ordered them, and both had similar load.

And they only rearrange themselves internally if you leave part of the drive unpartitioned. If you partition the entire thing, it won't have any spare space left to fall back on.

And yes, SSDs are perfect for regular everyday use -- such as hosting game servers. What they are not good for is anything that involves large transfers and heavy writes, such as MySQL servers, VPSes, and other crap that is write-intensive.
 
But those are also the same applications where SSDs really show their performance gains over conventional drives. I think it depends a lot on the situation and intended usage, and for general use they're not quite worth the cost. They're still far too expensive right now for me to even seriously consider, but I think it might be worth keeping an eye on the technology, because sooner or later they'll get it figured out to the point where it can match normal hard drives in cost and longevity.
 
Kiloserve does great with their 128GB RAM, 48-CPU-core servers. http://kiloserve.com/48-core-nodes
My VPS runs great and is fast all the time.

That's great! About 125 1GB VPSs on that machine, 250 512MB VPSs, or a whopping 500 256MB VPSs, all sharing a single RAID-10 disk array. Wait until those nodes start filling up, and then let us know how fast it is. ;)

Then you have a node like that crash, or come under a serious DDoS attack (as happened with Kiloserve at the beginning of this month, when pretty much every single client account went down), and instead of losing 20 client accounts, you lose 500. I can't even begin to explain all the ways a setup like that is silly.

On a server with, say, 24GB of RAM, divided into 1GB VMs, and assuming no server overhead just to make the math easy tonight, you have 24 clients sharing an 8-disk array vs. 128 clients sharing a 12-disk array. I can guarantee, any day of the week, who is going to have the better I/O.
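Putting rough numbers on that comparison (a sketch, reusing the assumed ~75 random IOPS per drive and the RAID-10 write penalty of 2 from the earlier sketch):

    # Write IOPS available per client on each setup.
    PER_DISK_IOPS = 75       # assumed per 7200 RPM drive
    WRITE_PENALTY = 2        # RAID-10 rule of thumb

    def write_iops_per_client(disks, clients):
        return disks * PER_DISK_IOPS / WRITE_PENALTY / clients

    print(" 24 clients on  8 disks:", round(write_iops_per_client(8, 24), 1))    # ~12.5
    print("128 clients on 12 disks:", round(write_iops_per_client(12, 128), 1))  # ~3.5

Roughly three to four times the I/O headroom per client on the smaller node, before anyone even starts bursting.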

A huge server like that will actually make you more money. Even though it seems ridiculously expensive, it's far cheaper in the long run to provision servers like that than it is to provision multiple smaller servers. Do so at your own risk, however!
 
Which is probably a big underlying factor for why cloud systems have become so popular. They let you pool resources to create one giant VPS node that theoretically is far more resistant to IO bottlenecking and hardware failures than any single machine could possibly be, and also has the power and connectivity capabilities to simply tank out any incoming attacks that aren't so large as to require null routing upstream.

But that raises the question: how many disks in RAID-10 would be a safe minimum for a smaller machine, such as the 24GB single-quad-core example? I already have a good feel for how much space to provision, just based on a competitive ratio of space to RAM.
 
Four.
 
Yep, you have to have 4 drives. Say you're using a 24GB setup: we'll leave 2GB of RAM for server overhead (which is overkill) and use a four-drive array of 1TB disks. That leaves you about 80GB of disk space per 1GB-RAM VPS account. On a 16GB setup, it will be about 125GB of disk space per 1GB of RAM, but the cost per GB of RAM will also be higher. There is a trade-off there. You have to decide which is more important.
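For anyone checking the space math, here's how those per-account figures fall out (a sketch; the 0.9 factor for formatted/usable capacity is an assumption to cover filesystem and RAID overhead):

    # Disk space per 1GB VPS on a 4 x 1TB RAID-10 array.
    usable_tb = 4 * 1 / 2 * 0.9          # RAID-10 halves raw capacity -> ~1.8 TB usable

    for node_ram_gb in (24, 16):
        accounts = node_ram_gb - 2        # leave ~2GB of RAM for the host itself
        per_account_gb = usable_tb * 1024 / accounts
        print(f"{node_ram_gb}GB node: ~{per_account_gb:.0f} GB per 1GB account")

That comes out close to the 80GB and 125GB figures above; the exact numbers depend on how much formatted capacity you actually end up with.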
 