As the two main forms of infrastructure hosting available to businesses, how a bare metal server compares to a cloud server is a key question companies should be considering when looking to source infrastructure. Your business requirements will determine which one is right for you. In some cases, it could be a mix of both.
To help companies make a more informed decision, we have put together a comprehensive overview of each type of server, including their strengths and weaknesses, and the kinds of businesses and services they best lend themselves to.
But before we kick off with the difference between the two, let’s start with what a server is and a quick history of hosting.
Quite simply, a server is a computer.
The components that can make up a server are:
Central Processing Unit (CPU)
RAM
Storage
Networking
Power Supply Unit (PSU)
Graphics Processing Unit (GPU)
Motherboard
Chassis (case)
But unlike personal devices - laptops, phones, smartwatches - servers do not need to be restarted regularly to maintain performance. Servers are designed to run continuously for long periods - up to years at a time.
Servers are extremely fault tolerant, which allows them to run 24/7. This is crucial when businesses around the world rely on continuous uptime. The components in a server are built to a much higher grade, and are therefore more expensive, than those in a personal computer.
To understand how servers have developed to become the bare metal and cloud servers that we know today, we need to go back to the birth of hosting.
1960s - The earliest precursor to the internet, the Advanced Research Projects Agency Network (ARPANET), was built for the U.S. military, enabling different groups to share information. Its success led other institutions around the world to adopt the idea.
1989 - Tim Berners-Lee proposed the World Wide Web, the version of the internet that we know today, along with the protocol that underpins it: HTTP.
1995 – 2000 - The World Wide Web brought the dot-com boom and the development of websites. Overnight, hosting requirements exploded. In some instances, infrastructure was hosted in people’s basements with fans to keep it cool.
But not everyone had a basement or a space big enough to host the servers they needed. So, some smart people saw an opportunity to build a space with power, cooling and internet connectivity that could be rented out to businesses to install their own servers. This is what we know as colocation.
Very quickly after colocation was created came the idea of shared hosting. These hosting companies would take a whole server, divide up the resources and allow (usually small) companies to rent a part of the server.
It’s a bit like having a pizza and sharing out the slices among different people. The slices don’t all have to be the same size; each depends on how much server resource that person needs.
Early 2000s - Then came dedicated hosting, also known as bare metal hosting. As applications became more powerful and resource intensive, the minimum requirement for entry became bigger than shared hosting could support. Companies needed a whole bare metal server to themselves but didn’t always have the capital to invest in the hardware and the setting up of that hardware in a colocation space.
Dedicated hosting providers set up the environment, bought the servers and rented them out to companies. It moved infrastructure from a CAPEX model (where you own the asset after a single transaction) to an OPEX model (that requires monthly payments with no ownership at the end of the contract).
These agreements started out as long-term contracts because dedicated hosting providers needed to know that they would get a return on their investment. The concept remains the same today, but the market has shifted to shorter and shorter contracts. Because of that, dedicated hosting providers charge a higher price per month to de-risk against short-term cancellations and the time taken to customize a server environment to that customer’s specifications.
However, dedicated hosting providers could only provision servers so quickly - it takes time for a bare metal server to come online. The speed of business was now so fast that companies needed new servers within minutes. And so virtualization was born.
But before we dive into virtualization and its relationship to cloud servers, let’s first take a more in-depth look at what a bare metal server is.
As the name suggests, a bare metal server is a physical server that sits in a rack within a data center.
They are also known as single-tenant servers because each server serves just one tenant or client.
The benefit of a bare metal server is that the user has sole access to the server’s resources and can configure the server to their specifications, including the processor, RAM, networking set-up and storage. For example, a client can specify NVMe solid-state drives (SSDs), which provide loading speeds that outstrip standard SATA SSDs and hard disks.
The operating system (OS) is installed directly on the server, providing standardized access to the server’s compute resources.
A bare metal environment is a simple stack: the physical hardware, the operating system installed directly on top, and applications running on the OS.
Because a bare metal server is dedicated to a single user, customers benefit from high security, control and performance that is hard to beat.
Cloud servers are somebody else’s bare metal servers that have been virtualized.
Unlike a bare metal server where the operating system sits directly on top, cloud servers have a hypervisor, which creates multiple virtual machines on one server.
The hypervisor splits the server’s resources (CPU, RAM, storage, networking) between these virtual machines. Each virtual machine or cloud server then has the operating system placed on top with applications on top of the OS.
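The split the hypervisor performs can be sketched in a few lines of code. This is purely illustrative - it is not any real hypervisor’s API, and all the names are hypothetical - but it shows the basic bookkeeping: virtual machines fit on a host as long as their combined vCPUs and RAM stay within the physical server’s capacity.

```python
# Illustrative sketch only: models how a hypervisor's scheduler might
# check that a set of virtual machines fits on one physical server.
# All class and function names here are hypothetical.
from dataclasses import dataclass


@dataclass
class Server:
    cpu_cores: int
    ram_gb: int


@dataclass
class VirtualMachine:
    name: str
    vcpus: int
    ram_gb: int


def can_host(server: Server, vms: list[VirtualMachine],
             overcommit: float = 1.0) -> bool:
    """True if the VMs fit on the server, allowing optional CPU overcommit."""
    total_vcpus = sum(vm.vcpus for vm in vms)
    total_ram = sum(vm.ram_gb for vm in vms)
    return (total_vcpus <= server.cpu_cores * overcommit
            and total_ram <= server.ram_gb)


host = Server(cpu_cores=32, ram_gb=256)
vms = [VirtualMachine(f"vm{i}", vcpus=8, ram_gb=32) for i in range(4)]
print(can_host(host, vms))  # 32 vCPUs on 32 cores: fits at 1:1
```

Note the `overcommit` parameter: RAM is usually allocated hard, but CPU can be shared, which is exactly where the contention discussed later comes from.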
A cloud server environment adds a layer to that stack: the physical hardware, then the hypervisor, then multiple virtual machines, each running its own operating system and applications.
There are two ways of accessing and using cloud servers: public cloud and private cloud.
When you invest in public cloud, you are buying by the virtual machine. The provider decides where the virtual machine or machines that you have purchased sit on the server. So, while that virtual machine may be dedicated to you, it sits within a shared environment.
Virtual machines offer the ability to instantly scale. However, the public cloud also comes with downsides. The public cloud provider decides the contention ratio for your environment, which means that during busy times the performance of your virtual machine could be degraded.
The private cloud is also a virtualized environment, but rather than buying by the virtual machine, you buy the virtualized bare metal server itself, and the underlying resources of the environment are dedicated to you. You control how many virtual machines sit on your server and therefore the contention ratios.
The benefit of this is that you can run a higher-contention server for non-mission-critical workloads, while mission-critical workloads that require consistently high performance can run on a lower-contention server, down to a 1:1 ratio.
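The arithmetic behind a contention ratio is simple: it is the number of vCPUs allocated across all virtual machines on a host, divided by the host’s physical cores. A small sketch (illustrative only, not tied to any provider’s tooling):

```python
# Sketch: contention ratio = total vCPUs sold on a host / physical cores.
# A ratio of 1.0 means every vCPU maps to a dedicated core; higher ratios
# mean vCPUs must share cores and may contend at peak times.
def contention_ratio(vcpus_allocated: int, physical_cores: int) -> float:
    return vcpus_allocated / physical_cores


# A 1:1 host for mission-critical work: 32 vCPUs on 32 cores.
print(contention_ratio(32, 32))   # 1.0
# A higher-contention host for non-critical workloads: 4 vCPUs per core.
print(contention_ratio(128, 32))  # 4.0
```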
By having a dedicated, virtualized environment you also have more control over the set-up of the underlying machine and therefore more customization options.
The downside of a private cloud environment is slower scaling than public cloud. If you run out of hardware capacity, you have to provision more underlying physical resource yourself or request it from your hardware provider.
Both types of cloud have helped businesses optimize their infrastructure, allowing them to carry out maintenance on an underlying server by moving its virtual machines to another machine, or to spin up new virtual machines quickly. It essentially provides enterprise-scale flexibility without having to go out and buy, say, 100 servers.
Each type of server has its strengths and weaknesses. It’s important to be aware of these when purchasing infrastructure as they could help either enable or prevent you from achieving your wider business goals.
Since the advent of cloud, its popularity amongst businesses has grown significantly. Understandably. Its promise of unparalleled scalability, flexibility and ease of set-up is very attractive to companies, particularly those that are just starting out and don’t have their own IT team to help set up and manage infrastructure.
The big three hyperscalers also offer a certain number of free credits, meaning that startups don’t need to think about infrastructure costs at the start of operations.
With a credit card and a click of a button, businesses can buy cloud servers and get their business up and running. That’s because to spin up a virtual machine, nothing needs to be physically turned on.
So, for companies in their infancy, cloud servers can be a great option. Similarly, for companies that have very unpredictable scaling requirements - think the Netflix and Amazon Primes of this world - the cloud is a great option for handling large peaks in traffic.
However, there are some limitations to cloud servers that it’s important to be aware of, particularly if you have resource-hungry applications or services.
Certain elements of the computing environment aren’t included as standard in the packages that the big three hyperscalers offer. Bandwidth, in particular, can get very expensive very quickly if customers of these cloud providers see big usage peaks.
The hypervisor that sits on top of cloud servers consumes a certain amount of the server’s resources to run. Cloud hosting providers can therefore never guarantee customers the full resources of the server, and this overhead for the vendor is passed through to customers.
Cloud providers will often oversell the resources of the physical bare metal machine that the virtual machines sit on. This can cause contention issues at peak times.
Cloud server providers do this deliberately to “optimize costs”, as on average virtual machines sit idling at around 30% usage. So, if you need predictable performance, cloud servers are unlikely to be able to give you that. Performance may be predictable today, but your server could be oversubscribed tomorrow.
Where bare metal really comes into its own is for predictable infrastructure usage. Unlike public cloud providers, some bare metal hosting providers configure bare metal servers when customers come on board. It means that they can provide infrastructure at a lower cost because they’re not having to hold configurations in stock. Others hold stock to achieve quick provisioning times; however, this is still offered at a more competitive price than hyperscale cloud providers.
Bare metal servers are also the best choice for customers that need consistently high performance. For example, in the trading world, the smallest delay in an order message being routed to a broker’s server could mean the broker being unable to fill the order because the price originally traded at has changed. Having access to the full resource of a bare metal server ensures consistently high performance.
Of course, you don’t have to choose between one or the other. You can use both, together. A highly cost-effective, scalable, flexible, secure and high-performance infrastructure could be made up of a baseline of bare metal servers with cloud auto-scaling. Scaling still takes place within the cloud, but if a peak remains consistent, it can be back-filled by bare metal.
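The back-fill decision can be reduced to a simple rule: if cloud usage has stayed at or above some threshold for long enough, the “peak” is really new baseline demand, and it is cheaper to serve it from bare metal. A minimal sketch of that rule, with hypothetical names and made-up sample data:

```python
# Illustrative sketch of a hybrid-infrastructure back-fill rule: when a
# cloud burst stays high for long enough, treat it as baseline demand
# worth moving to a bare metal server. Function name and thresholds are
# hypothetical, not from any real auto-scaling product.
def should_backfill(cloud_vm_counts: list[int], threshold: int,
                    sustained_hours: int) -> bool:
    """True if cloud usage stayed at or above `threshold` VMs for the
    last `sustained_hours` hourly samples."""
    if len(cloud_vm_counts) < sustained_hours:
        return False
    return all(n >= threshold for n in cloud_vm_counts[-sustained_hours:])


# Six hours of hourly VM counts: a burst that has not subsided for four
# consecutive hours, so back-filling with bare metal would pay off.
print(should_backfill([2, 3, 8, 9, 9, 10], threshold=8, sustained_hours=4))
```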
With this hybrid infrastructure approach, customers can benefit from the scalability of the cloud with the consistent performance of bare metal.
When choosing which type of infrastructure is right for you and your specific needs, make sure that you have a clear idea of the overall goals of the business and how infrastructure will help you achieve them. Ultimately, whether you choose bare metal servers or cloud servers depends on what you need them to enable you to do.
Take a look at our dedicated product pages to learn more about bare metal servers and cloud servers, or get in touch with one of our experts if you’d like to discuss your specific needs.