Why should you rent space in a datacentre?
Companies, including most hosting providers, rent racks in a datacentre mainly because it is cheaper than building their own. A datacentre is arguably the most expensive type of space there is: you have to solve power, the cooling that goes with it, connectivity, building security, supervision, certifications, and construction issues such as the load-bearing capacity of the floor. The second reason to rent a rack is the need for a high-speed internet connection. Another frequent reason for renting a rack, or an entire cage, is business continuity and disaster recovery. Banks and state administrations, for example, hold data that must be backed up so that it remains available no matter what happens, such as a flood. If the site where a bank's data is primarily stored is flooded, everything can keep running from the backup location: card payments, cash withdrawals and so on.
Everything that a datacentre needs to have
Strict fire-protection regulations also apply to the building, and fire-extinguishing systems must be installed (they extinguish with a special gas, since water would destroy the electronics), as racks often hold a lot of expensive technology. The final security element is racks or cages lockable with a key, card or fingerprint.
Every datacentre consumes a huge amount of energy, so no power failure is acceptable. A datacentre should be connected to at least two mutually independent switchboards and have a sufficient power feed. Even so, you have to be prepared for an outage, so datacentres have UPSs (battery banks) with enough capacity for the few minutes it takes a diesel generator to start up. The generators can then run indefinitely, as long as you keep refuelling them. Of course, both the UPSs and the generators must themselves be redundant, so that if one fails, another takes its place.
Price of energy consumption
If a rack has a power consumption of, say, 4 kW (you will often hear the term dedicated input), the air conditioning must be scaled to remove that same power, because equipment such as servers, switches and routers turns the energy it consumes into heat. A dedicated power input is therefore really a "coolable dedicated power input", and energy billing is based on this. On top of every kW the customer consumes, the consumption of the air conditioning and other facility equipment must be added. In practice, the customer's consumption is multiplied by a coefficient called PUE (power usage effectiveness), which covers this indirect consumption. PUE is usually in the range 1.2 to 1.8 and differs from datacentre to datacentre. Alternatively, you can pay according to "label consumption", where the bill is calculated from the rated wattage of each of the customer's devices.
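The billing model above can be sketched as a small calculation. The 4 kW rack and the 1.2–1.8 PUE range come from the text; the 30-day month is an assumption for illustration:

```python
# Billable energy = IT energy drawn by the customer, multiplied by PUE,
# which folds in cooling and other facility overhead.

def billable_kwh(it_kwh: float, pue: float) -> float:
    """Energy the datacentre actually spends to deliver it_kwh to the rack."""
    return it_kwh * pue

# A 4 kW rack running flat out for one 30-day month:
it_energy = 4 * 24 * 30            # 2880 kWh drawn by the IT equipment
for pue in (1.2, 1.8):             # the typical range quoted above
    total = billable_kwh(it_energy, pue)
    print(f"PUE {pue}: {total:.0f} kWh billed")
```

At PUE 1.2 the same rack costs you roughly 3,456 kWh of facility energy; at PUE 1.8 it is 5,184 kWh, which is why the PUE of a datacentre matters so much to the customer's bill.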
The first type of datacentre connectivity is the internet connection, which links it to an internet exchange point (Internet Exchange). The largest internet exchange in Europe is DE-CIX in Frankfurt. A further requirement is that a datacentre be connected over at least two independent optical routes to other exchange points, so that a single failure does not cut off its connectivity.
Besides an internet connection, companies often want to connect their rack to their corporate MPLS VPN (L3) or create an Ethernet interconnection (L2) to their head office. As a result, you will find several telecommunications operators in every datacentre, and ISPs often own datacentres themselves.
The last connectivity topic is public IP addresses, which come in two versions: IPv4 and IPv6. You may have heard that public IPv4 addresses have been running out lately; for a hosting company they are a scarce and valuable commodity. Most internet connections come with public IP addresses, and so do racks. An IP address works like a postal address: it tells your browser or application where to send data. Every hosted service also has its own IP address, either dedicated or shared.
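A quick way to see the two address families side by side is Python's standard `ipaddress` module. The addresses below are documentation examples (RFC 5737 / RFC 3849), not real hosting addresses:

```python
import ipaddress

# One address of each family mentioned above.
v4 = ipaddress.ip_address("203.0.113.10")   # IPv4: 32-bit addresses
v6 = ipaddress.ip_address("2001:db8::10")   # IPv6: 128-bit addresses

print(v4.version)   # 4
print(v6.version)   # 6

# A hosting provider's IPv4 pool is a finite block that must be carved up;
# a /24 holds only 256 addresses, which hints at why IPv4 is scarce.
pool = ipaddress.ip_network("203.0.113.0/24")
print(pool.num_addresses)   # 256
```

The 128-bit IPv6 space is so much larger that scarcity is effectively not a concern there, which is why the shortage the article mentions is specific to IPv4.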
Technologies of telecom & mobile operators
Another reason operators run their own datacentres is that their own technology demands optimum availability. Which technology is that? The network backbone, the infrastructure of mobile and network operators, and large voice core systems. All of this requires maximum availability, and the operator-owned parts of the datacentre that house these technologies are closed to the public.
As I wrote above, every consumed kW of electricity turns into heat, and the equipment has to be cooled so it can run non-stop without overheating. You will most often come across two ways of cooling with air. The first is free cooling, where you draw outside air into the datacentre and either exhaust the hot air or use it to heat other spaces. The second is classic air conditioning. Other methods exist, such as immersion cooling in oil, but they remain rare in datacentres so far.
Cooling has another dimension: either you cool the entire hall, or you use so-called row-based cooling. Row-based cooling cools only a row of racks enclosed under a common roof; the servers draw cold air in from one side of the row and blow hot air out the other.
Network operations centre (NOC)
A datacentre also includes a round-the-clock network operations centre, or a help desk in the case of telecommunications companies. The NOC takes requests from customers and escalates them for resolution. Help-desk employees are mostly operators with technical skills, but a systems engineer who can repair and reconfigure most things should work there as well, or at least be on call.
Besides the classic ISO certifications, such as ISO 9000 (quality management), ISO 14000 (environmental management) and ISO 27000 (information security), some datacentres are subject to inspections by the National Security Authority because they hold public contracts. The most frequently requested certification, however, is the Tier certification from the private Uptime Institute (uptimeinstitute.com). Because it is quite expensive, only a few datacentres hold it; other companies merely declare themselves "in compliance" with it. There are four Tier levels, related mainly to availability:
- TIER I – guarantees 99.671% availability; a datacentre without redundant elements
- TIER II – guarantees 99.741% availability; a datacentre with a single power and cooling distribution path, but with redundant components
- TIER III – guarantees 99.982% availability; a datacentre with multiple power and cooling distribution paths and redundant components, so that maintenance does not require a shutdown
- TIER IV – guarantees 99.995% availability; the best-protected datacentre, with multiple active power and cooling distribution paths, redundant components and built-in fault tolerance to deliver that availability
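The Tier percentages become more tangible when translated into the downtime they allow per year. This sketch uses the Uptime Institute's published availability figures and assumes a 365-day year:

```python
# Convert an availability percentage into the maximum downtime it permits.
MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600 minutes

def max_downtime_minutes(availability_pct: float) -> float:
    """Minutes per year a facility may be down and still meet the target."""
    return MINUTES_PER_YEAR * (1 - availability_pct / 100)

for tier, pct in [("I", 99.671), ("II", 99.741), ("III", 99.982), ("IV", 99.995)]:
    hours = max_downtime_minutes(pct) / 60
    print(f"Tier {tier}: {pct}% -> at most {hours:.1f} h downtime per year")
```

The jump from Tier II to Tier III is the dramatic one: roughly a day of allowed downtime per year shrinks to under two hours, which is what the redundant distribution paths buy you.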