Data Centers – Introduction
Modern data centers are significantly different from the early installations of servers and other equipment. The original data centers were built on the backbone Internet connections of the 1990s. Large data halls were designed above all as spaces with excellent physical security, uninterrupted power supply and sufficient capacity of communication lines, mostly optical. Individual cabinets were then leased to users for their technical and Internet applications. These centers almost always had raised floors with a high load rating, beneath which all the cabling and cooling systems were routed. Cooling was mostly centralised: the entire room was air-conditioned regardless of the distribution of the thermal load and without any way to regulate cooling effectively for each cabinet or data hall.

With the development of telecommunications, new protocols and growing transmission line capacity, high-speed connections became available without the need to place equipment directly on the backbone connections. At the same time, another revolution took place on a different front: processing power and storage capacity. Processor performance grew dramatically, multi-core processors appeared along with new operating systems, and hard drives and other storage media multiplied in capacity. Server operating systems began to share the available resources among multiple simultaneously running applications, and from there it was only a small step to sharing one physical computer among several operating systems running at once, that is, to virtualisation.

The majority of companies now run their applications either on their own servers dedicated to specific applications or through the ever more popular virtualisation and cloud hosting. Both approaches require a high density of installed computing power. Because the operation of businesses and institutions is a critical application, it demands fail-safe power, physical protection and controlled cooling. All of these aspects are covered by the concept of a data center.

Over time, a standard has emerged for the design and construction of data centers. Cabinets are placed in groups, usually in two rows spaced 1,200 mm apart (two standard raised-floor tiles). The aisle between the cabinets is then roofed over and closed at both ends by sliding doors. In very large data centers, dividing doors can also be found within these units, splitting them into smaller sections.

The main products of our company's data center solution are data cabinets with a high load rating (from 1,200 kg to 1,800 kg), accompanied by other components such as aisle roofs in a variety of types, self-closing sliding aisle doors, blanking panels and more. Cabinets can be co-located (divided into multiple compartments) with a variety of front and rear doors, locks and other features. Where a raised floor cannot be used (low room height, low permissible floor loading and so on), we can offer an alternative in the form of In-Row cooling units with top media inlet and condensate pump. This advanced solution offers an exceptionally large installed cooling capacity in a small footprint.
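As a rough illustration of why high-density rows call for dedicated, contained cooling rather than whole-room air conditioning, the short sketch below estimates the airflow required to carry away a rack's heat load at a chosen air temperature rise, using the standard sensible-heat relation for air. The rack powers and the 12 K temperature rise are illustrative assumptions, not product figures.

# Back-of-the-envelope airflow estimate for a single rack (illustrative values only).
# Sensible-heat balance for air: P = rho * V * cp * dT, so V = P / (rho * cp * dT).

RHO_AIR = 1.2      # kg/m^3, approximate air density at room conditions
CP_AIR = 1005.0    # J/(kg*K), specific heat capacity of air

def required_airflow_m3h(heat_load_kw: float, delta_t_k: float) -> float:
    """Volumetric airflow in m^3/h needed to absorb heat_load_kw at an air temperature rise of delta_t_k."""
    airflow_m3s = (heat_load_kw * 1000.0) / (RHO_AIR * CP_AIR * delta_t_k)
    return airflow_m3s * 3600.0

if __name__ == "__main__":
    # Hypothetical rack loads with a 12 K cold-aisle to hot-aisle temperature rise
    for load_kw in (5.0, 10.0, 20.0):
        print(f"{load_kw:4.0f} kW rack -> about {required_airflow_m3h(load_kw, 12.0):6.0f} m^3/h of air")

For a 10 kW rack this works out to roughly 2,500 m3/h of air, which is why contained aisles and In-Row units that deliver air directly to the cabinets are preferred over conditioning the whole room.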
This critical stage of building a data center has no single universal solution. The right approach depends on the cabinet arrangement, the size and distribution of the heat load, the chosen thermal scheme (hot/cold aisle, zonal distribution of cold air, etc.) and many other aspects. When selecting the most suitable arrangement, it is necessary to take into account the type of cooling system (under-floor cooling, In-Row cooling units and so on) and, with regard to the coolant used, to select the outdoor part of the system as well. The choice of cooling medium must respect the outdoor climatic conditions, the distance of the data center from the external units and the elevation difference between them. Depending on the conditions, we can choose water cooling with an appropriate admixture of antifreeze, or a system operating with a liquid refrigerant. With regard to the safety and redundancy required for service operations, the complete system must be designed properly, both inside the data center and on the side of the outdoor radiators (dry coolers) or condensers.

Humidity control requirements must also be considered. Relative humidity below 30 % carries a risk of damaging the installed equipment through electrostatic discharge, while high humidity can lead to condensation.

In our portfolio you will find cooling systems from leading manufacturers who have been active in the highly specialised field of data center and telecommunication equipment cooling for many years. Thanks to close cooperation and the support of their development teams, we can offer proven and guaranteed solutions. Designing a cooling system for a data center that is functional, reliable and economical both to build and to operate is not an easy matter, and our specialists are fully available to recommend the optimum solution in terms of investment and operating costs.
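To illustrate the humidity constraints mentioned above, the sketch below estimates the room dew point with the Magnus approximation and compares it with the temperature of the coldest cooling surface. The 30 % lower limit comes from the text; the room state and coolant supply temperature in the example are assumptions chosen only for illustration.

import math

# Magnus approximation constants for dew point over water (roughly valid for -45 to 60 degC)
A, B = 17.62, 243.12

def dew_point_c(temp_c: float, rel_humidity_pct: float) -> float:
    """Dew point in degC from dry-bulb temperature and relative humidity (Magnus formula)."""
    gamma = math.log(rel_humidity_pct / 100.0) + (A * temp_c) / (B + temp_c)
    return (B * gamma) / (A - gamma)

def humidity_check(temp_c: float, rel_humidity_pct: float, coldest_surface_c: float) -> None:
    """Flag the two risks discussed above: electrostatic discharge and condensation."""
    if rel_humidity_pct < 30.0:
        print(f"RH {rel_humidity_pct:.0f} %: below 30 %, risk of electrostatic discharge")
    dew_point = dew_point_c(temp_c, rel_humidity_pct)
    if coldest_surface_c <= dew_point:
        print(f"Coldest surface at {coldest_surface_c:.1f} degC is at or below the dew point of {dew_point:.1f} degC: condensation risk")
    else:
        print(f"Dew point {dew_point:.1f} degC stays below the coldest surface ({coldest_surface_c:.1f} degC): no condensation expected")

if __name__ == "__main__":
    # Example room state: 24 degC at 55 % RH with an 18 degC coolant supply (illustrative values)
    humidity_check(24.0, 55.0, 18.0)

Keeping the coldest surfaces above the room dew point in this way is one of the reasons elevated coolant temperatures are often favoured in data center cooling: it avoids unwanted condensate and the associated need to re-humidify.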