Datacenter Interviews

“Demand for scale and speed delivered at the right economics is opening the door for a new breed of Hyperscale Service Provider being sought by the biggest Internet-based businesses.” – Chris Ortbals, QTS.

The rapid adoption of public cloud and the onset of new technologies like the internet of things, neural networks, artificial intelligence, machine learning and mega-scale online retailing are reshaping the data center industry, driving demand for data center capacity and cloud connectivity.

QTS is a leading data center provider serving the current and future needs of both hyperscale and hybrid colocation customers via a software-defined data center experience. We recently interviewed Chris Ortbals, Executive Vice President, Products & Marketing at QTS, to get his take on changing data center requirements and QTS' strategy of redefining the data center.

1. Please share an overview of QTS’ journey from inception till date with DHN readers. How has it transformed from being a single data center to becoming one of the leading national data center providers?

QTS is the creation of Chad Williams, a business and real-estate entrepreneur who had a strong vision of what a data center can and should be. Williams foresaw increasing IT complexity and demand for capacity and recognized the opportunity for large, highly secure, multi-tenant data centers, with ample space, power and connectivity.

Chris Ortbals, Executive Vice President, Products & Marketing, QTS

In 2005, QTS was formally established with the purchase of a 370,000 square foot Atlanta-Suwanee mega data center. Williams focused on building an integrated data center platform delivering a broad range of IT infrastructure services, from wholesale and colocation to hybrid multi-cloud and hyperscale solutions, backed by an unwavering commitment to customer support.

Since then, we have grown both organically and through acquisition into one of the world's leading data center and IT infrastructure services providers, and in 2013 we began trading on the New York Stock Exchange under the symbol QTS (NYSE: QTS).

Today, QTS offers a focused portfolio of hybrid colocation, cloud, and hyperscale data center solutions built on the industry's first software-defined data center and service delivery platform and is a trusted partner to 1,200 customers, including five of the world's largest cloud providers. We own, operate or manage more than six million square feet of data center space encompassing 26 data centers, 600+ megawatts of critical power, and access to 500+ networks, including connectivity on-ramps to the world's largest hyperscale companies and cloud providers.

More recently, we have been accelerating our momentum as a hyperscale data center provider able to meet the unique requirements for scale and speed-to-market, delivered at the right economics, that the biggest Internet-based businesses are seeking.

2. Throw some light on the recent QTS strategy of redefining the data center. What’s the Software-Defined Data Center approach, how do you plan to execute it and how will it help hyperscale and hybrid colocation customers?

We believe that QTS' Service Delivery Platform (SDP) establishes QTS as one of the first true Software-Defined Data Centers (SDDC) with 100% completeness of vision. It is an architectural approach that facilitates service delivery across QTS' entire hybrid colocation and hyperscale solutions portfolio.

Through policy-based automation of the data and facilities infrastructure, QTS customers benefit from the ability to adapt to changes in real time and to increase utilization, performance, security and quality of service. QTS' SDP approach involves the digitization, aggregation and analysis of more than 4 billion data points per day across all of QTS' customer environments.

For hybrid colocation and hyperscale customers, it allows them to integrate data within their own applications and gain deeper insight into the use of their QTS services within their IT environments. It is a highly automated, cloud-based approach that increases visibility and facilitates operational improvements by enabling customers to access and interact with information related to their data center deployments in a way that is simple, seamless and available on-demand.
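As a rough illustration of what this kind of on-demand, API-driven visibility might look like from a customer's side, here is a minimal sketch in Python. The endpoint, authentication scheme and field names are hypothetical placeholders, not QTS' actual SDP interface.

```python
import requests

# Hypothetical example only: the endpoint, auth scheme and field names are
# illustrative placeholders, not QTS' actual Service Delivery Platform API.
API_BASE = "https://sdp.example.com/api/v1"   # placeholder URL
API_TOKEN = "replace-with-your-token"         # placeholder credential

def fetch_power_metrics(deployment_id: str) -> list:
    """Pull recent power readings for one colocation deployment."""
    resp = requests.get(
        f"{API_BASE}/deployments/{deployment_id}/metrics",
        headers={"Authorization": f"Bearer {API_TOKEN}"},
        params={"metric": "power_kw", "window": "1h"},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()["datapoints"]

if __name__ == "__main__":
    points = fetch_power_metrics("deployment-123")
    avg_kw = sum(p["value"] for p in points) / max(len(points), 1)
    print(f"Average draw over the last hour: {avg_kw:.1f} kW")
```

The design point such a platform aims at is simply that infrastructure telemetry becomes another data feed a customer can poll and fold into their own dashboards and automation.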

3. How do you differentiate yourself from your competition?

QTS' software-defined service delivery is redefining the data center to enable new levels of automation and innovation that significantly improve our customers' overall experience. This is backed by a high-touch, enterprise customer support organization that is focused on serving as a trusted and valued partner.

4. How does it feel to receive the industry leading net promoter score for the third consecutive year?

We were extremely proud to announce that in 2017 we achieved an all-time-high NPS of 72, marking the third consecutive year that we have led the industry in customer satisfaction across our U.S. data centers.

Our customers rated us highly in a range of service areas, including customer service, physical facilities, processes, responsiveness of onsite staff and our 24-hour Operations Service Center.

As our industry-leading NPS results demonstrate, our customers continue to view QTS as a trusted partner. We are also starting to see the benefits of our service delivery platform that is delivering new levels of innovation in how customers interact with QTS and their infrastructure, contributing to even higher levels of customer satisfaction.

5. QTS last year entered into a strategic alliance with AWS. Can you elaborate on what CloudRamp is and how it will simplify cloud migration?

AWS came to us last year telling us that a growing number of their customers were requiring colocation as part of their hybrid IT solution. They viewed QTS as a customer-centric colocation provider with the added advantage of our Service Delivery Platform, which allowed us to seamlessly integrate colocation with AWS as a turnkey service available on-demand.

We entered a strategic collaboration with AWS to develop and deliver QTS CloudRamp™, direct-connected colocation for AWS customers, made available for purchase online via the AWS Marketplace.

By aligning with AWS, we were able to offer an innovative approach to colocation, bridging the gap between traditional solutions and the cloud. The solution is also groundbreaking in that it marked the first time AWS had offered colocation to its customers and signaled the growing demand for hybrid IT solutions. At the same time, it significantly accelerated time-to-value for what previously had been a much slower purchasing and deployment process.

For enterprises with requirements extending beyond CloudRamp, QTS and AWS provide tailored, hybrid IT solutions built upon QTS’ highly secure and reliable colocation infrastructure optimized for AWS.

6. Tell us something about Sacramento-IX. How will the newly deployed Primary Internet Exchange Hub in QTS Sacramento Data Center facilitate interconnection and connectivity solutions?

QTS is strongly committed to building an unrestricted Internet ecosystem and we are focused on expanding carrier neutral connectivity options for customers in all of our data centers.

Interconnection has evolved from a community-driven effort in the '90s into a restrictive, commercial industry dominated by a few large companies. Today there is a movement to get back to that community-driven, high-integrity ecosystem, and QTS is aligning its Internet exchange strategy with this community.

A great example is how the Sacramento Internet Exchange (Sacramento-IX) has deployed its primary Internet Exchange hub within QTS' Sacramento data center. It is the first internet exchange in Sacramento and is being driven by increasing traffic and network performance demands in the region. It expands QTS' Internet ecosystem and simplifies our customers' network strategies by providing diverse connectivity options, allowing them to manage network traffic in a more cost-effective way.

Once considered the backup and recovery outpost for the Bay Area, Sacramento has quickly become a highly interconnected, geostrategic network hub for Northern California. The exchange also solidifies our Sacramento data center as one of the most interconnected data centers in the region and the primary West Coast connectivity gateway for key fiber routes to Denver, Salt Lake City and points east.

7. Hyperscale data centers are growing at an accelerated pace and are expected to soon replace the traditional data centers. Can you tell us some factors/reasons that aid the rise of hyperscale data centers?

The rapid adoption of public cloud, the Internet of things, artificial intelligence, neural networks, machine learning, and mega-scale online retailing are driving unprecedented increases in demand for data center capacity and cloud connectivity.

Hyperscale refers to the rapid deployment of this capacity required for new mega-scale Internet business models. These hyperscale companies require a data center growth strategy that combines speed, scalability and economics in order to drive down the cost of compute and free up the capital needed to feed their core businesses. Think Google, Uber, Facebook, Amazon, Apple, Microsoft and many more needing huge capacity in a short timeframe. They are looking for mega-scale computing capacity inside hyperscale data centers that can deliver economies of scale not matched by conventional enterprise data center architectures.

This demand for scale and speed delivered at the right economics is opening the door for a new breed of Hyperscale Service Provider being sought by the biggest Internet-based businesses. These are data centers whose ability to deliver immense capacity must be matched by the ability to meet core requirements for speed, quality, operator excellence, visibility and economics, requirements that rule out the majority of conventional hosting and service providers, who are either unwilling or unable to meet them.

And while these organizations may need very large geostrategic deployments of 20, 40 or 60 megawatts, they typically want a provider that can deliver the capacity incrementally to reduce risk and increase agility.

8. Throw some light on your current datacenters and future expansion plans.

Chad Williams had a vision for identifying large, undervalued but infrastructure-rich buildings (at a low cost basis) that could be rapidly transformed into state-of-the-art "mega" data center facilities to serve growing enterprise demand for outsourced IT infrastructure services.

In Chicago, the former Chicago Sun-Times printing plant was transformed into a 467,000 square foot mega data center. In Dallas and Richmond, former semiconductor plants are now state-of-the-art mega data centers encompassing more than 2 million square feet. And in Atlanta, the former Sears distribution center was converted into a 967,000 square foot mega data center that is now home to some of the world's largest cloud and social media platforms.

However, in some cases, a greenfield approach is the more viable option. In Ashburn, Va., the Internet capital of the world, we are building a new 427,000 square foot facility from the ground up that is expected to open later this summer. Expansion plans also call for new data center builds in Phoenix and Hillsboro, Oregon.

9. What is your datacenter sustainability and efficiency strategy?

At QTS, we understand that being a good environmental steward takes much more than just a simple initiative. That’s why we have focused our efforts on developing a company-wide approach – one that utilizes reused and recycled materials, maximizes water conservation and improves energy savings.

Central to this is our commitment to minimizing our data centers' carbon footprint and using as much renewable energy as possible by implementing a three-pronged sustainability approach, featuring containment solutions and improvements in power usage effectiveness (PUE).

This encompasses:

1. Develop and Recycle Buildings

Part of our data center sustainability strategy is reusing brownfield properties and transforming them into state-of-the-art data centers.

2. Water Conservation

With a large data center comes a big roof that is capable of harvesting rainwater. We collect millions of gallons of water using a harvesting system on a portion of the roof.

3. Energy Efficiency

For a data center provider, cooling is a critical part of the job, accounting for approximately 30% of the electricity load at the data center.

QTS is one of the first data center companies to invest in renewable energy specifically for its hybrid colocation and hyperscale customers.

A recent example is a multi-year agreement with Citi to provide 100% renewable power for our 700,000 sq. ft. mega data center in Irving, Texas. The power and renewable energy credits will come from the Flat Top Wind Project, a 200 megawatt utility-scale wind energy facility in central Texas. QTS will purchase 15 MW of 100% renewable power for its Irving data center, with plans for a similar agreement for its Fort Worth data center later this year.

The investment supports QTS' commitment to lead the industry in providing clean, renewable energy alternatives for QTS hybrid colocation and hyperscale customers, which include five of the five largest cloud providers and several global social media platforms.

In addition to the new wind power initiative in Texas, QTS’ New Jersey data center features a 14 MW solar farm to offset emissions associated with power consumption at that facility. QTS plans to expand renewable power initiatives in existing and new data centers including those being planned for Phoenix and Hillsboro, OR.

10. What’s in the roadmap for the year 2018?

QTS is now executing on our 2018 strategic growth plan that involves continued innovation with the Service Delivery Platform. It enables a software-defined data center experience for hyperscale and hybrid colocation customers. QTS’ SDP represents a big data approach enabling customers to access and interact with information related to their specific IT environment by aggregating metrics and data from multiple sources into a single operational view.

More importantly, it provides customers the ability to remotely view, manage and optimize resources in real time in a cloud-like experience, which is what customers increasingly expect from their service providers. In addition, through a variety of software-defined networking platforms, enterprises can now get direct connectivity to the world's largest cloud providers, with real-time visibility and control over their network infrastructure using QTS' SDP application interface.


IBM expands z14 mainframe portfolio to make it a better fit for cloud datacenters

IBM unveiled two new miniaturized mainframes, which are easier to integrate into public or private cloud environments and cloud datacenters.

The new models, IBM z14 Model ZR1 and IBM LinuxONE Rockhopper II, will make the mainframe a better fit for modern data centers.

IBM z14 Model ZR1 is based on IBM z14 mainframe technology and features a 19-inch single-frame case design. This design, IBM said, will allow easy placement into standard modern datacenter racks of the same kind used to house regular servers.

The z14 ZR1 is an improved version of the earlier z13 mainframe, offering 10% more throughput and 8 terabytes of memory (twice as much as the z13).

Many enterprises in the financial sector rely on IBM Z-series systems. The ZR1 can be especially useful for them, as it is able to process up to 850 million encrypted transactions every day.

The other mainframe system, IBM LinuxONE Rockhopper II, has been designed to offer a cloud platform with high availability, encryption and performance at scale. These features are packaged in an enterprise Linux server that fits easily in the datacenter.

LinuxONE brings industry-leading security to Linux environments using IBM Secure Service Container (SSC) technology. An application can be made ready for SSC deployment simply by packaging it in a Docker container. Developers can then manage these applications using Docker and Kubernetes tools.

LinuxONE provides enough computational power to test up to 330,000 Docker containers, allowing developers to “compose high-performance applications and embrace a microservices architecture without latency or scale constraints.”
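As a generic illustration of what managing such containerized applications with standard Kubernetes tooling can look like (nothing here is specific to IBM LinuxONE or Secure Service Containers), the sketch below uses the official Kubernetes Python client; the deployment name and namespace are placeholders.

```python
from kubernetes import client, config

# Generic illustration only: "my-app" and "default" are placeholders, and the
# workflow is the same one used on any other Kubernetes cluster.
config.load_kube_config()   # reads cluster credentials from ~/.kube/config

apps = client.AppsV1Api()

# Check the current state of a containerized application.
deployment = apps.read_namespaced_deployment(name="my-app", namespace="default")
print(f"{deployment.metadata.name}: "
      f"{deployment.status.ready_replicas}/{deployment.spec.replicas} replicas ready")

# Scale it out with the usual Kubernetes API call.
apps.patch_namespaced_deployment_scale(
    name="my-app",
    namespace="default",
    body={"spec": {"replicas": 3}},
)
```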

IBM said that the mainframe is a stable, secure and mature environment for IT initiatives, including the proliferation of blockchains.

“For instance, 87 percent of all credit card transactions and nearly $8 trillion in payments a year are processed on mainframes,” wrote IBM in a blog post. “Furthermore, the platform manages 29 billion ATM transactions each year, equivalent to nearly $5 billion per day. If you’re traveling by plane you can thank a mainframe, as they are also responsible for processing four billion passenger flights each year.”

Also read: IBM enables developers to build & run Kubernetes containers on bare metal cloud

Moreover, the z14 ZR1 and LinuxONE Rockhopper II increase the capacity, performance, memory and cache across all aspects of the system.


Why immersion cooling is imminent to power the next generation of datacenters

Real-time video streaming, online gaming as well as mobile devices already account for 60% of all data traffic, and it is predicted that this will rise to 80% by 2020 – ACEEE Summer Study on Energy Efficiency in Buildings.

In our always-on lifestyle, whether we like something on Facebook, stream the latest movie or post on Instagram, we generate huge amounts of data. Every online activity involves massive amounts of data stored in different datacenters, ranging from small closets to large server rooms to mammoth cloud datacenters.

Datacenters power our connected devices and store the billions of gigabytes of data that individuals and business organizations across the globe use to meet their daily transaction processing needs. These datacenters house IT equipment (servers, networking and storage) and consume massive amounts of energy both to run that equipment and to remove the heat it dissipates.

“Cooling is by far the biggest user of electrical power in the data center,” says Steve Carlini, global director of data center solution marketing at Schneider Electric, a vendor of data center power and cooling products. “That area may take up to 40 or 50 percent of all the power going into your data center.”

Global datacenter energy consumption keeps rising, as data centers usually run round the clock throughout the year and consume roughly 3% of all globally generated power. They generate approximately 4% of greenhouse gas emissions, which places the ICT industry on par with the airline industry. Data centers are estimated to have the fastest-growing carbon footprint across the whole ICT sector.

The intense power requirements needed to run and cool data centers now account for almost a quarter of global carbon dioxide emissions from ICT, according to analyst firm Gartner.

How much energy does a server farm/data center consume?

According to a report by Lawrence Berkeley National Laboratory, data centers are on track to use 73 billion kWh by 2020. Were it not for efficiency gains since 2010, data centers would likely be using 200 billion kWh by 2020. The report predicts that data center energy use will grow by 4% between 2014 and 2020.

The power draw of a data center varies with its size: a rack of servers in a closet draws a few kilowatts, while a large data center draws several tens of megawatts, consuming as much electricity as a small town. This growing energy consumption is one of the biggest data center power problems.

As stated above, this consumption is attributable not only to IT demand and cooling equipment, but also to lighting, power distribution and other requirements. The cooling system accounts for about 40% of the energy on average, with the most efficient systems using 24% of the total and the least efficient 61%.

Within the cooling system, that power is distributed among chillers, cooling towers and water pumps.

Water chillers are the most energy-consuming of these, since they supply chilled water to the cooling coils to keep the indoor temperature low enough to remove the heat emitted by servers and other equipment.

Datacenters, Cooling and Technological Advances

Over the last few years, the demand for data centers among cloud service providers, enterprises, government agencies, colocation providers and telecommunication organizations has increased exponentially due to the adoption of advanced technologies such as cloud-based services for their operational business needs.

Also, there has been a rapid growth of new technological trends like big data analytics, Artificial Intelligence/machine learning, cryptocurrencies and Internet of Things.

The rapid growth of machine learning and AI applications is contributing to new services and enhanced products, and is boosting demand for powerful high-performance computing hardware. This rising popularity in turn is driving demand for more data centre space and has design implications for powering and cooling high-density racks.

Bitcoin mining also consumes a huge amount of electricity, as shown by the steadily rising Bitcoin Energy Consumption Index.

Thus, the trends of increasing data center capacity and power use by IT equipment re-emphasize the need to reduce cooling load and data centre PUE in order to create green data centers and green IT.

What is PUE? Power usage effectiveness (PUE) is the metric used to determine data centre energy efficiency: the ratio of total facility energy to the energy delivered to IT equipment. An ideal PUE is 1.0, which indicates the maximum attainable efficiency with no overhead energy.
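To make the metric concrete, here is a minimal sketch of the calculation; the input figures are made-up illustrative values, not measurements from any particular facility.

```python
def pue(total_facility_kwh: float, it_equipment_kwh: float) -> float:
    """Power Usage Effectiveness = total facility energy / IT equipment energy."""
    return total_facility_kwh / it_equipment_kwh

# Illustrative numbers only: a facility drawing 1,600 kWh for every 1,000 kWh
# delivered to IT gear has a PUE of 1.6; an ideal facility would score 1.0.
print(pue(1600, 1000))   # 1.6  (a typical air-cooled site)
print(pue(1030, 1000))   # 1.03 (the PUE figure claimed later in this article)
```

Everything above 1.0 is overhead: cooling, power distribution, lighting and so on.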

Datacenter cooling solutions – Traditional (Air Cooling) vs Liquid immersion cooling

Cooling solutions are an integral part of data centers and they can be classified into two types namely, air-based and liquid-based cooling.

Air-based cooling | Liquid immersion cooling
Hard to reduce PUE under 2 | PUE of less than 2
Air circulation is uneven | Coolant is pumped evenly through the pod
Installation costs are large | 25% lower implementation costs compared to air-based solutions
Requires raised flooring | No raised flooring
Large server rooms require extra air fans | No need to move any air
Hardware requires fans | Does not require fans
Air pushes hazardous particles around | Sealed and clean environment
Noisy environment: fans and indoor units generate more than 80 dB of noise | Silent operation

Why immersion cooling is imminent to power the next generation of datacenters

With air-based datacenter cooling methods, cooling high-density racks is very difficult, and a modern server room can be full of high-density racks.

Datacenters that deploy liquid immersion cooling are more compact, modular, green and highly efficient. They can cut cooling electricity use by up to 99 percent compared with traditional data center cooling, which relies on chillers, heat pumps and HVAC.

Server immersion cooling allows companies to drastically reduce their data center energy load, irrespective of their current PUE. Servers are kept submerged in a liquid (generally oil-based) that is dielectric and thermally conductive.

This liquid submersion cooling technology allows data centers to use evaporative or adiabatic cooling towers instead of chiller-based air cooling.
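As a back-of-the-envelope illustration of why a circulating dielectric fluid can absorb rack-scale heat loads, the sketch below applies the basic heat-transfer relation Q = m_dot * c_p * delta_T; the fluid properties and temperature rise are assumed ballpark values, not the specification of any particular coolant.

```python
def coolant_flow_rate(heat_load_kw: float,
                      specific_heat_kj_per_kg_k: float,
                      delta_t_k: float) -> float:
    """Mass flow (kg/s) needed to carry away heat_load_kw at a given temperature rise.

    Uses Q = m_dot * c_p * delta_T, so m_dot = Q / (c_p * delta_T).
    """
    return heat_load_kw / (specific_heat_kj_per_kg_k * delta_t_k)

# Assumed ballpark values: dielectric oil c_p ~ 2.0 kJ/(kg*K) and a 10 K rise
# across a 50 kW pod (the heat capacity figure quoted later in this article).
flow_kg_s = coolant_flow_rate(50.0, 2.0, 10.0)
print(f"Required coolant flow: {flow_kg_s:.1f} kg/s")   # ~2.5 kg/s
```

A modest pumped flow of a few kilograms per second is enough to move tens of kilowatts of heat, which is why immersion pods can dispense with room-scale air handling.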

Use cases: Current commercial applications for immersion cooling include data-center-oriented solutions for commodity server cooling, bitcoin mining, server clusters, HPCC applications, and cloud-based and web hosting architectures.

Thus, demand for these oil-based data center solutions is on the rise and offerings too have become more professionalized.

Barcelona-based immersion cooling startup Submer Technologies deserves a special mention here for its oil-based data center product.

Submer: A way to socially responsible data centers

Submer has developed an immersion cooling solution for data centers, designed around the pain points of operating hardware in data centers. It uses a 100% biodegradable dielectric coolant and delivers a PUE as low as 1.03.

This server cooling system by Submer helps its users achieve more than 45% savings on a traditional electricity bill, along with hyperscaler-level efficiency.

It allows datacenters to extract 50 kW of heat in a 22U format.

The product helps increase computing density per square metre. It can be placed next to existing racks, and users can save up to 75% of floor space by placing units side by side. Each unit has a 50 kW heat dissipation capacity.

You can scale it to any volume in a very small space, as each unit has a footprint of roughly 1 m x 1.2 m.
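A rough way to sanity-check a savings claim like the one above is to compare the total facility energy implied by two PUE values at the same IT load. In the sketch below, the 1.03 figure is the PUE quoted for Submer, while the air-cooled baseline PUE and the IT load are assumptions chosen purely for illustration.

```python
def annual_facility_kwh(it_load_kw: float, pue: float, hours: int = 8760) -> float:
    """Total facility energy per year for a constant IT load at a given PUE."""
    return it_load_kw * pue * hours

IT_LOAD_KW = 100  # assumed constant IT load, for illustration only

baseline  = annual_facility_kwh(IT_LOAD_KW, pue=1.6)    # assumed air-cooled baseline
immersion = annual_facility_kwh(IT_LOAD_KW, pue=1.03)   # PUE quoted for Submer

overhead_cut = (baseline - immersion) / (baseline - IT_LOAD_KW * 8760)
total_cut = (baseline - immersion) / baseline
print(f"Overhead (non-IT) energy cut by {overhead_cut:.0%}")  # ~95%
print(f"Total facility energy cut by {total_cut:.0%}")        # ~36%
```

With a higher baseline PUE (around 1.9, not unusual for older air-cooled facilities), the total saving approaches the more-than-45% figure quoted above.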

Use cases: Any infrastructure or hardware. Built for hosters, cloud providers, regular datacenters, edge computing, cryptocurrency mining, blockchain and research datacenters.

“It’s mainly the mindset that needs to be changed. Everyone is used to air-based cooling systems, and since the ’60s we have been working the same way. But working with a fluid is just as comfortable, as we have designed it in a way that extracting a server, placing it on the maintenance rails, switching some hardware and dipping it back in takes you exactly the same time that it would in an air-based system. So it’s easy to use and integrate. You can retrofit existing hardware running in bare racks very easily into existing data center infrastructure, or deploy it as stand-alone.” – Daniel Pope, Co-founder & CEO, Submer.

SmartPod and CryptoPod: Two solutions in two sizes

  • SmartPod: This datacenter immersion cooling solution is optimized for variable workloads. It comes in 22U and 45U configurations that dissipate 25 kW to 50 kW and beyond. SmartPod 45U is for medium-density servers and SmartPod 22U for HPC/high-density computing.
  • CryptoPod: This product is built specifically for blockchain applications and is optimized for always-on workloads such as Bitcoin (BTC) and other cryptocurrency mining.

Many data center operators struggle with rising OPEX year over year due to increased power consumption by cooling systems, but Submer promises ROI in less than a year.

Thus, by deploying immersion cooling, a green-revolution technology, data center operators can build facilities that deliver high efficiency and reduced power consumption.


IBM expands datacenters to enable organizations to build futuristic applications leveraging IoT & AI

IBM announced the opening of four new datacenters: two in London, one in San Jose, California, and one in Sydney, Australia.

The announcement came one day after the company’s second quarter 2017 earnings report, which fell short on revenue.

With this, IBM now has nearly 60 datacenters spread across 19 countries to support the global expansion of its cloud services.

John Considine, General Manager for cloud infrastructure services, IBM said, “IBM operates cloud data centers in nearly every major market around the world, ensuring that our clients can keep their data local for a variety of reasons – including performance, security or regulatory requirements.”

He further added, “We continue to expand our cloud capacity in response to growing demand from clients who require cloud infrastructure and cognitive services to help them compete on a global scale.”


By expanding its cloud footprint, IBM plans to give global organizations access to new facilities that will help them build future-oriented IoT (Internet of Things), blockchain and AI (Artificial Intelligence) applications on a suitable cloud infrastructure.

Also Read: AI and cognitive computing – spearheading enterprise digital transformation

With this, clients will also get the flexibility to store and manage data wherever they choose.

IBM also says that the new datacenters will help clients meet regulatory compliance standards, as IBM is one of the first companies to adopt the EU’s Data Protection Code of Conduct for CSPs (Cloud Service Providers).

With access to more than 150 APIs, organizations can take advantage of the cognitive and data computing technologies at the core of these four new datacenters.

A recent IDC report predicted $266 billion in worldwide spending on cloud services and infrastructure, which will drive datacenter industry growth as well.

With this expansion, IBM will be able to strengthen its cloud services by supporting clients’ easy migration to the cloud.