Categories
News

VMware brings cloud experience to entire data center with acquisition of Avi Networks

VMware is acquiring Avi Networks to advance its strategy for bringing the public cloud experience to the entire data center.

Avi Networks is a leading provider of multi-cloud app delivery services. Hundreds of organizations around the world, including Fortune 500 companies, use its services.

It provides a Software Load Balancer, an Intelligent Web Application Firewall, Advanced Analytics and Monitoring, and a Universal Service Mesh. Enterprises can run these services across private and public clouds, and they support applications running on VMs, containers and bare metal.

With the acquisition of Avi Networks, VMware will be in the right position to provide customers a one-stop software-defined networking solution for the modern multi-cloud era.

“VMware is committed to making the data center operate as simply and easily as it does in the public cloud, and the addition of Avi Networks to the growing VMware networking and security portfolio will bring us one step closer to this goal after the acquisition closes,” said Tom Gillis, senior vice president and general manager, networking and security business unit, VMware.

“This acquisition will further advance our Virtual Cloud Network vision, where a software-defined distributed network architecture spans all infrastructure and ties all pieces together with the automation and programmability found in the public cloud. Combining Avi Networks with VMware NSX will further enable organizations to respond to new opportunities and threats, create new business models, and deliver services to all applications and data, wherever they are located,” added Gillis.

Once the acquisition closes, VMware will integrate Avi's load balancing capabilities with VMware NSX Data Center to help enterprises overcome the complexity of legacy systems and ADC appliances.

The Avi platform automates application delivery with closed-loop analytics, template-driven configuration, and integration with management products. It uses advanced analytics to monitor performance. Avi's technology is a secure, dynamic, multi-cloud fabric that runs across private and public clouds, enabling applications to remain unchanged while running in different computing environments.

“Upon close, customers will be able to benefit from a full set of software-defined L2-7 application networking and security services, on-demand elasticity, real time insights, simplified troubleshooting, and developer self-service,” said Amit Pandey, chief executive officer, Avi Networks.

The deal is expected to close in the second quarter of VMware’s fiscal year 2020, which closes on August 2. The transaction won’t have a material impact on fiscal 2020 operating results.

READ NEXT: VMware acquires Bitnami to augment multi-cloud efforts

Categories
Datacenter Interviews

“Demand for scale and speed delivered at the right economics is opening the door for a new breed of Hyperscale Service Provider being sought by the biggest Internet-based businesses.” – Chris Ortbals, QTS.

The rapid adoption of public cloud and the onset of new technologies like the internet of things, neural networks, artificial intelligence, machine learning and mega-scale online retailing are reshaping the data center industry, driving demand for data center capacity and cloud connectivity.

QTS is a leading data center provider that serves the current and future needs of both hyperscale and hybrid colocation customers via a software-defined data center experience. We recently interviewed Chris Ortbals, Executive Vice President, Products & Marketing, QTS, to get his take on changing data center requirements and QTS’ strategy of redefining the data center.

1. Please share an overview of QTS’ journey from inception till date with DHN readers. How has it transformed from being a single data center to becoming one of the leading national data center providers?

QTS is the creation of Chad Williams, a business and real-estate entrepreneur who had a strong vision of what a data center can and should be. Williams foresaw increasing IT complexity and demand for capacity and recognized the opportunity for large, highly secure, multi-tenant data centers, with ample space, power and connectivity.

Chris Ortbals, Executive Vice President, Products & Marketing, QTS

In 2005, QTS was formally established with the purchase of a 370,000 square foot Atlanta-Suwanee mega data center. Williams focused on building an integrated data center platform delivering a broad range of IT infrastructure services ranging from wholesale to colocation, to hybrid and multi-cloud, to hyperscale solutions, and backed by an unwavering commitment to customer support.

Since then, we have grown both organically and through acquisition into one of the world’s leading data center and IT infrastructure services providers, and in 2013 we began trading on the New York Stock Exchange under the symbol QTS.

Today, QTS offers a focused portfolio of hybrid colocation, cloud, and hyperscale data center solutions built on the industry’s first software-defined data center and service delivery platform and is a trusted partner to 1,200 customers, including five of the world’s largest cloud providers. We own, operate or manage more than six million square feet of data center space encompassing 26 data centers, 600+ megawatts of critical power, and access to 500+ networks, including connectivity on-ramps to the world’s largest hyperscale companies and cloud providers.

More recently, we have been accelerating momentum as a hyperscale data center provider able to meet unique requirements for scale and speed-to-market delivered at the right economics being sought by the biggest Internet-based businesses.

2. Throw some light on the recent QTS strategy of redefining the data center. What’s the Software-Defined Data Center approach, how do you plan to execute it and how will it help hyperscale and hybrid colocation customers?

We believe that QTS’ Service Delivery Platform (SDP) establishes QTS as one of the first true Software-Defined Data Centers (SDDC), with 100% completeness of vision. It is an architectural approach that facilitates service delivery across QTS’ entire hybrid colocation and hyperscale solutions portfolio.

Through policy-based automation of the data and facilities infrastructure, QTS customers benefit from the ability to adapt to changes in real time and to increase utilization, performance, security and quality of service. QTS’ SDP approach involves the digitization, aggregation and analysis of more than 4 billion data points per day across all of QTS’ customer environments.

For hybrid colocation and hyperscale customers, it allows them to integrate data within their own applications and gain deeper insight into the use of their QTS services within their IT environments. It is a highly-automated, cloud-based approach that increases visibility and facilitates operational improvements by enabling customers to access and interact with information related to their data center deployments in a way that is simple, seamless and available on-demand.

3. How do you differentiate yourself from your competition?

QTS’ software-defined service delivery is redefining the data center to enable new levels of automation and innovation that significantly improve our customers’ overall experience. This is backed by a high-touch, enterprise customer support organization that is focused on serving as a trusted and valued partner.

4. How does it feel to receive the industry leading net promoter score for the third consecutive year?

We were extremely proud to announce that in 2017 we achieved our all-time high NPS score of 72, marking the third consecutive year that we have led the industry in customer satisfaction for our data centers across the U.S.

Our customers rated us highly in a range of service areas, including customer service, physical facilities, processes, responsiveness and service of onsite staff, and our 24-hour Operations Service Center.

As our industry-leading NPS results demonstrate, our customers continue to view QTS as a trusted partner. We are also starting to see the benefits of our service delivery platform that is delivering new levels of innovation in how customers interact with QTS and their infrastructure, contributing to even higher levels of customer satisfaction.

5. QTS last year entered into a strategic alliance with AWS. Can you elaborate on what CloudRamp is and how it will simplify cloud migration?

AWS came to us last year, telling us that a growing number of their customers were requiring colocation as part of their hybrid IT solution. They viewed QTS as a customer-centric colocation provider with the added advantage of our Service Delivery Platform that allowed us to seamlessly integrate colocation with AWS as a turnkey service available on-demand.

We entered a strategic collaboration with AWS to develop and deliver QTS CloudRamp™ – direct-connected colocation for AWS customers, made available for purchase online via the AWS Marketplace.

By aligning with AWS, we were able to offer an innovative approach to colocation, bridging the gap between traditional solutions and the cloud. The solution is also groundbreaking in that it marked the first time AWS had offered colocation to its customers and signaled the growing demand for hybrid IT solutions. At the same time, it significantly accelerated time-to-value for what previously had been a much slower purchasing and deployment process.

For enterprises with requirements extending beyond CloudRamp, QTS and AWS provide tailored, hybrid IT solutions built upon QTS’ highly secure and reliable colocation infrastructure optimized for AWS.

6. Tell us something about Sacramento-IX. How will the newly deployed Primary Internet Exchange Hub in QTS Sacramento Data Center facilitate interconnection and connectivity solutions?

QTS is strongly committed to building an unrestricted Internet ecosystem and we are focused on expanding carrier neutral connectivity options for customers in all of our data centers.

Interconnection has evolved from a community-driven effort in the ’90s to a restrictive, commercial industry dominated by a few large companies. Today there is a movement to get back to a community-driven, high-integrity ecosystem, and QTS is aligning our Internet exchange strategy as part of this community.

A great example is how the Sacramento Internet Exchange (Sacramento-IX) has deployed its primary Internet exchange hub within QTS’ Sacramento data center. It is the first Internet exchange in Sacramento and is being driven by increased traffic and network performance demands in the region. It expands QTS’ Internet ecosystem and simplifies our customers’ network strategies by providing diverse connectivity options that allow them to manage network traffic in a more cost-effective way.

Once considered the backup and recovery outpost for the Bay Area, Sacramento has quickly become a highly interconnected, geostrategic network hub for northern California. It also solidifies our Sacramento data center as one of the most interconnected data centers in the region and as the primary west coast connectivity gateway for key fiber routes to Denver, Salt Lake City and points east.

7. Hyperscale data centers are growing at an accelerated pace and are expected to soon replace the traditional data centers. Can you tell us some factors/reasons that aid the rise of hyperscale data centers?

The rapid adoption of public cloud, the Internet of things, artificial intelligence, neural networks, machine learning, and mega-scale online retailing are driving unprecedented increases in demand for data center capacity and cloud connectivity.

Hyperscale refers to the rapid deployment of this capacity required for new mega-scale Internet business models. These Hyperscale companies require a data center growth strategy that combines speed, scalability and economics in order to drive down cost of compute and free up the capital needed to feed the needs of their core businesses. Think Google, Uber, Facebook, Amazon, Apple, Microsoft and many more needing huge capacity in a quick timeframe. They are looking for mega-scale computing capacity inside hyperscale data centers that can deliver economies of scale not matched by conventional enterprise data center architectures.

This demand for scale and speed delivered at the right economics is opening the door for a new breed of Hyperscale Service Provider being sought by the biggest Internet-based businesses. These are data centers whose ability to deliver immense capacity must be matched by an ability to provide core requirements for speed, quality, operator excellence, visibility and economics. Those requirements leave out the majority of conventional hosting and service providers, who are either not interested in or not capable of meeting them.

And while these organizations may need very large, geostrategic 20, 40 or 60 megawatt deployments, they typically want a provider that can deliver that capacity incrementally to reduce risk and increase agility.

8. Throw some light on your current datacenters and future expansion plans.

Chad Williams had the vision for identifying large, undervalued – but infrastructure-rich – buildings (at a low cost basis) that could be rapidly transformed into state-of-the-art “mega” data center facilities to serve growing enterprise demand for outsourced IT infrastructure services.

In Chicago, the former Chicago Sun-Times printing plant was transformed into a 467,000 square foot mega data center. In Dallas and Richmond, former semiconductor plants are now state-of-the-art mega data centers encompassing more than 2 million square feet. And in Atlanta, the former Sears distribution center was converted into a 967,000 square foot mega data center that is now home to some of the world’s largest cloud and social media platforms.

However, in some cases, a greenfield approach is the more viable option. In Ashburn, Va., the Internet capital of the world, we are building a new 427,000 square foot facility from the ground up that is expected to open later this summer. Expansion plans also call for new data center builds in Phoenix and Hillsboro, Oregon.

9. What is your datacenter sustainability and efficiency strategy?

At QTS, we understand that being a good environmental steward takes much more than just a simple initiative. That’s why we have focused our efforts on developing a company-wide approach – one that utilizes reused and recycled materials, maximizes water conservation and improves energy savings.

Central to this is our commitment to minimizing the data center carbon footprint and utilizing as much renewable fuel as possible by implementing a three-pronged sustainability approach featuring solutions in containment and power usage effectiveness (PUE).

This encompasses:

       1. Develop and Recycle Buildings

Part of our data center sustainability strategy is reusing brownfield properties and transforming them into state-of-the-art data centers.

        2. Water Conservation

With a large data center comes a big roof that is capable of harvesting rainwater. We collect millions of gallons of water using a harvesting system on a portion of the roof.

        3. Energy Efficiency

As a data center provider, cooling is a critical part of our job and is approximately 30% of the electricity load at the data center.
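Since PUE comes up as part of the sustainability approach above, here is a minimal sketch of how that metric is computed. The facility size and overhead shares below are hypothetical illustrations rather than QTS figures; only the roughly 30% cooling share comes from the interview.

```python
# Power Usage Effectiveness (PUE) = total facility energy / IT equipment energy.
# A value of 1.0 would mean every watt goes to IT gear; lower is better.

def pue(it_kw: float, cooling_kw: float, other_overhead_kw: float) -> float:
    """Compute PUE from average electrical loads in kW."""
    total_kw = it_kw + cooling_kw + other_overhead_kw
    return total_kw / it_kw

# Hypothetical 10 MW facility: cooling at ~30% of total load (the share cited above),
# other overhead (lighting, UPS and distribution losses) assumed at ~5%.
total_kw = 10_000.0
cooling_kw = 0.30 * total_kw
other_kw = 0.05 * total_kw
it_kw = total_kw - cooling_kw - other_kw

print(f"PUE = {pue(it_kw, cooling_kw, other_kw):.2f}")  # ~1.54 for these assumptions
```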

QTS is one of the first data center companies to invest in renewable energy specifically for its hybrid colocation and hyperscale customers.

A recent example is a multi-year agreement with Citi to provide 100% renewable power for our 700,000 sq. ft. mega data center in Irving, Texas. The power and renewable energy credits will come from the Flat Top Wind Project, a 200 megawatt utility-scale wind energy facility in central Texas. QTS will purchase 15 MW of 100% renewable power for its Irving data center, with plans for a similar agreement for its Fort Worth data center later this year.

The investment supports QTS’ commitment to lead the industry in providing clean, renewable energy alternatives for QTS hybrid colocation and hyperscale customers, which include five of the five largest cloud providers and several global social media platforms.

In addition to the new wind power initiative in Texas, QTS’ New Jersey data center features a 14 MW solar farm to offset emissions associated with power consumption at that facility. QTS plans to expand renewable power initiatives in existing and new data centers including those being planned for Phoenix and Hillsboro, OR.

10. What’s in the roadmap for the year 2018?

QTS is now executing on our 2018 strategic growth plan that involves continued innovation with the Service Delivery Platform. It enables a software-defined data center experience for hyperscale and hybrid colocation customers. QTS’ SDP represents a big data approach enabling customers to access and interact with information related to their specific IT environment by aggregating metrics and data from multiple sources into a single operational view.

More importantly, it provides customers the ability to remotely view, manage and optimize resources in real time in a cloud-like experience, which is what customers increasingly expect from their service providers. In addition, through a variety of software-defined networking platforms, enterprises can now get direct connectivity to the world’s largest cloud providers, with real-time visibility and control over their network infrastructure using QTS’ SDP application interface.

Categories
Cloud Cloud News News

Worldwide public cloud market to hit $186.4 billion, with hyperscale cloud providers dominating it: Gartner

According to a new analysis by Gartner, the public cloud services market is projected to grow from $153.5 billion in 2017 to $186.4 billion in 2018, a rise of 21.4 percent.

Among the cloud segments, IaaS (Infrastructure-as-a-Service) was identified as the fastest growing segment, predicted to grow 35.9 percent in 2018 to reach $40.8 billion, driven by leading IaaS providers such as Amazon Web Services (AWS) and Microsoft Azure.

Source: Gartner

SaaS (Software-as-a-Service) was again identified as one of the largest segments of the cloud market, with revenue growth of 22.2 percent expected, reaching $73.6 billion in 2018. Gartner also predicted that by 2021, SaaS will account for 45 percent of total application software spending.
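As a quick sanity check on the figures quoted above, the growth rate and segment shares can be reproduced directly from the dollar values reported in this article; the percentages below are simply recomputed from those numbers.

```python
# Recomputing the Gartner figures quoted above (all values in US$ billions).
market_2017 = 153.5
market_2018 = 186.4
iaas_2018 = 40.8
saas_2018 = 73.6

growth = market_2018 / market_2017 - 1
print(f"Overall public cloud growth, 2017 to 2018: {growth:.1%}")  # ~21.4%, as stated

print(f"IaaS share of the 2018 market: {iaas_2018 / market_2018:.1%}")  # ~21.9%
print(f"SaaS share of the 2018 market: {saas_2018 / market_2018:.1%}")  # ~39.5%
```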

SaaS-based application models are becoming the preferred choice for most enterprises. Sid Nag, research director at Gartner, notes that SaaS demands are changing, with users seeking more purpose-built solutions that can meet their specific business outcomes.

Within the PaaS (Platform-as-a-Service) segment, dbPaaS (database platform as a service) is seeing the highest demand and is expected to hit $10 billion by the year 2021. As a result, hyperscale cloud providers are expanding their range of services to include dbPaaS.

Talking about the high demand for dbPaaS, Nag said that customers should explore dbPaaS offerings beyond those of the large service providers to avoid lock-in.

Despite the high forecast rates, Gartner expects growth to stabilize from 2018 onwards as cloud services mature within the IT segment.

One of the primary challenges here is avoiding vendor lock-in. With most of the big cloud providers, such as AWS and Microsoft, offering the major cloud services, companies that adopt one vendor’s cloud platform can find it very expensive and complicated to move away later.

Gartner said that this scenario might give rise to new demands by customers who want easy migration of their apps and data, without any penalties.

Categories
Cloud News

8 must-haves for cloud MSPs, and how colocation data centers are among the key drivers of digital transformation: CloudFest

The annual festival for the cloud, telecom, infrastructure and hosting industry – CloudFest, successfully came to a close on 16th March 2018. The week-long event was the ultimate celebration of the cloud with fun, knowledge and networking exchanges.

The sixth edition of CloudFest saw a huge turnout, with over 7,000 attendees, 1,300+ CEOs, 2,500+ companies, 250+ speakers and 80+ countries represented.

The sessions included latest announcements, product exhibitions, case studies and panel discussions.

One of the important highlights of the event was the CloudFest Hackathon, which brought together open source technology leaders and communities for some real-time coding.

The sessions covered everything from fog computing to domains, security, infrastructure, colocation and big data.

Here’s a quick recap of the key sessions held at CloudFest 2018:

SONM – Decentralized fog computing

Igor Lebedev, CTO of SONM (Supercomputer Organized by Network Mining), discussed the various challenges of current architectures related to computing workloads, networking requirements and storage volumes. He advocated microservices and MPP (Massively Parallel Processing) as solutions to these growing architectural challenges. With fog computing, tasks are processed right where they arise. GPUs, he said, are 100 times more powerful than typical CPUs.

Verisign – NameStudio API

Ebrahim Keshavarz, Product Management at Verisign, introduced the company’s new NameStudio API, a sophisticated domain name suggestion tool that can deliver countless .com and .net domain names and suggest relevant domains across a large number of TLDs.

He said that domain names are the common denominator for various online services.

And thus, finding the perfect domain name is very important. Verisign’s NameStudio API is backed by various machine learning algorithms and can be easily integrated into any online platform.

Hubgets – Communication against the machine

Bogdan Carstoiu, CEO at Hubgets, talked about the rapidly growing market for unified communications. He discussed how Artificial Intelligence (AI) is having a strong impact on the communication industry. AI-based smart bots can save a lot of time otherwise wasted on repetitive tasks by suggesting answers to people.

Solar Archive – Delivering Shareholder value as a Managed Service Provider

David Clayton, non-executive director at Solar Archive, talked about shareholder value and how MSPs can maintain capital flow in their business.
He identified infrastructure, talent, brand and several other factors as important assets of a cloud MSP.

He discussed the importance of investing in the right infrastructure and talent to manage the MSP business and grow revenue, and outlined the must-haves of a successful MSP.

He also emphasized the importance of email archiving, which he said will grow at a CAGR of 14% during 2015-2019. With 60% of business data residing in email, it is important to protect and secure email conversations, and MSPs have a huge opportunity in offering email archiving services.
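For readers unfamiliar with how a compound annual growth rate translates into cumulative growth, here is a one-line worked calculation. The 2015 base value is a hypothetical placeholder; only the 14% rate and the 2015-2019 window come from the session.

```python
# What a 14% CAGR over 2015-2019 implies in cumulative terms.
base_2015 = 1.0   # hypothetical starting market size (any unit)
cagr = 0.14       # compound annual growth rate quoted in the talk
years = 4         # 2015 -> 2019

value_2019 = base_2015 * (1 + cagr) ** years
print(f"2019 value is {value_2019 / base_2015:.2f}x the 2015 base")  # ~1.69x, i.e. ~69% total growth
```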

VMware – Avoid the Silo

Graham Crich, EMEA Director of the VMware Cloud Provider Program at VMware, talked about the evolution of computing from mainframes to minicomputers to present-day virtualization platforms. He also discussed the silos that get built around a particular cloud service and how cloud providers can help their end customers break these silos.

He talked about the VMware and AWS partnership and how it will lead to new opportunities for cloud providers. As session takeaways, he advised CSPs to avoid being siloed by embracing the whole hybrid cloud operating model and choosing the right tools and management layer to integrate the multi-cloud environment.

InterNetX – Hitting the cloud with Kubernetes

Killian Ries, Senior System Engineer, and Marco Revesz, Business Automation Evangelist, at InterNetX talked about Kubernetes and how they migrated a monolithic application to a microservice-based cloud application hosted on Kubernetes. Through a case study, they presented a solution to develop, deploy and run an application on modern DevOps standards.

Infradata – Evolution of Hosting

Remco Hobo, security architect at Infradata, talked about the evolution of the hosting industry. He identified the digital economy, customer demands and technology as major drivers of the current technology landscape, and walked through the various challenges and opportunities facing the hosting industry.


Here, hosting providers need to specialize in their specific sectors to stay relevant in the highly competitive market.

Asseco Data Systems – Security in the Cloud

Andrzej Dopierala, President of the Management Board at Asseco Data Systems, talked about security in the cloud. He discussed the rising security challenges in the digital market and introduced tools like SimplySign for PDF signing, Certum Code Signing, Certum SSL and Certum Email Signing as among the most popular trust services. By expanding these services to the cloud, the security of data can be enhanced.

e-shelter services – Colocation, enabler for Hybrid and Multi-cloud solutions

Toan Nguyen, Director of Business Development & Cloud Platform at e-shelter services, talked about colocation services and how they can be the enabler for hybrid and multi-cloud solutions.

He said that cloud, edge computing, IoT and digitalization are the leading trends driving customers’ demand for scalable and agile data centers. This will require present-day data center and colocation service providers to reframe their business strategies to meet evolving customer needs.

He emphasized the role of colocation data centers and connectivity as keys to digital transformation.

HPE – Cloud28+ – Beyond a Single cloud story

Xavier Poisson Gouyou Beauchamps, VP Service Providers and Cloud28+ Worldwide at HPE, discussed HPE’s partner ecosystem and how it helps partners extend their sales reach and create new revenue opportunities. He explained how HPE is combining hybrid IT innovations, including software-defined infrastructure, private cloud with the speed and agility of public cloud, and management of multi-cloud and hyperconverged infrastructure, to redefine cloud at the edge.

He also announced new updates to the Cloud28+ platform.

Apart from the sessions, various innovative products and technology models were presented during the event. One of the noteworthy products was RackNap, a cloud services delivery platform. We got in touch with its COO (Chief Operating Officer), Sabarinathan Sampath, who told us about the product in detail and how it is helping businesses globally automate their product and service delivery.

CloudFest had some out-of-the-box sessions by Technology Futurist Ian Khan, a strong advocate of technologies like cloud, AI and blockchain. He said, “If business success can be attributed to one thing, it is trust,” and that trust should be the foundation when companies interact with new systems.

Alexander Schulz, slackline world record holder, motivated the attendees by sharing his own experience of setting the record. He said that stepping out of the comfort zone is one of the most important steps towards achieving success on the personal as well as the professional front.

CloudFest, overall, was an event that celebrated cloud every day and through every session. We are very excited for the next edition of CloudFest.

Meanwhile, you can have a look at the short video on CloudFest:

Categories
Articles Cloud Cloud News Datacenter

In the cloud era, Dedicated Servers remain an attractive option

You have to hand it to cloud vendors — they have incredible marketing. If you are new to infrastructure hosting, you might be forgiven for thinking that the cloud is the only option, that anything else is just so last century.

In reality, dedicated servers remain the go-to infrastructure platform for many experienced engineers. Why? Because dedicated servers are cost effective, offer unbeatable performance, and, most importantly, give you control.

A dedicated server is just what it sounds like: a powerful computer with on-board processors, RAM, and storage, located in a data center that provides power, cooling, and redundant connections to the Internet.

Cloud platforms run a hypervisor on top of the physical hardware and layer virtual servers, guest operating systems, and the user’s software above it. Dedicated servers are sometimes called bare metal servers because your operating system and software run as close to the physical hardware — the bare metal — as possible.

There are good reasons for a virtualization layer: if you want to deploy dozens of servers in seconds, you want a cloud platform. But that’s not what most server hosting customers need.

  • Cost Effective

Dedicated servers have a reputation for being expensive, which isn’t entirely undeserved. On average, a dedicated server costs more than a cloud server, and the cheapest cloud servers are less expensive than the cheapest dedicated servers. But if you compare the price-to-power ratio, dedicated servers come out way ahead. A cloud server with resources equivalent to a particular dedicated server is almost certainly more expensive.

Dedicated servers provide the most bang for the buck.
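A minimal sketch of that price-to-power argument, using entirely hypothetical list prices (no real provider's pricing is implied): compare cost per unit of capacity rather than the absolute monthly bill.

```python
# Hypothetical comparison of absolute price versus price per unit of capacity.
# All prices and specs below are made-up examples for illustration only.
from dataclasses import dataclass

@dataclass
class Offer:
    name: str
    monthly_usd: float
    cores: int
    ram_gb: int

    def usd_per_unit(self) -> float:
        """Crude 'price-to-power' proxy: dollars per (core + GB of RAM)."""
        return self.monthly_usd / (self.cores + self.ram_gb)

offers = [
    Offer("Small cloud VM (hypothetical)", 40.0, 2, 4),
    Offer("Large cloud VM (hypothetical)", 640.0, 16, 64),
    Offer("Dedicated server (hypothetical)", 250.0, 16, 64),
]

for o in offers:
    print(f"{o.name}: ${o.monthly_usd:.0f}/month, ${o.usd_per_unit():.2f} per unit")

# The small cloud VM has the lowest absolute price, but per unit of capacity the
# dedicated box comes out well ahead -- which is the article's point.
```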

  • Performance

Because dedicated server software runs close to the bare metal, there’s no virtualization overhead. Every resource is applied to your workload. Additionally, dedicated servers scale further vertically than cloud servers — if you need a really powerful server, go dedicated because you’ll pay less while benefiting from a more powerful machine.

  • Privacy and Security

Public clouds are multi-tenant hosting environments: many organizations use the same underlying hardware. Now, while that’s not necessarily a problem, cloud users have no insight into the underlying network, virtualization layer, and hosting environment. They simply don’t know how seriously the platform takes security or privacy.

  • Control

Most importantly, bare metal dedicated servers provide complete control: it’s your server and you can do whatever you want with it. Managing the server is your responsibility. It’s up to you to keep it secure. But if you know what you’re doing, that’s by far the best scenario. I can’t tell you how often I’ve commiserated with experienced system administrators as they wait for their cloud vendor to fix something that they could fix in minutes if only they had access and control.

For experienced server admins, it doesn’t get much better than an enterprise-grade dedicated server.

Also read: 5 Cloud Computing Predictions for 2018 that will define the cloud industry for good

About Guest Author-

Chris Schwarz is the CEO of Cyber Wurx, a premium colocation services provider with a world-class Data Center in Atlanta, Georgia that also specializes in Dedicated Server Hosting and VPS Hosting. Check out their hosting blog at https://cyberwurx.com/blog/.

Categories
Cloud Cloud News Datacenter

VMware teams up with NxtGen for scalable and secure cloud environments 

The industry-leading virtualization software company, VMware, has partnered with NxtGen, the emerging leader in the datacenter and cloud services space, to help enterprises decrease time to market, reduce capex and development costs, with new solutions across public cloud and on-premise data centers.

As part of the alliance, the companies will deliver new solutions as part of the NxtGen Infinite Datacenter, provided from NxtGen’s Mumbai and Bengaluru data centers.

“Service providers like NxtGen can leverage VMware technology to help customers benefit from decreased time to market, reduced capital expenditure and lowered development costs to stay competitive,” said Arun Parameswaran, Managing Director, VMware India.

As business needs and use cases grow, enterprises adopt multiple public and private cloud services alongside their on-premise data centers. This makes the IT infrastructure complex, and issues related to management and security arise.

NxtGen Infinite Datacenter provides the ability to integrate the internal and external environments of enterprise data. This makes it easy for enterprises to implement a private cloud on premises and later move to a hybrid cloud as well.

VMware will extend NxtGen’s data center expertise through VMware NSX. Using NxtGen’s VMware NSX-based services, enterprises that run workloads on on-premise infrastructure will be able to extend those workloads into a public cloud as well, should the demand arise.

“We look forward to continuing to strengthen NxtGen’s vision of an ‘Infinite Data Center’ through our partnership with VMware,” said AS Rajgopal, CEO, NxtGen.

Additionally, the integration provides security, load balancing, and disaster recovery-as-a-service (DRaaS) capabilities. VMware protects its software and virtual machines using AppDefense.

VMware’s partnership with NxtGen will also promote hybrid cloud adoption.

Also read: VMware enhances its hybrid cloud platform with new cloud management and container networking capabilities

Last year, VMware partnered with Amazon Web Services to launch VMware Cloud on AWS, which helped customers enjoy cloud computing benefits without shelling out anything extra for VMware’s data center management software.

Categories
Acquisition Cloud News

Veeam acquires N2WS, strengthens its position as data protection provider for AWS Cloud 

Switzerland-based software firm, Veeam, has acquired N2WS, a leading provider of IaaS data protection, with a cloud-native backup and disaster recovery solution designed especially for AWS workloads.

Veeam is known for its Availability for the Always-On Enterprise solution, which helps organizations meet recovery time and point objectives (RTPO) for any data, application or cloud.

By acquiring N2WS, Veeam aims to enhance its product portfolio with Availability solutions for apps and data across physical, virtual and multi-cloud environments. With the move, Veeam gains access to N2WS’s data protection technology and R&D for integrating IaaS data protection for AWS workloads into the Veeam Availability Platform, while N2WS gains access to Veeam’s nearly 55,000 resellers and 18,000 CSPs.

“As enterprises look to migrate more workloads to the public cloud, having a robust and intuitive data protection and Availability solution is imperative,” said Peter McKay, Co-CEO and President at Veeam. “By combining Veeam’s industry-leading capabilities in protecting virtual, physical and cloud environments with N2WS’ leadership in AWS data protection, we have a strong solution to deliver on the needs of the digital enterprise. N2WS has experienced incredible growth in the last 12 months and it will continue to operate as a standalone business to best position the company to provide AWS data protection – the same way Veeam transformed protection for VMware environments a decade ago. Together, we will achieve great things; this is a game-changer in every sense!”

Public cloud adoption is rapidly increasing (AWS in particular), and it seems that Veeam has timed the acquisition really well. The company might see strong growth in the market in 2018.

“Joining forces with one of the world’s fastest growing software companies is very exciting for the N2WS team and for our customers,” said Jason Judge, CEO at N2WS. “We will further accelerate our rapid growth and the development of our top-rated solutions by leveraging the world-class team that Veeam has established. We also look forward to assisting Veeam customers explore their public cloud strategies with our years of innovation in public cloud storage.”

Additionally, Veeam will now help enterprises of all sizes run a considerable number of workloads in the public cloud, and enable its existing customers to take advantage of Cloud Protection Manager (CPM) from N2WS.

Also read: 5 Cloud Computing Predictions for 2018 that will define the cloud industry for good

Veeam paid $42.5 million in cash for the acquisition. N2WS will keep its brand name but will operate as “A Veeam Company”.

Categories
Cloud Datacenter News

HPE simplifies multi-cloud management with OneSphere

At its Discover 2017 customer conference in Madrid, HPE introduced a simplified multi-cloud management platform called OneSphere that provides a combined experience across public clouds, on-premises private clouds and software-defined infrastructure.

HPE has designed OneSphere to address the needs of developers, IT operators, data scientists, researchers and enterprises to build clouds, deploy apps, and gain insights faster.

“Our customers need a radically new approach – one that’s designed for the new hybrid IT reality,” stated Ric Lewis, senior vice president and general manager, Software-Defined and Cloud Group at HPE. “With HPE OneSphere, we’re abstracting away the complexity of managing multi-cloud environments and applications so our customers can focus on what’s important – accelerating digital transformation and driving business outcomes.”

OneSphere provides a SaaS portal through which it offers access to a set of IT resources, including public cloud services and on-premises environments. It also offers a unified experience across clouds, sites, orchestration tools, PaaS and containers, minimizing the need for specialized skills.

HPE said that managing multi-cloud environments with traditional management solutions is complicated, requiring multiple points of management and consuming more resources and cost. Such environments are also difficult to set up and manage, since most companies use a combination of public cloud and on-premises resources.

HPE OneSphere has been designed to simplify these management complications, providing users one-stop access to all their applications and data from anywhere.

It works across containerized workloads, bare metal applications, and VMs, enabling internal stakeholders to compose hybrid clouds.

OneSphere streamlines DevOps so that enterprises get deep insights across public and on-premises environments, enabling them to speed up cycle times, improve productivity, and generate cost savings.

Also read: HPE’s new high-density compute and storage solutions to help businesses adopt HPC and AI applications

HPE said that OneSphere is a solution to accelerate digital transformation, and is ideal for businesses that want to capitalize on digital disruption and enable a broad range of new customer experiences.

It will be available from January 2018.

Categories
Cloud News Datacenter News

Data Center Infrastructure Market Growth Lags, Public Cloud Flourishes: Synergy Research Group

Synergy Research Group’s new Q2 data shows that the quarterly spending on overall data center hardware and software has grown by only 5% in the last 2 years, while spending on the public cloud portion of the same has grown by 35%.

Spending on traditional, non-cloud data center hardware and software has decreased by 18% over the same period. The private cloud infrastructure market has grown, though less than public cloud.

Total revenues from data center infrastructure equipment (cloud and non-cloud hardware and software) in the second quarter were over $30 billion, of which public cloud infrastructure accounted for 30%.

Operating systems, networking, servers, storage and virtualization software accounted for 96% of the overall data center infrastructure market, with the remainder comprising management and network security software.

“With cloud service revenues continuing to grow by over 40% per year, enterprise SaaS revenue growing by over 30%, and search/social networking revenues growing by over 20%, it is little wonder that this is all pulling through continued strong growth in spending on public cloud infrastructure,” said John Dinsdale, a Chief Analyst and Research Director at Synergy Research Group.

“While some of this is essentially spend resulting from new services and applications, a lot of the increase also comes at the expense of enterprises investing in their own data centers. One outcome is that public cloud build is enabling strong growth in ODMs and white box solutions, so the data center infrastructure market is becoming ever more competitive,” he added.

Original Design Manufacturers (ODMs) collectively hold the largest share of the public cloud market, while Cisco leads among individual vendors, followed by Dell EMC and HPE.

Dell EMC was reportedly the leader in the private cloud market in Q2, followed by HPE and Microsoft.

Also read: Dell EMC and HPE compete for top spot in server market as worldwide server shipments grew by 2.4 percent in Q2 2017: Gartner report

By segment, HPE leads in server revenue, Dell EMC leads in storage, and Cisco dominates networking. Microsoft appears in the rankings mainly due to its dominance in server operating systems and virtualization applications.

Other than Cisco, Dell EMC, HPE, and Microsoft, the other leading vendors in the market are IBM, Huawei, VMware, Lenovo, Oracle and NetApp.

Categories
Cloud News

Cisco launches cloud-based data center management-as-a-service for its UCS and HyperFlex customers

Cisco recently announced the launch of its cloud-based management platform, Cisco Intersight, for its UCS (Unified Computing System) and HyperFlex systems.

With this, the company aims to present new ways of systems management.

Cisco Intersight aims to solve the challenges customers face due to increasingly distributed applications and data; mobility and IoT demanding computing beyond data center capabilities; and the fact that traditional IT models for managing systems are insufficient to meet the complexity, scale and velocity demanded by modern IT.

With server sales growth slowing, the move seems to be an effort by the company to make its hyper-converged infrastructure offerings – UCS and HyperFlex – more attractive by simplifying data center management.

As per a statement given by Todd Brannon, Director of Product Marketing at Cisco, to a news portal, the new data center management software-as-a-service is part of the company’s multi-year project called Starship, which has been underway for 18 months and under which the company plans to migrate its current system management software and its customers to cloud-based management software.

Intersight will remove the need to separately install, monitor and maintain Cisco’s data center management software. The cloud-based software will give a single view for managing all the technical aspects of data center operations.

The company also aims to evolve the new platform and improve its ability to handle complexity. For this, it plans to add AI and machine learning capabilities to the platform, through which it will collect customer data to learn how to diagnose problems automatically, without manual intervention.

“By moving management software to the cloud, we relieve our customers of the burden of maintaining systems management software and hardware. It’s important to note that this approach can take the form of a public (Cisco managed) cloud service and also a private (customer-hosted) cloud model,” per the company blog.

The company believes that a management cloud will promote faster delivery of new services and functionalities, easy extensibility, and compliance assurance through consistent deployment and integration with Cisco TAC (Technical Assistance Center).

Cisco’s move to reposition its data center software to a SaaS based model will certainly be advantageous for the company.

Cisco will talk more about Intersight in a live webinar streamed on September 26, 2017 at 10:00 am Pacific Time (1:00 pm Eastern Time) on TechWiseTV.

Its existing customers can join the technical preview to know more about the product.
