
Nvidia acquires Israeli networking firm Mellanox for $6.9B

Nvidia is acquiring the Israel-based networking firm Mellanox for around $6.9 billion in cash.

Mellanox is a prominent supplier of end-to-end Ethernet and InfiniBand smart interconnect solutions and services for servers and storage. These solutions help companies boost datacenter efficiency by optimizing throughput and latency, delivering data to applications faster, and unlocking system performance.

Nvidia is a world leader in high-performance computing (HPC), while Mellanox is an early innovator in high-performance interconnect technology. The acquisition will combine the two companies’ expertise.

“The emergence of AI and data science, as well as billions of simultaneous computer users, is fueling skyrocketing demand on the world’s datacenters. Addressing this demand will require holistic architectures that connect vast numbers of fast computing nodes over intelligent networking fabrics to form a giant datacenter-scale compute engine,” said Jensen Huang, founder and CEO of NVIDIA.

More than half of the world’s Top 500 supercomputers, as well as leading hyperscale datacenters, use Nvidia’s computing platform and Mellanox’s interconnect technology.

With the acquisition of Mellanox, Nvidia will optimize datacenter-scale workloads across the entire computing, networking and storage stack, the companies said. This will help customers achieve higher performance, greater utilization and lower operating costs.

“We’re excited to unite NVIDIA’s accelerated computing platform with Mellanox’s world-renowned accelerated networking platform under one roof to create next-generation datacenter-scale computing solutions. I am particularly thrilled to work closely with the visionary leaders of Mellanox and their amazing people to invent the computers of tomorrow,” added Jensen Huang.

Nvidia and Mellanox have been working closely for some time. They jointly contributed to the development of the world’s two fastest supercomputers, Sierra and Summit.

Following the completion of the acquisition, Nvidia will continue to invest in local excellence and talent in Israel. No changes will be made to customer sales and support.

Also read: NetApp and Nvidia join forces to help businesses accelerate journey into AI revolution

“We share the same vision for accelerated computing as NVIDIA,” said Eyal Waldman, founder and CEO of Mellanox. “Combining our two companies comes as a natural extension of our longstanding partnership and is a great fit given our common performance-driven cultures. This combination will foster the creation of powerful technology and fantastic opportunities for our people.”

Image source: Nvidia


“Submer is a trailblazer and our innovations will be the catalyst to re-define datacenter design and operations”— Jeff Brown, MD North America, Submer Immersion Cooling

Modern businesses run on data. This data is housed in datacenters, across thousands of servers. The demand for datacenter capacity has increased manifold in the last few years. But these datacenters have a profound impact on our environment.

The power consumed by servers churning through information drives rising operational costs and growing carbon emissions. Not everybody is fully aware of this, and the time has come for datacenters to become more efficient AND greener.

We interviewed Jeff Brown, Managing Director North America at Submer Immersion Cooling. Submer provides highly efficient, eco-friendly immersion cooling for datacenters that can save up to 99% of cooling costs and up to 87% of physical space. This helps enterprises save energy while reducing their contribution to global warming.

Jeff has over 25 years of experience in data center sales and operations. Before Submer, Brown served as CCO of UK2 Group and Vice President of Sales at Savvis, the $1.5 billion cloud infrastructure and hosted technology services division of CenturyLink. He has also held commercial leadership positions at VeriSign, Equinix and CompuServe, and has developed a reputation for building, scaling and restoring growth to technology sales organizations.

Read on as he discusses his new role at Submer, why immersion cooling is important for a greener future for data centers, and more.

1. Let’s start with a brief overview of your new role at Submer Immersion Cooling as the Managing Director, North America.

I am the first Submer employee to be based in the United States and am focused on establishing and scaling our business in North America.  Initially, this includes developing market & brand awareness; building local manufacturing capacity; coordinating partner support; and growing our customer base.

2. With more than 25 years of experience in data center sales and operations, what opportunities do you see in the North American market for Submer’s datacenter solutions?

At the risk of stating the obvious, the datacenter industry continues to grow and evolve at a breakneck pace.  This unabated growth has attracted many new entrants and new ideas about how to provide technology services more effectively.

Submer is one of these trailblazers and we believe our innovations will be the catalyst to completely re-define how datacenters are designed and operated.  In fact, re-thinking the datacenter will unlock new innovations for OEMs and software developers too.  And all of that will revolutionize the economics of providing digital services around the globe. Pretty heady stuff!

3. Tell us about Submer’s webinar on HPC Immersion cooling – first in a series of webinars about HPC.

Our first priority is to help educate the market about the capabilities and benefits of using liquid as the primary method for managing heat in the datacenter.  One way we are doing this, at the macro level, is to host a monthly webinar to share knowledge and real-world applications of this technology.  It’s convenient, low-cost and quickly addresses our audience around the globe.

We chose to focus on the High-Performance Computing (HPC) community first since it adopted liquid cooling strategies long ago and there is a great deal of empirical, real-world evidence of what can be accomplished.  We were fortunate to have Daniele Rispoli from ClusterVision, a respected expert in HPC and supercomputing, join us to share his insights.

4. That’s a great initiative. If we’re correct, you will soon be hosting a second webinar, on Hardware Design Considerations for Immersion Cooling, on 5th March. Could you tell our readers about it?

Yes, our next webinar is right around the corner and, like the first, will feature an industry expert sharing insights on what server manufacturers are doing to prepare their equipment to exploit the benefits of full immersion cooling solutions like ours.  In this case, Alain Wilmouth, the CEO of 2CRSI, will join Submer’s CEO, Daniel Pope, on March 5th.  Your readers can register here.

5. What’s your take on immersion cooling vs air cooling, and how do you see the market demand?

Well, there is a great deal to discuss about the differences between air and liquid cooling systems and why organizations are motivated to consider making a change.  If I had to distill it down, I’d highlight five key things:

  • Efficiency: Liquid is more than 1,000 times more efficient than air at transferring heat.
  • Proven: Liquid cooling solutions have been perfected, deployed around the world and are reliable.
  • Cost Effective: Liquid cooling solutions are less expensive to build and operate than air-based systems.
  • Workforce Safety: Liquid cooling solutions significantly improve the workplace environment compared to air-based systems.
  • Green: Liquid cooling solutions conserve much more of our planet’s critical resources than air-based systems.

In terms of demand, there is no question that it’s been building rapidly in the last few years and becoming more ubiquitous across all IT segments.  We are seeing broad interest across private and public industries, geographies and workload capacities.
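
To put the “more than 1,000 times” efficiency figure above in rough perspective, here is a back-of-the-envelope comparison of how much heat a cubic metre of air versus a cubic metre of a generic dielectric immersion fluid can absorb per degree of temperature rise. The property values are approximate textbook figures, and the fluid is a hypothetical stand-in for illustration, not Submer’s own coolant.

    # Rough comparison of volumetric heat capacity (joules absorbed per cubic metre per kelvin).
    # Property values are approximate; the dielectric fluid is a generic single-phase
    # immersion coolant used for illustration only.

    air = {"density_kg_m3": 1.2, "specific_heat_J_kgK": 1005}          # dry air at ~20 C
    dielectric = {"density_kg_m3": 800, "specific_heat_J_kgK": 2100}   # generic immersion fluid

    def volumetric_heat_capacity(fluid):
        """Joules absorbed per cubic metre of fluid per kelvin of temperature rise."""
        return fluid["density_kg_m3"] * fluid["specific_heat_J_kgK"]

    ratio = volumetric_heat_capacity(dielectric) / volumetric_heat_capacity(air)
    print(f"Dielectric fluid absorbs roughly {ratio:,.0f}x more heat per unit volume than air")
    # ~1,400x with these assumptions, the same order of magnitude as the figure quoted above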

6. Why is immersion cooling for HPC, hyperscalers and datacenters important for a greener future of data centers?

As a whole, datacenters currently consume 6% of the electricity generated worldwide, and that’s projected to rise to as much as 20% in a few years; within the datacenter space, hyperscalers and HPC are by far the largest class of users.  And, despite a great deal of progress in the last decade, most electricity is still produced from non-renewable sources.

So, the datacenter industry as a whole can have the greatest impact on its carbon profile by targeting the largest users first; however, the environmental benefits of liquid immersion cooling apply to users of all sizes.
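
As a rough illustration of why targeting cooling overhead matters, the sketch below compares a hypothetical air-cooled facility with an immersion-cooled one using assumed PUE (power usage effectiveness) values. The numbers are illustrative assumptions for this article, not figures provided by Submer.

    # Back-of-the-envelope: how much cooling/overhead energy an immersion deployment
    # could remove from a facility. The PUE values below are assumptions for illustration.

    it_load_mw = 1.0         # hypothetical facility with 1 MW of IT load
    pue_air = 1.6            # assumed legacy air-cooled facility
    pue_immersion = 1.05     # assumed single-phase immersion facility

    overhead_air = it_load_mw * (pue_air - 1)               # 0.60 MW of cooling/overhead
    overhead_immersion = it_load_mw * (pue_immersion - 1)   # 0.05 MW of cooling/overhead

    print(f"Cooling/overhead energy cut: {1 - overhead_immersion / overhead_air:.0%}")  # ~92%
    print(f"Total facility energy cut:   {1 - pue_immersion / pue_air:.0%}")            # ~34%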

7. What are the building blocks for the NextGen Data Center? Please comment in light of Submer’s cutting-edge cleantech solution.

The next generation of datacenters will be orders of magnitude more efficient.  All of the critical resources in play within a datacenter (land, electricity, water, money) become better utilized by switching to a liquid immersion cooling solution.  Denser computing deployments mean we can do more with less.  I also expect datacenter operations to become more automated through the use of robotic technologies, which is another area Submer is developing.

8. You have held various leadership roles so far. What factors drive/can drive growth in technology sales organizations amidst the dynamic technology landscape?

In many ways, as dynamic as the high-tech industry is, the key to sales success has really stayed the same in my opinion.  Listen, be responsive and create value for your customer.  It’s easier said than done, of course, but leading with these three ingredients has never failed to produce a high-performing, feared and revered sales team for me.  It helps to have a great product too!

Also read: CERN to use Submer’s SmartPod solution for datacenter cooling and high-density computing

9. So, in your leadership, what new developments can we look forward to at Submer?

Considering that 50% of the global datacenter capacity is centered in the US along with most of the hyperscalers, I expect we will quickly ramp up our profile and organization here to meet the burgeoning demand.  It will surely be a team effort, but, personally, I hope to be central to building a truly great organization that is, in fact, revered in our market and a highly sought-after place to work.  Perhaps, if I’m lucky, I can one day claim a legitimate contribution to making the world a better place for all its inhabitants.

10. Besides work, what are you passionate about?

At work, I put a lot of emphasis on work-life balance with the people I can influence.  I also practice what I preach.  So, for me, being present and involved in the lives of my family and friends is at the top of the list.  More specifically, I try to be outdoors and active whenever possible.  If I’m not thinking about cooling the datacenter, you’ll find me mountain biking, golfing or zipping across the wake on an old-school slalom ski.


Dell EMC’s new Machine and Deep Learning solutions to bring power of HPC and data analytics to enterprises

Dell EMC announced new machine learning and deep learning solutions at the Supercomputing 2017 conference, bringing the power of HPC (high-performance computing) and data analytics to mainstream enterprises.

The new solutions combine HPC and data analytics to empower enterprises with opportunities in image processing, fraud detection, financial investment analysis and other similar areas through ready bundles for easier deployment.

Artificial intelligence techniques like machine and deep learning are increasingly deployed by enterprises, but not many have the skill set and expertise required to manage such systems effectively. Here, Dell EMC’s new solutions, built around its expertise in HPC and data analytics, offer customers the ability to extract maximum insight from their collected data for faster and better performance.

“Our customers consistently tell us that one of their biggest challenges is how to best manage and learn from the ever-increasing amount of data they collect daily. With Dell EMC’s high-performance computing experience, we’ve seen how our artificial intelligence solutions can deliver critical insights from this data, faster than ever before possible. Working with our strategic technology partners, we’re able to bring these powerful capabilities to all enterprises. When you think about what this means for industries like financial services or personalized medicine, the possibilities are endless and exciting,” said Armughan Ahmad, senior vice president and general manager, Hybrid Cloud and Ready Solutions, Dell EMC.

The new solutions bring together optimized, pre-tested and validated servers, storage and networking for machine and deep learning applications. Simplified identification, analysis and automation of data patterns will help customers use data insights in a variety of applications, from facial recognition in security to studying human behavior in the retail industry.

Customers will also benefit from the introduction of the new Dell EMC PowerEdge C4140 server, which supports NVIDIA’s latest-generation technology.

The Dell EMC Ready Bundles will be available in the first half of 2018 via Dell EMC and its channel partners, while the Dell EMC PowerEdge C4140 will be available by December 2017.

With AI going mainstream, technology vendors like IBM, HPE and Dell EMC are making their services HPC- and AI-capable to help developers and enterprises deploy HPC applications. While HPE has been a leading name in HPC and supercomputing for years, other vendors are also increasing their efforts in this realm. These efforts include IBM integrating its PowerAI deep learning enterprise software with its Data Science Experience.


HPE’s new high-density compute and storage solutions to help businesses adopt HPC and AI applications

Hewlett Packard Enterprise (HPE) recently announced new high-density compute and storage solutions that will help enterprises utilize the power of high performance computing (HPC) and artificial intelligence (AI) applications.

The new solutions are a part of HPE’s Apollo compute platform.

There’s no doubt that HPC and AI have moved from the fringes to the mainstream and now find usage in financial trading, engineering, computer-aided design, text analytics and video surveillance, helping drive new revenue streams and more efficient operations. But these areas need parallel processing and storage capabilities to perform well. The HPE Apollo portfolio comes as a solution to these needs.

“HPC and AI applications are not exclusive to big research organizations and corporations; they can drive efficiency and innovation in the day-to-day business of every company,” said Bill Mannel, Vice President and General Manager, HPC and AI Segment Solutions, Hewlett Packard Enterprise.

Today, HPE is augmenting its proven supercomputing and large commercial HPC and AI capabilities with new high-density compute and storage solutions to accelerate market adoption by enabling organizations of all sizes to address challenges in HPC, big data, object storage and AI with more choice and flexibility.

The purpose-built new and enhanced offerings include:

HPE Apollo 2000 Gen10 System

The multi-server platform, with next-generation capabilities and plug-and-play system configuration, is ideal for customers who have limited data center space and want to support enterprise HPC and deep learning applications. With support for NVIDIA Tesla V100 GPU accelerators, it enables deep learning training across various types of environments.

HPE Apollo 70

With the Apollo 70, HPC customers get more choice and flexibility, as it provides easy access to HPC technology while supporting standard HPE provisioning, performance software and cluster management. It is ideal for memory-intensive workloads, with the capacity to deliver up to 33 percent more memory bandwidth.

HPE Apollo 4510 Gen10 System

It is an ideal platform for customers who want to optimize the retention and placement of large amounts of data. It is designed for object storage and thus delivers one of the highest storage capacities. It also delivers higher performance, with 16 percent more cores than its previous generation.

HPE LTO-8 Tape

Tapes are re-emerging as an added layer of protection against cybercrime and ransomware attacks, lowering the risk of complete data loss. HPE’s latest data retention solution allows customers to offload primary storage content to tape to reduce the cost associated with storing data over time. HPE LTO-8 tape provides 2X the capacity of the previous-generation LTO-7 data cartridge in the same data center footprint.

HPE displayed its HPC and AI solutions at the Supercomputing 2017 conference, where it also picked up awards for best HPC server and top supercomputing achievement. It also showcased its Superdome Flex for in-memory HPC.

The HPE Apollo 2000 and HPE Apollo 4510 are available now, while the HPE LTO-8 Tape will be available in December 2017, followed by the HPE Apollo 70 in 2018.


Microsoft brings Cray supercomputers to Azure

Microsoft has announced an exclusive strategic alliance with global supercomputer leader Cray, to provide dedicated Cray supercomputing systems (Cray XC and Cray CS) in its Azure datacenters. It will enable customers to run AI, HPC, advanced analytics, and modeling and simulation workloads seamlessly on Azure cloud.

Cray’s Aries interconnect and its tightly coupled system architecture address the ever-increasing demands of modern enterprises for real-time insights, compute capability, and scalable performance. With the new partnership, cloud customers can now harness the power of supercomputing and artificial intelligence in an agile and cost-effective way.

The Cray systems will integrate with Azure VMs, Azure Data Lake Storage, Azure Machine Learning Services, and the Microsoft AI platform to offer better workflows, collaboration, performance, scalability, and elasticity to customers.

“Using the enterprise-proven power of Microsoft Azure, customers are running their most strategic workloads in our cloud,” said Jason Zander, corporate vice president, Microsoft Azure, Microsoft Corp. “By working with Cray to provide dedicated supercomputers in Azure, we are offering customers uncompromising performance and scalability that enables a host of new previously unimaginable scenarios in the public cloud. More importantly, we’re moving customers into a new era by empowering them to use HPC and AI to drive breakthrough advances in science, engineering and health.”

With Cray supercomputers in Azure, researchers, scientists, and analysts will be able to train AI deep learning models in fields such as medicine and autonomous vehicles. Pharmaceutical and biotech scientists can now perform whole-genome sequencing, reducing the time from computation to cure.

Geophysicists can speed up oilfield analysis and reduce exploration risk through better seismic imaging fidelity. Aerospace and automotive engineers can run crash simulations and computational fluid dynamics simulations, and create digital twins for faster maintenance and product development. Tasks that used to take weeks or months can now be done within minutes or days.

“Our partnership with Microsoft will introduce Cray supercomputers to a whole new class of customers that need the most advanced computing resources to expand their problem-solving capabilities, but want this new capability available to them in the cloud,” said Peter Ungaro, president and CEO of Cray. “Dedicated Cray supercomputers in Azure not only give customers all of the breadth of features and services from the leader in enterprise cloud, but also the advantages of running a wide array of workloads on a true supercomputer, the ability to scale applications to unprecedented levels, and the performance and capabilities previously only found in the largest on-premise supercomputing centers. The Cray and Microsoft partnership is expanding the accessibility of Cray supercomputers and gives customers the cloud-based supercomputing capabilities they need to increase their competitive advantage.”

Supercomputing capabilities in the cloud can transform businesses and drive innovation in the coming years. Microsoft has been investing in this field for the last several years, and acquired Cycle Computing to enable better hybrid HPC deployments.

Also read: Azure advancements remove cloud adoption barriers, going hybrid made easier

The Cray XC and Cray CS supercomputers, with attached Cray ClusterStor storage systems, will be directly connected to the Microsoft Azure network and will be available for customer-specific provisioning in Microsoft Azure datacenters. Customers can also leverage the Cray Urika-XC analytics software suite and CycleCloud for hybrid HPC management.