Categories
Datacenter News

Nvidia prohibits datacenter deployment of GeForce GPUs 

Nvidia recently changed the licensing agreement for its GeForce software, prohibiting users from deploying GeForce and Titan GPUs in data centers. Unsurprisingly, users aren’t happy about it.

Graphics processing units (GPUs), a common choice among artificial intelligence researchers, helped Nvidia’s stock price surge 85% in 2017.

Customers are unhappy with the changes to the End-User License Agreement (EULA) because it no longer allows them to deploy GeForce- and Titan-based graphics cards in data centers run by service providers such as Amazon Web Services and Microsoft Azure.

Here is the relevant clause from the EULA: “No Datacenter Deployment. The SOFTWARE is not licensed for datacenter deployment, except that blockchain processing in a datacenter is permitted.”

The change forces users to buy expensive Tesla GPUs for data center use instead of lower-cost alternatives. The new Tesla V100 costs around $8,000, while the Titan V starts at just $3,000.

In its defense, Nvidia said it changed the EULA to prevent potential misuse of its GeForce and Titan GPUs, which were not built for demanding, large-scale enterprise environments.

“NVIDIA addresses the unique mechanical, physical, management, functional, reliability, and availability needs of servers with our Tesla products, which include a three-year warranty covering data center workloads, NVIDIA enterprise support, guaranteed continuity of supply and extended SKU life expectancy for data center components. This has been communicated to the market since the Tesla products were first released,” said Nvidia in a statement to CNBC.

Also read: Nvidia is ending driver support for 32-bit operating systems 

The EULA change affects many leading companies using GeForce, but Nvidia appears unmoved and seems unlikely to change its policy anytime soon.

 

Categories
News

Microsoft’s new services to simplify migration of VMware workload to cloud

Microsoft recently announced new services to help enterprises move existing on-premises VMware workloads to Azure: Azure Migrate and VMware virtualization on Azure.

The new services have been designed to help developers at every stage of VMware migration to Azure.

Microsoft’s intelligent cloud gives enterprises access to supercomputing power on a pay-as-you-go basis, which makes it popular among both small and large enterprises. VMware, on the other hand, is a leader in virtualization.

Azure Migrate is touted as a streamlined way to move VMware workloads to the Azure cloud. The free service launches on November 27 and will be available to all Azure customers.

Azure Migrate will support multi-server application migration, unlike other cloud vendors’ single-server migration tools. It works in three stages: discovery and assessment; migration; and resource and cost optimization. The service will discover and analyze all VMware workloads running in private datacenters, then provide a way to migrate them to the public cloud using the Azure Site Recovery service.

Microsoft also allows integration of VMware workloads with Azure services, meaning customers can use Azure services with VMware workloads without any migration or deployment, enabling data management and security across cloud and on-premises environments. These services include Azure Backup, Update/Configuration Management, Azure Site Recovery, and Azure Security Center.

While Azure Migrate will support migration of on-premises VMware workloads to cloud, VMware virtualization on Azure is a bare metal solution that will have the capacity to run the full VMware stack on Azure hardware. Microsoft is offering this service in collaboration with VMware-certified partners and expects to make it available in the coming year.

The new services came a week before AWS’s annual re:Invent conference in Las Vegas. By adding them, Microsoft gains another edge in its cloud service portfolio. It is also a strategic move that boosts Microsoft’s hybrid cloud services, as Azure Migrate gives customers the choice to move some, but not necessarily all, of their data to the cloud.

Categories
Cloud Datacenter News

Microsoft brings Cray supercomputers to Azure

Microsoft has announced an exclusive strategic alliance with global supercomputer leader Cray, to provide dedicated Cray supercomputing systems (Cray XC and Cray CS) in its Azure datacenters. It will enable customers to run AI, HPC, advanced analytics, and modeling and simulation workloads seamlessly on Azure cloud.

Cray’s Aries interconnect and its tightly coupled system architecture address modern enterprises’ ever-increasing demands for real-time insights, compute capability, and scalable performance. With the new partnership, cloud customers can now harness the power of supercomputing and artificial intelligence in an agile, cost-effective way.

The Cray systems will integrate with Azure VMs, Azure Data Lake Storage, Azure Machine Learning Services, as well as Microsoft AI platform to offer better workflows, collaboration, performance, scalability, and elasticity to customers.

“Using the enterprise-proven power of Microsoft Azure, customers are running their most strategic workloads in our cloud,” said Jason Zander, corporate vice president, Microsoft Azure, Microsoft Corp. “By working with Cray to provide dedicated supercomputers in Azure, we are offering customers uncompromising performance and scalability that enables a host of new previously unimaginable scenarios in the public cloud. More importantly, we’re moving customers into a new era by empowering them to use HPC and AI to drive breakthrough advances in science, engineering and health.”

With Cray supercomputers in Azure, researchers, scientists, and analysts will be able to train deep learning models in fields such as medicine and autonomous vehicles. Pharmaceutical and biotech scientists can perform whole-genome sequencing, reducing the time from computation to cure.

Geophysicists can speed up oil field analysis, with exploration risks reduced through better seismic imaging fidelity. Aerospace and automotive engineers can run crash simulations and computational fluid dynamics simulations, and create digital twins for faster maintenance and product development. Tasks that used to take weeks or months can now be done in minutes or days.

“Our partnership with Microsoft will introduce Cray supercomputers to a whole new class of customers that need the most advanced computing resources to expand their problem-solving capabilities, but want this new capability available to them in the cloud,” said Peter Ungaro, president and CEO of Cray. “Dedicated Cray supercomputers in Azure not only give customers all of the breadth of features and services from the leader in enterprise cloud, but also the advantages of running a wide array of workloads on a true supercomputer, the ability to scale applications to unprecedented levels, and the performance and capabilities previously only found in the largest on-premise supercomputing centers. The Cray and Microsoft partnership is expanding the accessibility of Cray supercomputers and gives customers the cloud-based supercomputing capabilities they need to increase their competitive advantage.”

Supercomputing capabilities in the cloud can transform businesses and drive innovation in the coming years. Microsoft has been investing in this field continuously for the last several years, and acquired Cycle Computing to improve hybrid HPC deployments.

Also read: Azure advancements remove cloud adoption barriers, going hybrid made easier

The Cray XC and Cray CS supercomputers, with attached Cray ClusterStor storage systems, will be directly connected to the Microsoft Azure network and available for customer-specific provisioning in Microsoft Azure datacenters. Customers can also leverage the Cray Urika-XC analytics software suite and CycleCloud for hybrid HPC management.

Categories
Cloud News Datacenter News

Riverbed previews its new Azure-Ready Edge platform at Microsoft Ignite 2017

At Microsoft Ignite 2017, Riverbed Technology, Inc. previewed Riverbed SteelFusion Azure-Ready Edge – a technology that will extend Microsoft Azure cloud storage to remote network edges.

Riverbed Technology works towards making websites, applications, networks, cloud data centers and remote offices better through hybrid networking, cloud SD-WAN, SaaS, big data and mobile technologies.

With this, the company plans to extend the flexibility and benefits of the Azure cloud to users at remote sites and network endpoints.

SteelFusion will use Microsoft Hyper-V as its virtualization platform to give remote locations direct access to the Azure cloud, which will serve as the primary storage tier. This enables easy provisioning, protection, and recovery of data from Azure.

Paul O’Farrell, Senior Vice President and General Manager of SteelHead, SteelFusion and SteelConnect at Riverbed said, “Over 1,200 enterprises have chosen SteelFusion as an edge IT platform, dramatically streamlining operations for remote business locations. We value our long-standing relationship with Microsoft, and with the SteelFusion Azure-Ready Edge, we will provide joint customers with even more options for managing edge IT.”

He further added, “With this solution, organizations will be able to instantly realize the benefits of using Azure cloud storage wherever they do business.”

Riverbed SteelFusion, per the company, is the first and only software-defined edge (SD-Edge) solution capable of delivering local performance along with data convergence and a lower total cost of ownership (TCO) for distributed organizations.

It delivers a modern, cloud-like experience to organizations that must manage complex remote office/branch office (ROBO) locations, allowing remote storage, networking, and backup infrastructure to be converged into a single appliance with response times similar to local storage.

Tad Brockway, Partner Director of Program Management at Microsoft, said that Microsoft Azure has always helped customers and enterprises implementing a cloud-first strategy achieve the efficiency, scalability, and cost-savings benefits essential to keeping pace in a digitally evolving world.

He further added, “With Riverbed SteelFusion, we will be extending the same benefits of the Azure cloud to the edge of the network, which remains a critical component to the ongoing success of the business in today’s distributed IT landscape.”

The SteelFusion Azure-Ready Edge will be especially useful for organizations managing complex hybrid applications and services in ROBO locations, providing a software-defined platform that can centralize remote data and other operational processes in a cloud or hybrid cloud environment.

Riverbed is previewing the product at the Microsoft Ignite conference, currently underway in Orlando, Florida.

Categories
News

Microsoft Azure’s confidential computing to offer first of its kind data security capabilities

Microsoft has been spending over one billion dollars per year on cybersecurity to make Azure the most trusted cloud platform. Taking Azure’s data security a step further, Microsoft has now introduced Azure confidential computing, a collection of services and features that offers a protection missing from public clouds: encryption of data while it is in use.

Azure confidential computing will allow applications running on Azure to keep data encrypted not only at rest and in transit, but also while it is being computed on in memory. It will keep data secure even from Microsoft’s administrators, hackers, and government warrants.

The data is protected inside a Trusted Execution Environment (TEE), also called an enclave. No data or operations inside the enclave can be viewed from outside, even with a debugger. Only authorized code is allowed to access the data; if the code is altered or tampered with, operations are denied and the environment is disabled.

Developers will be able to use different TEEs without having to change their code. Initially, confidential computing supports two TEEs: Virtual Secure Mode (VSM) and Intel SGX. VSM, a software-based TEE, is implemented by Hyper-V in Windows 10 and Windows Server 2016.
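The gap that TEEs close can be illustrated with a deliberately simplified sketch (a toy XOR cipher stands in for real encryption; none of this is Azure’s actual mechanism): outside an enclave, computing on encrypted data requires decrypting it into ordinary process memory first.

```python
# Toy illustration (NOT real cryptography) of why "encryption in use" matters:
# data can be encrypted at rest and in transit, but a conventional program
# must decrypt it into plain process memory before it can compute on it.
import os

def xor_cipher(data: bytes, key: bytes) -> bytes:
    """Toy XOR cipher standing in for real encryption (e.g. AES)."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

key = os.urandom(16)
record = b"salary=95000"

at_rest = xor_cipher(record, key)   # what sits on disk / travels the wire
assert at_rest != record            # unreadable without the key

# To use the value, we decrypt it first -- plaintext now lives in process
# memory, visible to an administrator, a debugger, or a memory dump.
in_use = xor_cipher(at_rest, key)
salary = int(in_use.split(b"=")[1])
print(salary)  # 95000
```

A TEE closes exactly this exposure window: the decrypted bytes exist only inside the enclave, where even administrative accounts cannot read them.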

“While many breaches are the result of poorly configured access control, most can be traced to data that is accessed while in use, either through administrative accounts or by leveraging compromised keys to access encrypted data,” wrote Mark Russinovich, Azure CTO.

The Azure team worked on this cloud solution for over four years, leveraging Intel SGX technology in collaboration with Microsoft Research, Intel, the Windows team, and the Developer Tools group.

Also read: Microsoft announces preview of burstable VM series

Microsoft will also be extending its in-house enterprise blockchain tools to provide additional security for SQL Server and SQL Database instances in Azure.

Azure confidential computing runs in Microsoft datacenters in over 40 regions, and is now available in private preview as part of a special early access program.

Categories
Cloud News News

Microsoft announces preview of burstable VM series

Microsoft has announced the public preview of its new Azure VM family – the B-series, which will offer cost efficiency and burstability to workloads that are running in Azure.

These VMs are best suited for workloads that do not need continuous CPU performance, such as web servers, small databases, and development/test environments.

The B-series VMs are designed to run economically during periods of light load and burst to full capacity when workloads increase.

The concept is quite similar to what AWS and Google offer through T2 instances and f1-micro/g1-small instances, respectively.

During periods of low demand, B-series VMs run below the full capacity of the CPU while still being billed for the full CPU; the unused capacity accrues as credits. Once a VM accumulates enough credits, it can burst above its baseline, up to 100% of the CPU, when an application requires high performance.

Thus, these VM sizes give end users cost flexibility, letting them balance spending between light and heavy workloads.
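The credit mechanism described above can be sketched in a few lines. The baseline percentage and credit rates below are made-up round numbers for illustration, not Microsoft’s actual B-series figures.

```python
# Illustrative simulation of a burstable-VM credit bank: unused baseline
# capacity is banked as credits, which are later spent to burst above
# the baseline (up to 100% CPU).
def simulate(baseline_pct, demand_pct_per_hour):
    credits = 0.0
    delivered = []
    for demand in demand_pct_per_hour:
        if demand <= baseline_pct:
            credits += baseline_pct - demand   # unused baseline is banked
            delivered.append(demand)
        else:
            burst = demand - baseline_pct
            spend = min(burst, credits)        # can only burst with credits
            credits -= spend
            delivered.append(baseline_pct + spend)
    return delivered, credits

# A 20%-baseline VM: quiet for 4 hours, then a spike demanding 100% CPU.
delivered, remaining = simulate(20, [5, 5, 5, 5, 100])
print(delivered)  # [5, 5, 5, 5, 80.0] -- banked credits fund the burst
```

Four quiet hours bank 60 credit points, so the spike hour can be served at 80% rather than being capped at the 20% baseline.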

The company has introduced six B-series VM sizes during the preview:

Source: Microsoft

The sizes range from a single-core VM with 1 GiB of memory at $0.012 per hour (Linux) to an eight-core VM with 32 GiB of memory at $0.376 per hour (Linux). Windows prices are somewhat higher.
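A quick back-of-the-envelope calculation puts those hourly rates in monthly terms (using the common 730-hours-per-month approximation; the size names here are shorthand, not official SKU names):

```python
# Approximate monthly cost at the quoted Linux preview rates.
HOURS_PER_MONTH = 730  # common approximation for an average month

linux_hourly = {
    "1 core, 1 GiB":   0.012,
    "8 cores, 32 GiB": 0.376,
}

for size, rate in linux_hourly.items():
    print(f"{size}: ${rate * HOURS_PER_MONTH:.2f}/month")
# 1 core, 1 GiB: $8.76/month
# 8 cores, 32 GiB: $274.48/month
```

So the smallest B-series VM works out to under $9 per month on Linux, which is where the cost-efficiency pitch comes from.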

Microsoft is previewing the VMs in the US West 2, US East, West Europe, and Southeast Asia regions, and plans to add more regions later this year.

Developers who want to participate in the preview need to submit a quota request in the supported regions.

Categories
News

Microsoft and Red Hat unite to help enterprises adopt containers easily for enhanced cloud experience

Microsoft and Red Hat recently announced a partnership centered on Red Hat’s OpenShift container platform supporting Windows Server containers – Microsoft’s technology for running Windows applications in containers.

This includes Red Hat OpenShift Dedicated on Microsoft Azure, Windows Server containers on Red Hat OpenShift and SQL Server on Red Hat Enterprise Linux and Red Hat OpenShift.

With the move, the companies’ joint focus will be to help enterprises gain agility and drive digital transformation with hybrid cloud. It will enable more enterprises to lift their Windows Server workloads into containers and modernize their development workflows.

“Alongside Microsoft, Red Hat is providing a way for organizations to make technology choices that matter to them, from containerized workloads to public cloud services, without adding complexity. Combined with our integrated support teams, we’re able to offer a pathway to the digital transformation that offers the capabilities, flexibility, and choice required to power the future of enterprise IT.” – Matthew Hicks, vice president, Software Engineering, OpenShift and Management, Red Hat

The companies also plan to provide enterprise customers with co-located integrated joint support to help them meet all challenges with ease.

Also read: Microsoft strengthens its position in the evolving container space with ACI Service

OpenShift, Red Hat’s container platform built on Kubernetes, will support both Linux and Windows container workloads on one platform across hybrid cloud environments, making it simpler for organizations to modernize and scale operations on a single infrastructure stack.
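Since OpenShift schedules workloads through Kubernetes, mixed Linux/Windows clusters are typically handled with node selectors. As a rough illustration (the pod name and image are placeholders, and the exact OS node label has varied across Kubernetes versions), a pod pinned to Windows nodes might look like:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: win-iis                     # illustrative name
spec:
  nodeSelector:
    kubernetes.io/os: windows       # route the pod to a Windows node
  containers:
  - name: iis
    image: microsoft/iis            # a Windows Server container image
```

Linux workloads need no such selector by default, which is what lets both OS families share one cluster and one management plane.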

Also, Red Hat OpenShift Dedicated on Microsoft Azure will help enterprise IT teams to deliver business value and foster innovation instead of micro-management of resources.

“Microsoft and Red Hat are aligned in our commitment to bring enterprise customers the hybrid cloud solutions they need to modernize their businesses as they shift to operate in a cloud-native world. Today, we’re extending this commitment as we again join forces to bring fully integrated solutions that simplify container adoption and help customers make the most of their hybrid cloud strategies.” – John Gossman, Lead Azure Architect, Microsoft Corp.

Red Hat OpenShift Dedicated on Microsoft Azure is expected to reach general availability in May 2018.

Categories
News

Despite revenue decline, Cisco fights back with new acquisition and plans for product extensions

Cisco Systems Inc. recently announced its intent to buy Springpath Inc., a leading hyperconvergence software provider, for $320 million in cash.

The decision came a few days after the computer networking giant reported declining revenue in its fourth quarter earnings.

Total revenue was down 4%, at $12.41 billion.

Product revenue was down considerably (5%) in the fourth quarter, while services revenue rose 1%. Security and wireless offerings recorded upsurges of 3% and 5% respectively, but other revenue sources declined, including switching and routing (down 9% year-over-year).

Analysts reportedly don’t expect much improvement in the coming quarters, but believe the company can perform well in the long run, thanks to its increased focus on acquiring software and subscription businesses.

The intention to buy Springpath can be seen as one such effort. Cisco’s HyperFlex HCI (hyper-converged infrastructure) line is built on Springpath’s software, which Cisco sells under an OEM (original equipment manufacturer) arrangement. The two companies have thus worked closely since Springpath was founded in 2012, and many customers and channel partners expected them to merge eventually.

Rob Salvagno, Vice President, Corporate Business Development, Cisco, said, “This acquisition is a meaningful addition to our data center portfolio and aligns with our overall transition to providing more software-centric solutions.”

He further added, “Springpath’s file system technology was built specifically for hyperconvergence, which we believe will deliver sustainable differentiation in this fast-growing segment. I’m excited to be able to provide our customers and partners with the simplicity and agility they need in data center innovation.”

The company also announced plans to extend ACI (Application Centric Infrastructure) into the public cloud segment.

In a company blog post, Cisco said customers will now be able to run applications across both their private and public clouds. The service will soon be available on Microsoft Azure, Amazon Web Services, and Google Cloud Platform.

With this, the company aims to offer maximum flexibility to its customers. Currently, ACI supports multiple hypervisors, Linux containers, and bare metal servers. Cisco has also co-engineered with over 65 data center ecosystem partners whose products run with ACI, which has helped make ACI the most flexible and widely deployed software-defined networking (SDN) solution.

Extending the same capabilities to the public cloud should bring customers even more benefits.

With such developments and efforts to revamp its product line, it will be interesting to see whether Cisco can bounce back to revenue growth.

Stay tuned for updates!

Categories
News

Microsoft strengthens its position in the evolving container space with ACI Service

Microsoft is swiftly but strategically contributing to open-source projects to strengthen its position in the cloud market. Recently, the cloud juggernaut announced two game-changing decisions: introducing Azure Container Instances (ACI) to drive innovation in the container space, and joining the Cloud Native Computing Foundation (CNCF), hosted by The Linux Foundation, as a platinum member – part of Microsoft’s continued engagement with the open-source community at large.

The new Azure service will let users instantly spin up containers, allowing them to build applications quickly without managing any virtual machine infrastructure. ACI is a uniquely easy and fast cloud service: containers start within seconds, and usage is billed by the second.

Versatile sizing capabilities let user applications fit the infrastructure precisely. Users will also be able to easily track individual containers, with role-based access and billing tags.

Container Instances for Linux are available in public preview; support for Windows containers will follow in the coming weeks. Instances can be deployed either from a template or from the Azure Command Line Interface (CLI).

Users can also deploy from a public repository such as Docker Hub, or pull from a private repository via Azure Container Registry. Deployed containers are isolated from one another through virtualization techniques.
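As a sketch of the CLI route, a deployment from a public Docker Hub image might look like the following (the resource group, names, and region are placeholders, and exact flags can vary across Azure CLI versions):

```shell
# Create a resource group, then run a public Docker Hub image in ACI.
az group create --name myResourceGroup --location westus

az container create \
  --resource-group myResourceGroup \
  --name mycontainer \
  --image nginx \
  --ip-address Public \
  --ports 80

# Check provisioning state and the assigned public IP.
az container show --resource-group myResourceGroup --name mycontainer
```

Because there is no VM to provision, the container is typically reachable within seconds of the create call returning.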

Microsoft’s other move – joining the foundation behind the open-source Kubernetes container orchestration project – will further strengthen its Azure containerization platform.

Also Read: Microsoft’s hiked quarterly earnings confirm its booming cloud business

Kubernetes was originally developed at Google and was donated to the CNCF as an open-source project in 2015. It now stands as a major technology that helps developers run containerized applications anywhere.

Microsoft also noted in a blog post that ACI is not an orchestration product, but that it will work with orchestrators, which control container deployments. The company is also launching an ACI connector for Kubernetes to link the two services.

Credit: Microsoft

By associating with Kubernetes, Microsoft aims to support a key technology trusted by many customers, helping them build what they want.

Microsoft is trying to keep Azure aligned with the latest technology trends and user demands, and this announcement is another step in that direction.

Categories
News

Microsoft Inspire 2017 wraps up, here’s what you missed

Microsoft’s biggest partner event – Inspire 2017, came to an end yesterday.

The event was a great success, with over 17,000 attendees, some major product announcements, and partner-led digital transformation goals.

The day 1 keynote session focused on empowering organizations through new and advanced technology products like Microsoft 365.

Microsoft’s corporate VP, One Commercial Partner – Ron Huddleston, emphasized the immense opportunity available to the partners through the new partner program.

He said that the new One Commercial Partner Program will work with the partners in three important areas:

  • Build With – Microsoft partner development experts will work with partners who need help building their own IP, practice, or other capabilities.
  • Go-to-Market – The partner experts’ team will help partners bring their products to market through offers.
  • Sell With – Channel managers will work hand in hand with the Microsoft sales team to ensure that the right partner solution reaches the right customer at the right time.

While Day 1 keynote focused on new products, programs and partner opportunities, the Day 2 keynote covered the implementation part.

Judson Althoff, Executive VP, Worldwide Commercial Business, talked about the four important pillars of digital transformation at Microsoft – Engage Customers, Empower Employees, Optimize Operations and Transform Products. 

By providing some real-world examples, he explained how partners can play an important role in empowering end customers to ride the digital wave.

The examples, which covered a wide range of industries from healthcare to retail, manufacturing, and mining, helped partners see how to understand customer needs more quickly and act on them even faster.

He further added that, to strengthen the bond between Microsoft and its partners, the company will focus on two important areas of selling solutions: Azure Co-sell, which will assist partners who build solutions with Microsoft Azure; and channel managers, who will help partners build and sell powerful solutions within Azure.

The final vision keynote on Day 3 focused on the company’s new efforts around privacy, the sales team reorganization, and the overall policies it will use to make an impact on the world.

It included announcements around compliance with the GDPR, which takes effect next year, and Microsoft’s readiness to help partners comply with the new rules.

Microsoft also discussed its AI for Earth program, originally announced at an AI event in London. Under the program – a $2 million commitment in the next fiscal year – it will provide access to cloud and AI computing resources, lighthouse projects, and technology training.

Overall, the event brought new inspiration to partners and strengthened their belief that Microsoft is a partner-led company that puts its partners at the forefront of digital transformation.
