
Intel acquires Barefoot Networks to boost data center capabilities

Intel is acquiring Barefoot Networks, an emerging leader in Ethernet switch silicon and software, to strengthen its data center capabilities. The chip maker aims to meet the performance needs and ever-changing demands of the hyperscale cloud.

“We’ve discussed previously the amazing fact that over half of the world’s data was generated in the past two years and only 2% of that data has been analyzed. Driven by that reality, we’re always asking ourselves how we can better enable our customers to harness the potential of this data, by moving, storing and processing it with the speed and efficiency that they demand,” wrote Navin Shenoy, executive vice president and general manager of the Data Center Group at Intel Corporation, in a blog post.

Founded in 2013, Barefoot Networks specializes in the programmable, flexible switch silicon needed to meet the performance and rising demand of the hyperscale cloud. The acquisition will support Intel's focus on end-to-end cloud networking and infrastructure leadership, and help it deliver the new workloads and capabilities its data center customers require.

Further, Intel believes adding Barefoot Networks will improve the flow of information and enable more advanced data center interconnects.

“Barefoot Networks team is a great complement to our existing connectivity offerings. Barefoot Networks will add deep expertise in cloud network architectures, P4-programmable high-speed data paths, switch silicon development, P4 compilers, driver software, network telemetry and computational networking,” explained Shenoy.

This deal is likely to close in the third quarter of 2019.

READ NEXT: Intel and industry leaders form new consortium to accelerate datacenter performance

Image source: Intel


Microsoft open sources AI algorithm that powers Bing search engine

Microsoft is open sourcing, on GitHub, the AI-powered algorithm it uses to power Bing search. Called Space Partition Tree And Graph (SPTAG), the algorithm works with deep learning models to make Bing a more advanced search engine, enabling it to deliver millions of pieces of information, called vectors, in milliseconds.

By open sourcing SPTAG, Microsoft wants to empower developers to use these algorithms for their own use cases where end-users perform searches among large data troves.

In the past, web search was relatively simple: users expected a list of relevant results from the search engine. Things have changed. The same users can now perform searches by dropping an image into the search box.

They can ask digital assistants like Cortana to search for things, and when they ask a question they expect the search engine to return the exact answer rather than a list of pages that might contain it. This is making things difficult for traditional keyword-based search engines.

Advancements in AI are helping search engines like Bing address these challenges. Microsoft uses the SPTAG algorithm to power Bing with vector search.

“Vector search makes it easier to search by concept rather than keyword. For example, if a user types in “How tall is the tower in Paris?” Bing can return a natural language result telling the user the Eiffel Tower is 1,063 feet, even though the word “Eiffel” never appeared in the search query and the word “tall” never appears in the result,” explained Microsoft in a blog post.
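To make the idea of vector search concrete, here is a minimal, hedged sketch in Python: documents and queries are represented as numeric vectors (produced by a deep learning model in a real system; toy, made-up values here) and matched by similarity rather than by shared keywords. It is a brute-force illustration of the concept rather than SPTAG's actual API; SPTAG supplies the approximate nearest-neighbor indexing needed to do this at Bing's scale.

```python
import numpy as np

# Toy "embeddings": in a real system these come from a deep learning model.
# The documents and vector values are illustrative only.
documents = {
    "The Eiffel Tower is 1,063 feet high.": np.array([0.9, 0.1, 0.3]),
    "Paris is the capital of France.":      np.array([0.7, 0.6, 0.1]),
    "The Burj Khalifa is 2,717 feet high.": np.array([0.8, 0.0, 0.9]),
}

def cosine_similarity(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def vector_search(query_vector, k=1):
    # Rank every document by similarity to the query vector.
    # An index such as SPTAG replaces this exhaustive scan with an
    # approximate nearest-neighbor search so it scales to web-sized corpora.
    ranked = sorted(documents.items(),
                    key=lambda item: cosine_similarity(query_vector, item[1]),
                    reverse=True)
    return [text for text, _ in ranked[:k]]

# "How tall is the tower in Paris?" embedded by the same (hypothetical) model.
query = np.array([0.85, 0.2, 0.25])
print(vector_search(query))  # -> the Eiffel Tower sentence, despite no shared keywords
```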

The Bing team said that open sourcing the SPTAG algorithm will help enterprise developers power their applications with capabilities such as identifying a spoken language from an audio snippet or recognizing what is in a picture.

Also read: Microsoft rolls out new AI capabilities in Azure for developers and enterprises

“Even a couple seconds for a search can make an app unusable,” noted Rangan Majumder, group program manager on Microsoft’s Bing search and AI team.

“We’ve only started to explore what’s really possible around vector search at this depth.”


Microsoft rolls out new AI capabilities in Azure for developers and enterprises

Ahead of the Microsoft Build 2019 Developer Conference, the tech giant is rolling out a number of new tools and services to help developers and enterprises harness the potential of artificial intelligence (AI).

“AI is fueling the next wave of transformative innovations that will change the world. With Azure AI, our goal is to empower organizations to apply AI across the spectrum of their business to engage customers, empower employees, optimize operations and transform products,” wrote Eric Boyd, Corporate Vice President, Azure AI, in a blog post.

Azure Machine Learning Hardware Accelerated Models, powered by Project Brainwave, are now generally available. Announced in preview last year, these models use FPGAs to speed up AI inferencing. Microsoft has also pushed a preview of these models to the edge, in collaboration with Dell Technologies and HPE.

The company is adding ONNX Runtime support for NVIDIA TensorRT and Intel nGraph to provide high-speed inferencing on NVIDIA and Intel chipsets.
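To give a rough sense of what that looks like from the developer side, here is a minimal, hedged sketch of running an ONNX model with the onnxruntime Python package while requesting hardware-accelerated execution providers. The model file, input name and tensor shape are placeholders, and the TensorRT provider is only available in GPU/TensorRT builds of the runtime.

```python
import numpy as np
import onnxruntime as ort

# Hypothetical exported model; any ONNX model would do.
session = ort.InferenceSession(
    "model.onnx",
    providers=[
        "TensorrtExecutionProvider",  # NVIDIA TensorRT, if the build supports it
        "CUDAExecutionProvider",      # plain CUDA fallback
        "CPUExecutionProvider",       # always available
    ],
)

input_name = session.get_inputs()[0].name
dummy_input = np.random.rand(1, 3, 224, 224).astype(np.float32)  # example image-shaped tensor

outputs = session.run(None, {input_name: dummy_input})
print(outputs[0].shape)
```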

Azure Machine Learning service is getting new capabilities to allow developers, data scientists, and DevOps professionals to increase productivity, operationalize models at scale, and innovate faster. For instance, there is an automated machine learning UI that lets customers train ML models with just a few clicks.
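For teams working in code rather than the UI, automated ML is also exposed through the Azure ML Python SDK. The snippet below is a hedged sketch of how such a run is typically configured; the workspace config, dataset name and label column are assumptions, and exact parameter names can vary between azureml-sdk versions.

```python
# A minimal sketch of an automated ML run with the azureml-sdk package.
# Assumes an existing Azure ML workspace config and a registered tabular dataset;
# names ("churn", "churned") are hypothetical.
from azureml.core import Workspace, Experiment, Dataset
from azureml.train.automl import AutoMLConfig

ws = Workspace.from_config()                         # reads config.json for the workspace
training_data = Dataset.get_by_name(ws, "churn")     # hypothetical registered dataset

automl_config = AutoMLConfig(
    task="classification",
    training_data=training_data,
    label_column_name="churned",                     # hypothetical label column
    primary_metric="AUC_weighted",
    experiment_timeout_minutes=30,
)

experiment = Experiment(ws, "automl-churn-demo")
run = experiment.submit(automl_config, show_output=True)
best_run, fitted_model = run.get_output()            # best model found by automated ML
```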

Azure Machine Learning will also offer a zero-code visual interface, as well as notebooks that give developers and data scientists a code-first ML experience.

The hardware accelerated models are also becoming generally available in Azure Machine Learning. These models run on FPGAs in Azure for low-latency, low-cost inferencing. On Data Box Edge, they are currently available in preview.

The Machine Learning service is also getting MLOps, or DevOps for ML, capabilities, including Azure DevOps integration so that the entire ML lifecycle can be managed from Azure DevOps.

Also read: Microsoft Teams PowerShell module now up for grabs

Furthermore, Microsoft is also previewing a new service called Azure Open Datasets to help customers improve the accuracy of ML models using rich, curated open data and reduce the time spent on data discovery and preparation.


The shift to Edge to be the most profound change in decades: Edge Europe 2019

The global Edge Computing Market is expected to reach USD 3.24 billion by 2025, expanding at a CAGR of 41.0% during the forecast period.*

Over the last few years, we have seen the prodigious value brought to the market by cloud computing and the shift towards centralized data centers. While cloud infrastructure has proven to be an effective and economical platform for businesses, it has also coincided with a huge increase in the amount of data generated.

With the onset of the internet of things (IoT) and demand for a connected world, computing is becoming more dispersed, and centralized infrastructure alone will not be a silver bullet. The rapid progression towards an increasingly connected, smart world driven by IoT and other advanced technologies necessitates bringing computing closer to where data is produced and consumed – what we call the edge.

This trend towards distributed architecture will only accelerate with the convergence of new technologies and use cases such as 5G, artificial intelligence, smart cities, autonomous cars, micro-datacenters and edge facilities, marking the shift towards edge computing.

Edge Europe 2019 – part of the global Edge Congress series – will extensively cover this shift towards the edge on January 31st, 2019 in Amsterdam, Netherlands.

Organized by BroadGroup, the one-day congress will cover the full edge ecosystem, with an agenda of sessions and thought leadership spanning the critical IT infrastructure market. It will bring together thought leaders, innovators, enterprise users, investors and IT solution providers under one roof.

Why attend Edge Europe 2019?

  • To learn about the constantly evolving edge infrastructure landscape.
  • To know about the impact of edge on the data center.
  • To be a part of the next industrial revolution.
  • To know about the role of automation, AI and machine learning.
  • To discuss and learn about the connection between edge computing and digital transformation.

A look at the speaker lineup

The event will have key sessions led by expert leaders, like:

  • Cyril Perducat, EVP IoT & Digital Offers, Schneider Electric.
  • Ben Tian, Senior Strategic Director, Alibaba Cloud Research.
  • Andreas Keiger, Executive Vice President Global Business Unit IT, Rittal
  • Cara Mascini, CEO & Founder, EdgeInfra B.V.
  • Chad Boulanger, Managing Director, EMEA, FogHorn Systems.
  • Wilfried Dudink, Director Content Solutions, Product Management EMEA, CenturyLink.

You can check the entire list here.

DHN is the official media partner of the event. Register for Edge Europe 2019 using code DHNEDGEE19 to get a 15% discount.

Source:

*https://www.grandviewresearch.com/press-release/global-edge-computing-market


ZeroStack adds AI-as-a-service capability to its platform

The leading self-driving cloud provider ZeroStack is adding an AI-as-a-service capability to its platform. With the new capability, the company's customers will be able to offer one-click deployment of GPU resources and deep learning frameworks to their users.

Enterprises and MSPs leverage the ZeroStack platform to automate cloud infrastructure, applications, and operations. It allows them to focus on services that accelerate their businesses, simplify operations, and reduce costs.

Artificial intelligence (AI) and machine learning solutions are trending today and reshaping computing experiences. With the availability of modern machine learning and deep learning frameworks like TensorFlow, PyTorch, and MXNet, AI applications have become more viable than ever.

However, enterprises and MSPs often find it difficult to deploy, configure, and run AI frameworks and tools, and managing their interdependencies, versioning, and compatibility with servers and GPUs quickly becomes complicated.

With the new AI-as-a-service capability, ZeroStack aims to give its customers the ability to automatically detect GPUs and make them available to users. It will also take care of operating system (OS) and CUDA library dependencies, allowing users to focus on AI development.
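As a rough idea of what "automatically detect GPUs" involves under the hood, the sketch below enumerates NVIDIA GPUs on a host with the standard nvidia-smi tool. This is a generic illustration, not ZeroStack's implementation, and it assumes the NVIDIA driver (and therefore nvidia-smi) is installed on the host.

```python
# Generic sketch: enumerate NVIDIA GPUs on a host via nvidia-smi.
# Not ZeroStack's implementation; assumes the NVIDIA driver is present.
import subprocess

def detect_gpus():
    try:
        output = subprocess.check_output(
            ["nvidia-smi", "--query-gpu=index,name,memory.total", "--format=csv,noheader"],
            text=True,
        )
    except (FileNotFoundError, subprocess.CalledProcessError):
        return []  # no driver or no GPUs on this host
    gpus = []
    for line in output.strip().splitlines():
        index, name, memory = [field.strip() for field in line.split(",")]
        gpus.append({"index": int(index), "name": name, "memory": memory})
    return gpus

print(detect_gpus())
# e.g. [{'index': 0, 'name': 'Tesla V100-SXM2-16GB', 'memory': '16160 MiB'}]
```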

“ZeroStack is offering the next level of cloud by delivering a collection of point-and-click service templates,” said Michael Lin, director of product management at ZeroStack. “Our new AI-as-a-service template automates provisioning of key AI tool sets and GPU resources for DevOps organizations.”

Additionally, the company mentioned that users can enable GPU acceleration with dedicated access to multiple GPU resources for an order-of-magnitude improvement in inference latency and user responsiveness. GPUs within hosts can be shared across users in a multi-tenant manner.

Also read: Top 4 AI engines to look out for in 2019

To optimize utilization of the new AI-as-a-service capability, admins of the ZeroStack self-driving cloud will be able to configure and scale GPU resources and apply fine-grained access control for end users.


Witness the most transformative opportunity of the next decade, at Edge 2018

The edge computing market size is expected to grow from USD 1.47 Billion in 2017 to USD 6.72 Billion by 2022, at a Compound Annual Growth Rate (CAGR) of 35.4% during the forecast period.1

A voluminous increase in the amount of data generated by millions of devices streaming music and video and using social networks is creating high network load and demand for more bandwidth. Edge computing is one of the most feasible solutions when it comes to improving user experience and optimizing bandwidth usage.

Increased load on cloud infrastructure, growing application range in various types of industries, and an increase in the number of intelligent and connected devices are the main driving forces for the Edge Computing Market.

In this growing market, organizations need to equip themselves with information about the latest and upcoming trends in edge computing and edge datacenters, the strongest market players, and the opportunities ahead.

To explore how computing data at the edge will connect everything, BroadGroup is conducting Edge 2018 on October 24-25, 2018 in Austin, Texas.

Let’s take a quick sneak peek at Edge 2018

Edge 2018 will be a highly interactive platform, bringing together world-leading edge players and datacenter leaders to discuss the new opportunities that exist in the edge ecosystem.

Edge 2018 will respond to the opportunities emerging from the growing shift towards distributed infrastructure, driven by the convergence of new technologies such as AI (artificial intelligence), IoT (internet of things) and 5G with smart cities, autonomous cars, micro-datacenters, connected things and edge facilities.

“Edge represents the most metamorphic and profound change for decades, connecting things everywhere and enabling new ways of analyzing and managing the data needed to alter society, drive change and prioritize information critical to local infrastructure operations enabled by 5G and AI,” said Philip Low, the Chairman of BroadGroup.

Highlights of Edge 2018

1. Anixter Workshop and Opening Reception

Anixter’s pre-conference workshop and opening reception, held the day before the event on 23rd October, will feature speakers from Black & Veatch and Equinix. The workshop will explore key trends and best practices within edge data center environments.

2. MeetMe

MeetMe is a golden opportunity for all registered delegates to arrange meetings with edge visionaries and discuss key trends in the edge market.

3. Infrastructure Masons Edge Meeting

Edge Congress will also host a meeting of iMasons – an association of digital infrastructure professionals. Though it is primarily for iMasons members, non-members can also attend and share their knowledge.

4. Automation and AI Panel

The panel will discuss the role of AI and automation in managing data at the point of collection to ensure that the networks are not overburdened, which in turn can increase agility and production.

5. Edge Leaders Panel Discussion

Led by IGR President Iain Gillott, the panel will discuss how the rising demand for IT services is driving the growth of datacenters.

6. Farewell BBQ Hosted by Vertiv

At the end of Day 2, the attendees will get to enjoy the authentic taste of Texas at the Farewell BBQ hosted by Vertiv.

7. Picturesque Views of Austin Texas

Attendees can enjoy the beautiful environment and culture of Texas, its inspiring cuisine, and stunning views, along with networking at the main event venue.

A look at the speakers

The event will have eminent C-level speakers, data center leaders and edge experts who will share their knowledge in their respective sessions. A quick look at the speakers:

  • Kevin Brown – Senior Vice President of Innovation & CTO, Schneider Electric, IT Division
  • Gaurav Chawla – VP and Fellow, Dell EMC and EdgeX Foundry Member
  • Kit Colbert – VP & CTO, Cloud Platform Business Unit, VMware
  • Mark Collier – Chief Operating Officer, OpenStack Foundation
  • Jeff Ferry – VP, Goldman Sachs
  • Alon Gavrielov – Infrastructure Planning & Growth, Cloudflare
  • Andrew Jimenez – VP of Technology, Anixter
  • Mike McBride – Sr. Director, Innovation & Strategy, Huawei Technologies
  • Jimmy D. Pike – SVP and Senior Fellow, Dell EMC

You can access the full list here.

Event Sponsors

Schneider Electric is the founding partner of the Edge 2018 while Dell EMC and Vertiv are the global platinum sponsors.

You can check other sponsors here.

DHN is the media partner of Edge 2018

With DHN being the media partner of Edge 2018, our readers and subscribers get a 15% discount when registering. Use code DHNEdge18 to avail the discount.

Less than a week is left to register, so hurry!

Follow us @DailyHostNews to stay updated.

1https://www.marketsandmarkets.com/PressReleases/edge-computing.asp


Microsoft snaps up deep learning startup Lobe to make AI development easier

Microsoft is acquiring San Francisco-based startup Lobe to help developers easily build, train and ship deep learning and AI (artificial intelligence) models.

Lobe provides a simple visual interface that allows users to build custom deep learning models, train them, and ship them directly to applications without writing any code.

Deep learning is a subset of machine learning used to build artificial neural networks. The technology has numerous applications, such as in driverless cars, where it teaches the car to recognize traffic signals or to differentiate between a lamppost and a pedestrian.

Although deep learning technology has made good progress over the last few years, Microsoft said that the process of developing deep learning systems is still complex and slow.

“In many ways though, we’re only just beginning to tap into the full potential AI can provide. This in large part is because AI development and building deep learning models are slow and complex processes even for experienced data scientists and developers,” wrote Kevin Scott, Executive VP and CTO, Microsoft, in a blog post.

“To date, many people have been at a disadvantage when it comes to accessing AI, and we’re committed to changing that.”

Lobe and Microsoft aim to make AI development easier. With the Lobe platform, users only need to drag in a folder of training examples from their desktop. Lobe automatically builds a custom deep learning model and starts training it. When the process finishes, users can export the trained model and ship it directly to their apps.
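For comparison, here is roughly the kind of work Lobe automates away, sketched with Keras: building an image classifier from a folder of labeled example images. This is an illustrative, hedged example under assumed conditions (a recent TensorFlow version, a hypothetical "training_examples" folder with one sub-folder per label, and a deliberately tiny network), not Lobe's internal pipeline.

```python
# Illustrative sketch of what Lobe automates: train an image classifier
# from a folder of examples (one sub-folder per label). Not Lobe's pipeline.
import tensorflow as tf

# Hypothetical folder layout: training_examples/cat/*.jpg, training_examples/dog/*.jpg, ...
dataset = tf.keras.utils.image_dataset_from_directory(
    "training_examples", image_size=(160, 160), batch_size=32)
num_classes = len(dataset.class_names)

model = tf.keras.Sequential([
    tf.keras.layers.Rescaling(1.0 / 255),                 # normalize pixel values
    tf.keras.layers.Conv2D(16, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(num_classes, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

model.fit(dataset, epochs=5)
model.save("classifier.keras")   # export the trained model for use in an app
```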

Lobe Team at Microsoft

Microsoft has been working on making the AI development easier for everyone over the last few years. The tech giant snapped up conversational AI startup Semantic Machines in May, and then acquired Bonsai in June to bolster its vision of making AI accessible to all.

Also read: Microsoft’s Visual Studio Team Services is now Azure DevOps

“We look forward to continuing the great work by Lobe in putting AI development into the hands of non-engineers and non-experts.  We’re thrilled to have Lobe join Microsoft and are excited about our future together to simplify AI development for everyone,” added Scott.


HPE to invest $4 billion in intelligent edge technologies over the next four years

Hewlett Packard Enterprise (HPE) is planning to invest $4 billion into intelligent edge technologies and services over the next four years.

Antonio Neri, CEO of HPE, announced the company's plan at the Discover conference. Edge computing – or the intelligent edge, as HPE refers to it – covers most of the modern technologies entering the market.

The aim of the new investment is to help customers turn all their data, from every edge to any cloud, into intelligence, said HPE. This will drive seamless interactions between people and things and deliver personalized user experiences.

HPE will specifically invest in research and development to transform the services, products, and consumption models covering cybersecurity, automation, and artificial intelligence (AI) and machine learning.

“Data is the new intellectual property, and companies that can distill intelligence from their data —whether in a smart hospital or an autonomous car—will be the ones to lead,” said Antonio Neri. “HPE has been at the forefront of developing technologies and services for the Intelligent Edge, and with this investment, we are accelerating our ability to drive this growing category for the future.”

Gartner forecasts that by 2022, 75% of enterprise-generated data will be created and processed outside the traditional, centralized data center or cloud, up from around 10% in 2018, and that most digital business projects will be created at the edge.

HPE plans to meet these intelligent edge demands with the new investment. The company is already a prominent player in the fast-growing intelligent edge market. For instance, HPE's Aruba and Pointnext services are helping Gatwick Airport become the most technologically advanced airport in the world.

Also read: HPE simplifies hybrid cloud management with GreenLake Hybrid Cloud

“The next evolution in enterprise technology will be in edge-to-cloud architecture,” continued Neri.  “Enterprises will require millions of distributed clouds that enable real-time insights and personalized experiences exactly where the action is happening.”


“Demand for scale and speed delivered at the right economics is opening the door for a new breed of Hyperscale Service Provider being sought by the biggest Internet-based businesses.” – Chris Ortbals, QTS.

The rapid adoption of public cloud and the onset of new technologies like the internet of things, neural networks, artificial intelligence, machine learning and mega-scale online retailing are reshaping the data center industry, driving demand for data center capacity and cloud connectivity.

QTS is a leading data center provider that serves the current and future needs of both hyperscale and hybrid colocation customers via a software-defined data center experience. We recently interviewed Chris Ortbals, Executive Vice President, Products & Marketing at QTS, to get his take on changing data center requirements and QTS' strategy of redefining the data center.

1. Please share an overview of QTS’ journey from inception till date with DHN readers. How has it transformed from being a single data center to becoming one of the leading national data center providers?

QTS is the creation of Chad Williams, a business and real-estate entrepreneur who had a strong vision of what a data center can and should be. Williams foresaw increasing IT complexity and demand for capacity and recognized the opportunity for large, highly secure, multi-tenant data centers, with ample space, power and connectivity.

Chris Ortbals, Executive Vice President, Products & Marketing, QTS

In 2005, QTS was formally established with the purchase of a 370,000 square foot Atlanta-Suwanee mega data center. Williams focused on building an integrated data center platform delivering a broad range of IT infrastructure services ranging from wholesale to colocation, to hybrid and multi-cloud, to hyperscale solutions, and backed by an unwavering commitment to customer support.

Since then, we have grown both organically and through acquisition into one of the world’s leading data center and IT infrastructure services providers, and in 2013 we began trading on the New York Stock Exchange under the symbol QTS (NYSE: QTS).

Today, QTS offers a focused portfolio of hybrid colocation, cloud, and hyperscale data center solutions built on the industry’s first software-defined data center and service delivery platform and is a trusted partner to 1,200 customers, including 5 of the world’s largest cloud providers. We own, operate or manage more than six million square feet of data center space encompassing 26 data centers, 600+ megawatts of critical power, and access to 500+ networks including connectivity on-ramps to the world’s largest hyperscale companies and cloud providers.

More recently, we have been accelerating momentum as a hyperscale data center provider able to meet unique requirements for scale and speed-to-market delivered at the right economics being sought by the biggest Internet-based businesses.

2. Throw some light on the recent QTS strategy of redefining the data center. What’s the Software-Defined Data Center approach, how do you plan to execute it and how will it help hyperscale and hybrid colocation customers?

We believe that QTS’ Service Delivery Platform (SDP) establishes QTS as one of the first true Software-Defined Data Centers (SDDC), with 100% completeness of vision. It is an architectural approach that facilitates service delivery across QTS’ entire hybrid colocation and hyperscale solutions portfolio.

Through policy-based automation of the data and facilities infrastructure, QTS customers benefit from the ability to adapt to changes in real-time, to increase utilization, performance, security and quality of services. QTS’ SDP approach involves the digitization, aggregation and analysis of more than 4 billion data points per day across all of QTS’ customer environments.

For hybrid colocation and hyperscale customers, it allows them to integrate data within their own applications and gain deeper insight into the use of their QTS services within their IT environments. It is a highly-automated, cloud-based approach that increases visibility and facilitates operational improvements by enabling customers to access and interact with information related to their data center deployments in a way that is simple, seamless and available on-demand.

3. How do you differentiate yourself from your competition?

QTS software-defined service delivery is redefining the data center to enable new levels of automation and innovation that significantly improve our customers’ overall experience. This is backed by a high-touch, enterprise customer support organization that is focused on serving as a trusted and valued partner.

4. How does it feel to receive the industry leading net promoter score for the third consecutive year?

We were extremely proud to announce that in 2017 we achieved our all-time-high NPS score of 72, marking the third consecutive year that we have led the industry in customer satisfaction for our data centers across the U.S.

Our customers rated us highly in a range of service areas, including customer service, physical facilities, processes, responsiveness of onsite staff and our 24-hour Operations Service Center.

As our industry-leading NPS results demonstrate, our customers continue to view QTS as a trusted partner. We are also starting to see the benefits of our service delivery platform that is delivering new levels of innovation in how customers interact with QTS and their infrastructure, contributing to even higher levels of customer satisfaction.

5. QTS last year entered into a strategic alliance with AWS. Can you elaborate what is CloudRamp and how will it simplify cloud migration?

AWS came to us last year telling us that a growing number of their customers were requiring colocation as part of their hybrid IT solution. They viewed QTS as a customer-centric colocation provider with the added advantage of our Service Delivery Platform that allowed us to seamlessly integrate colocation with AWS as a turnkey service available on-demand.

We entered a strategic collaboration with AWS to develop and deliver QTS CloudRamp™ – direct connected colocation for AWS customers made available for purchase online via the AWS Marketplace.

By aligning with AWS, we were able to offer an innovative approach to colocation, bridging the gap between traditional solutions and the cloud. The solution is also groundbreaking in that it marked the first time AWS had offered colocation to its customers and signaled the growing demand for hybrid IT solutions. At the same time, it significantly accelerated time-to-value for what previously had been a much slower purchasing and deployment process.

For enterprises with requirements extending beyond CloudRamp, QTS and AWS provide tailored, hybrid IT solutions built upon QTS’ highly secure and reliable colocation infrastructure optimized for AWS.

6. Tell us something about Sacramento-IX. How will the newly deployed Primary Internet Exchange Hub in QTS Sacramento Data Center facilitate interconnection and connectivity solutions?

QTS is strongly committed to building an unrestricted Internet ecosystem and we are focused on expanding carrier neutral connectivity options for customers in all of our data centers.

Interconnection has evolved from a community driven effort in the 90’s to a restrictive, commercial industry dominated by a few large companies. Today there is a movement to get back to the community driven, high integrity ecosystem, and QTS is aligning our Internet exchange strategy as part of this community.

A great example is how the Sacramento Internet Exchange (Sacramento-IX) has deployed its primary Internet Exchange hub within QTS’ Sacramento data center. It is the first internet exchange in Sacramento and is being driven by increased traffic and network performance demands in the region. It expands QTS’ Internet ecosystem and simplifies our customers’ network strategies by providing diverse connectivity options that allow them to manage network traffic in a more cost-effective way.

Once considered the backup and recovery outpost for the Bay Area, Sacramento has quickly become a highly interconnected, geostrategic network hub for northern California. It also solidifies our Sacramento data center as one of the most interconnected data centers in the region and as the primary west coast connectivity gateway for key fiber routes to Denver, Salt Lake City and points east.

7. Hyperscale data centers are growing at an accelerated pace and are expected to soon replace the traditional data centers. Can you tell us some factors/reasons that aid the rise of hyperscale data centers?

The rapid adoption of public cloud, the Internet of things, artificial intelligence, neural networks, machine learning, and mega-scale online retailing are driving unprecedented increases in demand for data center capacity and cloud connectivity.

Hyperscale refers to the rapid deployment of this capacity required for new mega-scale Internet business models. These Hyperscale companies require a data center growth strategy that combines speed, scalability and economics in order to drive down cost of compute and free up the capital needed to feed the needs of their core businesses. Think Google, Uber, Facebook, Amazon, Apple, Microsoft and many more needing huge capacity in a quick timeframe. They are looking for mega-scale computing capacity inside hyperscale data centers that can deliver economies of scale not matched by conventional enterprise data center architectures.

This demand for scale and speed delivered at the right economics is opening the door for a new breed of Hyperscale Service Provider being sought by the biggest Internet-based businesses. These are data centers whose ability to deliver immense capacity must be matched by an ability to meet core requirements for speed, quality, operator excellence, visibility and economics – requirements that leave out the majority of conventional hosting and service providers, who are either not interested in or not capable of meeting them.

And while these organizations may need very large, geostrategic 20-, 40- or 60-megawatt deployments, they typically want a provider that can deliver that capacity incrementally to reduce risk and increase agility.

8. Throw some light on your current datacenters and future expansion plans.

Chad Williams had the vision of identifying large, undervalued but infrastructure-rich buildings (at a low cost basis) that could be rapidly transformed into state-of-the-art “mega” data center facilities to serve growing enterprise demand for outsourced IT infrastructure services.

In Chicago, the former Chicago Sun Times printing plant was transformed into a 467,000 square foot mega data center. In Dallas and Richmond, former semi-conductor plants are now state of the art mega data centers encompassing more than 2 million square feet. And in Atlanta, the former Sears distribution center was converted into a 967,000 square foot mega data center that is now home to some of the world’s largest cloud and social media platforms.

However, in some cases a greenfield approach is the more viable option. In Ashburn, Va., the Internet capital of the world, we are building a new 427,000 square foot facility from the ground up, expected to open later this summer. Expansion plans also call for new data center builds in Phoenix and Hillsboro, Oregon.

9. What is your datacenter sustainability and efficiency strategy?

At QTS, we understand that being a good environmental steward takes much more than just a simple initiative. That’s why we have focused our efforts on developing a company-wide approach – one that utilizes reused and recycled materials, maximizes water conservation and improves energy savings.

Central to this is our commitment to minimizing the data center carbon footprint and utilizing as much renewable fuel as possible by implementing a 3-pronged sustainability approach featuring solutions in containment and power usage effectiveness (PUE) metric products.

This encompasses:

1. Develop and Recycle Buildings

Part of our data center sustainability strategy is reusing brownfield properties and transforming them into state-of-the-art data centers.

2. Water Conservation

With a large data center comes a big roof that is capable of harvesting rainwater. We collect millions of gallons of water using a harvesting system on a portion of the roof.

3. Energy Efficiency

As a data center provider, cooling is a critical part of our job, accounting for approximately 30% of the electricity load at the data center.

QTS is one of the first data center companies to invest in renewable energy specifically for its hybrid colocation and hyperscale customers.

A recent example is a multi-year agreement with Citi to provide 100% renewable power for our 700,000 sq. ft. mega data center in Irving, Texas. The power and renewable energy credits will come from the Flat Top Wind Project, a 200 megawatt utility-scale wind energy facility in central Texas. QTS will purchase 15 MW of 100% renewable power for its Irving data center, with plans for a similar agreement for its Fort Worth data center later this year.

The investment supports QTS’ commitment to lead the industry in providing clean, renewable energy alternatives for QTS hybrid colocation and hyperscale customers, which include five of the five largest cloud providers and several global social media platforms.

In addition to the new wind power initiative in Texas, QTS’ New Jersey data center features a 14 MW solar farm to offset emissions associated with power consumption at that facility. QTS plans to expand renewable power initiatives in existing and new data centers including those being planned for Phoenix and Hillsboro, OR.

10. What’s in the roadmap for the year 2018?

QTS is now executing on our 2018 strategic growth plan that involves continued innovation with the Service Delivery Platform. It enables a software-defined data center experience for hyperscale and hybrid colocation customers. QTS’ SDP represents a big data approach enabling customers to access and interact with information related to their specific IT environment by aggregating metrics and data from multiple sources into a single operational view.

More importantly, it provides customers the ability to remotely view, manage and optimize resources in real time in a cloud-like experience, which is what customers increasingly expect from their service providers. In addition, through a variety of software-defined networking platforms, enterprises can now get direct connectivity to the world’s largest cloud providers with real-time visibility and control over their network infrastructure using QTS’ SDP application interface.


Salesforce marks its largest acquisition, buys MuleSoft for $6.5 billion 

Salesforce has acquired the enterprise software integration firm MuleSoft for $6.5 billion, marking its largest-ever acquisition.

MuleSoft builds enterprise-grade software called Anypoint Platform, which enables companies to easily build and scale an application network of data, apps and devices across cloud and on-premise.

With this acquisition, MuleSoft will now power Salesforce Integration Cloud, which will help enterprises to unlock data across legacy systems, devices and cloud apps. It will speed up their digital transformation journey and provide a more connected customer experience.

“Every digital transformation starts and ends with the customer,” said Marc Benioff, Chairman and CEO, Salesforce. “Together, Salesforce and MuleSoft will enable customers to connect all of the information throughout their enterprise across all public and private clouds and data sources—radically enhancing innovation. I am thrilled to welcome MuleSoft to the Salesforce Ohana.”

Salesforce has been empowering enterprises to connect more effectively with their customers through better insights from their data with Einstein, its artificial intelligence platform. MuleSoft will help Salesforce access data from across the enterprise, bringing those systems together in one place to feed its ML and AI operations.

Suggested Reading: IBM and Salesforce partnership brings together Watson and Einstein – to help enterprises make smarter decisions  

Salesforce will pay $44.89 per share for MuleSoft, a 36% premium over MuleSoft’s closing share price on March 19, 2018. Each share of MuleSoft will equal $36 in cash and 0.0711 shares of Salesforce common stock. The deal is expected to close by the end of July 2018.
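As a rough sanity check on those figures (derived only from the numbers above, not from the companies' filings): the stock component implies a Salesforce share price of about (44.89 − 36.00) / 0.0711 ≈ $125, so $36 in cash plus 0.0711 × $125 ≈ $8.89 in stock adds up to the quoted $44.89 per MuleSoft share.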

“With the full power of Salesforce behind us, we have a tremendous opportunity to realize our vision of the application network even faster and at scale,” said Greg Schott, MuleSoft Chairman and CEO. “Together, Salesforce and MuleSoft will accelerate our customers’ digital transformations enabling them to unlock their data across any application or endpoint.” 

MuleSoft went public just last year but showed strong growth, closing Q4 2017 with revenue of $88.7 million, an increase of over 60% compared to the previous year's final quarter. The company has nearly 1,200 customers worldwide, including McDonald's, Verizon and Coca-Cola.
