
Amazon reportedly acquiring CloudEndure for $250 million

Amazon is reportedly acquiring the cloud computing company CloudEndure in a deal worth $250 million.

According to sources, the deal has already been signed, but neither company has yet commented to confirm it.

Based in Israel, CloudEndure provides disaster recovery, continuous backup, and live migration services for physical, virtual, and cloud-based workloads, targeting Amazon Web Services (AWS), Microsoft Azure, Google Cloud Platform (GCP), VMware, Oracle Cloud, and OpenStack.

Since its launch in 2012, CloudEndure has raised over $18 million from Dell EMC, VMware, Mitsui, Infosys, and Magma Venture Partners.

If the acquisition goes through, the open question is whether the public cloud giant will continue CloudEndure's support for rival cloud providers.

Adding CloudEndure to the data protection and backup services of AWS could play a significant role for the company, although it is not yet clear what Amazon plans to do with the disaster recovery startup.

AWS appears to be focusing on security and compliance services. Last year, the company acquired the Cambridge-based cybersecurity startup Sqrrl to bolster its public cloud security. Sqrrl provides a threat hunting platform that utilizes link analysis, machine learning, and multi-petabyte scalability, helping detect advanced threats faster.

Also read: AWS brings VMware Cloud on AWS in customer’s own data center with Outposts

According to sources cited by Globes, the deal is expected to close in the coming days.


Secura announces cloud-based disaster recovery solution enabled by VMware vCloud Availability

Secura is delighted to announce Cloud DR, a disaster recovery as a service solution which leverages VMware vCloud Availability, one of the latest cloud technology advancements from VMware, a leading innovator in enterprise software.

Secura’s Cloud DR solution offers simple, cost-effective Disaster Recovery as a Service (DRaaS) for VMware-based platforms, either as a bolt-on service for customers of Secura’s Virtual Private Cloud (VPC) or for any organization with VMware infrastructure on-site or in a data center.

Built around VMware vCloud Availability, Cloud DR offers native disaster recovery protection for any VMware workloads with near instant recovery of protected virtual machines at the click of a button.

Cloud DR can be entirely self-provisioned and managed using VMware vCenter and vSphere Replication. Customers can configure Recovery Point Objective (RPO) and Recovery Time Objective (RTO) settings, access non-disruptive sandbox testing on demand, and manage replication, migration and failover.

“We’re really excited to have worked with VMware to offer this innovative cloud-based Disaster Recovery as a Service to customers in the UK,” said Secura CTO, Dan Nichols.

“Our Cloud DR solution makes it simple for businesses of any size to employ native, robust VMware-based disaster recovery services in a cost-effective way that does not require significant upfront investment.”

Secura’s Cloud DR service removes the need to purchase additional hardware or software, connecting any VMware-based platform to robust disaster recovery on Secura’s own VPC infrastructure.

Easy online management, metered storage and usage-based costs make it cost effective and simple to implement enterprise-level disaster recovery.

Also read: Secura Partners with HyTrust to Offer Robust Virtual Machine Level Data Encryption as a Service

“We are committed to delivering innovative solutions to our customers to accelerate their digital transformations,” said Alanzo Blackstock, Director, UKI Partner Organization at VMware.

“The opportunity to work with Secura in the UK on its Cloud DR solution, leveraging the latest VMware vCloud Availability technology, will enable Secura to offer simple, cost-effective cloud-based disaster recovery services that seamlessly support their customers’ environments.”


Microsoft enables replication of Azure VMs to other regions with new disaster recovery feature

Microsoft has announced Azure Site Recovery (ASR) support for Azure Availability Zones, allowing customers to replicate zone-pinned virtual machines to other regions within a geographic cluster.

The tech giant introduced Azure Availability Zones last year to provide fault-isolated physical locations within an Azure region, each with independent power, networking and cooling. An AZ consists of one or more datacenters and houses infrastructure to support highly available, mission-critical applications.

Customers can choose to deploy multiple virtual machines across multiple zones within a region for infrastructure-as-a-service (IaaS) applications. These VMs are then physically separated across zones, and a virtual network is created using load balancers at each site.

“On rare occasions, an entire region could become unavailable due to major incidents such as natural disasters. Non-transient, large scale failures may exceed the ability of high availability (HA) features and require full-fledged disaster recovery (DR),” wrote Sujay Talasila, Senior Program Manager, Cloud + Enterprise, in a blog post.

Azure Site Recovery aims to complete the resiliency continuum for applications running on Azure VMs. It is a built-in disaster recovery as a service (DRaaS) offering in Azure that helps enterprises keep their business running even during major IT outages.

By deploying replication, failover, and recovery processing using Azure Site Recovery, enterprises can keep their applications running during planned and unplanned outages.
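The replicate, fail over, verify, fail back cycle described above can be sketched generically. Every function parameter below is a placeholder for a provider-specific step, not an Azure API; the sketch only shows the shape of a planned DR drill:

```python
def dr_drill(workloads, replicate, failover, verify, failback):
    """Run a planned disaster recovery exercise over a set of workloads.

    Each step is a caller-supplied function standing in for a
    provider-specific operation (replicating a VM, switching traffic
    to the recovery region, health-checking the application, etc.).
    """
    report = {}
    for wl in workloads:
        replicate(wl)          # ensure a current recovery point exists
        failover(wl)           # bring the workload up at the recovery site
        ok = verify(wl)        # confirm the application actually works
        if ok:
            failback(wl)       # planned drill: return to the primary site
        report[wl] = "passed" if ok else "failed"
    return report
```

The point of the structure is that verification happens before failback: a workload that does not come up cleanly at the recovery site is flagged rather than silently returned to the primary.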

ASR is a native DRaaS, and Gartner recognized Microsoft as a Leader in the 2018 Magic Quadrant for DRaaS on the basis of completeness of vision and ability to execute.

Also read: Azure Machine Learning service now generally available

Azure Site Recovery is now generally available in all regions that support Availability Zones.


KVM-powered Red Hat Virtualization 4.2 arrives with notable upgrades, improved integration across the open hybrid cloud

Red Hat announced the latest version of its KVM-powered virtualization platform, Red Hat Virtualization 4.2, aimed at modernizing traditional applications while supporting innovation in cloud-native and container-based applications.

The latest version of Red Hat Virtualization comes with several new features including a simplified user interface (UI), new capabilities for virtual networking, integration with other Red Hat services, high-performance VM options, and more.

“Red Hat Virtualization allows our customers to access the economics, performance and agility they need from their virtualized infrastructure. With enhancements in Red Hat Virtualization 4.2 including improved user interface, user-managed networking, disaster recovery features and tighter integration with the rest of Red Hat’s portfolio, we expect organizations to find it even easier to use and manage Red Hat Enterprise Linux hypervisor,” said Gunnar Hellekson, director, product management, Linux and Virtualization, Red Hat.

The user interface in RHV 4.2 is built on the PatternFly project, a UI framework for enterprise web applications, to provide a cohesive look and feel. Red Hat said the new UI is easier to use and more intuitive, so users can move between management tools without relearning basic functions. It will also support other Red Hat services, including CloudForms.

The latest RHV version provides a high-performance VM option that can run VMs at nearly bare-metal speed, useful for demanding workloads such as big data analysis or artificial intelligence.

For better disaster recovery (DR), RHV can now use storage at both the primary and failover sites with more reliable, constant data replication. Further, Red Hat Ansible Playbooks and Roles power automated failover and failback of DR processes, reducing human error and limiting data and operational losses.

RHV 4.2 integrates with Open Virtual Network (OVN), a native SDN solution built on Open vSwitch. It automates the management of network infrastructure, freeing up work for network admins.

Red Hat Virtualization 4.2 will have integration with other Red Hat services, including Ansible Automation, Gluster Storage, CloudForms, OpenStack Platform, and Satellite.

Moreover, RHV now supports Nvidia virtual GPU (vGPU) solutions and Cisco ACI integration. There are new metrics and logging features for improved real-time reporting and visualization capabilities.

Also read: Microsoft and Red Hat bring OpenShift to Azure as a jointly managed service

RHV 4.2 is now generally available as a standalone offering, as part of Red Hat Cloud Suite and Red Hat Virtualization Suite, as well as an integrated offering with Red Hat Enterprise Linux.

Existing customers of RHV can upgrade to 4.2 through Red Hat Customer Portal.


How to effectively prepare a business to mitigate consequences of an aggressive cyber-attack?

After a series of malicious cybersecurity incidents in 2017 affected large companies and private organizations all over the world, cybersecurity alerts have become the norm. However, the worst may be yet to come. Last week, the United States and Britain issued a joint warning about a new wave of cyberattacks, most likely aimed at governments and private organizations, but also at individual homes and offices.

Unfortunately, security incidents happen in all organizations. The only way to improve your company’s resilience, ensure your customers’ and stakeholders’ confidence, and continue to operate your business as normal is to invest in incident management processes such as DRaaS. Such solutions help your business mitigate the harmful impacts of cyber-attacks.

Read below how you can prepare to fight possible business disruption caused by an aggressive cyber-attack.

Carrying out cybersecurity incident threat analysis – For thousands of people living in the UK, the word “ransomware” became comprehensible when they were turned away from NHS hospitals last year due to the malicious WannaCry attack. There is nothing unusual about this: only recently have businesses and private users around the world seen what cyber-crime means in practice, and what disastrous consequences it can have for business continuity. The first stage of protecting your business from cybersecurity incidents is largely about understanding: it involves building a deep picture of what you might be dealing with and the level of threat to your organization.

Providers of Disaster Recovery as a Service help firms contextualize cybersecurity threats by looking at key business processes and system interdependencies that might be targeted by hackers. It is important to channel all your concerns to the investigators at this stage, to help them better tailor their services to your business operations.

Consider shifting the responsibility with service level agreements – Building your own disaster recovery team might be problematic, especially when you are running a small business. However, research shows that formal cybersecurity incident teams are invaluable for dealing with disruptive events, as they are often the only people with the technical expertise needed to advise on business decisions quickly. It often makes sense for small and medium organizations to fully or partially shift their responsibility for creating and managing disaster recovery programs to disaster recovery providers.

Transferring ownership can be done by signing a service level agreement, which guarantees that the aspects of the service you both agreed to will be delivered. This essentially means that in the event of a cybersecurity incident, an external Recovery Execution Team, not you, will be responsible for one or all of the following: identifying, investigating, taking appropriate action, or overseeing all recovery processes.

Applying changes – When looking at vulnerabilities in your system, it’s highly likely that security investigators will recommend changes to the IT services within your company. Configuring your systems and networks, transferring mission-critical data to safe data centers, and implementing adequate monitoring processes are crucial for eliminating single points of failure, which are often enough to compromise your infrastructure.

Securing and retaining your data is critical – These days companies run on data, so it is essential to take a proactive approach: recover not only your applications and servers, but also ensure they are working and that the data they store is recovered. Disaster recovery providers can help you identify the data that needs to be protected, where it is stored, and how it can be recovered, without relying on outdated data deduplication.

Depending on your business objectives, you might choose either replication services that create a fully working, ready-to-use copy of your environment (especially important for companies with a strict RTO), or traditional back-up and vaulting methods, which are recommended for platforms that can afford being down for between 4 and 12 hours.
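That trade-off can be expressed as a simple rule of thumb. The 4–12 hour band is the article’s own figure; the function name and strategy labels are illustrative, not a vendor’s taxonomy:

```python
def choose_dr_strategy(rto_hours: float) -> str:
    """Map a Recovery Time Objective (in hours) to a broad DR approach."""
    if rto_hours < 4:
        # Strict RTO: keep a fully working, ready-to-use replica.
        return "replication"
    if rto_hours <= 12:
        # The platform can afford 4-12 hours of downtime.
        return "backup-and-vaulting"
    # Looser objectives can tolerate slower restore paths.
    return "archival-backup"
```

In practice the decision also weighs cost, since continuous replication requires a standing copy of the environment while vaulted backups consume only storage.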

Continuous review of your state of readiness – Once you have realistic scenarios based on the conducted threat analysis, you might want to see whether the changes you have applied to protect your infrastructure and data work properly. A good testing method usually involves initiating a fictional, yet very probable, attack internally and verifying how well you (or your security provider) can respond to it. This stage might also involve undergoing recovery exercises, which could prepare you even better for an actual disaster.


Guest Author: Matthew Walker-Jones

Matthew specializes in content covering topics including data-driven marketing, online data protection, data recovery and cybersecurity. With a passion for all things data, he constantly stays up to date with the latest news on data security.


DCspine Deploys PoP in New Amsterdam Facility

An international data center services provider has announced that DCspine opened a Point-of-Presence (PoP) in the provider’s new Amsterdam facility.

DCspine is an on-demand, fully automated, scalable, high-capacity data center interconnection platform designed for the cloud era. It delivers a virtual Meet-Me-Room (MMR) interconnecting more than thirty Netherlands-based data centers through software-defined networking.

Developed and owned by Eurofiber Group, a provider of digital infrastructure services in the Netherlands, Belgium and Germany, DCspine is a high-capacity, ‘Terabit’ interconnection platform designed to innovate data center interconnection in the Netherlands and beyond.

Eurofiber has invested several millions of euros in its DCspine platform to provide carrier-neutral data centers and their customers with the flexibility required to optimize cloud service delivery and meet the requirements set by on-demand services models.

The DCspine PoP deployed in the new Amsterdam facility allows customers to establish easy, fast and cost-efficient cross-connections (up to 100G per connection) with other data centers in the Amsterdam region and beyond.

DCspine can be seen as a Meet-Me-Room (MMR) for all connected data centers. The online portal lets customers establish connectivity on demand with other data centers, both in the Amsterdam metropolitan area and deeper toward the edge of the network.

Cloud Service Providers

The DCspine Point-of-Presence allows customers to easily deploy disaster recovery (DR) data center locations, thus executing their business continuity plans. It also enables them to ensure network continuity and uptime guarantees during IT infrastructure migration to the AMS1 facility.

The DCspine platform provides its services truly on demand: connectivity products purchased through the DCspine online portal – such as bandwidth or a point-to-multipoint connection – can be ordered, adjusted or deleted at any time, 24/7.
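A toy model of that order/adjust/delete lifecycle might look like the sketch below. The class and method names are invented for illustration and are not DCspine’s actual portal API; the 100G cap mirrors the per-connection limit mentioned above:

```python
class InterconnectPortal:
    """Minimal model of an on-demand data center interconnection portal."""

    def __init__(self, max_gbps=100):           # up to 100G per connection
        self.max_gbps = max_gbps
        self.connections = {}
        self._next_id = 1

    def order(self, a_end, b_end, gbps):
        """Provision a new cross-connect between two data centers."""
        if gbps > self.max_gbps:
            raise ValueError("exceeds per-connection capacity")
        cid = self._next_id
        self._next_id += 1
        self.connections[cid] = {"a": a_end, "b": b_end, "gbps": gbps}
        return cid

    def adjust(self, cid, gbps):
        """Change the bandwidth of an existing connection at any time."""
        if gbps > self.max_gbps:
            raise ValueError("exceeds per-connection capacity")
        self.connections[cid]["gbps"] = gbps

    def delete(self, cid):
        """Tear the connection down on demand."""
        del self.connections[cid]
```

The appeal of this CRUD-style model is contractual as much as technical: because connections are software-defined, capacity can be treated as an adjustable resource rather than a fixed circuit.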

“Eurofiber has developed the DCspine interconnection platform to capitalize on the need for highly flexible data center connectivity required by cloud service providers,” said Bart Oskam, CTO of the Eurofiber Group. “We applaud this facility for entering our growing partner ecosystem of data centers interconnected via software-defined networking. We expect the cloud service providers present in AMS1 to appreciate our investments in the DCspine interconnection platform.”

On-Demand Data Center Infrastructure

With the deployment of a PoP in the new Amsterdam data center, DCspine has expanded its ecosystem to over thirty connected facilities in the Netherlands. The data center ecosystem includes companies like Equinix, Interxion, EvoSwitch, NLDC, Digital Realty, Alticom, and Dataplace. DCspine will further invest in connecting other data center locations.

Part of a planned, targeted global roll-out of large-scale colocation data centers in selected markets, the Amsterdam flagship facility opened in Q4 2017 and will total 54,000 square feet (5,000 square meters) of colocation space upon completion. AMS1 features a highly energy-efficient design with a calculated pPUE figure of 1.04. Other facility locations worldwide are to follow soon.

Start Direct Cabinets

“DCspine is a highly innovative interconnection platform that fully meets our expectations when it comes to establishing flexible and scalable data center and networking infrastructure,” said Jochem Steman, the company’s CEO. “DCspine’s flexible contract terms also make this interconnection platform a seamless extension to our own on-demand colocation capabilities, as we uniquely offer Start Direct Cabinets – a pay-as-you-go colocation offering with month-to-month contract terms. Infrastructural and contractual flexibility will help cloud service providers achieve true elasticity within the colocation environment.”

“DCspine is actually revolutionizing the colocation data center market in the Netherlands,” added Mr. Steman. “They are taking a concept like cloud-neutrality to the next level by adding a new level of independence, data center neutrality. Strengthening our own on-demand colocation delivery model, this will allow customers to instantly and flexibly interconnect with other colocation data centers in the Amsterdam region and also a wide variety of edge locations in the Netherlands.”


Microsoft adds new monitoring and troubleshooting services to Azure Site Recovery 

Microsoft has announced a new monitoring service within its Azure Site Recovery, which will enable enterprises to have a deeper visibility into their business continuity plans.

The new comprehensive monitoring capabilities aim to integrate disaster recovery (DR) with Azure so that businesses can have a DR plan ready for all their IT applications.

Azure Site Recovery helps customers protect their mission-critical IT systems, maintain compliance, and be confident that business data is protected during any disaster. However, running a business continuity plan is somewhat complex, and the only way to check whether the plan is working is to perform periodic tests.

These tests need to be repeated regularly because of variables like configuration drift and resource availability. The comprehensive monitoring addresses this by giving deep visibility into business continuity objectives through a vault overview page.

The vault overview page includes a dashboard that provides in-depth visibility into specific components to detect error symptoms, along with guidance for remediation. It offers a simplified troubleshooting experience, letting users know the current health of the business continuity plan, as well as their current preparedness to react to any disaster, by assessing a broad range of replication parameters.

Users can switch between the dashboard pages for Site Recovery and Backup using the toggle switch on the overview page. The readiness model in the dashboard helps monitor resource availability and suggests configurations based on best practices.
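The core health check such a dashboard performs can be sketched as a staleness test against the configured RPO. The thresholds and the healthy/warning/critical labels below are illustrative, not Azure’s actual health model:

```python
from datetime import datetime, timedelta

def replication_health(last_recovery_point, rpo, now):
    """Classify a protected VM by how stale its latest recovery point is.

    last_recovery_point and now are datetimes; rpo is a timedelta
    holding the configured Recovery Point Objective. A recovery point
    older than the RPO is an error symptom worth surfacing.
    """
    lag = now - last_recovery_point
    if lag <= rpo:
        return "healthy"
    if lag <= 2 * rpo:
        return "warning"      # drifting past the objective
    return "critical"         # replication has likely stalled
```

A dashboard built on a check like this can aggregate per-VM results into the overall business continuity health it reports.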

Microsoft has also updated the Azure Site Recovery service for VMware-to-Azure replication with a new OVF (Open Virtualization Format) template. The template simplifies the getting-started experience and enables the replication of virtual machines in less than half an hour. The configuration server now includes a new web portal that acts as a single point for all actions taken on the configuration server.

The new update of Azure Site Recovery will use VMware Tools to install or update mobility services on all protected VMware VMs. Users won’t need to open firewall rules for Windows Management Instrumentation (WMI) and File and Printer Sharing services before replicating VMs from VMware environments into Azure. This will make the deployment of the Azure Site Recovery mobility service easier.

Also read: Microsoft’s Accelerated Networking for virtual machines now generally available

The comprehensive monitoring for Azure Site Recovery is now generally available. The VMware Tools-based mobility service installation will be available to all customers who update their configuration servers to version 9.13 or above.


“When it Comes to DR and BCP, Preventive Maintenance is Absolutely Critical”- Brad Ratushny, INetU

As Hurricane Sandy ravaged the East Coast last year, causing more than $68 billion in damage, it also brought significant attention to the disaster readiness of data centers. Various data center facilities struggled with power problems amid widespread flooding and utility outages, immediately impacting the businesses that rely on those resources.

While several data centers that downplayed the adverse geological contingencies were caught completely off guard, various state-of-the-art facilities with meticulous DR planning also found it difficult to stay up and running in the face of the unprecedented scale of Sandy.

The storm exposed gaping holes in the scope of existing disaster plans and underscored the need for better monitoring measures, preemptive testing, backup power, and several other improvements.

Ahead of the upcoming hurricane season, I spoke to Brad Ratushny, Director of Infrastructure at INetU, about how the Allentown, PA-based company stayed online and ensured zero downtime for its clients during Sandy. Brad, an industry veteran with over 15 years of experience, is in charge of all data centers in the INetU global footprint.

In this interview, he talks about several proactive steps data centers can take to mitigate the effects of natural disasters, including testing all backup systems, reviewing emergency procedures, final generator checks, and having backup fuel vendors on standby.

Q: Let’s begin with a brief introduction of yourself and a broad overview of INetU’s services.

A: My name is Brad Ratushny and I’m the Director of infrastructure at INetU. I have been with the company for 15 years and in my current role specifically for about 5 years. We at INetU have been providing dedicated business hosting and cloud services for more than 15 years. We pride ourselves on being the experts in engineering complex hosting solutions and having first-hand experience on compliance based projects in the US and throughout our global footprint.

Q: Please tell us about INetU’s Data Center facilities and the Infrastructure and Technical specs you have in place from the Disaster Recovery and the Business Continuity POV.

A: We have a total of 10 data centers in Seattle, Pennsylvania, Ashburn, Virginia, Amsterdam and Allentown, where we are headquartered. We’re a very risk-averse company and always try to ramp up whatever we do because we like to be a little bit more safe. For example, while the typical run-time for generators in the industry is 24 hours, our fuel tanks have capacity for 48 hrs of run-time.

In addition to N+1 UPSs and generators, we also have additional portable units to make sure that we’re always safe in case of power outages.

Also, even though we’re not in a hot zone for lightning strikes, we have lightning rods on the roofs of our facilities to deflect strikes during thunderstorms, and the lightning surge suppression in our data centers is UL listed.

In addition to having the proper data center infrastructure in place, we also take care of proper maintenance of the building envelope, roof etc. to keep everything up and running.

Q: What would you say are some of the key measures that data centers need to have in place to mitigate the adverse impacts of the natural disasters? Also, can you share with us some examples to show how you approach data center disaster planning?

A: The biggest thing in my mind, from a Director of Infrastructure perspective, is testing, testing and testing. When we’re talking about DR and BCP, preventive maintenance is absolutely critical. DR and BCP plans aren’t something that just sit on someone’s bookshelf. They’re living, breathing documents that are often the blueprint for how people adapt to emergencies. I actually rely on emergency preparedness plans quite a bit.

Largely, systems are absolutely critical, but the people that operate those systems are even more important. So training your team for specific situations is very important. What I mean is, when you train for Hurricane Sandy, you look at possible power disruption, cooling disruption and disruption to your various other infrastructure components; the same training applies to other potential natural disasters as well, but you need to look at what disaster you could be faced with in the near term and accordingly adjust, train and be prepared for that.

Let’s look at what happened on the East Coast when Hurricane Sandy hit last year. A lot of data centers on the coast had their disaster plans ready. They had up to 5 days of fuel on site to run their generators. Now I know of a few examples where the generators didn’t run at all because the fuel wasn’t maintained properly. They had the fuel, but it wasn’t rotated and maintained in a timely manner, so it started clogging up the generators, causing them to fail.

Also, what most people didn’t expect to happen was that the fuel trucks and fuel services couldn’t get the fuel they needed on the coast, because fuel delivery up and down the coastline became a challenge in itself. So instead of getting their fuel along the coast, which is the usual practice, they started coming inland to areas like ours, where we were concerned about a fuel shortage ourselves. When we came to know about this possibility, we went out and started setting up contracts with people in the Midwest and Western Pennsylvania to make sure we wouldn’t be impacted.

Fortunately, it never got to that point, but it’s a good example of how you can’t just live by your plan and need to think everything through level by level to respond to a disaster effectively. And that’s why I said that your DR and BCP plans are living, breathing documents. You need to train on them properly and make sure that you’re adaptable to emergencies as you go through.

Q: How do INetU’s Disaster Recovery capabilities ensure continuity in the event of a site-level failure?

A: Our primary focus is keeping our mission critical websites up and running, but plenty of our clients do actually use us for disaster recovery for their primary site. Again, I’ll use Hurricane Sandy as an example. During Sandy, we were just on-boarding a DR client and working with them to get the configuration setup. Their main configuration was somewhere along the coast and unfortunately, they were very heavily impacted at their primary facility. Even though they weren’t fully live here yet, they physically brought us their equipment. Now colocation is not a focus for us, but when we have an enterprise client who we are working with, we are flexible and we do whatever we need to do to help them.

So the client walks into our lobby with mud on their shoes and a server in hand, and we help them get their business back up and running. Ever since then, they have actually been using us for their primary site and they use theirs as a backup site. So we are proud that we go the extra mile to help our clients, and that’s what we are here for.

Q: How does INetU ensure that their data centers remain energy efficient?

A: We are constantly striving to increase efficiency in each area of our operation. In addition to helping our clients move to the cloud, where it makes sense for them, we also monitor and implement efficiencies in our data center operations as well as in the building envelope as a whole. These efficiencies can include replacing aged, less efficient infrastructure with newer, more efficient hardware, decommissioning underutilized equipment, or increasing insulation to improve a facility’s R-value.

Q: Wrapping up, since you’ve been in the industry for so long, what do you think are some of the questions organizations should ask while choosing an enterprise data center from the security POV?

A: First, you need to make sure that the data center has all the relevant industry certifications, like PCI DSS compliance, SOC 3, SOC 2/SSAE 16 Type II and ISAE 3402. Then you need to go a level deeper than that and check the physical security of the data center, security equipment, processes, and so on. You also need to check whether they have proper procedures to control, monitor, and record access to the facility. For example, some legacy data centers are relatively unmanaged and don’t have 24×7 security, which is fine for certain applications, but definitely not for enterprise environments.

So you need to look into all these factors, weigh them and think further how they apply to your business specifically.


Enterhost Launches New Cloud-based IT Product Line; Streamlined Web site and Brand

Web hosting provider Enterhost today launched a new cloud-based IT product line that is aimed at providing businesses with phone (Lync 2013 Enterprise Voice) and email systems (Exchange 2013), virtualization, server and application hosting, backup, and disaster recovery.

“Our expertise in cloud technology provides a platform for hosting phone and email systems, servers and applications in the cloud, as well as offering scalable backup, disaster recovery and virtualization solutions. Together, these products enable our customers to create as virtual an office as they desire.” – Kevin Valadez, co-president, Enterhost.

Enterhost has also launched a revamped brand and web site, which is being unveiled online this week. As part of the repositioning, the Enterhost team has removed language and knowledge barriers that may impede a customer from making the right decision for his or her company.

The company has also created a mascot to support companies that need expert IT solutions but lack a resident technical expert.

“Some business owners benefit from having a human element they can relate to as they navigate the often-murky waters of making decisions around IT for their companies. We designed Tech Tom to be inclusive and straight-talking, guiding our customers to the right technology for their businesses,” said Ben Tiblets, co-president, Enterhost.

“Our goal is to provide any business, whether it has 20 desktops or 1,000, with practical, effective ways to leverage technology to compete in today’s marketplace,” he added.

With the new product line-up, applications and servers that were formerly required to run in the office can now run in the cloud.

EnterHost Cloud Services

“We are always looking at ways we can improve our offerings, and as our customers have recovered from the recent economic downturn, we realized they needed different solutions to improve their operations,” said Kevin Valadez, co-president, Enterhost.

“From small- to medium-sized businesses to our enterprise-level clientele, our customers were asking for functional products that heighten collaboration, protect assets and streamline costs,” he added.

Founded in 2000, Enterhost provides a wide range of cloud-based IT business solutions, and specializes in Windows applications for office phone and business email, as well as cloud storage, backup, disaster recovery, virtualization and colocation.

For more information, click here.


ExaGrid & CA Technologies To Deliver Customized Disk Backup & Recovery For Physical & Virtual Servers

CA Technologies and ExaGrid Systems today announced that they’ve teamed up to deliver a customized backup solution, which pairs CA ARCserve® D2D image-based backup and recovery software and ExaGrid disk storage and deduplication appliance hardware.


The new solution will help mid-market and small enterprise organizations slash storage requirements and related costs while shortening backup windows, speeding recovery, and delivering more reliable offsite disaster recovery than slower tape devices.

The combination of CA ARCserve D2D and ExaGrid will help IT organizations back up complex heterogeneous environments that combine virtual and physical environments in a cost-effective manner.

It delivers significantly faster backup and recovery, and post-process data deduplication, to meet resource constraints without affecting backup performance.
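Post-process deduplication of the kind described can be sketched with content hashing: each unique block is stored once, and the backup stream is rebuilt from an ordered list of block hashes. This is a simplified illustration, not ExaGrid’s actual implementation:

```python
import hashlib

def deduplicate(blocks):
    """Hash-based dedup: store each unique block exactly once.

    Returns (store, recipe): store maps a SHA-256 digest to the block's
    bytes; recipe is the ordered digest list needed to rebuild the stream.
    """
    store, recipe = {}, []
    for block in blocks:
        digest = hashlib.sha256(block).hexdigest()
        store.setdefault(digest, block)   # keep only the first copy
        recipe.append(digest)
    return store, recipe

def restore(store, recipe):
    """Reassemble the original byte stream from the dedup store."""
    return b"".join(store[d] for d in recipe)
```

Running this as a post-process step, after the backup window closes, is what lets the ingest path stay fast while the storage savings are claimed afterward.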

“Customers’ data protection requirements are evolving very quickly, and they want to make sure they get the right mix of functionality – but also efficiency, speed and simplicity – at the right ownership costs,” said Chris Ross, Vice president, Worldwide Sales, Data Management, CA Technologies.

“By working with ExaGrid, CA Technologies is enabling its customers to fulfill their requirements,” he added.

“With data growing exponentially across both physical and virtual servers while IT budgets remain tight, we are seeing great demand for a simplified solution like this that delivers reliable, resource-efficient backup and recovery in mixed infrastructure environments,” said Marc Crespi, vice president, Product Management, ExaGrid.

“We look forward to showing customers the power of a combined ExaGrid and CA ARCserve D2D solution, and how they can achieve faster recovery of their business systems and the best scalability in the industry as their needs grow,” he added.

The CA ARCserve D2D software and ExaGrid disk storage appliances are now available through authorized resellers and service providers.

For more information, click here.
