
Singapore Unicorn Acronis Releases The World’s First Complete Cyber Protection Solution, Tailored To Fight Pandemic Cyberthreats And Improve Remote Work

Enabling service providers to cut costs and defend their clients against cyberthreats through AI-powered integration of data protection and cybersecurity – with total Zoom security, COVID-19 URL filtering and a 100% detection rate

Acronis, a global leader in cyber protection, announced today the availability of Acronis Cyber Protect, an innovative new cyber protection solution that integrates backup, disaster recovery, next-gen anti-malware, cybersecurity, and endpoint management tools into one service. Acronis Cyber Protect eliminates complexity, improves productivity, and enables managed service providers (MSPs) to efficiently make security a focal point of their portfolio – ensuring their business can meet client expectations for data, application, and system security and protection.

Related Reading: Enhancing on-premise solutions market in India: ZNet becomes a distributor of Acronis

The primary challenges MSPs face are complexity, inadequate security, and low productivity. The lack of integration among the patchwork of vendor solutions they use to build their services weakens security, complicates employee training, and hurts productivity and usability – all while increasing support and licensing costs across multiple vendors. Acronis Cyber Protect changes the game by giving MSPs a single solution to deliver backup, anti-malware, security, and endpoint management capabilities such as vulnerability assessments, URL filtering, and patch management.

These integrated capabilities create new opportunities for MSPs to deliver enhanced cybersecurity. With Acronis Cyber Protect, MSPs can proactively avoid cyberattacks, minimize downtime, ensure fast and easy recoveries, and automate the configuration of client protection to counter the latest cyberthreats. The result is improved margins, better profitability, easier SLA compliance, greater performance, and decreased churn – all at a lower cost.

Planned since last year, the solution is custom-made to help businesses that were caught off guard by the pandemic – a cyber protection life vest for MSPs struggling to cater to their customers’ changing needs.

Suggested Reading: India vulnerable to ransomware attacks: DSCI-PwC report. How to protect your organization in 2020?

AV-Test, the respected German security institute known for putting malware security solutions through rigorous testing, evaluated Acronis Cyber Protect on a computer running Windows 10 Professional – testing both static and dynamic detection rates against a set of 6,932 malicious Windows executable (PE) files. Acronis scored a verified 100% detection rate and delivered a perfect result in the lab’s false-positive test, causing zero false positives. More information about the test is available here.

Acronis recognizes that even with these test results and AI-enhanced protection, it is impossible to achieve 100% security at every moment. If a threat does slip through the security measures, Acronis Cyber Protect provides the best business continuity to ensure businesses are able to restore affected data, applications, and systems quickly and effectively.

Helping MSPs secure remote work amid COVID-19

Acronis rapidly developed a holistic set of features for Acronis Cyber Protect to help MSPs and their clients smoothly and safely meet the challenges of today’s remote work environments including:

  • To help with finances: The ability to pause billing for non-paying customers while preserving their backups; Acronis #CyberFit Financing and promotions through July 31, 2020; No extra charges for Acronis Cyber Protect and no billing for Acronis Cyber Backup for new clients
  • To help protect employees: Voice-enabled, touchless control for remote connections to office machines for end-users; Security alerts from Acronis Cyber Protection Operations Centers related to COVID-19
  • For remote work devices and users: Default templates with secure protection plans for remote work devices; Native VPN capability; Secure file sync and share integration; Remote data wipe

The industry needs unified, modern cyber protection

“Traditional backup solutions are dead because they are not secure enough, and traditional anti-virus applications do not protect data from modern cyberthreats. Legacy solutions are no longer able to counter the dangers today’s businesses face,” said Acronis Founder and CEO Serguei “SB” Beloussov. “Service providers need to offer their clients integrated cyber protection that covers all Five Vectors of Cyber Protection – safety, accessibility, privacy, authenticity, and security. With Acronis Cyber Protect, service providers have the ability to deliver solutions that ensure their clients are #CyberFit and ready to face any new threats.”

Acronis Cyber Protect is a demonstration of the power of integration. It features one unified licensing model, one agent and backend, one management console, one user interface, and a high level of integration among services – sharing data and automating actions to greatly improve the security posture for endpoint and edge devices.

“The unique cyber security features of this solution are as yet unrivalled on the market; they will allow businesses in India to operate remotely, securely, and indefinitely – even after the pandemic is over,” said Munesh Jadoun, CEO, ZNet Technologies, a distributor of Acronis solutions.

“The unique integration of AI-powered data protection and cybersecurity in Acronis Cyber Protect enables Ingram Micro Cloud to satisfy the cyber protection needs of service providers, small and medium businesses, and enterprise-edge workloads,” said Tim Fitzgerald, Vice President, Cloud Channel Sales, Ingram Micro.

“Acronis is among the companies on the forefront for integrated data protection and cyber protection,” said Phil Goodwin, Research Director, Infrastructure Systems, Platforms and Technologies Group, IDC. “We believe that Acronis Cyber Protect is among the most comprehensive attempts to provide data protection and cybersecurity to date.”

The Acronis Cyber Protect roadmap expands the availability of Acronis Cyber Protect to businesses worldwide, with an on-premises edition scheduled for release in the second half of 2020. Until then, any customer can leverage the advantages of Acronis Cyber Cloud as a managed service through their MSP. A personal version of the solution is also planned for release in Q3 2020.

Availability and Promotions

Visit the Acronis website to sign up online or request a fully functional trial. To support businesses during the COVID-19 pandemic, Acronis Cyber Protect is available at the same cost as Acronis Cyber Backup Cloud for all service providers until July 31, 2020.

Read Next: ZNetLive rolls out Acronis Backup Cloud to provide businesses with constant data availability in changing threat landscape

Articles Cloud News New Products

“IT managers can’t tell you how 45% of their bandwidth is consumed”: Dirty Secrets of Network Firewalls report

One in four IT managers could not identify around 70% of their network traffic, revealed a new report, “The Dirty Secrets of Network Firewalls”. On average, 45% of network traffic was going unidentified.

The report is the result of a survey of 2,700 IT decision makers across ten countries by Sophos, a leading network and endpoint security provider.

The most crucial finding of the survey was that most firewalls were failing to do their job adequately: organizations lacked visibility into their network traffic, and traffic that could not be seen could not be controlled.

Dirty Secrets of Network Firewalls

  • 84% of IT pros concerned about security due to lack of visibility into network traffic

84% of the respondents agreed that the lack of application visibility was a serious security concern for their business that could hamper effective network management and leave them exposed to ransomware, malware, data breaches, and other advanced threats.

The increased use of encryption, browser emulation, and advanced evasion techniques were among the factors impairing network firewalls’ ability to provide adequate visibility into application traffic.

  • Organizations spent an average of seven working days per month in remediating infected machines

According to the report, small enterprises spent an average of five working days per month remediating 13 infected machines, while large enterprises spent an average of ten working days remediating 20 machines.

Overall, on average, the organizations spent around seven working days to remediate 16 infected machines per month.

Organizations were looking for an integrated network and endpoint security solution that could halt these threats: 99% of IT managers wanted firewall technology that could automatically isolate infected computers.

79% of IT managers wanted better protection from their current firewall, while 97% wanted protection from a single vendor that allowed direct sharing of security status information.

  • Other risks to businesses due to lack of visibility into network traffic

Other than the security risks, the lack of visibility concerned organizations on other aspects as well.

52% of IT managers said that lack of network visibility negatively impacted the business productivity. They could not prioritize the bandwidth for critical applications.

“For industries that rely on custom software to meet specific business needs, an inability to prioritize these mission critical applications over less important traffic could be costly,” revealed Sophos report.

50% of the respondents who invested in custom applications were unable to identify the traffic. It significantly impacted the return on investment.

  • Key findings of “The Dirty Secrets of Network Firewalls” survey:
  1. An average of 45% of network traffic went unidentified, and hence couldn’t be controlled.
  2. 84% of organizations were concerned about security.
  3. 52% of organizations were concerned about productivity.
  4. 79% of IT pros wanted better protection from their current firewall.
  5. Organizations dealt with 10–20 infections per month.

Also read: Human error and misconfigured cloud servers responsible for most data breaches in 2017: IBM Security Report

The survey was conducted in October and November 2017, interviewing IT decision makers in ten countries: the US, Canada, Mexico, France, Germany, the UK, Australia, Japan, India, and South Africa.


Intel’s new processor to enable new capabilities for cloud providers, extend intelligence to edge 

Intel has rolled out a new system-on-chip (SoC) processor called Intel Xeon D-2100 processor to fulfil the requirements of edge applications and network or data center applications that are restricted by power and space.

The workloads from sophisticated devices like industrial IoT sensors and self-driving cars demand more compute, analytics, and data protection closer to commercial and consumer endpoint devices, which generate and act on data at the edge.

The Intel Xeon D-2100 processor addresses the needs of cloud service providers (CSPs) and network operators to provide high performance and capacity, with lower power consumption by leveraging the innovative Intel Xeon Scalable platform.

“To seize 5G and new cloud and network opportunities, service providers need to optimize their data center and edge infrastructures to meet the growing demands of bandwidth-hungry end users and their smart and connected devices,” said Sandra Rivera, senior VP and general manager of the Network Platforms Group at Intel. “The Intel Xeon D-2100 processor allows service providers and enterprises to deliver the maximum amount of compute intelligence at the edge or web tier while expending the least power.”

Using the new SoC processor, network operators and CSPs can process more data closer to endpoint devices, increasing application performance and capacity while reducing latency and power consumption.

For example, the new processors will help communications service providers (CoSPs) offer multi-access edge computing (MEC), which allows software applications to tap into local content and real-time information about local access-network conditions, reducing congestion on the mobile core network, Intel explained.

Intel has integrated several capabilities into the Intel Xeon D-2100 processors, including hardware-enhanced security, quad-port 10 Gigabit Ethernet, 16 SATA ports, Intel AVX-512, and enhanced Intel QuickAssist Technology with up to 100 Gbps of cryptography acceleration.

Compared with the previous-generation Intel Xeon D-1500 processors, the new processors offer up to 2.9x the network performance, 2.8x the storage performance, and 1.6x the general compute performance.

Also read: Nvidia prohibits datacenter deployment of GeForce GPUs

The Xeon D-2100 processors are well suited to a broad range of applications, including 5G technology, augmented and virtual reality, enterprise networking services, web tier, and content delivery networks.

Intel Xeon D-2100 processors are now generally available, and Intel is working with its partners to deliver solutions to joint end-customers.


Dell EMC uses AMD EPYC processors for new PowerEdge servers  

Dell EMC announced new PowerEdge servers, powered by AMD EPYC 7000 series processors, for software-defined environments and high-performance computing.

The new PowerEdge servers – PowerEdge R6415, PowerEdge R7415, and PowerEdge R7425 – are highly scalable and flexible enough to support today’s modern data centers. The R6415 and R7415 are single-socket servers, while the R7425 is a dual-socket server.

“As the bedrock of the modern data center, customers expect us to push server innovation further and faster,” said Ashley Gorakhpurwalla, president, Server and Infrastructure Systems at Dell EMC. “As customers deploy more IoT solutions, they need highly capable and flexible compute at the edge to turn data into real-time insights; these new servers are engineered to deliver that while lowering TCO.”

The single-socket capabilities of AMD EPYC help the new PowerEdge platforms deliver up to 20% lower total cost of ownership. AMD EPYC processors let Dell EMC servers handle demanding workloads, including hybrid-cloud applications, virtualized storage area networks (vSAN), big data analytics, and dense virtualization.

The design features include EPYC processors with up to 32 cores, up to 4TB of memory capacity across 8 memory channels, and 128 PCIe (Peripheral Component Interconnect Express) lanes. The PowerEdge servers also support NVMe (Non-Volatile Memory Express) drives, which are optimized for database and analytics workloads.

“We are pleased to partner again with Dell EMC and integrate our AMD EPYC processors into the latest generation of PowerEdge servers to deliver enhanced scalability and outstanding total cost of ownership,” said Forrest Norrod, senior VP and general manager of the Datacenter and Embedded Solutions Business Group, AMD. “Dell EMC servers are purpose built for emerging workloads like software-defined storage and heterogeneous compute and fully utilize the power of AMD EPYC. Dell EMC always keeps the server ecosystem and customer requirements top of mind, this partnership is just the beginning as we work together to create solutions that unlock the next chapter of data center growth and capability.”

EPYC processors enable support for high-bandwidth, dense GPU and FPGA configurations in these PowerEdge servers for HPC applications.

The PowerEdge R6415, R7415, and R7425 are new additions to the 14th generation of the Dell EMC PowerEdge server portfolio. Like the existing servers in the portfolio, they offer intelligent automation with iDRAC9 and Quick Sync 2 management support.

The Meltdown and Spectre vulnerabilities, which affected computing and mobile devices built on Intel processors, have made vendors look for alternatives to Intel. AMD’s exposure to the vulnerabilities was less pronounced than Intel’s, but whether this will benefit AMD in the long run remains to be seen.

Also read: Dell contemplating ‘reverse merger’ with VMware to go public  

Dell EMC’s new PowerEdge servers are now globally available.


Google focuses on building an AI-first world with its new Cloud TPUs

Artificial intelligence continues to lead the way among technologies driving new software designs and capabilities, and Google’s new AI chip is the latest addition.

At the company’s annual developer conference held recently, CEO Sundar Pichai announced a new AI chip that could be a technological breakthrough in the world of machine learning and automation.

With this, Google signaled the increasing role of AI in software and hardware development.

The chips, known as Cloud Tensor Processing Units (TPUs), are designed to speed up machine learning operations and are an upgrade from the first generation of chips Google announced at last year’s I/O.

The second-generation chips deliver nearly 180 teraflops of performance, far beyond the first generation, which could only handle inference. The new chips can also be used to train machine learning models – an important step, since training is what teaches a machine learning program to identify and differentiate between images, data, and other inputs.

Pichai also announced the creation of machine-learning supercomputers based on Cloud TPUs, to be used along with high-speed data connections. He said, “We are building what we think of as AI-first data centers. Cloud TPUs are optimized for both training and inference. This lays the foundation for significant progress (in AI).”

Google emphasized the application of machine learning and AI models to deliver better, performance-oriented results. The new TPU is also the result of collective efforts toward building an AI-first world.

However, Google did not say anything clear about selling the chip on the open market; rather, it will let companies rent access to the chip via its cloud computing service.


eleven2 Upgrades Dedicated Server Range With RAID Protection, Backups and more

Web hosting provider eleven2 today announced that it has upgraded its entire dedicated server range.

The news comes a few days after eleven2 announced its plans to launch a secondary VPS location in Europe.

In an email conversation with Daily Host News today, Jon Eichler, VP Systems & Development, eleven2, said “We have upgraded all the CPUs on our dedicated server range to current generation. We have also made changes to ensure even our entry-level customers have RAID protection.”

“At eleven2, we take data integrity very seriously, and have gone the extra mile to ensure all of our dedicated server offerings are covered by both RAID and backups,” he added.

eleven2 has also updated pricing on the dedicated server plans. The dedicated servers are available in four offerings:

  • DS-100: Intel i3-3220 with 4 GB memory, 3 TB bandwidth and 500 GB RAID 1 HDDs + 1 TB backup. Priced at $199/mo.
  • DS-200: Intel E3-1270v2 with 8 GB memory, 6 TB bandwidth and 2x 1 TB RAID 1 HDDs + 1 TB backup. Priced at $299/mo.
  • DS-300: Hexa-Core Xeon E5-2620 with 16 GB memory, 10 TB bandwidth and 4x 1 TB RAID 10 HDDs + 2 TB backup. Priced at $499/mo.
  • DS-400: Dual Hexa-Core Xeon E5-2620 with 32 GB memory, 10 TB bandwidth and 4x 2 TB RAID 10 HDDs + 4 TB backup. Priced at $699/mo.

As customers’ sites grow, they can increase RAM, HDD space, bandwidth, port speed, and more. eleven2 offers DDR2/DDR3 memory upgrades on all servers; SATA2, SAS, and SSD drives; premium 3ware-based RAID 1/5/10 cards starting at $39/mo; and port speeds upgradable to 1,000 Mbps.

All dedicated servers come with free cPanel & WHM and free data migration, and there are no setup fees. All dedicated servers are fully managed by eleven2’s in-house support team and are provisioned in the company’s Los Angeles datacenter.

Here is more information on eleven2’s dedicated servers.

Earlier this year eleven2 unveiled its VPS services named “Virtual Premium Servers”.


“POP Helps Users to Set up & Manage Their Online Presence Easily” - Juan Diego Calle, .CO Internet

Yesterday saw the launch of POP.CO, a new platform that helps businesses get online instantly. The company behind POP.CO, .CO Internet S.A.S., is well known for coming up with fresh ideas to help entrepreneurs and startups establish a successful online presence.

POP.CO, the company’s latest endeavor, is a bundle of online services that comes with a .CO domain name, a custom POP Page and an email address powered by Google Apps.

Entrepreneurs, startups, and small businesses that aren’t particularly tech-savvy don’t need to hire a web developer to build a professional web presence, or spend extra time finding a web address, hosting provider, email account, and so on – a central POP account handles all that. It also lets users add and install proprietary or third-party apps on their domains directly through the platform.

Currently in Beta, POP is free for 15 days. Once the free trial ends, users can subscribe for $5 per user per month for the whole POP package.

We had a quick Q&A with Juan Diego Calle, CEO, .CO Internet, on additional features of POP.CO and the future roadmap. If you have any more questions or feedback about POP, you can get in touch with the team directly.

We are continuing to enhance POP in a variety of ways, including the addition of many more integrated apps, as well as new tools and services to help our users build and grow their online presence with ease.
– Juan Diego Calle, CEO, .CO Internet.

Juan Diego Calle, CEO, .CO Internet.

Q: How much flexibility does POP provide in terms of choosing the hosting provider and third party apps?

A: While the basic POP bundle includes the domain name, a Google Apps account, and the POP page, the user has the ability to use any hosting provider they choose.

POP includes a simple DNS editor that allows the user, with a click of a button, to set the appropriate DNS records to point their domain to a number of popular hosting applications. If the user’s hosting provider isn’t included in the simple editor, POP has an advanced DNS editor where the user can create and manage any DNS record.

Additionally, for those advanced users who wish to use another party’s DNS we provide the user with the means to change their name servers completely. The goal of POP is to provide users with the tools to easily set up and manage their online presence while at the same time offering them the flexibility to use any hosting provider they choose.
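The two paths described above can be pictured as ordinary DNS records. The fragment below is an illustration only – the hostnames and addresses are invented, not actual POP defaults – but it shows the kind of records the simple editor sets versus what delegating to another party’s name servers looks like:

```
; Illustration only – hostnames and addresses are invented.
; "Simple editor" preset: point the domain at a hosting provider.
example.co.       IN  A      203.0.113.10         ; provider's web server
www.example.co.   IN  CNAME  example.co.

; "Advanced" option: delegate the whole zone to another party's DNS.
example.co.       IN  NS     ns1.otherdns.example.
example.co.       IN  NS     ns2.otherdns.example.
```

The A/CNAME pair keeps DNS hosted with POP while sending web traffic elsewhere; the NS records hand the entire zone to an external DNS provider.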

Q: Although the official blog post makes passing mention of it, how exactly can the POP platform be customized for new gTLDs?

A: The POP platform has been designed to work with any TLD. Some customization of the business rules may be necessary for some TLDs, but this is to be expected and something we are prepared to handle. We believe POP is an excellent fit for many of the new TLDs, which will be competing for distribution channels. POP provides them with a simple, fully managed solution for controlling their own distribution.

Q: Once the beta period ends, and the service is available for $5/mo, what additional value will POP provide other than being a centralized app-marketplace?

A: In coming months we will continue to enhance POP in a number of different ways. We’ll be adding additional integrated apps that will provide users with more choices, as well as some new tools and services. The focus will remain on apps that improve the user’s online presence, such as website builders, communication platforms, and marketing tools.

Q: Will a .CO domain (which is relatively expensive) also be available in the $5/mo POP package once the beta period ends? Also, can users choose to retain their .CO and discontinue with POP?

A: Yes. The .CO domain will be available in the $5/month POP package once the beta period ends. POP offers a bundled package that includes the .CO domain name, a Google Apps account, and the basic POP web page. After the free trial, a user can choose to subscribe for only $5/mo. There are no additional costs for the domain name, as its cost is bundled into the monthly fee. This is excellent value, as the retail price of a simple Google Apps account alone is $5 per user per month, without any of the extra benefits that come with a POP account, including the custom .CO domain name, the POP web page, and integrated access to additional third-party apps.

While POP does not support users who wish to have the domain name without the other bundled services, users do have the option to transfer their domain name to the registrar of their choice – or to cancel their account at any time. There are no annual commitments.

Q: What criteria will POP use when judging third-party apps submitted by developers for integration in the platform? Also, what additional value can developers expect from submitting their apps to POP rather than to other self-serve marketplaces?

A: We have not yet established a firm set of criteria for reviewing third-party apps. In general, we’re looking for awesome apps that directly address the needs of our target market and can be fully integrated via an API. While the details are still evolving, you can learn more here.

Q: What extra enhancements and features can users expect to be integrated in the POP platform with time?

A: As noted above, we are continuing to enhance POP in a variety of ways, including the addition of many more integrated apps, as well as new tools and services to help our users build and grow their online presence with ease. We will also focus on supporting the unique needs of our new TLD clients as they roll out instances of the POP platform in their own namespaces.


“HybridCluster Allows Hosters to Differentiate, Sell New Services & Drive up Margins”- Luke Marsden, HybridCluster

There is something great about products and services that are developed by industry veterans to relieve their own pain points, to figure a way around the very problems they face day in and day out, and in the process build something that is a valuable contribution to the industry as a whole. This is where necessity, ingenuity, shortage and perspicacity hold hands in order to give birth to something that has substantial impact on the work cycle of the entire industry ecosystem.

In May this year, when HybridCluster completed a $1 million fundraising round and launched HybridCluster 2.0, I was supposed to prepare an interview questionnaire for Luke Marsden, CEO, HybridCluster. I knew little about the product at that time, but somewhere during my research I decided that HybridCluster is not just a very interesting product – it is a success story.

Why? I’ll let the interview do the talking. But I’ll leave you with this interesting excerpt on the company blog, where Luke talks about the genesis of HybridCluster:

Running our own hosting company since 2001 exposed all the problems. We were continuously battling hardware, software and network issues. After a few too many late-night trips to the data centre, I thought to myself: there has to be a better way. Studying theoretical computer science at Oxford University helped me crystallize my vision for an ambitious new project — one which uses ZFS, local storage, graph theory, and a perfect combination of open source components to create a platform uniquely aligned to solving the problems faced by hosters and cloud service providers.

The HybridCluster software allows applications to be auto-scaled. It can detect a spike in traffic and, rather than throttling the spike, burst that application to a full dedicated server by moving other busy things on that server onto quieter servers.

– Luke Marsden, CEO, HybridCluster.

Luke Marsden, CEO, HybridCluster.

Q: Let’s begin with a brief introduction of yourself and a broad overview of HybridCluster.

A: Hi. 🙂 Thanks for inviting me to be interviewed! It’s really great to be on DailyHostNews.

My background is a combination of Computer Science (I was lucky enough to study at Oxford University, where I graduated with a first class degree in 2008) and a bunch of real world experience running a hosting company.

HybridCluster is really a radical new approach to solving some of the tricky problems every hosting company has while trying to manage their infrastructure: it’s an ambitious project to replace storage, hypervisor and control panel with something fundamentally better and more resilient.

In fact I have a bigger vision than that: I see HybridCluster as a new and better approach to cloud infrastructure – but one which is backward compatible with shared hosting. Finally, and most importantly, HybridCluster allows hosters to differentiate in the market, sell new services, and drive up margins – whilst also reducing the stress and cost of operating a web hosting business. We help sysadmins sleep at night!

Q: Did the idea for a solution like HybridCluster stem from issues you faced first-hand during your decade-long experience in the web hosting industry?

A: Yes, absolutely. Without the real-world pain of having to rush off to the data center in the middle of the night, I wouldn’t have focused my efforts on solving the three real world problems we had:

The first problem is that hardware, software and networks fail resulting in website downtime. This is a pain that every hoster will know well. There’s nothing like the horrible surge of adrenaline you get when you hear the Pingdom or Nagios alert in the middle of the night – or just as you get to the pub on a Friday night – you just know it’s going to ruin the next several hours or your weekend. I found that I had become – like Pavlov’s dog – hard-wired to fear the sound of my phone going off. This was the primary motivation to invent a hosting platform which is automatically more resilient.

Other problems we had in the hosting company included websites getting spikes in traffic – so we knew we needed to invent a hosting platform which could auto-scale an application up to dedicated capacity – and users making mistakes and getting hacked – so we knew we needed to invent something which exposes granular snapshots to the end user, so they can log in and roll back time themselves if they get hacked – or if they accidentally delete a file.
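The user-visible, granular snapshots Luke describes can be sketched as a toy model (an illustration only – the class and method names are invented, and HybridCluster itself builds on ZFS snapshots rather than anything like this):

```python
# Toy model of per-site snapshots that an end user can roll back
# on their own, as described above. Names here are invented.
class SnapshottedSite:
    def __init__(self, files=None):
        self.files = dict(files or {})
        self.snapshots = []  # list of (label, copy-of-files)

    def snapshot(self, label):
        # Record an immutable copy of the site's current state.
        self.snapshots.append((label, dict(self.files)))

    def rollback(self, label):
        # Restore the most recent snapshot with this label.
        for name, files in reversed(self.snapshots):
            if name == label:
                self.files = dict(files)
                return
        raise KeyError(label)

site = SnapshottedSite({"index.html": "v1"})
site.snapshot("before-upgrade")
site.files["index.html"] = "hacked!"   # site gets defaced...
site.rollback("before-upgrade")        # ...user rolls back themselves
print(site.files["index.html"])        # → v1
```

The point of the design is that rollback is per-site and self-service: no administrator intervention, no restoring a whole server from backup.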

Q: Can you please throw some light on the modus operandi of HybridCluster? How exactly does it help web hosts with automatic detection and recovery in the event of outages?

A: Sure. I decided early on that a few key design decisions were essential:

Firstly, any system which was going to stop me having to get up in the middle of the night would have to have no single point of failure. This is easy to say but actually quite hard to implement! You need some distributed system smarts in order to be able to make a platform where the servers can make decisions as a co-operative group.

Secondly, I decided that storage belongs near the application, not off on a SAN somewhere. Not only is the SAN itself a single point of failure, but it also adds a lot of cost to the system and can often slow things down.

Thirdly, I decided that full hardware virtualization is too heavy-weight for web application hosting. I could already see the industry going down the route of giving each customer their own VM, but this is hugely wasteful! It means you’re running many copies of the operating system on each server, and that limits you to how many customers you can put on each box. OS level virtualization is a much better idea, which I’ll talk about more later.

Basically, I designed the platform to suit my own needs: as a young hoster, I was scared of outages, I couldn’t afford a SAN, and I knew I couldn’t get the density I needed to make money with virtualization. 🙂

Q: How does the OS virtualisation you use differ from the hypervisor-based virtualisation used by other solutions?

A: OS level virtualization (or “containers”) is simply a better way of hosting web applications. Containers are higher density: because each container shares system memory with all the other containers, the memory on the system is more effectively “pooled”. They perform better: there’s no overhead of simulating the whole damn universe just to run an app. And they’re more scalable: each app can use the whole resource of a server, especially when combined with the unique capability that HybridCluster brings to the table – the ability to live-migrate containers between servers in the cluster, and between data centers.

Live migration is useful because it allows things to get seamlessly moved around. This has several benefits: administrators can easily cycle servers out of production in order to perform maintenance on them simply by moving the applications off onto other servers, but also, perhaps more excitingly, it allows applications to get auto-scaled – the HybridCluster software can detect a spike in traffic, and rather than throttling the spike (like CloudLinux), it can burst that application to a full dedicated server by moving other busy things on that server onto quieter servers. This is also a unique feature.

Q: How does HybridCluster enable an end user to self-recover lost files and data from even less than a minute ago? This feature, if I’m not wrong, isn’t available with any other solution out there.

A: It’s quite simple really. Every time that website, database or email data changes – down to 30-second resolution or finer – we take a new ZFS snapshot and replicate the history to other nodes in the cluster. ZFS is a core enabling technology for HybridCluster, and we’ve built a smart partition-tolerant distributed filesystem on top of it! Each website, database or mailbox gets its own independently replicated and snapshotted filesystem.

Anyway, these replicas act both as a user-facing backup and as a hot spare. It’s a simple idea, but it’s actually a revolution in backup technology. Rather than keeping a backup separate from your RAID or other replication system – the problem with a replication system like RAID is that it will happily replicate a failure, and the problem with backups is that they take ages to restore – our “hybrid” approach of replicated snapshots kills two birds with one stone, bringing backup restore times down to seconds and letting users fetch files, emails and database records out of snapshots taken at very fine-grained intervals.

Indeed, HybridCluster is the first hosting platform to expose this feature to the end user and we have seen a number of clients adopt our technology for this benefit alone!
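The rolling-snapshot scheme described above can be sketched in a few lines. The following is a toy model, not HybridCluster’s actual code: the `SnapshotSet` class and its interval logic are assumptions for illustration, standing in for what the real platform does with per-site ZFS snapshot chains.

```python
from bisect import bisect_right

class SnapshotSet:
    """Toy model of one site's snapshot history (hypothetical names;
    the real platform keeps a chain of ZFS snapshots per filesystem)."""

    def __init__(self, interval=30):
        self.interval = interval      # seconds between snapshots
        self.snapshots = []           # sorted snapshot timestamps

    def take(self, now):
        # Take a new snapshot only once the interval has elapsed.
        if not self.snapshots or now - self.snapshots[-1] >= self.interval:
            self.snapshots.append(now)

    def rollback_target(self, when):
        # Latest snapshot taken at or before `when`, or None if none exists.
        i = bisect_right(self.snapshots, when)
        return self.snapshots[i - 1] if i else None

site = SnapshotSet()
for t in range(0, 301, 10):           # simulate five minutes of activity
    site.take(t)

# A user hacked at t=200 rolls back to the snapshot just before it.
print(site.rollback_target(199))      # -> 180
```

The point of the sketch is the lookup: because snapshots are kept at a fine, regular interval, "roll back to just before the incident" is a cheap binary search rather than a slow restore from separate backup media.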

Q: Is the low-cost storage system able to deliver the efficiency of high-end SANs? Also, what additional value does ZFS data replication bring into the picture?

A: I’m glad you mentioned ZFS again 🙂 One of the really nice things about being backed onto ZFS is that hosters using HybridCluster can choose how powerful they want to make their hosting boxes. Remember, with HybridCluster the idea is that every server has local storage and uses it to keep the data close and fast. But because ZFS is the same awesome technology which powers big expensive SANs from Oracle, you can also chuck loads of disks in your hosting boxes and suddenly every one of your servers is as powerful as a SAN in terms of IOPS. In fact, one of our recent hires, a fantastic chap by the name of Andrew Holway, did some hardcore benchmarking of ZFS versus LVM and found that ZFS completely floored the Linux Logical Volume Manager when you throw lots of spindles at it.

I won’t go into too much detail about how ZFS achieves awesome performance, but if you’re interested, try Googling “ARC”, “L2ARC” and “ZIL”. 🙂

The other killer feature in ZFS is that it checksums all the data that passes through it – this means the end of bit-rot. Combined with our live backup system across nodes, that makes for a radically more resilient data storage system than you’ll get with Ext4 on a bunch of web servers, or even with a SAN solution.

There’s lots more – call us on +44 (0)20 3384 6649 and ask for Andrew who would love to tell you more about how ZFS + HybridCluster makes for awesome storage.

Q: How does HybridCluster achieve fault-tolerant DNS?

A: Something I haven’t mentioned yet is that HybridCluster supports running a cluster across multiple data centers, so you can even have a whole data center fail and your sites can stay online!

So quite simply the cluster allocates nameservers across its data centers, so if you have DC A and B, with nodes A1, A2, B1, B2, the ns1 and ns2 records will be A1 and B1 respectively. That gives you resilience at the data center level (because DNS resolvers support failover between nameservers). Then, if a node fails, or even if a data center fails, the cluster has self-reorganising DNS as a built-in feature.

We publish records with a low TTL, and we publish multiple A records for each site: our AwesomeProxy layer turns HybridCluster into a true distributed system – you can send a request for anything (website, database, mailbox, or even an FTP or SSH session) to any node and it’ll get revproxied correctly to the right backend node (which might dynamically change, e.g. during a failover or an auto-scaling event). So basically, under all failure modes (server, network, data center) we maximize the chances that the user will quickly – if not immediately – get a valid A record which points to a server capable of satisfying that request.
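The self-reorganising nameserver allocation described above can be sketched as follows. This is a hypothetical reconstruction of the scheme as explained in the interview – the `allocate_nameservers` helper, the DC names and the node names are all made up for illustration:

```python
def allocate_nameservers(datacenters, failed=frozenset()):
    """Pick one live nameserver per data center (ns1, ns2, ...).

    `datacenters` maps a DC name to its ordered list of nodes.
    Spreading ns records across DCs means resolvers can fail over
    between nameservers even if a whole data center goes down.
    """
    records = {}
    ns = 1
    for dc, nodes in sorted(datacenters.items()):
        live = [n for n in nodes if n not in failed]
        if live:                       # skip a DC only if it is entirely down
            records[f"ns{ns}"] = live[0]
            ns += 1
    return records

cluster = {"A": ["A1", "A2"], "B": ["B1", "B2"]}

print(allocate_nameservers(cluster))                      # {'ns1': 'A1', 'ns2': 'B1'}
print(allocate_nameservers(cluster, failed={"A1"}))       # {'ns1': 'A2', 'ns2': 'B1'}
print(allocate_nameservers(cluster, failed={"A1", "A2"})) # {'ns1': 'B1'}
```

When a node fails the cluster would re-run this allocation and republish the records with a low TTL, which is what makes the DNS layer "self-reorganising".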

In other words HybridCluster makes the servers look after themselves so that you can get a good night’s sleep.

Q: How do you see the future of data center industry?

A: That’s an interesting question 🙂 I’ll answer it for web applications (and databases + email), specifically.

Personally I see cloud infrastructure as a broken promise. Ask the man (or woman) on the street what they think cloud means, and they’ll probably tell you about increased reliability, better quality of service, etc. But really, all that cloud does today is provide less powerful, unreliable infrastructure on top of which software engineers are expected to build reliable software. That’s a big ask!

My vision is for a fundamentally more reliable way of provisioning web applications – one where the underlying platform takes responsibility for implementing resilience well, once, at the platform level. Developers are then free to deploy applications knowing that they’ll scale well under load, and get failed over to another server if the physical server fails, or even if the whole data center goes pop.

I think that’s the promise of PaaS, and my vision is for a world where deploying web applications gets these benefits by default, without millions of sysadmins in hosting companies all over the world having to get paged in the middle of the night to go fix stuff manually. Computers can be smarter than that, it’s just up to us to teach them how. 🙂

Q: Can you tell our readers a bit about the team at HybridCluster?

A: Since we got funded in December 2012 we’ve been lucky enough to be able to grow the team to 9 people, and I’m really proud of the team we’ve pulled together.

We’re a typical software company, and so unfortunately our Dave to female ratio is 2:0. That is, we have two Daves and no females (but we’re working on that!). Anyway, some highlights in the team are Jean-Paul Calderone, who’s the smartest person I’ve ever met, and the founder of the Twisted project. Twisted is an awesome networking framework and without Twisted – and JP’s brain – we wouldn’t be where we are today. Also on the technical side, we’ve got Rob Haswell, our CTO, who’s a legend, and doing a great job of managing the development of the project as we make it even more awesome. We’ve also just hired one of JP’s side-kicks on Twisted, Itamar Turner-Trauring, who once built a distributed airline reservation system and sold it to Google.

We’ve also got Andriy Gapon, FreeBSD kernel hacker extraordinaire, without whom we wouldn’t have a stable ZFS/VFS layer to play with. Dave Gebler is our Control Panel guru and we’re getting him working on our new REST API soon, so he’ll become a Twisted guru soon 😉 And our latest hire on support, Marcus Stewart Hughes, is a younger version of me – a hosting geek – he bought his first WHMCS license when he was 15, so I knew we had to hire him.

On the bizdev side, we’ve got Dave Benton, a legend in his own right, who comes from an enterprise sales background with IBM, Accenture and Star Internet; he’s extremely disciplined and great at bringing process into our young company. Andrew Holway is our technical pre-sales guy, who previously built thousand-node clusters for the University of Frankfurt, and he loves chatting about ZFS and distributed systems. He’s also great at accents and can do some pretty awesome card tricks.

Q: To wrap up, with proper funding in place for development of the products, what’s in the bag for Q3 and Q4 of 2013?

A: We’re working on a few cool features for the 2.5 release later this year: we’re going to have first class Ruby on Rails and Python/Django support, mod_security to keep application exploits out of the containers, Memcache and Varnish support. We’re also working on properly supporting IP-based failover so we don’t have to rely on DNS, and there are some massive improvements to our Control Panel on their way.

It’s an exciting time to be in hosting 😉 and an even more exciting time to be a HybridCluster customer!

Thanks for the interview and the great questions.


LeaseWeb Releases LeaseWeb Cloud Load Balancer for Public Cloud

Infrastructure as a Service (IaaS) provider LeaseWeb today added a load balancing feature to its Public Cloud. The new feature will enable customers to optimize their use of cloud resources by managing the distribution of traffic across instances.

The news comes nearly two months after LeaseWeb launched its global Content Delivery Network.

Load balancing divides traffic evenly across servers and lets customers optimize the use of their resources and minimize server response time. Traffic is redirected according to a set of rules as defined by the customer for the individual TCP ports on their public load balancer address.

These rules range from simple traffic balancing (for example, for mail servers) to advanced rules with sticky policies for web applications. Rules can be managed via LeaseWeb’s Customer Portal, where customers can configure an unlimited number of rules for a multitude of protocols and ports.


The LeaseWeb Cloud load balancer supports a variety of algorithms to balance web traffic:

  • Source Based, always assigns requests from the same source IP to the same instance.
  • Round Robin, cycles through all the instances in order.
  • Least Connections, allocates the traffic to the instance with the lowest number of connections.
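The three policies above can be sketched in a few lines of Python. This is purely illustrative – LeaseWeb’s actual implementation is not public – and the `Balancer` class and instance names are hypothetical:

```python
from itertools import cycle
from zlib import crc32

class Balancer:
    """Minimal sketch of the three balancing policies listed above."""

    def __init__(self, instances):
        self.instances = instances
        self._rr = cycle(instances)                # round-robin cursor
        self.connections = {i: 0 for i in instances}

    def source_based(self, source_ip):
        # The same source IP always hashes to the same instance.
        return self.instances[crc32(source_ip.encode()) % len(self.instances)]

    def round_robin(self):
        # Cycle through all the instances in order.
        return next(self._rr)

    def least_connections(self):
        # Instance currently serving the fewest connections.
        return min(self.instances, key=self.connections.__getitem__)

lb = Balancer(["web1", "web2", "web3"])
assert lb.source_based("203.0.113.7") == lb.source_based("203.0.113.7")
print([lb.round_robin() for _ in range(4)])   # ['web1', 'web2', 'web3', 'web1']
lb.connections.update(web1=5, web2=2, web3=9)
print(lb.least_connections())                 # web2
```

Source-based hashing gives session stickiness without shared state, round robin spreads load evenly across identical backends, and least-connections adapts when requests vary widely in duration.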

The load balancing feature is available immediately to customers of all globally deployed LeaseWeb Cloud platforms, at a set price per month, regardless of the request load.

“There are a lot of confusing billing models being used for load balancing services, often charging per incoming request. We wanted to bring a bit of transparency to the table. Our load balancer offers the same robust options to direct traffic flow, at a set price unmatched by other providers,” said Robert van der Meulen, Cloud Manager, LeaseWeb.

“Our load balancing feature is fully integrated in our Customer Portal, ensuring it works flawlessly with our other cloud services. In combination with our global network, which has a capacity of 3.5 Tbps, our cloud products are well equipped to handle vast amounts of traffic,” he added.

Here is more information on the Load Balancer for LeaseWeb Cloud.

Earlier this year, LeaseWeb released its Law Enforcement Transparency Report for 2012.


Aquilent Launches New Cloud Resource Management Portal: Olympus Powered by Aquilent™

Aquilent today announced the launch of its Cloud Resource Management Portal, Olympus Powered by Aquilent.

The new solution has been developed solely for federal agencies and provides an application- and environment-centric model for managing the cloud, helping agencies meet the increasingly complex requirements of the Federal Cloud First Policy.

Olympus enables federal customers to bridge the gap between native cloud tools and their individualized cloud infrastructures, and significantly reduces the need for in-house cloud expertise, so federal organizations can take full advantage of their cloud resources while remaining focused on their mission.

Olympus works within any FISMA- or DIACAP-compliant environment, integrating with the Cloud Service Provider (CSP) security model and with the agency’s environment. It empowers federal users to take full control of all cloud assets in one easy-to-use portal.

Customers choosing to utilize Aquilent’s cloud hosting environment can now take full advantage of Olympus and can:

  • Manage all cloud resources in consistent, unified and logical categories.
  • Reduce system administration costs with the intuitive, full-featured interface.
  • Easily view all billing and utilization activity down to the resource level.
  • Ensure the branded agency’s cloud portal is Section 508 compliant.
  • Deploy the portal securely within the agency’s cloud environment.
  • Easily manage access and fully audit all activities with a single sign-on.
  • Manage and reduce environmental costs through scheduled tasks.
  • Integrate business systems and processes through the flexible portal platform.

“Our federal customers have one main focus, and that is to serve citizens effectively as set forth within their missions,” stated David Fout, CEO, Aquilent.

“The cloud should be a strategic enabler and with Olympus Powered by Aquilent, our customers will be able to leverage, monitor and manage their cloud assets in support of their missions in the most efficient and cost-effective manner,” he added.

“Olympus enables our customers to rapidly adopt the cloud while saving money from both hosting and the often hidden costs of the labor needed to support the cloud,” said Mark Pietrasanta, CTO, Aquilent.

“With this fully secure, easy to use portal, our customers can further delegate cloud management tasks to the people who most need to perform them, ensuring an accurate audit trail along the way,” he added.

Here is more information on Olympus Powered by Aquilent.
