
Major Outage Downs Microsoft Cloud Services

Microsoft’s web services were hit by a major outage on Tuesday afternoon, taking down Xbox Live, Hotmail, Office 365, Skype, OneDrive and other services.

This is the second time the services have gone down this month.

Users are unable to sign in to Microsoft-hosted accounts, and gamers are having problems downloading games on Xbox One, Xbox 360 and other platforms running on the company’s gaming network.

“Our engineers and developers are actively continuing to work to resolve the issue causing some members to have problems finding previously-purchased content or purchasing new content. Stay tuned, and thanks for your patience,” the Xbox support forum reported. Outage-monitoring sites were flooded with over 200 reports, further confirming the outage. The Office 365 service health dashboard acknowledged the problem and said Microsoft was working on it.

The status page of one affected service also admitted, “Users may be intermittently unable to sign in to the service. As the issue is intermittent in nature, users may be able to reload the page or make another attempt successfully. We’re analyzing system logs to determine the next troubleshooting steps.”

Azure users also faced login errors. As per the Microsoft Azure Status history, “Between 17:30 and 18:55 UTC on 21 Mar 2017, a subset of Azure customers may have experienced intermittent login failures while authenticating with their Microsoft Accounts. Engineers deployed a fix to mitigate the issue. Users may need to exit or refresh their browsers in order to successfully sign in.”

After getting some services like Skype, Outlook and Hotmail back up, Microsoft said, “We’ve determined that the previously resolved issue had some residual impact to the service configuration for OneDrive. We’re performing an analysis of the affected systems to determine what further steps are needed for full recovery.”


“HybridCluster Allows Hosters to Differentiate, Sell New Services & Drive up Margins”- Luke Marsden, HybridCluster

There is something great about products and services that are developed by industry veterans to relieve their own pain points, to figure a way around the very problems they face day in and day out, and in the process build something that is a valuable contribution to the industry as a whole. This is where necessity, ingenuity, shortage and perspicacity hold hands in order to give birth to something that has substantial impact on the work cycle of the entire industry ecosystem.

In May this year, when HybridCluster completed a $1 million fundraising round and launched HybridCluster 2.0, I was asked to prepare an interview questionnaire for Luke Marsden, CEO, HybridCluster. I knew little about the product at the time, but somewhere during my research I decided that HybridCluster is not just a very interesting product, but a success story.

Why? I’ll let the interview do the talking. But I’ll leave you with this interesting excerpt on the company blog, where Luke talks about the genesis of HybridCluster:

Running our own hosting company since 2001 exposed all the problems. We were continuously battling hardware, software and network issues. After a few too many late-night trips to the data centre, I thought to myself: there has to be a better way. Studying theoretical computer science at Oxford University helped me crystallize my vision for an ambitious new project — one which uses ZFS, local storage, graph theory, and a perfect combination of open source components to create a platform uniquely aligned to solving the problems faced by hosters and cloud service providers.

The HybridCluster software allows applications to get auto-scaled. It can detect a spike in traffic, and rather than throttling the spike, it can burst that application to a full dedicated server by moving other busy things on that server onto quieter servers.

– Luke Marsden, CEO, HybridCluster.

Luke Marsden, CEO, HybridCluster.

Q: Let’s begin with a brief introduction of yourself and a broad overview of HybridCluster.

A: Hi. 🙂 Thanks for inviting me to be interviewed! It’s really great to be on DailyHostNews.

My background is a combination of Computer Science (I was lucky enough to study at Oxford University, where I graduated with a first class degree in 2008) and a bunch of real world experience running a hosting company.

HybridCluster is really a radical new approach to solving some of the tricky problems every hosting company has while trying to manage their infrastructure: it’s an ambitious project to replace storage, hypervisor and control panel with something fundamentally better and more resilient.

In fact I have a bigger vision than that: I see HybridCluster as a new and better approach to cloud infrastructure – but one which is backwardly compatible with shared hosting. Finally, and most importantly – HybridCluster allows hosters to differentiate in the market, sell new services, drive up margins – whilst also reducing the stress and cost of operating a web hosting business. We help sysadmins sleep at night!

Q: Did the idea for a solution like HybridCluster stem from issues you faced first-hand during your decade-long experience in the web hosting industry?

A: Yes, absolutely. Without the real-world pain of having to rush off to the data center in the middle of the night, I wouldn’t have focused my efforts on solving the three real world problems we had:

The first problem is that hardware, software and networks fail resulting in website downtime. This is a pain that every hoster will know well. There’s nothing like the horrible surge of adrenaline you get when you hear the Pingdom or Nagios alert in the middle of the night – or just as you get to the pub on a Friday night – you just know it’s going to ruin the next several hours or your weekend. I found that I had become – like Pavlov’s dog – hard-wired to fear the sound of my phone going off. This was the primary motivation to invent a hosting platform which is automatically more resilient.

Other problems we had in the hosting company included websites getting spikes in traffic – so we knew we needed to invent a hosting platform which could auto-scale an application up to dedicated capacity – and users making mistakes and getting hacked – so we knew we needed to invent something which exposes granular snapshots to the end user so they can log in and roll back time themselves if they get hacked – or if they accidentally delete a file.

Q: Can you please throw some light on the modus operandi of HybridCluster? How exactly does it help web hosts with automatic detection and recovery in the event of outages?

A: Sure. I decided early on that a few key design decisions were essential:

Firstly, any system which was going to stop me having to get up in the middle of the night would have to have no single point of failure. This is easy to say but actually quite hard to implement! You need some distributed system smarts in order to be able to make a platform where the servers can make decisions as a co-operative group.

Secondly, I decided that storage belongs near the application, not off on a SAN somewhere. Not only is the SAN itself a single point of failure, but it also adds a lot of cost to the system and can often slow things down.

Thirdly, I decided that full hardware virtualization is too heavy-weight for web application hosting. I could already see the industry going down the route of giving each customer their own VM, but this is hugely wasteful! It means you’re running many copies of the operating system on each server, and that limits you to how many customers you can put on each box. OS level virtualization is a much better idea, which I’ll talk about more later.

Basically, I designed the platform to suit my own needs: as a young hoster, I was scared of outages, I couldn’t afford a SAN, and I knew I couldn’t get the density I needed to make money with virtualization. 🙂

Q: How does OS virtualisation used by you differ from Hypervisor based Virtualisation used by other Virtualised solutions?

A: OS-level virtualization (or “containers”) is simply a better way of hosting web applications. Containers are higher density: because each container shares system memory with all the others, the memory on the system is more effectively “pooled”. They perform better: there’s no overhead of simulating the whole damn universe just to run an app. And they’re more scalable: each app can use the whole resource of a server, especially when combined with the unique capability that HybridCluster brings to the table: the ability to live-migrate containers between servers in the cluster and between data centers.

Live migration is useful because it allows things to get seamlessly moved around. This has several benefits: administrators can easily cycle servers out of production in order to perform maintenance on them simply by moving the applications off onto other servers, but also, perhaps more excitingly, it allows applications to get auto-scaled – the HybridCluster software can detect a spike in traffic, and rather than throttling the spike (like CloudLinux), it can burst that application to a full dedicated server by moving other busy things on that server onto quieter servers. This is also a unique feature.
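
The burst-and-migrate decision Luke describes can be sketched roughly as follows. This is a toy placement heuristic under assumed inputs (per-server load numbers and app lists), not HybridCluster’s real scheduler:

```python
def plan_burst(load_by_server, hot_app, apps_by_server):
    """Dedicate the hot app's server to it by planning live-migrations
    of the other apps on that server onto the quietest remaining servers."""
    # Find the server currently hosting the hot app.
    home = next(s for s, apps in apps_by_server.items() if hot_app in apps)
    to_move = [a for a in apps_by_server[home] if a != hot_app]
    # Quietest servers first, excluding the one being vacated.
    targets = sorted((s for s in load_by_server if s != home),
                     key=load_by_server.get)
    # Spread the displaced apps round-robin over the quiet servers.
    moves = [(app, targets[i % len(targets)]) for i, app in enumerate(to_move)]
    return home, moves
```

Given `load_by_server = {"a": 0.9, "b": 0.2, "c": 0.4}` and apps `{"a": ["hot", "x", "y"], "b": ["z"], "c": []}`, the plan vacates server `a` by moving `x` to `b` and `y` to `c`, leaving `a` dedicated to the hot app.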

Q: How does HybridCluster enable an end user to self-recover lost files and data from even less than a minute ago? This feature, if I’m not wrong, isn’t available with any other solution out there.

A: It’s quite simple really. Every time that website, database or email data changes, down to 30 second resolution or less, we take a new ZFS snapshot and also replicate the history to other nodes in the cluster. ZFS is a core enabling technology for HybridCluster, and we’ve built a smart partition-tolerant distributed filesystem on top of it! Each website, database or mailbox gets its own independently replicated and snapshotted filesystem.

Anyway, these replicas act both as a user-facing backup and a hot spare. It’s a simple idea, but this is actually a revolution in backup technology – rather than having a backup separate from your RAID or other replication system (where the problem with a replication system like RAID is that it will happily replicate a failure, and the problem with backups is that they take ages to restore) our “hybrid” approach to replicated snapshots kills two birds with one stone, bringing backup restore times down to seconds, and also letting users fetch files/emails/database records out of snapshots which are taken with very fine-grained accuracy.
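
For readers unfamiliar with ZFS, one snapshot-and-replicate cycle can be sketched with the stock `zfs` command line. The dataset name is illustrative and HybridCluster’s actual replication layer is far more involved, but the primitives are real: `zfs snapshot` is instantaneous, and an incremental `zfs send -i` streams only the blocks changed since the previous snapshot:

```python
import time

def snapshot_cmds(dataset, prev=None, now=None):
    """Build the zfs commands for one snapshot-and-replicate cycle.
    `dataset` is a per-site filesystem, e.g. 'tank/sites/example.com'."""
    now = now or time.strftime("%Y%m%d-%H%M%S")
    snap = f"{dataset}@auto-{now}"
    cmds = [["zfs", "snapshot", snap]]
    if prev:
        # Incremental stream since `prev`; pipe it to `zfs recv` on a peer node.
        cmds.append(["zfs", "send", "-i", prev, snap])
    else:
        cmds.append(["zfs", "send", snap])  # full stream for the first replica
    return snap, cmds
```

Run every 30 seconds per dataset (and received on other nodes), this is what turns each website’s filesystem into both a rolling backup and a hot spare.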

Indeed, HybridCluster is the first hosting platform to expose this feature to the end user and we have seen a number of clients adopt our technology for this benefit alone!

Q: Is the low-cost storage system able to deliver the efficiency of high-end SANs? Also, what additional value does ZFS data replication bring into the picture?

A: I’m glad you mentioned ZFS again 🙂 One of the really nice things about being backed onto ZFS is that hosters using HybridCluster can choose how powerful they want to make their hosting boxes. Remember, with HybridCluster, the idea is that every server has local storage and uses that to keep the data close and fast. But because ZFS is the same awesome technology which powers big expensive SANs from Oracle, you can also chuck loads of disks in your hosting boxes and suddenly every one of your servers is as powerful as a SAN in terms of IOPS. In fact, one of our recent hires, a fantastic chap by the name of Andrew Holway, did some hardcore benchmarking of ZFS versus LVM and found that ZFS completely floored the Linux Volume Management system when you throw lots of spindles at it.

I won’t go into too much detail about how ZFS achieves awesome performance, but if you’re interested, try Googling “ARC”, “L2ARC” and “ZIL”. 🙂

The other killer feature in ZFS is that it checksums all the data that passes through it – this means the end to bit-rot. Combined with our live backup system across nodes, that makes for a radically more resilient data storage system than you’ll get with Ext4 on a bunch of web servers, or even a SAN solution.
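
The checksum idea is easy to illustrate in miniature. ZFS actually checksums every block inside the filesystem and self-heals from replicas; the sketch below only shows the verify-on-read principle, using Python’s hashlib:

```python
import hashlib

def store(blob):
    """Save data alongside a checksum of its contents."""
    return blob, hashlib.sha256(blob).hexdigest()

def read(blob, digest):
    """Verify the checksum on every read; any silent corruption
    (bit-rot) surfaces as an error instead of bad data."""
    if hashlib.sha256(blob).hexdigest() != digest:
        raise IOError("bit-rot detected: checksum mismatch")
    return blob
```

In ZFS the mismatch additionally triggers repair from a redundant copy, which is what makes the end-to-end checksumming more than just an alarm.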

There’s lots more – call us on +44 (0)20 3384 6649 and ask for Andrew who would love to tell you more about how ZFS + HybridCluster makes for awesome storage.

Q: How does HybridCluster achieve fault-tolerant DNS?

A: Something I haven’t mentioned yet is that HybridCluster supports running a cluster across multiple data centers, so you can even have a whole data center fail and your sites can stay online!

So quite simply the cluster allocates nameservers across its data centers, so if you have DC A and B, with nodes A1, A2, B1, B2, the ns1 and ns2 records will be A1 and B1 respectively. That gives you resilience at the data center level (because DNS resolvers support failover between nameservers). Then, if a node fails, or even if a data center fails, the cluster has self-reorganising DNS as a built-in feature.

We publish records with a low TTL, and we publish multiple A records for each site: our AwesomeProxy layer turns HybridCluster into a true distributed system – you can send any request for anything (website, database, mailbox, or even FTP or SSH session) to any node and it’ll get revproxied correctly to the right backend node (which might dynamically change, e.g. during a failover or an auto-scaling event). So basically under all failure modes (server, network, data center) we maximize the chances that the user will quickly – if not immediately – get a valid A record which points to a server which is capable of satisfying that request.
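
A rough sketch of the record-allocation logic described above. The node addresses and the 60-second TTL are illustrative, not HybridCluster’s actual values:

```python
def build_dns(datacenters, site, healthy):
    """Allocate one nameserver per data center (preferring a healthy node),
    and publish a low-TTL A record for each healthy node that can proxy
    requests for `site`."""
    ns = {f"ns{i + 1}": next((ip for ip in nodes if ip in healthy), nodes[0])
          for i, nodes in enumerate(datacenters.values())}
    a_records = [(site, ip, 60)  # low TTL so failover propagates quickly
                 for nodes in datacenters.values()
                 for ip in nodes if ip in healthy]
    return ns, a_records
```

Because resolvers fail over between nameservers, putting ns1 and ns2 in different data centers keeps the zone resolvable even if a whole data center drops off the network, and the low-TTL A records let the cluster re-point traffic within about a minute.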

In other words HybridCluster makes the servers look after themselves so that you can get a good night’s sleep.

Q: How do you see the future of data center industry?

A: That’s an interesting question 🙂 I’ll answer it for web applications (and databases + email), specifically.

Personally I see cloud infrastructure as a broken promise. Ask the man (or woman) on the street what they think cloud means, and they’ll probably tell you about increased reliability, better quality of service, etc. But really, all that cloud does today is provide less powerful, unreliable infrastructure on top of which software engineers are expected to build reliable software. That’s a big ask!

My vision is for a fundamentally more reliable way of provisioning web applications – one where the underlying platform takes responsibility for implementing resilience well, once, at the platform level. Developers are then free to deploy applications knowing that they’ll scale well under load, and get failed over to another server if the physical server fails, or even if the whole data center goes pop.

I think that’s the promise of PaaS, and my vision is for a world where deploying web applications gets these benefits by default, without millions of sysadmins in hosting companies all over the world having to get paged in the middle of the night to go fix stuff manually. Computers can be smarter than that, it’s just up to us to teach them how. 🙂

Q: Tell our readers a bit about the HybridCluster team.

A: Since we got funded in December 2012 we’ve been lucky enough to be able to grow the team to 9 people, and I’m really proud of the team we’ve pulled together.

We’re a typical software company, and so unfortunately our Dave to female ratio is 2:0. That is, we have two Daves and no females (but we’re working on that!). Anyway, some highlights in the team are Jean-Paul Calderone, who’s the smartest person I’ve ever met, and the founder of the Twisted project. Twisted is an awesome networking framework and without Twisted – and JP’s brain – we wouldn’t be where we are today. Also on the technical side, we’ve got Rob Haswell, our CTO, who’s a legend, and doing a great job of managing the development of the project as we make it even more awesome. We’ve also just hired one of JP’s side-kicks on Twisted, Itamar Turner-Trauring, who once built a distributed airline reservation system and sold it to Google.

We’ve also got Andriy Gapon, FreeBSD kernel hacker extraordinaire, without whom we wouldn’t have a stable ZFS/VFS layer to play with. Dave Gebler is our Control Panel guru and we’re getting him working on our new REST API soon, so he’ll become a Twisted guru soon 😉 And our latest hire on support, Marcus Stewart Hughes, is a younger version of me – a hosting geek – he bought his first WHMCS license when he was 15, so I knew we had to hire him.

On the bizdev side, we’ve got Dave Benton, a legend in his own right, who’s come out of an enterprise sales background with IBM, Accenture and Star Internet, he’s extremely disciplined and great at bringing process into our young company. Andrew Holway is our technical pre-sales guy, previously building thousand-node clusters for the University of Frankfurt, and he loves chatting about ZFS and distributed systems. He’s also great at accents and can do some pretty awesome card tricks.

Q: To wrap up, with proper funding in place for development of the products, what’s in the bag for Q3 and Q4 of 2013?

A: We’re working on a few cool features for the 2.5 release later this year: we’re going to have first class Ruby on Rails and Python/Django support, mod_security to keep application exploits out of the containers, Memcache and Varnish support. We’re also working on properly supporting IP-based failover so we don’t have to rely on DNS, and there are some massive improvements to our Control Panel on their way.

It’s an exciting time to be in hosting 😉 and an even more exciting time to be a HybridCluster customer!

Thanks for the interview and the great questions.


“We Provide Constant Visibility Into a Company’s Cloud Spending”- Mat Ellis, Cloudability

Self-service, one of the greatest features of cloud computing, makes the lives of enterprises jumping on the cloud bandwagon easier in more than one way. It makes it possible for them to have software, security, infrastructure and many other full-blown enterprise-capable applications up and running in minutes. All their website data can sit in the cloud on infrastructure they don’t even own or operate.

But such ease-of-use and flexibility also bring less visibility of resources, less control over computing, unintended expenses and ballooned bills. A big majority of companies are unaware that they have deployed services and resources they don’t need or aren’t properly utilizing.

A large number of workloads, lax monitoring and a lack of usage-alert thresholds lead to bill shock for companies that find keeping track of their spending too herculean a task. Add unwieldy spreadsheets heavy with billing data and a decentralized financial view, and they’re in for a nightmare.

We spoke to Mat Ellis, co-founder and CEO of Cloudability, about the importance of avoiding unexpected or unnecessary cloud computing expenses and how Cloudability helps companies do so.

Cloudability is designed to be used by anyone in an organization, from engineers and IT/Ops pros to finance, management and C-suite team members. This means that cloud cost and usage data is accessible by everyone who needs it without having to mess around with spreadsheets and powerpoint presentations.

– Mat Ellis, CEO, Cloudability.

Mat Ellis, CEO, Cloudability.

Q: Let’s begin with a broad overview of Cloudability.

A: The cloud has spurred a revolutionary increase in growth. Pinterest, Instagram and Netflix are all growing at unheard-of rates. But managing cloud resources during that kind of growth presents a huge problem.

Cloudability helps companies overcome the growing pains so that they can continue to take full advantage of the cloud revolution and grow at unheard-of speeds.

We provide comprehensive tools in an easy-to-use SaaS format that measure cloud infrastructure costs and usage throughout your organization, allowing our users to:

  • Find cloud resources that they’re paying for but not using.
  • Track their cloud spending and usage trends over time, and plan for the future.
  • View their spending in the context of important business actions, e.g. “we spend $2 per user per month on the cloud”.
  • Predict and track ROI on large cloud purchases like AWS Reserved Instances.

Q: Can you please throw some light on the modus-operandi of Cloudability? How exactly does it help organizations in mitigating their cloud costs?

A: Most people assume that Cloudability’s primary benefit is in mitigating cloud costs. The reality is that we have a lot of customers who would like nothing more than to increase their cloud usage.

We provide constant visibility into a company’s cloud spending so that Operations, IT, Finance and Management are always confident that any dollar spent on the cloud is a dollar well-spent. It’s critical to develop that level of assurance when you’re spending so much money on a variable resource like cloud computing.

Q: Is there any difference in how Cloudability works with respect to the service provider? I mean, does your monitoring process differ between a client using GitHub and another using Amazon?

A: Ideally, we’d love to provide the same level of visibility for all of the cloud services our customers use. But we’re sometimes restricted by the amount of data that the service provider provides.

For instance, AWS provides hourly billing and usage data with a lot of granularity. That’s allowed us to build out a very deep analytics interface to track and analyze a user’s AWS costs and usage. For other providers, though, we only have access to daily or even monthly billing data.

Regardless of the level of integration, though, our users love having one report at the end of the month that contains all of their cloud spending.

Q: Founded in 2011, Cloudability passed $250M in cloud spending in a very short span of time. When you look back, what is one factor that you would say contributed most to your growth?

A: Our growth is a product of two factors:

First, we were the first company to recognize that the cloud was going to radically change the way companies managed their IT spending. That gave us a big head start and helped us reach a lot of cloud users right when they started feeling the pain.

Second, the cloud market itself is growing and our customers are growing with it. There are companies that we started working with two years back, who had one team of a few people working on AWS. Now their entire company is moving to the cloud … and creating their own Cloudability accounts.

Q: Do you see any major shift in the market’s perception of ‘Cloud’ during these 3 years?

A: Absolutely. At a fundamental level, the cloud has gone from “It’s coming. Are you ready?” to “It’s here. Are you on board?”.

While there’s still some discussion to be had about things like maximizing security in some applications or uptime in others, the conversation is now less about whether or not you should use the cloud and more about how you should be using the cloud.

That’s why we’re seeing revenue predictions of $20B/year by 2020 for cloud services like AWS. It’s also why we’re seeing traditional IT companies like VMware and Oracle coming out with their own public cloud services.

Q: You recently launched your new product, AWS Cost Analytics. It has a lot of features that I think can be of real value to heavy AWS users. Can you talk in detail about how each of them helps in enhanced monitoring and analysis of cloud costs?

AWS tag mapping:

AWS tag mapping is, hands-down, our most popular feature. Ever since AWS started allowing their users to tag resources, those users have been desperate for an easy way to apply those tags to spending and usage data.

Cloudability’s AWS tag mapping lets finance and management teams break down their AWS costs from one or more accounts by cost centers like department, project or client. Meanwhile, operations and engineering teams can see usage and optimization data broken out along functional lines like environment, team or role.

AWS Product-level spending reports:

Seeing your AWS spending by product (EC2, S3, RDS, etc.) is hard enough with one account. If you have more than one account, it becomes a huge monthly chore involving hours with a spreadsheet. Cloudability automates the entire process by pulling in billing data from all of your AWS accounts and giving you an easy way to see that spending broken down by product, time frame or anything else you can think of.

Segmented reports for multiple AWS accounts:

In larger companies using AWS, it’s pretty common to find multiple AWS accounts that have been set up for different teams, departments, projects, etc. This is the old school way of breaking down the company’s costs.

Inevitably, though, you need to look at the costs across all of the company’s accounts broken down by another dimension. Suppose you have three different AWS accounts for three different products. Within each product’s account, you have a dev environment, a testing environment and a production environment. So how do you show what your company is spending on testing across all three of your products?

Now it’s simple. You can tag your resources in all three accounts as environment=dev, environment=testing, or environment=production. Then, with all three accounts added to Cloudability, you can view your aggregate spending broken down by the tag “environment.” Now your finance and management teams can make better, more informed decisions with a better understanding of their costs.
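
The roll-up described above amounts to grouping cost line items by a tag, whichever account they came from. A minimal sketch of that aggregation; the field names and figures are illustrative, not Cloudability’s actual data model:

```python
from collections import defaultdict

def spend_by_tag(line_items, tag="environment"):
    """Aggregate cost line items from any number of AWS accounts by one tag."""
    totals = defaultdict(float)
    for item in line_items:
        totals[item["tags"].get(tag, "untagged")] += item["cost"]
    return dict(totals)

items = [  # three accounts, one per product, each tagged by environment
    {"account": "product-a", "cost": 120.0, "tags": {"environment": "testing"}},
    {"account": "product-b", "cost": 80.0,  "tags": {"environment": "testing"}},
    {"account": "product-c", "cost": 900.0, "tags": {"environment": "production"}},
]
# spend_by_tag(items) -> {"testing": 200.0, "production": 900.0}
```

The "untagged" bucket is the useful side effect: it surfaces exactly the resources nobody has claimed, which is often where the waste hides.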

Customized Metric Reports:

Let’s face it. There are a lot of AWS dashboards out there; plenty of static reports that can show you a simple analysis of your company’s AWS cost and usage data. But what happens when you need to see the data in a new way? Broken down by a new dimension?

Cloudability’s AWS Cost Analytics tool was built on a foundational principle that the best person to design a report for your organization is you. You know which questions need to be answered better than anyone else. So, instead of just creating another AWS dashboard, we’ve created a platform that allows IT, DevOps and Engineering pros to create any report they need.


Q: Since you have over 6,000 clients, can you tell us three things most organizations overspend on when it comes to cloud services?

A: First, they often don’t accurately know their own spending. Finding out who is actually using the cloud can be challenging, even in smaller organizations. And even when you think you know, finding out exactly what is being spent is very time consuming to keep up to date and accurate. We often see spend drop 20% when new users sign up, simply because they know their spending for the first time. You can’t control what you aren’t measuring.

Secondly, engineers are notorious for over-provisioning. They will readily turn on new services but sometimes aren’t so diligent in turning them off when they are no longer needed. And when you ask what all these services are being used for, or what will happen when you turn them off, you can get a mouthful of technical details that can be hard to parse. So make sure you know how to gauge what’s actually being used. (Fortunately, tools like ours can point out services that appear under-utilized.)

Finally, the biggest and most spectacular overages are often caused by human error and/or malice. Scripts that turn servers on but not off and security compromises are the leading causes. In these cases we’ve seen overages in the hundreds of thousands of dollars, but they’re easy to detect if you’re watching your costs on a daily basis.
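
Catching such overages by watching costs daily can be as simple as comparing each day against a trailing average. A toy sketch, in which the seven-day window and 50% threshold are arbitrary choices, not Cloudability’s algorithm:

```python
def flag_anomalies(daily_spend, window=7, threshold=0.5):
    """Flag any day whose spend deviates from the trailing-window average
    by more than `threshold` (e.g. a script that turned servers on but
    never off, or a compromised account mining away)."""
    flagged = []
    for i in range(window, len(daily_spend)):
        avg = sum(daily_spend[i - window:i]) / window
        if avg and abs(daily_spend[i] - avg) / avg > threshold:
            flagged.append(i)
    return flagged
```

A week of ~$100 days followed by a $400 day gets flagged immediately, which is the difference between a one-day overage and a month-long one.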

Q: With the complexity and size of data increasing, many organizations have started taking cloud expenditure seriously. We’ve thus seen a sudden boom in analysis tools like yours in the last couple of years. With such intense competition out there, how do you plan to stand out?

A: At Cloudability, we’ve always differentiated ourselves along three lines:

  • Fully customizable reporting: With our Cost and Usage analytics tools, users can mix, match, slice and dice their cost and usage data any way that they need to see it. This gives a much greater level of flexibility than traditional dashboard tools with pre-defined reports.
  • Organization-wide ease-of-use: Cloudability is designed to be used by anyone in an organization, from engineers and IT/Ops pros to finance, management and C-suite team members. This means that cloud cost and usage data is accessible by everyone who needs it without having to mess around with spreadsheets and powerpoint presentations.
  • Cloud-agnostic cost management: Cloudability has always worked to support as many IaaS, PaaS and SaaS vendors as possible. This means that organizations can track, manage and report their entire cloud spending from one tool.

Q: What do you have to say to non-technology companies that probably aren’t very conscious when it comes to budget allocation for the cloud?

A: Cloud cost management is no different than any other area of budgeting. It takes three steps:

  • You have to monitor your cloud spending daily so that you can react to changes before they get out of hand.
  • You have to be able to segment your cloud spending based on cost and profit centers so that you know what impact it’s having on your bottom line.
  • And you have to be able to communicate your cloud spending quickly and effectively to anyone in your organization who needs it.

Q: Tell our readers a bit about team Cloudability.

A: A picture is worth a thousand words. Here’s a team photo from our last off-site, which was held at Sunriver Resort in July:

Cloudability Team Group Photo

Q: To wrap up, what changes can we expect in the cloud computing market in 2013 and your footprint in it?

A: 2013 is the year of the enterprise. The world’s largest organizations are embracing the cloud, and their usage and spending is only increasing. It’s not uncommon anymore to talk to companies that are spending $1M-$2M per month on their cloud infrastructure. With that much money at stake, it’s more critical than ever to be able to quickly and effectively track, manage and communicate a company’s cloud spending throughout the month.

Cloudability is stepping up to that challenge with a whole new suite of enterprise-ready features, like multi-user support and account grouping and views, that are designed to make it easier and easier to manage cloud spending in large organizations.


Progress Software Announces General Availability of Progress Pacific Application Platform-as-a-Service

Progress Software Corporation today announced general availability of its application platform-as-a-service (aPaaS) offering, Progress Pacific.

The Progress Pacific aPaaS adds new data connectivity and application capabilities and gives developers the flexibility to avoid vendor lock-in and to develop and deploy key business applications anywhere they want, including OpenStack, on-premise and hybrid environments.

The key data connectivity and application capabilities include:

Progress DataDirect Cloud service: The new Progress DataDirect Cloud service enables applications to easily integrate data from popular SaaS, relational database, Big Data, social, CRM and ERP systems. It has a standards-based SQL interface that works with any ODBC or JDBC compatible application.


It’s clear that the move to aPaaS is gaining momentum as time-to-market and data capabilities in new apps are key requirements for developers and end users. Pacific addresses these needs with rich data, visual design and open deployment capabilities in a single platform.
– John Goodson, CPO, Progress Software.

John Goodson, Chief Product Officer, Progress Software.

The connection management service uses a single standards-based interface to execute SQL queries against a wide range of cloud data sources, thereby allowing applications to be built in less time.
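The “write once, query anywhere” pattern behind such a standards-based SQL interface can be sketched with plain DB-API code. In the sketch below, sqlite3 stands in for a cloud data source; the DSN shown in the comment and the `accounts` table are hypothetical, not actual DataDirect Cloud identifiers:

```python
import sqlite3

def top_accounts(conn, limit=3):
    """Run the same standards-based SQL against any DB-API connection.

    With a service like Progress DataDirect Cloud, you would instead pass
    a pyodbc connection built from a (hypothetical) DSN, e.g.:
        conn = pyodbc.connect("DSN=DataDirectCloud;UID=...;PWD=...")
    The application code and the query itself do not change.
    """
    cur = conn.cursor()
    cur.execute(
        "SELECT name, revenue FROM accounts ORDER BY revenue DESC LIMIT ?",
        (limit,),
    )
    return cur.fetchall()

# Demo with an in-memory SQLite database standing in for a cloud source.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (name TEXT, revenue REAL)")
conn.executemany(
    "INSERT INTO accounts VALUES (?, ?)",
    [("Acme", 120.0), ("Globex", 340.0), ("Initech", 75.0)],
)
print(top_accounts(conn, limit=2))  # → [('Globex', 340.0), ('Acme', 120.0)]
```

The point of the design is that the data source behind the connection can change (SaaS, relational, Big Data) while the application-side SQL stays fixed.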

Progress Rollbase: Progress Rollbase infrastructure has been improved and now offers an enhanced user experience for standards-based business applications created with drag-and-drop tools using any web browser.

It also includes support for OpenEdge 11.3, fine-grained locking for published applications and integration of Progress DataDirect JDBC drivers.

Progress OpenEdge 11.3: The new integrated business process management (BPM) and business rules management system (BRMS) capabilities simplify application customization using Progress OpenEdge 11.3 development software.

Flexible processes, rules and workflows can be easily configured to meet business requirements while greatly accelerating productivity.

“Customer and partner response to the Progress Pacific launch in June has been extremely positive and we are now delivering on our vision,” said John Goodson, Chief Product Officer, Progress Software.

“It’s clear that the move to aPaaS is gaining momentum as time-to-market and data capabilities in new apps are key requirements for developers and end users.”

“Pacific addresses these needs with rich data, visual design and open deployment capabilities in a single platform,” he added.

Here is more information on Progress DataDirect Cloud service and Progress Rollbase.


Bit Refinery Chooses Digital Fortress’ Seattle Data Center to Host its Hadoop Hosting Solution

Cloud infrastructure-as-a-service provider Bit Refinery has chosen Digital Fortress’ Seattle data center to host its newly launched Hadoop Hosting solution.

Bit Refinery’s Hadoop Hosting solution allows companies to store massive amounts of data at very low cost. Organizations can also analyze that data without having to purchase expensive massively parallel computing (MPC) appliances.

We selected Digital Fortress as our colocation partner for our newly launched service because the company was able to support our need for mission-critical infrastructure. In this environment we can support our customers’ need to store and analyze vast amounts of data while ensuring their data is replicated and secure.
– Brandon Hieb, Managing partner, Bit Refinery.

Other features of the new solution are fully dedicated servers, private high-speed network and full console control.

“Hadoop Hosting will provide Bit Refinery’s customers with an affordable way to get up and running with this new technology,” said Paul Gerrard, CEO, Digital Fortress.

“With Digital Fortress as its colocation partner, customers are assured the highest standard of uptime and full redundancy backed by a dedicated technical staff on-site 24x7x365, among other value-add services,” he added.

“We selected Digital Fortress as our colocation partner for our newly launched service because the company was able to support our need for mission-critical infrastructure,” said Brandon Hieb, Managing partner, Bit Refinery.

“In this environment we can support our customers’ need to store and analyze vast amounts of data while ensuring their data is replicated and secure,” he added.

Earlier this year, Bit Refinery launched its Disaster-Recovery-as-a-Service offering and a new cloud solution – vDev™.


“OnApp Federation Gives You Capacity That’s Much More Affordable and Flexible” – Stein Van Stichel

Last week OnApp announced that a hosting provider successfully deployed OnApp CDN to host websites for Tomorrowland, an electronic dance music festival that took place on July 26–28 at De Schorre in Boom, Belgium. The provider used 46 locations from OnApp’s federated CDN to create a global hosting platform for Tomorrowland’s customer information, ticket pre-registration and ticket activation websites.

By using OnApp’s federated CDN, the provider was able to serve 4.6 million web pages, with a peak of 1.4 million page views in a single hour on the ticket sales day, with zero downtime. A total of one million fans from 214 countries pre-registered for the 180,000 Tomorrowland tickets.

DailyHostNews had a quick Q&A with Stein Van Stichel, the provider’s CEO, and Kosten Metreweli, CCO, OnApp, on how OnApp’s federated CDN and the provider’s website optimization services ensured Tomorrowland’s global coverage while maintaining a fast, local experience for fans around the world with zero website downtime.

Q: How did OnApp and the hosting provider work in tandem to ensure secure transactions on the ticket sales day?

Stein: The ticket sale itself was handled by the payment processor Paylogic. To join the ticket sale, customers had to pre-register (handled by Stone & OnApp) – this provided a buffer for the ticketing system and gave Paylogic the information it needed about the crowd to expect on its ticketing gateways.

On pre-registration day, 500,000 people registered for access to the ticketing gateway within one hour. OnApp provided a list of partners in the federation that could provide the needed SLAs, and that list was extensively tested on our side.

After the sale, tickets had to be activated on the Tomorrowland website and linked to each individual customer. This procedure helps Tomorrowland trace and flag potential ticket abuse. That’s where we helped out, giving end users the ability to check and secure their ticket purchase.

Q: How were the 46 locations strategically chosen from OnApp’s federated CDN to ensure global coverage and a fast local experience?

Kosten: Tomorrowland has been running for a few years, so they have a pretty good understanding of their audience profile and the countries they needed to cover.

It’s a huge event with visitors from 214 countries, and one of the main reasons they went with us and OnApp is the number of locations we could give them presence in, through the OnApp federation.

Stein: We worked with OnApp to identify the PoPs already available in the federation that offered the right performance profile, and added to that capacity from other OnApp clouds to give them the global scale they needed. Tomorrowland has used other CDNs for previous festivals, including Amazon, and one of the biggest problems has been a lack of coverage in Asia-Pacific and Australia.

OnApp gave us plenty of capacity in every region, and in all we used 46 PoPs to handle the event. We also load-tested the PoP selection to make sure they could handle the projected capacity per region.
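The capacity-planning step Stein describes (selecting PoPs per region and verifying the selection against projected traffic) can be sketched roughly as follows. All PoP names, capacities, projections and the headroom factor are illustrative assumptions, not OnApp or Tomorrowland figures:

```python
# Hypothetical sketch of federation PoP selection: per region, pick PoPs
# until their combined capacity covers the projected peak plus headroom.
# Every name and number below is made up for illustration.
POPS = {
    "eu":   [("brussels", 400_000), ("antwerp", 350_000), ("frankfurt", 300_000)],
    "apac": [("singapore", 250_000), ("sydney", 200_000)],
    "us":   [("ashburn", 500_000)],
}

def select_pops(projected_peak_by_region, headroom=1.5):
    """Choose PoPs per region until capacity >= peak * headroom."""
    selection = {}
    for region, peak in projected_peak_by_region.items():
        needed = peak * headroom
        chosen, capacity = [], 0
        # Greedily take the largest PoPs first.
        for name, cap in sorted(POPS[region], key=lambda p: -p[1]):
            if capacity >= needed:
                break
            chosen.append(name)
            capacity += cap
        selection[region] = (chosen, capacity)
    return selection

plan = select_pops({"eu": 500_000, "apac": 150_000})
print(plan)  # per region: (chosen PoPs, total reserved capacity)
```

A real selection would also weigh latency, SLAs and the load-test results mentioned above; this only captures the capacity arithmetic.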

Q: What made OnApp an ideal choice for you, both from a technology/resources and a financial point of view?

Stein: We’re a pretty successful hosting provider in our local markets, but when you’re faced with a truly global event, it’s hard for any provider to find the capacity you need to handle high traffic volumes from multiple regions in a very short timescale. But that’s the power of the OnApp federation. We could host the core web application on our clouds in Brussels and Antwerp, and get immediate access to any of the federation locations we needed to keep the Tomorrowland site fast and stable, wherever their visitors came from.

OnApp had support teams on standby as well, to fix any issues with the PoPs we were using, though in fact the whole CDN performed perfectly.

Of course every customer wants their service to be as efficient as possible, and the OnApp federation is very cost-effective. Other CDNs force you to take crazy contracts with long tie-ins, and force you to commit to a certain level of bandwidth. The OnApp federation gives you capacity that’s more affordable and much more flexible, because you only pay for what you use.

This flexibility is an added factor for global events like this, where last-minute changes, band announcements or even links to the website on social media can create huge unexpected traffic spikes. With our solution and OnApp CDN, Tomorrowland can make announcements and their systems scale automatically with the load.


6fusion Launches The UC6 Meter for Amazon; Enables Cost Comparison of AWS Against Other Providers

6fusion today released the UC6 Meter for Amazon to open beta, enabling customers to quantify and compare resource utilization and costs in Amazon Web Services (AWS) against any other public or private IT infrastructure.

The ability to normalize metering for Amazon is the basis for Amazon users to create an apples-to-apples comparison with other infrastructure suppliers for the first time. 6fusion allows all suppliers to compete openly on a clear price-to-value ratio.
– Rob Bissett, SVP Product Management, 6fusion.

Some of the key features of the UC6 Meter for Amazon are:

  • It leverages 6fusion’s Workload Allocation Cube (WAC) as a normalizing metric.
  • It is built on 6fusion’s Open Market Framework and utilizes the flexible architecture designed to separate the infrastructure layer from 6fusion’s metering and orchestration capabilities.
  • It collects usage data from the AWS API through a customer’s account and translates that usage data into the equivalent WAC units for analysis in the 6fusion Platform.
  • Customers can view AWS Elastic Compute Cloud and Elastic Block Store usage in the same unit of measure they use to track their other infrastructure usage, thereby simplifying the process of comparing AWS costs and usage to other internal and external options.
  • Customers can quickly deploy 6fusion’s library of pre-built adapters for common infrastructure platforms or build their own adapters using 6fusion’s open API.
  • It enables organizations to include AWS usage in their internal cost visibility and allocation methodologies and better manage their overall IT infrastructure investments.
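The normalization idea described in the bullets is the core of the product: collapse multi-dimensional usage into a single unit so suppliers can be compared on price per unit of work. A rough sketch follows; the real Workload Allocation Cube formula is proprietary, so the dimension weights and usage figures here are invented for illustration:

```python
# Illustrative sketch of normalized metering in the spirit of 6fusion's
# Workload Allocation Cube (WAC). These weights are NOT the real WAC
# formula; they exist only to show the apples-to-apples comparison idea.
WEIGHTS = {
    "cpu_ghz_hours": 1.0,
    "ram_gb_hours": 0.5,
    "storage_gb_hours": 0.01,
    "transfer_gb": 0.2,
}

def to_units(usage):
    """Collapse multi-dimensional usage into one comparable unit."""
    return sum(WEIGHTS[k] * v for k, v in usage.items())

def cost_per_unit(invoice_total, usage):
    """Price-to-value ratio: spend divided by normalized consumption."""
    return invoice_total / to_units(usage)

# Hypothetical month of usage on AWS vs. a private cloud.
aws = {"cpu_ghz_hours": 100, "ram_gb_hours": 400,
       "storage_gb_hours": 10_000, "transfer_gb": 50}
private = {"cpu_ghz_hours": 120, "ram_gb_hours": 300,
           "storage_gb_hours": 20_000, "transfer_gb": 10}

print(round(cost_per_unit(420.0, aws), 3))      # $ per unit on AWS
print(round(cost_per_unit(380.0, private), 3))  # $ per unit on private cloud
```

Once both environments are expressed in the same unit, the cheaper supplier per unit of delivered work is directly visible, which is exactly the comparison the meter enables.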

“6fusion is building the world’s only true open marketplace for infrastructure services,” said Rob Bissett, SVP of Product Management, 6fusion.

“The ability to normalize metering for Amazon is the basis for Amazon users to create an apples-to-apples comparison with other infrastructure suppliers for the first time. 6fusion allows all suppliers to compete openly on a clear price-to-value ratio,” he added.

Hosting Provider Uses 46 OnApp CDN Federation PoPs to Host Websites for Tomorrowland Festival

OnApp today announced that a hosting provider successfully deployed OnApp CDN to host websites for Tomorrowland, a huge electronic dance music festival taking place this weekend in Boom, Belgium.

OnApp’s federated CDN is the only way we could provide enough global capacity for this size event. Using OnApp CDN, we could distribute Tomorrowland traffic from cities all over the world for the launch, and scale back when ticket sales closed.
– Stein Van Stichel, CEO.

The provider used 46 locations from OnApp’s federated CDN to create a global hosting platform for Tomorrowland’s customer information, ticket pre-registration and ticket activation websites.

By using OnApp’s federated CDN, the provider was able to serve 4.6 million web pages, with a peak of 1.4 million page views in a single hour on the ticket sales day, with zero downtime.

A total of one million fans from 214 countries pre-registered for the 180,000 Tomorrowland tickets. On the day ticket sales opened, two million customers were forwarded to Tomorrowland’s ticket system via websites hosted by the provider. The festival saw record-breaking ticket sales, with tickets selling out in just one second.

OnApp’s federated CDN and the provider’s website optimization services ensured Tomorrowland’s global coverage while maintaining a fast, local experience for fans around the world, with zero website downtime.

OnApp’s federated CDN offers more than 170 PoPs and the potential to add capacity from the 2,000+ OnApp clouds deployed to date across 87 countries. OnApp CDN makes these locations available on demand, with no tie-ins, and offers instant scale for Internet events, promotions and launches.

Service providers can choose any mix of locations and pay only for the capacity they need, without tie-ins or bandwidth commitments.

The hosting platform, featuring OnApp CDN, will also be used for the upcoming TomorrowWorld event in the US.

Tomorrowland is a great example of federated CDN in action. Using the OnApp federation means any business with a global audience can turn to a local provider for a close-touch service, with proper SLAs and support, and still get access to as much global capacity as they need.
– Kosten Metreweli, CCO, OnApp.

Here is more information on how OnApp’s federated CDN was also used recently for the Miss Universe Organization website.

“Tomorrowland is a great example of federated CDN in action,” said Kosten Metreweli, Chief Commercial Officer, OnApp.

“Using the OnApp federation means any business with a global audience can turn to a local provider for a close-touch service, with proper SLAs and support, and still get access to as much global capacity as they need. And, for service providers, it opens up a whole range of new hosting products and revenue streams that are easy to take to market with no CAPEX – whether it’s a straightforward content acceleration service, or a full-scale, application-delivery platform. With OnApp, you don’t have to be big to be global,” he added.

“OnApp’s federated CDN is the only way we could provide enough global capacity for an event of this size. We optimized Tomorrowland’s sites to ensure there were no issues with the registration and activation process, and worked with OnApp to ensure we had the global coverage we needed,” said Stein Van Stichel, Founder and CEO.

“Using OnApp CDN, we could distribute Tomorrowland traffic from cities all over the world for the launch, and scale back when ticket sales closed. With access to capacity from the OnApp federation, we can design hosting packages for high availability and high load, and guarantee uptime for global brands like Tomorrowland,” he added.

Earlier this year, OnApp launched OnApp Cloud v3.1 and the first user-customizable, usage-based content delivery network.


Anturis to Provide Comprehensive IT Infrastructure Monitoring For ReadySpace’s Business Customers

Anturis Inc. today announced that its cloud-based monitoring and troubleshooting solution has been selected by ReadySpace, a cloud and managed hosting services provider, to provide comprehensive IT infrastructure monitoring for ReadySpace’s business customers. ReadySpace will now offer Anturis’ solution as an integrated addition to its Managed Service packages.

Anturis came to market earlier this year in beta and recently emerged from beta with the commercial launch of its product.

In an interview with DailyHostNews today, Sergey Nevstruev, CEO, Anturis, said:

ReadySpace selected Anturis for its IT infrastructure monitoring and troubleshooting needs for two key reasons:

  • Infrastructure model representation (i.e. users work with real-life entities like a database or a web server, rather than with separate metrics), which makes Anturis well suited to the needs of server monitoring.
  • Integration with the Parallels Automation platform via APS, which let Anturis achieve fairly deep integration (including billing and a customer portal) very quickly.

Anturis is the perfect addition to our suite, and will be utilized as the primary tool set of our technical team for monitoring and troubleshooting our customers’ various IT services.
– David Loke, CEO, ReadySpace.

David Loke, CEO, ReadySpace.

ReadySpace will deploy Anturis to support its over 5,000 business customers, primarily in the Asia Pacific region, and especially in Singapore and Hong Kong.

Offered in ReadySpace’s Managed Service platform, the new Anturis IT monitoring solution delivers:

  • Website Monitoring: Monitors the uptime and performance of websites, checking for DNS, SSL, HTTP, network and application-level problems.
  • Server Monitoring: Keeps an eye on server resource utilization and software performance (CPU, memory, swap, disk, OS processes, log files and more).
  • Web App Monitoring: Uses synthetic transactions to ensure visitors can successfully sign up, search, check out, log in and otherwise interact with a website.
  • MySQL Monitoring: Watches over key database performance metrics such as slow query rate, connection usage, InnoDB buffer pool usage and more.
  • Network Monitoring: Keeps watch over LAN and WAN connectivity and network devices using ICMP ping, SNMP, TCP checks and other network protocols.
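The website checks listed above boil down to a probe (DNS resolution plus an HTTP fetch) and a classification rule for the dashboard. Here is a minimal standard-library sketch of that pattern; it is not Anturis’ actual implementation, and the status buckets and latency threshold are assumptions:

```python
import socket
import time
import urllib.parse
import urllib.request

def check_website(url, timeout=5.0):
    """Probe one URL: resolve DNS, fetch it, record status and latency.

    (A production monitor would also verify the SSL certificate chain
    and expiry, and run the probe from multiple locations.)
    """
    host = urllib.parse.urlsplit(url).hostname
    result = {"dns_ok": False, "status": None, "latency_s": None}
    try:
        socket.getaddrinfo(host, None)          # DNS check
        result["dns_ok"] = True
        start = time.monotonic()
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            result["status"] = resp.status      # HTTP check
        result["latency_s"] = time.monotonic() - start
    except Exception:
        pass                                    # leave failure markers as-is
    return result

def verdict(result, slow_after=2.0):
    """Classify a probe result the way a status dashboard would."""
    if not result["dns_ok"] or result["status"] is None:
        return "down"       # DNS failure, timeout, or connection error
    if result["status"] >= 500:
        return "down"       # server-side error
    if result["status"] >= 400 or result["latency_s"] > slow_after:
        return "degraded"   # client error or slow response
    return "up"

# Classify a canned probe result (no network needed for the demo).
print(verdict({"dns_ok": True, "status": 200, "latency_s": 0.1}))  # → up
```

In a real deployment `check_website` would run on a schedule against each monitored site, with `verdict` feeding alerting and the service health dashboard.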

The commercially launched version of Anturis comes with several new features and enhancements, including numerous GUI and usability enhancements, such as improved wizards. It also includes extended diagnostic data for faster troubleshooting, such as presenting the list of top five CPU-consuming processes at the time of CPU overload.

“As an international leader of cloud and managed hosting services, we are always looking for ways to improve and enhance our Managed Services,” said David Loke, CEO, ReadySpace.

“Anturis is the perfect addition to our suite, and will be utilized as the primary tool set of our technical team for monitoring and troubleshooting our customers’ various IT services.”

DailyHostNews used Anturis earlier this year to monitor its own IT infrastructure and found it an extremely feature-rich, affordable, comprehensive and promising product.


Savvis Launches Cloud Data Center Services Built on vCloud Director 5.1

Cloud infrastructure and hosted IT solutions provider Savvis has launched Savvis Cloud Data Center, a virtual data center service built on VMware vCloud Director 5.1 and Cisco’s Unified Data Center technologies.

Some of the key features of Savvis Cloud Data Center are:

1] It uses VMware’s vCloud Director 5.1 cloud stack on top of infrastructure from Cisco, Intel and Dell to simplify infrastructure provisioning and enables IT to move at the speed of business.

2] It deploys VMware’s Linked Clone technology, enabling end users to rapidly clone base vApps into child vApps by storing only the changes made by the children and reading all other data from the base. This yields significant storage savings for IT and faster provisioning for end users with highly cloned applications.

3] End users can unwind changes made to a virtual machine for rapid destructive testing without the need to re-provision multiple times.

Designed for customers who rely on VMware technologies, Savvis Cloud Data Center streamlines extensions into the cloud with easy-to-use familiar interfaces and tools for scaling performance to their needs.
– Andrew Higginbotham, Chief Technology Officer, Savvis.

4] Administrators can group users into organizations representing any policy group, such as a business unit, division or subsidiary company. Each has isolated virtual resources, independent LDAP authentication, specific policy controls, and unique catalogs. These features enable a multi-tenant environment with multiple organizations sharing the same infrastructure.

5] Administrators can also allow users to log in once and then access all instances of vCenter Server and vCloud Director without further authentication.

6] It simplifies workload migration and integration and enables users to flexibly build software-defined data center services that manage compute capabilities, storage, network connectivity and security operations.

7] Initially available in London, Washington DC and Frankfurt, Savvis Cloud Data Center is priced from $67 per month.
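The linked-clone behavior in items 2] and 3] (children store only their deltas on top of a shared base, and changes can be unwound without re-provisioning) can be modeled in a few lines. This is an illustrative analogy using a layered mapping, not VMware’s on-disk format:

```python
from collections import ChainMap

# Toy model of a Linked Clone: a child vApp stores only its own changes;
# every unchanged "block" is read through to the shared base image.
base = {"disk_block_0": "OS", "disk_block_1": "app", "disk_block_2": "data"}

def clone(base_image):
    """Create a child that starts empty and falls through to the base."""
    return ChainMap({}, base_image)

child = clone(base)
child["disk_block_2"] = "data-v2"   # copy-on-write: only the delta is stored

print(child["disk_block_0"])        # → OS       (read from the base)
print(child["disk_block_2"])        # → data-v2  (read from the child's delta)
print(len(child.maps[0]))           # → 1        (storage cost of the clone)

child.maps[0].clear()               # "unwind" all child changes (item 3)
print(child["disk_block_2"])        # → data     (back to the base image)
```

The storage savings in item 2] fall out directly: a clone costs only as much as its delta layer, and destructive testing is just a matter of discarding that layer.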

“Businesses are looking to migrate applications hosted on-premises into the cloud using a variety of hybrid solutions,” said Andrew Higginbotham, Chief Technology Officer, Savvis.

“Designed for customers who rely on VMware technologies, Savvis Cloud Data Center streamlines extensions into the cloud with easy-to-use familiar interfaces and tools for scaling performance to their needs,” he added.

“Cloud Data Center, built on vCloud Director® 5.1, can provide businesses with a simple and flexible platform for hybrid cloud delivery that enables control and choice over where to run applications, and effective asset protection through true data center extensibility,” said Dave O’Callaghan, Senior Vice President, Global Channels and Alliances, VMware.

Savvis has data centers across North America, Europe and Asia. Here is more information on Savvis’ Cloud Data Center.
