
Intel doubles capacity of its data center SSD to address demanding storage and memory challenges 

Semiconductor maker Intel has announced an enhancement to its data center solid state drive, the Intel Optane SSD DC P4800X Series, with increased capacity and multiple form factors.

When Intel launched the Optane SSD DC P4800X Series earlier this year, it was available only as a 375GB half-height, half-length add-in card. It’s now available in a 750GB capacity, in both the half-height, half-length add-in card and a hot-swappable 2.5-inch U.2 form factor.

Intel’s aim in launching the Optane SSD DC series was to address the most demanding storage and memory challenges: DRAM is very expensive to scale, and NAND doesn’t offer sufficient performance to function in the memory space.


Intel said that the Optane SSD DC P4800X is an ideal SSD (solid state drive) for storage workloads such as SAN (storage area network) and software-defined storage, as well as cloud, database, big data, and high-performance computing workloads.

The increased capacity and multiple form factors give data centers more deployment options for the drive and greater flexibility in managing total cost of ownership.

Since the Optane SSD DC P4800X integrates the attributes of memory and storage, it delivers low latency, high throughput, high QoS, and high endurance. It will help companies overcome their storage challenges and increase their scale per server while decreasing transaction costs.

Optane technology works in concert with Intel Xeon Scalable processors, allowing even very large datasets to be held in an expanded memory pool so that companies can extract new insights from them.


The Intel Optane SSD DC P4800X Series will be broadly available this month, both as part of the Intel Select Solutions program and through additional OEMs, cloud service providers and distributors.


“HybridCluster Allows Hosters to Differentiate, Sell New Services & Drive up Margins”- Luke Marsden, HybridCluster

There is something great about products and services that are developed by industry veterans to relieve their own pain points, to figure a way around the very problems they face day in and day out, and in the process build something that is a valuable contribution to the industry as a whole. This is where necessity, ingenuity, shortage and perspicacity hold hands in order to give birth to something that has substantial impact on the work cycle of the entire industry ecosystem.

In May this year, when HybridCluster completed $1 million in fundraising and launched HybridCluster 2.0, I was asked to prepare an interview questionnaire for Luke Marsden, CEO, HybridCluster. I knew little about the product at the time, but somewhere during my research I decided that HybridCluster is not just a very interesting product; it is a success story.

Why? I’ll let the interview do the talking. But I’ll leave you with this interesting excerpt on the company blog, where Luke talks about the genesis of HybridCluster:

Running our own hosting company since 2001 exposed all the problems. We were continuously battling hardware, software and network issues. After a few too many late-night trips to the data centre, I thought to myself: there has to be a better way. Studying theoretical computer science at Oxford University helped me crystallize my vision for an ambitious new project — one which uses ZFS, local storage, graph theory, and a perfect combination of open source components to create a platform uniquely aligned to solving the problems faced by hosters and cloud service providers.

The HybridCluster software allows applications to get auto-scaled. It can detect a spike in traffic, and rather than throttling the spike, it can burst that application to a full dedicated server by moving other busy things on that server onto quieter servers.

– Luke Marsden, CEO, HybridCluster.


Q: Let’s begin with a brief introduction of yourself and a broad overview of HybridCluster.

A: Hi. 🙂 Thanks for inviting me to be interviewed! It’s really great to be on DailyHostNews.

My background is a combination of Computer Science (I was lucky enough to study at Oxford University, where I graduated with a first class degree in 2008) and a bunch of real world experience running a hosting company.

HybridCluster is really a radical new approach to solving some of the tricky problems every hosting company has while trying to manage their infrastructure: it’s an ambitious project to replace storage, hypervisor and control panel with something fundamentally better and more resilient.

In fact I have a bigger vision than that: I see HybridCluster as a new and better approach to cloud infrastructure – but one which is backwardly compatible with shared hosting. Finally, and most importantly – HybridCluster allows hosters to differentiate in the market, sell new services, drive up margins – whilst also reducing the stress and cost of operating a web hosting business. We help sysadmins sleep at night!

Q: Did the idea for a solution like HybridCluster stem from issues you faced first-hand during your decade-long experience in the web hosting industry?

A: Yes, absolutely. Without the real-world pain of having to rush off to the data center in the middle of the night, I wouldn’t have focused my efforts on solving the three real world problems we had:

The first problem is that hardware, software and networks fail resulting in website downtime. This is a pain that every hoster will know well. There’s nothing like the horrible surge of adrenaline you get when you hear the Pingdom or Nagios alert in the middle of the night – or just as you get to the pub on a Friday night – you just know it’s going to ruin the next several hours or your weekend. I found that I had become – like Pavlov’s dog – hard-wired to fear the sound of my phone going off. This was the primary motivation to invent a hosting platform which is automatically more resilient.

Other problems we had in the hosting company included websites getting spikes in traffic – so we knew we needed to invent a hosting platform which could auto-scale an application up to dedicated capacity – and users making mistakes and getting hacked – so we knew we needed to invent something which exposes granular snapshots to the end user so they can log in and roll back time themselves if they get hacked – or if they accidentally delete a file.

Q: Can you please throw some light on the modus operandi of HybridCluster? How exactly does it help web hosts with automatic detection and recovery in the event of outages?

A: Sure. I decided early on that a few key design decisions were essential:

Firstly, any system which was going to stop me having to get up in the middle of the night would have to have no single point of failure. This is easy to say but actually quite hard to implement! You need some distributed system smarts in order to be able to make a platform where the servers can make decisions as a co-operative group.

Secondly, I decided that storage belongs near the application, not off on a SAN somewhere. Not only is the SAN itself a single point of failure, but it also adds a lot of cost to the system and can often slow things down.

Thirdly, I decided that full hardware virtualization is too heavy-weight for web application hosting. I could already see the industry going down the route of giving each customer their own VM, but this is hugely wasteful! It means you’re running many copies of the operating system on each server, and that limits you to how many customers you can put on each box. OS level virtualization is a much better idea, which I’ll talk about more later.

Basically, I designed the platform to suit my own needs: as a young hoster, I was scared of outages, I couldn’t afford a SAN, and I knew I couldn’t get the density I needed to make money with virtualization. 🙂

Q: How does the OS-level virtualisation you use differ from the hypervisor-based virtualisation used by other virtualised solutions?

A: OS-level virtualization (or “containers”) is simply a better way of hosting web applications. Containers are higher density: because each container shares system memory with all other containers, the memory on the system is more effectively “pooled”. They are better performing: there’s no overhead of simulating the whole damn universe just to run an app. And they’re more scalable: each app can use the whole resource of a server, especially when combined with the unique capability that HybridCluster brings to the table: the ability to live-migrate containers around between servers in the cluster and between data centers.

Live migration is useful because it allows things to get seamlessly moved around. This has several benefits: administrators can easily cycle servers out of production in order to perform maintenance on them simply by moving the applications off onto other servers, but also, perhaps more excitingly, it allows applications to get auto-scaled – the HybridCluster software can detect a spike in traffic, and rather than throttling the spike (like CloudLinux), it can burst that application to a full dedicated server by moving other busy things on that server onto quieter servers. This is also a unique feature.
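
As a rough illustration of the bursting behaviour Luke describes, here is a minimal Python sketch of that kind of decision loop. The server names, the 75% threshold and the migrate() helper are all hypothetical; this is not HybridCluster's actual implementation.

    # Hypothetical sketch of the burst-to-dedicated idea described above.
    BURST_THRESHOLD = 0.75  # fraction of a server's capacity one app may consume

    def migrate(app, src, dst):
        # Placeholder for a live container migration (in practice this would move
        # the container and its filesystem to the target server).
        print(f"migrating {app}: {src} -> {dst}")

    def rebalance(servers):
        """servers: dict of server name -> {app name: recent load fraction}."""
        for server, apps in servers.items():
            peers = [s for s in servers if s != server]
            if not peers:
                return
            for app, load in apps.items():
                # An app spiking past the threshold gets the whole server: move every
                # *other* app on that server onto the quietest peer in the cluster.
                if load > BURST_THRESHOLD and len(apps) > 1:
                    quietest = min(peers, key=lambda s: sum(servers[s].values()))
                    for other in [a for a in apps if a != app]:
                        migrate(other, src=server, dst=quietest)
                    break

    rebalance({
        "web1": {"shop.example": 0.9, "blog.example": 0.2},
        "web2": {"wiki.example": 0.1},
    })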

Q: How does HybridCluster enable an end user to self-recover lost files and data from even less than a minute ago? This feature, if I’m not wrong, isn’t available with any other solution out there.

A: It’s quite simple really. Every time that website, database or email data changes, down to 30 second resolution or less, we take a new ZFS snapshot and also replicate the history to other nodes in the cluster. ZFS is a core enabling technology for HybridCluster, and we’ve built a smart partition-tolerant distributed filesystem on top of it! Each website, database or mailbox gets its own independently replicated and snapshotted filesystem.
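
To make the mechanics concrete, here is a hedged Python sketch of the general snapshot-and-replicate pattern Luke describes, driving the standard ZFS command-line tools. The dataset name, peer host and 30-second interval are assumptions for illustration; HybridCluster's own replication layer is considerably more involved.

    # Sketch of periodic ZFS snapshots replicated to a peer node.
    import subprocess
    import time

    DATASET = "hpool/sites/example.com"   # one ZFS filesystem per website (assumed)
    REPLICA_HOST = "node2.cluster.local"  # peer node that receives the replica

    def snapshot_and_replicate(prev_snap=None):
        snap = f"{DATASET}@{int(time.time())}"
        subprocess.run(["zfs", "snapshot", snap], check=True)

        # Incremental send if an earlier snapshot exists, full send otherwise.
        send_cmd = ["zfs", "send"] + (["-i", prev_snap] if prev_snap else []) + [snap]
        sender = subprocess.Popen(send_cmd, stdout=subprocess.PIPE)
        subprocess.run(["ssh", REPLICA_HOST, "zfs", "receive", "-F", DATASET],
                       stdin=sender.stdout, check=True)
        sender.wait()
        return snap

    if __name__ == "__main__":
        last = None
        while True:                       # roughly 30-second granularity, as above
            last = snapshot_and_replicate(last)
            time.sleep(30)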

Anyway, these replicas act both as a user-facing backup and a hot spare. It’s a simple idea, but this is actually a revolution in backup technology – rather than having a backup separate from your RAID or other replication system (where the problem with a replication system like RAID is that it will happily replicate a failure, and the problem with backups is that they take ages to restore), our “hybrid” approach to replicated snapshots kills two birds with one stone, bringing backup restore times down to seconds, and also letting users fetch files/emails/database records out of snapshots which are taken at very fine-grained intervals.

Indeed, HybridCluster is the first hosting platform to expose this feature to the end user and we have seen a number of clients adopt our technology for this benefit alone!

Q: Is the low-cost storage system able to deliver the efficiency of high-end SANs? Also, what additional value does ZFS data replication bring into the picture?

A: I’m glad you mentioned ZFS again 🙂 One of the really nice things about being backed onto ZFS is that hosters using HybridCluster can choose how powerful they want to make their hosting boxes. Remember, with HybridCluster, the idea is that every server has local storage and uses that to keep the data close and fast. But because ZFS is the same awesome technology which powers big expensive SANs from Oracle, you can also chuck loads of disks in your hosting boxes and suddenly every one of your servers is as powerful as a SAN in terms of IOPS. In fact, one of our recent hires, a fantastic chap by the name of Andrew Holway, did some hardcore benchmarking of ZFS versus LVM and found that ZFS completely floored the Linux Volume Management system when you throw lots of spindles at it.

I won’t go into too much detail about how ZFS achieves awesome performance, but if you’re interested, try Googling “ARC”, “L2ARC” and “ZIL”. 🙂

The other killer feature in ZFS is that it checksums all the data that passes through it – this means the end to bit-rot. Combined with our live backup system across nodes, that makes for a radically more resilient data storage system than you’ll get with Ext4 on a bunch of web servers, or even a SAN solution.

There’s lots more – call us on +44 (0)20 3384 6649 and ask for Andrew who would love to tell you more about how ZFS + HybridCluster makes for awesome storage.

Q: How does HybridCluster achieve fault-tolerant DNS?

A: Something I haven’t mentioned yet is that HybridCluster supports running a cluster across multiple data centers, so you can even have a whole data center fail and your sites can stay online!

So quite simply the cluster allocates nameservers across its data centers, so if you have DC A and B, with nodes A1, A2, B1, B2, the ns1 and ns2 records will be A1 and B1 respectively. That gives you resilience at the data center level (because DNS resolvers support failover between nameservers). Then, if a node fails, or even if a data center fails, the cluster has self-reorganising DNS as a built-in feature.

We publish records with a low TTL, and we publish multiple A records for each site: our AwesomeProxy layer turns HybridCluster into a true distributed system – you can send any request for anything (website, database, mailbox, or even an FTP or SSH session) to any node and it’ll get revproxied correctly to the right backend node (which might dynamically change, e.g. during a failover or an auto-scaling event). So basically under all failure modes (server, network, data center) we maximize the chances that the user will quickly – if not immediately – get a valid A record which points to a server which is capable of satisfying that request.
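
The DNS layout Luke describes can be sketched roughly as follows. The node names, IP addresses, health states and the 60-second TTL are purely illustrative; HybridCluster's self-reorganising DNS is naturally more elaborate.

    # Illustrative sketch only: nameservers spread across data centres and
    # low-TTL A records that point at healthy nodes, so resolvers fail over
    # quickly when a node or a whole data centre disappears.
    TTL = 60  # low TTL so changes propagate quickly

    NODES = {
        "A1": {"dc": "A", "ip": "192.0.2.11", "healthy": True},
        "A2": {"dc": "A", "ip": "192.0.2.12", "healthy": True},
        "B1": {"dc": "B", "ip": "198.51.100.11", "healthy": True},
        "B2": {"dc": "B", "ip": "198.51.100.12", "healthy": False},
    }

    def zone_records(site):
        records = []
        # One nameserver per data centre gives resilience at the DC level
        # (resolvers retry the other NS if one data centre is unreachable).
        for i, dc in enumerate(sorted({n["dc"] for n in NODES.values()}), start=1):
            ns_node = next(name for name, n in sorted(NODES.items()) if n["dc"] == dc)
            records.append(f"{site}. {TTL} IN NS ns{i}.{site}.")
            records.append(f"ns{i}.{site}. {TTL} IN A {NODES[ns_node]['ip']}")
        # Multiple A records: any healthy node can reverse-proxy the request
        # to whichever backend currently holds the site.
        for name, node in sorted(NODES.items()):
            if node["healthy"]:
                records.append(f"{site}. {TTL} IN A {node['ip']}")
        return records

    print("\n".join(zone_records("example.com")))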

In other words HybridCluster makes the servers look after themselves so that you can get a good night’s sleep.

Q: How do you see the future of the data center industry?

A: That’s an interesting question 🙂 I’ll answer it for web applications (and databases + email), specifically.

Personally I see cloud infrastructure as a broken promise. Ask the man (or woman) on the street what they think cloud means, and they’ll probably tell you about increased reliability, better quality of service, etc. But really, all that cloud does today is provide less powerful, unreliable infrastructure on top of which software engineers are expected to build reliable software. That’s a big ask!

My vision is for a fundamentally more reliable way of provisioning web applications – one where the underlying platform takes responsibility for implementing resilience well, once, at the platform level. Developers are then free to deploy applications knowing that they’ll scale well under load, and get failed over to another server if the physical server fails, or even if the whole data center goes pop.

I think that’s the promise of PaaS, and my vision is for a world where deploying web applications gets these benefits by default, without millions of sysadmins in hosting companies all over the world having to get paged in the middle of the night to go fix stuff manually. Computers can be smarter than that, it’s just up to us to teach them how. 🙂

Q: Tell our readers a bit about the HybridCluster team.

A: Since we got funded in December 2012 we’ve been lucky enough to be able to grow the team to 9 people, and I’m really proud of the team we’ve pulled together.

We’re a typical software company, and so unfortunately our Dave to female ratio is 2:0. That is, we have two Daves and no females (but we’re working on that!). Anyway, some highlights in the team are Jean-Paul Calderone, who’s the smartest person I’ve ever met, and the founder of the Twisted project. Twisted is an awesome networking framework and without Twisted – and JP’s brain – we wouldn’t be where we are today. Also on the technical side, we’ve got Rob Haswell, our CTO, who’s a legend, and doing a great job of managing the development of the project as we make it even more awesome. We’ve also just hired one of JP’s side-kicks on Twisted, Itamar Turner-Trauring, who once built a distributed airline reservation system and sold it to Google.

We’ve also got Andriy Gapon, FreeBSD kernel hacker extraordinaire, without whom we wouldn’t have a stable ZFS/VFS layer to play with. Dave Gebler is our Control Panel guru and we’re getting him working on our new REST API soon, so he’ll become a Twisted guru soon 😉 And our latest hire on support, Marcus Stewart Hughes, is a younger version of me – a hosting geek – he bought his first WHMCS license when he was 15, so I knew we had to hire him.

On the bizdev side, we’ve got Dave Benton, a legend in his own right, who’s come out of an enterprise sales background with IBM, Accenture and Star Internet, he’s extremely disciplined and great at bringing process into our young company. Andrew Holway is our technical pre-sales guy, previously building thousand-node clusters for the University of Frankfurt, and he loves chatting about ZFS and distributed systems. He’s also great at accents and can do some pretty awesome card tricks.

Q: To wrap up, with proper funding in place for development of the products, what’s in the bag for Q3 and Q4 of 2013?

A: We’re working on a few cool features for the 2.5 release later this year: we’re going to have first class Ruby on Rails and Python/Django support, mod_security to keep application exploits out of the containers, Memcache and Varnish support. We’re also working on properly supporting IP-based failover so we don’t have to rely on DNS, and there are some massive improvements to our Control Panel on their way.

It’s an exciting time to be in hosting 😉 and an even more exciting time to be a HybridCluster customer!

Thanks for the interview and the great questions.


ZNetLive Unveils Cloud Shared Hosting and CloudNet Servers with Build Your Own Cloud Server tool

Asian Web Hosting provider ZNetLive today announced the launch of ZNetLive CloudNet server and ZNetLive Cloud Shared Hosting. The news coincides with the launch of ZNetLive SSD Hosting services.

ZNetLive’s CloudNet servers are based on 2.0GHz (or faster) processors, come with the customer’s choice of operating system and access to multiple image templates for fast deployment, and leverage Flex Images technology.

Flex Images image management and deployment system lets customers scale their cloud according to their real-time demand. A customer can scale up his cloud resources if he sees a sudden spike in traffic and scale down when the traffic recedes, thus paying only for his temporary cloud utilization rather than purchasing a permanent surplus of expensive, in-house hardware.

Our CloudNet Servers come with Build Your Own Cloud server tool, which lets users custom configure number of cores, RAM and storage capacity. They can also seamlessly integrate CloudNet servers with the rest of our ZnetLive dedicated servers, virtual servers, automated services and additional CloudNet services.
– Mr. Munesh Jadoun, CEO, ZNetLive.

“Our CloudNet Servers come with Build Your Own Cloud server tool, which lets users custom configure number of cores, RAM and storage capacity. They can also seamlessly integrate CloudNet servers with the rest of our ZnetLive dedicated servers, virtual servers, automated services and additional CloudNet services,” said Mr. Munesh Jadoun, CEO, ZNetLive.

There are two storage options available with the CloudNet servers:

  • Local Disk Storage
    Primary storage implemented at RAID 10, offering faster input/output speeds; ideal for storage-intensive applications and a lower total cost of operations.
  • SAN Storage
    A primary storage solution featuring automatic recovery for higher hardware resiliency; suited to compute-intensive applications.

Interested customers can deploy their ZNetLive CloudNet Servers in a total of 7 Data Center locations:

  • AMS01 – Amsterdam – Western Europe
  • DAL01 – Dallas – Central U.S.
  • DAL05 – Dallas – Central U.S.
  • SJC01 – San Jose – West Coast U.S.
  • SEA01 – Seattle – West Coast U.S.
  • SNG01 – Singapore – Southeast Asia
  • WDC01 – Washington, DC – East Coast U.S.

For more details, click here.

Also unveiled alongside CloudNet servers is ZNetLive Cloud Shared Hosting. An ideal solution for websites that need enterprise-grade hardware and uptime but don’t require the hardware resources of a VPS or dedicated server, ZNetLive Cloud Shared Hosting is available in three plans: Plus, Advanced and Premium. The Plus plan is the cheapest, starting from Rs. 1069/mo with 10GB of web space. It is followed by the Advanced and Premium plans, which are priced at Rs. 1349/mo and Rs. 1959/mo and provide 20GB and 30GB of web space respectively.

“Our new Cloud Shared Hosting services will help our customers keep their website & email downtime to a bare minimum,” said Mr. Munesh Jadoun, CEO, ZNetLive. “Our enterprise grade hardware is designed for full redundancy and there is no possibility of hardware failures whatsoever. We are thus one of the few providers that offer 100% Hardware uptime backed up by 24×7 support and a 30 day money back guarantee,” he added.

Interested customers have an option of two datacenters to choose from: US and Europe. For more information, please click here.


SingleHop Launches VMware Enterprise Private Cloud powered by VMware vSphere Enterprise Plus

SingleHop, an infrastructure-as-a-service provider, has launched VMware Enterprise Private Cloud, offering a private cloud environment powered by the VMware vSphere Enterprise Plus platform. The product is a secure single-tenant pool of computing resources, custom-built for every client, and connected with a highly scalable SAN storage solution. The news comes a few days after SingleHop launched a VMware-based public cloud service.

The Enterprise Private Cloud combines SingleHop’s tried and tested automated infrastructure platform with VMware’s enterprise-class cloud computing technology. Some of the features of this blend are:

  • Purpose Designed and Custom Built: SingleHop sales engineers work with the client to build a network diagram that balances efficiency of design with raw computing power to determine the client’s ideal configuration.
  • Rule-Based AutoScaling: Clients can write business rules that govern increases or decreases in resources based upon real-time data (see the sketch after this list).
  • Automatic Failover: Users receive high availability using redundant SAN-based Cloud Storage, leveraged by the SingleHop Cloud.
  • Live Migrations: Users can perform live migrations between virtual machines and cloud servers with no downtime.
  • Remote System Administration: Access is provided by both the VMware vCenter™ Server and SingleHop LEAP 3 control panels.
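
As a rough idea of what such business rules might look like in practice, here is a small, hypothetical Python sketch; the metric names, thresholds and actions are invented and do not reflect SingleHop's actual rule format.

    # Hypothetical rule format: each rule pairs a condition on real-time
    # metrics with a resource adjustment to apply when it fires.
    from dataclasses import dataclass
    from typing import Callable, Dict, List

    @dataclass
    class Rule:
        condition: Callable[[Dict[str, float]], bool]  # evaluated on live metrics
        action: str                                    # e.g. "add_vcpu", "remove_vcpu"

    RULES = [
        Rule(lambda m: m["cpu_pct"] > 85 and m["sustained_s"] > 300, "add_vcpu"),
        Rule(lambda m: m["cpu_pct"] < 20 and m["sustained_s"] > 900, "remove_vcpu"),
    ]

    def evaluate(metrics: Dict[str, float]) -> List[str]:
        """Return the resource adjustments triggered by the current metrics."""
        return [rule.action for rule in RULES if rule.condition(metrics)]

    print(evaluate({"cpu_pct": 92.0, "sustained_s": 360}))  # -> ['add_vcpu']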

When it comes to Private Clouds, we’re setting the gold standard. With our new Enterprise Private Cloud offering, we’re combining the power of two of the industry’s best platforms; our own infrastructure automation platform and VMware’s vCloud Enterprise Plus cloud technology. The result is a powerful blend of best-in-breed technologies that offer our customers the best of both worlds; scalability of cloud computing in a private yet scalable environment. – Andy Pace, COO, SingleHop.

Existing users of the VMware Enterprise Private Cloud can deploy unlimited virtual machines within the private cloud, and they can run any Windows or Linux workload as well as a number of enterprise applications. These applications are managed through VMware’s vFabric™ Suite, a middleware layer for data-intensive custom applications that enables users to deploy Microsoft stack applications as well as other enterprise apps within their secure private cloud.

“Our VMware Enterprise Private Cloud stands out in the market because it enables users to deploy any number of virtual machines with no fluctuation in costs, as there is no hourly billing or per-user fee,” says Andy Pace, Chief Operating Officer at SingleHop. “Organizations can use it to deploy resources much faster to individual departments, because they can create and destroy virtual machines on the fly. It’s ideal for companies that have concerns relating to privacy or need to meet certain compliance standards that cannot be fulfilled with public cloud options.”

“When it comes to Private Clouds, we’re setting the gold standard. With our new Enterprise Private Cloud offering, we’re combining the power of two of the industry’s best platforms; our own infrastructure automation platform and VMware’s vCloud Enterprise Plus cloud technology. The result is a powerful blend of best-in-breed technologies that offer our customers the best of both worlds; scalability of cloud computing in a private yet scalable environment,” said Andy.

VMware Enterprise Private Cloud also includes the complete SingleHop capabilities, with outstanding technical support, SLAs, and the company’s industry-leading Customer Bill of Rights. It is available as a custom-built solution. For more information, click here.

SingleHop also launched Hosted Cloud Apps, built on Standing Cloud’s marketplace platform, at the end of January.

About VMware
VMware is the leader in virtualization and cloud infrastructure solutions that enable businesses to thrive in the Cloud Era. Customers rely on VMware to help them transform the way they build, deliver and consume Information Technology resources in a manner that is evolutionary and based on their specific needs. Visit vmware.com for more information.

About SingleHop

SingleHop is a cloud hosting company that offers highly scalable, on-demand infrastructure services to both end-users and resellers. With clients in 114 countries, three US-based data centers, and over 10,000 servers online, SingleHop delivers state of the art resources and services with industry-leading deployment time and custom support. SingleHop was established in 2006 and makes its home in Chicago, IL. In 2011, the company was named #25 on the Inc. 500 list for the fastest growing companies in America.


What is a Multi Domain EV SSL Certificate?

Maintaining a high level of online trust and security in compliance with industry-wide security regulations can be a daunting task for organizations, as it requires timely updates to the IT security infrastructure which are sometimes very expensive. Keeping a sense of trust and security intact in the minds of website visitors while also keeping expenditure within manageable limits is thus a herculean task. This is where a Multi Domain EV SSL security certificate comes in.

A Multi Domain EV SSL security certificate is a ‘best of both worlds’ product: it provides stringent authentication on par with an industry-standard EV (Extended Validation) SSL certificate, and it can package multiple domains together, thereby effectively cutting down costs for the buyer. For example, a single EV SSL MDC can secure domainA.com, domainB.com, secure.domainA.com, login.domainB.com and anydomainunderthesky.any-tld. The most important thing to note here is that an EV Multi Domain SSL certificate covering all five of these domains will cost significantly less than five separate security certificates for the same domains.
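
Technically, multi-domain certificates work by listing every hostname in the certificate’s Subject Alternative Name (SAN) extension. As a hedged illustration, using Python’s cryptography library and the example domains above (the EV organisation vetting itself happens at the CA, not in code), a certificate signing request covering several domains looks roughly like this:

    # Sketch: build one CSR whose SAN extension covers several domains.
    # The key, subject details and domain list are illustrative only.
    from cryptography import x509
    from cryptography.hazmat.primitives import hashes, serialization
    from cryptography.hazmat.primitives.asymmetric import rsa
    from cryptography.x509.oid import NameOID

    key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

    csr = (
        x509.CertificateSigningRequestBuilder()
        .subject_name(x509.Name([
            x509.NameAttribute(NameOID.COMMON_NAME, "domainA.com"),
            x509.NameAttribute(NameOID.ORGANIZATION_NAME, "Example Org Ltd"),
        ]))
        .add_extension(
            x509.SubjectAlternativeName([
                x509.DNSName("domainA.com"),
                x509.DNSName("domainB.com"),
                x509.DNSName("secure.domainA.com"),
                x509.DNSName("login.domainB.com"),
            ]),
            critical=False,
        )
        .sign(key, hashes.SHA256())
    )

    print(csr.public_bytes(serialization.Encoding.PEM).decode())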

A Multi Domain EV SSL certificate also saves a lot of time: even though each domain has to go through the domain authentication process separately, the identity of the website owner has to be authenticated only once. This makes it a well-suited security solution for small and medium-scale businesses looking to secure their online transactions.

How do I choose the best Multi Domain EV SSL certificate for me?
Like every other security solution, the selection of the Multi Domain EV SSL certificate best suited to you depends on a number of factors, such as price, the number of domains needed initially, and flexibility in adding new ones during the period covered by the certificate. For example, if you plan to secure only 5 domains now under a Multi Domain EV SSL but anticipate healthy growth of your business and hope to secure 10 domains in a year or so, then you should go for a provider who is flexible about adding new domains under a single certificate and has sales representatives/support available for live chat 24*7. You should also do proper research on the provider and look for online reviews of their products.

A detailed article on choosing the best SSL provider is here, but these are some vital features one must check while buying a Multi Domain EV SSL security certificate:

  • Security Level: Complete business or organization validation.
  • Encryption Level: Strong 256-bit SSL encryption.
  • Server License: Unlimited server licenses (without any extra charges).
  • Issuance Speed: Within 1 to 10 working days.
  • Compatibility: 99.99% compatibility with the latest web browsers and mobile devices.
  • Assortment: SAN / Multi-Domain / UCC options available.
  • Additional Plus: Order www.domain.com & additional plus secure.domain.com.

The multiple-domain packages offered by SSL certificate authorities differ considerably. For example, GeoTrust offers five additional domains with its starting package and provides an option to add more domains in increments of five, up to a total of 25. This is completely different from Comodo, which offers only three additional domains with its starting package but gives an option to add up to 100 total domains, one at a time. Every Multi Domain EV SSL certificate package thus has its own pluses and minuses depending on price, difficulty of installation and so on; the key lies in choosing the one which best suits your needs.


Webair Launches ProtoCloud, an Infrastructure-as-a-service Cloud Offering

Webair, a leading provider of Cloud Hosting and Managed solutions, yesterday announced ProtoCloud, the newest addition to its highly popular family of cloud hosting services. ProtoCloud is Webair’s true Infrastructure as a Service (IaaS) cloud offering: built with scalability and availability in mind, it provides IaaS scalability on top of a highly available foundation. By default, and at no additional cost, ProtoCloud instances are provisioned with no single points of failure through the use of redundant and scalable computing nodes, network infrastructure, SAN and FusionIO storage. In February, Webair also completed control procedures and compliance for its NY1 data center, as verified in a Service Organization Control Report (SOC 1) prepared under the terms of SSAE 16 by the independent auditing firm BrightLine CPAs & Associates, Inc.

ProtoCloud comes with an easy to use management interface and integrates with a number of Webair’s existing service offerings including Cloud Storage, Carrier Neutral Cloud, FusionCloud, CDN services, and dedicated servers. Webair clients can interconnect ProtoCloud instances with other dedicated or virtual servers via secure internal networks to form true hybrid solutions. A RESTful API makes it convenient for customers to integrate ProtoCloud into their applications, requiring minimal administration and management.

Our customers can conveniently manage all their hosting resources from EZPanel’s friendly interface. Likewise, resource allocations and bandwidth can be aggregated between services and shown on the same bill. This empowers our clients with a true end-to-end solution that allows them to match use cases to the most appropriate solutions, and all under a fully managed umbrella of our ServerGenius support. – Sagi Brody, CTO, Webair.

“We’ve added ProtoCloud to our portfolio of Cloud services to provide our clients with true highly available IaaS out of the box, without the complexity or the guesswork it takes to build it themselves. The ability for clients to integrate ProtoCloud with various other Cloud hosting and traditional hosting services means highly customized and hybrid environments can be built to match even the most demanding needs,” said Gerard Hiner, Executive Sales Manager at Webair. ProtoCloud’s on-demand provisioning and utility billing models provide customers with an easy way to utilize the infrastructure for requirements such as Big Data analytics, NoSQL database workloads, load-balanced web hosting and application testing/development at a low cost, resulting in excellent value for clients.

ProtoCloud integrates seamlessly with EZPanel, Webair’s popular, feature-rich control panel. Customers can sign up for ProtoCloud with the click of a button and easily add additional services such as CDN, dedicated servers and FusionCloud to their account from within EZPanel. “Our customers can conveniently manage all their hosting resources from EZPanel’s friendly interface. Likewise, resource allocations and bandwidth can be aggregated between services and shown on the same bill. This empowers our clients with a true end-to-end solution that allows them to match use cases to the most appropriate solutions, and all under a fully managed umbrella of our ServerGenius support,” said Sagi Brody, CTO of Webair.

ProtoCloud will be launched first in Webair’s NY1 datacenter and will be expanded to Amsterdam, Los Angeles and Montreal in the coming weeks. Designed around industry standard best practices and security, ProtoCloud provides a robust computing platform that delivers on performance, convenience and reliability.

About Webair
Founded in 1996, Webair is a leader in managed hosting solutions, including Managed & Secure Cloud Infrastructure for companies of all sizes and is recognized as a global leader in the industry. Webair offers a variety of Hosting services including Public, Private & Hybrid Clouds, Dedicated Servers, Colocation, CDN and Video Streaming Solutions. Webair, headquartered in New York, operates an international network of datacenters located in New York, Los Angeles, Montreal and Amsterdam.


“OnApp Storage Grows as the Service Provider’s Cloud Grows”- Kosten Metreweli, OnApp

On 8 August 2011, the CDN market, then valued at $2.7 billion, changed irreversibly. OnApp, at that time a one-year-old, London-based cloud platform company, grabbed the CDN market out of the hands of the big players and opened it up at prices no one could have imagined. By using the spare cloud capacity of hosts around the globe to provide CDN PoPs, OnApp got rid of the mammoth investment that was earlier needed to build new infrastructure. Starting with 40, OnApp has grown to over 150 PoPs across 40 countries in a short span of 18 months.

With the recently rolled out OnApp Cloud v3.0, the latest version of its cloud platform for service providers, OnApp has established itself as a software maker which develops new features at an aggressive pace and whose progress is well worth watching. DHN got an opportunity for a Q&A session with Mr. Kosten Metreweli, CCO, OnApp. Read away as he shares interesting bits about the products OnApp has to offer, their USPs and OnApp’s robust growth, and threatens us with homicide. (Yes!)

OnApp Storage is what we call ‘VM-aware’ – it tries to make sure that a copy of a virtual disk always sits on the same hypervisor server as the compute that is consuming it – that basically takes most of the ‘read’ load off the network, reducing network bottlenecks while at the same time delivering near-local-disk performance. It really allows storage to grow as the service provider’s cloud grows.

– Kosten Metreweli, CCO, OnApp.


Q: What is your name and role with OnApp? How long have you been in this role?

A: Kosten Metreweli – I’m Chief Commercial Officer at OnApp, responsible for marketing and sales. I’ve been with OnApp for about 18 months now.

Q: For those who don’t know what OnApp is, can you please describe it briefly?

A: OnApp software powers cloud services (IaaS) at over 500 hosting companies, MSPs and telcos across over 40 countries. Our products allow them to deliver fully-featured cloud services profitably and with minimum CapEx spend. We have three key products:

OnApp Cloud – this provides the orchestration, user management, metering, and user interfaces that are fundamental components of any cloud, as well as value-added features such as load balancing, autoscaling and Managed DNS.

OnApp CDN – this is a great additional revenue stream for service providers – it accelerates customer applications such as ecommerce, media and gaming by caching content close to end-users. We give service providers two bites at the cherry here – first of all they generate revenue by selling the CDN service itself, but they can also contribute their spare infrastructure to the OnApp Marketplace to become part of the CDN itself – generating a second revenue stream.

OnApp Storage – this is our newest product, and we’re extremely excited about it. This is a replacement for a traditional SAN, built to support cloud workloads, and requiring minimal upfront capex.

Q: The recently launched OnApp V3.0 is unique as it comes with an integrated SAN which scales naturally as the cloud grows. Can you elaborate a bit more on it? What additional features does OnApp V3.0 have when compared to OnApp V 2.3.3?

A: OnApp v3 comes with an integrated SAN called OnApp Storage. We found the biggest obstacle for service providers deploying cloud services was the SAN. Traditional SANs are big and expensive, and frankly don’t support cloud workloads that well. OnApp Storage is a totally new approach. It basically allows you to put commodity hard disks and SSDs into the empty drive bays on your existing hypervisor servers, and it turns those into one big pool of storage that can then be cut up into virtual disks, each with its own unique ‘enterprise’ properties. OnApp Storage is what we call ‘VM-aware’ – it tries to make sure that a copy of a virtual disk always sits on the same hypervisor server as the compute that is consuming it – that basically takes most of the ‘read’ load off the network, reducing network bottlenecks while at the same time delivering near-local-disk performance. It really allows storage to grow as the service provider’s cloud grows.

We’ve added a couple of other headline features in OnApp v3.0. First, in OnApp CDN, we’ve added live streaming of video – this not only makes the service more attractive to end-users, but also generates more revenue for service providers that contribute POPs. Secondly, we’ve added VMware support to OnApp Cloud – meaning you can now manage Xen, KVM and VMware hypervisors, all from within one pane of glass. Lastly, we’ve done a complete revamp of the user interface – it is not only a great-looking UI, but we’ve also focused on usability by adding several wizards for non-trivial tasks.

Q: How does the new Cloud Control Panel of OnApp V3.0 enhance customer and administrator experience?

A: The Control Panel has had a complete makeover – so it looks extremely slick, and at the same time is easy for a service provider to rebrand – allowing them to get-to-market extremely fast. But the changes are not just skin-deep. We’ve included several wizards that guide the user step-by-step through various activities – such as creating a virtual machine, and we’ve put the most common activities for an object within instant reach. This not only enhances the end-user experience, but also reduces support calls for the service provider.


Q: Can you please throw some light on OnApp Storage’s ‘smart disk’ technology? What value does it add to SAN?

A: There are two really important technologies in OnApp Storage. The first is ‘SmartDisk’. Unlike other distributed SAN technologies out there, this removes the need for a metadata server; instead, each disk understands its own content and its relationship to the rest of the array. This not only removes a single point of failure – the metadata server – but also allows very fast recovery in the event of a hypervisor failure: disks can simply be swapped over to another hypervisor, and they will resync and pick up where they left off, without the need to completely rebuild. The second important technology is what we call ‘VM-aware’ – this tries to ensure that one copy of a virtual disk always sits on the same hypervisor as the workload that is using it. What that means is that any read from the SAN is effectively a local read – meaning it is fast and adds no load to the network. Writes are obviously distributed across all the copies of a virtual disk, but for most workloads, reads are the bulk of the IO. The combination of these technologies makes the SAN fast and reliable, and allows it to scale as the cloud service grows.
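
To picture the ‘VM-aware’ idea, here is a simplified Python sketch of the placement principle Kosten describes: one copy stays with the VM’s hypervisor so reads stay local, while the remaining copies spread across other nodes. The capacity-based choice of peers is an assumption, not OnApp’s actual algorithm.

    # Simplified, hypothetical placement sketch, not OnApp's real algorithm.
    def place_replicas(vm_host, hypervisors, copies=2):
        """Pick hypervisors to hold replicas of a VM's virtual disk.

        vm_host:     hypervisor running the VM; it keeps the 'local' copy so
                     reads never have to cross the network.
        hypervisors: dict of hypervisor name -> free capacity (GB).
        copies:      total number of replicas to keep.
        """
        placement = [vm_host]
        # Remaining copies go to the peers with the most free capacity.
        peers = sorted((h for h in hypervisors if h != vm_host),
                       key=lambda h: hypervisors[h], reverse=True)
        placement.extend(peers[:copies - 1])
        return placement

    print(place_replicas("hv1", {"hv1": 400, "hv2": 900, "hv3": 650}, copies=2))
    # -> ['hv1', 'hv2']: reads are served locally on hv1, writes also go to hv2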

Q: What would you say makes OnApp Storage more suitable for the cloud than the traditional SANs out there?

A: There are a few problems with traditional hardware SANs. They require a large up-front investment, and when you need to grow the SAN, you have to grow in large chunks – with OnApp Storage, you add commodity hard disks and/or SSDs as you need them. They take up valuable rack space and power – with OnApp Storage, you’re using the spare drive bays in your existing hypervisor servers. With cloud workloads, things tend to get bottlenecked at the SAN controller – with OnApp Storage and its VM-aware technology, a lot of that network traffic is removed. You don’t have much flexibility in what you offer your end-customers, because you have to define properties like redundancy and striping up-front for the whole SAN – with OnApp Storage, you define those properties at the virtual disk level, giving as much granularity as you need. In short, we’ve designed this from the ground up to be suitable for the cloud.

Q: OnApp CDN was revolutionary in a way that it took the CDN market out of the hands of the big players (who charged grotesquely high prices for it) and enabled it for hosting providers. How has the response been so far? What would you like to say to hosting providers who don’t sell CDN as of now?

A: The response has been amazing. We’ve gone from a standing start to 150 points of presence across over 40 countries within the space of 12 months – that makes us one of the top 3 global CDNs by number of points of presence. Each one of those points of presence is an additional revenue stream for our service providers. The CDN service itself has been growing strongly, with thousands of new websites being accelerated every month. For service providers not yet selling CDN – I’d urge them to get started – you don’t even have to be using OnApp Cloud. End customers are increasingly realizing the value of accelerating their websites, especially in ecommerce, media and gaming, so offering the service not only increases ARPU, but also stickiness.

Q: To wrap up, what is in the box for 2013?

A: I could tell you, but then I’d have to kill you. Seriously, though – we’ve got big plans for our federation – both in terms of adding more capabilities beyond CDN, but also driving more revenue across it for our service providers. We’re adding new capabilities to OnApp Cloud, OnApp CDN and OnApp Storage that will continue to make our service provider customers the fastest growing in the industry while driving down their costs.


OnApp Rolls Out OnApp Cloud v3.0 with VMware Support, Helps Cloud Providers Build SANs

OnApp has announced the General Availability of OnApp Cloud v3.0, the latest version of its cloud platform for service providers. OnApp Cloud v3.0 sets a new standard for cloud platform technology by moving far beyond simple cloud orchestration. With an integrated SAN, global CDN and new support for VMware clouds, it reduces the capital expenditure needed to build cloud services, and offers new ways for cloud providers to generate revenue by selling a wider range of services through their core cloud platform.

OnApp Cloud v3.0 gives you much more than just a cloud – it gives you the tools you need to build a global cloud business. – Ditlev Bredahl, CEO, OnApp.

OnApp Cloud v3.0 includes OnApp Storage, an enterprise-class Storage Area Network (SAN) designed for cloud environments; OnApp CDN, an integrated global CDN with new support for video on demand and live streaming; and new support for VMware clouds.  Combined with highly automated VM management, DNS, load balancing, autoscaling, failover, firewalls, billing, user management and other core cloud management functions, OnApp Cloud v3.0 makes it easier than ever before for service providers to build a successful global cloud business.

“OnApp Cloud v3.0 gives you much more than just a cloud – it gives you the tools you need to build a global cloud business,” said OnApp CEO, Ditlev Bredahl. “With VMware support in v3.0, you can take your cloud brand into the huge VMware market. You can create new services like CDN, and use our network of 140-plus locations to accelerate apps, video and other content for your customers. You can offer utility cloud, packaged cloud, private and public cloud, load balancing, autoscaling, firewalls, DNS and more, all managed through a single control panel.”

“With OnApp Storage included as well, we’ve removed the last real entry barrier to the cloud – which has been finding a storage system that matches the performance and cost profile cloud providers need, ” he added. “OnApp Storage is designed for cloud providers. You can build a SAN in your cloud by adding disks to your existing servers. You can use your current 1Gbit network environment, which is unique for a distributed SAN, and we’ve built some very smart technology that optimizes throughput, so you’re getting close to raw disk performance. It’s fully integrated with our billing, VM provisioning and other cloud management functions, and it scales naturally as your cloud service grows – if you need more space, just add more disks.”

Using OnApp Cloud v3.0, we’re building a new global cloud service that aims to give customers twice the performance of the Amazon cloud, at half the cost. The cost and performance benefits of OnApp Storage are an essential part of that mission. – Clint Chapman, CTO, Ubiquity.

Ubiquity Servers, which recently collaborated with Arbor Networks, is one of the first cloud providers to adopt OnApp Cloud v3.0. Ubiquity was a key participant in the OnApp Storage beta test, which began in early 2012, and the company will soon launch new cloud services built around OnApp’s integrated SAN.

“OnApp Cloud v3.0 is the platform we’ve been waiting for since we launched our first cloud service back in 2007,” said Clint Chapman, Ubiquity’s CTO. “Traditional SANs are really not suited to the cloud. They’re very expensive, difficult to maintain and not very efficient for typical cloud workloads. Using OnApp Cloud v3.0, we’re building a new global cloud service that aims to give customers twice the performance of the Amazon cloud, at half the cost. The cost and performance benefits of OnApp Storage are an essential part of that mission.”

Highlighted features of OnApp Cloud v3.0 include:

A built-in, high-performance SAN: OnApp Storage is bundled with the full version of OnApp Cloud. It enables cloud providers to pool the capacity of SATA and SSD disks in their hypervisor servers to create fast, reliable and cost-effective storage for cloud services. OnApp Storage provides intuitive control of striping and redundancy, supports multiple tiers of storage, over-commit and other key functionality for service providers. Crucially, it includes OnApp’s VM-aware technology, which removes performance bottlenecks by ensuring that data resides on the same server as the application that needs it.

VMware support: OnApp Cloud v3.0 enables cloud providers to manage clouds running on VMware vSphere 5, as well as those built on the leading open source virtualization platforms, Xen and KVM. Using OnApp to manage VMware clouds gives cloud providers more control over billing and user management, a much more intuitive control panel, and reduces licensing costs compared with using VMware’s vCloud Director, vChargeback and other tools.

Integrated CDN with video support: OnApp Cloud v3.0 includes OnApp CDN, which enables cloud providers to accelerate web apps and content for their clients by hosting them at more than 140 locations in 39 countries worldwide. In version 3.0, OnApp CDN integrates media streaming software from the industry leader, Wowza, to enable video-on-demand and live streaming for all popular video formats.

Other new features in OnApp Cloud v3.0 include a redesigned control panel with new wizards for VM provisioning; Cloud Boot, a new way for cloud providers to automate the set-up and discovery of hypervisor servers, saving up to an hour per hypervisor; and more than 100 other enhancements and improvements.

To read our interview with Mr. Clint Chapman, CTO, Ubiquity Servers, please click here.

Availability and Pricing
OnApp Cloud v3.0 is a free upgrade for customers with previous versions of OnApp Cloud. New customers can deploy OnApp Cloud v3.0 under OnApp’s standard monthly license.

About OnApp
OnApp software powers cloud, CDN and storage services for companies all over the world. OnApp products include OnApp Cloud, the leading cloud platform for hosts and service providers; OnApp CDN, a unique federated CDN platform for service providers; and OnApp Storage, a low-cost, high-performance distributed SAN for cloud environments. OnApp launched in July 2010, has more than 110 staff across the EU, U.S. and Asia-Pacific, and can be found online at www.onapp.com.